Evidence is accumulating to show that the Earth's biosphere extends underground into deep igneous rock formations. In certain formulations, abiotic energy-yielding reactions between reduced rocks and groundwater provide a potential for in situ primary production by anaerobic microorganisms, thus obviating any dependence on a surface ecosystem. Conceivably, such subsurface lithoautotrophic microbial ecosystems (SLMEs) could exist in the subsurface of other planets in the solar system. The main requirements are water, ferrous silicate minerals, carbon dioxide, and nitrogen. Unfortunately, observation of the subsurface is difficult. For example, current estimates suggest that the hydrosphere on Mars might be more than 2 km below the surface. Living SLMEs might be detected through conduits to the subsurface, such as wells, springs, or seeps in deep canyon walls. Signals produced by SLMEs might include cells, metabolic products (such as reduced gases) and their isotope ratios, and isotope ratios in residual substrates. Rocks in which now-defunct SLMEs once existed might be more accessible if they are brought to the surface by rock cycle processes. Signals of extinct SLME remnants have not yet been investigated, but might include microfossils, certain secondary mineralization patterns, and isotope ratios of secondary materials. Examples of both extant and extinct SLMEs have been identified on Earth, and are available for study and experimentation.
Beginning in August 2006 and continuing to the present, a persistent bloom of the red-tide alga Karenia brevis has been present in Sarasota Bay and surrounding waters. Cell concentrations have reached levels of several million cells per liter of seawater, indicating a severe red tide event. As a point of reference, cell concentrations of 1,000 cells per liter or less are considered background levels, and 100,000 cells per liter or more typically cause respiratory irritation in humans and fish kills. The previous red tide event in Sarasota Bay waters persisted from February 2005 to January 2006, and cell concentrations reached a maximum of 300 million cells per liter. During the 2006 red tide bloom there have been significant increases in sea turtle strandings. Since 1 August 2006, Mote has responded to 84 sick or dead sea turtles in the Sarasota Bay region; on average, Mote responds to 50-60 sea turtle strandings per year. A similar increase in sea turtle strandings was seen during the prolonged red tide event of the previous year, when over 92 sea turtles stranded between August and November 2005. In contrast, dolphin strandings do not appear to have increased during the current red tide event, with only 3 dolphins stranding since August 2006. During last year's event, over 20 dolphins stranded between August and November 2005, and Sarasota Bay was included in the region identified as part of a Florida West Coast Unusual Mortality Event. Blood samples from the live sea turtles stranding during this year's event have tested positive for the red tide toxin. The scope of this year's red tide event is currently smaller in magnitude and severity than last year's year-long bloom, but the cumulative effects of these continued red tide events on the Sarasota Bay ecosystem are unknown and currently under investigation.
One of the most often-asked questions about climate data is, "How long a time period do we need to establish a statistically significant trend?" Statistical significance of trends is an issue which has been dealt with often at this blog and many others. Yet previous posts, at least here, have made general statements and/or illustrated by example (often with simulated data) how "statistical significance" can come within your grasp or slip through your fingers. But the issue is so important — and so often abused by fake skeptics — that I think it's worthwhile to give it a close look.

In general terms, the real question we're considering is: do these data show a trend? In practical terms the question becomes more specific, usually amounting to: do these data show a linear trend? In other words, are they reasonably approximated by a pattern over time which follows a straight line, one which is rising (upward trend) or falling (downward trend) but not flat (no trend)? This is hardly the only trend pattern which can exist; it's certainly possible for data to follow trends which are not linear. In fact it happens all the time. But establishing the existence of a nonlinear trend is usually harder than showing that a linear trend is present, and a linear trend test will often detect the existence of trends even when they're highly nonlinear. So, the simple fact is that when scientists study data to determine whether or not a trend is present, the "default" first analysis is to perform linear regression, which is the basic test for the existence of a linear trend. There are many varieties of linear regression, but by far the most common is least-squares regression. It has some distinct advantages over other forms (but others have their advantages too); it's not our purpose here to muse about the virtues and vices of different types of regression. We'll focus our discussion on the circumstances under which data might or might not reveal the existence of a trend, when we test for a linear trend using least-squares regression.

We'll suppose that we have n data points which represent measurements or estimates of the data values at times which are evenly spaced, e.g., monthly or annual data. The entire time span covered by the data we'll call T. We recognize that in addition to the underlying pattern (which we're assuming is linear, but of unknown slope), there's also noise added into the mix. We'll say that σ² is the variance of the noise, so that σ is its standard deviation. And as I've often emphasized, the noise values may not be independent of each other. In particular they may show autocorrelation, meaning that nearby (in time) noise values are correlated with each other. We'll characterize the impact of autocorrelation by estimating a quantity ν, which we can call the number of data points per effective degree of freedom. For noise without autocorrelation, this quantity is equal to 1 — there's 1 data point per degree of freedom. For noise with positive autocorrelation (it's almost never negative) it will be greater than 1, which means that we need multiple data points to get a single "degree of freedom."

We'll let m represent the slope of the trend line we estimate using linear regression. What's the uncertainty in that slope estimate? When the number of data values is not too small, a very good approximate formula for the square of the uncertainty (the square of the "standard error") of the slope is

σ_m² ≈ 12 σ² ν / (n T²).

Note the subscript on σ_m², to distinguish it from the variance of the noise, which we've just called σ².
The standard deviation of the slope, a.k.a. the standard error of our estimate, is the square root of that:

σ_m ≈ sqrt( 12 σ² ν / (n T²) ).

Great! There's a general formula for you, but what does it mean for real-world data, in particular for global temperature data? Let's take monthly average global temperature data from NASA GISS to estimate the parameters. The noise variance σ² is approximately 0.021 (deg C)², so the noise standard deviation σ is about 0.146 deg C. The "number of data points per effective degree of freedom" ν turns out to be about 10.6, with its square root about 3.25. Note these are only estimates! Using monthly data, the number n of data points in a time span of T years is n = 12T. Putting it all together we have

σ_m ≈ σ √ν / T^(3/2) ≈ 0.475 / T^(3/2).

In order to be conservative, I'll use 0.5 as an approximation for the numerator instead of 0.475, yielding a useful approximate formula for the standard error of the warming rate in NASA GISS monthly global temperature data:

σ_m ≈ 0.5 / T^(3/2) deg C/yr (with T in years).

That's the standard error we can expect — but is a slope significant? It will be so at the usual "95% confidence" level if the slope estimate is at least as big as 2 standard errors. Here's a plot of twice the standard error as a function of the time span T. I've also placed a horizontal, thick-dashed line at the value 0.017 deg C/yr, which is just about the modern rate of global warming. It intercepts our 2-standard-error curve when the time span T is a smidgen over 15 years (also indicated by a dashed line). That means that if we have 15 years of data, we can confirm a trend at the present rate of global warming, right?

Not necessarily! When we estimate the slope, our estimate is a random variable. It will approximately follow the normal distribution, with mean value equal to the true slope and standard deviation equal to the standard error. If the true trend is exactly twice the standard error, then the quantity "estimated slope minus two standard errors" will follow the normal distribution with mean value zero and standard deviation equal to the standard error. For a warming trend, if that quantity, "estimated slope minus two standard errors," is positive, then we achieve statistical significance for a warming trend. If not, then we don't. For T just a hair above 15 years, the quantity in question roughly follows the normal distribution with mean value zero, because the trend is twice as big as the standard error. So there's a 50/50 chance of its being above zero and permitting us to declare "statistical significance." There's also a 50/50 chance of its not being so. Therefore, for the parameters estimated from NASA GISS data, a 15-year time span (actually a wee bit more) gives us about a 50/50 chance to detect a trend with statistical significance. It also gives a 50/50 chance for the significance test to fail — which does not mean there's no warming (another very common misconception pushed by fake skeptics), just that the given data don't show it with statistical significance.

How long would we need to have a really good chance — say, a 95% chance — of detecting the trend with statistical significance? For that to happen, the trend has to be four times as large as the standard error. That happens, with the given parameters, when the time span T is 24 years, not 15. Here's an expanded plot with yet another dashed line indicating a 24-year time span.

So, 15 years of global temperature data from NASA GISS has about a 50/50 chance to show the trend with statistical significance. But for a 95% chance to achieve that threshold, you need about 24 years. All of this is approximate, but it does give a good perspective on the quantity of data needed.
It also shows how easy it is for fake skeptics to crow about the lack of statistical significance, even when the trend is present and real. Would they have the audacity to be so misleading? I'd say that's something we can expect with 100% confidence.
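For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the approximation above; the parameter values are the ones estimated from NASA GISS data in the text, and the 0.5 numerator is the same conservative rounding of 0.475 = σ√ν.

```python
# Approximate standard error (deg C/yr) of a warming rate fitted over T years,
# for NASA GISS monthly data (sigma ~ 0.146 deg C, nu ~ 10.6, so 0.5 ~ sigma*sqrt(nu)).
def slope_standard_error(T_years):
    return 0.5 / T_years ** 1.5

trend = 0.017  # modern rate of global warming, deg C/yr

for T in (10, 15, 24, 30):
    se = slope_standard_error(T)
    print(f"T = {T:2d} yr: 2*SE = {2*se:.4f}, 4*SE = {4*se:.4f} deg C/yr")

# 2*SE falls to ~0.017 deg C/yr just past T = 15, so significance there is a
# coin flip; 4*SE reaches ~0.017 near T = 24, giving about a 95% chance.
```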
Scientists learn about animals by observing them and from analyzing their DNA. Starting your own field journal is the first step towards understanding the wildlife in your area. Here's how:
1. Pick a type of animal or pet to watch in your area.
2. Write down the date and time of each observation.
3. Draw a picture of the animal.
4. Write down any observations you have about the way the animal looks or behaves.
5. Write down any questions that you have about the animal.
6. Write DNA next to any questions that you think you could answer if you could go to the lab and analyze the animal's DNA.
Jan. 31, 2013 — The evolutionary history of proteins shows that protein folding is an important factor; the speed of protein folding, in particular, plays a key role. This was the result of a computer analysis carried out by researchers at the Heidelberg Institute for Theoretical Studies (HITS) and the University of Illinois at Urbana-Champaign. For almost four billion years, there has been a trend towards faster folding. "The reason might be that this makes proteins less susceptible to clumping, and that they can carry out their tasks faster," says Dr. Frauke Gräter (HITS), who led the analysis.

Proteins are elementary building blocks of life, and they often perform vital functions. In order to become active, proteins have to fold into three-dimensional structures. Misfolding of proteins leads to diseases such as Alzheimer's or Creutzfeldt-Jakob. So which strategies did nature develop over the course of evolution to improve protein folding? To examine this question, the chemist Dr. Frauke Gräter (Heidelberg Institute for Theoretical Studies) looked far back into the history of Earth. Together with her colleague Prof. Gustavo Caetano-Anollés at the University of Illinois at Urbana-Champaign, she used computer analyses to examine the folding speed of all currently known proteins. The researchers have seen the following trend: for most of protein evolution, the folding speed increased, from archaea to multicellular organisms. However, 1.5 billion years ago, more complex structures emerged and caused a biological 'Big Bang', which led to the development of slower-folding protein structures. Remarkably, the tendency towards higher speed in protein origami dominated overall, regardless of the length of the amino acid chains constituting the proteins. "The reason for higher folding speed might be that this makes proteins less susceptible to aggregation, so that they can carry out their tasks faster," says Dr. Frauke Gräter, head of the Molecular Biomechanics research group at HITS.

In their work, the researchers used an interdisciplinary approach combining genetics and biophysics. "It is the first analysis to combine all known protein structures and genomes with folding rates as a physical parameter," says Dr. Gräter. The analysis of 92,000 proteins and 989 genomes can only be tackled with computational methods. The group of Gustavo Caetano-Anollés, head of the Evolutionary Bioinformatics Laboratory at Urbana-Champaign, had originally classified most structurally known proteins from the Protein Data Bank (PDB) according to age. For this study, Minglei Wang in his laboratory identified protein sequences in the genomes which had the same folding structure as the known proteins. He then applied an algorithm to compare them to each other on a time scale. In this way, it is possible to determine which proteins became part of which organism, and when. After that, Cedric Debes, a member of Dr. Gräter's group, applied a mathematical model to predict the folding rate of proteins. The individual folding steps differ in speed and can take from nanoseconds to minutes. No microscope or laser would be able to capture these different time scales for so many proteins, and a computer simulation calculating all folding structures in all proteins would take centuries to run on a mainframe computer. This is why the researchers worked with a less data-intensive method.
They calculated the folding speed of the single proteins using structures that have been previously determined in experiments: a protein always folds at the same points. If these points are far apart from each other, it takes longer to fold than if they lie close to each other. With the so-called Size-Modified Contact Order (SMCO), it is possible to predict how fast these points will meet and thus how fast the protein will fold, regardless of its length. "Our results show that in the beginning there were proteins which could not fold very well," Dr. Gräter summarizes. "Over time, nature improved protein folding so that eventually, more complex structures such as the many specialized proteins of humans were able to develop."

Shorter and faster for evolution

Amino acid chains, which make up proteins, also became shorter over the course of evolution. This was another factor contributing to the increase in folding speed, as the study shows. "Since eukaryotes, i.e. organisms with a cell nucleus, emerged, protein folding became somewhat less crucial," says Frauke Gräter. Since then, nature has developed a complex machinery to prevent and repair misfolded proteins. One example is the so-called chaperones. "It seems as if nature would accept a certain level of disorder in order to develop structures which could not have evolved otherwise." The number of known genomes and protein structures is continually increasing, thus expanding the databases for further computer analyses of protein evolution. Frauke Gräter says: "With future analyses of protein evolution, it might be possible for us to answer the related question whether proteins became more stable or more flexible over their billion-year-long history of evolution." The study was supported by the Klaus Tschira Foundation and the National Science Foundation of the US.

- Cédric Debès, Minglei Wang, Gustavo Caetano-Anollés, Frauke Gräter. Evolutionary Optimization of Protein Folding. PLoS Computational Biology, 2013; 9 (1): e1002861. DOI: 10.1371/journal.pcbi.1002861
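The contact-order idea behind SMCO is easy to sketch in code. The following Python fragment computes plain relative contact order, the structural quantity that SMCO adjusts for chain length; the 8-angstrom cutoff and the use of C-alpha coordinates are illustrative assumptions, and the study's exact size correction is not reproduced here.

```python
import numpy as np

def relative_contact_order(coords, cutoff=8.0):
    """coords: (L, 3) array of residue positions, e.g. C-alpha atoms."""
    L = len(coords)
    # pairwise distances between all residues
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    i, j = np.triu_indices(L, k=1)
    in_contact = dists[i, j] < cutoff
    # average sequence separation of contacting pairs, normalized by length
    return (j - i)[in_contact].mean() / L

# Contacts between residues far apart in sequence raise the contact order,
# which empirically correlates with slower folding.
```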
Effects of Increases in Atmospheric CO2 and Nitrogen Deposition on the Productivity of the Terrestrial Biosphere

Churkina, G., Brovkin, V., von Bloh, W., Trusilova, K., Jung, M. and Dentener, F. 2009. Synergy of rising nitrogen depositions and atmospheric CO2 on land carbon uptake moderately offsets global warming. Global Biogeochemical Cycles 23: 10.1029/2008GB003291.

Churkina et al. first determined that their global- and continental-scale estimates of land carbon uptake in the 1990s were "consistent with previously reported data." This comparison with the real world gave them confidence in the results their modeling exercise projected for the future, namely, that "increasing nitrogen deposition and the physiological effect of elevated atmospheric CO2 on plants have the potential to increase the land carbon sink, to offset the rise of CO2 concentration in the atmosphere, and to reduce global warming." More specifically, they found that predicted changes in climate, CO2 and nitrogen deposition for the year 2030 were sufficient to offset atmospheric CO2 by a sizable 41 ppm; if likely land-use changes were included in the calculations, the offset rose to a huge 76 ppm. Considering these findings, the six scientists who conducted the work say their study suggests that "reforestation and sensible ecosystem management in industrialized regions may have larger potential for climate change mitigation [italics added, which they equate with buying time] than anticipated [italics added, which they equate with currently thought]."
Browser object commands are prefix commands. If you have done your homework, you will instantly recall that the job of a prefix command is to modify the interactive context of the command that follows it. Browser object commands modify the interactive context by placing a reference to their own handler in I.browser_object. The following command can then call the handler of the browser object command. The browser object command's handler will usually carry out some interaction, such as prompting the user, and then return a piece of data to its caller.

The function define_browser_object_class does what its name suggests. Its arguments are as follows:
- The name of the browser object class, given as a string. Multi-word names should be hyphenated. From this name will be generated a top-level variable of the form browser_object_NAME, as well as an interactive command called browser-object-NAME.
- A documentation string.
- A coroutine function of two arguments, (I, prompt), to carry out the UI interaction and yield the datum. Return the datum with yield co_return(ob).
- An optional keyword argument: a terse string, usually noun and verb, instructing the user in what they will do with this browser object class, for example "select link". Any browser object class which uses the minibuffer prompt for interaction should provide this keyword.

When defining a new command with interactive, you can specify the default browser object for that command. This browser object will be used when no other browser object is given by a prefix command by the user.

interactive(command_name, docstring, handler, $browser_object = browser_object_foo);

A browser object given in a command definition can be one of three types of objects: a constant, a function, or an object of type browser_object_class. Constants are used for things like a url which is always the same. Function browser objects will be called when read_browser_object is called, and their values will be the things that get passed to the command. Function browser objects are used for computable values that do not involve UI interaction. The browser_object_class type is the most dynamic. A browser_object_class holds a procedure for carrying out a UI interaction to obtain a datum. Examples are the hints class (selecting an element by number or text) and the url-reader class (which prompts for a url or webjump in the minibuffer).
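Putting those pieces together, here is a sketch assembled purely from the argument list above; the class name, the minibuffer call, and especially the keyword name $hint are illustrative assumptions rather than verified Conkeror source code.

```javascript
// Hypothetical browser object class: read a piece of text in the
// minibuffer and yield it as the datum for the command that follows.
define_browser_object_class(
    "my-text",                                    // -> browser_object_my_text
    "Read a piece of text from the minibuffer.",  // documentation string
    function (I, prompt) {                        // coroutine (I, prompt)
        var ob = yield I.minibuffer.read($prompt = prompt);
        yield co_return(ob);                      // hand the datum back
    },
    $hint = "enter text");                        // terse noun-and-verb hint

// A command that uses it as its default browser object:
interactive("my-echo-text", "Echo the entered text.",
    function (I) {
        var text = yield read_browser_object(I);
        I.window.minibuffer.message(text);
    },
    $browser_object = browser_object_my_text);
```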
A device consisting of n quantum bits allows for operations on superpositions of classical states. This ability to operate simultaneously on an exponentially large number of states with just a linear number of bits is the basis for quantum parallelism. In particular, repeating the operation of Eq. 7 n times, each on a different bit, gives a superposition with equal amplitudes in all $2^n$ states.

At first sight quantum computers would seem to be ideal for combinatorial search problems that are in the class NP. In such problems, there is an efficient procedure that takes a potential solution set s and determines whether s is in fact a solution, but there are exponentially many potential solutions, very few of which are in fact solutions. If $s_1, \ldots, s_N$ are the potential sets to consider, we can quickly form the superposition $\sum_j |s_j\rangle$ (suitably normalized) and then simultaneously evaluate the consistency test for all these states, resulting in a superposition of the sets and their evaluation, i.e., $\sum_j |s_j, \mathrm{soln}(s_j)\rangle$. Here $|s, \mathrm{soln}(s)\rangle$ represents a classical search state considering the set s along with a variable soln whose value is true or false according to the result of evaluating the consistency of the set with respect to the problem requirements. At this point the quantum computer has, in a sense, evaluated all possible sets and determined which are solutions. Unfortunately, if we make a measurement of the system, we get each set with equal probability and so are very unlikely to observe a solution. This is thus no better than the slow classical search method of random generate-and-test, where sets are randomly constructed and tested until a solution is found. Alternatively, we can obtain a solution with high probability by repeating this operation exponentially many times, either serially (taking a long time) or with multiple copies of the device (requiring a large amount of hardware or energy if, say, the computation is done by using multiple photons). This shows a trade-off between time and energy (or other physical resources), conjectured to apply more generally to solving these search problems, and also seen in the trade-off of time and number of processors in parallel computers.

To be useful for combinatorial search, we can't just evaluate the various sets but instead must arrange for amplitude to be concentrated into the solution sets so as to greatly increase the probability that a solution will be observed. Ideally this would be done with a mapping that gives constructive interference of amplitude in solutions and destructive interference in nonsolutions. Designing such maps is complicated by the fact that they must be linear unitary operators as described above. Beyond this physical restriction, there is an algorithmic or computational requirement: the mapping should be efficiently computable. For example, the map cannot require a priori knowledge of the solutions (otherwise constructing the map would require first doing the search). This computational requirement is analogous to the restriction on search heuristics: to be useful, the heuristic itself must not take a long time to compute. These requirements on the mapping trade off against each other. Ideally one would like to find a way to satisfy them all so the map can be computed in polynomial time and give, at worst, polynomially small probability of obtaining a solution if the problem is soluble. One approach is to arrange for constructive interference in solutions while nonsolutions receive random contributions to their amplitude.
While such random contributions are not as effective as complete destructive interference, they are easier to construct, and they form the basis for a recent factoring algorithm as well as the method presented here.

Classical search algorithms can suggest ways to combine the use of superpositions with interference. These include local repair styles of search, where complete assignments are modified, and backtracking search, where solutions are built up incrementally. Using superpositions, many possibilities could be considered simultaneously. However, these search methods have no a priori specification of the number of steps required to reach a solution, so it is unclear how to determine when enough amplitude might be concentrated into solution states to make a measurement worthwhile. Since the measurement process destroys the superposition, it is not possible to resume the computation at the point where the measurement was made if it does not produce a solution. A more subtle problem arises because different search choices lead to solutions in differing numbers of steps. Thus one would also need to maintain any amplitude already in solution states while the search continues. This is difficult due to the requirement for reversible computations. While it may be fruitful to investigate these approaches further, the quantum method proposed below is based instead on a breadth-first search that incrementally builds up all solutions.

Classically, such methods maintain a list of goods of a given size. At each step, the list is updated to include all goods with one additional variable. Thus at step i, the list consists of sets of size i which are used to create the new list of sets of size i+1. For a CSP with n variables, i ranges from 0 to n-1, and after completing these n steps the list will contain all solutions to the problem. Classically, this is not a useful method for finding a single solution because the list of partial assignments grows exponentially with the number of steps taken. A quantum computer, on the other hand, can handle such lists readily as superpositions. In the method described below, the superposition at step i consists of all sets of size i, not just consistent ones, i.e., the sets at level i in the lattice. There is no question of when to make the final measurement because the computation requires exactly n steps. Moreover, there is an opportunity to use interference to concentrate amplitude toward goods. This is done by changing the phase of amplitudes corresponding to nogoods encountered while moving through the lattice. As with the division of search methods into a general strategy (e.g., backtrack) and problem-specific choices, the quantum mapping described below has a general matrix that corresponds to exploring all possible changes to the partial sets, and a separate, particularly simple, matrix that incorporates information on the problem-specific constraints. More complex maps are certainly possible, but this simple decomposition is easier to design and describe. With this decomposition, the difficult part of the quantum mapping is independent of the details of the constraints in a particular problem. This suggests the possibility of implementing a special-purpose quantum device to perform the general mapping. The constraints of a specific problem are used only to adjust phases as described below, a comparatively simple operation.
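To see in miniature how a problem-specific phase adjustment plus a problem-independent mixing matrix can concentrate amplitude, here is a small numpy sketch. It uses the Grover-style phase-flip and inversion-about-the-mean pattern, not the lattice mapping of this paper; the 16-state space and the single good state are arbitrary illustration choices.

```python
import numpy as np

n_states = 16
goods = {3}  # hypothetical: a single consistent ("good") state

# uniform superposition over all states
amp = np.full(n_states, 1 / np.sqrt(n_states))

for _ in range(3):  # ~ (pi/4) * sqrt(16) amplification rounds
    # problem-specific step: flip the phase of every nogood amplitude
    for s in range(n_states):
        if s not in goods:
            amp[s] = -amp[s]
    # problem-independent mixing: invert every amplitude about the mean
    amp = 2 * amp.mean() - amp

print(abs(amp[3]) ** 2)  # probability of observing the good state, ~0.96
```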
For constraint satisfaction problems, a simple alternative representation to the full lattice structure is to use partial assignments only, i.e., sets of variable-value pairs in which no variable appears more than once. At first sight this might seem better in that it removes from consideration the necessary nogoods and hence increases the proportion of complete sets that are solutions. However, in this case the number of sets as a function of level in the lattice decreases before reaching the solution level, precluding the simple form of unitary mapping described below for the quantum search algorithm. Another representation that avoids this problem is to consider assignments in only a single order for the variables (selected randomly or through the use of heuristics). This version of the set lattice has been previously used in theoretical analyses of phase transitions in search. It may be useful to explore this further for the quantum search, but it is unlikely to be as effective, because in a fixed ordering some sets will become nogood only at the last few steps, resulting in less opportunity for interference based on nogoods to focus amplitude on solutions.
Earlier we covered some of the new features of HTML 5, such as canvas, audio and video. In this article we'll cover many of the useful new elements that the specification brings to the table. Please remember, though, that HTML 5 is a work in progress, and not yet fully supported by many browsers.

Many new elements have been added in HTML 5, divided into form elements and other markup elements. Let's start with the form elements. Form elements as you know them from previous versions of HTML (specifically 4.01) are still supported, but new elements and attributes have been added to address the limitations that web developers have been facing. However, browser support for HTML 5 form elements is still limited. Here is an example that contains some of the new form features (only a portion of the code is shown). The key points to address are:
- The name field has a required attribute (and gets a red border, as you'll see in the output). It also has a placeholder attribute that directs the user to what to write in the textbox.
- The type of the e-mail address field is "email," and its autocomplete property is set to off. If the user types an email address in the wrong format, the textbox gets a red border and a tooltip with a message is displayed.
- The type of the phone number field is "tel."
- The type of the web site field is "url." The same validation applies here as for the email address if the user types a url in the wrong format.
- The score field is of type "range," with both minimum and maximum values.

(The original article showed a screenshot of the rendered form, with some information filled in, at this point.)
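Since the original listing did not survive in this copy, here is a minimal reconstruction consistent with the points above; the field names, placeholder text, and range limits are illustrative assumptions rather than the article's exact code.

```html
<form>
  <!-- required + placeholder: the browser blocks empty submission -->
  <input type="text" name="name" required placeholder="Enter your full name">

  <!-- built-in e-mail format validation; autocompletion disabled -->
  <input type="email" name="email" autocomplete="off">

  <!-- semantic telephone field (helps mobile keyboards) -->
  <input type="tel" name="phone">

  <!-- validated as a URL on submission -->
  <input type="url" name="website">

  <!-- numeric slider constrained between min and max -->
  <input type="range" name="score" min="0" max="10">

  <input type="submit" value="Submit">
</form>
```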
Contact: Lee J. Siegel University of Utah Caption: This map shows how mountains surrounding Utah's Great Salt Lake interact with the lake to cause some "lake-effect" snowstorms. Air masses from the north-northwest are channeled by mountains north of the lake so they converge above the lake. The air picks up heat and moisture from the lake, so it rises, cools and produces snow as it is funneled into the Salt Lake Valley by surrounding mountains. Credit: Jim Steenburgh, University of Utah Usage Restrictions: Credit required Related news release: Lake-effect snow sometimes needs mountains
This document introduces the thermodynamics of chemical reactions. To start, imagine the following demonstration: cold packs contain separate compartments of water and a salt such as ammonium nitrate. When you mix the salt and water, the cold pack gets cold. Such a reaction is called an endothermic reaction.

NH4NO3(s) → NH4+(aq) + NO3-(aq)    ΔH° = +28.1 kJ/mol

This reaction is spontaneous even though thermal energy is needed to break the ionic bonds of the crystalline NH4NO3. Why do endothermic reactions occur? The driving force for the NH4NO3 reaction is the much greater disorder that is possible for the NH4+ and NO3- ions in solution compared to these ions arranged in a solid. We describe the disorder or randomness of a system with the entropy, S, which has units of kJ/mol·K. Liquids are more disordered than solids, and gases are more disordered than liquids:

At 0 °C:  S[H2O(l)] > S[H2O(s)]
At 100 °C:  S[H2O(g)] > S[H2O(l)]

As 90 people enter a 180-seat lecture hall, do they take seats beginning in the front row to completely fill the front half of the room, leaving the back half of the room empty? No, the probability of that arrangement is very, very small. The more likely arrangement is for the 90 people to distribute themselves more or less randomly in the room. This arrangement is more probable, and we could quantitate this more probable arrangement in terms of entropy.

The entropy must be included to predict whether a reaction will be spontaneous. The total change in the available energy of a system is called the change in the Gibbs free energy, ΔG:

ΔG = ΔH - TΔS

where ΔG is the change in Gibbs free energy (kJ/mol), ΔH is the change in enthalpy (kJ/mol), T is absolute temperature in K, and ΔS is the change in entropy (kJ/mol·K).

The change in enthalpy is the amount of heat or work that is transferred when a reaction occurs. A reaction is spontaneous if ΔG is negative, that is, if the products have a lower Gibbs free energy than the reactants. Note that, depending on the signs of ΔH and ΔS, the spontaneity of a reaction can be temperature dependent. For standard conditions of 1 atm for gases and 1 M for solutes in solution, these energies are given an "°" superscript. The change in energy for any reaction (at standard conditions) can be found using tabulated standard energies of formation and standard entropies. The f subscript in ΔHf° and ΔGf° indicates standard heats of formation. I've used the rxn subscript to specify that the energy changes are those of a reaction; this subscript is usually left off.

From tables of standard enthalpies and free energies we can calculate ΔH° and ΔG° for this reaction:

NH4NO3(s) → NH4+(aq) + NO3-(aq)

|           | ΔHf° (kJ/mol) | ΔGf° (kJ/mol) |
| NH4NO3(s) |    -365.6     |    -184       |
| NH4+(aq)  |    -132.5     |    -79.3      |
| NO3-(aq)  |    -205.0     |    -108.7     |

ΔH° = -132.5 kJ/mol - 205.0 kJ/mol - (-365.6 kJ/mol) = +28.1 kJ/mol
ΔG° = -79.3 kJ/mol - 108.7 kJ/mol - (-184 kJ/mol) = -4.0 kJ/mol

Since ΔG° is a negative number, this reaction is spontaneous at standard concentrations. (We also observed that this reaction was spontaneous for the salt concentrations in the cold pack, which were probably higher than 1 M.)

Recall the activation energy diagram that was introduced in the kinetics document. The model is valid for both exothermic and endothermic reactions.
[The original page showed two activation energy diagrams here. In the exothermic diagram, the reactants climb an activation barrier Ea and the products end up lower in energy than the reactants by ΔG; in the endothermic diagram, the products end up higher in energy than the reactants by ΔG, again after climbing a barrier Ea. Ea is the activation energy and ΔG is the change in Gibbs free energy (in kJ/mol).]

Kinetics describes how quickly or slowly a reaction occurs. Thermodynamics describes the changes in the form of energy when a reaction occurs, for example, converting chemical energy to heat. Equilibrium describes reactions in which the reactants and products coexist.
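The ΔH° and ΔG° bookkeeping above is easy to check programmatically. A minimal Python sketch using the tabulated values from this page (the dictionary layout is just for illustration):

```python
# Standard formation values (kJ/mol) from the table above.
Hf = {"NH4NO3(s)": -365.6, "NH4+(aq)": -132.5, "NO3-(aq)": -205.0}
Gf = {"NH4NO3(s)": -184.0, "NH4+(aq)": -79.3, "NO3-(aq)": -108.7}

products = ["NH4+(aq)", "NO3-(aq)"]
reactants = ["NH4NO3(s)"]

dH = sum(Hf[s] for s in products) - sum(Hf[s] for s in reactants)
dG = sum(Gf[s] for s in products) - sum(Gf[s] for s in reactants)

print(f"dH = {dH:+.1f} kJ/mol")  # +28.1 kJ/mol: endothermic
print(f"dG = {dG:+.1f} kJ/mol")  # -4.0 kJ/mol: spontaneous at standard conditions
```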
Pulsar science with the SKA

The SKA (Square Kilometre Array) is a planned multi-purpose radio telescope with a collecting area approaching 1 million square metres. It will consist of individual elements placed in a 5-kilometre core and in smaller 'islands' that extend up to several thousands(!) of kilometres from the core. The elements will vary from small dipole antennas to 15-metre dishes, enabling the SKA to observe radio waves with frequencies from 100 MHz to 10 GHz. The site for the SKA will be selected in 2011; the shortlist consists of South Africa and Western Australia, two sites which are amongst the most radio-quiet zones in the world.

Artist's impression of the core of the SKA. Created by: Xilostudios

To motivate and steer the design of this telescope, Dr. Roy Smits and Prof. Michael Kramer of JBCA are studying the scientific possibilities to find and study radio pulsars with the SKA. A full-sky survey for pulsars would yield an enormous data rate of several terabytes every second! Analysing this data in real time requires a computing power close to 100 peta-operations per second, which is equivalent to that of 100 million laptops. Such a survey would increase the known pulsar population by a factor of 10. It will probably also yield several pulsar-black hole binaries, of which at least one is likely to allow tests of General Relativity far beyond current efforts. Furthermore, by finding and re-observing a selection of several thousand millisecond pulsars for a period of several years, the SKA will be able to provide direct proof of, and study the effects of, gravitational waves.

By: Roy Smits (JBCA)
January 26, 2011 in PHP & MySQL

I was recently faced with the situation of needing to enter a dollar amount from a <form> via PHP into a MySQL database. This may sound like a simple feat, but it turned out to be completely the opposite. The reason is that when data is "uploaded" into MySQL, a comma is treated as the end of the numeric value. So, when the amount 75,000 is entered, the amount that MySQL actually "sees" is 75. Finding the solution took me a bit longer than expected. And, as per usual, it ended up being something simple to correct. To remove the "," from the string:

<?php $price = ereg_replace(",", "", $_POST['nprice']); ?>

Let's break this down: the ereg_replace function says, locate the "," in the $_POST['nprice'] string and replace it with "", which is nothing. You can place any symbol you would like removed inside the first "". (Note that ereg_replace was deprecated in PHP 5.3 and removed in PHP 7; str_replace does the same job here, as shown below.) Now the data is clean and ready to be uploaded into MySQL accurately.

The next problem to solve: how do we show the number (example: 75000) as a currency? That proved to be much simpler.

<?php echo number_format($price); ?>

This indicates that the value stored in $price should be shown in a number format (example: 75,000).

To learn more about the function ereg_replace: http://php.net/manual/en/function.ereg-replace.php
To learn more about the function number_format: http://php.net/manual/en/function.number-format.php
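For modern PHP, here is a minimal sketch of the same round trip using str_replace; the nprice field name is carried over from the post, and the numeric cast is an extra safety step that was not part of the original.

```php
<?php
// Strip the thousands separators: "75,000" becomes "75000".
$price = str_replace(',', '', $_POST['nprice']);

// Cast to a numeric type instead of trusting raw form input.
$price = (float) $price;

// ...insert $price into MySQL here (ideally via a prepared statement)...

// When displaying, format it back to "75,000".
echo number_format($price);
?>
```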
Column Chromatography

(Reading: Mohrig, pp. 178-187.)

Column chromatography is one of the most useful methods for the separation and purification of both solids and liquids when carrying out small-scale experiments. It is another solid-liquid technique in which the two phases are a solid (stationary phase) and a liquid (moving phase). The theory of column chromatography is analogous to that of thin-layer chromatography (TLC). The most common adsorbents - silica gel and alumina - are the same ones used in TLC. The sample is dissolved in a small quantity of solvent (the eluent) and applied to the top of the column. The eluent, instead of rising by capillary action up a thin layer, flows down through the column filled with the adsorbent. Just as in TLC, an equilibrium is established between the solute adsorbed on the silica gel or alumina and the eluting solvent flowing down through the column. In this experiment, you will be using column chromatography to separate the two components of a binary mixture, and you will identify them from their melting points.

Go to the detailed procedure for Lab III: Column Chromatography.
On August 27, 2003, Mars was closest to the Sun (perihelion) while Earth was near its most distant point from the Sun (aphelion) at the time of the Mars opposition. This combination brought Earth and Mars unusually close together: 34.6 million miles apart, the closest they had been in 60,000 years. Photo credit: John Nemy & Carol Legate of Whistler, B.C. (NASA: http://science.nasa.gov/science-news/science-at-nasa/2009/09jun_marshoax/)
The Poincaré Conjecture

Imagine stretching a rubber band around the surface of an apple, then shrinking it down slowly. This shrinking could occur without tearing the rubber band or breaking the apple - and the band would never have to leave the surface. However, if this rubber band were stretched around, say, a tire, there is no way to shrink it to a point without breaking either the band or the tire. The surface of the apple is "simply connected," but the tire is not. Henri Poincaré (shown below) knew, in the early twentieth century, that the two-dimensional sphere is characterized by this property, and he asked whether the same is true of the three-dimensional sphere. The conjecture turned out to be immensely difficult to prove. After more than a century, Grigori Perelman finally devised a solution. In 2006, Perelman was awarded the Fields Medal for this contribution, but he decided to turn it down, stating: "I'm not interested in money or fame, I don't want to be on display like an animal in a zoo."
Open Database Connectivity (ODBC) is a widely accepted application-programming interface (API) for database access. It is based on the Call-Level Interface (CLI) specifications from X/Open and ISO/IEC for database APIs and uses Structured Query Language (SQL) as its database access language.

The Connector/ODBC architecture is based on five components, as shown in the following diagram:

The Application uses the ODBC API to access the data from the MySQL server. The ODBC API in turn communicates with the Driver Manager. The Application communicates with the Driver Manager using the standard ODBC calls. The Application does not care where the data is stored, how it is stored, or even how the system is configured to access the data. It needs to know only the Data Source Name (DSN). A number of tasks are common to all applications, no matter how they use ODBC. Because most data access work is done with SQL, the primary tasks for applications that use ODBC are submitting SQL statements and retrieving any results generated by those statements.

The Driver Manager is a library that manages communication between the application and the driver or drivers. It performs the following tasks:
- Resolves Data Source Names (DSN). The DSN is a configuration string that identifies a given database driver, database, database host and, optionally, authentication information that enables an ODBC application to connect to a database using a standardized reference. Because the database connectivity information is identified by the DSN, any ODBC-compliant application can connect to the data source using the same DSN reference. This eliminates the need to separately configure each application that needs access to a given database; instead you instruct the application to use a pre-configured DSN.
- Loads and unloads the driver required to access a specific database as defined within the DSN. For example, if you have configured a DSN that connects to a MySQL database, then the driver manager will load the Connector/ODBC driver to enable the ODBC API to communicate with the MySQL host.
- Processes ODBC function calls or passes them to the driver for processing.

The Connector/ODBC driver is a library that implements the functions supported by the ODBC API. It processes ODBC function calls, submits SQL requests to the MySQL server, and returns results back to the application. If necessary, the driver modifies an application's request so that the request conforms to syntax supported by MySQL.

The ODBC configuration file stores the driver and database information required to connect to the server. It is used by the Driver Manager to determine which driver is to be loaded according to the definition in the DSN. The driver uses this to read connection parameters based on the DSN specified. For more information, see Section 21.1.4, "Configuring Connector/ODBC".

The MySQL database is where the information is stored. The database is used as the source of the data (during queries) and the destination for data (during inserts and updates).

An ODBC Driver Manager is a library that manages communication between the ODBC-aware application and any drivers. Its main functionality includes:
- Resolving Data Source Names (DSN).
- Driver loading and unloading.
- Processing ODBC function calls or passing them to the driver.

Both Windows and Mac OS X include ODBC driver managers with the operating system. Most ODBC Driver Manager implementations also include an administration application that makes the configuration of DSN and drivers easier.
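As an illustration of what a pre-configured DSN looks like on a Unix system, here is a sketch of an odbc.ini entry for Connector/ODBC; the DSN name, host, and credentials are placeholders, and the driver name must match whatever is registered in odbcinst.ini on your machine.

```ini
; Hypothetical DSN definition in odbc.ini -- adjust every value to your setup.
[myapp]
Description = Example Connector/ODBC data source
Driver      = MySQL ODBC 3.51 Driver   ; must match the registered driver name
Server      = db.example.com
Port        = 3306
Database    = myapp_db
User        = appuser
Password    = secret
```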
Examples and information on these driver managers, including the Unix ODBC driver managers, are listed below:
- Microsoft Windows ODBC Driver Manager, included with the operating system.
- Mac OS X's ODBC Administrator, a GUI application that provides a simpler configuration mechanism for the Unix iODBC Driver Manager. You can configure DSN and driver information either through ODBC Administrator or through the iODBC configuration files.
- unixODBC Driver Manager for Unix (see http://www.unixodbc.org for more information). Recent versions of the unixODBC installation package include the Connector/ODBC 3.51 driver.
- iODBC ODBC Driver Manager for Unix (see http://www.iodbc.org for more information).
Unit system: SI derived unit
Named after: Heinrich Hertz
In SI base units: 1 Hz = 1 s⁻¹

The hertz (symbol Hz) is the SI unit of frequency, defined as the number of cycles per second of a periodic phenomenon. One of its most common uses is the description of the sine wave, particularly those used in radio and audio applications, such as the frequency of musical tones. The word "hertz" is named for Heinrich Rudolf Hertz, who was the first to conclusively prove the existence of electromagnetic waves.

The hertz is equivalent to cycles per second. In defining the second, the CIPM declared that "the standard to be employed is the transition between the hyperfine levels F = 4, M = 0 and F = 3, M = 0 of the ground state ²S₁/₂ of the cesium 133 atom, unperturbed by external fields, and that the frequency of this transition is assigned the value 9 192 631 770 hertz," thereby effectively defining the hertz and the second simultaneously. In English, "hertz" is also used as the plural form. As an SI unit, Hz can be prefixed; commonly used multiples are kHz (kilohertz, 10³ Hz), MHz (megahertz, 10⁶ Hz), GHz (gigahertz, 10⁹ Hz) and THz (terahertz, 10¹² Hz).

One hertz simply means "one cycle per second" (typically that which is being counted is a complete cycle); 100 Hz means "one hundred cycles per second," and so on. The unit may be applied to any periodic event—for example, a clock might be said to tick at 1 Hz, or a human heart might be said to beat at 1.2 Hz. The "frequency" or activity of aperiodic or stochastic events, such as radioactive decay, is expressed in becquerels, not hertz.

Even though angular velocity, angular frequency and hertz all have the dimensions of 1/s, angular velocity and angular frequency are not expressed in hertz, but rather in an appropriate angular unit such as radians per second. Thus a disc rotating at 60 revolutions per minute (rpm) is said to be rotating at either 2π rad/s or 1 Hz, where the former measures the angular velocity and the latter reflects the number of complete revolutions per second. The conversion between a frequency f measured in hertz and an angular velocity ω measured in radians per second is ω = 2πf, and conversely f = ω/2π.

This SI unit is named after Heinrich Hertz. As with every International System of Units (SI) unit whose name is derived from the proper name of a person, the first letter of its symbol is upper case (Hz). However, when an SI unit is spelled out in English, it should always begin with a lower case letter (hertz), except in a situation where any word in that position would be capitalized, such as at the beginning of a sentence or in capitalized material such as a title. Note that "degree Celsius" conforms to this rule because the "d" is lowercase. —Based on The International System of Units, section 5.2.

The hertz is named after the German physicist Heinrich Hertz (1857–1894), who made important scientific contributions to the study of electromagnetism. The name was established by the International Electrotechnical Commission (IEC) in 1930. It was adopted by the General Conference on Weights and Measures (CGPM, Conférence générale des poids et mesures) in 1960, replacing the previous name for the unit, cycles per second (cps), along with its related multiples, primarily kilocycles per second (kc/s) and megacycles per second (Mc/s), and occasionally kilomegacycles per second (kMc/s). The term cycles per second was largely replaced by hertz by the 1970s.

Sound is a traveling longitudinal wave which is an oscillation of pressure.
Humans perceive the frequency of sound waves as pitch. Each musical note corresponds to a particular frequency, which can be measured in hertz. An infant's ear is able to perceive frequencies ranging from 20 Hz to 20,000 Hz; the average adult human can hear sounds between 20 Hz and 16,000 Hz. The range of ultrasound, high-intensity infrasound and other physical vibrations such as molecular vibrations extends into the megahertz range and well beyond.

Electromagnetic radiation

Radio frequency radiation is usually measured in kilohertz (kHz), megahertz (MHz), or gigahertz (GHz). Light is electromagnetic radiation that is even higher in frequency, with frequencies in the range of tens (infrared) to thousands (ultraviolet) of terahertz. Electromagnetic radiation with frequencies in the low terahertz range (intermediate between those of the highest normally usable radio frequencies and long-wave infrared light) is often called terahertz radiation. Even higher frequencies exist, such as those of gamma rays, which can be measured in exahertz. (For historical reasons, the frequencies of light and higher-frequency electromagnetic radiation are more commonly specified in terms of their wavelengths or photon energies: for a more detailed treatment of this and the above frequency ranges, see electromagnetic spectrum.)

In computing, most central processing units (CPU) are labeled in terms of their clock rate expressed in megahertz or gigahertz (10⁶ or 10⁹ hertz, respectively). This number refers to the frequency of the CPU's master clock signal ("clock rate"). This signal is simply an electrical voltage which changes from low to high and back again at regular intervals; it is a square wave. Hertz has become the primary unit of measurement accepted by the general populace for the performance of a CPU, but many experts have criticized this approach, which they claim is an easily manipulable benchmark. For home-based personal computers, CPU clock rates have ranged from approximately 1 megahertz in the late 1970s (Atari, Commodore, Apple computers) to up to 6 GHz in the present (IBM POWER processors).

SI multiples

|10⁻¹ Hz|dHz|decihertz|   |10¹ Hz|daHz|decahertz|
|10⁻² Hz|cHz|centihertz|  |10² Hz|hHz|hectohertz|
|10⁻³ Hz|mHz|millihertz|  |10³ Hz|kHz|kilohertz|
|10⁻⁶ Hz|µHz|microhertz|  |10⁶ Hz|MHz|megahertz|
|10⁻⁹ Hz|nHz|nanohertz|   |10⁹ Hz|GHz|gigahertz|
|10⁻¹² Hz|pHz|picohertz|  |10¹² Hz|THz|terahertz|
|10⁻¹⁵ Hz|fHz|femtohertz| |10¹⁵ Hz|PHz|petahertz|
|10⁻¹⁸ Hz|aHz|attohertz|  |10¹⁸ Hz|EHz|exahertz|
|10⁻²¹ Hz|zHz|zeptohertz| |10²¹ Hz|ZHz|zettahertz|
|10⁻²⁴ Hz|yHz|yoctohertz| |10²⁴ Hz|YHz|yottahertz|

Frequencies not expressed in hertz

Even higher frequencies are believed to occur naturally, in the frequencies of the quantum-mechanical wave functions of high-energy (or, equivalently, massive) particles, although these are not directly observable and must be inferred from their interactions with other phenomena. For practical reasons, these are typically not expressed in hertz, but in terms of the equivalent quantum energy, which is proportional to the frequency by the factor of Planck's constant.

References
- "hertz". (1992). American Heritage Dictionary of the English Language, 3rd ed. Boston: Houghton Mifflin.
- "SI brochure: Table 3. Coherent derived units in the SI with special names and symbols".
- "[Resolutions of the] CIPM, 1964 - Atomic and molecular frequency standards". SI brochure, Appendix 1.
- NIST Guide to SI Units - 9 Rules and Style Conventions for Spelling Unit Names, National Institute of Standards and Technology.
- "SI brochure, Section 2.2.2, paragraph 6".
- "IEC History". Iec.ch. 1904-09-15. Retrieved 2012-04-28.
- Ernst Terhardt (2000-02-20). "Dominant spectral region". Mmk.e-technik.tu-muenchen.de. Retrieved 2012-04-28.
- Amit Asaravala (2004-03-30). "Good Riddance, Gigahertz". Wired.com. Retrieved 2012-04-28.
- BIPM: Cesium ion fCs definition.
- National Research Council of Canada: Generation of the Hz.
- National Research Council of Canada: Cesium fountain clock.
- National Physical Laboratory: Trapped ion optical frequency standards.
- National Research Council of Canada: Optical frequency standard based on a single trapped ion.
- National Research Council of Canada: Optical frequency comb.
Time Dilation Gets a Quantum Twist

Quantum vs. general relativistic conceptions of time go head-to-head in a proposed table-top test.

October 1, 2012
University of Vienna

The story goes that when Galileo pondered what might happen if you threw two balls of different mass off the leaning tower of Pisa, he realised that they would fall at the same rate. When Einstein mused whether it would be possible to catch up with a light beam if you could run fast enough, he hit on the idea that light's speed must be constant. Both men were fans of the gedanken—or thought—experiment: a flight of fancy that allows you to conceive the inconceivable and make startling strides in understanding.

Caslav Brukner, a theoretical quantum physicist, and his team at the University of Vienna in Austria have taken a leaf out of the book of the greats and come up with their own thought experiment, pitting the conceptions of time in general relativity and quantum mechanics against each other. The difference? Theirs could soon be carried out in a table-top test in the lab. Just talking about such an experiment has enticed experimentalists to Brukner's door.

It has long been assumed that the overlap between quantum theory, which governs the behavior of the very small, and general relativity, which deals with how planets and stars warp spacetime on cosmic scales, lies way beyond experimental reach—perhaps only within the center of black holes. But surprisingly, Brukner believes there may be a way to test these two theories much closer to home. This would not be a test of quantum gravity—a theory uniting quantum mechanics and general relativity—itself, Brukner is quick to add. But he and his colleagues, Magdalena Zych, Fabio Costa and Igor Pikowski, are proposing a test in which the effects of both quantum mechanics and general relativity on a clock are important.

Their idea, outlined in the journal Nature Communications, is based on a modification of the classic double-slit experiment, in which particles are shot at a wall with two closely separated slits in it. In the standard version of the experiment, the particles are collected, after they have passed through the slits, on a screen beyond the wall. Over time, they build up to create an interference pattern on the screen—similar to the pattern you would expect to see if two waves, rather than particles, were interfering with each other. The particles exhibit wave-like behavior, as though individual particles are passing through both slits and interfering with themselves on the other side. (See "Charting the Post-Quantum Landscape" for other ways in which Brukner and colleagues are revisiting the double-slit experiment.)

The team's twist on this quantum classic hinges on an important aspect of the double-slit experiment: quantum particles do not like to be spied on. If you watch to see which path a particle takes—whether it passed through the right or left slit—you destroy this wave-like behavior and the interference pattern disappears. Instead, the particles appear to have shot through the slits like bullets. Brukner's team has combined this effect, known as quantum complementarity, with an equally wacky but central characteristic of general relativity: time dilation. While developing relativity, Einstein realized that gravity affects the rate at which clocks tick. This has been confirmed experimentally, using atomic clocks raised to different heights; clocks closer to the ground tick more slowly.
So here is the experiment proposed by Brukner and his team: Imagine you have a particle that carries its own wristwatch—some sort of evolving internal degree of freedom, such as its spin, that has some repetitive behavior that can serve as a clock. Usually when you send particles through a double-slit experiment, the slits are arranged side-by-side, right and left, at the same height. But what if you send that clock through a wall in which the two slits are arranged so that one slit is higher up—and thus in a different gravitational potential—than the other? General relativity says that the clock travelling along the lower path will tick slower than the clock passing through the upper slit. So far, so good for Einstein. But here’s the kicker: quantum complementarity says that the clocks can only continue to behave as waves if there is no significant time dilation effect between the two paths. That’s because, if there is a discernible time dilation, you would be able to look at the clock and deduce which path it had taken, based on whether it seemed to have ticked faster or slower en route. "This vanishing of the interference will really be a proof that there was a general relativistic notion of time involved," says Brukner. The experiment pits two conceptions of time—the quantum mechanical and the general relativistic—head to head. On one side, the double-slit experiment puts the clock into a quantum superposition—a blurry confusion of multiple identities. We should not know which path it took during the experiment, and the time shown on the clock is undefined. This is in contrast with general relativity, in which time has an objective status: it is well-defined at single points. "In this experiment the time shown by the clock becomes quantum mechanically indefinite, that is, before it is measured it has no predetermined value," says Brukner. On the Fringe: the standard double-slit experiment creates a distinctive pattern of fringes; will the team’s proposed experiment destroy it? The main significance of the proposal is in providing a new way to measure time dilation on a breathtakingly small scale, using a single clock. To date the effect of time dilation has only been tested by comparing two independent clocks, where each independent clock experiences a well-defined time. In the proposed experiment, there is only one clock, which—by the wonders of quantum mechanics—can take two paths simultaneously because it exists in a superposition of going through two pathways. That also makes it tough to carry out, however. "One clock in two arms is the cool thing, but also the difficult thing," says Markus Arndt, a quantum optics experimenter at the University of Vienna. But it can be done, he adds: "It is a very quantum and a very sound thing to suggest." It is a difficult practical undertaking because the separation so far achieved in experiments that can maintain the required superposition between the two paths is small, so it is tough to accumulate enough of a difference in gravitational potential between the two paths to discern a time dilation effect. The effect would be very tiny—to see a difference of the order of a quadrillionth of a second ($10^{-15}$ seconds) one would need to preserve the superposition with a path separation of 1 meter in Earth’s gravitational field for about 10 seconds.
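The size of that quoted effect can be checked with the standard weak-field time dilation formula; this back-of-the-envelope check is mine, not the paper's, and the round numbers are illustrative:

\Delta\tau \approx \frac{g\,\Delta h}{c^2}\,t = \frac{(9.8\ \text{m/s}^2)(1\ \text{m})(10\ \text{s})}{(3\times 10^8\ \text{m/s})^2} \approx 1.1\times 10^{-15}\ \text{s},

which is indeed about a quadrillionth of a second, matching the figure given above.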
But it is not insurmountable—GPS navigation systems based on atomic clocks need to compensate for that sort of difference. There’s also the question of what to use as a clock: molecules that have different rotational and vibrational internal dynamics could be used. Arndt is frequently approached by colleagues about the relevance of such ‘internal clocks’ in macromolecules. In that sense, Brukner’s proposal did not come as a full surprise, he says. But their use for exploring time dilation gives it a conceptually important new twist. Getting a molecule to move slowly enough, so that it accumulates sufficient time dilation, is another issue to resolve. "One would first have to prepare a suitable rotational state that acts as the hand of the clock…in small molecules this might be done," says Arndt. In short, the experiment is far from trivial, "nothing for next year…" muses Arndt. "But the proposal addresses conceptual questions of quantum mechanics and should for that reason be experimentally realized." What will the result of the experiment be? Only time will tell.
AMRIT wrote on March 29, 2013
Fundamental time, which is a numerical order of change, has only a mathematical existence. Emergent time, which is a duration of change, enters existence upon measurement by the observer.
TERRY BOLLINGER wrote on February 23, 2013
Is your analysis related in some way to the one by Paul Dirac in his Lectures on Quantum Mechanics? A quick example is this partial quote from page 66 of the current paperback version: "... it doesn't seem possible to fulfill the conditions which are necessary for building up a relativistic quantum field theory on curved surfaces." His arguments are based on a use of Hamiltonian methods that I strongly suspect are equivalent to assuming the existence of a flat space,...
BERT wrote on February 7, 2013
Attached is a paper where the double slit was performed using microwaves in the 10 GHz range (3 cm wavelength). Is a 3 cm separation adequate to detect time dilation?
<urn:uuid:18473dcc-fa07-467e-a0db-1315c817ad9c>
3.015625
1,961
Comment Section
Science & Tech.
40.738637
Adult female Bagheera kiplingi eats Beltian body harvested from ant-acacia Photo/R.L. Curry For ages, ants have had a monopoly on the coveted acacia, protecting the plant from would-be predators in exchange for shelter and food, or so they thought. Skulking in the background, and recently discovered, is an unlikely competitor of the ant — a spider. And this is no ordinary arachnid. The Bagheera kiplingi also happens to be a vegetarian, and is the first of its kind known to science. “This is really the first spider known to specifically ‘hunt’ plants,” said Christopher Meehan of Villanova University. “It is also the first known to go after plants as a primary food source.” The veggie-loving tendency of this jumping spider was first discovered in Central America back in 2001 by Eric Olson of Brandeis University. Since then he has teamed up with Meehan, who independently observed the jumping spider in 2007, to learn more about this unusual creature and the extent to which it likes plants. Not only is Bagheera kiplingi the only predominantly vegetarian spider among the 40,000 known spider species, with plants making up more than 90 percent of its diet, but it’s showing scientists a complex side of arachnid biology and behavior that indicates the spider’s diet is just the beginning of this animal’s surprising life history. Adult female Bagheera kiplingi defends her nest against acacia-ant worker. Photo/R.L. Curry Ants are aggressive defenders of the acacia plant, making life difficult for outsiders who attempt to encroach on their turf. After all, they want those yummy Beltian bodies all to themselves. So how is the jumping spider managing to exploit the acacia for both food and shelter? Science is still trying to figure that out, but preliminary research shows the spiders take advantage of the invertebrate equivalent of run-down real estate, setting up residence in less-than-desirable regions of the acacia. But their ingenuity doesn’t stop there. Bagheera kiplingi are outsmarting their ant foes, said Meehan, exploiting their intelligence and agility to get around the ants. “Individuals employ diverse, situation-specific strategies to evade ants, and the ants simply cannot catch them,” he said. As if to add insult to co-evolution, the ants may not even know when spiders are in their midst. Bagheera kiplingi literally dupes the ant by bearing young that look like carbon copies of the ants, and Meehan has reason to suspect that the spiders actually wear a sort of insect perfume that makes them smell like their would-be attackers. More research is forthcoming, including a look at the possibility that spider dads help raise the babies, a virtually unheard-of behavior in spider biology. In the meantime, I hear Meehan and Olson’s methods included high-definition video of these smarty-pants vegetarian spiders. Now that would be some footage to see. Meehan and Olson’s study is available in the October 12 issue of Current Biology.
<urn:uuid:2e067406-bfbf-48d2-ae4f-a1988f947850>
3.125
676
Personal Blog
Science & Tech.
45.324392
3 October 2008
Image credit: Marcus Gheeraerts the Younger
The complicated shape of a warped leaf or a human ear can emerge from surprisingly simple rules--cells grow in the right places and create the right stresses. In the 8 August and 10 October issues of Physical Review Letters, researchers mathematically analyze how a growing disc can take on any shape from a cone to a potato chip to a ruffled Elizabethan collar. The results expand the understanding of the rich palette of spontaneously-forming shapes in nature. If you cut out a wedge from a circle of stiff paper and tape the two cut edges together, you get a cone. Julien Dervaux and Martine Ben Amar of the École Normale Supérieure (ENS) in Paris describe in their August paper how something similar happens when a thin, circular membrane expands in a non-uniform way, for example, in a growing plant. Others have looked at a disk with a growth rate that varies from the center to the edge and that grows equally in all directions. But Dervaux and Ben Amar analyze the case where the circumference and the radius grow at different rates. If the outward, radial growth is faster, the outside edge isn't long enough to go around the flat disk, so it distorts into a cone. But if the outward growth is slower than the circumference growth, the outside edge becomes too long instead of too short. The same effect occurs in the paper cutout if you tape in a larger wedge than you removed, the extra amount measured by what the team calls the excess angle. Using a mathematical analysis--as well as actual paper cutouts--the researchers show that the membrane naturally buckles to form a shape reminiscent of a saddle. But unlike a saddle, if you sliced this shape along the diameter, you would see perfectly straight edges in the cross section. Like a cone, it's curved in the "circumferential" direction but straight in the radial direction. The team suggests that this shape is similar to that of Acetabularia acetabulum, an alga that also forms a cone and a flat disk during different phases of its growth. Martin Michael Müller, now at ENS, and Jemal Guven of the National Autonomous University of Mexico (UNAM) in Mexico City have previously described ordinary cones using differential geometry techniques more commonly employed for curved spacetime. While Dervaux and Ben Amar's August paper considers the tiny deviations from flatness that result from inserting a sliver of extra material in the disk, Müller and Guven's October paper with Ben Amar applies these mathematical tools to shapes with large excess angles. The team found that there are infinitely many ways to warp the disk, each corresponding to a different number of undulations around the perimeter. The twofold buckling, with two upward and two downward bends, requires the least energy. But as the excess angle increases, neighboring undulations eventually bump into each other. "At that point, where it starts to touch, the threefold becomes important," says Müller. If the excess grows larger still, new shapes with four, five or more undulations successively take over, eventually resembling the "ruff" collars seen in Elizabethan portraits. Finally, for excess angles larger than about 2000 degrees, no undulating shape can accommodate the excess without bumping into itself. Thomas Witten of the University of Chicago calls the new results a "pretty piece of math."
He says "examples of this kind of buckling are well appreciated and known," but "the richness of this buckling has only recently been appreciated."--Don Monroe If you cut into a paper disk along the radius and tape in a wedge (green), it increases the circumference but not the radius. As the "excess angle" increases from zero to 360 degrees, the disk becomes a warped surface with two deep lobes. As the excess angle increases further (not shown in the video), additional lobes appear, eventually making shapes like Queen Elizabeth's collar. Video courtesy of Martin Michael Müller, École Normale Supérieure. Information on viewing video files.
<urn:uuid:63755ddf-ebc4-4cf5-9845-cd78a8cfc000>
3.734375
846
Truncated
Science & Tech.
37.878254
On with the demonstration.... Begin with a bar of Ivory soap (or you may want to use a sliver of soap.... you'll see what I mean). Make the appropriate observations of the soap. Place the soap on a microwave-safe plate. It deflated a little at this point because my camera's batteries died and I had to wait for them to recharge. Some lessons this demonstration can tie into:
- A follow-up to the previously mentioned density experiment.
- A discussion of gas laws (Charles's Law, specifically) - when a gas is heated, its volume will increase. (A rough calculation follows below.)
- A lesson on physical and chemical changes. Explosions are chemical changes by definition. This demonstration looks like an explosion, but it's not. It's just a physical change.
*A funny story - I told my 5-year-old that we were going to do a science experiment after dinner. He asked what we were going to do and all I would tell him is that we were using soap. Then I asked if he had any hypotheses about what would happen to the soap (knowing absolutely nothing about what we were going to do to it) and he said "It's going to explode." I think he was a little surprised at how close to right he was!
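For anyone who wants to put numbers on the Charles's Law item above, here is a rough, illustrative estimate (the temperatures are assumptions, and heating of trapped air is only part of the story; water vaporizing inside the soap's many air pockets does most of the puffing):

\frac{V_1}{T_1} = \frac{V_2}{T_2} \quad\Rightarrow\quad V_2 = V_1 \times \frac{T_2}{T_1} \approx V_1 \times \frac{373\ \text{K}}{293\ \text{K}} \approx 1.3\,V_1

So air pockets warmed from room temperature to near the boiling point of water would grow by only about 30 percent from heating alone; the much larger expansion you see comes from steam inflating those pockets.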
<urn:uuid:434db414-43b9-4ec7-a08d-a0f002a251c4>
3.640625
258
Personal Blog
Science & Tech.
69.843506
The common layman’s definition of topology generally involves rubber sheets or clay, with the idea that things are “the same” if they can be stretched, squeezed, or bent from one shape into the other. But the notions of topological equivalence we’ve been using up until now don’t really match up to this picture. Homeomorphism — or diffeomorphism, for differentiable manifolds — is about having continuous maps in either direction, but there’s nothing at all to correspond to the whole stretching and squeezing idea. Instead, we have homotopy. But instead of saying that spaces are homotopic, we say that two maps are homotopic if the one can be “stretched and squeezed” into the other. And since this stretching and squeezing is a process to take place over time, we will view it sort of like a movie. We say that a continuous function $H: X \times [0,1] \to Y$ is a continuous homotopy from $f: X \to Y$ to $g: X \to Y$ if $H(p,0) = f(p)$ and $H(p,1) = g(p)$ for all $p \in X$. For any time $t \in [0,1]$, the map $H_t = H(\cdot, t)$ is a continuous map from $X$ to $Y$, which is sort of like a “frame” in the movie that takes us from $f$ to $g$. As time passes over the interval, we highlight one frame at a time to watch the one function transform into the other. To flip this around, imagine starting with a process of stretching and squeezing to turn one shape into another. In this case, when we say “shape” we really mean a subspace or submanifold $X$ of some outside space $Y$ we occupy, like the three-dimensional space that contains our idiomatic doughnuts and coffee mugs. The maps in this case are the inclusions of the subspaces into the larger space. Anyway, next imagine carrying out this process, but with a camera recording it at each step. Then cut out all the frames from the movie and stack them up. We see in each layer of this flipbook how the shape at that time is included into the larger space $Y$. That is, we have a homotopy. Now, for an example: we say that a space is “contractible” if its inclusion into itself is homotopic to a map of the whole space to a single point within the space. As a particular example, the unit ball $B^n \subseteq \mathbb{R}^n$ is contractible. Explicitly, we define a homotopy $H(p,t) = (1-t)p$, which is certainly smooth; we can check that $H(p,0) = p$ and $H(p,1) = 0$, so at one end we have the identity map of $B^n$ into itself, while at the other we have the constant map sending all of $B^n$ to the single point at the origin. We should be careful to point out that homotopy only requires that the function $H$ be continuous, and not invertible in any sense. In particular, there’s no guarantee that the frame $H_t$ for some fixed $t$ is a homeomorphism from $X$ onto its image. If it turns out that each frame $H_t$ is a homeomorphism of $X$ onto its image, then we say that $H$ is an “isotopy”.
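The same straight-line formula is worth recording in general, since nothing about it is special to the ball; this remark is an addition to the post. If $C \subseteq \mathbb{R}^n$ is any convex set and $f, g: X \to C$ are continuous, then

H(p,t) = (1-t)\,f(p) + t\,g(p)

is continuous, satisfies $H(p,0) = f(p)$ and $H(p,1) = g(p)$, and stays inside $C$ for every $t$ precisely because $C$ is convex. So any two continuous maps into a convex set are homotopic, and taking $g$ to be a constant map recovers the contractibility of the unit ball.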
<urn:uuid:9beeeec0-aba9-4756-a953-8314c60687c3>
3.578125
626
Personal Blog
Science & Tech.
50.682655
Jake conducted a research project under the supervision of Dr. Davis that was a follow-up to a recent project that involved examining color change in museum specimens of mammals. That project showed how the degree of redness in the coat color of a certain species of bat appears to vary with the age of the museum specimen (not the age of the live animal). Jake conducted a follow-up analysis of pelage color in golden mice specimens and found a similar trend - that older specimens are actually more colorful than newer specimens, which was also found with the bats. Jake's results suggest that there is something about the preservation of mammalian specimens that changes the properties of their fur over time, and it is not simply color-fading. Photo - Jake examining golden mice specimens at the Georgia Museum of Natural History on campus. The specimen in his hand was collected and prepared by none other than Eugene Odum in 1942!
<urn:uuid:479a6bec-0fe7-4c4f-8157-fdd874605335>
2.90625
182
Knowledge Article
Science & Tech.
38.167222
sizeof returns the number of bytes reserved for a variable or data type. The sizeof operator is a compile-time operator that yields an integer value (of the unsigned type size_t). In other words, since it is a compile-time operator, it does not get compiled into executable code in the expression; the compiler simply substitutes the size. (The one exception, in C99 and later, is a variable-length array operand, whose size is computed at run time.) Syntax of this operator is sizeof(type) or sizeof expression. For instance, if one wants to know the size of a data type, it can be obtained by using the sizeof operator as follows: printf("%zu\n", sizeof(int)); This prints the size of an integer, which is typically 4 on today's platforms (older 16-bit compilers commonly used 2). Suppose we have a program with a statement like a = sizeof(10) / sizeof(b); where b is a float. On a 16-bit compiler, sizeof the integer literal 10 would be 2 and sizeof the float b would be 4. Therefore the expression would be 2/4, and because sizeof yields integer values, the division truncates and gives the result 0.
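A small, compilable example pulling these points together (the variable names are mine, and the printed sizes are platform-dependent):

#include <stdio.h>

int main(void)
{
    int i = 10;
    float b = 2.5f;
    int arr[10];

    /* sizeof applied to a type name (parentheses required) */
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(float) = %zu\n", sizeof(float));

    /* sizeof applied to an expression (parentheses optional) */
    printf("sizeof i      = %zu\n", sizeof i);
    printf("sizeof b      = %zu\n", sizeof b);
    printf("sizeof arr    = %zu\n", sizeof arr);                  /* whole array, e.g. 40 */
    printf("elements      = %zu\n", sizeof arr / sizeof arr[0]);  /* classic length idiom */

    /* the pitfall from the question: both operands are integers,
       so the division truncates -- e.g. 4/8 prints 0, never 0.5 */
    printf("int/double    = %zu\n", sizeof(int) / sizeof(double));

    return 0;
}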
<urn:uuid:7f03d3c5-70d3-412a-945f-73dfa17ad3ff>
3.984375
176
Q&A Forum
Software Dev.
42.488721
The planet Venus has an atmosphere containing 92% carbon dioxide and an atmospheric pressure nearly 100 times that of Earth. As a consequence, the greenhouse effect on Venus is enormous. Without that greenhouse effect the planet would be several hundred degrees cooler at the surface than it is. The Earth has very little CO2 in comparison, only 396 parts per million of the atmosphere by molecule, or 0.04% by volume. How can such a seemingly feeble amount of the gas be a concern for us here on Earth? Well, for one thing, the total greenhouse effect on Earth amounts to 33C rather than hundreds of degrees as on Venus, and CO2 accounts for maybe 6C on its own, while water vapor and clouds make up about 26C. Not much compared to Venus, but quantity does not tell the whole story. The absolute quantity of a greenhouse gas is less important than the marginal increase as a percentage of the total. For each doubling of CO2, 3.7 watts per square meter (3.7 W/m^2) of additional energy is retained within the troposphere (the lowest level of the atmosphere). We will double CO2 over pre-industrial levels (from 280 parts per million to 560 ppm) by the mid-21st century. By that time 3.7 W/m^2 of additional warming energy will be radiated to the surface by CO2. This will warm the surface by 1.2C after the surface temperature reaches thermal equilibrium with the new “forcing”. This is basic physics. In actuality there will be more warming than that; how much more we are not certain, because of feedbacks within the climate system. One thing we do know: the last time it was that warm or warmer, sea levels were at least tens of feet higher than today, and that is no joke.
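For reference, the 3.7 W/m^2 figure matches the widely used logarithmic fit for CO2 forcing from Myhre et al. (1998); the no-feedback response below uses a Planck response of roughly 0.3 K per W/m^2, quoted here as a ballpark rather than a precise constant:

\Delta F = 5.35 \ln\!\left(\frac{C}{C_0}\right)\ \text{W/m}^2, \qquad 5.35 \ln\!\left(\frac{560}{280}\right) = 5.35 \ln 2 \approx 3.7\ \text{W/m}^2

\Delta T_{\text{no-feedback}} \approx 0.3\ \text{K per W/m}^2 \times 3.7\ \text{W/m}^2 \approx 1.1\ \text{K}

which is consistent with the 1.2C figure quoted above.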
<urn:uuid:3016abf9-b0e7-4417-8a73-e0251ea3e7d6>
3.84375
374
Personal Blog
Science & Tech.
63.208053
Joined: 16 Mar 2004
Posted: Wed Sep 20, 2006 10:11 am
Post subject: Space shuttle nanostructures tests
Space shuttle tests show live cells influence growth of nanostructures - implications for sensors, tuberculosis modeling, cell preparation, surgical implant safety. Far above the heads of Earthlings, arrays of single-cell creatures are circling Earth in nanostructures... The sample devices are riding on the International Space Station (courtesy of Sandia National Laboratories and the University of New Mexico, NASA and the US Air Force) to test whether nanostructures whose formation was directed by yeast and other single cells can create more secure homes for their occupants — even in the vacuum and radiation of outer space — than those created by more standard chemical procedures. Sandia is a National Nuclear Security Administration laboratory. “Cheap, tiny, and very lightweight sensors of chemical or biological agents could be made from long-lived cells that require no upkeep, yet sense and then communicate effectively with each other and their external environment,” says former UNM graduate student and Sandia consultant Helen Baca, lead author on the paper. Baca was advised by Sandia Fellow and UNM professor of chemical engineering, molecular genetics & microbiology Jeff Brinker. Groups of such long-lived cells may also serve as models to investigate how tuberculosis bacteria survive long periods of dormancy within human bodies. En masse, they also may be used to generate signals to repel harmful bacteria from the surfaces of surgical tools like catheters. Finally, the approach also offers a simple way to genetically modify cells. “This is not the end of the story, but the beginning,” says Brinker. “No one else has created these symbiotic materials and observed these effects. It’s a totally new area.” How does this happen? In a paper in the July 21 issue of Science, a team of researchers from Sandia and UNM under the leadership of Brinker demonstrated that common yeast cells (as well as bacterial and some mammalian cells) customize the construction of nanocompartments built for them. These nanocompartments — imagine a kind of tiny apartment house — form when single cells are added to a visually clear, aqueous solution of silica and phospholipids, and the slurry is then dried on a surface. (Phospholipids are two-sided molecules that make up cell membranes.) Ordinarily, the drying of lipid-silica solutions produces an ordered porous nanostructure by a process known as molecular self-assembly. This can be visualized as a kind of tract housing. In the current experiments, however, the construction process is altered by the live yeast or bacteria. During drying, the cells actively organize lipids into a sort of multi-layered cell membrane that not only serves as an interface between the cell and the surrounding silica nanostructure, but acts as a template helping to direct the formation of the surrounding silica nanostructure. This improved architecture seamlessly retains water, needed by the cell to stay alive. Further, by eliminating stresses ordinarily caused by drying, the nanostructure forms without fine-line cracks. These improvements help maintain the functionality of the cell and the accessibility of its surface. By comparison, the more common practice of merely ‘trapping cells in gels’ leads to stress, cracks, and rapid cell death upon drying. Already launched on the space shuttle.
The incorporated cells of the Brinker group are self-sustaining — they do not need external buffers and even survive being placed in a vacuum. To study their use as cell-based sensors for extreme environments, samples of the yeast- and bacteria-containing nanostructures were launched on the recently completed mission of the US space shuttle Discovery. On the Space Station, experiments will be performed to determine their longevity when exposed to the extreme stresses of the radiation and vacuum of outer space. Of the NASA mission, Brinker says, “Ordinarily, exposed to such extreme conditions, the cells would turn into raisins. But, because of the remarkable coherency of the cell-lipid-silica interface and the ability of the lipid-silica nanostructure to serve as a reservoir for water, no cracking or shrinkage is observed. The cells are maintained in the necessary fluidic environment.” The cell-architected nanostructure is, he says, “an amazing way to preserve a cell.” The cells already have emerged still viable after examination in electron microscopes and after X-ray exposure in Argonne National Laboratory’s Advanced Photon Source, where the accelerating voltage ranges from one to 20 keV, says Brinker. Genetic modification done cheap. It is noteworthy that the entrapped cells easily absorb other nano-components inserted at the cellular interface. Because of this, the cell can internalize new DNA (introduced as a plasmid), providing an efficient form of genetic modification of cells without the usual procedures of heat shock or cumbersome puncturing procedures (electroporation) that could result in cell death. Thus, the yeast can be modified to glow fluorescent green when they contact a harmful chemical or biotoxin. Because such nanostructures are cheap, extremely light and small, and easy to make, they could conceivably be attached to insects and their emanations read remotely by beams from unmanned aircraft. The method also makes it easier to prepare individual cells for laboratory investigation under microscopes. “Normally, to visually examine a cell, researchers use time-consuming fixation or solvent extraction techniques,” says Brinker. “We can spin-coat a cell in seconds, pop the cell into an electron microscope, and it doesn’t shrink when air is evacuated from the microscope chamber.” The cell can be immediately imaged, says Brinker. “Spin-coating” refers to deposition of the cell slurry on a spinning substrate until dry. From their comfortable “home,” the empowered cells can also direct their own landscaping. They can organize metallic nanocrystals added at the cell surface. These may enhance the sensitivity of Raman spectroscopy for monitoring the onset of infection or the course of therapy. The cells also localize proteins at the cellular interface. Assistant Professor Graham Timmins of UNM’s College of Pharmacy notes that the encapsulated cells’ unusual longevity may serve as a model for persistent infections such as tuberculosis, which has a long latency period. TB bacteria can remain dormant in vivo for 30-50 years and then re-activate to cause disease. Presently the state of the dormant bacterium is not understood. Timmins and Brinker are discussing further experiments to validate the model.
Finally, building the cells into a coating with a high enough density might elicit from them a defensive, multi-cellular signal of an unpleasant nature that discourages unwanted biofilm formation on the coated surface — important for avoiding infections that could be carried by implants and catheters. The cell’s ability to sense and respond to its environment is what forms these unique nanostructures, says Brinker. During spin-coating, the cells react to the increasing concentrations of materials in the developing silica nanostructure by expelling water and developing a gradient in the local pH. This in turn influences lipid organization and the form of the silica nanostructure, reduces stress, and ultimately improves the living conditions of the ensconced cellular tenants. This story was first posted on 21 July 2006.
<urn:uuid:a98cf1d4-7d9e-4b67-ae2b-df3a282f46e1>
2.890625
1,593
Comment Section
Science & Tech.
26.068896
The scientific name for fleas is Siphonaptera, which comes from the Greek words 'siphon', meaning pipe, and 'aptera', meaning wingless, relating to the sucking mouthparts and wingless condition of fleas. Fleas are one of the best-known groups of parasitic insects due to their notoriety as pests and their medical and veterinary importance as vectors of disease, such as the plague, myxomatosis and murine typhus. They are holometabolous insects, meaning that they go through a metamorphosis of egg, larva, pupa and adult during their development. The eggs are laid in the host nest, where the larvae develop as scavengers, feeding on detritus and flea faeces. The resulting pupae can remain dormant for some time, emerging as adults in response to vibration, heat and carbon dioxide, when a suitable host becomes available. Adults are obligate ectoparasites, with both males and females feeding on the blood of their mammalian/avian hosts. If the host dies, the decrease in body temperature prompts the adult flea to move onto a new host.
<urn:uuid:ce73da50-6f56-482e-b7aa-2c21146745d8>
3.75
238
Knowledge Article
Science & Tech.
31.403571
OBSERVATION OF MICROSCOPIC MAGNETISM
This chapter is devoted to the observation of microscopic magnetism, working out in some detail the most commonly used magnetic techniques. The basic aspects of these techniques are only briefly recalled, with pointers to relevant textbooks that provide a sound background. The goal of the chapter, however, is to equip the reader to follow the current literature with an acceptable level of understanding. The magnetic techniques include microSQUID and micro Hall probe techniques and torque magnetometry; specific heat measurements, including equilibrium and out-of-equilibrium measurements; and magnetic resonance techniques, including EPR, NMR, and muon spin resonance. Mention of neutron techniques, including polarized neutron diffraction and inelastic neutron scattering, concludes the chapter.
<urn:uuid:3fbf84e5-19d7-4a76-ac3b-461a9cae036c>
2.96875
224
Truncated
Science & Tech.
21.233018
"Storm" is a popular name for two different seasonal events. One begins with a clash of warm and cold air and results in the typical winter storm. The second is born when a hurricane leaves the tropics and transforms itself into a super-powerful version of a winter storm. The common winter storm is born when a cool mass of air, dropping down from the Arctic, clashes with a warmer mass of air. The area where these air masses meet is called a front, named after the battlegrounds of World War I, because it is usually a place of violent weather commonly associated with fierce wind, rain, snow, and hail. As these fronts move across the mid-latitudes of the United States, they produce far-ranging winter storms. A hurricane may grow to resemble a powerful winter storm, but begins life as a tropical storm. Hurricanes tend to move along specific seasonal paths, known as storm tracks, which relate to long-standing patterns in atmospheric circulation. One common storm track begins in the warm waters of the tropics and then courses up along the eastern seaboard of the United States. Tropical storms tend to be small and violent, spinning in a counter-clockwise motion, and measuring about 60 to 300 miles in diameter, with powerful wind gusts. The faster they spin, measured by the speed of the gusting air, the more violent and dangerous they become. The most violent of these storms, when the wind speeds surpass 74 miles per hour, are identified by scientists as hurricanes. They are called typhoons in the North Pacific and the South China Sea, and cyclones in the Indian Ocean. When hurricanes leave the warm tropical waters that spawned them, they become known as extratropical storms. They also tend to lose their cyclonic spinning action, and spread into enormously large storms ranging from 620 to 2,500 miles across, with wind gusts that can reach 50 miles per hour. In March 1993, weather satellite photos showed a large mass of cold air moving across North America, down from the North Pole. This cold mass of air eventually collided with a warmer mass in the region above the Gulf of Mexico. A line of powerful thunderstorms formed along the front, drawing energy from the temperature differentials. The size of the thunderstorms alarmed many of the meteorologists watching the developing storm, and they began issuing storm alerts as they watched the thunderclouds combine into an enormous spinning winter storm. The storm moved onto land during the early hours of Friday morning, March 12, killing dozens of people and devastating parts of the Florida coast. As the storm approached land, high winds and low pressures carried the sea along with it. High winds and low pressures can raise the water level in the ocean. This effect is known as storm surge, and if it gets trapped against a cove or bay, it can raise the water 10 to 20 feet higher than normal. Large waves, some up to 40 feet high, can ride on top of the surge, and come crashing over the shores and deep inland. The storm then began to climb along the East Coast. As the storm moved across the eastern seaboard, torrential rains turned into heavy snows falling from Alabama to New York, virtually paralyzing the eastern third of the country. The storm eventually spread and covered more than 2,000 miles. Strong winds, created by rapidly dropping pressures, blew up and down the East Coast. Local authorities were totally unprepared for the intensity of the assault. The interstate highways became impassable and millions of people lost electrical power.
New York City was brought to a standstill. A foot of snow fell from Alabama to Maine, and freezing temperatures set new records across the region. The final accounting included 243 deaths, and about two billion dollars in damage. The storm had forced the closure of all the airports in the eastern United States, and created great chaos. Nearly 100 million people in 26 states had their lives affected in ways both great and small by the Storm of the Century. -- By Micah Fink
<urn:uuid:656b108b-a6bd-4266-b96f-0fc5b46ae60f>
3.921875
886
Knowledge Article
Science & Tech.
53.123141
23. May 2013 | Energy and Environment | Environment
Household waste used to end up untreated in landfills, and the effects of this practice are well-known: these waste disposal sites were quite often ecological "death zones". With the incineration of municipal waste, there was some mitigation of this problem: despite the overall increase in quantities of waste, the areas claimed by landfill have been limited in recent decades thanks to waste recycling and incineration. However, waste incineration remains far from a panacea. Some combustion products that are already present in the burnt materials, or that arise during the combustion process itself, are harmful to human health and the environment, and some of them still find their way out of waste incineration plants and into landfill sites as their final destination.
17. May 2013 | Matter and Material | Large Scale Facilities | Research Using Muons
Muons, unstable elementary particles, offer researchers important insights into the structure of matter. They provide information about processes in modern materials, about the properties of elementary particles, and about the fundamental structures of the physical world. Many muon experiments are possible only at the Paul Scherrer Institute, because exceptionally intense muon beams are available here. This news release is only available in German.
10. May 2013 | Energy and Environment | Environment
Megacities are often perceived by the public to be major sources of air pollution, which affect their surroundings as well. However, recent studies show that the environmental credentials of cities with over one million inhabitants are not so bad after all. An international team of researchers, including scientists from the Paul Scherrer Institute (PSI), has now confirmed, on the basis of aerosol measurements carried out in Paris, that so-called post-industrial cities affect the air quality of their immediate surroundings far less than might be thought.
7. May 2013 | SwissFEL | SwissFEL Experiments | Large Scale Facilities
The X-ray laser SwissFEL will provide researchers with novel experimental opportunities for gaining insights into a large variety of materials and processes. But how do we identify which scientists will benefit most from the facility, and in what way the facility should be configured to best meet their needs? Bruce Patterson, the SwissFEL’s idea-collector, explains how this search is done.
5. May 2013 | Media Releases | Matter and Material | Materials Research | Research Using Synchrotron Light
Scientists use nano-rods to investigate how matter assembles. To make the magnetic interactions between the atoms visible, scientists at the Paul Scherrer Institute PSI have developed a special model system. It is so big that it can be easily observed under an X-ray microscope, and mimics the tiniest movements in Nature. The model: rings made from six nanoscale magnetic rods, whose north and south poles attract each other. At room temperature, the magnetisation direction of each of these tiny rods varies spontaneously. Scientists were able to observe the magnetic interactions between these active rods in real time. These research results were published on May 5 in the journal “Nature Physics”.
For parties of 12 persons and over, we offer a free-of-charge tour through our large-scale facilities, and for students we have founded the student laboratory iLab.
School classes can visit us free of charge for a day, carry out experiments in the laboratory, and then see at the large-scale facilities how the scientific principle studied at iLab is applied in routine research.
<urn:uuid:314aebff-d775-4cd0-b039-7dcc5e01e369>
3
767
Content Listing
Science & Tech.
27.384598
Last week when we were making our flubber and oobleck we noticed something a bit odd in our photo-documentation of the experiments. The photos of oobleck and flubber all had a slight blue cast to them. That got us talking about colors in general, which led to that classic question -- what makes the sky blue? Take a look at this picture of flubber, which was made with clear water, clear crystals, and white glue. Where is the blue coming from? It isn't just the color of the counter, because look how much Ian's t-shirt is reflecting orange at the top of the photo. Take a look at this still I captured from the video on last week's post: The material in the bowl isn't as blue as the sky, but look at the bluish tinge in the right of the photo. What was going on? Beckett and I decided to investigate a bit further. Remember I described both flubber and oobleck as colloids? A colloid is a substance microscopically dispersed evenly throughout another substance. When light hits a colloid, the shorter wavelengths of light are scattered while many of the longer wavelengths pass through. This happens all the time in daily life -- radio waves pass through walls and trees and metal car roofs, but light waves are reflected. In most colloids, the suspended particles are just large enough to scatter the shorter visible light waves -- which we see as the blue cast coming off the white. Most white and translucent colloids exhibit this effect, known as the Tyndall Effect. The Tyndall Effect, also known as Tyndall Scattering, was named after John Tyndall, the 19th century physicist who first described it. You can see the Tyndall effect by pouring yourself a glass of milk -- milk is an emulsified colloid composed of liquid butterfat suspended in water. In colloids with a strong color, blue light is scattered, but the color of the suspended particles dominates the faint blue color of the Tyndall effect. You can also see the Tyndall Effect in a location that might surprise you: blue eyes! Blue eyes are caused when the gene that normally supplies melanin to the eye is suppressed, leaving an iris with little pigmentation. Since the layer of liquid over the iris is an (almost colorless) colloid, the particles in the liquid scatter the shorter wavelengths of light while letting the longer wavelengths pass through. When you see blue eyes, you are not seeing blue pigmentation, but a lack of melanin pigmentation and light back-scattered by the Tyndall Effect. So, is this what makes the sky blue? Not exactly, but close. First of all, the earth's atmosphere is not a colloid but gaseous. Gases are not colloids because they mix and disperse evenly. The 78 percent of atmospheric molecules that are nitrogen gas molecules mix evenly with the 21 percent that are oxygen gas molecules and with the 1 percent that are other gases -- one kind of molecule is not suspended in the other. But the gas molecules in the atmosphere do scatter light randomly. This is called the Rayleigh Effect or Rayleigh scattering, named after John William Strutt, the 3rd Baron Rayleigh, a near contemporary of John Tyndall. In space, the sun looks like a large white ball -- just like most of the stars we see at night. When the white light of the sun hits the earth's atmosphere, a portion of the light is scattered. Unlike Tyndall scattering (where the particles suspended in the colloid are large enough to intercept the shorter wavelengths), in Rayleigh scattering all the wavelengths (both long and short) are scattered in the atmosphere -- though the shorter wavelengths are scattered much more strongly.
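How much more strongly? In the Rayleigh regime the scattered intensity grows as the inverse fourth power of wavelength; the formula is standard, and the 450 nm and 650 nm values below are just representative choices for blue and red light:

I_{\text{scattered}} \propto \frac{1}{\lambda^4}, \qquad \frac{I_{450\ \text{nm}}}{I_{650\ \text{nm}}} = \left(\frac{650}{450}\right)^4 \approx 4.4

So blue sunlight gets scattered around the sky roughly four to five times more effectively than red.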
So what we see is the blend of all of the wavelengths of light bouncing around. Rayleigh scattering varies according to two things: the wavelength of the light and the size of the particles. In our atmosphere, the shorter wavelengths get scattered more, so we see more blue. You can also see Tyndall Scattering in the atmosphere -- cars (with oil problems) and some motorcycles give off enough particles to form a colloid known as a solid aerosol. If you look directly at the sun (which you should only do with proper eye protection and only for very, very brief periods of a second or two) the sun looks white or yellow in the middle of the day, and looks orange or red toward sunset. At the horizon, the sunlight that travels to you passes through much more atmosphere and almost all of the shorter wavelengths are scattered away, allowing only the red and orange wavelengths to reach your eyes. Beckett's Aunt Leota (also a science geek) took this picture of sunset this week in New Mexico with the shadow of the Sandia Mountains giving a cool effect: In this photo you can see both blue and red light scattered by the Rayleigh Effect. In the center of the photo the Sandia Mountains cast a shadow over the landscape, but the light in the upper atmosphere is scattered and blue is still the dominant color. The portion of the photo that shows red is where the sunlight is still traveling through the atmosphere, scattering more, therefore scattering away more of the blue wavelengths and allowing only the red and orange wavelengths to pass. The final factor that influences the color of the sky is the non-gaseous particles suspended in the atmosphere. Water droplets form clouds and can have a variety of colors. And water droplets at just the right density and angle can form rainbows. Solid matter, from human pollution as well as volcanoes and forest fires, also contributes to the variety of sky colors. Tell us about your favorite sky color -- summer sunsets or early morning runs, or high noon on the 4th of July. We'd love to hear what makes the sky perfect for you!
<urn:uuid:e98531b3-368c-43d1-aac7-9d2ac0a5f2df>
3.53125
1,204
Personal Blog
Science & Tech.
53.433959
Beetle leg: Spike Walker, a retired biology lecturer based in Penkridge, England, was striving for visual abstraction when he captured a detail of a Dytiscus water beetle's front leg. Walker used a type of darkfield microscopy in which the object is shot against a blue screen. The blue light shines through the orange of the leg's exoskeleton. The view, spanning about 1.8 millimeters in width, shows hair (left and bottom) and a suction cup (large disk on right). The males use these suction cups to hold on to females during mating. The image is patched together from 44 shots, each having a different focal plane. Image: Spike Walker
Nature looks fundamentally different depending on scale. This diversity is especially striking in the world of biology, where matter assembles itself in constantly renewing configurations, offering our eyes—aided by scientific instruments—limitless perspectives. Thus, we can find beauty in places we did not suspect—inside a flower from a roadside weed, in the anatomical details of a flea or under a mushroom growing on a dead tree. Some people explore microscopic worlds for scientific reasons; others, such as Laurie Knight, for the sheer adventure. “The reason I do this,” he says, “is that I get to see things that a lot of people can’t really see.” Fortunately, Knight and many others also like to share some of the vistas they discover. Every year scientists and hobbyists alike submit their microscopy art to the Olympus BioScapes International Digital Imaging Competition. These are images whose purpose is, in the words of another serious hobbyist, Edwin K. Lee, “to capture the combined essence of science and art.” And, in turn, every year we at Scientific American like to share with readers some of our favorite shots from that competition. Enjoy. This article was originally published with the title “Life Unseen.”
<urn:uuid:9b23dd6c-87f9-406b-aaa8-52604c825f5b>
2.78125
415
Truncated
Science & Tech.
42.728237
Summary: Since 1977, the French space agency has been helping civilian and military authorities understand the precise nature of Unidentified Aerospace Phenomena (PAN, from the French phénomènes aérospatiaux non identifiés). The SEPRA database is comprised of more than 2200 different cases, with some 6000 eyewitness accounts and approximately 100 sightings from aircraft. The sky, on a summer night, is a wonder to behold, a spectacle that leads one to dream of distant worlds: the Moon, the planets, the thousands of stars and galaxies which one can more or less easily identify. The tranquillity of this celestial heaven is sometimes disturbed by a shooting star, the twinkling lights of an airplane, or the silent passing of a satellite. It is not infrequent, however, that one is surprised by something strange, and initially unrecognisable. But before one’s imagination takes hold and prompts one to think of extra-terrestrials, it would be better to first tell the French space agency. Since 1977, the space agency has been helping civilian and military authorities understand the precise nature of Unidentified Aerospace Phenomena (PAN). The unit involved is the Rare Aerospace Phenomena Study Department (SEPRA) based at the CNES technical centre in Toulouse. Since 1977, the department has developed a precise analytical methodology and today has accumulated a considerable database. "We are involved in the framework of well-defined procedures with, for example, the police or civil aviation authorities," explains Roland Ivarnez, head of the Satellite Operations Directorate at CNES. "We analyse the official testimonials that are referred to us and respond to the first questions that arise after eye-witness accounts. We are concerned with facts and our approach is rigorously scientific." Most observations, whether seen from the ground or in the air, are of phenomena viewed at a distance. "We try to establish if there is any correlation with identifiable objects, such as the planets, or the Moon, according to celestial maps. We check whether any high-altitude balloons have been released, check the positions of orbiting satellites, and consult Space Command in the United States, which tracks the re-entry of satellites or rocket stages in the atmosphere," details Jean-Jacques Velasco, in charge of SEPRA. "Meteorological conditions can alter the perception of an object. People in all good faith can be misled by some kinds of lighting, for instance laser beams used at night-clubs." The SEPRA database is comprised of more than 2200 different cases, with some 6000 eyewitness accounts and approximately 100 sightings from aircraft. "With the observations by pilots or air controllers we have the advantage of receiving a first analysis, as a result of their professional experience," explains Jean-Jacques Velasco. A statistical review of all the cases studied since 1974 shows that about 20% of them have been immediately understood. The reasons are often banal and earthly: meteors, satellite or rocket re-entries, sounding-balloons or celestial objects that were seen in particular circumstances. Most of the remaining eyewitness accounts lack the information necessary for a thorough investigation. SEPRA then collects further details from those concerned and may solicit the opinion of specialists in other fields of research. The precise nature of the phenomena can then usually be established in practically all these cases. But there does remain a small percentage (4-5%) where SEPRA has been unable to offer an explanation, given the state of present understanding.
The enquiries in these rare cases have confirmed the physical reality of certain phenomena which have been impossible to analyse. The Trans-en-Provence affair in 1981 (an oval-shaped phenomenon which moved silently in the air and which left traces on the ground, including important biological changes), and the case of the Air France AF-3532 flight on 28 January 1994 (where something seen by the crew was correlated to a radar observation) are two examples of unsolved enquiries. Such case studies therefore remain open. "The fact that we do not understand should not lead us to speculate. As a scientific organisation, it is not our role to take sides in such unexplained cases, even less to enter into the debate over the existence or not of extra-terrestrials," stresses Roland Ivarnez. The subject is sensitive, yet CNES refutes the arguments of those who criticise its work in this field. "It is entirely normal that, as a space agency, we should be asked for our assistance; it is part of our duty as a public service," says Roland Ivarnez. "We devote a certain limited effort, but the manpower and budget are adequate given the relative importance of this subject compared to CNES’s principal areas of activity. It is true however that because of public and media interest in such questions, we are called upon far more than the true importance of such phenomena would warrant." For his part, Jean-Jacques Velasco recognises the difficulty of the job: "Some people consider me as the authority who will systematically decree that all such phenomena are but the fruit of the imagination, or that they are all understood after analysis. For others, I am the person who will comfort them in their beliefs that little green men do exist." Given such differing views, CNES can only reaffirm that it is open-minded but that its approach in this field is strictly scientific.
<urn:uuid:ae3acb5f-bdca-44af-9f09-79a9a2bfb5bd>
3
1,099
Knowledge Article
Science & Tech.
29.391393
XftPattern holds a set of names with associated value lists; each name refers to a property of a font. XftPatterns are used as inputs to the matching code as well as holding information about specific fonts.
XftFont contains general font metrics and a pointer to either the core XFontStruct data or a structure holding FreeType and X Render Extension data.
XftFontStruct contains information about FreeType fonts used with the X Render Extension.
XftFontSet contains a list of XftPatterns. Internally Xft uses this data structure to hold sets of fonts. Externally, Xft returns the results of listing fonts in this format.
XftObjectSet holds a set of names and is used to specify which fields from fonts are placed in the list of returned patterns when listing fonts.
XftDraw is an opaque object which holds information used to render to an X drawable using either core protocol or the X Rendering extension.

XftFont *XftFontOpen (Display *dpy, int screen, ...);

XftFontOpen takes a list of pattern elements of the form (field, type, value) terminated with a 0, matches that pattern against the available fonts and opens the matching font.

    font = XftFontOpen (dpy, scr,
                        XFT_FAMILY, XftTypeString, "charter",
                        XFT_SIZE, XftTypeDouble, 12.0,
                        0);

This opens the charter font at 12 points. The point size is automatically converted to the correct pixel size based on the resolution of the monitor.

void XftTextExtents8 (Display *dpy, XftFont *font, unsigned char *string, int len, XGlyphInfo *extents);

XftTextExtents8 computes the pixel extents of "string" when drawn with "font".

XftDraw *XftDrawCreate (Display *dpy, Drawable drawable, Visual *visual, Colormap colormap);

XftDrawCreate creates a structure that can be used to render text and rectangles to the screen.

void XftDrawString8 (XftDraw *d, XRenderColor *color, XftFont *font, int x, int y, unsigned char *string, int len);

XftDrawString8 draws "string" using "font" in "color" at "x, y".

void XftDrawRect (XftDraw *d, XRenderColor *color, int x, int y, unsigned int width, unsigned int height);

XftDrawRect fills a solid rectangle in the specified color.

    config : "dir" STRING
           | "include" STRING
           | "includeif" STRING
           | "match" tests "edit" edits
           ;
    test   : qual FIELD-NAME COMPARE CONSTANT ;
    qual   : "any" | "all" ;
    edit   : FIELD-NAME ASSIGN expr SEMI ;

STRINGs are double-quote delimited. FIELD-NAMEs are identifiers, ASSIGN is one of "=", "+=" or "=+". expr can contain the usual arithmetic operators and can include FIELD-NAMEs.
"dir" adds a directory to the list of places Xft will look for fonts. There is no particular order implied by the list; Xft treats all fonts about the same.
"include" and "includeif" cause Xft to load more configuration parameters from the indicated file. "includeif" doesn't elicit a complaint if the file doesn't exist. If the file name begins with a '~' character, it refers to a path relative to the home directory of the user.
If the tests in a "match" statement all match a user-specified pattern, the pattern will be edited with the specified instructions. Where ASSIGN is "=", the matching value in the pattern will be replaced by the given expression. "+="/"=+" will prepend/append a new value to the list of values for the indicated field.
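A minimal usage sketch tying these calls together, following the signatures quoted above (this matches the older Xft interface where the color is an XRenderColor; later Xft versions take an XftColor allocated with XftColorAllocValue instead, and all error handling and the event loop are omitted here):

    #include <X11/Xlib.h>
    #include <X11/Xft/Xft.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);              /* connect to the X server */
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         10, 10, 200, 80, 1,
                                         BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        XMapWindow(dpy, win);
        XSync(dpy, False);

        /* open a 12-point font by pattern, as in the example above */
        XftFont *font = XftFontOpen(dpy, scr,
                                    XFT_FAMILY, XftTypeString, "charter",
                                    XFT_SIZE, XftTypeDouble, 12.0,
                                    0);

        /* wrap the window in an XftDraw so we can render into it */
        XftDraw *draw = XftDrawCreate(dpy, win,
                                      DefaultVisual(dpy, scr),
                                      DefaultColormap(dpy, scr));

        /* opaque black, in the 16-bit-per-channel XRenderColor format */
        XRenderColor black = { 0x0000, 0x0000, 0x0000, 0xffff };

        XftDrawString8(draw, &black, font, 20, 40,
                       (unsigned char *) "Hello, Xft", 10);
        XSync(dpy, False);
        /* ...a real client would now run its event loop... */
        return 0;
    }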
<urn:uuid:2a15df4e-3efd-4b07-96c3-f04387bdae2e>
2.8125
846
Documentation
Software Dev.
53.117789
Definition: Partition ("factor") the pattern, x, into left, xl, and right, xr, parts in such a way as to optimize searching. Compare xr left to right then, if it matches, compare xl right to left. Generalization (I am a kind of ...) string matching algorithm. See also Knuth-Morris-Pratt algorithm, Boyer-Moore. Note: [It] "can be viewed as an intermediate between the classical algorithms of Knuth, Morris, and Pratt on the one hand and Boyer and Moore, on the other hand." Maxime Crochemore and Dominique Perrin, Two-way string-matching, Journal of the ACM, 38(3):651-675, July 1991. If you have suggestions, corrections, or comments, please get in touch with Paul E. Black. Entry modified 2 February 2005. Cite this as: Paul E. Black, "Two Way algorithm", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 2 February 2005. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/twoWay.html
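A sketch of just the comparison order the definition describes. This is not the full Crochemore-Perrin algorithm: a real implementation derives the split point from a critical factorization and uses the pattern's period to skip ahead after a match or mismatch, whereas this illustration splits at the midpoint and shifts by one, giving O(n*m) worst-case time.

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Search for pat in txt, scanning the right half of the pattern
       left to right, then the left half right to left.  Returns the
       first match position, or -1 if there is none. */
    static long two_way_order_search(const char *txt, const char *pat)
    {
        size_t n = strlen(txt);
        size_t m = strlen(pat);
        size_t l = m / 2;   /* naive split: xl = pat[0..l-1], xr = pat[l..m-1] */

        if (m == 0)
            return 0;
        if (m > n)
            return -1;

        for (size_t pos = 0; pos + m <= n; pos++) {
            size_t i;

            /* scan the right part xr, left to right */
            for (i = l; i < m && txt[pos + i] == pat[i]; i++)
                ;
            if (i < m)
                continue;   /* mismatch in xr; the real algorithm shifts further here */

            /* xr matched, so scan the left part xl, right to left */
            for (i = l; i > 0 && txt[pos + i - 1] == pat[i - 1]; i--)
                ;
            if (i == 0)
                return (long) pos;
        }
        return -1;
    }

    int main(void)
    {
        printf("%ld\n", two_way_order_search("dictionary of algorithms", "algo")); /* 14 */
        return 0;
    }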
<urn:uuid:77c7d753-d239-4385-8f84-c6bf0d18b271>
3.078125
283
Knowledge Article
Software Dev.
76.457082
In the constellation Virgo
8.5 million times the mass of the Sun
Diameter about three-quarters of the distance from the Sun to Mercury
The supermassive black holes in the hearts of many galaxies form what are known as active galactic nuclei (AGN). The accretion disks around these black holes are especially hot and bright, so they produce copious amounts of energy at many different wavelengths. Some AGN, however, are muffled. In these cases, the galaxy's nucleus is surrounded by a broad, thick doughnut of gas, known as a torus, that is far outside the accretion disk. If this torus is aligned edge-on as seen from Earth, it absorbs much of the AGN's energy, so the galaxy looks less "active" than it should. An example is NGC 4388, a spiral galaxy that is a member of the Virgo Cluster, a collection of hundreds of galaxies centered in the constellation Virgo. Observations by space-based X-ray and gamma-ray telescopes confirm that a thick torus encircles the galaxy's nucleus, absorbing light from the black hole's accretion disk. Water molecules in a disk that lies between the accretion disk and the torus are zapped by energy from nearby stars, boosting their energy level and causing them to emit microwaves. High concentrations of water form microwave hotspots known as masers. If Earth lies along the path of a maser's beam, radio telescopes sensitive to microwaves can detect them. Precise tracking of these masers reveals their motion around the center of the galaxy. By applying the laws of orbital motion, astronomers can determine the precise mass of the central object. Using measurements made from 2005 to 2009, a team of astronomers measured the masses of the supermassive black holes in seven galaxies, including NGC 4388. The measurements indicate that the black hole is about 8.5 million times as massive as the Sun, which is twice as massive as the black hole at the center of the Milky Way. However, the research team notes that it has tracked only five masers so far, so its conclusions "should be used with some caution until better data are obtained." This document was last modified: March 14, 2012.
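The "laws of orbital motion" step is essentially Kepler applied to the masers: for a maser on a circular orbit of radius r at speed v around the black hole, equating gravity with centripetal acceleration gives the enclosed mass. The numbers below are purely illustrative choices, not the measured values for NGC 4388:

\frac{GM}{r^2} = \frac{v^2}{r} \quad\Rightarrow\quad M = \frac{v^2 r}{G}

For instance, v = 500 km/s at r = 0.15 parsec gives M = (5\times 10^5\ \text{m/s})^2 \times (4.6\times 10^{15}\ \text{m}) / (6.67\times 10^{-11}\ \text{m}^3\,\text{kg}^{-1}\,\text{s}^{-2}) \approx 1.7\times 10^{37}\ \text{kg}, which is roughly 8.7 million solar masses, the same order as the result quoted above.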
<urn:uuid:110cf607-97e5-4983-8021-b765c7c6b0f1>
4.21875
488
Knowledge Article
Science & Tech.
51.545203
Global environmental challenges from The Great Debate UK:

-Lord Julian Hunt is a visiting professor at Delft University, and former Director-General of the UK Met Office. The opinions expressed are his own.-

The unusually large rainfall from this year's monsoon has caused the most catastrophic flooding in Pakistan for 80 years, with the U.N. estimating that around one fifth of the country is underwater. This is truly a crisis of the first order.

Heavy monsoon precipitation has increased in frequency in Pakistan and Western India in recent years. For instance, in July 2005, Mumbai was deluged by almost 950 mm (37 inches) of rain in just one day, and more than 1,000 people were killed in floods in the state of Maharashtra. Last year, deadly flash floods hit Northwestern Pakistan, and Karachi was also flooded.

It is my clear view that this trend is being fueled both by global warming (which also means extremes of rainfall are a growing worldwide trend), and potentially by any intensification of the El Niño/La Niña cycle.
<urn:uuid:ffa01a0b-38ce-4505-ad48-4af00fb5fb14>
3.046875
225
Personal Blog
Science & Tech.
51.514329
Concept 22: DNA words are three letters long.

The genetic code had to be a "language" — using the DNA alphabet of A, T, C, and G — that produced enough DNA "words" to specify each of the 20 known amino acids. Simple math showed that only 16 words are possible from a two-letter combination, but a three-letter code produces 64 words. Operating on the principle that the simplest solution is often correct, researchers assumed a three-letter code called a codon. Research teams at the University of British Columbia and the National Institutes of Health laboriously synthesized different RNA molecules, each a long strand composed of a single repeated codon. Then, each type of synthetic RNA was added to a cell-free translation system containing ribosomes, transfer RNAs, and amino acids. As predicted, each type of synthetic RNA produced a polypeptide chain composed of repeated units of a single amino acid. Several codons are "stop" signals and many amino acids are specified by several different codons, accounting for all 64 three-letter combinations.
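The combinatorics is easy to verify by brute force. A throwaway sketch in C that enumerates every three-letter word over the four-letter alphabet used in the text:

    #include <stdio.h>

    int main(void)
    {
        const char bases[] = "ATCG";   /* the DNA alphabet from the text */
        int count = 0;

        /* Three-letter words: 4 * 4 * 4 = 64 (two letters give only 4 * 4 = 16). */
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++) {
                    printf("%c%c%c ", bases[i], bases[j], bases[k]);
                    count++;
                }
        printf("\n%d three-letter words\n", count);   /* prints 64 */
        return 0;
    }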
<urn:uuid:9a55438d-1729-4e3b-8220-82dabb08382c>
3.765625
220
Knowledge Article
Science & Tech.
35.735336
Dolichopterys Kosterm., Recueil Trav. Bot. Néerl. 32: 279. 1935.—Type: D. surinamensis Kosterm. [L. surinamensis (Kosterm.) Sandwith]. Woody vines or shrubs (or small trees?); stipules absent or vestigial, 0.2–0.5 mm long, borne on adaxial edges of petiole 1–2 mm above base; leaves opposite or subopposite; lamina densely and persistently sericeous below, mostly eglandular, occasionally biglandular on margin at base. Inflorescence paniculate, rarely simple, the flowers ultimately borne in pseudoracemes; bracteoles eglandular, the bracteoles borne at apex of peduncle when peduncle is developed; pedicels straight or slightly circinate in bud. Sepals mostly bearing a single, very large, circular or transversely elliptical, radially lineate abaxial gland on the lateral 4, the anterior eglandular (all sepals eglandular in some populations of L. inpana); corolla bilaterally symmetrical; petals bright yellow, glabrous or only very sparsely sericeous abaxially; stamens with filaments glabrous, longer opposite sepals, shorter opposite petals; anthers with the connective abaxially broad and swollen; ovary with the carpels distinct; styles inserted near apex of carpels, the anterior shorter than the posterior 2, the stigmas large, borne on internal angle of apex to nearly terminal. Samaras separating from a short pyramidal torus; samara bearing a relatively short, inequilaterally trapezoidal or flabellate dorsal wing with its greatest width toward base of nut and 2 long, narrow, forward-pointing, parallel-sided lateral wings 3 or more times as long as wide (except L. splendens, which has a very short dorsal crest and the lateral wings reduced to ridges or lost); intermediate winglets absent. Chromosome number unknown. Seven species in diverse habitats in South America, south to about 23°S. [map] Lophopterys is easily recognized in most cases by the combination of bearing only one large abaxial gland on each of the four lateral sepals and having two long narrow lateral wings on the samara accompanied by a relatively large trapezoidal dorsal wing. It does not much resemble Hiraea, having a very different inflorescence and the stipules vestigial or lost, but its placement in this clade is supported by the marginal leaf glands (if any) and the carpels distinct in the ovary. The two elongated lateral wings of the samara suggest the genus Tetrapterys, but other morphological characters do not support such a relationship, nor do DNA sequences. See additional discussion in the revision cited below. Etymology: The name Lophopterys comes from the Greek words for crest (lophos) and wing (pteron), referring to the mericarp of L. splendens, which lacks lateral wings and has only a reduced, crestlike dorsal wing.
<urn:uuid:67cc52ff-cff6-4e4e-848e-c97289d9a784>
2.75
667
Knowledge Article
Science & Tech.
28.605081
Learn how biologists conduct freshwater surveys and use these data for ongoing studies and long-term monitoring of freshwater communities. Biologists research, monitor and record the long-term history and trends of Florida's freshwater fish populations. Biologists often arouse the curiosity of onlookers while using this freshwater fish sampling technique. Biologists use fyke nets to monitor shallow-water fish communities in Florida's lakes. Biologists use gill nets to monitor offshore fish communities in Florida's lakes.
<urn:uuid:c8907542-a3d8-4109-a282-f78fd23f8055>
2.765625
106
Content Listing
Science & Tech.
25.054846
by James C. McLane III

This article proposes an unusual way to land objects from orbit and probes returning from deep space. I won't dwell on details best left for future study, but will suggest enough possibilities to encourage more study of the concept. The use of airbags to help spacecraft land on Mars is a recent example of just how important it is to consider unconventional recovery options like the one described here. The goal is to reduce or eliminate the usual weight penalty and reliability issues associated with parachutes, touchdown cushioning rockets, water flotation devices, and other complex paraphernalia normally required to softly land a space vehicle on the Earth. New concepts deserve a memorable name, so I call this recovery system "Pit Stop" for reasons that will soon be obvious.

In this scenario, reentry of the spacecraft would be conventional until it slows and descends in the atmosphere to an altitude of perhaps 15,000 meters. At that height the landing capsule might separate from the disposable heat shield, which then drops away. The heat shield could remain attached, but it might present extra thermal problems for the recovery facility on the ground. Aerodynamic control surfaces would then pitch the vehicle over into a vertical dive where it would reach a terminal velocity of perhaps 100 meters per second.

Signals from stationary Global Positioning System (GPS) transmitters on the ground near the landing site, in conjunction with the orbital GPS system, would help direct the capsule's final descent. Like a "smart bomb" the vehicle would aim for an exact spot on the earth's surface. Ground-based vertical wind profilers would provide real-time meteorological updates for the guidance and control system. Other terminal guidance aids, for example LIDAR (light detection and ranging) or active laser tracking, might also be used to achieve great accuracy.

In this "Pit Stop" concept, the landing capsule, carefully steered in its plunge to earth, would aim for the opening of a deep vertical shaft set into the ground. The capsule would dive into this hole while still falling at terminal velocity. The recovery shaft would extend straight down into the earth for hundreds of meters and be closed off and pressure-tight at its base. Detailed shaft shape and depth would affect the desired deceleration rate. For recovery of an object that could withstand high G-loads, the shaft might be only a few hundred meters deep. For a low G-load recovery, a shaft over a thousand meters deep would probably be required. The landing capsule would fit loosely in the entrance of the pit, but clearance would tighten with depth. As the capsule descends, air would flow around the vehicle through the narrowing gap between it and the shaft wall. Appropriate vehicle shaping might encourage it to seek a stable position in the center of the shaft, but it's also possible it might be pulled toward a wall by aerodynamic effects. Since scraping against the shaft wall is possible, abrasion could be addressed by coating the shaft with a film of water or other material. As the capsule descends, it would act as a piston, compressing the air in front of its direction of travel.
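A crude way to see how this trapped air column would behave is to integrate Newton's second law against an adiabatic pressure law. The sketch below does that in C; the capsule mass, shaft cross-section, and air-column depth are invented placeholders (the article deliberately leaves such details for future study), leakage past the capsule is ignored, and the loop stops at the first instant the capsule is brought to rest. Compile with -lm:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double g     = 9.81;
        const double gam   = 1.4;        /* adiabatic index of air */
        const double P0    = 101325.0;   /* ambient pressure, Pa */
        const double m     = 5000.0;     /* capsule mass, kg (assumed) */
        const double A     = 7.0;        /* shaft cross-section, m^2 (assumed) */
        const double L     = 500.0;      /* depth of trapped air column, m (assumed) */
        double x = 0.0, v = 100.0;       /* depth into shaft; terminal entry speed */
        double dt = 1e-4, peak = 0.0;

        while (v > 0.0 && x < L) {
            /* Sealed column compressed from L to (L - x): P * V^gam is constant. */
            double P = P0 * pow(L / (L - x), gam);
            double a = g - (P - P0) * A / m;   /* net acceleration, positive down */
            v += a * dt;
            x += v * dt;
            if (-a > peak) peak = -a;
        }
        /* With these placeholder inputs the capsule stops near 160 m depth
           at a peak deceleration of roughly 9-10 g. */
        printf("stopped at depth %.0f m, peak deceleration %.1f g\n", x, peak / g);
        return 0;
    }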
The capsule would slow down rapidly because the compressed air would behave like a soft, pneumatic spring. Eventually the spacecraft would stop. In one scenario, as air escapes around its body, the vehicle would slowly sink down to the bottom of the pit and finally be stopped by a cushioning system. Airlocks at the bottom would open and a human recovery crew would gain access. As the capsule speeds down the shaft, there is some possibility that the air compressed below might not escape fast enough to give the desired deceleration profile. If so, the vehicle could stop and then bounce back upward, propelled by the compressed air, then halt, then descend again to a stop, then rise, then descend, etc., in a sequence of elevator-like up-and-down moves that slowly damp out. One way to prevent rebound would be to contour the sides of the spacecraft so that air can vent by (or maybe even through) its body. Another way to avoid a pogo-stick effect might be to arrange the primary shaft to nestle concentrically inside a larger outer tube. Vent holes or slots between the two shafts would release air from under the capsule at a controlled rate.

The best site for "Pit Stop" ground stations would be a place with no wind, but light or steady local winds could be compensated for by the vehicle's terminal guidance. Daily upper atmospheric variables such as density, high altitude winds and irregular de-orbit burn characteristics all introduce uncertainty in the final reentry path. These unknowns could be addressed by installing several "Pit Stop" capture facilities spaced kilometers apart along the reentry ground track in the general landing area. The final choice of which receiving pit to use would be made when the capsule begins its atmospheric dive toward the ground.

The "Pit Stop" recovery scheme should work with a wide range of vehicles. When the system is proven to be reliable, it may be especially attractive for use with manned vehicles that pay a big weight penalty to achieve soft-landing capability. Shaft recovery provides pneumatic cushioning that's inherently automatic and reliable. The vehicle guidance and control system must be absolutely trustworthy, but the directional precision needed for such control is demonstrated regularly in existing military applications.

Back in 2004 the parachute system failed on a Chinese reentry vehicle, yet the capsule remained basically intact after smashing through the roof of a house. In the same year, when the chutes on NASA's Genesis capsule malfunctioned, that deep space probe hit the Utah desert traveling at almost 100 meters per second. Relatively soft soil kept the capsule from disintegrating and much of the payload was recovered. These two rare accidents show that it is possible for a spacecraft to survive after falling through the lower atmosphere with no retarding devices and striking the ground. Thus, a totally successful soft landing is mostly a matter of controlling the final deceleration. It makes perfect sense to put equipment on the ground to do this rather than to carry heavy landing devices on the vehicle. An underground recovery system based on this "Pit Stop" concept could become routine when it's appreciated that such a scheme would be simple, safe, and economical.

James C. McLane III worked as an engineer for over 20 years in NASA's manned space program.
An Associate Fellow in the American Institute of Aeronautics and Astronautics and a licensed Professional Engineer in the State of Texas, his current job in the oil and gas industry allows him to view human space activities from a fresh perspective.
<urn:uuid:6e3c3727-cfe1-43a8-a668-81a963800a75>
3.625
1,467
Personal Blog
Science & Tech.
38.625192
Introduced in 1935, the cane toad (Bufo marinus) is one of the most prominent and successful invasive species in Australia. It was originally introduced from specimens in Hawaii, although the species is native to Central and South America. Without showing any noticeable interest in the cane beetle it was brought in to control (the beetle was eating sugar cane crops in northern Queensland), the cane toad spread unimpeded throughout Queensland to New South Wales and reached the Northern Territory by the 1980s. It has a voracious appetite, eating anything from insects and frogs to small possums. And basically no predators touch it, thanks to its parotoid glands, which secrete a milky toxin often deadly to anything that tries to consume it! Biological control, people: it needs to be researched before you devastate a continent's wildlife.
<urn:uuid:756c2e6b-ccfa-453b-bd11-830b3a713dda>
2.890625
168
Personal Blog
Science & Tech.
38.311362
Alligators are everywhere. They're team mascots, Transformer toys, actors in Lubriderm commercials (and CSI: Miami), unwanted golfing partners, and even expensive cowboy boots. What might be a surprise is that they're also "model animals" for scientists, meaning that there are dozens, if not hundreds, of published technical articles on all things gatorly. They're also commonly used in K-12 and undergraduate classrooms. WitmerLab has been working on American alligators for years, because crocodilians are one of just two living groups (birds are the other) of that great tribe known as archosaurs that includes dinosaurs and pterosaurs. Now, we're joining with Casey Holliday's lab at the University of Missouri to present the 3D Alligator, two parallel, complementary, and growing websites that present alligator anatomy in all its 3D digital glory. In both cases, we're starting with the skull, although we include a few soft-tissue systems that are active areas of research for us (brain, inner ear, sinuses, etc.). Casey's team presents an adult skull, and we present a wee gatorling, a "day-0" hatchling that was stillborn on its birthday. Sad perhaps, but this little guy is now immortal, because we're releasing him to the tubes of the interwebz. We also present some of our 3D alligator work on an adult done "way back" in 2008. Check out the WitmerLab 3D Alligator site and the Holliday Lab 3D Alligator site.

Birds have a lousy sense of smell, right? That common perception may apply to some modern-day birds, but that wasn't always the case. Early birds, frankly, smelled like dinosaurs, meaning that they inherited a pretty respectable sense of smell from their dinosaurian kin. The typical scenario had been that as birds evolved flight, the senses of vision and balance increased and the olfactory sense diminished. Darla Zelenitsky (University of Calgary) and François Therrien (Royal Tyrrell Museum) invited Ryan Ridgely and me to join forces in testing this scenario by studying the evolution of the olfactory bulb, the part of the brain receiving information on odors, across the transition from small theropod dinosaurs to birds. As our new article in Proceedings of the Royal Society B reveals, birds started out with a full sensory toolkit, including a pretty capable sniffer. And we also learned a thing or two about non-avian theropods along the way.
<urn:uuid:2e890b31-e47a-4847-baa8-1f48ec4b3c46>
2.734375
557
Personal Blog
Science & Tech.
42.611729
Coral reefs in the Caribbean Sea and the western Atlantic Ocean have been in a continual state of decline for the last 20 years. Scientists traditionally have looked to human influences as the culprit. But according to a relatively new scientific study, the real problem may be blowin' in the wind. The demise of coral reefs coincides with large increases in the influx of dust from Africa. Indeed, the hundreds of tons of soil dust that have crossed the Atlantic yearly for the past 25 years could be a significant contributor to coral reef decline -- as well as a factor in other areas such as human health -- according to Eugene A. Shinn, a research geologist with the U.S. Geological Survey in St Petersburg, Fla. Shinn, a leader of several popular AAPG field seminars that use the Caribbean as a setting, has long been recognized as an expert in not just the region's geology but also its total environment. "Atmospheric transport of dust from North Africa may be responsible for a number of environmental hazards, including the demise of Caribbean corals, red tides, amphibian diseases, increased occurrence of asthma in humans, and decrease of oxygen in estuaries," Shinn said. Shinn, along with several fellow scientists, reported on the problem in a paper titled "African Dust and the Demise of Caribbean Coral Reefs" in the AGU journal, Geophysical Research Letters. USGS scientists have monitored coral reef vitality for nearly 40 years. Atlantic coral diseases were first reported in Florida and Bermuda in the 1970s, but these early reports received little attention until the late 1980s when the problem could no longer be ignored.

The Problem Emerges

Black band disease on corals first appeared in the Caribbean in 1973, and from 1978 to 1983 several coral species suffered a period of die-off in Florida, Bermuda and the Caribbean. In 1983 the herbivorous sea urchin Diadema antillarum was virtually wiped out throughout the Caribbean. Some unknown pathogen first decimated Diadema populations in Panama in January, and by July had spread to coral reefs in Belize, Mexico and the Florida Keys -- indicating rapid transport of the pathogen in the main Caribbean current. Shinn and Garriet W. Smith, with the University of South Carolina-Aiken, observed the die-off in Florida and San Salvador in the eastern Bahamas, well away from the major current flow. "The effect of the Diadema die-off where it occurred was immediate and obvious," Shinn said. "Algae normally grazed from dead coral surfaces proliferated, interfering both with coral recruitment and growth." "I've been watching coral reefs since the 1950s, photographing the same coral for over 40 years," he continued. "These diseases became apparent to me in the late 1970s, and the problems accelerated dramatically in 1983." During the summer of 1983 elkhorn and staghorn corals, two major Caribbean reef building species, also experienced mass mortality. Entire forests of these branching reef builders perished, and in the absence of Diadema grazing, dead elkhorn and staghorn branches were quickly overgrown by fleshy algae, which retard the establishment of coral larvae. In addition, a pathogen affecting Caribbean sea fans was reported at about the same time, and that soft coral species also suffered mass mortalities. "The reefs have never been the same," Shinn said. "Algae began growing everywhere."

A Connection is Made

A second major coral reef event occurred during the warm, quiet, almost hurricane-free summer of 1987.
This major disease struck Caribbean populations of coral and sponges, resulting in bleaching of the coral surfaces. Corals such as brain coral expelled the symbiotic algal cells that give them color, leaving them snow white -- a condition called bleaching. Episodes of bleaching and black-band disease proliferated and continued into the late 1990s. Finally, in the mid-1990s a Caribbean-wide pathogen affecting sea fans was reported. Scientists determined that the pathogenic agent was a soil fungus, aspergillus. These major coral disease events coincided with increases in warm water associated with El Niño weather patterns, so many scientists initially believed these die-offs were the result of pollution, sedimentation or warm water associated with the North Atlantic Oscillation, which coincides with the larger Pacific El Niño Southern Oscillation. "I didn't correlate these coral diseases with the large increases in African dust transported to the western Atlantic and Caribbean Sea until one day I read a little article in Geotimes on work done in the Amazon Rain Forest that showed the rainforest received some of its essential nutrients from African dust," Shinn said. "I started thinking if this dust could support plant growth halfway around the world, it might also explain some of the algae growth on coral reefs."

Day of the Locusts

African dust blowing across the Atlantic is certainly not a new discovery. Mariners have noted the phenomenon in their ships' logs for hundreds of years. The prevailing easterlies that bring the dust clouds are the same trade winds that blow hurricanes toward the United States and Caribbean every year. Shinn contacted Joe Prospero, who is with the Cooperative Institute for Marine and Atmospheric Studies at the University of Miami and has been monitoring dust in Barbados since 1965. His studies showed significant increases in African dust transport in the 1970s, which correlated with drought conditions in northern Africa. "One of the first things that jumped out at me while I studied his data was big spikes in dust transport in 1983 and 1987 -- the two years we experienced the massive mortality rates on coral reefs," Shinn said. It seemed like a reasonable hypothesis that African soil dust resulting from the prolonged drought in the overgrazed grasslands of the Sahel and the desiccation of Lake Chad could contain abundant fungal spores. "In October 1989 large African desert locusts were transported in dust clouds to Trinidad," Shinn said. "If a one-inch long grasshopper can be blown over in the dust, it should come as no surprise that African dust could contain soil fungus spores." Conventional wisdom holds that ultraviolet light kills all the spores, fungi and bacteria before they can cross the Atlantic -- but Shinn and colleagues have identified many species of fungi, including aspergillus spores, in aerosols collected during dust events in the Virgin Islands. These nutrients in the African dust may stimulate phytoplankton and benthic algal growth in the normally oligotrophic waters of the Caribbean coral reefs. A USGS microbiologist has 110 different species of bacteria and fungi in culture from four air samples taken in the Virgin Islands. About 80 percent of those are bacteria and 20 percent are fungi.

Now That's a Cloud

Originally, some researchers speculated that the Caribbean-wide epidemic of aspergillosis in sea fans was related to increased runoff caused by deforestation in the Caribbean.
However, outbreaks around isolated Caribbean islands such as San Salvador indicate this cause is unlikely. Also, identification and culturing of aspergillus from those air samples taken during dust storms in the Virgin Islands show that African dust is an efficient substrate for delivering aspergillus spores. This year Shinn's colleague and co-author Smith successfully inoculated healthy sea fans with aspergillus cultured from spores in dust sampled from the air during African dust outbreaks in the Virgin Islands. Dust often reduces visibility in the Virgin Islands, even causing temporary airport closings. This dust has been tracked from North Africa across the Atlantic with NASA SeaWiFS and TOMS satellite data, which are readily available on the Internet. This past February, in fact, a NASA satellite photographed one of the largest dust storms ever observed. The brown cloud, approximately the size of Spain, was seen leaving Africa and blowing west across the Atlantic toward the Caribbean and the United States. Also, satellite images show that African dust transported across the Atlantic goes mainly toward the southern Caribbean and equatorial regions of South America during North American winters, and that transport direction shifts northward to impact Florida and the southeastern United States during summer months. This pattern of seasonal change suggests that a dust-borne pathogen could impact the southernmost part of the Caribbean around Panama during January and a few months later the entire Caribbean. The sea urchin Diadema mortality was first reported in Panama in January 1983, and within just a few months spread northward throughout the Caribbean. "We think these observations and experiments provide a reasonable explanation for the near-synchronous widespread distribution of outbreaks around remote oceanic islands in the Caribbean," Shinn said.

The Iron Curtain

In addition to hosting spores, African dust itself is composed of chemical elements such as iron, phosphorous and sulfates, which have been proven to stimulate phytoplankton in tropical waters. "Richard Barber with Duke University's Marine Laboratory worked on an experiment in the Pacific Ocean, where they basically seeded 75 square kilometers with an iron solution," Shinn said. This experiment -- done to demonstrate a way to remove CO2 from the atmosphere if ever needed -- was conducted in an area where oceanographers have known there is enough nitrogen and phosphorous in the water to support plankton, but there wasn't any present. "Scientists believed iron was the missing micronutrient," Shinn said. "The water turned to pea soup when the iron solution was introduced, proving that iron was indeed the missing link." Shinn said he attended a series of lectures Barber gave concerning his work, and he "realized that iron likely plays an important role in the coral die-off in the Caribbean and western Atlantic. The one ingredient that is always present in African dust is iron -- about 6 percent of the dust is iron oxide."

Human Health Hazards

While Shinn's studies focus on the detrimental effects of African dust on Caribbean and western Atlantic coral reefs, another serious issue is the impact of the dust on human health. As Shinn observed, the occurrence of asthma has increased worldwide -- and Caribbean island nations are particularly afflicted with the illness.
"I was surprised to learn that about 50 percent of children in Puerto Rico have asthma," Shinn said. "So do the children of Trinidad, which is ground zero for African dust." He related the story of a Texas woman who moved to a poured concrete, sterile house in St. John, Virgin Islands, because of her allergies to many chemical and petroleum-based products. Her new life was a success until the first dust storm when she became violently ill. Today she and her husband collect samples for Shinn and his fellow scientists. "A large number of people have been studying the transport of dust for years," Shinn added. "Much of the research was stimulated by the military during the Cold War era, studying potential fallout patterns. "I was surprised no one had suggested this influx of dust might have an impact in other areas." 'A Difficult Situation' So what do we do about the potentially harmful effects of African dust? Shinn said the situation should be addressed at its root. "The United States and other countries don't expend much effort educating and teaching African nations about the effects of primitive agriculture and overgrazing in the Sahel," he noted. "It's very similar to what happened in this country in the 1930s, when the government stimulated farmers to go west and plow up the grasslands. Then a drought set in and we endured the devastating Dust Bowl years. "It is in our own interests to help teach better farming practices, and provide aid for irrigation in North Africa." However, not all branches of science believe that eliminating African dust from the Caribbean and western Atlantic is the best course of action. For example, the dust is highly beneficial to rainforests. "There is geological evidence that the rainforests wax and wan with the influx of African dust," Shinn said. "So scientists are at odds about the beneficial or detrimental effects of the dust. It's a difficult situation to resolve." Shinn's hypothesis has met with mixed reviews in the scientific community. "I started looking at the correlation between African dust and the coral reef die-offs in 1996 and it's been an uphill battle," he said. "However, today the theory is beginning to catch on. "I always say there are three stages of discovery," he continued. "First is, 'You are wrong and I can prove it.' Second is 'You're right, but is it important?' And third is 'Didn't we know this all along?' "We are somewhere between stage two and three with this study," he laughed. At least, research funds are finally beginning to trickle in to study the issue of African dust and its impact in the Caribbean and western Atlantic. "This is a classic story about how science is supposed to work," Shinn said -- "collaboration and a great many little things coming together to push our knowledge forward."
<urn:uuid:b810db48-65ee-41ab-b7ec-798e465a4a19>
3.21875
2,705
Knowledge Article
Science & Tech.
37.747141
Pieter J. van Rhijn

stellar luminosity function: ...refers to the absolute number of stars of different absolute magnitudes in the solar neighbourhood. In this form it is usually called the van Rhijn function, named after the Dutch astronomer Pieter J. van Rhijn. The van Rhijn function is a basic datum for the local portion of the Galaxy, but it is not necessarily representative for an area larger than the immediate solar neighbourhood....
<urn:uuid:f3d7e0bb-0a76-4c54-b00e-392fe5f81701>
3.234375
154
Knowledge Article
Science & Tech.
57.658333
Summer bushfires near densely-populated areas in southern Australia attract the most public attention, but the continent's largest and most frequent fires actually occur in the spring in the tropical savannas of northern Australia.

On November 25, 2012, the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Aqua satellite captured this natural-color image of wildfires burning on Cape York Peninsula, the northernmost part of Australia. Smoke plumes from multiple fires are visible streaming west; red outlines indicate hot spots where MODIS detected unusually warm surface temperatures associated with fires.

Three main meteorological variables affect fire behavior: atmospheric humidity, air temperature, and wind strength. On Cape York Peninsula, humidity drops and temperatures rise beginning with the arrival of the dry season in March. By the end of November, grasses and woodland trees in the region have reached their most flammable state. In this case, steady winds (about 20 kilometers, or 12 miles, per hour) have helped sustain fires that were likely started by lightning and human activity.

About half of the savanna woodlands on Cape York Peninsula burn roughly every year or two, typically late in the dry season. Dried grasses, which receive ample sunshine and rain during the wet season, provide the bulk of the fuel. As the dry season progresses, trees also drop loads of leaves and twigs that help fuel fires. Key grasses and trees in the region include Sorghum, Eucalyptus, and Corymbia.

Despite the frequency of the burning, wildfires on Cape York Peninsula are generally less severe than in southern Australia because fungi, bacteria, and other decomposers that thrive in the region continuously break dried grass and leaf litter down. Fewer decomposers live in southern Australia, making it possible for leaf litter and other fuel to build up for decades and prime the landscape for extremely destructive fires.

NASA image courtesy Jeff Schmaltz, LANCE MODIS Rapid Response Team at NASA GSFC. Caption by Adam Voiland.
<urn:uuid:b54950c4-39af-4353-b0f1-48e53a6b66bc>
4
521
Knowledge Article
Science & Tech.
43.052569
Falling prey to noise

An oceanside log cabin surrounded by a 1,000-year-old forest is illuminated by the glow of a computer screen. Even though it's 2 a.m., Georgie Gemmell is still sitting at the laptop, which rests on a desk cluttered with binoculars, audio cassettes and hand-drawn maps. As if in a trance, the 22-year-old closes her eyes and listens to static white noise, which an underwater microphone is broadcasting through a set of speakers above her head. Suddenly, a high-pitched whistle jolts Gemmell into action — she fumbles for a pen and a small red notepad and starts furiously scribbling notes. The whistle twists into a long, fluid song, which is joined by a chorus of haunting voices. A smile spreads across Gemmell's face: a pod of orcas is nearby. "They sound like thunder at night," she says. "You can't see them when they swim past, but you can hear them. It's really magical."

Gemmell is a summer volunteer at OrcaLab, a research facility on British Columbia's Hanson Island that focuses on killer whale acoustics. Two ferry trips, one water taxi and 10 hours of driving northwest from Vancouver, near the northern end of Vancouver Island, OrcaLab operates a network of six hydrophones that continuously listen to the ocean to record orca vocalizations. The songs are streamed online at orcalive.net, but in recent summers, boat noise has drowned out the orcas. Tourism is a $13 billion annual industry in British Columbia, and in summer, visitors migrate from the ski slopes to the sea. Local First Nations fishermen also hit the water in pursuit of salmon. "Sometimes, cruise ships come in clumps and all of Johnstone Strait [the waterway surrounding OrcaLab] reverberates with the sound of their propellers," says cetologist and former neuroscientist Paul Spong, who founded OrcaLab in 1970 and remains its director. "It's unbearable for us to listen to, but I can't imagine what it's like for the whales."

In 2001, the killer whale (Orcinus orca) was put on Canada's Species at Risk registry. To help boost its population, Fisheries and Oceans Canada (DFO) made the Johnstone Strait a "critical habitat" for the orca. Although the designation is, in part, intended to monitor the "degradation of the acoustic environment," Spong says it hasn't diverted ships.

Ship noise is making it tough for orcas in Johnstone Strait, B.C., to find their prey.

Since the 1980s, Canadian marine biologists have been trying to figure out how sound pollution affects whales. To assess the impact of boat noise on killer whales, says John Ford — an adjunct professor in the University of British Columbia's zoology department and an orca expert who has worked for DFO for 10 years — one must understand how different orca populations hunt. Researchers theorize that northern resident orcas, a threatened population of about 260 whales that lives in Johnstone Strait, are picky eaters; they feed on chinook, the largest species of salmon. It is believed that orcas use an acoustic tool called echolocation to hunt, emitting a staccato "click" into the water and waiting for the sound to bounce back off potential prey. In quiet waters, orcas can detect a chinook up to 100 metres away in seconds. "The northern residents need really quiet conditions to detect the echoes bouncing back from the fish," says Ford. "The constant boat noise could interfere with their foraging efficiency by masking the sound of their sonar, which they need to detect salmon." But human-made noise doesn't affect just salmon-eating orcas.
Transient killer whales — a group of more than 260 nomadic whales — eat seals, dolphins and other porpoises. Due to their prey's acute hearing, transients rarely echolocate but, instead, hunt through a stealthy game of hide-and-seek. "Transients are virtually silent while hunting, using passive listening to detect prey," says Ford. When boats come near, transients likely can't hear their prey splash.

Boat noise isn't the only source of acoustic pollution. One day in the early 1980s, Ford saw orcas react to a ship's low-frequency sonar in Johnstone Strait. "The whales had formed a tightly knit group and were heading toward the shore. I wasn't sure whether they were going to end up on the beach." The whales swam in dangerously shallow water until the sonar passed, at which point they returned to the deep. The orcas' frantic reaction taught Ford that sonar has the potential to cause serious harm.

OrcaLab was created in 1970 to monitor whales in Johnstone Strait.

Lindy Weilgart, a marine researcher at Dalhousie University who specializes in acoustic communication among whales, believes sonar affects whales in other ways too. "There is enough evidence to show that they don't need a beach to die," says Weilgart, referring to a case in which a whale died at sea four hours after having been exposed to sonar. "The stimulus of the noise itself hitting the super-nitrogen-saturated blood is enough to force the nitrogen bubbles out of the solution, blocking blood vessels and causing hemorrhaging." Boat noise, while not directly deadly, says Weilgart, still impacts the orcas' environment. "Whales are dealing with many stressors all at once," she notes, pointing to low salmon stocks and water pollution. "They're rarely dealing with just noise pollution."

Michael Jasny, an environmental lawyer for the New York City-based Natural Resources Defense Council, says sound pollution could be tackled by mandating quieter, more energy-efficient propellers on large ships. "You could save the shipping and travel industry millions," he says. Jasny describes the whales' acoustic environment as an "urban jungle" and warns that Canada's small orca population could experience further decline if sound pollution isn't confronted. "The problem will only become worse if Canada doesn't take action."
<urn:uuid:f29defdb-cdf3-4846-abcb-add8ac475c79>
2.6875
1,380
Knowledge Article
Science & Tech.
50.309821
Turns out that a large part of Mars could support life

One of humanity's dreams has always been to go out into space and find a new home to live on (and, knowing us, to screw up), and one of the candidate places has always been Mars, even though it has been shown to be barren of any kind of life as we know it. As much as we might like to think that one day we could see the human race expand into colonies on Mars, the common consensus among scientists is that we couldn't survive on the planet without a whole lot of help.

That opinion could be changing, though, as a new study of the planet suggests that Mars could be more life-sustaining than was originally thought. The study was carried out by the Australian National University, and working with the most up-to-date data now available, the researchers looked at the planet as a whole rather than at particular areas, as previous scientists had done. What they found was that 3% of Mars could sustain life; the catch is that the areas they are talking about are underground.

The Martian atmosphere has a bad combination of low temperature and low pressure. When put together, it means that liquid water cannot exist in most places on the planet's surface. However, Lineweaver believes that the additional pressure of soil could mean that water exists below ground. Additionally, warmth from the planet's core could render some regions of the planet's subsurface warm enough for bacteria and other micro-organisms.

If I were younger I'd sign up in a minute for a shot at living on Mars. Hell, now that I think of it, I'd do it even now, even though my wife might not like the idea.
<urn:uuid:abd55cd7-5273-4f39-abe2-5475e1057404>
2.84375
369
Personal Blog
Science & Tech.
53.486754
This article has focused on finding the right role for C++ amongst today's other popular languages, and on understanding its most difficult aspect: memory management. The tables of common memory-related errors presented here can be used as a handy reference for finding and avoiding such errors in your own code.

Subsequent articles of the series will continue to discuss C++ memory management in greater detail. The second article will be devoted to describing the nature of the C++ memory management mechanism, so that you can begin to apply it creatively in your designs. After that, the third article will present a series of specific techniques that you can use as building blocks in your programs.

C++ memory management is an enormously useful tool for creating elegant software. Having gained a clear awareness of its dangers, you are now ready to understand its benefits. Enabling you to do so is, ultimately, the purpose of this series of articles.

A number of very useful resources are available regarding C++. Notes on these resources are provided here (the Bibliography itself follows).

First, you need a book with broad coverage, which can serve as an introduction, a reference, and for review. Ira Pohl's C++ by Dissection [Poh02] is an example of such a book. It features a particularly gentle ramp-up into working with the language.

In addition to a book with broad coverage, you will need books that focus specifically on the most difficult aspects of the language and present techniques to deal with them. Three titles that you should find very valuable are Effective C++ [Mey98] and More Effective C++ [Mey96] (both by Scott Meyers), and C++ FAQs [Cli95] by Marshall P. Cline and Greg A. Lomow, which is also available in an online version.

The key to reading all three of these books is not to panic. They contain a great deal of difficult technical detail, and are broken up into a large number of very specific topics. Unless you are merely reviewing material with which you are already familiar, reading any of these books from cover to cover is unlikely to be useful. A good strategy is to allocate some time (even as little as 15 minutes) each day to work with either of Meyers' books, or with C++ FAQs. Begin your session by looking over the entire table of contents, which, in all three books, has a very detailed listing of all of the items covered. Don't ignore this important step; it will take you progressively less time as you become familiar with each particular book. Next, try to read the items that are most relevant to the current problem that you are trying to solve, ones where you feel that you are weak, or even those that seem most interesting to you. An item that looks completely unfamiliar is also a good candidate — it is likely an important aspect of C++ of which you are not yet aware.

Finally, when you want insights into bureaucracy, tips on what to do with your ice water during NASA meetings (answer: dip booster rocket O-ring material into it), or just a good laugh when you are frustrated with C++, try Richard P. Feynman's "What Do You Care What Other People Think?" [Fey88]. The second article in this series will describe why Feynman's book is so important.

See Further Reading for notes on this bibliography.

[Cli95] Marshall P. Cline and Greg A. Lomow. C++ FAQs: Frequently Asked Questions. Addison-Wesley Publishing Co., Inc., 1995. ISBN 0-201-58958-3.

[Fey88] Richard Feynman and Ralph Leighton. "What Do You Care What Other People Think?": Further Adventures of a Curious Character. W.W.
Norton & Company, Inc., 1988. ISBN 0-393-02659-0.

[Mey96] Scott Meyers. More Effective C++: 35 New Ways to Improve Your Programs and Designs. Addison-Wesley Longman, Inc., 1996. ISBN 0-201-63371-X.

[Mey98] Scott Meyers. Effective C++: 50 Specific Ways to Improve Your Programs and Designs. Second edition. Addison-Wesley, 1998. ISBN 0-201-92488-9.

[Poh02] Ira Pohl. C++ by Dissection: The Essentials of C++ Programming. Addison-Wesley, 2002. ISBN 0-201-74396-5.

(The original article also includes a list of code examples and a list of tables; the tables cover errors in function and method calls and returns, errors when defining methods in classes, errors in handling of allocated memory or objects, and errors related to exceptions.)

George Belotsky is a software architect who has done extensive work on high-performance internet servers, as well as hard real-time and embedded systems.
<urn:uuid:de677245-3723-4c52-8480-718d8dcbb33a>
2.71875
1,205
Tutorial
Software Dev.
54.839957
|Jun21-12, 09:50 AM||#1|

Spin direction through ferromagnetic material

What is the spin direction of electrons travelling through a ferromagnetic material? Do they become parallel or antiparallel to the magnetization? I am confused, because the scattering rate and the conductivity both depend on the density of states at the Fermi level. For a ferromagnetic material, is it right to say that the density of states at the Fermi level is smaller for the parallel spin direction than for the antiparallel one?

|Jul4-12, 02:19 AM||#2|

You have to clarify what kind of ferromagnetic material it is; there is no common answer to such a question. As a suggestion, please refer to the concept of the spin valve.
<urn:uuid:5a4a34a2-2edc-4c30-ad16-47836980c7e0>
2.78125
274
Comment Section
Science & Tech.
32.535756
G and H
- Galactic - Having to do with the galaxy.
- Galactic Cosmic Rays (GCRs) - These cosmic ray particles come from outside our solar system, but from within our galaxy. They have lost all of their electrons during their trip through the galaxy. More about GCRs...
- Galaxy - One of billions of large systems of stars and gas, held together by gravity, that make up the universe.
- Gamma rays - High energy electromagnetic radiation (in excess of 100 keV) which can be generated by nuclear reactions in space. [Image: the EGRET gamma-ray all-sky survey above 100 MeV.] More about gamma rays in "Imagine the Universe!"...
- Gas - A low number of atoms or molecules in a relatively large volume of space. Atoms or molecules are spread apart relative to each other.
- Gas-proportional counter - An instrument that measures the point of impact of a particle and the energy loss of the particle through a gas inside the counter. There are gas-proportional counters on the SEPICA instrument onboard the ACE spacecraft.
- Geocentric - Based on the Earth as center; as, the geocentric theory of the universe.
- Geomagnetic storm - A magnetic storm on Earth.
- GMT (Greenwich Mean Time) - The local time at the 0 meridian passing through Greenwich, England; it is the same everywhere. Same as UT (Universal Time).
- Gradual flares - One of two general types of solar flares. Gradual flares accelerate mostly protons and last for several days. They occur mainly near the poles of the Sun and happen, on average, about 100 times per year.
- Gravity - A physical force attracting one object to another object.
- Ground station - Ground stations are the link between the control system and a satellite in orbit. They track a satellite's signal to find its location and the status of equipment onboard, and distribute that information to interested parties. They are also used to transmit operational signals back to the satellite to help control it. [Image: a ground station at the NASA/Goddard Space Flight Center in Greenbelt, MD.]
- Helio- - Prefix referring to the Sun.
- Heliocentric - With the Sun at the center.
- Heliopause - The gradual boundary between the heliosphere and the interstellar gas outside our solar system. See the diagram with the definition of "heliosphere" below.
- Heliosphere - The area in space that contains our solar system, solar wind, and the entire solar magnetic field. It extends well beyond the orbit of Pluto, out to the heliopause. More about the heliosphere...
- Helium - The gas made from hydrogen in the core of stars by nucleosynthesis. Each atom of helium contains two protons.
- Hemisphere - One half of a sphere or globe. The northern hemisphere on Earth is divided from the southern hemisphere by the equator.
- Hodoscope - An instrument that determines the three-dimensional path of a particle through it. Hodoscopes are used in both the SIS and CRIS instruments on the ACE spacecraft.
- Hydrogen - The most common gas in the universe. Each atom of hydrogen contains one proton.
<urn:uuid:b7999214-ce41-4226-9985-90a63b9e15b2>
3.84375
663
Structured Data
Science & Tech.
52.004469
In the type of work that we do, just getting the cells out of the rock or sediment where they were living can be difficult. Many of these organisms are adapted to live at high temperatures and pressures, and live deeply embedded in rock and aren't going to detach just because we scientists want them to. There are chemicals that help with this step as well as various regimes of shaking, spinning, heating and cooling. From there the next step is to lyse (burst) the cells so that they release their DNA (or RNA, depending on what you are interested in). This can be accomplished with freezing and thawing as well as sonication (using sound to move particles), and more chemicals. When a cell bursts it releases more than just DNA, and so the next step is to use other chemicals to make sure that the cells' own enzymes don't break down the DNA (or RNA) that we are interested in. If there are lots of metals present in the sample they need to be removed with still other chemicals. Eventually you have (hopefully) isolated your DNA and you are ready to make copies of it so that you have enough to "read".

One way this is done is with PCR (the polymerase chain reaction). In a tiny tube goes your DNA, loose nucleotides (raw material to make more DNA), an enzyme (does the actual assembly), primers (tell the enzyme where to start and stop building), water and buffer. Then the tubes are placed in a machine that runs them through cycles of heat and cold to (hopefully) stimulate the enzyme to make copies of the DNA by assembling the nucleotides in the same order they are assembled in the original DNA.

Lots of things can (and do) go wrong in this process. If you didn't have DNA to begin with, you will get no DNA after the PCR (obviously). If you use the wrong primers they won't match up with the DNA, and the process can't start. If the temperature is too hot or too cold, the enzyme makes mistakes and copies the DNA incorrectly or doesn't work at all. If there are too many metals in the solution left over from the sediment, the reaction will not work. If your enzyme has been stored incorrectly it will not work. When it doesn't work you simply try again, and again, and again until you figure out which step went wrong. Keep in mind that this has to happen for each sample you are dealing with.

Once you have your PCR product (amplified segments of DNA selected by your primers), it gets run on an electrophoresis gel. Basically you use electricity to move the DNA segments through a gel (kind of like gelatin). The smallest fragments will be pushed the farthest along the gel by the electricity and the largest fragments will move the least. If all goes according to plan, you see bands in the gel corresponding to different-sized fragments of DNA. Each band represents millions of copies of that specific fragment. At that point you use a gel extraction kit to remove the now-purified (all the same segment) DNA from the gel.

At this point the DNA gets sent off for sequencing, where various technologies that are too technical for this blog (maybe I'll try explaining when I understand them better... on the other hand, maybe I'll spare you that) are used to read the pattern of A's, C's, T's, and G's of each fragment. The code then gets sent back to the scientists, who have to figure out how to assemble the various fragments of DNA into something that can be useful. The final step is analysis.
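Before turning to that analysis, it's worth putting numbers on the copying step above. Under ideal conditions each thermal cycle doubles every template molecule, so amplification is exponential; real reactions fall short of this (enzyme efficiency, reagent depletion), so treat the throwaway sketch below as an upper bound:

    #include <stdio.h>

    int main(void)
    {
        /* Idealized PCR: n cycles give 2^n copies per starting molecule. */
        double copies = 1.0;
        for (int cycle = 1; cycle <= 30; cycle++) {
            copies *= 2.0;
            if (cycle % 10 == 0)
                printf("after %d cycles: %.2e copies\n", cycle, copies);
        }
        return 0;   /* 30 cycles: ~1.1e9 copies from a single template */
    }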
As for the analysis itself: depending on the question asked, this might be trying to use the genetic code to figure out how closely related two species are, or what organisms were in your sample, or what genes were present, or any one of a number of different questions. For each question there are multiple ways to search for an answer, and in some cases different methods will provide different answers. Scientists need to understand the (often new) technologies used for the various steps so that they can properly interpret the data. It is not enough simply to know the code. A question as simple as "is species A more closely related to species B or species C?" can have different answers depending on what part of the DNA was amplified. Sometimes one gene can tell one evolutionary story, where a whole genome (all the DNA in an organism) can tell a very different one. If you only look at the one gene, you might never know. This is why scientists still argue about how certain species evolved and why phylogenetic trees (think family tree of species based on genetics) can be very controversial. The point is not that science is hard (duh!), or even to make you think I am crazy for wanting to do all of this. However, maybe next time you watch CSI or Law and Order and the crime lab instantaneously delivers that key DNA evidence, you will realize that science doesn't actually work that fast, and you will know that it really is quite complicated!
<urn:uuid:5c1d1d1e-90ae-41ba-8aeb-e65ae39fff6a>
4.09375
1,028
Personal Blog
Science & Tech.
57.906286
Why the Earth's Temperature Is What It Is

In his first-year seminar titled Energy, Professor Michael Brown's explanation of the Earth's temperature begins by calculating the Sun's radius (R_S), the distance from Sun to Earth (r_E-S), and the cross-sectional area of the circle of solar energy that is intercepted by Earth's surface. Equations below the diagram show the derivation of the Sun's luminosity—its total power output—from its surface temperature (T_S) and size (R_S) using the Stefan-Boltzmann Law. At its surface, the Sun gives off 3.9 × 10²⁶ watts—one very bright bulb!

The next equation shows how this power is diluted by distance as this "bubble" of energy expands outward from the Sun. The power density at the surface of a bubble the radius of the Earth's orbit is measured in watts per square meter; it is known as the Solar Constant, and is calculated at 1380 W/m².

Using the Solar Constant and the cross-sectional area that this energy falls on, Brown shows that the power absorbed by the Earth (P_ABS) totals 1.76 × 10¹⁷ W. But because Earth also radiates energy, Brown factors in this loss to the system (P_RAD) and equates the two values to arrive at a temperature for a "bare rock" Earth (T_E) of 280 K. Pretty chilly for us warm-blooded humans!

Fortunately, there's the greenhouse effect. The Earth's atmosphere acts like a thin layer of glass that reflects back some of the energy radiated by the Earth. Thus, the planet has two energy inputs—one from the Sun and one from the Earth itself—that combine to raise the temperature to a habitable 303 K. Just about right—unless we mess with the reflective capacity of the atmosphere.
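The whole chain of reasoning fits in a short program. Below is a minimal sketch in C, assuming standard textbook values for the constants (solar surface temperature and radius, the Sun-Earth distance, Earth's radius, and the Stefan-Boltzmann constant); it reproduces the luminosity, Solar Constant, absorbed power, and bare-rock temperature quoted above to within rounding. Compile with -lm:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double PI    = 3.14159265358979;
        const double sigma = 5.67e-8;    /* Stefan-Boltzmann constant, W m^-2 K^-4 */
        const double Ts    = 5778.0;     /* solar surface temperature, K (assumed) */
        const double Rs    = 6.96e8;     /* solar radius, m (assumed) */
        const double rES   = 1.496e11;   /* Sun-Earth distance, m (assumed) */
        const double Re    = 6.371e6;    /* Earth radius, m (assumed) */

        /* Stefan-Boltzmann Law: luminosity L = 4*pi*Rs^2 * sigma * Ts^4 */
        double L = 4.0 * PI * Rs * Rs * sigma * pow(Ts, 4.0);

        /* Dilute over a sphere of radius rES to get the Solar Constant */
        double S = L / (4.0 * PI * rES * rES);

        /* Power intercepted by Earth's cross-section, pi*Re^2 */
        double Pabs = S * PI * Re * Re;

        /* Bare-rock balance: Pabs = sigma * Te^4 * (4*pi*Re^2) */
        double Te = pow(Pabs / (sigma * 4.0 * PI * Re * Re), 0.25);

        printf("luminosity     = %.2e W\n", L);      /* ~3.9e26 W   */
        printf("solar constant = %.0f W/m^2\n", S);  /* ~1370 W/m^2 */
        printf("absorbed power = %.2e W\n", Pabs);   /* ~1.7e17 W   */
        printf("bare-rock T_E  = %.0f K\n", Te);     /* ~279 K      */
        return 0;
    }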
How did Asexual Life Evolve?
Name: Rob Courtney

It is an accepted fact that all life on Earth began as one-celled organisms that reproduced asexually, with all offspring identical to the parent from which they were created. My question is: if all offspring are identical to their parents, is there any room for advancement? How did life ever evolve past this primitive state if no changes are made? Thank you.

Genetic mutations still occur due to chemical and radiation effects.

By mutation. Since all cells in these organisms are, in a way, in the germ line (that is, they pass mutations on to their offspring), every mutation event lives on. In sexually reproducing organisms, only mutations in eggs and sperm will get passed on. Changes also arise by breakage and recombination-like events during cleavage.

Update: June 2012
Volcanic heat at the mid-ocean ridge axis drives hydrothermal circulation and chemical exchange between the ocean crust and seawater. Microbes are known to harness chemical energy from this volcanic system at temperatures as high as 121°C. [Figure: a mid-ocean ridge hydrothermal system, plume, and resulting deposits and precipitates.]

Hydrothermal Vents, Ocean Chemistry and Extreme Microbes
David A. Butterfield
Joint Institute for the Study of the Atmosphere and Ocean, University of Washington
Pacific Marine Environmental Laboratory, NOAA

Context of Hydrothermal Venting in the Chemical Balance of the Earth

The oceans are the largest reservoir of water on Earth, and the interactions between the Sun, the solid Earth, the atmosphere and the oceans are important in maintaining the chemical and thermal balance that supports life on our planet. The Sun drives the patterns of evaporation and winds that generate rain and snowfall. Precipitation reacts chemically with rocks and soils, and transports solutes (dissolved substances) in streams and rivers into the ocean. The delivery of dissolved solutes to the ocean via rivers is the primary input to the oceans that we are all familiar with. The ocean crust is created by the largest volcanic feature on the planet, the mid-ocean ridges. Along these volcanic ridges lies a hidden, but equally important, cycle of water transport that plays a key role in maintaining the chemical composition of the oceans. Volcanic heat drives convection of seawater through the permeable ocean crust, and the reactions that occur during this circulation remove some elements from seawater and add others (see illustration above). On a global scale, some elements' removal from or addition to seawater via hydrothermal circulation is of equal magnitude to their input from river sources. Many metals are enriched in hydrothermal fluids by a thousand to a million times over their concentration in normal seawater. Primordial helium (the stable isotope 3He), present at the formation of the Earth, continues to leak out of the Earth's mantle at mid-ocean ridge hydrothermal vents, and eventually enters the atmosphere when deep water wells up to the surface. Elements enriched in hydrothermal fluids are delivered to the ocean at hot or warm vents, and the effluent can be tracked by sensitive measurements, in some cases for hundreds to thousands of kilometers from the hydrothermal sources. [Figure: cross-section of ocean crust, with volcanic ridge axis at left and progressively older crust to the right; large arrows represent the flow of heat from the mantle, small arrows the flow of seawater and hydrothermal fluids through the crust.]

The Magnitude of the Hydrothermal Chemical Exchange

Clearly, the deep ocean is too inaccessible to measure the hydrothermal chemical exchange on a global scale, so systematic patterns have to be understood in order to estimate this exchange. Perhaps the simplest way to model the global hydrothermal system is to consider the heat exchange on the mid-ocean ridge and link heat to chemical exchange. Hydrothermal circulation efficiently cools newly formed oceanic crust, and the amount of heat removed by hydrothermal fluids can be estimated by comparing real sea-floor heat flow measurements to theoretical heat flow from crust that loses heat only by conduction.
Once the quantity of global hydrothermal heat loss is known, one can link the chemical exchange to it by measuring and understanding the ratios of hydrothermal components to heat. In theory, then, one can simply take the product of the global hydrothermal heat term and the excess or deficit of any hydrothermal component relative to the heat of the fluid that transports it.

What Controls Thermo-chemical Relationships?

If everything were uniform along the mid-ocean ridges through space and time, we could confidently predict thermo-chemical relationships and know the magnitude of global exchange. In reality, however, the factors that control the temperature and composition of hydrothermal fluids vary considerably in time and space. Rock composition, the presence or absence of sediments, permeability of the ocean crust, boiling and separation of vapor and liquid, the amount of time since the last volcanic eruption, and depth of the heat source all vary widely. The result is a wide range of vent temperature and fluid composition. Studies of mid-ocean ridge vents over the past 20 years have yielded a first-order picture of the chemical systematics of high-temperature fluids, although the range of composition continues to expand with exploration. In order to understand and quantify chemical exchange, we have to measure thermo-chemical relationships across the range of geological settings. NOAA's Ocean Exploration Program provides a way to search out and characterize vents in unexplored or poorly explored environments, bringing us closer to quantifying the role of hydrothermal circulation in the global chemical balance of the oceans. Work in the western Pacific is essential to progress in this area.

Microbes in Hydrothermal Environments

Microbes are present virtually everywhere on the surface of the planet: in soil, groundwater, and throughout the oceans. Since the discovery of hydrothermal vents in the late 1970s, it has been shown that microbes are also ubiquitous in and around hot and warm sea-floor vents driven by volcanic heat. The confirmed upper temperature limit for life continues to be pushed upwards, with the current mark at 121°C based on cultured microbes from hydrothermal vents at the Endeavour segment of the Juan de Fuca ridge. Microbes have the ability to capture energy from a huge range of chemical processes. Hydrothermal vents are an excellent place to find the diversity of microorganisms because they span such a huge range of conditions in a very confined space. Fluids range from near freezing to >400°C, from oxygen-rich bottom seawater with a pH near 8 to completely anoxic, sulfide- and metal-rich hydrothermal fluids with pH as low as 2. There is clear evidence that microbes thrive along the gradient of conditions created when seawater mixes with hot vent fluids. Researchers are still gathering the samples that can tell us how microbial communities depend on and vary with the physical, chemical, and geological environment. [Figure: thin-section photomicrographs of the "Strain 121" microbe, which uses ferric oxide to oxidize simple organic molecules at temperatures up to 121°C; this is the hottest microbe to be maintained in culture. Scale bars are one micron. Courtesy of Kashefi and Lovley 2003 (Science, vol. 301, p. 934).]

The evidence gathered from the hydrothermal plumes above the Mariana Arc submarine volcanoes in 2003 strongly suggests that we will encounter vent fluids outside the range of what has previously been observed.
During ROV (remotely operated vehicle) operations in these environments in March and April 2004, we expect to find new and unique microbes. This is part of an ongoing international effort to understand the diversity of microbial life on our planet and to discover microbial capabilities and products (e.g., enzymes, antibiotics) that can benefit society and the environment.
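The flux-by-heat-ratio bookkeeping described earlier in this article (global hydrothermal heat output multiplied by a measured component-to-heat ratio) amounts to one multiplication per element. A minimal sketch; every number below is an illustrative placeholder, not a measured value:

    # Flux-by-heat-ratio estimate: chemical flux = heat output * ratio.
    # All numbers are illustrative placeholders, not measured values.
    SECONDS_PER_YEAR = 3.156e7
    global_heat_watts = 1e13          # hypothetical axial hydrothermal heat loss

    mol_per_joule = {                 # hypothetical component-to-heat ratios
        "helium-3":  1e-17,
        "iron":      1e-9,
        "magnesium": -2e-9,           # negative: removed from seawater to crust
    }

    for component, ratio in mol_per_joule.items():
        flux = global_heat_watts * ratio * SECONDS_PER_YEAR   # mol per year
        print(f"{component}: {flux:+.2e} mol/yr")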
Posted by musical, a resident of the Palo Verde neighborhood, on Apr 13, 2012 at 3:51 pm

One can spend a lifetime studying lightning. People would be surprised at the amount of lightning research done at Stanford and nearby corporate campuses. Yes, the weather services have accurate techniques for recording every discharge, basically like listening for the static you hear on AM radio. The exact locations can be triangulated. Such instruments are distributed around the country and around the world, sensing up to many thousand flashes per second. May be predictive of hurricane severity, tornado probability, or even large earthquakes.

Posted by neighbor, a resident of another community, on Apr 13, 2012 at 4:19 pm

Lightning, indeed weather itself, has no relationship to earthquakes. If it did, Washington DC, Minneapolis, Chicago and the hundreds of U.S. cities that have terrible lightning for a good part of each year would be quaking like mad. Earthquakes are related to geologic phenomena.

Posted by musical, a resident of the Palo Verde neighborhood, on Apr 13, 2012 at 4:50 pm

The mechanism is a current research topic, especially in Japan and China. Seismic strains do set up electric fields, which under favorable meteorological conditions may produce excess cloud-to-ground discharges. That's how strain meters and new bathroom scales work, measuring the electric field of a crystal under load. At Stanford they study how very low frequency (VLF) radio waves may change character before an earthquake, and papers were written about it after the 1989 event. Science can be really quite interesting. They should teach more of it in high school.

Posted by azlo, a resident of the Old Palo Alto neighborhood, on Apr 13, 2012 at 5:34 pm

Always funny to read the comments from internet scientists on global warming(?). Good laugh when meteorologist Stumpf claimed rain totals this year were less than last year; guess that's why they tally yearly totals as "average rainfall totals". Guess he missed that day at meteorologist school.

Posted by Wondering, a resident of Another Palo Alto neighborhood, on Apr 13, 2012 at 6:20 pm

> Lightning, indeed weather itself, has no relationship to earthquakes

There is some evidence that RF electrical energy is released during, or possibly before, an earthquake. If true, then this RF energy could be detected, and triangulated, just as with the lightning strikes. The issue on the table is not what causes lightning, but how it is detected remotely. I looked up the detection mechanism, which is called LDAR (Lightning Detection and Ranging).
I disagree with the first comment, but it's based on biological stuff rather than good Perl practice. An array is an ordered list, numerically indexed. A hash is an unordered list which is string indexed. A hash uses far more memory to hold its data, and since you're working with a large file that is loaded into memory, you should be mindful of the unneeded memory usage.

Following or not following Perl's best coding practices is the difference between good Perl programmers and mediocre or poor Perl coders.

I see a lot of the point of the third one, but I don't understand the "$!". What is that?

$! holds the error message returned by the OS. So, in the case of opening a filehandle or directory handle, it will tell you the OS's reason why it failed. You can learn about Perl's special variables by reading the perldocs that come with Perl, specifically perlvar, or online at http://perldoc.perl.org/perlvar.html
Genus: Highly flattened; pellicle firm; body form constant (Kudo, 1966). Chloroplasts small, discoid, pyrenoids usually absent; most species flat and leaf-shaped; often with ridges or fins running helically or longitudinally (Illustrated Guide, 1985). Species: 80-100 μm; similar to P. longicauda; twisted 40-60 degrees (How to Know the Protozoa, 1979). Var. tortuosa: twisted about 90 degrees.
Super Moon

Zhongguang Wang, Beijing, May 7 (China National Radio, "World Finance" report). Just yesterday evening, this year's largest and brightest full moon appeared in the night sky. Did you happen to raise your eyes to it? Experts say the Moon was about 357,000 kilometers from Earth that night, roughly 25,000 km closer than the average Earth-Moon distance, so it looked about 14% larger and about 30% brighter than usual. However, at the moment this year's "super moon" was exactly full it was daytime in China and could not be observed; in the United States it was night, and the full moon was visible. In other words, viewers in China saw a slightly less round moon that night than viewers in the United States.

The term "super moon" has no formal meaning in astronomy. When the Moon reaches full phase at about the time it passes perigee, a "super moon" is born. The perigee moon also raised the Qiantang River tide: at 12:15 pm yesterday, the tide-watching resort of Yanguan saw a bore 1.75 meters high. Folklore holds that the largest and roundest moons bring earthquakes and other natural disasters, and rumors persist; even the sinking of the Titanic has been blamed on the Moon. That suggestion comes from astronomers at the San Marcos campus of Texas State University. On January 4, 1912, a few months before the Titanic sank, there was an unusual full moon: the Earth and Moon were at their closest since AD 796. On the same day, the Earth happened to be near perihelion, its closest approach to the Sun for the year, so the solar gravitational tide was also near its strongest for the year. The resulting violent spring tide, the argument goes, could have broken icebergs loose from Greenland's fjord glaciers and pushed the debris south on a grand-scale drift, eventually meeting the Titanic in mid-ocean and sending it to the seabed. Without the "super moon", an iceberg departing Greenland on the journey south would usually run aground several times, getting loose again only after partly melting, so catching up with the Titanic would have been a very low-probability event.

The above story is just a guess, however, and there is no evidence that a super moon has any negative impact on the Earth. Kou Wen, an astronomy expert at the Beijing Planetarium, said that the perigee of the Moon has little extra effect on the tides.
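The size and brightness percentages quoted above follow directly from the distances involved: apparent diameter scales as 1/distance, and apparent brightness roughly as 1/distance squared. A quick check in Python, where the 406,700 km apogee figure is an assumed typical value for comparison:

    # Perigee vs. apogee full moon: angular size ~ 1/d, brightness ~ 1/d^2.
    perigee_km = 357_000           # distance quoted in the article
    apogee_km = 406_700            # assumed typical apogee distance

    size_ratio = apogee_km / perigee_km
    brightness_ratio = size_ratio ** 2

    print(f"{(size_ratio - 1) * 100:.0f}% larger")          # ~14% larger
    print(f"{(brightness_ratio - 1) * 100:.0f}% brighter")  # ~30% brighter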
As habitats shift, connections are key for climate-sensitive species.
As the seasons come and go, plant life and carbon change on Earth.
High temperatures in 2010 stressed corals around the world.
See how human activity is changing fire patterns around the world.
How have humans altered Earth's land surface? See it via satellite.
Follow a satellite's-eye view of the habitats of the Congo River.
Its metropolitan area has grown more than 300 percent in recent decades.
How satellite data helps scientists map changes in one of Earth's great wildernesses.
Watch roads and parking lots multiply around the threatened Chesapeake Bay.
Individual photons can be used to quantum mechanically couple waveguides together as a means to store information and perform calculations. The technique could potentially lead to the production of quantum computers that will be able to solve problems that are far beyond the capability of any conventional computer. This image shows how light injected into a central waveguide couples together a collection of neighboring waveguides. This research was performed by Amit Rai, G. S. Agarwal, and J. H. H. Perk, Department of Physics, Oklahoma State University, and was published in Physical Review A (Atomic, Molecular, and Optical Physics), October 8, 2008. "Transport and quantum walk of nonclassical light in coupled waveguides," Phys. Rev. A 78, 042304 (2008)
Bioluminescence can be expected at any region or depth in the sea; it occurs mainly at sea and is the only source of light in most of the habitable volume of the ocean. However, with only a few exceptions, it is not found in freshwater. The evolution of bioluminescence has occurred many times, as is shown by the number of chemical mechanisms by which light is emitted and the various, distantly related organisms that are bioluminescent. (4) One of the unique features of bioluminescence is that, unlike other forms of light, it is cold light. Unlike any artificial light source, a star, or even the glow of many heated materials, bioluminescent light is produced with very little heat radiation. (1) Because bioluminescence is so common in the marine environment, measurements of bioluminescence can facilitate location of certain animals, "how they associate with each other and how their distribution patterns are affected by such variables as light, temperature, salinity and pollution." (6) This page is being created for Biology 312: Animal Physiology at Davidson College in North Carolina.
This was a concept from the 1950s for a Minimum Orbital Unmanned Satellite Earth (MOUSE) vehicle proposed by Professor S. F. Singer of Maryland University. The work "Minimum Satellite Vehicles" was originally presented in 1951 at the Second International Conference on Astronautics in London by three members of the British Interplanetary Society: Kenneth Gatland, Alan Dixon and Anthony Kunesh. They originally looked at putting a small 5 kg payload into orbit; later, larger vehicles were proposed. The MOUSE would have been a 100 lb (approximately 50 kg) satellite suitable for studying solar radiation, cosmic rays and weather as it was launched into the upper atmosphere. The final stage weight would have been 16,000 kg, with a thrust of approximately 30,000 kg, not much more than the thrust of the V2 rocket. It was for a close-orbit artificial satellite of the 'minimum' type. Four examples of three-step liquid/hydrazine rockets were considered: (a) without payload, for checking the orbital path and drag studies; (b) with 220 lb payload, research instruments and telemetry transmitter; (c) with 385 lb payload, including additional control equipment; (d) with the same payload as (c) but using expendable-tank construction. The MOUSE project had fairly modest objectives: the establishment of a rocket with a small payload of instruments in a temporary orbit at a distance of 200 miles. At this altitude the atmosphere, though highly tenuous, would still be sufficient to exert an influence on the rocket and would eventually cause it to descend. It was estimated that MOUSE would make over 200 orbits over a period of 12 days, during which time it would have transmitted to Earth more information on conditions at various latitudes at the frontier of space than all the high-altitude research rockets that had been fired to date. It laid the groundwork for much of the subsequent developments in the British Skylark sounding rocket, which was developed as part of the UK contribution to the 1957 International Geophysical Year. The 1951 paper was recently republished in an issue of Space Chronicles: K. W. Gatland, A. M. Kunesch, A. E. Dixon, Minimum Satellite Vehicles, Space Chronicles, JBIS, 56, 1, pp. 38-43, 2003. This paper is available to purchase by contacting the BIS.
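For a sense of scale, the "over 200 orbits in 12 days" estimate is consistent with Kepler's third law for a circular orbit at 200 miles altitude. A quick sketch using standard values for Earth's radius and gravitational parameter:

    import math

    GM_EARTH = 3.986e14            # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6              # mean Earth radius, m
    altitude_m = 200 * 1609.34     # 200 miles in metres

    # Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / GM)
    a = R_EARTH + altitude_m
    period_min = 2 * math.pi * math.sqrt(a**3 / GM_EARTH) / 60
    print(f"orbital period ~ {period_min:.0f} min")         # ~91 min

    # ~91-minute orbits give roughly 190 revolutions in 12 days,
    # in the same ballpark as the designers' "over 200 orbits".
    orbits_in_12_days = 12 * 24 * 60 / period_min
    print(f"orbits in 12 days ~ {orbits_in_12_days:.0f}")   # ~190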
int vsscanf ( const char * s, const char * format, va_list arg );

Read formatted data from string into variable argument list

Reads data from s and stores them according to parameter format into the locations pointed by the elements in the variable argument list identified by arg.

Internally, the function retrieves arguments from the list identified by arg as if va_arg was used on it, and thus the state of arg is likely to be altered by the call. In any case, arg should have been initialized by va_start at some point before the call, and it is expected to be released by va_end at some point after the call.

- s: C string that the function processes as its source to retrieve the data.
- format: C string that contains a format string that follows the same specifications as format in scanf (see scanf for details).
- arg: A value identifying a variable arguments list initialized with va_start. va_list is a special type defined in <cstdarg>.

On success, the function returns the number of items in the argument list successfully filled. This count can match the expected number of items or be less (even zero) in the case of a matching failure. In the case of an input failure before any data could be successfully interpreted, EOF is returned.

    /* vsscanf example */
    #include <stdio.h>
    #include <stdarg.h>

    void GetMatches (const char * str, const char * format, ...)
    {
      va_list args;
      va_start (args, format);
      vsscanf (str, format, args);
      va_end (args);
    }

    int main ()
    {
      int val;
      char buf[100];

      GetMatches ("99 bottles of beer on the wall", " %d %s ", &val, buf);

      printf ("Product: %s\nQuantity: %d\n", buf, val);
      return 0;
    }

    Output:
    Product: bottles
    Quantity: 99

See also:
- vscanf: Read formatted data into variable argument list (function)
- vfscanf: Read formatted data from stream into variable argument list (function)
- sscanf: Read formatted data from string (function)
- scanf: Read formatted data from stdin (function)
- vsprintf: Write formatted data from variable argument list to string (function)
Permanent Monitoring Panel on Desertification - Report Given to Seminar on Planetary Emergencies

Problem: The issue of desertification has been debated for a generation. There is little disagreement that there has been an environmental decline in much of the world's drylands, particularly in Africa. However, there has been contentious debate about many aspects of the problem. Although global conferences have addressed desertification specifically, or as part of a broader set of global concerns that resulted in an international convention, little substantive action has taken place. For the past decade, the desertification "debate" has remained largely a series of academic skirmishes. The reasons for the marginalization of the desertification topic have been economic. Most development in dryland areas has intentionally focused on irrigation, where very high returns on investment potentially could be achieved. As investment opportunities, rainfed agriculture or livestock grazing are not competitive, and they have received only sporadic attention, usually in the aftermath of disaster. Thus, until quite recently, development efforts have touched only a small fraction of the total dryland area of the globe.

New Opportunities: The prospect of global warming through the accumulation of "greenhouse" gases in the atmosphere will likely exacerbate desertification and the degradation of arid lands. Decreasing the atmospheric accumulation of greenhouse gases has dominated recent environmental debate, most notably CO2 emitted through the combustion of fossil fuels and land use changes (i.e., deforestation; conversion of grasslands to crops). As a result, world attention has focused on limiting CO2 emissions and ultimately reducing total atmospheric CO2. This concern culminated in the Kyoto Protocol to the Framework Convention on Climate Change (1997), which establishes limits on total net CO2 emissions by industrialized countries (i.e., "Annex 1" countries). By establishing these limits, CO2 may be emitted so long as it is offset, or sequestered, through some other process. The imposition of limits also helps to establish a basis by which carbon might be traded as a commodity. The most obvious way to sequester carbon is to increase standing above-ground biomass (e.g., trees). However, global stocks of CO2 in the soil are two times larger than those in plant biomass but have become depleted through a variety of management practices (e.g., conversions to agriculture and urbanization; overgrazing). Thus, storing carbon in the soil (as living root biomass, soil flora and fauna, and accumulated soil organic matter) offers more substantial prospects for sustained sequestration. Under "Kyoto," those who emit CO2 (e.g., coal-fired electric power plants) would compensate farmers to adopt alternative tillage practices, or land managers to plant and/or maintain natural vegetation, that would sequester carbon in an amount that would offset their emissions, using some mechanism of trade or bilateral development. This could substantially help reduce atmospheric CO2, and would also help to distribute more equitably the costs and benefits of a growing world economy between developing and developed countries. In addition to economic benefits, sequestration of carbon in soils in the form of organic matter also would have direct environmental benefits by restoring lost soil productivity, conserving soil and water resources, and preserving biological diversity.
Finally, soil carbon sequestration will allow developing countries to become active and meaningful participants in the global struggle to address climate change. Among all types of land, degraded (desertified) drylands offer considerable opportunity for carbon sequestration: (1) they are extensive; (2) they offer low opportunity costs; and (3) they are occupied by the most economically and politically disadvantaged populations on Earth. For combating desertification, carbon sequestration may offer a missing economic engine that would allow farmers and herders to benefit from the global economy, enhance their livelihood, and improve their local environment. It thus offers a unique opportunity to address directly two international conventions: the Framework Convention on Climate Change and the Convention to Combat Desertification. It also contributes to a third, the Convention on Biological Diversity, by enhancing local habitat and biological productivity and thus reducing pressure on adjacent endangered habitats.

Issues: Two international workshops within the past year have helped focus attention on soil carbon sequestration as a vehicle for development. In these, the potential of soil carbon sequestration was specifically identified as a potential tool for combating desertification and enhancing agricultural sustainability. Discussions during these workshops revealed that the potential of this tool is limited by at least three issues.

Limits of the convention. The Kyoto Protocol focuses on forest ecosystems as the primary vehicle for terrestrial carbon sequestration and does not explicitly recognize carbon that might be stored in soils in other ecosystems.

Limits of awareness. There is a general awareness and understanding of the objectives of the Framework Convention on Climate Change and the Convention to Combat Desertification. However, the potential synergism of soil carbon sequestration as a mechanism for addressing them both simultaneously is largely unrecognized, particularly among those countries that might benefit most (i.e., the countries of the arid and semiarid zone).

Limits of experience. The mechanisms by which these conventions might be made to work are not yet well defined. At the national or regional level, there is the challenge of shaping existing governmental institutions to respond to the new demands of implementing projects to satisfy the conventions. At the local level, there are further challenges of implementation.

2. PROPOSED ACTIVITIES

The Desertification PMP proposes to initiate a multi-pronged initiative to explore more fully, and we hope demonstrate, the degree to which three planetary emergencies (desertification, climate change, biological diversity) can be addressed synergistically.

2.1 Erice Declaration on Carbon Sequestration in Soils

In response to issue 1 above, the first activity involves a campaign to recognize soil carbon within the Kyoto Protocol. This will begin with a statement prepared by the World Federation of Scientists. Subsequently, it will be pursued by explaining to a broader audience (ultimately policy makers) the significance of this oversight through routine publications and briefings (where possible), and pursuit of the following two activities.

2.2 International Workshop on Soil Carbon Sequestration for Desertification Control

In response to issue 2, we propose to hold a workshop in Erice in early March, 2000. The four-day workshop will have four parts: soil carbon sequestration (day 1),
national action programs (day 2), working group discussions (day 3), and reports (day 4).

In response to issue 3, the workshop is intended to serve as a sound base upon which substantive programs can be built within Africa and other parts of the semiarid zone. At present, projects are anticipated in at least one region of Africa (sponsored by the United States), and one country in Africa (sponsored by Sweden).

2.3 World Laboratory Fellow in Desertification Control

As noted above, the U.S. Geological Survey EROS Data Center has an active program in soil carbon sequestration in development. To build capacity in Africa and possibly lay the foundation for a project in Africa (see 2.2), a staff member of the Centre de Suivi Ecologique (CSE, Senegal) will be detailed to the USGS EROS Data Center. The purpose will be to provide training in: (1) remote sensing; (2) GIS data base development; and (3) carbon sequestration for desertification control.

2.4 Desertification in the Mediterranean Basin

Desertification also affects the Mediterranean Basin. The Italian government has recently released a national report on desertification. It claims that upwards of 40 percent of the country exhibits problems that might be attributed to desertification.

2.4.1 Desertification Demonstration Project

The data sets that might be employed include systematic observations (e.g., weather records), historical ground and aerial photography, historical satellite images, archival records (e.g., agricultural production; crop or forest surveys), historical narratives (e.g., newspaper reports; personal journals; published descriptions of specific sites), and personal interviews (e.g., land managers, residents, government officials). Monitoring would be based on the baseline assessment, with the intent of identifying where changes from those "initial" conditions occur and determining their causes. The effort would be based on an analysis of the most current data within the sets described above. However, satellite data (and aerial photography where available) would be the primary monitoring tool, supplemented by systematic observations, archival records, and interviews. Control/intervention would focus on those areas in which land management practices had been successful in retaining or restoring productive capacity. The purpose would be to develop an understanding of: (1) the physical and biological processes involved, as well as (2) the physical, biological, economic, and policy preconditions that allowed the management practice to succeed. This would permit the identification of suitable interventions and the areas that would be most favorable in which to implement them.

2.4.2 Capacity Building for Wildfire Potential Monitoring

Notes:

1. "Land degradation in arid, semi-arid and dry sub-humid areas resulting from various factors including climatic variations and human activities" (United Nations Environment Program, 1992). Here we largely exclude irrigated areas and focus on those drylands characterized by land uses such as livestock grazing and marginal rainfed agriculture.

4. "Carbon Sequestration in Soils: Science, Monitoring and Beyond," organized by Battelle Pacific Northwest National Laboratory, held in St. Michaels, Maryland in December, 1998.

5. Lal, R., H.M. Hassan, and J. Dumanski. 1999. "Desertification control to sequester C and mitigate the greenhouse effect," in Carbon Sequestration in Soils: Science, Monitoring, and Beyond, N.J. Rosenberg, R.C. Izaurralde, and E.L. Malone, eds. Battelle Press, Columbus and Richland, pp. 83-152.
Motor proteins are molecular motors, which efficiently convert chemical energy stored in ATP into mechanical work, demonstrating the feasibility and applications of nanoscale engines. We are designing hybrid, biomimetic nanodevices and materials based on the motor protein kinesin by merging biotechnology with micro- and nanofabrication. In previous years, we presented prototypes of molecular shuttles (a nanoscale transport system) [1-3] and a piconewton forcemeter [4], and introduced a novel surface imaging method based on self-propelled probes [5]. Now we can report on measurements of the specifications of our devices, for example the clearance of molecular shuttles, which require non-destructive distance measurements with nanometer accuracy. These measurements aid the design process, and establish the performance limits of hybrid devices. We will also present our recent progress in the design of motor protein-based nanodevices (e.g. as described in [6]), which demonstrate the promise of molecular-scale motors for nanotechnology.

(1) Hess, H.; Clemmens, J.; Qin, D.; Howard, J.; Vogel, V. Nano Letters 2001, 1, 235.
(2) Hess, H.; Vogel, V. Reviews in Molecular Biotechnology 2001, 82, 67.
(3) Hess, H.; Clemmens, J.; Matzke, C. M.; Bachand, G. D.; Bunker, B. C.; Vogel, V. Appl. Phys. A 2002, 75, 309.
(4) Hess, H.; Howard, J.; Vogel, V. Nano Letters 2002, 2, 1113.
(5) Hess, H.; Clemmens, J.; Howard, J.; Vogel, V. Nano Letters 2002, 2, 113.
(6) Clemmens, J.; Hess, H.; Howard, J.; Vogel, V. Langmuir 2003, 19, 1738.
This page collects examples of concurrent and parallel programming in Haskell.
- Riemann's Zeta function approximation
- Signal that you want to gracefully exit another thread
- Passing messages across a single chan to two readers
- Chat server - using a single channel for a variable number of readers
- Passing IO events lazily from a producer to a consumer thread

2 More examples

A large range of small demonstration programs for using concurrent and parallel Haskell are in the Haskell concurrency regression tests. In particular, they show the use of the basic concurrency primitives.

3 Proposed updates

The base package's Control.Concurrent.QSem and QSemN are not exception safe. The SafeConcurrent page has the proposed replacement code.
Measuring instruments used for current observations and data reporting

Land-based (in situ) observations are collected from instruments sited at locations on every continent. They include temperature, dew point, relative humidity, precipitation, wind speed and direction, visibility, atmospheric pressure, and types of weather occurrences such as hail, fog, and thunder. NOAA's National Climatic Data Center (NCDC) provides a broad level of service associated with in situ observations. These include data collection, quality control, archive, and removal of biases associated with factors such as urbanization and changes in instrumentation through time. Data on sub-hourly, hourly, daily, monthly, annual, and multi-year timescales are available. Access NCDC's land-based datasets directly.

Find a Station
Locate a station either by a map tool or a location and data search tool. Find details such as begin/end dates for a station and when there was a change in equipment or siting.

Climate Data Online (CDO)
The web access application for NCDC's archive of weather and climate data.

Publications
Monthly publications for a variety of datasets along with serial publications and other documents.
Wed Dec 19 19:59:17 GMT 2012 by Eric Kvaalen

Is the system the researchers built really radar, or does it use shorter wavelengths? A photon of radio-wave radiation has very little energy, so a huge number of photons are emitted by a radar system.

Slightly off topic, but how can a DOI as short as http://doi.org/jz5 specify an arbitrary article like the one referred to here? That doesn't even follow the normal syntax!

Sat Dec 22 07:24:59 GMT 2012 by Eric Kvaalen

I found a partial answer to my second question at http://shortdoi.org/. Apparently anyone can create a "shortDOI" simply by asking for one at that site. I tried with a DOI and it said that there already existed a shortDOI, which consisted of six lower-case letters. So how is it that the shortDOI for this article is only two letters and a digit?
Sep 6, 2012, 10:51 AM (#1)

Properties of reflected light

This may sound stupid, but why, when I shine a laser pointer at a mirror and reflect it back and forth onto another mirror, do the points of light make a parabola when the mirrors are tilted in such a way as to maximize the number of reflections happening? Tilting the mirror further or closer then causes the parabola to stretch or compress exponentially. Why is this?

Sep 6, 2012, 11:32 AM (#2)

I don't follow what you're actually observing when you say "points of light make a parabola". Is this its path through the air, like what you would see with smoke or dust in the air?
This article was written for a conference on Mathematics and War, Karlskrona, Sweden, August 2002, and appeared as a chapter in the published conference volume Mathematics and War in 2003.

Abstract: Alan Turing (1912-1954), British mathematician, was critical in the Anglo-American decipherment of German communications in the Second World War. This experience enabled him to formulate an original plan for the digital computer in 1945, based on his own 1936 concept of the universal machine. He went on to found the program of Artificial Intelligence research. This article discusses the relationship between these developments, and more general questions of mathematics and war illustrated by Alan Turing's life and work.

The British mathematician Alan M. Turing (1912-1954) played a critical role in the Second World War, as the chief scientific figure in the Anglo-American decipherment of German military communications. Furthermore, his work was central to the emergence of the digital computer in its full modern sense in 1945. However, the secrecy surrounding his work was so intense that until the 1970s only hints of it were published. This secrecy was enhanced by the mystery of his sudden death in 1954 and the effective taboo, which prevailed for twenty years, on any public mention of his homosexuality. To those interested in the true history of the computer, Alan Turing's role remained as elusive as the myth of Atlantis. This secrecy has now almost completely been dispelled, partly through this author's work (Hodges 1983), but only to reveal a much deeper enigma of Alan Turing, who gave himself first to the purest and most timeless mathematics, but then applied himself to its most urgent and timely practice. What did Alan Turing think of his intellectual and moral involvement in the world crisis? And what is the true assessment of the impact of the war on his scientific work? This article will review Alan Turing's mathematical work in the Second World War, discuss how this relates to the history and philosophy of computing, and then raise the wider question of his place in mathematics and war.

Turing's role in the Second World War was largely dominated by the particular form of the Enigma ciphering machine as elaborated for military and naval purposes by the German authorities. For a recent complete description of the Enigma see (Bauer 2000). Essentially, it was Turing who picked up the relay baton when the Polish mathematicians shared their brilliant cryptanalytic work with Britain and France. It then fell to him to pass on the baton, by sharing British achievements with the United States. Alan Turing's primary role stemmed partly from the fact that he was the first scientific figure to join the British cryptanalytic department, the so-called 'Government Code and Cypher School,' which until 1938 was staffed essentially by the language-based analysts of the First World War. (One reason for Turing's recruitment may have been that he had, through his Fellowship of King's College, Cambridge, personal connections with that older British generation; in particular J. M. Keynes may well have formed an important link.) Turing was brought into the work on a part-time basis at the Munich crisis period, and joined full-time immediately on declaration of war.
Meanwhile an Oxford mathematics graduate, Peter Twinn, was recruited through open advertisement in 1938, and this belated acceptance of the modern world was shown also in the development of a modern communications infrastructure for the new headquarters at Bletchley Park, Buckinghamshire. Nevertheless, the Polish mathematicians were well in advance at the time of the now famous meeting in July 1939. They had used group-theoretic algebra to deduce the Enigma rotor wirings from information obtained by spying; they had noticed and used other group-theoretic methods and mechanical methods to exploit certain simple forms of indicator system that were then in use by the German forces. It is not clear to what extent Turing had discovered these independently in early 1939; his report (Turing 1940) does not say. But in any case the Polish group had successfully made an all-important guess which had eluded the British: this was the order in which the keyboard letters were connected to the first rotor. They were in fact in the simple order ABCD.... This almost absurdly simple fact was the most critical piece of information imparted by the Poles. In late 1939, Turing initiated the two most decisive new developments: he saw the 'simultaneous scanning' principle of what became the British 'Bombe', and he deduced the form of the more sophisticated indicator system that was being used for the German Naval communications. Turing's 'Bombe' was an electromechanical machine of great logical and technical sophistication. Its property was this: given a stretch of ciphertext and the corresponding plaintext, it could search through all possible settings of the military Enigma and detect those which could possibly have been responsible for the encipherment. It is not difficult to see, from simple counting arguments, that a 'crib' of about 20 letters will generally serve to identify such a setting. (The reader may take it that, once some penetration into cipher traffic has been made, such a 'crib' is not impossible to find.) It is much harder to see that this theoretical possibility can be matched by any practical method. In particular, the Stecker or plugboard complication introduced in the military Enigma had so many possible settings that serial trial was impossible. In fact serial trial was indeed necessary for searching through the possible positions of the rotors. But Turing's great discovery was that the huge number of plugboard possibilities could effectively be tested in parallel, and virtually instantaneously. His idea was this: suppose we are testing, in the serial sequence, a particular rotor setting. A plugboard setting consists of a number of pairs like (AJ), (UY), representing the swapping of letters performed by the plugboard on entry to, and on exit from, the rotors. There are 150,738,274,937,250 possible settings consisting of ten such pairs, the choice normally made. However, there is no need to work through such a number of possibilities. Instead, consider the smaller number of 325 basic pair-possibilities: (AA), (AB), (AC) ... (ZZ), where '(AA)' represents the letter A being left unchanged by the plugboard. Now, given any one basic pair-possibility, e.g. (AA), knowledge of the plaintext, ciphertext, and rotor setting will imply various other pair-possibilities, and these in turn yet more. Turing first saw that finding a single 'contradiction' serves to eliminate a possibility: that is, if by following the implications (AA) can be shown to imply (AE), then (AA) must be false.
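That startling settings count can be verified directly: choose which 20 of the 26 letters are plugged, then partition those 20 letters into 10 unordered swap-pairs. A quick check in Python (a modern aside, not part of the original article):

    from math import comb, factorial

    # Plugboard wirings with `pairs` cables on a `letters`-letter alphabet:
    # pick the steckered letters, then pair them off (order within and
    # between pairs does not matter).
    def plugboard_settings(pairs=10, letters=26):
        chosen = comb(letters, 2 * pairs)
        pairings = factorial(2 * pairs) // (factorial(pairs) * 2 ** pairs)
        return chosen * pairings

    print(plugboard_settings())   # 150738274937250, the figure quoted above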
He saw the less obvious point that by allowing the logical deductions to continue, (AA) would generally imply all of (AB) ... (AZ); hence all must be false; hence the rotor position being tested could not possibly be correct. Turing was proud of this counter-intuitive idea, of continuing to follow through the consequences of what must be false propositions. He said it was akin to the principle in mathematical logic that 'a false proposition implies any proposition.' Turing also saw how to embody these 'deductions' simply in wired connections between rotors and terminals, so that the flow of implications would take place at the speed of electric current. (But it took another Cambridge mathematician, W. G. Welchman, to improve the circuitry with the 'diagonal board' which automatically identified (AB) with (BA), and so on; it is a curious fact that Turing missed this simple idea.) Finally, it was essential that the machine could be equipped to detect the possibility of a correct rotor position, with logical switching capable of recognising an incomplete bank of pair-possibilities. With this achieved by the engineers of the British Tabulating Machinery company, Turing's Bombe yielded the central process on which Enigma decryption rested throughout the War. Its principle was highly non-trivial, and it apparently went unnoticed by G. Hasenjäger, the young German logician who was Turing's unseen opponent in this war of logic (Bauer, 2000). As already noted above, in the business of searching through the 26^3 = 17,576 possible rotor positions, no improvement was possible on serial trial using the equivalent of moving Enigma rotors, so that improvements rested on having ever faster, more reliable machines produced in larger numbers. For the question of the choice of rotors and their order, however, which was particularly relevant for the Naval Enigma problem, Turing developed a method called 'Banburismus' which, particularly in 1941, much improved upon the simple trial of the possibilities. This method rested on the logical details of the turnover positions of the different rotors, but also on assessing the statistical identification of likely 'depths': two different stretches of message, both sent on the same Enigma settings. For detecting depths objectively, giving a reliable probability measure to them, Turing developed a theory of Bayesian inference. The most striking feature of his theory was his measurement of the weight of evidence by the logarithm of conditional probabilities. This was essentially the same as Shannon's measure of information, developed at the same time. This theory was developed into sophisticated methodology (Good 1979, 2000). Whilst the logical principle of the Bombe, and its amazingly effective application, was perhaps Turing's most brilliant single idea, his statistical techniques were more general and far-reaching. In particular, they were also applied to the quite different Lorenz cipher system employed by Germany for high-level strategic messages. Thus it was Turing's theory of probability estimation that underlay the methods mechanised by the large electronic 'Colossus' built in 1943-45 to decipher this type of traffic. The latter half of the war saw the relay baton pass on to the USA. Turing himself had to cross the Atlantic, at the height of the battle, in connection with the naval Enigma crisis of 1942 and the building of American Bombes at Dayton, Ohio.
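Turing's scoring rule is easy to state in modern terms: the weight of evidence for a hypothesis is the logarithm of a likelihood ratio, so independent clues simply add. A sketch of the idea, with illustrative probabilities rather than the actual Banburismus tables:

    import math

    # Weight of evidence as a log likelihood ratio, in Turing's base-10
    # 'bans' (a tenth of a ban is a deciban). Numbers below are illustrative.
    def weight_of_evidence(p_if_depth, p_if_random):
        return math.log10(p_if_depth / p_if_random)

    # A repeated letter between two aligned messages is likelier if they
    # form a true 'depth' than if the alignment is random chance (1/26).
    per_repeat = weight_of_evidence(0.07, 1 / 26)
    print(f"{per_repeat:.2f} bans per repeated letter")   # ~0.26 bans

    # Because logarithms turn products into sums, independent repeats
    # simply add their scores:
    print(f"{8 * per_repeat:.2f} bans from eight repeats")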
Besides transferring British expertise in Enigma-breaking to the United States, he also had a new top-level role in inspecting and reporting on the American equipment for speech encipherment (to be used by Roosevelt and Churchill), and became fully acquainted with the use of information sampling theory as well as the electronic technology involved. It is fair to say that Turing most enjoyed a pioneering period of breaking into the unknown, and flourished best in such settings; he was not so happy at detailed follow-up or development. After 1943 he spent much of his time on a freshly self-imposed problem: the design of an advanced electronic speech scrambler of a much more compact form than the American equipment he had inspected (Hodges 1983, p. 273). Sometimes it is blithely asserted that what one mathematician can do, another can undo. Not so: the possibilities of cryptanalysis are highly contingent on details; and even if a system is breakable in the long term, short term considerations may be of the essence. The German military adaptation of the Enigma might have made it unbreakable; the British version of the Enigma, which has attracted far less attention, had more rotors and a non-reciprocal plugboard, and was apparently invulnerable to German attack. The successful continuation of the Polish work may very well have depended critically on Turing as an individual. It was not that Turing merely played the expert part expected of him. Rather, it was at a time when a distinctly pessimistic attitude prevailed, that Turing took on the Naval Enigma problem precisely because no-one else thought it tractable, and so he could have it to himself. This individualistic approach was also necessary in developing from scratch a suitable statistical theory. Thus, it can well be maintained that the Anglo-American command of the Atlantic battle, crucial in the central strategy of the Western war, was owed to Alan Turing's work. Turing did not create great new fundamental mathematics in this work, but he brought to bear the insights of a deep thinker. The Bletchley Park analysts compared their work with chess problems, and G. H. Hardy had famously called chess problems the hymn tunes of mathematics, as distinct from serious, interesting problems (Hardy 1940). Yet unusually these military hymns had some beauty. In a remarkable comment on an episode in mathematics and war, the contemporary chronicler of the Naval Enigma section ended (Mahon 1945) In finishing this account of Hut 8's activities I think that it should be said that while we broke German Naval Cyphers because it was our job to do so and because we believed it to be worthwhile, we also broke them because the problem was an interesting and amusing one. The work of Hut 8 combined to a remarkable extent a sense of urgency and importance with the pleasure of playing an intellectual game. It can, however, be argued that more important than the direct application of new mathematics, was the influence of his wartime experience in leading Turing from being a pure theorist of computation, to being the leading expositor of an actual electronic design for a modern computer. At the age of twenty-four, Turing had published what is now his most celebrated work, defining computability (Turing 1936-7). Its motivation lay in pure mathematical logic, clarifying the nature of an 'effective method' with the Turing machine concept. It did not set out to assist practical computation in any way. 
Yet it did produce the constructive and highly suggestive idea of the universal Turing machine, and in fact Alan Turing was never entirely 'pure' in his approach: bringing 'paper tape' into the foundations of mathematics was itself a striking breach of the conventional culture. As we shall see, he took an interest in the practical application of his ideas even in 1936. But Turing's harmonious collaboration with the engineers of the Bombe took him very much further towards practicality, indeed into the most advanced practical engineering of 1940. Then his acquaintance with the world-leading technology of the Colossus showed him the viability of an electronic digital machine capable of embodying the idea of the universal machine, in modern terms a computer with modifiable stored program. Turing was able to write a detailed proposal for such a computer, the ACE, in 1945-6, and was able to survey from a position of considerable strength all the possible forms of data storage available at that point (Turing 1946).
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2003 March 10

Explanation: Why do many galaxies appear as spirals? A striking example is M101, shown above, whose relatively close distance of about 22 million light years allows it to be studied in some detail. Recent evidence indicates that a close gravitational interaction with a neighboring galaxy created waves of high mass and condensed gas which continue to orbit the galaxy center. These waves compress existing gas and cause star formation. One result is that M101, also called the Pinwheel Galaxy, has several extremely bright star-forming regions (called HII regions) spread across its spiral arms. M101 is so large that its immense gravity distorts smaller nearby galaxies.

Authors & editors: Jerry Bonnell (USRA)
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA / GSFC & NASA SEU Edu. Forum & Michigan Tech. U.
The Geometry of the Universe

The most profound insight of General Relativity was the conclusion that the effect of gravitation could be reduced to a statement about the geometry of spacetime. In particular, Einstein showed that in General Relativity mass causes space to curve, and objects travelling in that curved space have their paths deflected, exactly as if a force had acted on them.

Curvature of Space in Two Dimensions

The idea of a curved surface is not an unfamiliar one, since we live on the surface of a sphere. More generally, mathematicians distinguish three qualitatively different classes of curvature, as illustrated in the following image. These are examples of surfaces that have two dimensions. For example, the left surface can be described by a coordinate system having two variables (x and y); likewise, the other two surfaces are each described by two independent coordinates. The flat surface at the left is said to have zero curvature, the spherical surface is said to have positive curvature, and the saddle-shaped surface is said to have negative curvature.

Curvature of 4-Dimensional Spacetime

The preceding is not too difficult to visualize, but General Relativity asserts that space itself (not just an object in space) can be curved, and furthermore, the space of General Relativity has 3 space-like dimensions and one time dimension, not just two as in our example above. This IS difficult to visualize! Nevertheless, it can be described mathematically by the same methods that mathematicians use to describe the 2-dimensional surfaces that we can visualize easily.

The Large-Scale Geometry of the Universe

Since space itself is curved, there are three general possibilities for the geometry of the Universe. Each of these possibilities is tied intimately to the amount of mass (and thus to the total strength of gravitation) in the Universe, and each implies a different past and future for the Universe. Which of these scenarios is correct is still unknown, because we have been unable to determine exactly how much mass is in the Universe.

If space has negative curvature, there is insufficient mass to cause the expansion of the Universe to stop. The Universe in that case has no bounds, and will expand forever. This is termed an open universe.

If space has no curvature (it is flat), there is exactly enough mass to cause the expansion to stop, but only after an infinite amount of time. Thus, the Universe has no bounds in that case and will also expand forever, but with the rate of expansion gradually approaching zero after an infinite amount of time. This is termed a flat universe or a Euclidean universe (because the usual geometry of non-curved surfaces that we learn in high school is called Euclidean geometry).

If space has positive curvature, there is more than enough mass to stop the present expansion of the Universe. The Universe in this case is not infinite, but it has no end (just as the area on the surface of a sphere is not infinite but there is no point on the sphere that could be called the "end"). The expansion will eventually stop and turn into a contraction. Thus, at some point in the future the galaxies will stop receding from each other and begin approaching each other as the Universe collapses on itself. This is called a closed universe.

Is the Universe Open, Flat, or Closed?

The Density Parameter of the Universe

    Method                         Density parameter
    BB nucleosynthesis             (0.013 +/- 0.005) h^-2
    Stars in galaxies
    Dynamics (r < 10 h^-1 Mpc)     ~0.05 - 0.2
    Dynamics (r > 30 h^-1 Mpc)     ~0.05 - 1

    Source: P. J. E. Peebles, Principles of Physical Cosmology.
Peebles, Principles of The geometry of the Universe is often expressed in terms of the density parameter, which is defined to the the ratio of the actual density of the Universe to the critical density that would just be required to cause the expansion to stop. Thus, if the Universe is flat (contains just the amount of mass to close it) the density parameter is exactly 1, if the Universe is open with negative curvature the density parameter lies between 0 and 1, and if the Universe is closed with positive curvature the density parameter is greater than 1. The density parameter determined from various methods is summarized in the adjacent table. In this table, BB nucleosynthesis refers to constraints coming from the synthesis of the light elements in the big bang, +/- denotes an experimental uncertainty in a quantity, and the parameter h lies in the range 0.5 to 0.85 and measures the uncertainty in the value of the Hubble parameter. Although most of these methods (which we will not discuss in detail) yield values of the density parameter far below the critical value of 1, we must remember that they have likely not detected all matter in the Universe yet. The current theoretical (because it is predicted by the theory of cosmic inflation) is that the Universe is flat, with exactly the amount of mass required to stop the expansion (the corresponding average critical density that would just stop the is called the closure density), but this is not yet confirmed. Therefore, the value of the density parameter and thus the ultimate fate of the Universe remains one of the major unsolved problems in modern cosmology.
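To make the density parameter concrete, here is a small Python sketch (my addition, not part of the original notes) that computes the critical density rho_c = 3 H0^2 / (8 pi G) and Omega = rho / rho_c; the Hubble-parameter value of 70 km/s/Mpc is an illustrative assumption, not a value the notes commit to.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22   # metres per megaparsec

def critical_density(H0_km_s_Mpc):
    """Critical density rho_c = 3 H0^2 / (8 pi G), in kg/m^3."""
    H0 = H0_km_s_Mpc * 1000.0 / MPC_IN_M   # convert to s^-1
    return 3.0 * H0**2 / (8.0 * math.pi * G)

def density_parameter(rho, H0_km_s_Mpc):
    """Omega = rho / rho_c; <1 open, =1 flat, >1 closed."""
    return rho / critical_density(H0_km_s_Mpc)

# Illustrative value only: H0 = 70 km/s/Mpc (h = 0.7).
rho_c = critical_density(70.0)
print(f"critical density ~ {rho_c:.2e} kg/m^3")   # ~9.2e-27 kg/m^3
print(density_parameter(0.3 * rho_c, 70.0))       # 0.3 -> open geometry
```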
<urn:uuid:caa8d21c-1f4a-4c08-8514-86052b5a8e54>
4.0625
1,152
Academic Writing
Science & Tech.
34.567289
Higher-order determinants are natural generalizations. The minor $M_{jk}$ of the entry $a_{jk}$ in the $n$th-order determinant $\det[a_{jk}]$ is the $(n-1)$th-order determinant derived from $\det[a_{jk}]$ by deleting the $j$th row and the $k$th column. The cofactor $A_{jk}$ of $a_{jk}$ is $A_{jk} = (-1)^{j+k} M_{jk}$. An $n$th-order determinant expanded by its $j$th row is given by
$$\det[a_{jk}] = \sum_{k=1}^{n} a_{jk} A_{jk}.$$
If two rows (or columns) of a determinant are interchanged, then the determinant changes sign. If two rows (columns) of a determinant are identical, then the determinant is zero. If all the elements of a row (column) of a determinant are multiplied by an arbitrary factor $c$, then the result is a determinant which is $c$ times the original. If $c$ times a row (column) of a determinant is added to another row (column), then the value of the determinant is unchanged. For real-valued $a_{jk}$,
$$\left(\det[a_{jk}]\right)^2 \le \prod_{j=1}^{n} \sum_{k=1}^{n} a_{jk}^2.$$
Compare also (1.3.7) for the left-hand side. Equality holds iff $\sum_{k=1}^{n} a_{jk} a_{\ell k} = 0$ for every distinct pair of $j, \ell$, or when one of the factors $\sum_{k=1}^{n} a_{jk}^2$ vanishes. An alternant is a determinant function of $n$ variables which changes sign when two of the variables are interchanged; an example is the Vandermonde determinant $\det[x_j^{k-1}] = \prod_{1 \le j < k \le n} (x_k - x_j)$, and a related evaluation involves $\omega_1, \dots, \omega_n$, the $n$th roots of unity (1.11.21). Let $a_{jk}$ be defined for all integer values of $j$ and $k$, and denote by $D_n$ the $(2n+1)$th-order determinant $\det[a_{jk}]_{j,k=-n}^{n}$. If $D_n$ tends to a limit $L$ as $n \to \infty$, then we say that the infinite determinant $\det[a_{jk}]_{j,k=-\infty}^{\infty}$ converges and equals $L$. Of importance for special functions are infinite determinants of Hill's type. These have the property that the double series $\sum_{j,k=-\infty}^{\infty} |a_{jk} - \delta_{jk}|$ converges (§1.9(vii)). Here $\delta_{jk}$ is the Kronecker delta. Hill-type determinants always converge.
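As a concrete companion to the row-expansion formula above, here is a short Python sketch (not part of the original reference text) that evaluates a determinant by cofactor expansion along the first row. It runs in O(n!) time, so it mirrors the definition rather than being practical numerics; real code would use LU factorization.

```python
def minor(matrix, row, col):
    """Delete the given row and column, returning the minor's matrix."""
    return [r[:col] + r[col+1:] for i, r in enumerate(matrix) if i != row]

def det(matrix):
    """Determinant via cofactor expansion along the first row."""
    n = len(matrix)
    if n == 1:
        return matrix[0][0]
    # det = sum_k a_{0k} * (-1)^k * det(minor M_{0k})
    return sum((-1) ** k * matrix[0][k] * det(minor(matrix, 0, k))
               for k in range(n))

print(det([[2, -1], [4, 8]]))                    # 20
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```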
<urn:uuid:b471eb4d-8e67-42f1-911b-809caff517a9>
2.984375
342
Knowledge Article
Science & Tech.
51.97489
Special & General Relativity Questions and Answers Is the speed of light a barrier of the same kind as the speed of sound? First, I do not know of any actual instances where a physicist declared that the speed of sound was some kind of 'wall' that could not be overcome. The speed of sound depends on air temperature, density and composition. The speed of light seems to be, truly, a maximum speed limit that cannot be overcome no matter how you accelerate matter. An enormous number of modern technologies can only operate given that the speed of light behaves as an 'asymptotic limit' to speed. The rate at which particles accelerate for a given applied energy depends critically on the speed of light being a HARD limit to speed. The behavior of matter is exactly consistent with light speed being a hard barrier, in literally millions of experiments performed every year. Had just ONE of those come out differently, you would see in physics a tremendous amount of excitement. Special relativity has shown not a trace of a flaw in over 70 years of experimentation. Any physicist who detected such a flaw would be as famous as Einstein himself, instantly, overnight! If any simple flaw could have been found, it would have turned up already. Return to the Special & General Relativity Questions and Answers page. All answers are provided by Dr. Sten Odenwald (Raytheon STX) for the NASA Astronomy Cafe, part of the NASA Education and Public Outreach program.
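The 'asymptotic limit' can be made quantitative: the relativistic kinetic energy (gamma - 1) m c^2 grows without bound as v approaches c, so no finite applied energy reaches light speed. A minimal Python sketch (my illustration, not Dr. Odenwald's):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy(mass_kg, v):
    """Relativistic kinetic energy (gamma - 1) m c^2, in joules."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

# Energy needed to push 1 kg to successive fractions of c: it diverges.
for frac in (0.5, 0.9, 0.99, 0.9999):
    print(f"v = {frac}c -> {kinetic_energy(1.0, frac * C):.3e} J")
```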
<urn:uuid:3433ff2e-0aad-4b21-920d-541c2daefaa5>
2.984375
302
Q&A Forum
Science & Tech.
46.2436
The word "nebula," in Latin, means "cloud." Early astronomers believed that nebulae were made up of many dim stars. They tried to separate what they could observe into different stars. This approach worked some of the time, but often they could not distinguish bewtween them. Late in the nineteenth century, the spectroscope (a device for analyzing light) was used to prove that nebulae were not star groups at all. Nebulae are actually large clouds of gas and dust. There are four types of nebulae: emission, reflective, dark, and planetary. Emission and reflection are sometimes labeled as diffuse nebulae. Emission nebulae are those that are very colorful. They are also known as "bright" nebulae. They lie near stars with a surface temperature of about 13,900 degress Celsius or greater. The immense amount of energy being radiated by these stars excites the atoms of the nebulae, which absorb the UV light. The atoms, in order to reach a lower energy state, must emit this energy in their own form of radiation. We see this as the many wonderful colors that appear. Most emission nebulae contain a lot of red because of the large amounts of hydrogen (the most abundant element) present. They also contain helium, carbon, nitrogen, oxygen, and sulfur. The Great Nebula, also known as M42, is an emission nebula. It is in the constellation Orion. When these clouds of particles are near a star with a surface temperature less than 13,900° C, the light from the star is reflected by the particles and there exists what is known as a reflective type of nebula. Most nebula are reflective. Another type, dark nebulae, occur when the particles of the nebula block out light behind it. This creates a dark patch against the sky, where there are apparently no stars to be seen. Two very famous dark nebulae are the Horsehead and the Coalsack, both named for their shapes. The term globules refers to smaller, denser, dark nebulae. The last type is the planetary nebulae, which appear to be planets but are really a result of a dying star. Also, many supernovae create nebulae. The Crab Nebula is one of these. Nebula are not very dense. The typical density is only a hundred or less atoms per cubic centimeter. However, these are visible because the atoms, in their excited state, have so much room to move and give off light. Also, many nebulae are several light years across. Atomic collisions are quite rare in nebulae. It is when stars begin to form from the gases that the atoms heat up tremendously and collide more often as their volume is compressed. A nebula exists throughout the Milky Way that is only about one atom per cubic centimeter. It is observed as a dim haze and obscures the center of the galaxy from our sight. A Gallery of Planetary Nebulae - images of nebulae Types of Nebulae - pretty, illustrative images of nebulae types Nebulae - explains different types, black and white images included.
<urn:uuid:cd606f6d-75e5-4de1-81c0-44ecc14d07a0>
4.0625
663
Knowledge Article
Science & Tech.
45.436513
It is known that the area of the largest equilateral triangular section of a cube is 140 sq cm. What is the side length of the cube? The distance between the centres of two adjacent faces of another cube is 8 cm. What is the side length of this cube? Another cube has an edge length of 12 cm. At each vertex a tetrahedron with three mutually perpendicular edges of length 4 cm is sliced away. What are the surface area and volume of the remaining solid? (A computational check of these three cube questions appears below.) This article outlines the underlying axioms of spherical geometry, giving a simple proof that the sum of the angles of a triangle on the surface of a unit sphere is equal to pi plus the area of the triangle. A spider is sitting in the middle of one of the smallest walls in a room and a fly is resting beside the window. What is the shortest distance the spider would have to crawl to catch the fly?
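A numerical check of the three cube questions (my sketch, not part of the original puzzle set). It assumes the largest equilateral cross-section of a cube of side a passes through three vertices (side a*sqrt(2), area (sqrt(3)/2)*a^2), that the centres of two adjacent faces are a/sqrt(2) apart, and that each sliced corner is a right tetrahedron of volume 4^3/6:

```python
import math

# Puzzle 1: assumed maximal equilateral section has area (sqrt(3)/2)*a^2.
area = 140.0
a1 = math.sqrt(2.0 * area / math.sqrt(3.0))
print(f"puzzle 1: cube side ~ {a1:.2f} cm")      # ~12.71 cm

# Puzzle 2: centres of adjacent faces, e.g. (a/2, a/2, 0) and (a/2, 0, a/2),
# are a/sqrt(2) apart, so a = 8*sqrt(2).
a2 = 8.0 * math.sqrt(2.0)
print(f"puzzle 2: cube side ~ {a2:.2f} cm")      # ~11.31 cm

# Puzzle 3: each corner cut removes a tetrahedron of volume cut^3/6,
# removes three right triangles (legs 4, 4) from the faces, and exposes
# one equilateral triangular face of side 4*sqrt(2).
edge, cut = 12.0, 4.0
volume = edge**3 - 8 * cut**3 / 6.0
surface = (6 * edge**2
           - 8 * 3 * (cut**2 / 2.0)
           + 8 * (math.sqrt(3.0) / 4.0) * (cut * math.sqrt(2.0))**2)
print(f"puzzle 3: volume ~ {volume:.1f} cm^3, surface ~ {surface:.1f} cm^2")
```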
<urn:uuid:e2ced8a0-e9e7-4287-a80d-b5ad59c03a79>
2.953125
192
Content Listing
Science & Tech.
63.856233
Le Monde tells us (in French) that researchers found 47-million-year-old fossils of nine mating pairs of turtles of the species Allaeochelys crassesculpta in a lake in Germany. These fossils, which, according to the researchers, are the only fossils of mating pairs of animals to be found, taught the researchers a lot about this extinct species. But what caught my eye is a probabilistic statement made by one of the researchers: "Des millions d'animaux vivent et meurent chaque année, et nombre d'entre eux se fossilisent par hasard, mais il n'y a vraiment aucune raison que ça arrive lorsque vous êtes en train de vous reproduire. Il est hautement improbable que les deux partenaires meurent en même temps, et les chances que les deux soient fossilisés à la fois sont encore plus maigres", a expliqué Walter Joyce, de l'université allemande de Tübingen. Avec plusieurs couples, les probabilités s'amoindrissent encore. [In English: "Millions of animals live and die every year, and many of them fossilize by chance, but there is really no reason for that to happen while you are mating. It is highly improbable that both partners die at the same time, and the chances that both are fossilized together are even slimmer," explained Walter Joyce, of the German University of Tübingen. With several pairs, the probabilities shrink even further.] Since the name Walter Joyce sounds English, I searched for a version in English. I found this: "No other vertebrates have ever been found like these, so these are truly exceptional fossils," Joyce said. "The chances of both partners dying while mating are extremely low, and the chances of both partners being preserved as fossils afterward even lower. These fossils show that the fossil record has the potential to document even the most unlikely event if the conditions are right." The differences between the English version and the French version are slight yet important. The French reader learns that it is improbable that a pair of animals will die together while mating, and that it is even less probable that they will be fossilized together. The chance that this will happen to several mating pairs is even smaller. This would be true if the death-while-mating-and-fossilization events of the different couples were independent. However, the article itself explains to the readers that here the events are dependent. During the mating process the turtles freeze, and sometimes sink to the bottom of the lake, where the water is poisonous. Thus, in this lake finding a fossil of a single turtle is less probable than finding the fossil of a mating pair of turtles, and the probability of finding several fossils of mating pairs is not necessarily lower than the probability of finding a single fossil of a mating pair… The English version does not correct this inaccuracy, but it does end with a note that can be interpreted as a correction of previous mistakes: "These fossils show that the fossil record has the potential to document even the most unlikely event if the conditions are right." Next week the semester starts, and I will get a new bunch of first-year undergrad students who are eager to study Introduction to Probability. I will ask them to read this article and spot the mistakes. I wonder how many will find what is wrong in this article.
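The dependence argument can be made concrete with a toy simulation (entirely my sketch, with invented probabilities): when mating itself causes pairs to sink into the preserving poisonous layer, pair fossils can be as common as, or commoner than, single fossils, even though multiplying two tiny independent probabilities would predict otherwise.

```python
import random

def simulate(n_individuals=1_000_000, p_mating=0.01,
             p_fossil_single=1e-5, p_sink_while_mating=1e-3):
    """Toy model with invented probabilities: mating pairs sometimes
    sink together into anoxic water and fossilize as a pair, so a pair
    fossil is ONE event, not two independent single-fossil events."""
    singles, pairs = 0, 0
    for _ in range(n_individuals):
        if random.random() < p_mating:
            # The pair freezes and may sink together: one event
            # preserves both partners at once.
            if random.random() < p_sink_while_mating:
                pairs += 1
        elif random.random() < p_fossil_single:
            singles += 1
    return singles, pairs

random.seed(1)
print(simulate())  # pair fossils end up the same order of magnitude as singles
```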
<urn:uuid:04287bc0-6af4-4fa7-b7ba-a098b081850e>
3.46875
665
Personal Blog
Science & Tech.
40.872126
Tropical rainforests, in general, receive between 50 and 250 inches of rainfall each year. Hawaii's Mount Waialeale is in a class by itself. It's the world's wettest rainforest, averaging 450 inches (that's more than 37 feet of water) each year. Most tropical rainforests lie along the equator in the "tropics"—between the Tropic of Capricorn and Tropic of Cancer—where sunlight strikes the Earth at roughly a 90-degree angle for a full 12 hours a day. This consistent and direct source of energy maximizes photosynthesis, resulting in an unusual abundance and variety of plant life. Because of heavy rains, the soil found in most tropical rainforests is shallow and very poor in nutrients. Nutrients are mainly found in a layer of decomposing leaf litter—also called a root mat—which is quickly broken down by various species of decomposers (insects, bacteria, and fungi). Shallow-rooted plants take up these nutrients before rainfall can wash them away. The practice of cutting or burning down trees to clear land for agricultural purposes is short-sighted, as the productive nutrients are held in the mass of a tree's roots rather than in the soil itself. Half of the Earth's biodiversity resides in our rainforests, and yet these biomes are among the most threatened on our planet. Scientists have documented that more than 50 million acres of rainforest—an area the size of England, Scotland, and Wales—are destroyed or seriously degraded each year. Consequently, fragile ecosystems are destroyed and the planet loses untold numbers of yet-to-be-catalogued species. To get an idea of what's at stake, Academy scientists, led by Dr. Brian Fisher, curator and chairman of the Academy's department of entomology, began traveling to Madagascar in 2002 to document the island's arthropods. (Arthropods are animals with jointed legs and exoskeletons such as ants, insects, spiders, butterflies, and crustaceans.) During that expedition, Fisher helped establish the Bibikely Biodiversity Center in Tsimbazaza National Park. There, scientists from both Madagascar's Malagasy Academy of Science and the California Academy of Sciences work together to catalog the country's rich biodiversity and, at the same time, to develop conservation priorities that will help ensure the survival of plants and animals found nowhere else on Earth.
<urn:uuid:404647f1-c54c-40bb-a58b-614b93102f8b>
4.375
521
Knowledge Article
Science & Tech.
39.312256
The vectors (2, -1) and (4, 8) are: 1) parallel, 2) perpendicular, 3) neither parallel nor perpendicular?
Write the complex number z = 2 + 5i in polar form, rounding to the nearest hundredth if needed.
Convert the complex number c = 3 cis 0.25 into rectangular form.
Write the complex number z = -6 in polar form.
The cost of 4 scarves and 6 hats is $52. The cost of two hats is $1 more than the cost of one scarf. What are the costs of one scarf and one hat? Please show me how to set this up and work it.
The energy that can be extracted from a storage battery is always less than the energy that goes into it while it is being charged. Why?
Worked fragment: 2/2.5 = 0.8; table value 0.7881; 1 - 0.7881 = 0.2119.
The lengths of pregnancies are normally distributed with a mean of 273 days and a standard deviation of 20 days. If 64 women are randomly selected, find the probability that they have a mean pregnancy between 273 days and 275 days.
Worked fragment: z = -2 gives 0.0228; 1 - 0.0228 = 0.9772.
Worked fragment: (54.5 - 50)/5 = 4.5/5 = 0.9; upper-tail probability 0.1841.
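For the pregnancy question above, the sampling-distribution arithmetic can be checked in a few lines of Python (my sketch, not part of the original thread; it uses only the standard library's erf). Note that the probability of the sample mean falling between 273 and 275 days is Phi(0.8) - Phi(0) ≈ 0.2881; the fragment "1 - 0.7881 = 0.2119" is the upper-tail probability P(Z > 0.8).

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma, n = 273.0, 20.0, 64
se = sigma / math.sqrt(n)      # standard error of the mean = 2.5
z_lo = (273.0 - mu) / se       # 0.0
z_hi = (275.0 - mu) / se       # 0.8
print(phi(z_hi) - phi(z_lo))   # ~0.2881
```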
<urn:uuid:4542bfa7-d0e7-459a-91fa-2a55cb475e3d>
2.875
289
Content Listing
Science & Tech.
102.441054
An international team including Lawrence Livermore National Laboratory scientists has definitively measured the spin rate of a supermassive black hole for the first time. The findings, made by the two X-ray space observatories, NASA's Nuclear Spectroscopic Telescope Array (NuSTAR) and the European Space Agency's XMM-Newton, solve a long-standing debate about similar measurements in other black holes and will lead to a better understanding of how black holes and galaxies evolve. An impromptu spacewalk over the weekend seems to have fixed a big ammonia leak at the... NASA's Hubble Space Telescope has found the building blocks for Earth-sized planets in... The planet-hunting Kepler telescope has discovered two planets that seem like ideal places for some sort of life to flourish. According to scientists working with the NASA telescope, they are just the right size and in just the right place near their star. The discoveries, published online Thursday, mark a milestone in the search for planets where life could exist. Human travel to Mars has long been the unachievable dangling carrot for space programs. Now, astronauts could be a step closer to our nearest planetary neighbor through a unique manipulation of nuclear fusion, the same energy that powers the sun and stars. University of Washington researchers and scientists at a Redmond-based space-propulsion company are building components of a fusion-powered rocket aimed to clear many of the hurdles that block deep space travel, including long times in transit, exorbitant costs, and health risks. It's the Martian version of spring break: Curiosity and Opportunity, along with their spacecraft friends circling overhead, will take it easy this month because of the sun's interference. For much of April, the sun blocks the line of sight between Earth and Mars. This celestial alignment—called a Mars solar conjunction—makes it difficult for engineers to send instructions or hear from the flotilla in orbit and on the surface. A laboratory experiment at NASA's Jet Propulsion Laboratory, Pasadena, Calif., simulating the atmosphere of Saturn's moon Titan suggests complex organic chemistry that could eventually lead to the building blocks of life extends lower in the atmosphere than previously thought. The results now point out another region on the moon that could brew up prebiotic materials. Scheduled for launch in late 2013, the Mars Atmosphere and Volatile Evolution (MAVEN) mission will carry a sensitive magnetic-field instrument built and tested by a team at NASA’s Goddard Space Flight Center. Very little magnetic field traces remain on Mars, which is forcing NASA to eliminate all magnetic traces from its spacecraft. The magnetometer may help determine the history of the loss of atmospheric gases to space through time, providing answers about Mars’ climate evolution. The SpaceX Dragon capsule returned to Earth on Tuesday with a full science load from the International Space Station—and a bunch of well-used children's Legos. The privately owned cargo ship splashed down in the Pacific right on target, 250 miles off the coast of Mexico's Baja Peninsula, five hours after leaving the orbiting lab. The California-based SpaceX confirmed the Dragon's safe arrival via Twitter. Rusted pieces of two Apollo-era rocket engines that helped boost astronauts to the moon have been fished out of the murky depths of the Atlantic by Amazon.com CEO Jeff Bezos. 
A privately funded expedition led by Bezos raised the main engine parts during three weeks at sea, about 360 miles from Cape Canaveral. The engine parts were resting nearly 3 miles deep in the Atlantic. After taking measurements of sudden, drastic changes in radiation levels, researchers have reported that NASA's Voyager 1 spacecraft, now more than 11 billion miles from the Sun, left the heliosphere dominated by the Sun and has passed outside our solar system. Anomalous cosmic rays, which are cosmic rays trapped in the outer heliosphere, all but vanished, dropping to less than 1% of previous amounts. NASA's twin GRAIL (Gravity Recovery and Interior Laboratory) spacecraft went out in a blaze of glory Dec. 17, 2012, when they were intentionally crashed into a mountain near the moon's north pole. GRAIL had company—NASA's Lunar Reconnaissance Orbiter (LRO) mapping satellite was orbiting the moon as well. With just three weeks' notice, the LRO team scrambled to get LRO in the right place at the right time to witness GRAIL's fiery finale. Drilling into a rock near its landing spot, the Curiosity rover has answered a key question about Mars: The red planet long ago harbored some of the ingredients needed for primitive life to thrive. Topping the list is evidence of water and basic elements that teeny organisms could feed on, scientists said Tuesday. The Mars rover Curiosity drilled into its first rock a month ago. Now scientists will reveal what's inside. Gathering at NASA headquarters Tuesday, the rover team will detail the minerals and chemicals found in a pinch of ground-up rock. The results come seven months after Curiosity made a dramatic landing in an ancient crater near the equator. NASA's Martian rover hunkered down Wednesday after the sun unleashed a blast that raced toward Mars. While Curiosity was designed to withstand punishing space weather, its handlers decided to power it down as a precaution since it suffered a recent computer problem. While the hardy rover slept, the Opportunity rover and two NASA spacecraft circling overhead carried on with normal activities. A private Earth-to-orbit delivery service made good on its latest shipment to the International Space Station on Sunday, overcoming mechanical difficulty and delivering a ton of supplies with high-flying finesse. The Dragon's arrival couldn't have been sweeter—and not because of the fresh fruit on board for the six-man station crew. Coming a full day late, the 250-mile-high linkup above Ukraine culminated a two-day chase that got off to a shaky, nearly mission-ending start. A commercial cargo ship rocketed toward the International Space Station on Friday under a billion-dollar contract with NASA that could lead to astronaut rides in just a few years. Launch controllers applauded and gave high-fives to one another once the spacecraft safely reached orbit. The rocket successfully separated from the white Dragon capsule, which contains more than a ton of food, tools, computer hardware, and science experiments. NASA's Fermi Gamma-ray Space Telescope orbits our planet every 95 minutes, building up increasingly deeper views of the universe with every circuit. Its wide-eyed Large Area Telescope (LAT) sweeps across the entire sky every three hours, capturing gamma rays from sources across the universe. A Fermi scientist has transformed LAT data of a famous pulsar into a mesmerizing movie that visually encapsulates the spacecraft's complex motion.
An ultraviolet spectrograph designed by Southwest Research Institute (SwRI) has been selected for flight on the European Space Agency's Jupiter Icy Moon Explorer (JUICE). NASA is funding development of the instrument, which will observe ultraviolet emissions from the Jovian system. Fresh off drilling into a rock for the first time, the Mars rover Curiosity is prepping for the next step—dissecting the pulverized rock to determine what it's made of. NASA said Wednesday it received confirmation that Curiosity successfully collected a tablespoon of powder from the drilling two weeks ago and was poised to transfer a pinch to its onboard laboratories. It's the first time a spacecraft has bored into a rock on Mars to retrieve a sample from the interior. For the first time, researchers are demonstrating ice crystal icing formation in a full-scale engine test facility at NASA's Glenn Research Center. The tests duplicate the natural event of cloud formation, ingestion by an aircraft engine of ice crystals created by the cloud, and the reduction of engine power that can result. This phenomenon is being studied to gain an understanding of the physics behind ice crystal formation in a turbine engine. Scientists have found more than 50 tiny fragments of a meteor that exploded over Russia's Ural Mountains with the power of dozens of atomic bombs. Most are less than a centimeter in diameter, but locals saw a big meteorite fall into the lake on Friday, leaving a 6-m-wide hole in the ice. A meteorite up to 50-60 cm across could eventually be recovered from the lake. In a Mars first, the Curiosity rover drilled into a rock and prepared to dump an aspirin-sized pinch of powder into its onboard laboratories for closer inspection. Using the drill at the end of its 7-foot-long robotic arm, Curiosity on Friday chipped away at a flat, veined rock bearing numerous signs of past water flow. The exercise was so complex that engineers spent several days commanding Curiosity to tap the rock outcrop, drill test holes and perform a "mini-drill" in anticipation of the real show. Young engineers who weren't even born when the last Saturn V rocket took off for the moon are testing a vintage engine from the Apollo program. The engine, known to NASA engineers as No. F-6049, was grounded because of a glitch during a test in Mississippi and later sent to the Smithsonian Institution. Now, NASA engineers are using it to get ideas on how to develop the next generation of rockets for future missions to the moon and beyond. New information coming from researchers analyzing spectrometer data from NASA's Mars Reconnaissance Orbiter (MRO), which looked down on the floor of McLaughlin Crater on the Red Planet's surface, suggests the formation of the carbonates and clay in a groundwater-fed lake within the closed basin of the crater. The depth of the crater may have helped allow the lake to form. NASA is partnering with a commercial space company in a bid to replace the cumbersome "metal cans" that now serve as astronauts' homes in space with inflatable bounce-house-like habitats that can be deployed on the cheap. A $17.8 million test project will send to the International Space Station an inflatable room that can be compressed into a 7-foot tube for delivery. Engineers working on NASA's James Webb Space Telescope have recently concluded performance testing on the observatory's aft-optics subsystem at Ball Aerospace & Technologies Corp's facilities in Boulder, Colo.
This is significant because it means all of the telescope's mirror systems are ready for integration and testing. NASA scientists and engineers are working now to lay the groundwork for the Aerosol-Cloud-Ecosystem (ACE) mission, which will change what we can learn about clouds and aerosols. To that end, the Polarimeter Definition Experiment (PODEX) in Southern California will soon commence, testing a new class of polarimeters that are especially suited for finding the type, shape, and size of particles in the upper atmosphere.
<urn:uuid:d50c3e26-9d24-456b-a231-595bdb6fc959>
2.75
2,184
Content Listing
Science & Tech.
37.927001
ScienceFilter: Creationists, crystals, and thermodynamics. A common red herring argument that I've encountered as advanced by Creationists is that by the Second Law of Thermodynamics evolution should not be possible. (According to Wikipedia that argument was originated by a biochemist named Duane Gish, incidentally.) My understanding is that among other things, one reason why this is an erroneous argument is that while it might apply to a closed system, the environment on Earth is constantly being pumped full of heat via sunlight and other solar radiation. But never mind that, that's just the context. I was thinking about it and it occurred to me that in terms of orderliness increasing, the formation of ice crystals in freezing-temperature water or quartz crystals in liquid hot magma that has cooled to the correct temperature both seem to represent an increase in orderliness of the matter in question. Something I've heard is that formation of crystals isn't strictly due to loss of heat. Supposedly you can have a quantity of water at a static temperature around freezing, and if it's turbulent it will remain liquid, but when it becomes still the ice crystals will begin to form. (Although hmm, maybe loss of turbulence would be a loss of heat.) So is an ice crystal actually a higher entropy state than the equivalent amorphous mass of water? Water expands when it freezes. I don't know why, van der Waals bonds or something, right? But other substances become more dense when they change state to a solid, so a given mass would lose volume. Isn't this kind of like all the air molecules in a room leaping into one corner of it - just the example that's presented as absurd in explanations of statistical mechanics? I know that orderliness isn't the same thing as heat and isn't really the opposite of entropy. I was hoping that anyone who feels they've got a thorough understanding of thermodynamics could expound on what a thermodynamic analysis of the formation of crystals would be. And anything you can say about the relationship of order to entropy and thermodynamics would be interesting too. In an eerie coincidence there was this recent post about the second law. The crystals must be telling me things.
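A standard way to reconcile crystal formation with the Second Law is to count the entropy of the surroundings too: freezing water sheds latent heat, and at any surrounding temperature below 0 °C the surroundings gain more entropy than the water loses. This is a minimal Python sketch of that bookkeeping (mine, not from the thread), using the textbook latent-heat value; the surrounding temperature is an illustrative assumption:

```python
# Entropy bookkeeping for freezing 1 kg of water at 0 degrees C.
L_FUSION = 334_000.0   # latent heat of fusion of water, J/kg
T_FREEZE = 273.15      # melting point, K

def total_entropy_change(mass_kg, t_surroundings_k):
    """dS_total = dS_water + dS_surroundings.
    Water loses q/T_freeze; surroundings gain q/T_surroundings."""
    q = L_FUSION * mass_kg
    ds_water = -q / T_FREEZE
    ds_surroundings = +q / t_surroundings_k
    return ds_water + ds_surroundings

# Surroundings at -10 C (263.15 K), an illustrative choice:
print(total_entropy_change(1.0, 263.15))   # ~ +46 J/K, positive => allowed
```

Run it with surroundings above 273.15 K and the total goes negative, which is the Second Law's way of saying water will not freeze spontaneously above its melting point.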
<urn:uuid:e808f93e-d628-4e30-a839-0ba8b8783f08>
2.703125
456
Comment Section
Science & Tech.
46.293349
Global Extinction Within One Human Lifetime As A Result Of A Spreading Atmospheric Arctic Methane Heatwave And Surface Firestorm Reposted from Arctic News (http://arctic-news.blogspot.com/p/global-extinction-within-one-human.html) Although the sudden high-rate Arctic methane increase at Svalbard in the late 2010 data set applies to only a short time interval, similar sudden methane concentration peaks also occur at Barrow Point, and the effects of a major methane build-up have been observed using all the major scientific observation systems. Giant fountains/torches/plumes of methane entering the atmosphere, up to 1 km across, have been seen on the East Siberian Shelf. This methane eruption data is so consistent and areally extensive that, when combined with methane gas warming potentials, Permian extinction event temperatures and methane lifetime data, it paints a frightening picture of the beginning of the now uncontrollable global warming induced destabilization of the subsea Arctic methane hydrates on the shelf and slope which started in late 2010. This process of methane release will accelerate exponentially, release huge quantities of methane into the atmosphere and lead to the demise of all life on earth before the middle of this century. The 1990 global atmospheric mean temperature is assumed to be 14.49 °C (Shakil, 2005; NASA, 2002; DATAWeb, 2012), which sets the 2 °C anomaly above which humanity will lose control of its ability to limit the effects of global warming on major climatic and environmental systems at 16.49 °C (IPCC, 2007). The major Permian extinction event temperature is 80 °F (26.66 °C), which is a temperature anomaly of 12.1766 °C above the 1990 global mean temperature of 14.49 °C (Wignall, 2009; Shakil, 2005). Results of Investigation: Figure 1 shows a huge sudden spike-like increase in the concentration of atmospheric methane at Svalbard north of Norway in the Arctic, reaching 2040 ppb (2.04 ppm) (ESRL/GMO, 2010; Arctic-Methane-Emergency-Group.org). The cause of this sudden anomalous increase in the concentration of atmospheric methane at Svalbard has been seen on the East Siberian Arctic Shelf, where a recent Russian–U.S. expedition has found widespread, continuous, powerful methane seepage into the atmosphere from the subsea methane hydrates, with methane plumes (fountains or torches) up to 1 km across producing an atmospheric methane concentration 100 times higher than normal (Connor, 2011). Such high methane concentrations could produce local temperature anomalies of more than 50 °C at a conservative methane warming potential of 25. Read the rest of this article (http://arctic-news.blogspot.com/p/global-extinction-within-one-human.html) with detailed graphs.
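Since the piece leans on this unit conversion, here is a two-step check of its Permian-anomaly arithmetic in Python (my sketch; the 14.49 °C baseline is the article's own assumed figure, not an independently verified value):

```python
def f_to_c(temp_f):
    """Convert Fahrenheit to Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

BASELINE_1990_C = 14.49          # article's assumed 1990 global mean, deg C

permian_c = f_to_c(80.0)         # 26.67 C
anomaly = permian_c - BASELINE_1990_C
print(f"Permian extinction temperature ~ {permian_c:.2f} C")
print(f"anomaly above 1990 mean ~ {anomaly:.2f} C")   # ~12.18 C, as claimed
```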
<urn:uuid:08ca24de-d790-4287-9a9a-9869590a59c6>
3.09375
658
Truncated
Science & Tech.
44.457644
Showers, Flowers, and Tornadoes Commercial Agriculture/University of Missouri Extension The clash of air masses occurs over Missouri during spring, especially in April and May, as remnants of winter do not want to let go and summer heat is just around the corner. Thunderstorms are common during this transition season and, every once in a while, all the ingredients fall into place for severe thunderstorms that can bring flooding rains, damaging hail, and strong winds. Another, more elusive ingredient that can be associated with severe thunderstorms, and also the most terrifying, is the tornado. Missouri recorded a total of 91 tornadoes last year, the second highest on record since 1950. On average, the Show-Me State experiences just over 30 tornadoes a year, with about 50% of them occurring during April and May. Tornadoes can occur any time of the year and any time of the day, but a majority of them (83%) occur between noon and midnight. Over the past 30 years, through the efforts of science and technology, we have stripped away a lot of the mystique that surrounds a tornado. Improvements have also been made in public awareness and safety procedures to minimize the effect that tornadoes can have on life and property. One milestone that stressed the importance of school safety and debunked some tornado myths was the Super Outbreak of April 3-4, 1974. The Super Outbreak is the worst tornado outbreak in U.S. history. A total of 148 twisters touched down in 13 states, killing 330 people and injuring over 5000. Damage surveys, eyewitness reports and pictures of this event provided evidence to disprove some myths about tornadoes. Among the disproved myths: Myth: A tornado will not strike at the confluence of major rivers. Fact: Cairo, IL, located where the Ohio and Mississippi Rivers merge, was struck by a tornado that day. Myth: Tornadoes will not traverse steep or high hills. Fact: A tornado developed in the Blue Ridge Mountains of north Georgia and climbed a 3,000-foot ridge before descending to the bottom of a canyon. Another tornado in Indiana descended a 60-foot bluff, crossed a river and damaged homes nestled in a valley. Damage surveys of schools hit during the Super Outbreak provided information for engineers to revise construction designs to be safer for students. School damage patterns proved claims that inside hallways away from glass are the safest place to be. Classrooms with outside walls, areas with windows, lunch rooms and gymnasiums are the most dangerous places for students during severe weather. Most deaths and injuries from tornadoes are from flying debris. If you are in a house, go to the lowest possible level and stay away from windows, doors, and outside walls. If a basement is not available, go to the interior portions of the lowest level. Put as many walls as possible between you and the outdoors. Closets, bathrooms, and interior hallways offer the most protection.
<urn:uuid:5d5fc454-0a8e-414f-afc7-69a1c7a34e82>
3.5625
596
Knowledge Article
Science & Tech.
46.043434
GTE/TRAnsport and Chemical Evolution over the Pacific Project Description: TRACE-P is part of a long series of GTE aircraft missions aimed at a better understanding of global tropospheric chemistry [McNeal et al., 1998]. Over the past two decades, GTE has conducted missions in several remote regions of the world (Amazonia, the Arctic, the tropical Atlantic, the Pacific) to characterize the natural processes determining the composition of the global troposphere and to assess the degree of human perturbation. The rapid industrialization now taking place in Asia is of compelling interest. Energy use in eastern Asia has increased by 5% per year over the past decade and this rate of increase is expected to continue for the next two decades [U.S. Dept. of Energy, 1997]. Combustion of fossil fuels is the main source of energy. Emission of NOx in eastern Asia is expected to increase almost 5-fold from 1990 to 2020 [van Aardenne et al., 1999]. There is a unique opportunity to observe the time-dependent atmospheric impact of a major industrial revolution. Long-term observations from ground sites and satellites can provide continuous monitoring of the temporal trend of atmospheric composition but are limited in terms of spatial coverage (ground sites) or the suite of species measurable (satellites). Aircraft missions can complement surface and satellite observations by providing a detailed investigation of the dynamical and chemical processes affecting atmospheric composition over broad geographical regions. Summary provided by http://www-gte.larc.nasa.gov/trace/tracep.html
<urn:uuid:974ab5a3-b001-4fe0-9a55-4e658a9e294f>
3.109375
323
Knowledge Article
Science & Tech.
34.62045
The paradoxical involvement of RNA-mediated gene silencing in the maintenance of some DNA silencing is bridged in Arabidopsis plants by an RNA polymerase that acts as a liaison between both pathways, UK researchers report in the February 3 issue of Science. Alan Herr, from the John Innes Centre, Norwich, and colleagues from there and elsewhere show that an RNA polymerase connects RNA and DNA silencing pathways. They found that mutants in RNA polymerase IV (Pol IV, also called RPD1), part of a new clade of polymerases in plants, were defective in both pathways. "The finding of a new silencing-specific RNA polymerase is a surprising twist in the evolution of RNA polymerases," Herr wrote The Scientist in an E-mail. "Even though Pol IV is plant specific, the function of Pol IV may be performed by another RNA polymerase in other programs. Silencing of a locus does not mean that it is not transcribed." RNA silencing occurs through the multiprotein RNA-induced silencing complex that cleaves double-stranded RNA, producing short interfering RNAs (siRNA), which then amplify the cycle. Conversely, DNA silencing occurs through chromatin-mediated mechanisms that can include DNA methylation and histone modifications to form transcriptionally inactive heterochromatic regions. In the Pol IV mutant, both siRNA formation and DNA methylation are decreased at heterochromatic regions, Herr and colleagues found. "Pol IV works together with a different type of RNA polymerase previously implicated in gene-silencing mechanisms called RNA-dependent RNA polymerase to produce double-stranded RNA that is then processed into small RNAs by a dicer enzyme," Herr wrote in his E-mail. "These small RNAs then act as the specificity determinant for the establishment and maintenance of the silenced state." The new report is consistent with a general silencing model that Shiv Grewal of the National Institutes of Health calls "a self-enforcing loop." According to the model, siRNA is targeted to heterochromatin, and "heterochromatic regions recruit the RNAi machinery through the interactions of chromodomains," Grewal, who was not involved in the study, told The Scientist. This in turn reinforces the complex by producing more siRNA transcripts. Polymerase IV may allow just enough transcription in heterochromatic regions to kick-start the loop, Grewal said. Steve Jacobsen, of the University of California, Los Angeles, described the research as a case in which plants "use transcription to keep a locus silent. Methylation shuts genes off, but too strong of a shutoff is not good for maintaining siRNA-mediated silencing. Shut off all transcription, and siRNA can't work." Jerzy Paszkowski at the University of Geneva, Switzerland, said the model raises "the chicken and egg problem" about which part of the loop comes first. Grewal suggested that "bidirectional transcription may be the initial trigger," like that found in transposable elements and other repetitive sequences. In the battle between transposable elements overtaking a genome and a genome completely quiescing these parasites, Pol IV may be playing both sides, allowing transcription and silencing. According to Grewal, "Transposable elements have evolved to transcribe in the presence of heterochromatin, an adaptive response to overcome the heterochromatic machinery to silence them." While heterochromatin bodyguards this incomplete silencing, Pol IV allows transposable elements to "sneak past the door," says Grewal.
Herr and colleagues found that Pol IV is a plant-specific polymerase that groups outside of the usual polymerases I, II, and III. "The phylogenetic restriction of Pol IV suggests that it has an evolutionarily derived function rather than an evolutionary basal one," according to Jim Birchler at the University of Missouri, Columbia. Consistently, a predicted subunit of the new polymerase IV machine, RPD2, controls silencing in Arabidopsis. Though Pol IV has a genetic function in silencing, Birchler noted that "Pol IV could have a role in silencing in the plant kingdom that is not understood at all. Determining the conditions under which Pol IV performs transcription is an important next step." A.J. Herr et al., "RNA polymerase IV directs silencing of endogenous DNA," Science, February 3, 2005. D.C. Baulcombe, "RNA silencing in plants," Nature, 431:356-63, September 16, 2004
<urn:uuid:44041f43-a555-42fa-88e1-b03f419d83ce>
2.71875
958
Academic Writing
Science & Tech.
29.623024
|.: Formations of Natural Catastrophes :.| Before thunderstorms develop, a change in wind direction and an increase in wind speed with increasing height create an invisible, horizontal spinning effect in the lower atmosphere. Warm air rising within the thunderstorm updraft tilts this rotating air from horizontal to vertical. An area of vertical rotation, ranging from 3-9.5 kilometres (2-5 miles) wide, extends through most of the storm. Progressively, with increasingly warm air and a constant updraft, a tornado forms within this area of rapid rotation.
<urn:uuid:48e68ad5-b436-437b-ba87-e1064b0e94b5>
3.578125
118
Knowledge Article
Science & Tech.
41.888256
Marine Conservation - Conservation of Marine Life and Habitats Earth Day - April 22 The first Earth Day was celebrated on April 22, 1970. The idea was proposed by Wisconsin Senator Gaylord Nelson amid growing concerns about air and water pollution and other environmental problems. Learn more about the history of Earth Day here. Top Marine Life News Stories of 2010 - 2010 Marine Life News With increasing awareness of the oceans and the environment, marine life and ocean news are more often in the forefront. The year 2010 was an interesting one for marine life - we had great tragedy, like the oil spill in the Gulf of Mexico, and great excitement, such as the huge number of amazing discoveries associated with the Census of Marine... Eco-Friendly Holidays - How Do You Go Green During the Holidays? The holiday season can be fun, festive and filled with celebration. It can also be a time of high impact on the environment. How do you go green for the holidays? Share your tips and read tips from others. Should Whales and Dolphins Be Kept In Captivity? Recent incidents involving whales in captivity have caused many to question the appropriateness of keeping whales in captivity. Others, who dream of up-close study of whales and dolphins or being dolphin trainers, say that the benefits of whales and dolphins in captivity are worth the risk. Should whales and other marine life continue to be held... Whales and Dolphins in Captivity - How Do You Feel About Orcas and Other... Share your opinions about marine life in captivity. SeaWorld has suffered the loss of several orca whales in 2010. Should whales and other marine life continue to be held in captivity? Gulf of Mexico Oil Spill - Share Your News and Experiences About the … Share your news and experiences about the BP/Deepwater Horizon oil spill in the Gulf of Mexico. International Whaling Commission Meeting - June 2010 The International Whaling Commission was established in 1946, "to provide for the proper conservation of whale stocks and thus make possible the orderly development of the whaling industry." In recent years, the IWC has evolved with more conservation goals in mind, and an increased effort on protecting whales will be discussed at the IWC's 62nd annual meeting in Agadir, Morocco, June 21-25, 2010… World Oceans Day - How to Celebrate World Oceans Day Users share their ideas and events for celebrating World Oceans Day. World Oceans Day is celebrated every year on June 8. The idea of a day to celebrate the ocean was first proposed in 1992 at the Earth Summit in Rio de Janeiro by the Government of Canada. The day was unofficially celebrated until 2009, when the United Nations officially declared June 8 of each year World Oceans Day. Oil Spill in Gulf of Mexico - What Happened, and How You Can Help Information on the oil spill from the Deepwater Horizon oil rig operated by BP off the coast of Louisiana in the Gulf of Mexico. Learn what happened to the Deepwater Horizon, the effects of oil on marine life and the latest news about the BP oil leak. Effects Of Oil Spills On Sea Turtles Sea turtles are animals that travel widely, sometimes thousands of miles. They also use the shorelines, crawling up onto beaches to lay their eggs. Because of their endangered status and their wide range, sea turtles are species that are of particular concern in an oil spill. There are several ways that oil can impact sea turtles. Oil can... Learn about marine conservation, threats and issues facing marine life and what you can do to help protect marine life.
Shark Conservation Act of 2009 To aid in shark conservation and demonstrate the importance of sharks to the U.S., the Shark Conservation Act of 2009 was introduced. Over 70 million sharks are killed annually, and many shark species are considered overfished. Sharks are fished for sport, caught as bycatch, and hunted for their fins in a practice called shark finning. Overfishing of sharks could have drastic effects on the ocean food web. What Is the International Coastal Cleanup? Each year, hundreds of thousands of volunteers gather to clean the coastline during the International Coastal Cleanup, which was started over 20 years ago by the Ocean Conservancy. Learn about the International Coastal Cleanup, why to do beach cleanups and how you can sign up for the International Coastal Cleanup. Brief History of Cod Fishing A brief history of Atlantic cod fishing, from early fishing using dories and handlines to the modern factory ships used today. I Protect Marine Life By... Even if you don't live near a coastline, there are many things you can do to protect marine life. Share the ways that you help protect marine life, from simple behavior changes at home to volunteering or working with marine life or doing hands-on projects to protect the marine environment. Turtle Excluder Device (TED) Turtle excluder devices (TEDs) were created to protect sea turtles from getting caught in shrimp nets. The TED is attached to a shrimp trawling net and is a grid of metal bars that has an opening at the top or bottom, creating a hatch that allows sea turtles and larger fish to escape. Small animals such as shrimp go between the bars and are caught in the end of the trawl. What Is Shark Finning Shark finning is the process of cutting a fin off a shark. The rest of the shark's body is cast into the sea, sometimes still alive. Even though the fins do not have any taste, they are a sought-after commodity for shark fin soup, which is a delicacy in Asian cultures and a dish served at special occasions. Whales and Entanglements Entanglement in fishing gear is one of the major threats to whales today. Learn about whale entanglements in fishing gear, and how whales are rescued. Aquaculture Makes Splash with Scientists and Consumers Learn about the pros and cons of aquaculture, or fish farming, including whether it can meet the huge demand for seafood. World Oceans Day - What Is World Oceans Day This year, the United Nations officially declared June 8 of each year World Oceans Day, a day to celebrate the oceans. Learn about the history of World Oceans Day, why the oceans are important and what you can do to protect the oceans. What Is the International Whaling Commission (IWC)? A description of the International Whaling Commission. NOAA Fisheries - National Marine Fisheries Service NOAA is charged with managing fisheries, protecting marine species and conserving marine habitat in the U.S. Visit the NOAA site to learn about marine regulations, careers and the work of NOAA. IUCN Red List of Endangered Species The IUCN evaluates animal and plant species and reports on their taxonomy, distribution and status.
The Red List classifies the species of most concern to help further their conservation. Easy Ways to Protect Marine Life The ocean is downstream of everything, so all of our actions, no matter where we live, affect the ocean and the marine life it holds. Those who live right on the coastline will have the most direct impact on the ocean, but even if you live far inland, there are many things you can do that will help marine life. Ocean Conservancy's mission is to promote healthy and diverse ocean ecosystems and oppose practices that threaten ocean life and human life. Visit this site to learn about the Ocean Conservancy's efforts to inform, inspire and empower people to help the oceans. What Is Ocean Acidification? Ocean acidification is the process by which the pH of the oceans is lowered due to absorption of carbon dioxide. The oceans have moderated the global warming problem for thousands of years by absorbing carbon dioxide. Now the basic chemistry of the oceans is changing because of our activities, with devastating consequences for marine life. Choosing Sustainable Seafood - Seafood Choices That Are Good for the Environment Do you love seafood but worry about the environmental impacts of what you're eating? Here are some ways you can learn more about seafood choices and what you need to know to ask the right questions when you purchase, whether it's at the grocery store, a fish market or in a restaurant. Trash Islands - The Ocean Garbage Patch As our global population expands, so too does the amount of trash we produce. A large portion of this trash then ends up in the world's oceans. Due to oceanic currents, much of the trash in the sea is carried to a number of areas where the currents meet. MarineBio is a non-profit organization that describes itself as "an evolving online tribute to ocean life, an introduction to marine biology and what you need to know about marine conservation." Visit this site for information on marine species, careers, conservation information and more. Seafood Watch Program - A Consumer's Guide to Sustainable Seafood Love seafood? Concerned about the environment? There are many seafood choices that are more eco-friendly. You can visit this site from the Monterey Bay Aquarium and print out your own sustainable seafood guide. What Are Invasive Species? Countries around the world are the recipients of unwanted visitors each year – invasive species. Invasive species can be plants, animals or other organisms such as microbes. They are introduced into new habitats primarily by humans. Learn what an invasive species is and how they affect the environment. ICCAT (International Commission for the Conservation of Atlantic Tunas) Information about ICCAT, the International Commission for the Conservation of Atlantic Tunas. This commission studies the stock sizes of Atlantic tuna and tuna-like species and makes recommendations on fishing quotas. American Cetacean Society The American Cetacean Society, founded in 1967, protects whales, dolphins, porpoises, and their habitats through public education, research grants, and conservation actions. International Whaling Commission The International Whaling Commission provides for the conservation of whale stocks and management of whaling. On this site, you will find lots of information on whales, as well as updates on what is going on with whaling and hunting regulations around the world. Marine Mammal Protection Act Learn about the Marine Mammal Protection Act.
Marine Fish Conservation Network The Marine Fish Conservation Network is a national coalition that promotes the long-term sustainability of marine fish. Its members include environmental organizations, fishing associations, aquariums and marine science groups. Learn about current legislation regarding marine fish, conservation issues and different marine fish species.
<urn:uuid:04dab128-a84c-4997-839e-612cfb12c88c>
2.84375
2,245
Content Listing
Science & Tech.
45.869315
Animals with backbones. Philippe Janvier. The main characteristics supporting the nodes of this phylogeny are: Node 1: Mineralized exoskeleton, sensory-line canals and grooves. Node 2: Perichondral bone or calcification, externally open endolymphatic duct. Node 3: Paired fins containing musculature and concentrated in pectoral position, two dorsal fins, epicercal (i.e. upwardly tapering) tail, sclerotic ring and scleral ossification, cellular dermal bone. The Vertebrata, or vertebrates, is a very diverse group, ranging from lampreys to Man. It includes all craniates except hagfishes, and is characterized chiefly by a vertebral column, hence the name. The majority of the extant vertebrates are the jawed vertebrates, or gnathostomes, but lampreys are jawless vertebrates. However, in Late Silurian or Early Devonian times, about 420 to 400 million years ago, the situation was reversed, and the majority of the vertebrate species were jawless fishes (the "ostracoderms", presumably more closely related to the gnathostomes than to lampreys). The decline of the jawless vertebrates and the subsequent rise of the gnathostomes took place about 380 million years ago. Extant vertebrates comprise two clades: the Hyperoartia, or lampreys, and the Gnathostomata, or jawed vertebrates. In addition, there is a number of taxa of fossil jawless vertebrates which were formerly referred to as the "ostracoderms" ("shell-skinned") because most of them possess an extensive, bony endo- and exoskeleton. The "ostracoderms" lived from the Early Ordovician (about 480 million years ago) to the Late Devonian (about 370 million years ago). The relationships of the various groups of "ostracoderms" have been the subject of considerable debate since the mid-nineteenth century, and the theory of relationship proposed here is far from definitive, yet it is the best supported by the currently available data. The "ostracoderms" are represented by five major groups, four of which are almost certainly clades: the Heterostraci, Osteostraci, Galeaspida, Anaspida, and Thelodonti (the monophyly of the latter being debated; see the Thelodonti page). In addition, there are minor groups which only include a few species: the Arandaspida, Astraspida, Eriptychiida, and Pituriaspida. The Arandaspida, Astraspida, Eriptychiida, and Heterostraci are regarded as forming a clade, the Pteraspidomorphi. Some monospecific genera, Jamoytius, Endeiolepis, and Euphanerops, formerly referred to the Anaspida, are now removed from that clade and may be more closely related to lampreys (see Hyperoartia). A large but still poorly known group, the Euconodonta, has recently been included in the Craniata, and possibly the Vertebrata. It is currently referred to as 'conodonts', but the only forms that can reliably be regarded as craniates belong to a subgroup of conodonts known as euconodonts.
The Vertebrata have all the characteristics of the Craniata but share, in addition, a number of unique characteristics which do not occur in hagfishes (Hyperotreti). These characteristics are: - Metamerically arranged endoskeletal elements flanking the spinal cord. There are primitively two pairs of such elements in each metamere and on each side: the interdorsals and basidorsals. In the gnathostomes, there are two additional pairs ventrally to the notochord: the interventrals and basiventrals. These elements are called arcualia and can fuse to a notochordal calcification, the centrum. The ensemble of the arcualia and the centrum is the vertebra, and the ensemble of the vertebrae is the vertebral column. Figure: The vertebrates are characterized by a vertebral column; that is, a variable number of endoskeletal elements aligned along the notochord (green) and flanking the spinal cord (yellow). In lampreys (top), the vertebral elements are only the basidorsals (red) and the interdorsals (blue). In the gnathostomes, there are in addition ventral elements, the basiventrals (purple) and interventrals (orange), and the notochord may calcify into centra (pink). (After Janvier 1996.) - Extrinsic eye muscles. These muscles are attached to the eyeball and orbital wall, and ensure eye movements. - Radial muscles in fins. These are small muscles associated with each of the cartilaginous radials of the unpaired and paired fins. They ensure the undulatory movements of the fin web. - Atrium and ventricle of heart closely set. - Nervous regulation of heart. The heart in the embryo of the vertebrates is aneural, like the heart of adult hagfishes. In adult vertebrates, however, the heart is innervated by a branch of the vagus nerve. - Typhlosole in the intestine. This is a spirally coiled fold of the intestinal wall. In the Gnathostomes, it can be developed into a complex spiral valve. - At least two vertical semicircular canals in the labyrinth. - True neuromasts in the sensory-line system. There are many other vertebrate characteristics, both anatomical and physiological. As for extant vertebrates, the main question is whether lampreys are the sister-group of the gnathostomes, or that of hagfishes. In the latter case there would be no reason to distinguish the Vertebrata from the Craniata, as was formerly done. Although there is good evidence for the lamprey-gnathostome sister-group relationship, the theory that the cyclostomes (lampreys and hagfishes) are a clade is still supported by a number of zoologists. Considering the large number of anatomical, physiological and molecular data that are now available to test these theories, one can expect a definitive clue in the near future (for discussion, see Craniata). The question of the relationships of the numerous extinct vertebrate groups is, in contrast, far from being resolved. This chiefly concerns the Palaeozoic taxa formerly referred to as "ostracoderms"; that is, armored jawless craniates, which are likely to be vertebrates and are now considered as being all more closely related to the gnathostomes than to lampreys. During most of the nineteenth century, the "ostracoderms" known at that time (i.e. the Heterostraci and Osteostraci) were regarded as bony fishes, until Cope (1889) suggested including them with lampreys and hagfishes in the taxon Agnatha ("jawless").
In the beginning of the twentieth century, Kiaer (1924) and Stensiö (1927) showed that the Anaspida and Osteostraci share with lampreys a median, dorsally placed "nostril" (in fact a nasohypophysial opening) and suggested including these three groups in a clade Cephalaspidomorphi. In addition, Stensiö (1927) proposed that hagfishes were derived from the Heterostraci and should be grouped with them in the Pteraspidomorphi. At that time, however, the Agnatha were regarded as a clade whose sister-group was the Gnathostomata, as illustrated by Stensiö's (1927) diagram. This theory implied a diphyletic origin of the Recent "cyclostomes" (hagfishes and lampreys). Although they accepted the monophyly of the Cephalaspidomorphi, most paleontologists rejected that of the Pteraspidomorphi (as including hagfishes). In contrast, until the 1970s, it was widely accepted that the Heterostraci are more closely related, or ancestral, to the gnathostomes, mainly because they lacked the specializations of the Cephalaspidomorphi and because they had paired olfactory capsules, like the gnathostomes.

With the rise of cladistics in the late 1970s and the 1980s, and following Løvtrup's (1977) suggestion that the extant cyclostomes are paraphyletic, a number of trees were published, all of which showed the "ostracoderms" (and the Agnatha as a whole) as paraphyletic. However, all these trees implied that lampreys had lost several characteristics, in particular the paired fins, mineralized skeleton, and sensory-line canals. A major change came with Gagnier's (1993) first computer-generated tree, in which these reversions were avoided by considering all "ostracoderms" as more closely related to the gnathostomes than to either lampreys or hagfishes. Further analyses (Forey & Janvier 1994, Janvier 1996b) largely confirmed the higher degree of parsimony of this phylogeny. Although there are variations as to the position of certain taxa, the Galeaspida and Osteostraci consistently group together with the gnathostomes, whereas the Astraspida, Eriptychiida, Arandaspida, and Heterostraci form a clade, the Pteraspidomorphi, albeit a poorly supported one.

One consequence of this tree is that the dorsal nasohypophysial opening (formerly the defining characteristic of the Cephalaspidomorphi) either arose more than once or is a general feature of the Vertebrata. In this tree, four fossil groups are positioned with a question mark. In the case of the Euconodonta, Anaspida and Pituriaspida, this uncertainty is largely due to the scarcity of the characters available from the material (in particular as to the internal anatomy). In the case of the Thelodonti, it is due to their controversial status: they are likely to be a paraphyletic assemblage of stem Heterostraci and possibly stem forms of other "ostracoderm" groups, yet some authors regard them as a clade (see the Thelodonti page).

Forey, P. L. (1984). Yet more reflections on agnathan-gnathostome relationships. Journal of Vertebrate Paleontology, 4, 330-343.
Forey, P. L., and Janvier, P. (1993). Agnathans and the origin of jawed vertebrates. Nature, 361, 129-134.
Forey, P. L., and Janvier, P. (1994). Evolution of the early vertebrates. American Scientist, 82, 554-565.
Hardisty, M. W. (1982). Lampreys and hagfishes: Analysis of cyclostome relationships. In The Biology of Lampreys (ed. M. W. Hardisty and I. C. Potter), Vol. 4B, pp. 165-259. Academic Press, London.
Janvier, P. (1993). Patterns of diversity in the skull of jawless fishes. In The Skull (ed. J. Hanken and B. K. Hall), Vol. 2, pp. 131-188. The University of Chicago Press.
Janvier, P. (1996a). Early vertebrates. Oxford Monographs on Geology and Geophysics, 33. Oxford University Press, Oxford.
Janvier, P. (1996b). The dawn of the vertebrates: characters versus common ascent in current vertebrate phylogenies. Palaeontology, 39, 259-287.
Løvtrup, S. (1977). The Phylogeny of Vertebrata. Wiley, New York.
Stensiö, E. A. (1927). The Devonian and Downtonian vertebrates of Spitsbergen. 1. Family Cephalaspidae. Skrifter om Svalbard og Ishavet, 12, 1-391.
Wang, N. Z. (1991). Two new Silurian galeaspids (jawless craniates) from Zhejiang Province, China, with a discussion of galeaspid-gnathostome relationships. In Early vertebrates and related problems of evolutionary biology (ed. M. M. Chang, Y. H. Liu, and G. R. Zhang), pp. 41-65. Science Press, Beijing.

- DigiMorph. The Digital Morphology library is a dynamic archive of information on digital morphology and high-resolution X-ray computed tomography of biological specimens. A National Science Foundation Digital Library at the University of Texas at Austin.

Page copyright © 1997. Page: Tree of Life, Vertebrata. Animals with backbones. Authored by Philippe Janvier. The TEXT of this page is licensed under the Creative Commons Attribution License - Version 3.0. Note that images and other media featured on this page are each governed by their own license, and they may or may not be available for reuse. For the general terms and conditions of ToL material reuse and redistribution, please see the Tree of Life Copyright Policies.

Citing this page: Janvier, Philippe. 1997. Vertebrata. Animals with backbones. Version 01 January 1997 (under construction). http://tolweb.org/Vertebrata/14829/1997.01.01 in The Tree of Life Web Project, http://tolweb.org/
<urn:uuid:92b70de3-0ff0-4f91-b075-5dd9e0c6db5b>
3.546875
3,163
Knowledge Article
Science & Tech.
35.509897
A very rare and local species in Belgium. The larva lives on Scutellaria, at first mining, later living in a loose spinning in a shoot or between two leaves. It is not known how the winter is passed, but most probably in the adult stage, because the usual food plant is often flooded during winter and would not produce sufficient growth in early spring to support larvae. Adults fly in at least two generations per year, from late May till July and again from August to early September. They are active during the day, flying over the food plant, and later come to light. The imago is very similar to that of Prochoreutis myllerana; the subtle differences can be looked up in Diakonoff, Microlepidoptera Palaearctica, Vol. 7. Mines are similar to those of P. myllerana, but the larvae are distinguishable. Luxembourg, Habay-la-Neuve, 22 August 2009; imago sitting on Scutellaria (Photo © Jean-Yves Baugnée)
<urn:uuid:3d220581-6655-452f-aaa1-05e5ef4ebb84>
2.71875
226
Knowledge Article
Science & Tech.
45.172113
Science Fair Project Encyclopedia

All trematodes are parasitic flatworms. Previous classification systems included the Monogenea amongst the Trematoda, alongside the Digenea and Aspidogastrea, on the basis that they were all vermiform parasites. The taxonomy of the Platyhelminthes is being subjected to extensive revision thanks to modern phylogenetic studies, and modern sources place the Monogenea in a separate class within the phylum.

There are no known cases of human infection with aspidogastreans; therefore the use of the term "fluke" in relation to human infection refers solely to digenean infections. These can be classified into two groups on the basis of the system which they infect. Tissue flukes are species which infect the bile ducts, lungs, or other biological tissues. This group includes the lung fluke, Paragonimus westermani, and the liver flukes, Clonorchis sinensis and Fasciola hepatica. The other group are known as blood flukes, and inhabit the blood in some stages of their life cycle. Blood flukes include various species of the genus Schistosoma.

Trematodes have a complex life cycle, often involving several hosts. The eggs pass from the host with the feces. When the eggs reach water, they hatch into free-swimming forms called miracidia. The miracidia penetrate a snail or other molluscan host to become sporocysts. The cells inside the sporocysts typically divide by mitosis to form rediae. Rediae, in turn, give rise to free-swimming cercariae, which escape from the mollusk into the water. Using enzymes to burrow through exposed skin, cercariae penetrate another host (often an arthropod) and then encyst as metacercariae. When this host is eaten by the definitive host, the metacercariae excyst and develop, and the life cycle repeats. For more information on life cycles, see the respective pages on Digenea and Aspidogastrea.

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:7317591d-c4a7-4e75-8e22-32d98335df87>
3.453125
468
Knowledge Article
Science & Tech.
28.924575
London, England (CNN) -- A possible rise in sea levels of 0.5 meters by 2050 could put at risk more than $28 trillion worth of assets in the world's largest coastal cities, according to a report compiled for the insurance industry. The value of infrastructure exposed in so-called "port mega-cities," urban conurbations with more than 10 million people, is just $3 trillion at present.

The rise in potential losses would result from expected greater urbanization and the increased exposure of this larger population to catastrophic once-in-100-year surge events driven by rising sea levels and higher temperatures. The report, released on Monday by WWF and financial services firm Allianz, concludes that the world's diverse regions and ecosystems are close to temperature thresholds -- or "tipping points." Any one of these surge events could unleash devastating environmental, social and economic changes amid a higher urban population.

According to the report, carried out by the UK-based Tyndall Centre, the impacts of passing tipping points on the livelihoods of people and economic assets have been underestimated. Global temperatures have already risen by at least 0.7 degrees Celsius, and the report says a further rise of 2-3 degrees in the second half of the century is likely unless deep cuts in emissions are put in place before 2015. The consequent melting of the Greenland and West Antarctic ice sheets could lead to one such tipping point scenario, possibly a sea level rise of up to 0.5 meters by 2050.

The report focuses on regions and phenomena where such events might be expected to cause significant environmental impacts within the first half of the century. For example, a hurricane in New York, which could cost $1 trillion now, would mean a $5 trillion insurance bill by the middle of the century, the report adds.

"If we don't take immediate action against climate change, we are in grave danger of disruptive and devastating changes," said Kim Carstensen, the head of WWF's Global Climate Initiative. "Reaching a tipping point means losing something forever. This must be a strong argument for world leaders to agree a strong and binding climate deal in Copenhagen in December."
<urn:uuid:2dda96f2-f6df-47b2-9959-2b4ea344ff7b>
3.09375
445
Truncated
Science & Tech.
39.675343
pg_trace() enables tracing of the PostgreSQL frontend/backend communication to a debugging file specified as pathname. To fully understand the results, one needs to be familiar with the internals of the PostgreSQL communication protocol. For those who are not, it can still be useful for tracing errors in queries sent to the server: for example, you could run grep '^To backend' trace.log to see which queries were actually sent to the PostgreSQL server. For more information, refer to the PostgreSQL manual. pathname and mode are the same as in fopen() (mode defaults to 'w'); connection specifies the connection to trace and defaults to the last one opened. pg_trace() returns TRUE if pathname could be opened for logging, FALSE otherwise.
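A minimal usage sketch may make the parameters concrete. (The connection string, trace file path, and query below are illustrative placeholders, not taken from this manual page.)

<?php
// Illustrative example only: credentials and paths are hypothetical.
$conn = pg_connect("host=localhost dbname=test user=postgres")
    or die("Connection failed\n");

// Begin logging frontend/backend protocol traffic to /tmp/trace.log.
// Mode 'w' (the default) truncates an existing file; use 'a' to append.
if (!pg_trace('/tmp/trace.log', 'w', $conn)) {
    die("Could not open trace file for logging\n");
}

// Every query issued from here on is recorded in the trace file.
pg_query($conn, 'SELECT version()');

pg_untrace($conn); // stop tracing on this connection
pg_close($conn);
?>

Running grep '^To backend' /tmp/trace.log afterwards shows the queries that actually went over the wire.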
<urn:uuid:13aaaf7d-b586-495e-ad76-c15ccef3a136>
2.78125
157
Documentation
Software Dev.
42.033475
As a media strategy, inviting a crowd of hacks to play with rubbery pink goop is certainly novel. After 10 minutes getting our hands dirty, we're beginning to feel convinced. It seems any fool can fix a broken shuttle with this stuff. Which is just what's intended, because NASA has organised this workshop to showcase its efforts to bring the space shuttle back into service. The speakers include top engineers and project managers, and all strike a cautiously optimistic note that belies the reality. Because of technical hitches, NASA has already put back the scheduled return-to-flight date from March 2004 to "no earlier than next September 12th". By then, or reasonably soon after, NASA must have solved the problems that led to the shuttle Columbia's catastrophe on 1 February this year. If it fails, the three surviving shuttles, now grounded, may never fly astronauts into orbit again. NASA's main challenges ...
<urn:uuid:34ce521f-99a9-4707-99b8-e36f67c691fc>
3.015625
214
Truncated
Science & Tech.
55.618462
Wed Aug 27 01:52:45 BST 2008 by Dennis Bohner
Yes, Cape Canaveral

Wed Aug 27 05:45:14 BST 2008 by Paul M. Parks
It's Cape Canveral, though the space center is known as Kennedy Space Center.

Wed Aug 27 07:26:32 BST 2008 by Pierre Charland
So GLAST became Fermi, like SIRTF became Spitzer, like AXAF became Chandra, like FIRST became Herschel, like MAP became Wilkinson, like VRM became Magellan, and probably others... It is bad to rename an observatory who has been known for years under another name. I know they do that all the time, and it's confusing all the time.

Wed Aug 27 12:52:41 BST 2008 by Deni
Why did they have to rename it? GLAST sounded good enough and it's already all over the place. Fermi! Oh, well..

Wed Aug 27 14:46:15 BST 2008 by Haha
"Garden variety astronomical objects" sounds great. I want a garden like that =)

How Do We Use Gamma Ray Telescopes. ??
Tue Sep 16 09:35:58 BST 2008 by Kayleigh X
I Have Been Asked In Science To Find Out How We Use Gamma Ray Telescopes But I Havnt Got A Clue To Were I Can Find Out:S.. ANy Ideas Of What Websites I Can Use To Find It.?

Aquinah Blue Crab Conference Of Un Summit Leaders Spring 2009
Mon Oct 20 12:29:43 BST 2008 by Thomas D Gaudette
Blue crab specials from the wampanoags to conclude revise of officals and offical summit resolution

Gamma-rays In Remote Sensing
Sun Feb 01 19:29:37 GMT 2009 by Martina
I found this article very interesting. However i would apprietate an explanation for the use of gamma-rays in remote sensing on Earth.
<urn:uuid:687d2434-8efb-41ea-b8dd-67f849ce04ac>
2.828125
466
Comment Section
Science & Tech.
77.874857
Chemistry assistant professor Paul Adams, seen here with students, uses the 3-D visualization developed by chemistry professor James Hinton at the University of Arkansas with the company Virtalis. Credit: University of Arkansas

When data becomes too complex to describe or even imagine, researchers can bounce it off a wall. But not just any wall. We're talking about the VisWall. Measuring 14-by-8 feet, this behemoth can help researchers visualize some of the most complicated scientific concepts. See more in this Science Nation video. Credit: Science Nation, National Science Foundation

Most drugs enter our bodies as small molecules, ligands that bind to the surface of target proteins, inhibiting their function and protecting our health. For a drug to tame a headache or reduce a swollen knee, the drug needs to be effective at small doses, and selective enough to limit side effects. Read more in this Discovery. Credit: Pengyu Ren, The University of Texas at Austin

An interdisciplinary team of researchers has created a new, ultra-sensitive technique to analyze life-sustaining protein molecules. The technique may profoundly change the methodology of biomolecular studies and chart a new path to effective diagnostics and early treatment of complex diseases. Find out more in this news release. Credit: Hatice Altug, Electrical Engineering Department, Boston University

This is an example of the interactive visualization of proteins from the Protein Data Bank (PDB), using PDB browser software on the C-Wall (virtual reality wall) at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Learn more about this image and view related images here. Credit: Jurgen Schulze, Calit2, UC-San Diego

The Division of Molecular and Cellular Biosciences (MCB) in NSF's Directorate for Biological Sciences supports fundamental research and related activities designed to promote understanding of complex living systems at the molecular, subcellular and cellular levels.

A team from the Scripps Research Institute revealed the first-ever pictures of the formation of cells' "protein factories." In addition to being a major technical feat on its own, the work could open new pathways for development of antibiotics and treatments for diseases tied to errors in ribosome formation.

A new assay capable of examining hundreds of proteins at once and enabling new experiments that could dramatically change our understanding of cancer and other diseases has been invented by a team of University of Chicago scientists.

During the past 20 years, researchers have identified thousands of cell protein interactions with the goal of developing a comprehensive catalogue known as the interactome. Unfortunately, the data collected by different research teams have been somewhat inconsistent.

Proteins are widely viewed as a promising alternative to synthetic chemicals in everything from medications to hand lotion, but outside the controlled confines of the lab bench, proteins quickly change structure, causing irreversible damage to their functionality and often their safety.

Researchers from the California Institute of Technology (Caltech) and the University of California at San Diego (UCSD) have brought together UCSD theoretical modeling and Caltech experimental data to show just how amino-acid chains might fold up into unique, three-dimensional functional proteins.
May 9, 2011
3D Proteins--Getting the Big Picture
Virtual reality immerses students in proteins and peptides

How do you get to know a protein? How about from the inside out? If you ask chemistry professor James Hinton, "It's really important that students be able to touch, feel, see ... embrace--if you like--these proteins." For decades, with funding from the National Science Foundation (NSF), Hinton has used nuclear magnetic resonance (NMR) to look at protein structure and function. But he wanted to find a way to educate and engage students about his discoveries. "I have all of this equipment, I get a lot of information about the structure of proteins and peptides, but the one thing I didn't have was a very sophisticated way of looking at them," says Hinton, from his lab at the University of Arkansas in Fayetteville.

About five years ago, he realized there's a big difference between students looking at a drawing of a protein in a textbook and letting them "jump into" a three-dimensional display of these complex biochemical structures. "Kids are visual people nowadays; they like to see things," says Hinton. "So I began to look around for ways of actually visualizing three-dimensional structures, and I came upon this idea of, 'Wouldn't it be nice if we had immersive techniques that would allow us to experience 3-D virtual reality?'" With support from the Arkansas Bioscience Institute (ABI), Hinton worked with Virtalis, a company in Britain, to create an immersive 3-D virtual reality experience for studying proteins. The results have been dramatic. "It's beginning to have a major impact on how we teach, and it is a great tool for students entering the fields of chemistry and biochemistry," notes Hinton.

"Proteins are chemical entities; they pretty much do all the work in your body," says graduate student Vitaly Vostrikov. "The problem with proteins is that they are three-dimensional entities. Visualizing them in two dimensions, on a sheet of paper, is pretty complex." "Pretty complex" could sometimes mean tedious and frustrating for Vostrikov and many researchers studying proteins. "Generally, when you have a protein that is of biological interest, and you want to understand the function, or to alter its activity, the first thing to do is to have the structure of the protein. Once you have the structure, you can understand what the protein looks like. For example, if it has to bind with other molecules, where do the molecules bind? Can we make binding stronger? Can we make binding weaker? Can we disrupt the binding site at all?" explains Vostrikov.

Donning a pair of 3-D glasses, he demonstrates how the immersive virtual reality display could show these structures. He could dive in and out of DNA, strains of the flu, and hemoglobin. The technology makes it possible to zoom in, zoom out, or rotate the structure; or look at components one by one. "Understanding protein function is essential if you want to do something in pharmaceutical chemistry," adds Vostrikov. Drug companies, universities and medical schools in the United States, Britain and Canada are using the technology. "We've had radiology groups come in, interested in imaging, of course, and the ability to do virtual reality on the human body," says Hinton. Even people in non-scientific fields are using these imaging techniques. "Other people have been in, including a group from Walmart.
They're interested in building new stores, but it's far better to build a store in virtual reality, make your mistakes there, than break ground and start building," says Hinton. His colleague, Paul Adams, assistant professor of chemistry and biochemistry at the University of Arkansas, says virtual reality has become an important tool for his work as well. Adams studies abnormally functioning proteins, with the aim of learning more about the spread of cancer cells. "I believe visualization is the epitome of trying to examine what differences among biomolecules could be the cause or the reason for them functioning in different ways and different environments," says Adams. He says both his research and his teaching have been enhanced using the immersive 3-D virtual reality. In addition, these dramatic displays invite academic cooperation. "If you think of interdisciplinary approaches, such as the work of a biologist, or a chemist, or a physicist: all three scientists could look at this technology, and see three different things, to come up with different ideas based on what they are seeing. And so I think that immersive technology could be a potentially novel way to interject interdisciplinary collaboration," explains Adams. Hinton is 72 years old, and says he's "just an old man having fun." He and others at the university use a demo tape of the 3-D virtual reality as a recruiting tool. It's just the spark some young people need. "Seeing them have fun is a great joy 'cause you feel maybe one out of these 100 or so kids will say, 'Well, biochemistry or, hmm ... chemistry, maybe I'd like that'," says Hinton. Hinton has high praise for his students, many of whom have been supported by NSF. Immersive 3-D virtual reality allows the students, in their quest to solve real-world problems, the opportunity to view their world in a different way. Any opinions, findings, conclusions or recommendations presented in this material are only those of the presenter grantee/researcher, author, or agency employee; and do not necessarily reflect the views of the National Science Foundation.
<urn:uuid:6726af50-4244-4341-8ed2-867273a9cfc0>
3.3125
1,835
Content Listing
Science & Tech.
36.53575
You most likely haven't noticed, but there is slightly more carbon dioxide in the atmosphere today than there was when you took your first breath the day you were born. In the last 200 years, the volume of carbon dioxide in the Earth's atmosphere has increased by roughly 30 percent. This increase—mostly the result of using fossil fuels as sources of energy—is believed to be a contributing factor to global climate change. Because the use of fuels such as coal, natural gas and oil will continue for the foreseeable future, there's a growing international scientific effort to develop ways to slow the addition of carbon dioxide to the atmosphere. One approach is to capture and securely store carbon emitted from the global energy system. This process is called carbon sequestration.

Pacific Northwest National Laboratory, Oak Ridge National Laboratory and Argonne National Laboratory lead a scientific research effort focused on removing carbon from the atmosphere and storing it in the soil. Known as the U.S. Department of Energy's Center for Research on Enhancing Carbon Sequestration in Terrestrial Ecosystems (CSITE), this group is helping determine ways to use plants, microbes and soil management practices to cause more carbon to be stored below ground. "By making modest changes in farming and forestry practices, plants and soils can be used much more efficiently to remove carbon dioxide from the atmosphere," said Cesar Izaurralde, staff scientist with the global climate change group at Pacific Northwest. "This not only cleans the atmosphere, but increases organic matter in the soil where it can be beneficial."

Pacific Northwest's Blaine Metting, who leads CSITE jointly with Gary Jacobs from Oak Ridge, explained that the research center is building the scientific understanding necessary to develop and test flexible, feasible carbon sequestration technologies. "We're working on terrestrial carbon sequestration studies ranging from the molecular level to large-scale land use," Metting said. "At the molecular level, we're conducting research on the effects of adding organic carbon and fossil energy byproducts to mine spoils, with the added benefit of reclaiming the land as well as storing carbon. On a larger scale, studies are underway to determine changes in managing agricultural systems that could increase carbon sequestration."

One research project taking place in the Cascade Mountains in the Northwest and in loblolly pine forests in the Southeast is aimed at improving the efficiency of capturing and storing carbon in forests. In another project, Pacific Northwest and Oak Ridge have joined Ohio State University and Virginia Polytechnic Institute in a two-year study of the use of soil enhancers made from the solid wastes of coal plants, paper mills and sewage treatment facilities to improve the natural carbon uptake of lands disturbed by mining, highway construction or poor management practices.

In addition to looking at how to capture and stabilize carbon in land, the CSITE will also research ways to measure, monitor and verify sequestration. The goal of all the carbon sequestration technologies is to help stabilize the carbon dioxide level in the Earth's atmosphere—avoiding carbon emissions and increasing capture and storage wherever possible. "Terrestrial carbon sequestration can help buy time for development and deployment of novel energy technologies to displace fossil fuel," Metting said.
"We're hoping to slow the increase of carbon dioxide in the atmosphere and help control global climate change while researchers work on ways to reduce society's reliance on fossil fuel energy and increase the use of low-carbon and carbon-free fuels and technologies."
<urn:uuid:87c0fd24-2c70-467e-adc4-28f7fc1a325d>
4.21875
697
Knowledge Article
Science & Tech.
23.46302
Brilliant 10: Anže Slosar Maps Matter At The Edge Of The Universe
Image by Marius Bugge

The oldest part of the universe, more than 10 billion light years away, bursts with super-luminous quasars and diffuse aggregations of hydrogen gas. Anže Slosar, a cosmologist at Brookhaven National Laboratory in New York, wants to map that expanse in 3-D. Slosar looks for and then plots patterns in the periodic density fluctuations of matter that coalesced after the big bang. Others have mapped this structure to six billion light years away by observing how galaxies cluster, but at the universe's far edges, galaxies are too faint to see. To overcome this challenge, Slosar uses a new technique that many were skeptical would even work: instead of plotting light visible to humans, he and his collaborators look at the shadows that massive gas clouds create when they obstruct light from the faraway quasars. For the first several months, the data looked too messy to map, and Slosar stewed in constant panic that he would fail. It took a few rounds of mathematical tweaks to coax out an actual signal. But once he did, data on 14,000 quasars from the BOSS telescope in New Mexico enabled Slosar to produce the largest-ever map of the ancient universe's structure -- between 10 and 12 billion light years away -- which in turn gives scientists insight into what the universe looked like very soon after it began. "I thought, Well, this is [what] the universe actually looks like very far away," Slosar says. "I felt so good that I just opened a beer and stopped working for the rest of the day."
<urn:uuid:98adada4-023e-4364-9647-35fe7b5c5adf>
3.34375
373
Truncated
Science & Tech.
42.012
Acta Biológica Colombiana
Print version ISSN 0120-548X

In 2000, the first draft of the human genome, which became known as our book of life, was presented. It generated high expectations for its potential applications to the benefit of the biological sciences. What has happened 10 years later? We know how many genes we have in our genome and have analyzed the function of some of them. Nowadays, we know the sequences of three mammalian genomes: M. musculus, P. troglodytes and S. scrofa, and the genomes or draft genomes of other eukaryotes (other animals, plants, fungi and protists) and prokaryotes (Archaea and Bacteria). However, the study of the genome is not merely a description of the sequences that compose it. The answers it provides will take very different approaches, from evolution and the conservation of biodiversity to gene therapy and malignant transformation, where the study of individual and population particularities requires both past and present sources of information on the genomes under survey. Thus, advances in science are always provisional and therefore liable to be continued, completed and even reinterpreted; as we advance in knowledge, new questions arise.

Keywords: Primates; genome dynamics; chromosomes; heterochromatin.
<urn:uuid:79c88358-569e-4e54-a890-78805016d4e1>
3.296875
269
Academic Writing
Science & Tech.
25.126272