Dispersal of all branchiopods is only by passive means, as other stages cannot survive out of water. Thus De los Rios-Escalante (2010), building on Niemeyer & Cereceda (1984), who used climatic, topographic and hydrological characteristics, proposed the following four regions in a zoogeographical review of inland-water branchiopods and copepods: Northern Chile (18°-27°S); Central Chile (27°-37°S); Northern and Central Patagonia (37°-51°S); and Southern Patagonia (51°-55°S). After the first subsequent rains, the invertebrate fauna in these particular pans consisted of many "typical" temporary-pan invertebrates, including various large branchiopods: Anostraca (fairy shrimp), Conchostraca (clam shrimp), and Notostraca (tadpole shrimp). Scientists had thought that early arthropods had simpler brains, like those of tiny freshwater crustaceans called branchiopods. Some believe that insects evolved from an ancestor that gave rise to the malacostracans, a group of crustaceans that includes crabs and shrimp, while others point to a lineage of less commonly known crustaceans called branchiopods, which include, for example, brine shrimp. The other group, which was generally favoured, thought they came from a class of crustaceans called branchiopods, which include brine shrimp and fairy shrimp. The particular style of molar ornamentation links the Mount Cap fossils to a clade of crown-group (pan)crustaceans that includes branchiopods, malacostracans, and hexapods, and thus provides a crucial calibration point for arthropod phylogeny, as well as the earliest record of an arthropod with a sophisticated particle-feeding ecology (Harvey and Butterfield 2008). The thousands of large and small pools that form annually are an ideal study area for freshwater branchiopods capable of surviving drought by forming resting eggs.
2002), isopods (Alexander, 1988; Hessler, 1993), branchiopods (Williams, 1994), and cladocerans (Zaret and Kerfoot, 1980; Kirk, 1985; Lagergren et al.). Fairy shrimp, tadpole shrimp, and clam shrimp are all capable of this impressive feat, a skill not uncommon among a category of animals called branchiopods. In cooperation with natural heritage program members in all 50 states, NatureServe has compiled, and maintains, a detailed database of over 21,000 plant and animal species in the United States, including nearly 16,200 vascular plants, approximately 2,550 native vertebrate animal species (including mammals, birds, reptiles, amphibians, and freshwater fishes), and a wide spectrum of invertebrates (including "all 2,600 species in the following groups: freshwater mussels, freshwater snails, crayfishes, large branchiopods, butterflies and skippers, underwing moths, tiger beetles, and dragonflies and damselflies"). Their planktonic fauna belongs to many genera of branchiopods (of the orders Anostraca, Conchostraca, and Notostraca) and copepods.
Chaos theory is a branch of mathematics focusing on the behavior of dynamical systems that are highly sensitive to initial conditions. 'Chaos' is an interdisciplinary theory stating that within the apparent randomness of chaotic complex systems there are underlying patterns, constant feedback loops, repetition, self-similarity, fractals, self-organization, and sensitive dependence on initial conditions. The butterfly effect describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state, e.g. a butterfly flapping its wings in China can cause a hurricane in Texas. Small differences in initial conditions, such as those due to rounding errors in numerical computation, yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: Chaos: When the present determines the future, but the approximate present does not approximately determine the future. Chaotic behavior exists in many natural systems, such as weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in several disciplines, including meteorology, anthropology, sociology, physics, environmental science, computer science, engineering, economics, biology, ecology, and philosophy. 
The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory, and self-assembly processes. Chaos theory concerns deterministic systems whose behavior can in principle be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time that the behavior of a chaotic system can be effectively predicted depends on three things: How much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the solar system, 50 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition originally formulated by Robert L. Devaney says that, to classify a dynamical system as chaotic, it must have these properties: (1) it must be sensitive to initial conditions; (2) it must be topologically mixing; and (3) it must have dense periodic orbits. In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition. If attention is restricted to intervals, the second property implies the other two. An alternative, and in general weaker, definition of chaos uses only the first two properties in the above list. 
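The earlier claim that doubling the forecast time more than squares the proportional uncertainty follows directly from exponential error growth, and can be checked with a short computation. The initial uncertainty and Lyapunov time below are illustrative values, not quantities from the text; the growth factor exactly squares, and since the initial uncertainty is less than 1, the uncertainty itself more than squares:

```python
import math

eps0 = 1e-6   # assumed initial measurement uncertainty
tau = 1.0     # assumed Lyapunov time (arbitrary units)

def uncertainty(t):
    # forecast uncertainty grows exponentially with elapsed time
    return eps0 * math.exp(t / tau)

t = 3.0
growth_once = uncertainty(t) / eps0        # growth factor over horizon t
growth_twice = uncertainty(2 * t) / eps0   # growth factor over horizon 2t

# exp(2t/tau) = (exp(t/tau))^2: doubling the horizon squares the factor,
# and the uncertainty itself more than squares because eps0 < 1.
print(math.isclose(growth_twice, growth_once ** 2))   # True
print(uncertainty(2 * t) > uncertainty(t) ** 2)       # True
```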
In continuous time dynamical systems, chaos is the phenomenon of the spontaneous breakdown of topological supersymmetry, which is an intrinsic property of evolution operators of all stochastic and deterministic (partial) differential equations. This picture of dynamical chaos works not only for deterministic models but also for models with external noise, which is an important generalization from the physical point of view, because in reality all dynamical systems experience influence from their stochastic environments. Within this picture, the long-range dynamical behavior associated with chaotic dynamics, e.g., the butterfly effect, is a consequence of Goldstone's theorem applied to the spontaneously broken topological supersymmetry. Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points with significantly different future paths, or trajectories. Thus, an arbitrarily small change, or perturbation, of the current trajectory may lead to significantly different future behavior. Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system would have been vastly different. A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time the system is no longer predictable. 
This is most prevalent in the case of weather, which is generally predictable only about a week ahead. Of course, this does not mean that we cannot say anything about events far in the future; some restrictions on the system are present. With weather, we know that the temperature will not naturally reach 100 °C or fall to -130 °C on Earth (during the current geologic era), but we can't say exactly what day will have the hottest temperature of the year. In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions. Given two starting trajectories in the phase space that are infinitesimally close, with initial separation δZ0, the two trajectories end up diverging at a rate given by |δZ(t)| ≈ e^(λt) |δZ0|, where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents exists. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic. Topological mixing (or topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system. Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. 
This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity. For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x -> 4 x (1 - x) is one of the simplest systems with density of periodic orbits. For example, (5 - √5)/8 -> (5 + √5)/8 -> (5 - √5)/8 (or approximately 0.3454915 -> 0.9045085 -> 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem). Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits. Some dynamical systems, like the one-dimensional logistic map defined by x -> 4 x (1 - x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region. An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. 
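The logistic-map claims above are easy to verify numerically: the exact period-2 orbit is (5 ± √5)/8, and the map's maximal Lyapunov exponent (exact value ln 2) can be estimated by averaging log |f'(x)| along an orbit. The starting point, burn-in length, and iteration count below are illustrative choices:

```python
import math

f = lambda x: 4 * x * (1 - x)   # the logistic map from the text

# The exact period-2 orbit: (5 - sqrt(5))/8 <-> (5 + sqrt(5))/8.
a = (5 - math.sqrt(5)) / 8      # ~0.3454915
b = (5 + math.sqrt(5)) / 8      # ~0.9045085
print(math.isclose(f(a), b), math.isclose(f(b), a))  # True True

# Estimate the maximal Lyapunov exponent: average log |f'(x)|
# along an orbit, with |f'(x)| = |4 - 8x|; the exact value is ln 2.
x, total, n = 0.3, 0.0, 100_000
for _ in range(1_000):          # discard the transient
    x = f(x)
for _ in range(n):
    total += math.log(abs(4 - 8 * x))
    x = f(x)
print(abs(total / n - math.log(2)) < 0.05)  # True: close to ln 2 ~ 0.693
```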
The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it was not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly. Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them. Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré-Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional. The Poincaré-Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations: dx/dt = σ(y - x), dy/dt = x(ρ - z) - y, dz/dt = xy - βz, where x, y, and z make up the system state, t is time, and σ, ρ, and β are the system parameters. Five of the terms on the right-hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. 
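The Lorenz system's sensitive dependence on initial conditions can be demonstrated with a minimal numerical sketch. The classic parameter values σ = 10, ρ = 28, β = 8/3 are assumed (they are the ones Lorenz used), and the integrator, step size, and starting points are illustrative choices; two starts differing by one part in 10^8 visibly part ways:

```python
# Minimal RK4 integration of the Lorenz system:
#   dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y, dz/dt = x*y - beta*z
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt=0.01):
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # perturbed by one part in 10^8
max_sep = 0.0
for _ in range(3000):        # ~30 time units
    a, b = rk4_step(a), rk4_step(b)
    d = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    max_sep = max(max_sep, d)
print(max_sep > 1.0)         # True: the tiny perturbation grows to order one
```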
Sprott found a three-dimensional system with just five terms, only one of them nonlinear, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved. While the Poincaré-Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can exhibit chaotic behavior. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite-dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis. In physics, jerk is the third derivative of position with respect to time, and differential equations of the form J(x''', x'', x', x) = 0 are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of three first-order, ordinary, nonlinear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behaviour. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are accordingly called hyperjerk systems. A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits. One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler map, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. 
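The reduction of a jerk equation to three first-order equations can be sketched concretely, using the absolute-value example discussed below (the one with adjustable parameter A, chaotic at A = 3/5). The function name and sample states are illustrative; the check exploits the fact that the derivatives all vanish at the equilibria y = z = 0, |x| = 1:

```python
# Jerk equation x''' + A*x'' + x' - |x| + 1 = 0 rewritten as the
# first-order system  x' = y,  y' = z,  z' = -A*z - y + |x| - 1,
# the standard form handed to a numerical integrator.
def jerk_rhs(state, A=0.6):      # A = 3/5, the chaotic parameter value
    x, y, z = state
    return (y, z, -A * z - y + abs(x) - 1.0)

# Equilibria sit where all three derivatives vanish: y = z = 0, |x| = 1.
print(jerk_rhs((1.0, 0.0, 0.0)))    # (0.0, 0.0, 0.0)
print(jerk_rhs((-1.0, 0.0, 0.0)))   # (0.0, 0.0, 0.0)
```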
Nonlinear jerk systems are in a sense minimally complex systems to show chaotic behaviour; there is no chaotic system involving only two first-order, ordinary differential equations (the system resulting in an equation of second order only). An example of a jerk equation with nonlinearity in the magnitude of x is: x''' + A x'' + x' - |x| + 1 = 0. Here, A is an adjustable parameter. This equation has a chaotic solution for A = 3/5 and can be implemented with the following jerk circuit; the required nonlinearity is brought about by the two diodes: In the above circuit, all resistors are of equal value, except R_A = R/A = 5R/3, and all capacitors are of equal size. The dominant frequency is 1/(2πRC). The output of op amp 0 will correspond to the x variable, the output of 1 corresponds to the first derivative of x, and the output of 2 corresponds to the second derivative. Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system. Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions. An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898 Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent. Chaos theory began in the field of ergodic theory. 
Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Except for Smale, these studies were all directly inspired by physics: the three-body problem in the case of Birkhoff, turbulence and astronomical problems in the case of Kolmogorov, and radio engineering in the case of Cartwright and Littlewood. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing. Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970. Edward Lorenz was an early pioneer of the theory. 
His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz was using a simple digital computer, a Royal McBee LGP-30, to run his weather simulation. He wanted to see a sequence of data again, and to save time he started the simulation in the middle of its course. He did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To his surprise, the weather the machine began to predict was completely different from the previous calculation. Lorenz tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modelling cannot, in general, make precise long-term weather predictions. In 1963, Benoit Mandelbrot found recurring patterns at every scale in data on cotton prices. Beforehand he had studied information theory and concluded noise was patterned like a Cantor set: on any scale the proportion of noise-containing periods to error-free periods was a constant - thus errors were inevitable and must be planned for by incorporating redundancy. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). This challenged the idea that changes in price were normally distributed. In 1967, he published "How long is the coast of Britain? 
Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982 Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory. Biological systems such as the branching of the circulatory and bronchial systems proved to fit a fractal model. In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year, independently Pierre Coullet and Charles Tresser with the article "Iterations d'endomorphismes et groupe de renormalisation" and Mitchell Feigenbaum with the article "Quantitative Universality for a Class of Nonlinear Transformations" described logistic maps. They notably discovered the universality in chaos, permitting the application of chaos theory to many different phenomena. In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh-Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. 
Feigenbaum for their inspiring achievements. In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking disorder among schizophrenics. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles. In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature. Alongside largely lab-based approaches such as the Bak-Tang-Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes, (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg-Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). 
Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws. In the same year, James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public, though his history under-emphasized important Soviet contributions. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick. The availability of cheaper, more powerful computers has broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines (mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, etc.). Chaos theory was born from observing weather patterns, but it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, microbiology, biology, computer science, economics, engineering, finance, algorithmic trading, meteorology, philosophy, anthropology, physics, politics, population dynamics, psychology, and robotics. 
A few categories are listed below with examples, but this is by no means a comprehensive list, as new applications are appearing. Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives. These algorithms include image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on uni-modal chaotic maps, and a large portion of these algorithms use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, the similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. However, many of these DNA-chaos cryptographic algorithms have been shown to be insecure, or the technique applied is suspected to be inefficient. Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model. Chaotic dynamics have been exhibited by passive walking biped robots. For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. 
While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling. In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately. Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior. Researchers have continued to apply chaos theory to psychology. 
For example, in modeling group behavior in which heterogeneous members may behave as if sharing to different degrees what in Wilfred Bion's theory is a basic assumption, researchers have found that the group dynamic is the result of the individual dynamics of the members: each individual reproduces the group dynamics in a different scale, and the chaotic behavior of the group is reflected in each member. Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so. In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r. Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. 
The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternate interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model. By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable. Some say the chaos metaphor--used in verbal theories--grounded on mathematical models and psychological aspects of human behavior provides helpful insights into describing the complexity of small work groups that go beyond the metaphor itself. It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task.
Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships. Traffic forecasting may benefit from applications of chaos theory. Better predictions of when traffic will occur would allow measures to be taken to disperse it before it would have occurred. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (see the plot of the BML traffic model at right). Chaos theory has been applied to environmental water cycle data (aka hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.
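The article repeatedly invokes period doubling under a growing control parameter r, and the Lyapunov exponent as the more definitive test for chaos. Both ideas can be illustrated with the logistic map. The sketch below is a minimal, self-contained illustration, not taken from any of the studies discussed; the function name and parameter choices are invented for this example.

```python
import math

def logistic_lyapunov(r, x0=0.2, n_transient=1000, n_iter=10000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    A positive exponent indicates sensitive dependence on initial
    conditions (chaos); a negative exponent indicates a stable periodic
    orbit. The estimate averages log|f'(x)| = log|r*(1-2x)| along the orbit.
    """
    x = x0
    # Discard transients so the orbit settles onto its attractor.
    for _ in range(n_transient):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

# r = 3.2 lies in the stable period-2 window; r = 4.0 is fully chaotic.
print(logistic_lyapunov(3.2))  # negative: periodic
print(logistic_lyapunov(4.0))  # positive, near ln 2: chaotic
```

A negative exponent at r = 3.2 reflects the stable period-2 regime; a positive exponent near ln 2 at r = 4.0 is the kind of definitive signature that Redington and Reidbord were unable to extract reliably from their clinical data.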
By Will Dunham WASHINGTON (Reuters) - Scientists have identified a new species of great ape on the Indonesian island of Sumatra, finding that a small population of orangutans inhabiting its Batang Toru forest merits recognition as the third species of these shaggy reddish tree dwellers. Researchers said on Thursday these orangutans boast genetic, skeletal and tooth differences from the two other species of orangutan, meriting recognition as a unique third species. That would bring to seven the number of great ape species worldwide aside from people, alongside Africa's eastern and western gorillas, chimpanzees and bonobos. Scientists are worried about the future of the newly identified species, one of humankind's closest relatives. They have labeled the species the Tapanuli orangutan, with the scientific name Pongo tapanuliensis. "There are no more than 800 individuals remaining across three fragmented forest areas," said conservation biologist Matthew Nowak of the Sumatran Orangutan Conservation Programme. In addition to threats like hunting by humans, Nowak said, "Significant areas of the Tapanuli orangutan's range are seriously threatened by habitat conversion for small-scale agriculture, mining exploration and exploitation, a large-scale hydroelectric scheme, geothermal development and agricultural plantations." Orangutan means "person of the forest" in the Indonesian and Malay languages, and it is the world's biggest arboreal mammal. Orangutans are adapted to living in trees, with their arms longer than their legs. They live more solitary lives than other great apes, sleeping and eating fruit in the forest canopy and swinging from branch to branch. "It's pretty exciting to be able to describe a new great ape species in this day and age," said University of Zurich evolutionary geneticist Michael Krützen, adding that most great apes species are listed as endangered or critically endangered. 
"We must do everything possible to protect the habitats in which these magnificent animals occur, not only because of them, but also because of all the other animal and plant species that we can protect at the same time." Orangutans long were considered a single species, but were recognized as having two species in 1996, one in Sumatra and one in Borneo. The new species lives south of what was the known range for Sumatran orangutans. This population was unknown to scientists until two decades ago. In addition to genetic differences from the other species, the researchers said the skeleton of a Tapanuli orangutan that died after being wounded by villagers showed differences in tooth and skull shape. The research was published in the journal Current Biology. (Reporting by Will Dunham; Editing by Sandra Maler)
Dark energy, the mysterious substance thought to be accelerating the expansion of the universe, almost certainly exists despite some astronomers’ doubts, a new study says. After a two-year study, an international team of researchers concludes that the probability of dark energy being real stands at 99.996%. But the scientists still don’t know what the stuff is. “Dark energy is one of the great scientific mysteries of our time, so it isn’t surprising that so many researchers question its existence,” co-author Bob Nichol, of the University of Portsmouth in England, said in a statement. “But with our new work we’re more confident than ever that this exotic component of the universe is real — even if we still have no idea what it consists of.” The Roots of Dark Energy Scientists have known since the 1920s that the universe is expanding. Most assumed that gravity would slow this expansion gradually, or even cause the universe to begin contracting one day. But in 1998, two separate teams of researchers discovered that the universe’s expansion is actually speeding up. In the wake of this shocking find — which earned three of the discoverers the Nobel Prize in Physics in 2011 — researchers proposed the existence of dark energy, an enigmatic force pushing the cosmos apart. Dark energy is thought to make up 73% of the universe, though no one can say exactly what it is. (Twenty-three percent of the universe is similarly strange dark matter, scientists say, while the remaining 4% is “normal” matter that we can see and feel.) Still, not all astronomers are convinced that dark energy is real, and many have been trying to confirm its existence for the past decade or so. Hunting for Dark Energy One of the best lines of evidence for the existence of dark energy comes from something called the Integrated Sachs Wolfe effect, researchers said. 
In 1967, astronomers Rainer Sachs and Arthur Wolfe proposed that light from the cosmic microwave background (CMB) radiation — the thermal imprint left by the Big Bang that created our universe — should become slightly bluer as it passes through the gravitational fields of lumps of matter. Three decades later, other researchers ran with the idea, suggesting astronomers could look for these small changes in the light’s energy by comparing the temperature of the distant CMB radiation with maps of nearby galaxies. If dark energy doesn’t exist, there should be no correspondence between the two maps. But if dark energy is real, then, strangely, the CMB light should be seen to gain energy as it moves through large lumps of mass, researchers said. This latter scenario is known as the Integrated Sachs Wolfe effect, and it was first detected in 2003. However, the signal is relatively weak, and some astronomers have questioned if it’s really strong evidence for dark energy after all. Re-examining the Data In the new study, the researchers re-examine the arguments against the Integrated Sachs Wolfe detection, and they update the maps used in the original work. In the end, the team determined that there is a 99.996% chance that dark energy is responsible for the hotter parts of the CMB maps, researchers said. “This work also tells us about possible modifications to Einstein’s theory of general relativity,” said lead author Tommaso Giannantonio, of Ludwig-Maximilian University of Munich in Germany. “The next generation of cosmic microwave background and galaxy surveys should provide the definitive measurement, either confirming general relativity, including dark energy, or even more intriguingly, demanding a completely new understanding of how gravity works,” Giannantonio added. The team’s findings have been published in the journal Monthly Notices of the Royal Astronomical Society. 
Image courtesy of Flickr, Esoastronomy. This article originally published at Space.com. Read more: http://mashable.com/2012/09/15/dark-energy/
In chemistry, an electron pair or a Lewis pair consists of two electrons that occupy the same molecular orbital but have opposite spins. The electron pair concept was introduced in a 1916 paper of Gilbert N. Lewis. Because electrons are fermions, the Pauli exclusion principle forbids these particles from having exactly the same quantum numbers. Therefore, the only way to occupy the same orbital, i.e. have the same orbital quantum numbers, is to differ in the spin quantum number. This limits the number of electrons in the same orbital to exactly two. The pairing of spins is often energetically favorable and electron pairs therefore play a very large role in chemistry. They can form a chemical bond between two atoms, or they can occur as a lone pair of valence electrons. They also fill the core levels of an atom. Although a strong tendency to pair off electrons can be observed in chemistry, it is also possible that electrons occur as unpaired electrons. In the case of metallic bonding the magnetic moments also compensate to a large extent, but the bonding is more communal so that individual pairs of electrons cannot be distinguished and it is better to consider the electrons as a collective 'ocean'.
- Polyhedral skeletal electron pair theory
- Jemmis mno rules
- Lewis acids and bases
- Electron pair production
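The counting argument above — same orbital, opposite spins, at most two electrons — can be made concrete with a short enumeration. This is an illustrative sketch, not from the article; the function name and structure are invented for this example.

```python
from collections import Counter
from fractions import Fraction

def orbital_states(n):
    """Enumerate allowed quantum-number tuples (n, l, m_l, m_s) for shell n.

    The Pauli exclusion principle forbids two electrons from sharing all
    four quantum numbers, so the only way to share an orbital (n, l, m_l)
    is to differ in spin, limiting each orbital to exactly two electrons.
    """
    states = []
    for l in range(n):                    # l = 0 .. n-1
        for m_l in range(-l, l + 1):      # m_l = -l .. +l
            for m_s in (Fraction(-1, 2), Fraction(1, 2)):
                states.append((n, l, m_l, m_s))
    return states

states = orbital_states(2)
print(len(states))  # 8: the n = 2 shell (2s + 2p) holds 2*n^2 electrons

# Group by orbital: every orbital holds exactly one opposite-spin pair.
per_orbital = Counter((n, l, m_l) for n, l, m_l, _ in states)
print(all(count == 2 for count in per_orbital.values()))  # True
```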
- Principles of Chemistry I: Honors Fall 2015, Unique 49310 Homework, Week 12 1. Formic acid (HCOOH) is a carboxylic acid that deprotonates easily to form the formate ion. a) How do the pi orbitals differ in these two molecules? It will probably be helpful to draw the hybrid orbitals at the central carbon atom to answer this. b) C-O bond lengths in ketones are generally around 1.2 Å, but in ethers are about 1.35 Å. What range of bond lengths would we predict to measure for the C-O bond length in the formate ion? 2. Draw the Lewis dot structure of the aldehyde propionaldehyde (CH3CH2CHO). Determine the hybridization of each atom, and draw the correct three dimensional structure of the molecule. 3. For each of the following molecules define the hybridization of each C, N, and O atom. (You will have to download the pdf file for this one.) 4. What pressure is exerted by 250 g of CO2 gas at 25˚C in a container 1.5 dm3 in volume if it behaves as a perfect gas? What pressure would it exert if it behaves as a van der Waals gas? Comment on any differences. 5. A 2.0 L vessel is filled with N2 at a pressure of 3.0 atm. The tank is connected to a 5.0 L vessel that is under vacuum, and a valve between the two tanks is opened. Determine the total pressure of the two-tank system at equilibrium. You may assume temperature remains constant and that the volume of the apparatus connecting the two tanks is negligible. 6. Compare the root mean square speed of He atoms near the surface of the sun, which is approximately 6000 K, with that of He atoms in an interstellar dust cloud which is approximately 100 K.
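For the gas-law questions (4 and 6), a short numerical sketch can help check hand calculations. This is not part of the assignment's official solution; the van der Waals constants for CO2 below are typical tabulated values, assumed here for illustration.

```python
import math

R_ATM = 0.082057  # L*atm/(mol*K)
R_SI = 8.314      # J/(mol*K)

# Question 4: 250 g of CO2 at 25 C in a 1.5 dm^3 (= 1.5 L) container.
n = 250 / 44.01           # moles of CO2
T = 298.15                # K
V = 1.5                   # L

p_ideal = n * R_ATM * T / V

# Van der Waals constants for CO2 (typical tabulated values, assumed).
a, b = 3.640, 0.04267     # L^2*atm/mol^2, L/mol
p_vdw = n * R_ATM * T / (V - n * b) - a * n ** 2 / V ** 2

# At this high density the attractive (a) and excluded-volume (b)
# corrections matter, so the two predictions differ substantially.
print(f"ideal gas: {p_ideal:.1f} atm, van der Waals: {p_vdw:.1f} atm")

# Question 6: root mean square speed v_rms = sqrt(3RT/M) for He.
M_He = 4.0026e-3          # kg/mol
v_sun = math.sqrt(3 * R_SI * 6000 / M_He)
v_cloud = math.sqrt(3 * R_SI * 100 / M_He)
# The ratio is sqrt(6000/100) = sqrt(60), about 7.7.
print(f"He near the sun: {v_sun:.0f} m/s, in a dust cloud: {v_cloud:.0f} m/s")
```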
Asian carp in Chain Lake | Photo by Thad Cook More than one-third of Lake Erie’s biomass could become Asian carp in the next century if an invasion of the dreaded invasive fish occurs in the Great Lakes as many predict. The latest study from almost a dozen Canadian and American researchers concludes that fish throughout the aquatic food web would be devastated should silver and bighead carp become established in the Great Lakes. The report, published this week in the Transactions of the American Fisheries Society Journal, argued that everything from planktivores like emerald shiners and gizzard shad, to fish-eating predatory fish such as walleye and rainbow trout, would show signs of decline, in some cases by as much as 37 percent. Not all native fishes would decline as a result of an increased Asian carp presence. If the modelling is accurate, smallmouth bass — which prey on juvenile fish — could see an upswing once young Asian carp become more plentiful. Silver and bighead carp, two invasive fish from China that have been overrunning US rivers from Louisiana to Minnesota since the 1970s, are the dominant fish in size and volume throughout many ecosystems they inhabit. Both are highly efficient filter feeders, sucking up phyto- and zooplankton from the water faster than native species (which also rely on these microscopic animals and plants for food) can ingest them. But unlike most fish native to the Great Lakes, Asian carp are highly opportunistic feeders: both have been shown to switch food sources by surviving on detritus and bacteria when other food sources are scarce. Native fish simply cannot do this. Both species have so successfully colonized the Mississippi and Illinois rivers that more than 90 percent of all living matter in these waters is Asian carp. Yet there’s a silver lining to the report’s findings, albeit a dark one. 
The fact that mathematical modelling and structured expert judgement capped the projected biomass at 34 percent is, in a perverse way, good news. While the study confirmed their detrimental impact on forage and predatory fish, the degree of damage was less than in the Mississippi and Illinois rivers and far less than many predicted. “Our results suggest that the lakewide impacts on the Lake Erie food web will not be as great as some have feared,” according to Hongyan Zhang, lead author of the report and a researcher at the University of Michigan. “Fortunately, the [biomass] percentage would not be as high as it is today in the Illinois River, where Asian carp have caused large changes in the ecosystem and have affected human use of the river,” she said in a release. But not so fast. Zhang and her fellow scientists ran a series of models to gauge the potential effects that silver and bighead carp could have on Lake Erie’s food web. To obtain a broad range of potential impacts, researchers structured the model to account for several variables: whether the carp would turn to detritus in the absence of plankton, what impact nutrient loading (a common problem in the warm waters of western Lake Erie) would have on carp feeding and whether Asian carp would turn to eating sport fish larvae if phyto- and zooplankton volumes drop. This last variable is crucial. “We have found no evidence in the published literature that Asian carp consume larval fish or fish eggs,” Zhang notes. However, based on research conducted by Canadian biologist Becky Cudmore and the US Geological Survey, many experts have suggested that Asian carp probably do feed on fish larvae given the flexibility of their diets and their proximity to spawning grounds. So while the overall prognosis isn’t as terrible as many thought, it all rests on Asian carp avoiding fish larvae as food. There’s no record of this happening, but it’s cold comfort to many who know the invasive fish best. 
Should larvae end up a staple food for the invasive fish in Lake Erie, expect populations of walleye and yellow perch to decline dramatically, researchers warned. Lake Erie, the most productive of the Great Lakes’ $7-billion fishery, was chosen as the first modelling subject. Live Asian carp, as well as traces of their environmental DNA, have already been found in the western basin of Lake Erie near the Sandusky and Maumee rivers. The research team intends to run similar studies to project the impacts of Asian carp on lakes Ontario, Huron and Michigan in future. The results of future studies can’t come soon enough — 2015 saw a string of bad news on the carp front. Bumper crops of silver and bighead carp were born in the Upper and Lower Mississippi River systems and the Illinois River. Meanwhile, the fish moved 106 kilometres closer to Lake Michigan via the Illinois River. There, they threaten to bypass electric fences and other obstacles built to stop them in the Chicago Area Waterway System. Asian carp now swim approximately 122 kilometres from the Great Lakes. 
> - It should be mentioned that the input can be a list of characters: > (bin '("1" "1" "1" "0"))? -> 14 I would not especially mention this here. In fact, most built-in functions which expect a symbolic argument also accept something else (like a number or a list) if it makes sense to take their "name". This is more practical than throwing an error, and saves an explicit argument conversion. So we would have to write this for very many functions. In case of 'bin', this happens because it calls 'chop'. So the "official" way is to call 'bin' (and similar functions) with either a number or a string (transient symbol), but the above mechanism implies that you may call it also with something else, if it can be automatically handled as a string internally. > - The result for the example (bin 1234567 4) should be "1 0010 1101 0110 1000 Oops, yes. Thanks! Fixed. > I also suggest that there be a "See also" link to 'bin' and 'pad' from the > 'format' ref.
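For readers unfamiliar with PicoLisp, the grouped-digit behavior discussed in this thread (a second argument giving the group size) can be sketched in Python. `bin_grouped` below is a hypothetical stand-in written for illustration, not PicoLisp's actual implementation of `bin`.

```python
def bin_grouped(n, group=4):
    """Format n in binary, grouping digits in blocks of `group` from the
    right, similar to the grouped output discussed for PicoLisp's `bin`."""
    digits = format(n, "b")
    # Walk from the right so the leftmost group may be shorter.
    chunks = []
    while digits:
        chunks.append(digits[-group:])
        digits = digits[:-group]
    return " ".join(reversed(chunks))

print(bin_grouped(1234567))  # -> 1 0010 1101 0110 1000 0111
print(bin_grouped(14))       # -> 1110
```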
It’s a punchline that sends every 12-year-old boy into a fit of giggles. Now it has been proven to be true. Uranus stinks! Scientists using a huge telescope on Hawaii’s Mauna Kea volcano found the seventh planet from the sun is surrounded by clouds made up of hydrogen sulfide, the gas that smells like rotten eggs and bad flatulence. The study by scientists from the California Institute of Technology, University of Oxford and the University of Leicester was published in the journal Nature Astronomy. “If an unfortunate human were ever to descend through Uranus’ clouds they would be met with very unpleasant and odiferous conditions,” Patrick Irwin of the University of Oxford wrote. Not that they would live long enough to sniff it. “Suffocation and exposure in the negative 200 degrees Celsius atmosphere made of mostly hydrogen, helium and methane would take its toll long before the smell,” Irwin wrote. Despite previous observations by ground telescopes and the Voyager 2 spacecraft, scientists had failed to determine the composition of Uranus’ atmosphere. The new data was obtained by using a spectrometer on the Gemini North telescope in Hawaii. It should help scientists better understand the formation of Uranus and other outer planets. (VOA) Copyright 2018 NewsGram
By: Demetrio Boltovskoy (Editor) 1705 pages, plates with b/w line drawings, b/w distribution maps, tables Compiles detailed information for the specific identification of 30 zooplanktonic groups, and offers a review of species-specific geographic and vertical distribution patterns in the South Atlantic Ocean. This edition covers the abundant literature produced in the last decades, much of which is in the form of unpublished dissertations, internal reports, and articles in local journals of restricted distribution. For each zooplanktonic group the following information is included: comments on its general morphology, traits used in systematics, terminology, specific methodological aspects, classification, geographic distribution of the species in the South Atlantic, and vertical distribution patterns of the species in the South Atlantic.
Mechanisms and Effects of Acid Rain on Environment
Received Date: May 02, 2014 / Accepted Date: May 27, 2014 / Published Date: May 30, 2014
This paper focuses on the mechanisms and effects of acid rain on the environment. Global climate change has many different causes; among them, acid rain is one of the chronic contributors to climate change and to the ecological deformation of our surroundings. In general, precipitation with a pH below 5.6 is considered acid rain. It forms when sulphur oxides and nitrogen oxides react with water during rain, either as gases or as fine particles. Acid rain affects a variety of plants and animals in our environment. As discussed below under methods of preventing acid rain, it can be reduced by cleaning up smokestacks and exhaust pipes, and by using alternative energy sources for vehicles, fuel stations and electricity generation, so that we can live in a safe and suitable atmosphere without fear of global warming and greenhouse gases.
Keywords: Acid rain; Mechanisms; Environments
Acid rain is a broad term used to describe several ways that acids fall out of the atmosphere. A more precise term is acid deposition, which has two parts: wet and dry. Wet deposition refers to acidic rain, fog, and snow. As this acidic water flows over and through the ground, it affects a variety of plants and animals. Dry deposition refers to acidic gases and particles. About half of the acidity in the atmosphere falls back to earth through dry deposition. The wind blows these acidic particles and gases towards buildings, cars, homes and trees. Dry deposited gases and particles can also be washed from trees and other surfaces by rainstorms.
When that happens, the runoff water adds those acids to the acid rain, making the combination more acidic than the falling rain alone. Precipitation with a pH value of less than seven is acidic, owing to acidic oxide emissions in the atmosphere from industries and vehicles. However, rainfall with a pH value of less than 5.6 is considered acid rain. It is formed when sulphur dioxide and nitrogen oxides, as gases or fine particles, react with rain water; particles in the atmosphere combine with water vapour and precipitate as sulphuric acid or nitric acid in rain, snow, or fog. Therefore, the main objective of this paper was to assess the effect of acid rain on the environment and to suggest methods of preventing it; moreover, to review previous work on acid rain and to forecast future work. This is the first phase of the research; a second phase will present experimental results.
How do we measure acid rain?
Acid rain is measured using a pH meter on a scale from 0 to 14, with a pH of 7.0 being neutral, 0 to 7 acidic, and 7 to 14 basic. As the pH value falls, the acidity of the rain increases. Pure water has a pH value of 7; however, normal rain is slightly acidic because different acidic oxide emissions react with it, lowering the pH to about 5.6. According to a 2000 report, the most acidic rain falling in the US has a pH of about 4.3. This acid rain's pH and the chemicals that cause acid rain are monitored by two networks that are supported by EPA. The National Atmospheric Deposition Program measures wet deposition, and its Web site features maps of rainfall pH (follow the link to the isopleths maps) and other important precipitation chemistry measurements. The Clean Air Status and Trends Network (CASTNET) measures dry deposition.
Its web site features information about the data it collects, the measuring sites, and the kinds of equipment it uses.
Components of acid rain
The major components of acid rain are sulphur dioxide/sulphur trioxide, carbon dioxide and nitrogen dioxide dissolved in rain water. These components are deposited as dry and wet depositions. When these pollutants dissolve in water during rain, they form various acids (Figure 1). The chemical reactions of these pollutants are as follows.
• CO2+H2O → H2CO3 (carbonic acid)
• SO2+H2O → H2SO3 (sulphurous acid)
• 2NO2+H2O → HNO2 (nitrous acid)+HNO3 (nitric acid)
Causes for the formation of acid rain
Natural sources and human activities are the main causes of acid rain formation in the world. Among natural sources, emissions from volcanoes and biological processes that occur on the land, in wetlands, and in the oceans contribute acid-producing gases to the atmosphere; effects of acidic deposits have been detected in glacial ice thousands of years old in remote parts of the globe. Among human activities, burning coal, oil and natural gas in power stations to produce electricity, for cooking and to run vehicles gives off oxides of sulphur, oxides of carbon, oxides of nitrogen, residual hydrocarbons and particulate matter into the environment. These emissions mix with water vapour and rainwater in the atmosphere, producing weak solutions of sulphuric and nitric acids, which fall back as acid rain onto oceans, lakes and land.
Areas affected by acid rain due to power plants
Canada and USA: Acid rain is a problem in Eastern Canada and the Northeastern USA. Large smelters in western Ontario and steel processing plants in Indiana and Ohio historically used coal as a source of fuel. The sulfur dioxide produced was carried eastward by the jet stream. Acid rain from power plants in the Midwest United States has also harmed the forests of upstate New York and New England.
In many areas, water and soil systems lack natural alkalinity, such as a lime base, and so cannot neutralize acid. Sulfur dioxide is emitted from industrial processes and the burning of fossil fuels. In particular, ore smelting, coal-fired power generators, and the processing of natural gas result in the greatest emissions of sulfur dioxide. In 2000, Canada emitted 2.4 million tonnes of sulfur dioxide. Moreover, the primary source of oxides of nitrogen is vehicles, which account for about 60% of all nitrogen oxide emissions; emissions also come from furnaces, boilers and engines. In 2000, Canada emitted 2.5 million tonnes of nitrogen oxide. These emissions are thus among the main causes of acid rain all over the world.
Europe and Asia: Industrial acid rain is a substantial problem in China, Eastern Europe and Russia, and in areas downwind from them. The effects of acid rain can spread over a large area, far from the source of the pollution. Research carried out in North America in 1982 revealed that sulphur pollution killed 51,000 people and made about 200,000 people ill. Over the past decades, Norway has suffered great damage from the effects of acid rain. While Norway’s sulphur dioxide emissions have decreased significantly since the 1970s and 1980s, and nitrogen oxide emissions have decreased slightly, the damage from acid rain appears to be worsening in southern Norway. This is because it takes years for ecosystems and the environment to recover from acidification. According to the State of the Environment in Norway, 18 salmon stocks have been lost and 12 are endangered, and salmon have been wiped out of all of the large salmon rivers in southern Norway. Hydrotreating is a catalytic chemical process widely used to remove sulfur compounds from refined petroleum products such as gasoline or petrol, jet fuel, diesel fuel, and fuel oils.
One purpose of removing the sulfur is to reduce the sulfur dioxide emissions that result from using those fuels in automotive vehicles, aircraft, railroad locomotives, ships, oil-burning power plants, residential and industrial furnaces, and other forms of fuel combustion. Another important reason for removing sulfur from the intermediate naphtha streams within a petroleum refinery is that sulfur, even in extremely low concentrations, poisons the noble metal catalysts platinum and rhenium in the catalytic reforming units that are subsequently used to upgrade the octane rating of the naphtha streams.
Effects of acid rain on environment
Harmful to aquatic life: Increased acidity in water bodies stops the eggs of certain organisms (e.g. fish) from hatching, changes population ratios and affects their ecosystems.
Harmful to vegetation: Increased acidity in soil damages vegetation, leaches nutrients from the soil, slowing plant growth and poisoning plants; it creates brown spots on the leaves of trees, impeding photosynthesis, and allows organisms to infect trees through broken leaves.
Affects human health: Causes respiratory problems, asthma, dry coughs, headaches and throat irritations. Toxins leached from the soil by acid rain can be absorbed by plants and animals; when consumed, these toxins severely affect human health, and brain damage, kidney problems and Alzheimer's disease have been linked to people who eat the meat of such "toxic" animals or plants.
Effect on transport: Currently, both the railway industry and the aeroplane industry have to spend a lot of money to repair the corrosive damage done by acid rain. Furthermore, bridges have collapsed in the past due to acid rain corrosion. Acid rain dissolves the stonework and mortar of buildings (especially those made of sandstone or limestone). It reacts with the minerals in the stone to form a powdery substance that can be washed away by rain.
How do we protect our environment from acid rain? There are several ways to reduce acid deposition and precipitation. Clean up smokestacks and exhaust pipes: Almost all of the electricity that powers modern life comes from burning fossil fuels like coal, natural gas, and oil, and the exhaust emissions from these fuels are the main cause of the acid deposition released into the atmosphere. Coal accounts for most US SO2 emissions and a large portion of NOx emissions. Sulfur is present in coal as an impurity, and it reacts with air when the coal is burned to form SO2. In contrast, NOx is formed when any fossil fuel is burned. There are several options for reducing SO2 emissions, including using coal containing less sulfur, washing the coal, and using devices called scrubbers to chemically remove the SO2 from the gases leaving the smokestack, recycling the recovered sulfur for use as a raw material. Power plants can also switch fuels; burning natural gas, for example, creates much less SO2 than burning coal. Certain approaches also have the additional benefit of reducing other pollutants such as mercury and carbon dioxide, and understanding these "co-benefits" has become important in seeking cost-effective air pollution reduction strategies. Finally, power plants can use technologies that don't burn fossil fuels at all. Each of these options has its own costs and benefits, however; there is no single universal solution. Similar to scrubbers on power plants, catalytic converters reduce NOx emissions from cars. These devices have been required in the US for over twenty years, it is important to keep them working properly, and tailpipe restrictions have been tightened recently. The EPA has also made, and continues to make, changes to gasoline that allow it to burn more cleanly, reducing emissions of sulfur dioxide (SO2) and NOx. Use alternative energy sources: There are sources of electricity other than fossil fuels, such as nuclear power, hydropower, wind energy, geothermal energy, and solar energy.
Of these, nuclear and hydropower are used most widely; wind, solar, and geothermal energy have not yet been harnessed on a large scale. There are also alternative energy options for automobiles, including natural gas powered vehicles, battery-powered cars, fuel cells, biofuels such as biodiesel, and combinations of alternative and gasoline power. All sources of energy have environmental costs as well as benefits, and some types of energy are more expensive to produce than others. Nuclear power, hydropower, and coal are the cheapest forms today, but changes in technology and environmental regulation may shift that in the future. All of these factors must be weighed when deciding which energy source to use today and which to invest in for tomorrow. Liming: Powdered limestone is added to water and soil to neutralize acid; this is commonly done in Norway and Sweden. However, it is an expensive and short-term remedy. Acid deposition penetrates deeply into the fabric of an ecosystem, changing the chemistry of the soil as well as the chemistry of the streams and narrowing, sometimes to nothing, the space where certain plants and animals can survive. Because there are so many changes, it takes many years for ecosystems to recover from acid deposition, even after emissions are reduced and the rain returns to normal. For example, while visibility might improve within days, and small or episodic chemical changes in streams within months, chronically acidified lakes, streams, forests, and soils can take years, decades, or (in the case of soils) even centuries to heal. However, there are some things that people can do to bring back lakes and streams more quickly: limestone or lime (a naturally occurring basic compound) can be added to acidic lakes to "cancel out" the acidity. This process is called liming.
Liming tends to be expensive, has to be repeated to keep the water from returning to its acidic condition, and is considered a short-term remedy for specific areas rather than an effort to reduce or prevent pollution. Furthermore, it does not solve the broader problems of changes in soil chemistry and forest health in the watershed, and does nothing to address visibility reduction, materials damage, or risks to human health. However, liming does often permit fish to remain in a lake, so it allows the native population to survive in place until emissions reductions lower the amount of acid deposition in the area. Generally, rainfall that has a pH value of less than 5.6 is considered acid rain. It is formed when sulphur dioxides and nitrogen oxides react with water during rain, or are deposited as gases or fine particles. Acid rain is described in terms of wet and dry deposition: wet deposition refers to acidic rain, fog and snow, whereas dry deposition refers to acidic gases and particles. Acid rain affects a variety of plants and animals in our environment (it is harmful to aquatic life and vegetation, and it affects human health and transport). As discussed above under the methods of preventing acid rain's effects on the environment, we can reduce it by cleaning up smokestacks and exhaust pipes, as well as by using alternative energy sources for vehicles and electricity generation, in order to live in a safe and suitable environment without fear of global warming. Citation: Wondyfraw M (2014) Mechanisms and Effects of Acid Rain on Environment. J Earth Sci Clim Change 5: 204. doi: 10.4172/2157-7617.1000204. Copyright: © 2014 Wondyfraw M.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
These slides will be available shortly after this talk: ES6.io, ReactForBeginners.com, LearnNode.com, CSSGrid.io, Syntax.fm. I'll tweet the link out.

New DOM APIs

Promises: not brand new, but let's do a quick review. Promises are an IOU for something that will happen in the future:
- An AJAX call returning data
- Access to a user's webcam
- Resizing an image

All of these things take time; we simply kick off the process and move along with our lives. But why do we do it this way? Almost everything is asynchronous. Let's say we wanted to do a few things:
- Make coffee
- Drink coffee
- Cook breakfast
- Eat breakfast

Do you need to finish making coffee before you can start breakfast? Would it make sense to wait until coffee is made and consumed before we even start cooking breakfast? No: we want to start one thing, come back to it once it's finished, and deal with the result accordingly!

Most new browser APIs are built on Promises, or Observables (more on Observables in a bit). There are many, many more: PaymentRequest, getUserMedia(), the Web Animation API. It's easy to make your own too!

Christmas-tree callback hell? We get it. Promises are great, so what's the deal with .then()? It's still kinda callback-y: any code that needs to come after the promise still needs to be in the final .then() callback :\

Async + Await

Async + await is still promises, but with a really nice syntax. Let's break it down: the .then() style is hard to read and write; the synchronous PHP is easier to read, the JS is more performant, and I'm not really happy with either. What we want is synchronous-looking code, without the wait. How does it work?
1. Mark your function as async
2. await inside your async fn

Best of both worlds! Why wait for Wes? Remember, async+await is just promises. There are a few options here (which we don't have time for)... Done with async + await! Let's see more new stuff.

How do you know when an element is on screen? With Intersection Observer, you can be alerted when an element is fully or partially scrolled into or out of view.
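The promise and async + await ideas above can be sketched in a few lines. This is a minimal sketch with hypothetical names; the coffee and breakfast timers stand in for real async work like AJAX calls:

```javascript
// Any slow step can be wrapped in a promise we construct ourselves.
function makeCoffee() {
  return new Promise(resolve => setTimeout(() => resolve('coffee'), 10));
}

function cookBreakfast() {
  return new Promise(resolve => setTimeout(() => resolve('breakfast'), 20));
}

// 1. Mark the function as async; 2. await inside it.
async function morning() {
  // Kick both off immediately so they run concurrently,
  // then await each IOU when we actually need its result.
  const coffeePromise = makeCoffee();
  const breakfastPromise = cookBreakfast();
  const coffee = await coffeePromise;
  const breakfast = await breakfastPromise;
  return `${coffee}, then ${breakfast}`;
}

// Synchronous-looking code, without the wait.
morning().then(meal => console.log(meal));
```

Note that both tasks are started before either is awaited, so the total wait is the longest single task, not the sum of both.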
Intersection Observer use cases:
- Animate elements in on scroll
- Play video on scroll-in
- Lazy-load images
- Record views for ads beyond the fold
- Use with sticky headers

Ready for meta? How does it work?!
1. Set up some options
2. Create an empty observer
3. Give it a callback
4. Observe away!

Payment Request API

Every single online store needs to reinvent the checkout form, yet we're all just trying to do the same thing: collect payment info from the user. The Payment Request API is a standardized browser API to collect billing and shipping information from your users. [Google Developers] So, does the browser charge your card? Is it secure? The same, or more!

playsinline and getUserMedia(): not new at all, but Safari didn't give a shit until September 2017 / iOS 11.

Resize Observer: per-element resize events! A gateway drug to element queries! Some cool stuff!

What about support?
- Async + await: via Babel
- Resize Observer: polyfillable, but expensive
- Intersection Observer: has an official W3C polyfill
- Web Payment: easily polyfillable, or fall back to a checkout form
- getUserMedia: is everywhere
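The four Intersection Observer setup steps can be sketched as follows. The selector, class names, and threshold are hypothetical, and the visibility logic is pulled into a plain function so the sketch also loads outside a browser:

```javascript
// 1. Set up some options (hypothetical threshold: fire at 50% visibility).
const options = { root: null, rootMargin: '0px', threshold: 0.5 };

// Decide which class an element should get for a given visibility state.
function classFor(isIntersecting) {
  return isIntersecting ? 'in-view' : 'out-of-view';
}

// 3. The callback the observer fires with a list of intersection entries.
function onIntersect(entries) {
  entries.forEach(entry => {
    entry.target.className = classFor(entry.isIntersecting);
  });
}

// 2. Create the observer and 4. observe away!
// (Browser-only, so guarded for non-browser environments.)
if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver(onIntersect, options);
  document.querySelectorAll('.animate-on-scroll')
    .forEach(el => observer.observe(el));
}
```

The same wiring covers the use cases above; swapping the callback body is all that changes between scroll animation, lazy loading, and ad-view tracking.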
Ben Farmer, University of Kentucky The problem: What comes to mind when you think of extreme environments? The freezing tundra of Antarctica, or maybe the fiery lava flows of a Hawaiian volcanic zone? Those particularly interested in marine science may think of the deep ocean, perhaps the Marianas Trench. Whichever drastic environment you think of, one fascinating thing ties all of these extremes together: life finds a way to thrive in each of them. Earth is home to as many as 1 trillion species, and the bulk of them are microbes (Locey and Lennon 2016). Microbes that are adapted to live in conditions that are inhospitable to most life on Earth are called extremophiles. Archaea and bacteria, the two domains of life aside from eukaryotes, represent the majority of extremophiles. While archaea were long thought to be a type of bacteria since the two appear very similar, archaea are more closely related to humans than bacteria are. Archaea are important model organisms because they have forged a niche in just about every habitat imaginable. Hot springs in Yellowstone National Park were among the first locations where archaea were discovered and owe their vibrant colors to these microbes (Oren and Rodriguez-Valera 2001). Haloarchaea are what I am studying this summer at the College of Charleston. Halo– is a prefix meaning “salt,” and haloarchaea are halophilic, or salt-loving. Perhaps the most famous location where haloarchaea have been found is the Dead Sea – evidently not so dead after all. Haloarchaea are commonly found in water 10 times as salty as the ocean, in conditions known as hypersaline. Our goal is to investigate what adaptations have made that possible. We know that the amino acid composition of halophiles is unusually acidic (Martin et al. 1999). Proteins of halophiles are therefore also unusually acidic, which allows their proteins to properly fold in hypersaline conditions. What we do not know is whether the expression of proteins can change at different salinities.
Better understanding how proteins are adapted in haloarchaea lends itself to understanding extremophiles on a broader scale. Mechanisms that allowed microbes to function in seemingly inhospitable environments were likely responsible for the evolution of life on Earth (Rampelotto 2010). There are many habitats today that mimic extreme environments from both ancient history and current conditions on other planets, such as Mars. Martian soil is incredibly salty, a result of surface water that evaporated long ago (https://dornsife.usc.edu/labs/laketyrrell/life-in-hypersaline-environments/). Halophiles may have once lived in those hypersaline Martian waters. Therefore, knowledge that we gain about haloarchaea adaptations is valuable to our understanding of life both on Earth and elsewhere. Many thanks to my mentor, Dr. Matthew Rhodes, who has introduced me to everything from cell culturing to Python. This project is funded through the National Science Foundation and supported by the Fort Johnson REU Program, NSF DBI-1757899. Locey KJ, Lennon JT (2016) Scaling laws predict global microbial diversity. Proc Natl Acad Sci 113:5970–5975 Martin DD, Ciulla RA, Roberts MF (1999) Osmoadaptation in Archaea. Appl Environ Microbiol 65:1815–1825 Oren A, Rodriguez-Valera F (2001) The contribution of halophilic Bacteria to the red coloration of saltern crystallizer ponds. FEMS Microbiol Ecol 36:123–130 Rampelotto PH (2010) Resistance of microorganisms to extreme environmental conditions and its contribution to astrobiology. Sustainability 2:1602–1623
A View from Emerging Technology from the arXiv "Burning Walls" May Stop Black Hole Formation A new effect may oppose the formation of black holes and explain the mysterious energy of gamma-ray bursts. Black holes are thought to form when stars of sufficient size collapse, creating a force so strong that nothing can oppose it. The result is a region of space with infinite density and a gravitational field so strong that nothing, not even light, can escape. The idea that no known force can oppose the collapse of a large star sits uncomfortably with many physicists. Einstein believed that black holes could not form because the angular momentum of the star would eventually become high enough to stabilize a collapse. Others say that our inability to find a force that opposes collapse says more about our limited understanding of physics than about the existence of black holes. The current thinking is that any star three or four times bigger than the sun ought to form a black hole in a supernova at the end of its life. Anything smaller than that, and the degeneracy pressure of neutrons, which prevents neutrons from being squashed too closely together, can successfully oppose the collapse. Hence the formation of neutron stars. Now Ilya Royzen from the P.N. Lebedev Physical Institute of the Russian Academy of Sciences, in Moscow, has put his finger on an even more powerful force at work in supernovas. He says that quantum chromodynamics predicts that when a collapse overcomes the pressure of neutron degeneracy, another effect comes into play: matter undergoes a phase change. This change is from a hadronic form to a so-called subhadronic form, which is very different to ordinary matter. In subhadronic form, space is essentially empty. So the phase change creates a sudden reduction in pressure, allowing any ordinary matter in the star to implode into this new vacuum. 
The result is a massive increase in the temperature of this matter, to 100 million electron volts or so, creating what Royzen calls a “burning wall” within the supernova. He says that it is this burning wall that stops the formation of a black hole during a supernova of stars up to about four times the mass of the sun, not just the degeneracy pressure of neutrons. Now, 100 million electron volts is several orders of magnitude more energy than any theory of supernovas has predicted so far. And that’s interesting because it ought to produce very powerful gamma rays. Strangely enough, the most powerful gamma-ray bursts are several orders of magnitude more powerful than can be explained by existing theories of supernovas. As Royzen says, it’s hard to resist making the link. If it stands up, he may have put his finger on the mechanism that finally explains the most powerful gamma-ray bursts in the universe, and one of the great modern-day mysteries of astrophysics. Ref: arxiv.org/abs/0906.1929: QCD Against Black Holes?
This issue we see that tides keep species apart – or at least prevent the different strains of the ubiquitous seaweed Fucus spp. from all merging into one species! We also see that iron-rich waters result in changes to benthic communities, and get an insight into how sponges form glass skeletons – a feat of materials engineering that has always amazed me. The tone is decidedly more serious in our Fisheries section, with a link to graphics showing just how much our fish stocks have declined over the last century. Actually ‘declined’ is not nearly a strong enough word for it. ‘Wiped out’ would be a little closer to the mark. Promiscuity on the beach: The intertidal zone is one of the most pronounced ecological gradients on the planet, and is usually visibly banded with specialised communities living in zones down from the extreme high water mark. Despite this, many species on the beach are able to hybridise, a policy which might cost them their unique adaptations. This study looks at different species of the brown seaweed Fucus, and concludes that it is just the presence of the extreme gradient in exposure that keeps the species morphologically distinct. Zardi GI, Nicastro KR, Canovas F, Ferreira Costa J, Serrão EA, et al. (2011) Adaptive Traits Are Maintained on Steep Selective Gradients despite Gene Flow and Hybridization in the Intertidal Zone. PLoS ONE 6(6): e19402. doi:10.1371/journal.pone.0019402 Ironing in the differences: The addition of iron salts to oceanic waters low in nutrients is known to encourage algal growth, and artificial addition of iron has been suggested as a means to encourage carbon dioxide uptake by phytoplankton. In this survey, areas of naturally enriched oceanic waters are compared with adjacent low-iron areas. High iron concentrations correlated with higher benthic biomass and a different species composition. These areas were also found to be less homogeneous than their low-iron counterparts.
Wolff GA, Billett DSM, Bett BJ, Holtvoeth J, FitzGeorge-Balfour T, et al. (2011) The Effects of Natural Iron Fertilisation on Deep-Sea Ecology: The Crozet Plateau, Southern Indian Ocean. PLoS ONE 6(6): e20697. doi:10.1371/journal.pone.0020697 Lean on me: Hydroids can form symbiotic relationships with corals, in which they rely on the coral’s skeleton for support and protection. This necessitates some specific adaptations in the hydroid that permit it to penetrate to the host’s skeleton, but also permit it to release itself from this, so it does not become overgrown. Pantos O, Hoegh-Guldberg O (2011) Shared Skeletal Support in a Coral-Hydroid Symbiosis. PLoS ONE 6(6): e20946. doi:10.1371/journal.pone.0020946 Sub-sea stereo: Dolphins can generate two sound beams simultaneously. The beams are projected in different directions, and are thought to help the dolphin locate objects more precisely. ScienceDaily (June 8, 2011) Oily weight-belt: Copepods are the prey of many fish in surface waters, so they like to hibernate in deep water during the winter, when their own algal food is scarce. This raises a problem for them in adapting their buoyancy, and it appears that they do this by having reserves of omega-3 oils, which undergo a phase change to a dense butter under pressure. This means that the tiny animals are neutrally buoyant both in surface and deep waters, and don’t have to swim to maintain depth. ScienceDaily (June 13, 2011) Too fast for crabs: The green crab (Carcinus maenas) is less able to locate food in fast water currents, and spends longer eating the food it does catch. It is thought that the current both carries scent cues away, making it harder for the crab to find food, and imposes a physical impediment to the crab’s getting about. Robinson EM, Smee DL, Trussell GC (2011) Green Crab (Carcinus maenas) Foraging Efficiency Reduced by Fast Flows. PLoS ONE 6(6): e21025.
doi:10.1371/journal.pone.0021025 Quick change: Rock hinds (Epinephelus adscensionis, Gulf of Mexico) seem similar to our cuckoo wrasse, with the dominant female in a group becoming male if the existing male is removed. Both sexes of rock hind, however, can display temporary markings to defend territory. In addition, the males can also show a distinctive ‘tuxedo’ pattern, with a yellow tail and black and white body. Most of the time, however, both males and females adopt a camouflaged pattern. Kline RJ, Khan IA, Holt GJ (2011) Behavior, Color Change and Time for Sexual Inversion in the Protogynous Grouper (Epinephelus adscensionis). PLoS ONE 6(5): e19576. doi:10.1371/journal.pone.0019576 Don’t bite your clients! Male cleaner fish beat up spouses who get greedy and take a nip out of their clients. [The social and sexual make-up of cleaner fish is again similar to our cuckoo wrasse, with a single dominant male and a harem of up to 16 females; the largest female will change sex if the dominant male is removed] ScienceDaily (June 16, 2011) How sponges grow glass: Sponges split into two families, depending upon whether they have calcareous or silica-based spicules, which form a skeleton. The sponge Suberites domuncula – commonly found on the shells inhabited by hermit crabs – starts growing a silica spicule within a cell. The first structure is an internal canal, down which the cell inserts silicasomes. Pores in the silicasomes allow the access of aquaporin, which initiates hardening of the bio-silicate. Wang X, Wiens M, Schröder HC, Schloßmacher U, Pisignano D, et al. (2011) Evagination of Cells Controls Bio-Silica Formation and Maturation during Spicule Formation in Sponges. PLoS ONE 6(6): e20523. doi:10.1371/journal.pone.0020523 Jellies impact marine food web: Jellyfish are in direct competition for plankton with fish, but have now been shown to have a secondary impact on marine bacteria.
Soluble organic material from jellyfish tends to be used by bacteria for respiration, rather than recycling vital trace nutrients. As a consequence, jellyfish short-circuit the marine food web, resulting in any carbon dioxide that is absorbed by photosynthesis being quickly dumped back into the atmosphere. ScienceDaily (June 6, 2011) Richer at the edges? The boundaries between different biological landscapes (ecotones) provide both opportunities and threats – species living close to an edge may be more exposed to predators, or the area may encourage species mixing and greater diversity. This study looks at how the diversity of gastropods changed at the boundaries between reefs and seagrass (Posidonia and Amphibolis) beds. [In this instance there appears to be no edge effect, with species richness and biomass both quickly converging on the values expected for the bulk reef or seagrass ecosystem, with no peak or trough at the interface.] Tuya F, Vanderklift MA, Wernberg T, Thomsen MS (2011) Gradients in the Number of Species at Reef-Seagrass Ecotones Explained by Gradients in Abundance. PLoS ONE 6(5): e20190. doi:10.1371/journal.pone.0020190 Larder raid: GPS dataloggers attached to adult Peruvian pelicans Pelecanus thagus confirm that they forage at night. Zavalaga CB, Dell’Omo G, Becciu P, Yoda K (2011) Patterns of GPS Tracks Suggest Nocturnal Foraging by Incubating Peruvian Pelicans (Pelecanus thagus). PLoS ONE 6(5): e19966. doi:10.1371/journal.pone.0019966 A coral’s cholesterol count: Corals contain a mix of lipid types which can be split into two classes – storage (an energy reserve) and structural (used to build cells, e.g. cholesterol). This study finds that the balance between these is determined by the requirements of both the coral and their symbiont algae Symbiodinium. Cooper TF, Lai M, Ulstrup KE, Saunders SM, Flematti GR, et al. (2011) Symbiodinium Genotypic and Environmental Controls on Lipids in Reef Building Corals. PLoS ONE 6(5): e20434.
doi:10.1371/journal.pone.0020434 Too salty? One of the genes responsible for sensing how salty the environment is has been identified in the cyanobacterium Synechocystis sp. Liang C, Zhang X, Chi X, Guan X, Li Y, et al. (2011) Serine/Threonine Protein Kinase SpkG Is a Candidate for High Salt Resistance in the Unicellular Cyanobacterium Synechocystis sp. PCC 6803. PLoS ONE 6(5): e18718. doi:10.1371/journal.pone.0018718 Don’t go eating that yellow snow: The organic particulates that rain down from the productive surface waters into the ocean deeps have been, rather poetically, referred to as ‘marine snow’. For the full health warning read Hannah Waters in Culturing Science, August 19, 2010. Microbes in the Gulf of Maine: A survey of everything from viruses to phytoplankton in this area estimates a minimum abundance of cell-based microbes of 1.7×10^25 organisms. This may equate to a species richness of between 10^5 and 10^6 taxa. Li WKW, Andersen RA, Gifford DJ, Incze LS, Martin JL, et al. (2011) Planktonic Microbes in the Gulf of Maine Area. PLoS ONE 6(6): e20981. doi:10.1371/journal.pone.0020981 Invertebrate vaccination: It looks like some invertebrates can be primed against disease, though the mechanism remains unclear. Pope EC, Powell A, Roberts EC, Shields RJ, Wardle R, et al. (2011) Enhanced Cellular Immunity in Shrimp (Litopenaeus vannamei) after ‘Vaccination’. PLoS ONE 6(6): e20960. doi:10.1371/journal.pone.0020960 DNA barcoding delimits species: Fast barcoding techniques prove robust enough to pick out species of bivalve in Chinese waters. Chen J, Li Q, Kong L, Yu H (2011) How DNA Barcodes Complement Taxonomy and Explore Species Diversity: The Case Study of a Poorly Understood Marine Fauna. PLoS ONE 6(6): e21326. doi:10.1371/journal.pone.0021326 A bad place to live? A study that looks into how the local environment influences disease in corals.
The underlying associations between disease prevalence and 14 different predictor variables (biotic and abiotic) are reported. Aeby GS, Williams GJ, Franklin EC, Kenyon J, Cox EF, et al. (2011) Patterns of Coral Disease across the Hawaiian Archipelago: Relating Disease to Environment. PLoS ONE 6(5): e20370. doi:10.1371/journal.pone.0020370 Marlin don’t like being tagged: [Tagging is a vital part of our studies into the lives of the larger marine organisms; it allows us to fill in gaps in our understanding of their life-cycles and behaviour. It also allows us to see which areas of the sea they make most use of, and so plan conservation strategies. This only works if the animal being tagged continues to behave normally, however…] Sippel T, Holdsworth J, Dennis T, Montgomery J (2011) Investigating Behaviour and Population Dynamics of Striped Marlin (Kajikia audax) from the Southwest Pacific Ocean with Satellite Tags. PLoS ONE 6(6): e21087. doi:10.1371/journal.pone.0021087 Eels don’t like being tagged either. Methling C, Tudorache C, Skov PV, Steffensen JF (2011) Pop Up Satellite Tags Impair Swimming Performance and Energetics of the European Eel (Anguilla anguilla). PLoS ONE 6(6): e20797. doi:10.1371/journal.pone.0020797 Boning up: Scientists now know a little more about the bonefish resident in Caribbean waters, with observations of pre-spawning aggregations of fish in deeper water than previously suspected for the species. ScienceDaily (June 7, 2011) Eel today, gone tomorrow: The European eel in Sweden is now thought to be critically endangered, having collapsed to a few percent of its population of only 50 years ago. Management policies in place are thought to be too lenient with local fisheries to permit the species to survive. ScienceDaily (June 7, 2011) Stranded statistics: Genetic barcoding of carcasses from dolphin strandings may not accurately reflect the genetic makeup of the population as a whole, so care must be taken with inferences based on this data.
[Contrast with below] Bilgmann K, Möller LM, Harcourt RG, Kemper CM, Beheregaray LB (2011) The Use of Carcasses for the Analysis of Cetacean Population Genetic Structure: A Comparative Study in Two Dolphin Species. PLoS ONE 6(5): e20103. doi:10.1371/journal.pone.0020103 Stranding stats reflect population: Keeping track of stranded cetaceans may be a cheap and reliable way of following what is happening to the population as a whole. [Contrast with above] ScienceDaily (June 8, 2011) Where are the marine reserves? Reserves established a decade ago have allowed fish populations to recover, so how come so little of the marine system is protected? Bruce Barcott in Environment 360, 16 Jun 2011 Deep Sea Conservation Coalition: Is calling for the United Nations General Assembly to secure a moratorium on high seas bottom trawling and protect deep-water species that are often slow growing and very susceptible to overfishing. Evolve away from that: The ability of corals to survive in a rapidly changing marine ecosystem – threatened by climate change, pollution and exploitation – may depend upon how quickly they can evolve. This paper is a preliminary study of the plasticity of the genomes of Acropora millepora and Acropora palmata. Voolstra CR, Sunagawa S, Matz MV, Bayer T, Aranda M, et al. (2011) Rapid Evolution of Coral Proteins Responsible for Interaction with the Environment. PLoS ONE 6(5): e20392. doi:10.1371/journal.pone.0020392 Antarctic diversity: South Georgia is in a unique location, in the middle of the Antarctic circumpolar current, and south of the polar front. The shelf around South Georgia has now been reported to be the most diverse marine area in the Southern Ocean. The species in the area have not been recorded very often, and many may be rare. In addition, a large number are thought to be close to the edge of their range, and may find it hard to cope with climate change.
Hogg OT, Barnes DKA, Griffiths HJ (2011) Highly Diverse, Poorly Studied and Uniquely Threatened by Climate Change: An Assessment of Marine Biodiversity on South Georgia’s Continental Shelf. PLoS ONE 6(5): e19795. doi:10.1371/journal.pone.0019795 Little skate (Leucoraja erinacea) genome workshop: Dr Bik in Deep Sea News, May 28th, 2011 Fish predictor: A model combining seabottom topography (roughness, curvature, slope etc.) and geography (distance to shore or shelf, water depth) has proven successful in predicting the habitat ranges of three reef fish species. The dominant factor in predicting the range of fish was the distance to shore (or shelf), followed by the tortuosity/complexity (slope of slope) of the surface. Pittman SJ, Brown KA (2011) Multi-Scale Approach for Predicting Fish Species Distributions across Coral Reef Seascapes. PLoS ONE 6(5): e20583. doi:10.1371/journal.pone.0020583 Disease transfer: A study on the US Pacific coast finds evidence that diseases associated with terrestrial animals are finding their way into aquatic populations, based on post-mortems of marine mammals. ScienceDaily (May 25, 2011) Fisheries and exploitation Just how much have our fisheries declined in the last century? The map published by the Guardian shows estimated fish stocks for 1900 and 2000 in the North Atlantic. There is a reference to original work published by Christensen et al. in Fish and Fisheries, 2003, 4, 1-24. Fisheries activity on the Great Barrier Reef: Study showing how fisheries activity has changed since 1990, with increases in the areas protected from trawl fisheries. [The number of boats, and days spent fishing, declined, but the catch per boat or day spent fishing increased]. Grech A, Coles R (2011) Interactions between a Trawl Fishery and Spatial Closures for Biodiversity Conservation in the Great Barrier Reef World Heritage Area, Australia. PLoS ONE 6(6): e21094. doi:10.1371/journal.pone.0021094 Madagascar!
More than the penguins will go hungry, as fish stocks have been plundered by unregulated local fisheries and European fishing fleets during a period of prolonged political unrest. Two thirds of the Madagascan population face hunger. ScienceDaily (June 17, 2011) Are fisheries subsidies bad for everyone? A hindcasting model reviewing how the subsidies system in the North Sea has influenced fisheries profits and ecological stability. The suggestion here is that subsidies had a negative impact on both marine biomass and fisheries profitability. Heymans JJ, Mackinson S, Sumaila UR, Dyck A, Little A (2011) The Impact of Subsidies on the Ecological Sustainability and Future Profits from North Sea Fisheries. PLoS ONE 6(5): e20239. doi:10.1371/journal.pone.0020239 Abalone in trouble: Fisheries for the Northern Abalone in British Columbia (Canada) were closed in 1990 to allow stocks to recover, but so far to very little effect, largely due to poaching. Recent studies, however, show that increases in CO2 will further undermine this species. ScienceDaily (May 26, 2011) Fraudulent fish finder: The European Commission's Joint Research Centre claims that a battery of new techniques based on molecular technologies (genetics, genomics, chemistry and forensics) can answer questions of species and provenance; such advances are necessary to police illegal fisheries. ScienceDaily (May 28, 2011) Nuclear cruise: Scientific expedition to follow the effects of leaked radiation from the Fukushima Daiichi nuclear plant in Japan. Dr Bik in Deep Sea News, June 6th, 2011. Sunscreen threat to marine life? Studies on the freshwater water flea Daphnia magna show that nanoparticulate titanium dioxide (used as the active ingredient of most sunscreens) is toxic, but quickly aggregates into larger, less toxic, particles in water. 
Prolonged exposure of Daphnia to concentrations of 2 mg/L resulted in the build-up of a coating on the animals, causing moulting problems and high mortality for the water fleas. Dabrunz A, Duester L, Prasse C, Seitz F, Rosenfeldt R, et al. (2011) Biological Surface Coating and Molting Inhibition as Mechanisms of TiO2 Nanoparticle Toxicity in Daphnia magna. PLoS ONE 6(5): e20112. doi:10.1371/journal.pone.0020112 Legal litmus test: Perpetrators of local acidification in coastal waters can be brought to book, concludes Stanford University’s Center for Ocean Solutions. ScienceDaily (May 26, 2011) Did bugs pig out? It has been reported that the hydrocarbons released during the Deepwater Horizon spill in the Gulf of Mexico were consumed by bacteria in the water column within 120 days. This model is based on low oxygen concentrations monitored in the Gulf after the spill. Other scientists are less convinced by the inference, citing a lot of uncertainty in the measurements. Methane in particular is thought to be very hard to digest. ScienceDaily (May 29, 2011) Record dead zone predicted: Flooding of the Mississippi river this spring has swept large amounts of nutrients into the Gulf of Mexico, and a record dead zone is predicted as a consequence. (June 14, 2011) More on the dead zone: Graphs and stuff. ScienceDaily (June 14, 2011) Danger: low oxygen! It looks like marine eutrophication may be very sensitive to climate change. Low-oxygen water masses are created as bacteria decompose algae sinking through the water column. Usually they occur at some depth, where the water is cold, so bacterial growth is inhibited. Increased temperature reduces oxygen solubility in seawater and increases deep-water temperatures. As a result there is less O2 to go round, and it is used up more quickly. The result is likely to be significant increases in the size of oceanic dead zones, where higher plants and animals are unable to survive. 
ScienceDaily (June 18, 2011) Ocean acidification impairs hearing in clownfish. ScienceDaily (June 4, 2011) Sea-ice and plankton: The copepod Calanus glacialis is well adapted to its Arctic environment. It stores an enormous amount of fat in its body (relative to its body mass) to survive the Arctic winter. Each spring, as the ice melts, it releases phytoplankton, which are preyed upon by Calanus, who use the bounty to reproduce. Later, as the ice melts entirely, there is a new phytoplankton bloom, which feeds the young Calanus and sets them up for the long winter. This is a lifecycle that is closely timed to the seasons, and it may be badly disrupted by climate change. As Calanus is a vital food source for a wide range of species, from cod to whales, this may have far-reaching consequences. ScienceDaily (June 6, 2011) Coral calcification: To understand how corals will respond to ocean acidification it would be useful to know how they go about producing their calcified tissues. This study is the first to image corals reducing the acidity (increasing the pH) of the seawater adjacent to their calicoblastic epithelium – the area of tissue that creates the calcified material. The reduced acidity in this volume of water makes carbonate minerals less soluble, so easier to precipitate as a skeleton. Venn A, Tambutté E, Holcomb M, Allemand D, Tambutté S (2011) Live Tissue Imaging Shows Reef Corals Elevate pH under Their Calcifying Tissue Relative to Seawater. PLoS ONE 6(5): e20013. doi:10.1371/journal.pone.0020013 CO2 seeps associated with reduced biodiversity: Natural volcanic CO2 seeps in Papua New Guinea show what our oceans may become as a result of our fossil fuel economy. The study reports reductions in biodiversity and recruitment on the reef as pH declined from 8.1 to 7.8, and predicts that reef development would cease at a pH below 7.7. ScienceDaily (May 29, 2011) Sponges in hot water. 
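Those pH figures are worth translating into hydrogen-ion concentrations: because pH is logarithmic, the seep study's drop from 8.1 to 7.8 roughly doubles acidity. A quick sketch of the standard pH arithmetic (ours, not taken from the paper):

```python
# pH is -log10 of the hydrogen-ion concentration, so a small pH drop
# means a large relative increase in [H+].
def h_ion(ph):
    """Hydrogen-ion concentration (mol/L) for a given pH."""
    return 10 ** -ph

# Ratio of [H+] at pH 7.8 vs. pH 8.1 (the decline reported at the seeps)
increase = h_ion(7.8) / h_ion(8.1)
print(f"[H+] increases by a factor of {increase:.2f}")  # roughly doubles
```

The same arithmetic explains why the predicted reef-development cutoff at pH 7.7 represents more than a 2.5-fold increase in acidity relative to pH 8.1.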
Cebrian E, Uriz MJ, Garrabou J, Ballesteros E (2011) Sponge Mass Mortalities in a Warming Mediterranean Sea: Are Cyanobacteria-Harboring Species Worse Off? PLoS ONE 6(6): e20211. doi:10.1371/journal.pone.0020211
Earthquake early warning systems may provide the public with crucial seconds to prepare for severe shaking. This is a map of the blind-zone radius for California. Yellow and orange colors correspond to regions with small blind zones, and red and dark-red colors correspond to regions with large blind zones. For California, a new study suggests upgrading current technology and relocating some seismic stations would improve the warning time, particularly in areas poorly served by the existing network – south of the San Francisco Bay Area to north Los Angeles and north of the San Francisco Bay Area. A separate case study focuses on the utility of low-cost sensors to create a high-density, effective network that can be used for issuing early warnings in Taiwan. Both studies appear in the November issue of the journal Seismological Research Letters (SRL). "We know where most active faults are in California, and we can smartly place seismic stations to optimize the network," said Serdar Kuyuk, assistant professor of civil engineering at Sakarya University in Turkey, who conducted the California study while he was a post-doctoral fellow at the University of California (UC), Berkeley. Richard Allen, director of the Seismological Laboratory at UC Berkeley, is the co-author of this study. Japan started to build its EEW system after the 1995 Kobe earthquake, and the system performed well during the 2011 magnitude 9 Tohoku-Oki earthquake. While the U.S. Geological Survey (USGS)/Caltech Southern California Seismic and TriNet Network in Southern California was upgraded in response to the 1994 Northridge quake, the U.S. is lagging behind Japan and other countries in developing a fully functional warning system. "We should not wait until another major quake before improving the early warning system," said Kuyuk. 
Noting California's recent law that calls for the creation of a statewide earthquake early warning (EEW) system, Kuyuk says "the study is timely and highlights for policymakers where to deploy stations for optimal coverage." The approach maximizes the warning time and reduces the size of "blind zones" where no warning is possible, while also taking into account budgetary constraints. Earthquake early warning systems detect the initiation of an earthquake and issue alerts of possible forthcoming ground shaking. Seismic stations detect the energy from the compressional P-wave first, followed by the shear and surface waves, which cause the intense shaking and most of the damage. The warning time that any system generates depends on many factors, the most important being the proximity of seismic stations to the earthquake epicenter. Once an alert is sent, the amount of warning time is a function of distance from the epicenter: more distant locations receive more time. Areas in "blind zones" do not receive any warning prior to the arrival of the more damaging S-wave. The goal, write Kuyuk and Allen, is to minimize the number of people and key infrastructure within the blind zone. For more remote earthquakes, such as those offshore or in unpopulated regions, larger blind zones can be tolerated. "There are large blind zones between the Bay Area and Los Angeles where there are active faults," said Kuyuk. "Why? There are only 10 stations along the 150-mile section of the San Andreas Fault. Adding more stations would improve warning for people in these areas, as well as people in LA and the Bay Area should an earthquake start somewhere in between," said Kuyuk. Adding stations may not be so simple, according to Allen. 
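The blind-zone geometry described above can be sketched numerically: the alert goes out once the P-wave reaches the nearest station (plus a processing delay), and everywhere the S-wave arrives before that alert is in the blind zone. The wave speeds, focal depth, and processing delay below are illustrative textbook-style values, not figures from the Kuyuk and Allen study:

```python
import math

def blind_zone_radius(station_dist_km, depth_km=8.0,
                      vp=6.1, vs=3.55, processing_s=4.0):
    """Epicentral radius (km) inside which the S-wave arrives before an
    alert can be issued. Assumes a single nearest station at the given
    epicentral distance and a fixed processing delay (illustrative values)."""
    # Time for the P-wave to travel the slant path to the station
    p_travel = math.hypot(depth_km, station_dist_km) / vp
    alert_time = p_travel + processing_s
    # Slant distance the S-wave has covered by the time the alert goes out
    s_reach = vs * alert_time
    if s_reach <= depth_km:      # alert beats the S-wave even at the epicenter
        return 0.0
    return math.sqrt(s_reach**2 - depth_km**2)

# A station 10 km from the epicenter vs. one 50 km away:
print(blind_zone_radius(10))   # ~20 km
print(blind_zone_radius(50))   # considerably larger
```

The example makes the paper's point concrete: moving stations closer to the faults (or trimming the processing delay) directly shrinks the blind zone.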
"While there is increasing enthusiasm from state and federal legislators to build the earthquake early warning system that the public wants," said Allen, "the reality of the USGS budget for the earthquake program means that it is becoming impossible to maintain the functionality of the existing network operated by the USGS and the universities. "The USGS was recently forced to downgrade the telemetry of 58 of the stations in the San Francisco Bay Area in order to reduce costs," said Allen. "While our SRL paper talks about where additional stations are needed in California to build a warning system, we are unfortunately losing stations." In California, the California Integrated Seismic Network (CISN) consists of multiple networks, with 2900 seismic stations at varying distances from each other, ranging from 2 to 100 km. Of these roughly 2900 stations, 377 are equipped to contribute to an EEW system. Kuyuk and Allen estimate 10 km is the ideal distance between seismic stations in areas along major faults or near major cities. For other areas, an interstation distance of 20 km would provide sufficient warning. The authors suggest greater density of stations and coverage could be achieved by upgrading technology used by the existing stations, integrating Nevada stations into the current network, relocating some existing stations and adding new ones to the network. The U.S. Geological Survey (USGS) and the Gordon and Betty Moore Foundation funded this study. A Low-Cost Solution in Taiwan: MEMS accelerometers are tiny sensors used in common devices, such as smart phones and laptops. These sensors are relatively cheap and have proven to be sensitive detectors of ground motion, particularly from large earthquakes. The current EEW system in Taiwan consists of 109 seismic stations that can provide alerts within 20 seconds following the initial detection of an earthquake. 
Wu sought to reduce the time between earthquake and initial alert, thereby increasing the potential warning time. The EEW research group at National Taiwan University developed a P-wave alert device named "Palert" that uses MEMS accelerometers for onsite earthquake early warning, at one-tenth the cost of traditional strong-motion instruments. From June 2012 to May 2013 Wu and his colleagues tested a network of 400 Palert devices deployed throughout Taiwan, primarily at elementary schools to take advantage of existing power and Internet connections and where they can be used to educate students about earthquake hazard mitigation. During the testing period, the Palert system functioned similarly to the existing EEW system, which consists of conventional strong-motion instruments. With four times as many stations, the Palert network can provide a detailed shaking map for damage assessments, which it did for the March 2013 magnitude 6.1 Nantou quake. Wu suggests the relatively low-cost Palert device may have commercial potential and can be readily integrated into existing seismic networks to increase the coverage density of EEW systems. In addition to China, Indonesia and Mexico, plans call for the Palert devices to be installed near New Delhi, India to test the feasibility of an EEW system there. Nan Broadbent | EurekAlert! 
Acetic acid is an important petrochemical that is currently produced from methane (or coal) in a three-step process based on carbonylation of methanol. We report a direct, selective, oxidative condensation of two methane molecules to acetic acid at 180 degrees C in liquid sulfuric acid. Carbon-13 isotopic labeling studies show that both carbons of acetic acid originate from methane. The reaction is catalyzed by palladium, and the results are consistent with the reaction occurring by tandem catalysis, involving methane C-H activation to generate Pd-CH3 species, followed by efficient oxidative carbonylation with methanol, generated in situ from methane, to produce acetic acid.
Introductory Biology Pollen Germination Protocol - We use a solution to germinate the Solanum pollen. It is 10% sucrose in 0.01% boric acid (keep refrigerated, as it will grow mold). We usually put a drop or two of this solution in a depression slide and then tap the anther over the solution to release some pollen. We make a ring of vaseline around the well and lay a coverglass over the vaseline ring to seal the preparation so it won't dry out during the lab. It works well for us - germination is easily seen in 30-40 minutes.
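For scaling the recipe up or down, the % w/v figures convert directly into grams of solute per batch. A small helper for that conversion (ours, not part of the original protocol):

```python
def solution_recipe(volume_ml, sucrose_pct=10.0, boric_acid_pct=0.01):
    """Grams of each solute for a % w/v solution of the given volume.
    Defaults follow the protocol above: 10% sucrose, 0.01% boric acid."""
    return {
        "sucrose_g": sucrose_pct / 100 * volume_ml,
        "boric_acid_g": boric_acid_pct / 100 * volume_ml,
    }

# A 100 mL batch needs 10 g sucrose and 0.01 g boric acid
print(solution_recipe(100))
```

Because the boric acid amount is tiny, it is usually easier in practice to make a more concentrated boric acid stock and dilute it, rather than weigh 0.01 g directly.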
Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still-noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while the less calculation-intensive components, usually involved in building the user interface, are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
Pages in category "Pages with broken file links" - The following 13 pages are in this category, out of 13 total. Excerpts:
- Icosahedron: “In geometry, an icosahedron (Greek: eikosaedron, from eikosi twenty + hedron seat) ...”
- Sphere: “A sphere (from Greek σφαίρα — sphaira, "globe, ball") is a perfectly round geometrical object ...”
- Arc length: “Determining the length of an irregular arc segment—also called rectification of a curve—was ...”
- History of logic: “The history of logic is the study of the development of the science of valid inference (logic) ...”
- Venn diagram: “A Venn diagram, named after the British mathematician and philosopher ...”
- Polygonal number: “In mathematics, a polygonal number is a number represented as dots or pebbles arrayed in the ...”
NOAA's Ocean Acidification Program supports research that focuses on economically and ecologically important marine species. Research of survival, growth, and physiology of marine organisms can be used to explore how aquaculture, wild fisheries, and food webs may change as ocean chemistry changes. A number of NOAA National Marine Fisheries Service Science Centers have state-of-the-art experimental facilities to study the response of marine organisms to the chemistry conditions expected with ocean acidification. The Northeast Fisheries Science Center has facilities at its Sandy Hook, NJ and Milford, CT laboratories; the Alaska Fisheries Science Center at its Newport, OR and Kodiak, AK laboratories; and the Northwest Fisheries Science Center at its Mukilteo and Manchester, WA laboratories. All facilities can tightly control carbon dioxide and temperature. The Northwest Fisheries Science Center can also control oxygen, and can create variable treatment conditions for carbon dioxide, temperature, and oxygen. These facilities include equipment for seawater carbon chemistry analysis, and all use standard operating procedures for analyzing carbonate chemistry to identify the treatment conditions used in experiments. Both deep sea and shallow reef-building corals have calcium carbonate skeletons. As our oceans become more acidic, carbonate ions, which are an important part of calcium carbonate structures, such as these coral skeletons, become relatively less abundant. Decreases in seawater carbonate ion concentration can make building and maintaining calcium carbonate structures difficult for calcifying marine organisms such as coral. Increased levels of carbon dioxide in our ocean can have a wide variety of impacts on fish, including altering behavior, otolith (a fish's ear bone) formation, and young fish's growth. Find out more about what scientists are learning about ocean acidification impacts on fish like rockfish, scup, summer flounder, and walleye pollock. 
Shellfish, such as oysters, clams, crabs and scallops, provide food for marine life and for people, too. Shellfish make their shells or carapaces from calcium carbonate, which contains carbonate ion as a building block. The decreases in seawater carbonate ion concentration expected with ocean acidification can make building and maintaining calcium carbonate structures difficult for calcifying marine organisms like shellfish. This may impact their survival, growth, and physiology, and, thus, the food webs and economies that depend on them. Plankton are tiny plants and animals that many marine organisms, ranging from salmon to whales, rely on for nutrition. Some plankton have calcium carbonate structures, which are built from carbonate ions. Carbonate ions become relatively less abundant as the oceans become more acidic. Decreases in seawater carbonate ions can make building and maintaining shells and other calcium carbonate structures difficult for calcifying marine organisms such as plankton. Changes to the survival, growth, and physiology of plankton can have impacts throughout the food web. Species exposure experiments that measure the response of organisms reared in seawater with manipulated carbonate chemistry are an important way to learn about the potential effects of ocean acidification (OA). Experimental systems that closely mimic the natural environment (e.g. with multiple stressors) can lead to studies with greater ecological relevance. Using a combination of NWFSC and OAP funds, the NWFSC built a facility for conducting species exposure experiments at the Montlake Lab, and has started a new facility at the Mukilteo Field station. The facilities include both rearing aquaria and a lab for carbon chemistry analysis (DIC, alkalinity, spectrophotometric pH). The NWFSC experimental systems are considered “shared-use” facilities, in that the systems are available for NWFSC research teams and outside collaborators as capacity allows. 
In the past, we have worked on collaborative projects with PMEL, University of Washington, Oregon State University, the Suquamish Tribe, Evergreen State University, Cal Poly and Western Washington University. These collaborators often provide external funding for experiments, greatly increasing the research that can be conducted. The goal of this component of the project is to continue the mooring and ship-based monitoring of the Ocean Acidification-impacted carbonate chemistry of US Pacific coastal waters. This objective will be accomplished by: 1) continued operation of the Oregon Ocean Acidification Mooring Program, including deployment and maintenance of the surface moorings at the established Ocean Acidification (OA) node at NH10 with surface MAPCO2 systems, near-bottom moorings with SAMI-CO2 and SAMI-pH systems at the NH10 site and the shelf break in the early stages of the project, followed by a relocation (following validation exercises, see #3) of these assets to a more biologically productive site to the south; 2) measurement support of the West Coast Ocean Acidification Cruise in 2016; and 3) a validation program for moored measurements off the Oregon Coast. The final component will include a parallel deployment of the NOAA-OAP moored assets at NH-10 for 6-12 months following establishment of the OOI node there to ensure consistency between the OAP and OOI platforms, as well as continued opportunistic sample collection for archiving and analyses in Hales’ lab at OSU. Working with the Carbon Group at NOAA’s Pacific Marine Environmental Lab, we propose to continue the now 4-year time series of real-time, high-frequency measurements of critical core OA parameters on the northern Washington shelf, including regular collection of validation samples. 
Specifically, APL-UW will continue to maintain a heavily-instrumented surface mooring (Cha’ba) providing core OA and support parameters 13 miles WNW of La Push, WA, within the Olympic Coast National Marine Sanctuary, just shoreward and south of the Juan de Fuca Eddy – a known harmful algae bloom (HAB) source (Trainer et al., 2009; Hickey et al., 2013). Cha’ba currently houses a MAPCO2 system and many auxiliary sensors including two pH sensors, several CTDs, two oxygen sensors, an ADCP, and a fluorometer/turbidity sensor. Because of budget limitations, lack of ship time, and possessing only one surface mooring, we are only able to deploy the Cha’ba system for 6-8 mo/yr, typically from March-April through September-October. A LOI is attached to this workplan that would allow for continuous 12 mo/yr deployments in order to bring this to the full requirements of NOAA OAP. Cha’ba’s location, in an upwelling zone and near the source waters to Puget Sound via the Strait of Juan de Fuca, offers key insights. While Cha’ba records surface air and seawater conditions with some depth resolution, NANOOS also supports a subsurface profiling mooring 400 m away from Cha’ba, measuring full water-column properties below 20 m, soon to be instrumented (US IOOS funding) with a real-time HAB detection system, pH sensor and profiling CTD, offering broader context and insights on biological responses. Synergies between OA and HAB toxicity have been suggested (Sun et al., 2011). Continuation of the MAPCO2 effort on Cha’ba with these ancillary data will facilitate analysis to further develop our understanding of shelf processes important to OA variability, prediction, and biological responses. 
The study region forms the dominant spawning habitat for most of the biomass of small pelagic fishes in the entire California Current System, is important for wild harvest of diverse marine invertebrates and fishes, plays a significant role in the ocean carbon budget for the west coast, and is in close proximity to the Channel Islands National Marine Sanctuary. The offshore CCE1 mooring is located in the core flow of the California Current itself, and represents a key source of horizontal transport of nutrients, dissolved gases, and organisms from higher latitudes. It also represents the offshore atmosphere-ocean gas exchange that occurs over a large area and influences the carbon budget of this Eastern Boundary Current. The CCE2 mooring is located near Pt. Conception, one of the major upwelling centers off the west coast. This is a site of strong, episodic upwelling events that lead to marked increases in pCO2, declines in pH and dissolved oxygen, and intrusion of waters unfavorable to precipitation of calcium carbonate by some shell-bearing marine organisms. The proposed work will regularly deploy and service taut line, bottom-anchored moorings at the two mooring sites, with sensors designed to measure all core carbonate system variables specified by the PMEL OA Monitoring Network. The data will be validated with shipboard measurements and rigorous QC procedures, and made freely available via Iridium satellite telemetry. Complementary measurements made by partners in this region include Spray glider-based assessments of calcium carbonate saturation state, CalCOFI shipboard hydrographic and plankton food web measurements, process studies conducted by the CCE-LTER (Long Term Ecological Research) site, and a new experimental Ocean Acidification facility. PI: Uwe Send
Your body turns the cereal you ate for breakfast into energy. This energy allows you to run and play or learn at school. The fuel in your parents’ car is burned, creating energy that makes the car run. Energy is all around us and is simply the ability to do work. - The law of conservation of energy says that energy cannot be created or destroyed; only turned into a different kind of energy. For example, your breakfast cereal holds chemical energy (calories). When you eat it, it turns into kinetic energy (movement). - There are many different kinds of energy: chemical energy, nuclear energy, kinetic energy, solar energy, electrical energy, and potential energy. - Potential energy is stored energy waiting to be released. The pressure building up in a bottle of soda is potential energy; so is a spring. - In physics, energy is measured in joules, abbreviated as J. There are other measurements of energy, such as calories or kilowatts. - Energy can be either renewable or nonrenewable. Renewable energy includes wind, solar, and water power or the energy we get from food. We will always have more of these resources. Fossil fuels – used to fuel our cars and heat our homes – were made over millions of years. Once they’re gone, they’re gone. - Energy: the ability to do work - Calorie: a measurement of the energy in food - Joule: a measurement of energy in physics Question and Answer Question: Can energy from the sun be used to power our homes? Answer: The energy from just one hour of sunlight could power the entire world for a year. Right now, we’re looking for new ways to use the Sun’s power. Visit Rutgers University for a video on energy of motion.
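The one-hour-of-sunlight claim can be sanity-checked with rough numbers. The values below (solar constant, Earth's radius, world annual energy use) are approximate figures we supply for illustration; none come from the page itself:

```python
import math

SOLAR_CONSTANT = 1361.0       # W per square meter at the top of the atmosphere (approx.)
EARTH_RADIUS = 6.371e6        # meters
WORLD_ENERGY_PER_YEAR = 6e20  # joules; rough global primary energy use

# Sunlight intercepted by Earth's cross-sectional disc in one hour
one_hour_joules = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2 * 3600

# Ratio near 1 means one hour of sunlight is comparable to a year of world energy use
print(one_hour_joules / WORLD_ENERGY_PER_YEAR)
```

With these rough inputs the ratio comes out close to 1, so the claim is plausible as an order-of-magnitude statement.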
What determines whether a group on a benzene ring will be activating or deactivating, and ortho, para or meta directing?
A. Directing effects
1. Ortho, para directors can stabilize a positive charge. As a result, the intermediates formed when the electrophile reacts at the ortho or para position are relatively more stable than the intermediates formed when the electrophile reacts at the meta position. The more stable intermediates form at a faster rate (recall Hammond's postulate).
a. Alkyl groups are ortho, para directors because they can stabilize a positive charge through ...
b. Groups in which the atom directly attached to the ring has a lone pair are ortho, ...
Open Access
Groundwater pressure changes and crustal deformation before and after the 2007 and 2014 eruptions of Mt. Ontake
© Koizumi et al. 2016
Received: 27 November 2015; Accepted: 26 February 2016; Published: 17 March 2016
Volcanic activity generally causes crustal deformation, which sometimes induces groundwater changes, and both of these phenomena are sometimes detected before volcanic eruptions. Therefore, investigations of crustal deformation and groundwater changes can be useful for predicting volcanic eruptions. The Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology, has been observing groundwater pressure at Ohtaki observatory (GOT) since 1998. GOT is about 10 km southeast of the summit of Mt. Ontake. During this observation period, Mt. Ontake has erupted twice, in 2007 and in 2014. Before the 2007 eruption, the groundwater pressure at GOT clearly dropped, but it did not change before or after the 2014 eruption. These observations are consistent with the crustal deformation observed by Global Navigation Satellite System stations of the Geospatial Information Authority of Japan. The difference between the 2007 and 2014 eruptions can be explained if a relatively large magma intrusion occurred before the 2007 eruption but little or no magma intrusion occurred before the 2014 eruption. Volcanic activity generally causes crustal deformation, which sometimes induces groundwater changes, and both the crustal deformation and the accompanying groundwater changes are sometimes detected before a volcanic eruption or volcano-related seismic activity (e.g., Okada et al. 2000; Sparks 2003; Koizumi et al. 2004). Therefore, investigations of crustal deformation and groundwater changes can be useful for predicting volcanic eruptions. In this study, we examined groundwater changes and crustal deformation associated with the 2007 and 2014 eruptions of Mt. Ontake.
Those associated with the 2007 eruption were clearly different from those associated with the 2014 eruption. We investigated the reason for this difference and its relationship with the mechanisms of these two eruptions of Mt. Ontake. To examine crustal deformation, we used daily positional information from four Global Navigation Satellite System (GNSS) stations operated by the Geospatial Information Authority of Japan (GSI): TKN, MTK, OTK, and HGW, and calculated the baseline distance between pairs of stations. The daily positional data were downloaded from the homepage of GSI (Geospatial Information Authority 2015). We also used tilt data from a tilt station (TNH) operated by the Japan Meteorological Agency. The original groundwater pressure data measured at GOT include diurnal and semidiurnal oscillations caused mainly by earth tides (tidal volumetric strain changes). Atmospheric pressure and precipitation also affect the groundwater pressure. These tidal changes and the effects of atmospheric pressure were estimated and subtracted from the groundwater pressure by using the BAYTAP-G tidal analysis program (Tamura et al. 1991), and the residual values were used as the corrected groundwater pressure (Fig. 2).
Discussion and conclusions
To summarize, the Geological Survey of Japan, AIST, has been observing groundwater pressure at GOT since 1998. The groundwater pressure at GOT clearly dropped before the 2007 eruption, but it did not change before or after the 2014 eruption. These changes are consistent with crustal deformation observed by nearby GNSS stations. The difference between the 2007 and 2014 eruptions can be explained if a relatively large magma intrusion occurred before the 2007 eruption but little or no magma intrusion occurred before the 2014 eruption. NK carried out the data processing and analysis of the groundwater data and drafted the manuscript. TS carried out the groundwater observations and helped draft the manuscript.
YK carried out the tidal analysis of the groundwater pressure and helped draft the manuscript. TO analyzed the GNSS data and helped draft the manuscript. All authors read and approved the final manuscript. We thank the residents of Otaki village for their cooperation in this research. We are grateful to an anonymous reviewer for reviewing our manuscript and for valuable comments. The authors declare that they have no competing interests.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Cyranoski D (2014) Why Japan missed volcano's warning signs. Nature. doi:10.1038/nature.2014.16022
- Earthquake Research Institute, the University of Tokyo (2014) http://www.data.jma.go.jp/svd/vois/data/tokyo/STOCK/kaisetsu/CCPVE/shiryo/130/130_no01.pdf. Accessed 8 Sept 2015
- Geospatial Information Authority of Japan (2015) GNSS data providing service. http://terras.gsi.go.jp/. Accessed 22 Nov 2015
- Imanishi K, Takeo M, Ellsworth WL, Ito H, Matsuzawa T, Kuwahara Y, Iio Y, Horiuchi S, Ohmi S (2004) Source parameters and rupture velocities of microearthquakes in Western Nagano, Japan, determined using stopping phases. Bull Seismol Soc Am 94:1762–1780
- Japan Meteorological Agency (2014) Rep. Coordin. Committee on prediction of volcanic eruption. http://www.data.jma.go.jp/svd/vois/data/tokyo/STOCK/kaisetsu/CCPVE/shiryo/130/130_no01.pdf. Accessed 8 Sept 2015 (in Japanese)
- Kato A, Terakawa T, Yamanaka Y, Maeda Y, Horikawa S, Matsuhiro K, Okuda T (2015) Preparatory and precursory processes leading up to the 2014 phreatic eruption of Mount Ontake, Japan. Earth, Planets Space 67:111. doi:10.1186/s40623-015-0288-x
- Koizumi N, Kitagawa Y, Matsumoto N, Takahashi M, Sato T, Kamigaichi O, Nakamura K (2004) Pre-seismic groundwater level changes induced by crustal deformations related to earthquake swarms off the east coast of Izu Peninsula, Japan. Geophys Res Lett 31:L10606. doi:10.1029/2004GL019557
- Maeda Y, Kato A, Terakawa T, Yamanaka Y, Horikawa S, Matsuhiro K, Okuda T (2015) Source mechanism of a VLP event immediately before the 2014 eruption of Mt. Ontake, Japan. Earth, Planets Space 67:187. doi:10.1186/s40623-015-0358-0
- Matsumoto K, Sato T, Takanezawa T, Ooe M (2001) GOTIC2: a program for computation of oceanic tidal loading effect. J Geod Soc Jpn 47:243–248
- Murase M, Kimata F, Yamanaka Y, Horikawa S, Matsuhiro K, Matsushima T, Mori H, Ohkura T, Yoshikawa S, Miyajima R, Inoue H, Mishima T, Sonoda T, Uchida K, Yamamoto K, Nakamichi H (2016) Preparatory process preceding the 2014 eruption of Mount Ontake volcano, Japan: insights from precise leveling measurements. Earth, Planets Space 68:9. doi:10.1186/s40623-016-0386-4
- Nakamichi H, Kumagai H, Nakano M, Okubo M, Kimata F, Ito Y, Obara K (2009) Source mechanism of very-long-period event at Mt. Ontake, central Japan: response of a hydrothermal system to magma intrusion beneath the summit. J Volcanol Geotherm Res 187:167–177
- Ogiso M, Matsubayashi H, Yamamoto T (2015) Descent of tremor source locations before the 2014 phreatic eruption of Ontake volcano, Japan. Earth, Planets Space 67:206. doi:10.1186/s40623-015-0376-y
- Okada Y, Yamamoto E, Ohkubo T (2000) Coswarm and preswarm crustal deformation in the eastern Izu Peninsula, central Japan. J Geophys Res 105(B1):681–692
- Sparks RSJ (2003) Forecasting volcanic eruptions. Earth Planet Sci Lett 210:1–15. doi:10.1016/S0012-821X(03)00124-9
- Tamura Y, Sato T, Ooe M, Ishiguro M (1991) A procedure for tidal analysis with a Bayesian information criterion. Geophys J Int 104:507–516
- Yamada N, Wakita K (1989) Geological map of Japan 1:200,000. Iida, Geological Survey of Japan
Aerobraking is a spaceflight maneuver that reduces the high point of an elliptical orbit (apoapsis) by flying the vehicle through the atmosphere at the low point of the orbit (periapsis). The resulting drag slows the spacecraft. Aerobraking is used when a spacecraft requires a low orbit after arriving at a body with an atmosphere, and it requires less fuel than does the direct use of a rocket engine. When an interplanetary vehicle arrives at its destination, it must change its velocity to remain in the vicinity of that body. When a low, near-circular orbit around a body with substantial gravity (as is required for many scientific studies) is needed, the total required velocity changes can be on the order of several kilometers per second. If done by direct propulsion, the rocket equation dictates that a large fraction of the spacecraft mass must be fuel. This in turn means the spacecraft is limited to a relatively small science payload and/or the use of a very large and expensive launcher. Provided the target body has an atmosphere, aerobraking can be used to reduce fuel requirements. The use of a relatively small burn allows the spacecraft to be captured into a very elongated elliptic orbit. Aerobraking is then used to circularize the orbit. If the atmosphere is thick enough, a single pass through it can be sufficient to slow a spacecraft as needed. However, aerobraking is typically done with many orbital passes through a higher altitude, and therefore thinner region of the atmosphere. This is done to reduce the effect of frictional heating, and because unpredictable turbulence effects, atmospheric composition, and temperature make it difficult to accurately predict the decrease in speed that will result from any single pass. When aerobraking is done in this way, there is sufficient time after each pass to measure the change in velocity and make any necessary corrections for the next pass. 
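The fuel-mass argument above follows from the Tsiolkovsky rocket equation, under which the propellant fraction is 1 - exp(-Δv/vₑ). A short sketch; the 2 km/s capture burn and 320 s specific impulse are illustrative assumptions, not figures from the article:

```python
import math

def propellant_fraction(delta_v, isp, g0=9.80665):
    """Tsiolkovsky rocket equation: fraction of initial mass that must be
    propellant to achieve delta_v (m/s) with specific impulse isp (s)."""
    ve = isp * g0                       # effective exhaust velocity, m/s
    return 1.0 - math.exp(-delta_v / ve)

# Hypothetical orbit-insertion burn: ~2 km/s with a 320 s bipropellant engine.
# Nearly half the arriving mass would have to be fuel without aerobraking.
print(round(propellant_fraction(2000.0, 320), 2))
```

This is why shaving even a few hundred m/s of Δv off the insertion burn via drag passes translates into a much larger science payload for the same launch mass.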
Achieving the final orbit using this method takes a long time (e.g., over six months when arriving at Mars), and may require several hundred passes through the atmosphere of the planet or moon. After the last aerobraking pass, the spacecraft must be given more kinetic energy via rocket engines in order to raise the periapsis above the atmosphere. The kinetic energy dissipated by aerobraking is converted to heat, meaning that a spacecraft using the technique needs to be capable of dissipating this heat. The spacecraft must also have sufficient surface area and structural strength to produce and survive the required drag, but the temperatures and pressures associated with aerobraking are not as severe as those of atmospheric reentry or aerocapture. Simulations of Mars Reconnaissance Orbiter aerobraking used a force limit of 0.35 N per square meter with a spacecraft cross section of about 37 m², equating to a maximum drag force of about 7.4 N and a maximum expected temperature of 170 °C. The force density (i.e. pressure), roughly 0.2 N per square meter, that was exerted on the Mars Observer during aerobraking is comparable to the aerodynamic resistance of moving at 0.6 m/s (2.16 km/h) at sea level on Earth, approximately the amount experienced when walking slowly. Aerocapture is a related but more extreme method in which no initial orbit-injection burn is performed. Instead, the spacecraft plunges deeply into the atmosphere on a single pass and emerges from it with an apoapsis near that of the desired orbit. Several small correction burns are then used to raise the periapsis and perform final adjustments. This method was originally planned for the Mars Odyssey orbiter, but the significant design impacts proved too costly. Another related technique is that of aerogravity assist, in which the spacecraft flies through the upper atmosphere and utilises aerodynamic lift instead of drag at the point of closest approach.
If correctly oriented, this can increase the deflection angle above that of a pure gravity assist, resulting in a larger delta-v. Although the theory of aerobraking is well developed, utilising the technique is difficult because a very detailed knowledge of the character of the target planet's atmosphere is needed in order to plan the maneuver correctly. Currently, the deceleration is monitored during each maneuver and plans are modified accordingly. Since no spacecraft can yet aerobrake safely on its own, this requires constant attention from both human controllers and the Deep Space Network. This is particularly true near the end of the process, when the drag passes are relatively close together (only about 2 hours apart for Mars). NASA has used aerobraking four times to modify a spacecraft's orbit to one with lower energy, reduced apoapsis altitude, and smaller size. On 19 March 1991, aerobraking was demonstrated by the Hiten spacecraft. This was the first aerobraking maneuver by a deep space probe. Hiten (a.k.a. MUSES-A) was launched by the Institute of Space and Astronautical Science (ISAS) of Japan. Hiten flew by the Earth at an altitude of 125.5 km over the Pacific at 11.0 km/s. Atmospheric drag lowered the velocity by 1.712 m/s and the apogee altitude by 8665 km. Another aerobraking maneuver was conducted on 30 March. In May 1993, aerobraking was used during the extended Venusian mission of the Magellan spacecraft. It was used to circularize the orbit of the spacecraft in order to increase the precision of the measurement of the gravity field. The entire gravity field was mapped from the circular orbit during a 243-day cycle of the extended mission. During the termination phase of the mission, a "windmill experiment" was performed: atmospheric molecular pressure exerts a torque on the windmill-sail-like oriented solar cell wings, and the counter-torque necessary to keep the probe from spinning is measured.
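The walking-speed comparison quoted earlier for Mars Observer (roughly 0.2 N per square meter) can be sanity-checked with the standard dynamic-pressure formula q = ½ρv². A quick sketch; the sea-level air density of 1.225 kg/m³ is an assumed standard-atmosphere value:

```python
def dynamic_pressure(rho, v):
    """Dynamic pressure q = 1/2 * rho * v^2, in N per square meter,
    for fluid density rho (kg/m^3) and speed v (m/s)."""
    return 0.5 * rho * v * v

# Walking slowly at 0.6 m/s through sea-level air (~1.225 kg/m^3):
print(round(dynamic_pressure(1.225, 0.6), 4))  # 0.2205 N/m^2, i.e. roughly 0.2
```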
In 1997, the Mars Global Surveyor (MGS) orbiter was the first spacecraft to use aerobraking as the main planned technique of orbit adjustment. The MGS used the data gathered from the Magellan mission to Venus to plan its aerobraking technique. The spacecraft used its solar panels as "wings" to control its passage through the tenuous upper atmosphere of Mars and lower the apoapsis of its orbit over the course of many months. Unfortunately, a structural failure shortly after launch severely damaged one of the MGS's solar panels and necessitated a higher aerobraking altitude (and hence one third the force) than originally planned, significantly extending the time required to attain the desired orbit. More recently, aerobraking was used by the Mars Odyssey and Mars Reconnaissance Orbiter spacecraft, in both cases without incident. Aerobraking in fiction In Robert A. Heinlein's 1948 novel Space Cadet, aerobraking is used to save fuel while slowing the spacecraft Aes Triplex for an unplanned extended mission and landing on Venus, during a transit from the Asteroid Belt to Earth. In the fourth episode of Stargate Universe, the Ancient ship Destiny suffers an almost complete loss of power and must use aerobraking to change course. The episode ends in a cliffhanger with Destiny headed directly toward a star. The spacecraft Cosmonaut Alexey Leonov in Arthur C. Clarke's novel 2010: Odyssey Two and its film adaptation uses aerobraking in the upper layers of Jupiter's atmosphere to establish itself at the L1 Lagrangian point of the Jupiter – Io system. In the 2004 TV series Space Odyssey: Voyage to the Planets the crew of the international spacecraft Pegasus perform an aerobraking manoeuvre in Jupiter's upper atmosphere to slow them down enough to enter Jovian orbit. In the space simulation sandbox game Kerbal Space Program, this is a common method of reducing a craft's orbital speed. 
It is sometimes humorously referred to as "aerobreaking", because the high drag sometimes causes large craft to break into several parts. In the 2014 film Interstellar, astronaut pilot Cooper uses aerobraking to save fuel and slow the spacecraft Ranger upon exiting the wormhole to arrive in orbit above the first planet. Aerodynamic braking is a method used in landing aircraft to assist the wheel brakes in stopping the plane. It is often used for short runway landings or when conditions are wet, icy or slippery. Aerodynamic braking is performed immediately after the rear wheels (main mounts) touch down, but before the nose wheel drops. The pilot begins to pull back on the stick, applying elevator pressure to hold the nose high. The nose-high attitude exposes more of the craft's surface area to the flow of air, which produces greater drag, helping to slow the plane. The raised elevators also cause air to push down on the rear of the craft, forcing the rear wheels harder against the ground, which aids the wheel brakes by helping to prevent skidding. The pilot will usually continue to hold back on the stick even after the elevators lose their authority and the nose wheel drops, to keep added pressure on the rear wheels. Aerodynamic braking is a common braking technique during landing, which can also help to protect the wheel brakes and tyres from excess wear, or from locking up and sending the craft sliding out of control. It is often used by private pilots, commercial planes, fighter aircraft, and was used by the space shuttles during landings.
Tropical cyclones occur around the equator between 5° and 30° latitude, and have varying names depending upon where in the world they form. In the northern hemisphere, tropical cyclones occur between June and November, peaking in September. In the southern hemisphere, the season lasts from November to April, but storms remain less common there than in the northern hemisphere. More than one tropical storm can occur in the same ocean and region at once. Due to the Coriolis effect, the storm's surface wind is deflected to the right in the northern hemisphere, so the storm rotates counter-clockwise, and to the left in the southern hemisphere, so it rotates clockwise. When the average wind speed of the storm reaches 74 mph, the tropical cyclone is classified by various names around the world, despite resulting from the same process:
- Hurricane - Atlantic and North-East Pacific Oceans
- Typhoon/Super typhoon - North-West Pacific Ocean
- Severe tropical cyclone - South-West Pacific and South-East Indian Ocean
- Severe cyclonic storm - North Indian Ocean
- Tropical cyclone - South-West Indian Ocean
Tropical cyclones most frequently make landfall and cause impacts in the USA and Asia. There are seven basins in which tropical storms form regularly at different times throughout the year; these periods are sometimes referred to as seasons.
- Atlantic basin: June - November
- North-East Pacific basin: Late May/early June - Late October/early November
- North-West Pacific basin: Occurs all year round. Main season: July - November
- North Indian basin: April - December
- South-West Indian basin: Late October/early November - May
- South-East Indian/Australian basin: Late October/early November - May
- Australian/South-West Pacific basin: Late October/early November - May
Observing tropical cyclone tracks
The image below, produced by NASA, shows the actual observed tracks of tropical cyclones over 20 years, from 1985 to 2005, and clearly shows their formation in the zones described above.
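The 5°-30° formation band reflects the Coriolis parameter f = 2Ω sin(φ), which vanishes at the equator (so no large-scale rotation can be organised there) and grows with latitude. A quick numerical sketch, using Earth's rotation rate Ω ≈ 7.2921 × 10⁻⁵ rad/s:

```python
import math

EARTH_ROTATION_RATE = 7.2921e-5  # Omega, rad/s

def coriolis_parameter(latitude_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude), in 1/s."""
    return 2 * EARTH_ROTATION_RATE * math.sin(math.radians(latitude_deg))

for lat in (0, 5, 30):
    print(f"{lat:>2} deg: f = {coriolis_parameter(lat):.2e} 1/s")
```

At 0° the parameter is exactly zero, while by 5° it is already about 1.3 × 10⁻⁵ s⁻¹, enough for incipient storms to start spinning up.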
Cells have a sophisticated system to control and dispose of defective, superfluous proteins and thus to prevent damage to the body. Dr. Katrin Bagola and Professor Thomas Sommer of the Max Delbrück Center for Molecular Medicine (MDC) Berlin-Buch, as well as Professor Michael Glickman and Professor Aaron Ciechanover of Technion, the Technical University of Israel in Haifa, have now discovered a new function of an enzyme that is involved in this vital process. Using yeast cells as a model organism, the researchers showed that a specific factor, abbreviated Cue1, is not only a receptor and activator for a component of the degradation apparatus, but also contributes to ensuring that the defective protein is marked with a molecular tag for degradation (Molecular Cell, doi: 10.1016/j.molcel.2013.04.005)*. Proteins are molecular machines in the cells of an organism. Different types of proteins perform many different functions: They transport materials to their destination, ward off pathogens, enable chemical reactions in the cell and much more. Many proteins are produced in a cell organelle, the endoplasmic reticulum (ER), are then folded and subsequently transported to their destination. Some proteins are only required for a specific, time-limited purpose and must be degraded once their purpose has been served. But errors also frequently occur during production and folding. These defective proteins are not functional and can even harm the organism. Therefore they, too, must be degraded. The cells therefore have a sophisticated system to dispose of defective, superfluous proteins. In the ER there is a special process for protein degradation, known as ER-associated degradation (ERAD). This system contains a number of enzymes that cooperate to ensure that a defective protein is marked with a molecular tag, the molecule ubiquitin. This process is called ubiquitylation. A chain of four to six ubiquitin molecules serves as the degradation signal.
A protein tagged with such a molecular chain is transported to the proteasome, the protein-cleaving machinery of the cell, where it is separated into its components. This ubiquitin-proteasome system is found in all eukaryotic cells; it is ubiquitous. It is one of the most complex cellular systems and protects the body from severe diseases. Defective proteins that escape this system trigger serious diseases such as Alzheimer's, Parkinson's, Huntington's disease, cystic fibrosis or diabetes. The scientist who discovered this protective program is Professor Ciechanover. He received the Nobel Prize in Chemistry in 2004 for this achievement together with Professor Avram Hershko (Technion) and Professor Irwin Rose (University of California, Irvine, USA). Several enzymes must work in concert to facilitate the attachment of a ubiquitin chain to a defective protein. Some of these enzymes are anchored in the membrane of the ER; others, such as the enzyme Ubc7, swim freely inside the cell. A factor called Cue1, which is itself bound to the membrane, is responsible for recruiting Ubc7 and escorting it to the enzymes at the membrane. To achieve this, it has a domain which binds specifically to Ubc7. Another domain of the factor is the so-called CUE domain. Dr. Bagola and Professor Sommer have studied its function in yeast cells together with their colleagues Professor Glickman and Professor Ciechanover. The CUE domain is a ubiquitin-binding domain (UBD). UBDs bind to specific ubiquitin patterns. For example, they can recognize whether one or more ubiquitin molecules have been attached to a protein and how the respective ubiquitin molecules are linked together in chains. The ubiquitin pattern determines which ubiquitin-binding domain binds to which protein and thus determines the subsequent fate of the protein.
Direct impact on molecular chain formation – Signal for protein degradation
The MDC and Technion researchers, who have collaborated closely for many years, showed that the CUE domain of the factor Cue1 binds to ubiquitin chains that are linked together via a specific building block of the individual ubiquitin molecules. These chains subsequently serve as a degradation signal for proteins. In addition, the researchers found that the CUE domain also has a direct impact on the length of the ubiquitin chains: If the CUE domain was lacking or limited in its function due to a mutation, the ubiquitin chains developed more slowly and were shorter in length. Apparently, the CUE domain stabilizes the ubiquitin chains, allowing additional ubiquitin molecules to be attached more easily. In yeast cells, the researchers found that the CUE domain of Cue1 in this way actually affects how effectively the ERAD system can degrade proteins. The researchers suspect that the CUE domain is used specifically for the disposal of proteins which are bound to the ER membrane. However, it seems to have no influence on the degradation of soluble proteins. "Our results show that a ubiquitin-binding domain can also regulate the formation of ubiquitin chains," the researchers said. "This function was previously unknown."
* Ubiquitin binding by a CUE domain regulates ubiquitin chain formation by ERAD E3 ligases. Katrin Bagola (1), Maximilian von Delbrück (1), Gunnar Dittmar (1), Martin Scheffner (3), Inbal Ziv (4), Michael H. Glickman (4), Aaron Ciechanover (5), and Thomas Sommer (1, 2).
(1) Max-Delbrück-Center for Molecular Medicine, Robert-Rössle-Strasse 10, D-13122 Berlin, Germany
(2) Humboldt-Universität zu Berlin, Institute for Biology, Invalidenstr. 43, D-10115 Berlin, Germany
(3) Department of Biology, Konstanz Research School Chemical Biology, University of Konstanz, Konstanz, Germany
(4) Department of Biology and (5) Cancer and Vascular Biology Research Center, The Rappaport Faculty of Medicine and Polak Cancer Center, Technion-Israel Institute of Technology, Haifa 31096, Israel
Artin's Conjecture for Primitive Roots
In his preface to Diophantische Approximationen, Hermann Minkowski expressed the conviction that the "deepest interrelationships in analysis are of an arithmetical nature." Gauss described one such remarkable interrelationship in articles 315–317 of his Disquisitiones Arithmeticae. There, he asked why the decimal fraction of 1/7 has period length 6:
$$ \frac{1}{7} = 0.\overline{142857} $$
whereas 1/11 has period length of only 2:
$$ \frac{1}{11} = 0.\overline{09} $$
Why does 1/99007599, when written as a binary fraction (that is, expanded in base 2), have a period of nearly 50 million 0s and 1s? To answer these questions, Gauss introduced the concept of a primitive root.
Keywords: Elliptic Curve, Primitive Root, Riemann Hypothesis, Sieve Method, Imaginary Quadratic Field
© Springer Science+Business Media New York 2001
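Gauss's observation is that the period length of 1/p in base b equals the multiplicative order of b modulo p (for gcd(b, p) = 1). A short Python check of the small examples above; the base-2 example 1/99007599 is deliberately skipped, since the text says its order runs to tens of millions and iterating that here would be impractical:

```python
from math import gcd

def period_length(base, p):
    """Period of 1/p written in the given base, i.e. the multiplicative
    order of `base` modulo p. Requires gcd(base, p) == 1."""
    assert gcd(base, p) == 1
    k, r = 1, base % p
    while r != 1:
        r = (r * base) % p
        k += 1
    return k

print(period_length(10, 7))   # 6, as Gauss observed: 0.142857 repeating
print(period_length(10, 11))  # 2: 0.09 repeating
```

10 is a primitive root modulo 7 (its order, 6, equals 7 - 1), which is exactly the notion Artin's conjecture quantifies.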
That's my simple question! Is there some reason it shouldn't?
Hi bystander - I read that infrared could not penetrate the ocean but caused evaporation on the surface skin. If that is true then infrared would not be able to heat the ocean like short wave radiation does. Is that true? Thanks.
The enthalpy of vaporization has to come from somewhere --- you don't suppose IR absorption by water might be the source?
Absolutely, it is the source in this example. But the energy from the incoming IR is not retained by the ocean as heat - it's released through evaporation into the atmosphere. So is it correct to think that IR cannot heat (i.e. increase the temperature of) the ocean?
No. IR does heat water, but at a very low rate, and with not much energy relative to the heat capacity of water. Consider also conduction of heat from the "warm" surface to the cooler water layer below the surface. To quote a faculty member from grad school days, "Every calorie looks the same once it's off the bus." Assigning origins and destinations to energies can be misleading.
Thanks Doug. The IPCC estimates that a doubling of CO2 in the atmosphere would increase radiative forcing by 3.7 W/m² assuming clear-sky conditions. This could only heat water in the top few molecules initially if water is as impermeable to infrared as I understand. Then, as Bystander says, any heating at the surface which does not evaporate surface skin molecules could be conducted to (or just mixed with) deeper levels. Could you point me in the direction of calculations as to what the magnitude of this heating might be? I completely understand that measuring this through real-world observations must be very difficult as the calories are off the bus by then, but someone must have modelled this or tested this in a controlled environment?
The IPCC spins at best and lies at worst. Impeached. Quick and dirty demonstration?
Two liter soda bottle filled with water; measure T in the morning; leave in sun all day; measure T in late afternoon.
But sunlight is short wave radiation! Bystander - the experiment you suggest would only tell me what I already know: that short wave radiation heats water. I want to understand if infrared can heat water!
Bystander - I found the following passage on a CAGW skeptic's website (http://climaterealists.com/index.php?id=4245): "However the effect of downwelling infrared is always to use up all the infrared in increasing the temperature of the ocean surface molecules whilst leaving nothing in reserve to provide the extra energy required (the latent heat of evaporation) when the change of state occurs from water to vapour. That extra energy requirement is taken from the medium (water or air) in which it is most readily available. If the water is warmer then most will come from the water. If the air is warmer then most will come from the air. However over the Earth as a whole the water is nearly always warmer than the air (due to solar input) so inevitably the average global energy flow is from oceans to air via that latent heat of evaporation in the air and the energy needed is taken from the water. This leads to a thin (1mm deep) layer of cooler water over the oceans worldwide and below the evaporative region that is some 0.3C cooler than the ocean bulk below." The last sentence does seem to be validated by this paper: http://www.nature.com/nature/journal/v358/n6389/abs/358738a0.html But is the rest fair?
Doug - I'm not sure I'm reading your graph correctly, but it does seem to show that sunlight at sea level is a mixture of UV, visible and infrared wavelengths. If so, putting a bottle of water in the sunlight and measuring the temp change over a day isn't going to tell me anything about the effectiveness of IR in heating water.
The shortwave (visible) is going right on through. That's observation one.
A day in the sun will bring the bottle up to 60–70 °C. That's observation two. From the Planck radiation law, 75% of the energy in sunlight is transmitted at wavelengths longer than the peak intensity wavelength, 500 nm (green).

Dear Bystander - I appreciate your patience with me on this and I am grateful. That said, a bottle-in-the-sunlight "experiment" is a blind alley if we're trying to understand the oceans. Low evaporation rates, the conduction of heat through the plastic of the bottle and diffraction of the light through the curved surfaces of the bottle all make such an experiment an extremely poor way to model the ocean temperature / IR relationship. Furthermore, your comment on Planck radiation law calculations does not address the question of whether incoming IR causes evaporation in the ocean skin layer rather than an increase in the temperature of the ocean. To get back on track: do you agree that this graph is representative of the absorption spectrum of liquid water? If so, it's clear that IR does not penetrate below 1 cm, and the wavelengths of back radiation from the atmosphere penetrate much less than that (e.g. 10^-5 m). As per the Nature article I referenced above (http://www.nature.com/nature/journal/v358/n6389/abs/358738a0.html), the top 1 mm of the ocean is typically 0.3 °C cooler than the bulk mixed layer. Forgive me for being slow - but if most of the total IR radiation and all of the back radiation from the atmosphere penetrates less than the depth of the ocean skin layer, which is cooler than the water below, how can IR increase the temperature of the ocean? These observations suggest to me that almost all of the IR radiation incident on the ocean causes evaporation rather than an increase in the temperature of the ocean. Where am I going wrong?

... not "wrong," into a "semantic ditch," perhaps. If you're going to give me all the solar radiation that penetrates further than 1 mm by the Kebes plot (shorter than 2 μm), you've given me 80–90% of the IR.
If you define IR as only that radiation that is absorbed in 1 mm or less, and ignore the 0.8–2 μm gap between visible and IR acknowledged by a specific argument, you're losing a lot of energy.

Thanks Bystander. I think I get where you're coming from. I'll give you whatever IR you want! Thanks to this discussion I think I can now refine my original question a little more clearly: can an increase in atmospheric back radiation (from, say, increases in atmospheric concentrations of greenhouse gases) lead to increases in ocean temperatures? For anyone else that is interested in this topic I found this set of articles (and associated comments) really useful. As with most other discussions in climatology the answer isn't simple . . .

What is "back radiation?"

It's downward longwave radiation: IR radiating from the atmosphere down to the surface of the earth, not IR direct from the sun.

Heat moves from (hotter/same T/cooler) body/system to (hotter/same T/cooler) body/system?

Heat moves from hot to cold, obviously. To summarise (and no doubt over-generalise!), one side of the argument seems to posit that solar radiation heats the ocean, but atmospheric radiation only heats the top few molecules; so increased Downward Longwave Radiation (DLR) is unable to transfer any additional heat into the bulk of the ocean, and instead the energy goes into evaporating the skin layer into water vapor. The other side of the argument seems to postulate that additional downward longwave radiation must alter the IR flux at the surface of the ocean, leading to more IR being trapped in lower ocean layers and maintaining a higher water temperature than would be the case with less downward longwave radiation; so the downward longwave radiation doesn't heat the ocean, but it slows the cooling that would happen with less DLR. And quite frankly I'm confused! What do you think? I should also add that I'm very much aware that none of the stuff I've read on this is in the peer-reviewed literature.
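The Planck-law figure quoted earlier in the thread, that roughly 75% of solar blackbody energy lies at wavelengths longer than the ~500 nm intensity peak, is easy to check numerically. The sketch below is my own, not from the thread; it integrates the dimensionless Planck spectrum with Simpson's rule:

```python
import math

# Fraction of blackbody power emitted at wavelengths longer than lambda0:
#   f(x) = (15 / pi^4) * integral_0^x t^3 / (e^t - 1) dt,
# with x = h*c / (lambda0 * k * T).  Smaller t corresponds to longer
# wavelengths, so integrating 0..x covers all wavelengths above lambda0.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
T = 5778.0      # effective solar surface temperature, K
lam0 = 500e-9   # 500 nm, near the solar intensity peak

x = h * c / (lam0 * k * T)

# Composite Simpson's rule on (0, x]; the integrand -> 0 as t -> 0.
n = 10000
dt = x / n
total = 0.0
for i in range(1, n + 1):
    t = i * dt
    w = 4 if i % 2 else 2
    if i == n:
        w = 1
    total += w * t**3 / math.expm1(t)
total *= dt / 3

frac_long = 15 / math.pi**4 * total
print(f"fraction beyond 500 nm: {frac_long:.2f}")  # prints "fraction beyond 500 nm: 0.75"
```

The result, about 0.75, matches the figure quoted in the thread for a 5778 K blackbody.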
Wozniak et al (2013), "Light Absorption in Sea Water", looks like it might be just what I need - but it's paywalled . . . any other suggestions would be gratefully received.

Not being "flip" with you --- just wanted to be sure we're both working from the same initial set of ideas/postulates/principles. Welcome to the wonderful world of energy "balances" in non-equilibrium systems. The system we're "analyzing" (hah!) has as heat sources the sun, ~10^-4 steradians at ~5800 K, or 1–1.3 kW/m² at the earth's surface, and a crustal heat leak of 10–30 mW/m², negligible. The heat sink is 4π steradians at ~4 K, the CMB. What else do we know? Some fraction of incident solar radiation is reflected, what fraction is subject to some uncertainty; some fraction is transmitted, very small through the atmospheric "halo", but enough to illuminate an otherwise totally eclipsed moon; and some fraction is absorbed by atmo-, hydro- and lithospheres, exchanged by conduction, convection and radiation, and radiated to the CMB. What are the exchange rates for each mechanism among the spheres? How good are the models? How good are the measurements? How do we know which "bus" which calorie came in on?

Thanks Bystander, that makes me feel better, but I can't help feeling disappointed that certain reputable scientists describe the science as settled. It seems then that comments within the CAGW community that the recent plateau / pause / hiatus in global mean atmospheric temperatures might be explained by the "missing heat" being "trapped" in the deep oceans are not a scientific conclusion from research but merely a hypothesis. Furthermore, it seems proving that rising atmospheric CO2 concentrations can heat the ocean is also very difficult. It remains only a hypothesis.
Using a powerful data-crunching technique, Johns Hopkins researchers have sorted out how a protein keeps defective genetic material from gumming up the cellular works. The protein, Dom34, appears to “rescue” protein-making factories called ribosomes when they get stuck obeying defective genetic instructions, the researchers report in the Feb. 27 issue of Cell. “We already knew that binding to Dom34 makes a ribosome split and say ‘I’m done,’ and that without it, animals can’t survive,” says Rachel Green, Ph.D., a professor in the Department of Molecular Biology and Genetics at the Johns Hopkins University School of Medicine and a Howard Hughes Medical Institute investigator. “In this study, we saw how the protein behaves in ‘real life,’ and that it swoops in only when ribosomes are in a very particular type of crisis.” Ribosomes use genetic instructions borne by long molecules called messenger RNA to make proteins that cells need to get things done. Normally, ribosomes move along strands of messenger RNA, making proteins as they go, until they encounter a genetic sequence called a stop codon. At that point, the protein is finished, and specialized recycling proteins help the ribosome disconnect from the RNA and break up into pieces. Those pieces later come together again on a different RNA strand to begin the process again. From Green’s earlier work with Dom34, it appeared that the protein might be one of the recycling proteins that kicks in at stop codons. To see if that was the case, Green used a method for analyzing the “footprints” of ribosomes developed at the University of California, San Francisco. In 2009, scientists there reported they had mashed up yeast (a single-celled organism that is genetically very similar to higher-order animals) and dissolved any RNA that wasn’t protected inside a ribosome at the time. They then took the remaining bits of RNA — those that had been “underfoot” of ribosomes — and analyzed their genetic makeup. 
That sequence data was then matched to the messenger RNA it came from, giving the researchers a picture of exactly which RNA — and thus, which genes — were being turned into protein at a given moment in time. Green and postdoctoral fellow Nick Guydosh, Ph.D., adapted this method to see what Dom34 was up to. Guydosh wrote a computer program to compare footprint data from yeast with and without functioning Dom34 genes. The program then determined where on messenger RNAs the ribosomes in cells without Dom34 tended to stall. It was at these points that Dom34 was rescuing the ribosomes in the normal cells, Guydosh says. “What many of these ‘traffic jams’ had in common was that the RNA lacked a stop codon where the ribosome could be recycled normally,” he says. For example, some of the problem messenger RNAs were incomplete — a common occurrence, as chopping up messenger RNAs is one way cells regulate how much of a protein is produced. In others, the RNA had a stop codon, but something strange and unexpected was going on in these latter cases: The ribosomes kept going past the place where the stop codon was and went into a no man’s land without protein-making instructions. “Ribosomes kept moving but stopped making protein, at least for a time,” Guydosh says. “As far as we know, this ‘scanning’ activity has never been seen before — it was a big surprise.” “What these results show us is why we need Dom34 to survive: It’s the only protein that can rescue ribosomes stuck on RNAs,” says Green. “Without it, cells eventually run out of the ribosomes they need to make protein.” This study was funded by the National Institute of General Medical Sciences (grant number R01GM059425), the Howard Hughes Medical Institute and the Damon Runyon Cancer Research Foundation. 
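The footprint comparison described above can be caricatured in a few lines of code. Everything below, the data, the threshold and the function name, is hypothetical: a toy version of the idea rather than the study's actual pipeline:

```python
# Hypothetical footprint counts per mRNA position (position -> read count).
knockout = {10: 3, 55: 120, 90: 4, 130: 60}  # cells without functional Dom34
wildtype = {10: 2, 55: 8, 90: 5, 130: 4}     # cells with Dom34

def stall_sites(ko, wt, fold=5.0, min_reads=10):
    """Positions where footprints pile up only when Dom34 is absent."""
    sites = []
    for pos, ko_count in ko.items():
        wt_count = wt.get(pos, 0)
        # Flag a position if the knockout signal is both well-covered and
        # much larger than the wild-type signal at the same position.
        if ko_count >= min_reads and ko_count >= fold * max(wt_count, 1):
            sites.append(pos)
    return sorted(sites)

print(stall_sites(knockout, wildtype))  # prints "[55, 130]"
```

In this toy, positions 55 and 130 are the candidate rescue sites: places where ribosomes accumulate only when Dom34 is missing.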
Shawna Williams | newswise
Bringing the elegance of C# EventHandler to Python

The concept of events is heavily used in GUI libraries and is the foundation for most implementations of the MVC (Model, View, Controller) design pattern. Another prominent use of events is in communication protocol stacks, where lower protocol layers need to inform upper layers of incoming data and the like. Here is a handy class that encapsulates the core of event subscription and event firing and feels like a "natural" part of the language.

The package has been tested under Python 2.6, 2.7, 3.3 and 3.4.

The C# language provides a handy way to declare, subscribe to and fire events. Technically, an event is a "slot" to which callback functions (event handlers) can be attached - a process referred to as subscribing to an event. To subscribe to an event::

    >>> def something_changed(reason):
    ...     print "something changed because %s" % reason
    ...
    >>> from events import Events
    >>> events = Events()
    >>> events.on_change += something_changed

Multiple callback functions can subscribe to the same event. When the event is fired, all attached event handlers are invoked in sequence. To fire the event, perform a call on the slot::

    >>> events.on_change('it had to happen')
    something changed because it had to happen

Usually, instances of :class:`~events.Events` will not hang around loosely as above, but will typically be embedded in model objects, like here::

    class MyModel(object):
        def __init__(self):
            self.events = Events()
            ...

Similarly, view and controller objects will be the prime event subscribers::

    class MyModelView(SomeWidget):
        def __init__(self, model):
            ...
            self.model = model
            model.events.on_change += self.display_value
            ...

        def display_value(self):
            ...

A longer interactive session shows introspection of events and their handlers::

    >>> from events import Events
    >>> events = Events()
    >>> print events
    <events.events.Events object at 0x104e5d5f0>
    >>> def changed():
    ...     print "something changed"
    ...
    >>> def another_one():
    ...     print "something changed here too"
    ...
    >>> def deleted():
    ...     print "something got deleted!"
    ...
    >>> events.on_change += changed
    >>> events.on_change += another_one
    >>> events.on_delete += deleted
    >>> print len(events)
    2
    >>> for event in events:
    ...     print event.__name__
    ...
    on_change
    on_delete
    >>> event = events.on_change
    >>> print event
    event 'on_change'
    >>> print len(event)
    2
    >>> for handler in event:
    ...     print handler.__name__
    ...
    changed
    another_one
    >>> print event[0]
    <function changed at 0x104e5c230>
    >>> print event[0].__name__
    changed
    >>> print len(events.on_delete)
    1
    >>> events.on_change()
    something changed
    something changed here too
    >>> events.on_delete()
    something got deleted!

Note that by default :class:`~events.Events` does not check whether an event that is being subscribed to can actually be fired, unless the class attribute :attr:`__events__` is defined. This can cause a problem if an event name is slightly misspelled. If this is an issue, subclass :class:`~events.Events` and list the possible events, like::

    class MyEvents(Events):
        __events__ = ('on_this', 'on_that', )

    events = MyEvents()

    # this will raise an EventsException as `on_change` is unknown to MyEvents:
    events.on_change += changed

You can also predefine events for a single :class:`~events.Events` instance by passing an iterable to the constructor::

    events = Events(('on_this', 'on_that'))

    # this will raise an EventsException as `on_change` is unknown to events:
    events.on_change += changed

You can also leverage both the constructor argument and the :attr:`__events__` attribute to restrict events for specific instances::

    class DatabaseEvents(Events):
        __events__ = ('insert', 'update', 'delete', 'select')

    audit_events = ('select',)

    AppDatabaseEvents = DatabaseEvents()

    # only knows the 'select' event from DatabaseEvents:
    AuditDatabaseEvents = DatabaseEvents(audit_events)

Events is on PyPI, so all you need to do is::

    pip install events

To run the test suite::

    python setup.py test

Source code is available at GitHub.
Based on the excellent recipe by Zoran Isailovski, Copyright (c) 2005.
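The slot mechanism described above can be reduced to a short sketch. This is my own illustrative reconstruction, not the package's actual source: attribute access auto-creates a slot, `+=` subscribes, `-=` unsubscribes, and calling the slot fires each handler in order.

```python
class _Slot:
    """A single event: an ordered list of handlers that can be called."""

    def __init__(self, name):
        self.__name__ = name
        self._handlers = []

    def __iadd__(self, handler):
        self._handlers.append(handler)
        return self

    def __isub__(self, handler):
        self._handlers.remove(handler)
        return self

    def __call__(self, *args, **kwargs):
        # Fire the event: invoke every subscribed handler in sequence.
        for handler in list(self._handlers):
            handler(*args, **kwargs)

    def __len__(self):
        return len(self._handlers)


class MiniEvents:
    """Auto-creates a _Slot the first time an event name is accessed."""

    def __getattr__(self, name):
        slot = _Slot(name)
        setattr(self, name, slot)  # cache so every access sees the same slot
        return slot


events = MiniEvents()
log = []
events.on_change += log.append          # subscribe
events.on_change("it had to happen")    # fire
print(log)                              # prints "['it had to happen']"
```

The `__iadd__`/`__isub__` pair is what makes `events.on_change += handler` work: Python fetches the slot, mutates it in place, and assigns it back to the same attribute.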
Magnetic fields are generated in space around electric currents. And, like negative and positive electric charges, magnetic fields have two poles: north- and south-seeking. At the atomic level, electric current is generated by electrons and protons (electrically charged particles) as they spin. As each particle spins, a small magnetic field is generated whose effect is cancelled out on the super-atomic scale, because in most materials these particles spin in random orientations. In magnetized materials, however, electron spins are organized and aligned to reinforce each other, resulting in an observable magnetic field. Some materials, notably iron and its alloys, are more prone to magnetic effects than others.

Keywords: magnetic field, magnetized material, permanent magnet, magnetic field strength, electric motor
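The opening statement, that magnetic fields are generated around electric currents, can be made quantitative with a standard textbook formula: the field at the center of a circular current loop is B = mu0 * I / (2R). The numbers below are illustrative values of my own choosing, not figures from the entry:

```python
import math

# Field at the center of a circular current loop: B = mu0 * I / (2 * R).
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
I = 1.0                   # loop current, A
R = 0.05                  # loop radius, m

B = mu0 * I / (2 * R)
print(f"B at loop center: {B:.2e} T")  # prints "B at loop center: 1.26e-05 T"
```

For comparison, Earth's field is a few times 1e-5 T, so a one-ampere loop of this size produces a field of roughly that magnitude at its center.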
Species Detail - Common Redshank (Tringa totanus)

Species information displayed is based on all datasets.

Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
Protected Species: Wildlife Acts
Threatened Species: Birds of Conservation Concern >> Red List
1 January (recorded in 2003)
31 December (recorded in 2010)

National Biodiversity Data Centre, Ireland, Common Redshank (Tringa totanus), accessed 21 July 2018, <https://maps.biodiversityireland.ie/Species/10041>
- Washing Carbon Out of the Air
- Narrated by: Mark Moran
- Length: 18 mins
- Release date: 06-02-10
- Language: English
- Publisher: Scientific American

Klaus S. Lackner, a professor of geophysics at Columbia University, reports on how machines could one day absorb carbon dioxide from the atmosphere, reducing global warming. This article was published in the June 2010 edition of Scientific American. ©2009 Scientific American
The solar system has inner and outer regions, the inner being made up of the sun and the rocky planets Mercury, Venus, Earth and Mars, and the outer being made up of the giant planets Jupiter, Saturn, Uranus and Neptune, along with asteroids and miscellaneous space debris. Although these bodies are vast distances from each other, each planet has very distinct effects on the others. The position, physical qualities and orbit of each planet affect the Earth in several measurable ways.

Big Bang Theory

An estimated 13.8 billion years ago, according to Visionlearning, an organization that the National Science Foundation funds, the universe expanded from an extremely hot, dense state in what is known as the big bang. The big bang theory states that this event produced the matter and energy that would eventually form the solar system, as well as time itself. The gravity and chemical makeup that govern the formation and orbit of each planet trace back to this beginning. According to this theory, the shape, orbit and chemical makeup of the Earth affect, and are affected by, every other planet in the solar system. Earth exists as a life-sustaining planet because of the chemicals and energy inherited from this process. While this theory is the most widely accepted in science, other scientific and religious accounts exist to compete with it.

According to ScienceDaily, changes in the shape of Earth's orbit over time, coupled with gravitational interactions with other planets in the solar system, directly affect the climate on Earth. As these two factors change, the pattern of sunlight across Earth's surface changes. The gravitational pull of Saturn and Jupiter in particular has changed Earth's axial tilt, affecting the way sunlight falls and therefore Earth's climate.

Night and Day

The Earth's place in the solar system, as well as the speed with which it rotates, creates the 24-hour Earth day, time zones, and night and day as human beings know them.
The gravitational pull of each planet subtly affects the spin and rotation of every other planet. Because of the particular speed with which Earth rotates, the human species and other living creatures on Earth have developed their daily patterns around daytime and nighttime hours.

The gravity of the sun keeps Earth and every other planet in its orbit. If the sun's gravity did not constantly pull the Earth sideways, perpendicular to its direction of motion, the Earth would travel in a straight line rather than an elliptical orbit. The fact that the Earth orbits as it does shapes life on Earth as we know it, giving us nighttime and daytime hours on a consistent basis and so on. If the Earth were not held in orbit by the sun, it could eventually collide with another planet or object in space and be destroyed.
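The relative strength of these pulls can be sketched with Newton's law of gravitation and rough textbook values. The masses and distances below are approximate figures I supply for illustration, not values from the article:

```python
# Newton's law of gravitation: F = G * m1 * m2 / d^2.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.97e24    # kg
m_sun = 1.99e30      # kg
m_jupiter = 1.90e27  # kg
d_sun = 1.50e11      # mean Earth-Sun distance, m
d_jupiter = 5.9e11   # approximate Earth-Jupiter distance at opposition, m

f_sun = G * m_sun * m_earth / d_sun**2
f_jupiter = G * m_jupiter * m_earth / d_jupiter**2

print(f"Sun's pull on Earth:     {f_sun:.2e} N")
print(f"Jupiter's pull on Earth: {f_jupiter:.2e} N")
print(f"ratio: {f_sun / f_jupiter:.0f}")
```

Even at closest approach Jupiter's tug is tens of thousands of times weaker than the sun's, which is why the other planets perturb Earth's orbit and tilt only slowly, over very long timescales.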
Inscribed square problem

The inscribed square problem, also known as the square peg problem or Toeplitz' conjecture, is an unsolved question in geometry: Does every plane simple closed curve contain all four vertices of some square? This is true if the curve is convex or piecewise smooth, and in other special cases. The problem was proposed by Otto Toeplitz in 1911. Some early positive results were obtained by Arnold Emch and Lev Schnirelmann. As of 2017, the general case remains open.

It is not required that the vertices of the square appear along the curve in any particular order. Some figures, such as circles and squares, admit infinitely many inscribed squares. If C is an obtuse triangle then it admits exactly one inscribed square; right triangles admit exactly two, and acute triangles admit exactly three.

It is tempting to attempt to solve the inscribed square problem by proving that a special class of well-behaved curves always contains an inscribed square, and then to approximate an arbitrary curve by a sequence of well-behaved curves and infer that there still exists an inscribed square as a limit of squares inscribed in the curves of the sequence. One reason this argument has not been carried out to completion is that the limit of a sequence of squares may be a single point rather than itself being a square. Nevertheless, many special cases of curves are now known to have an inscribed square.

Piecewise analytic curves

Emch (1916) showed that piecewise analytic curves always have inscribed squares. In particular this is true for polygons. Emch's proof considers the curves traced out by the midpoints of secant line segments to the curve, parallel to a given line. He shows that, when these curves are intersected with the curves generated in the same way for a perpendicular family of secants, there are an odd number of crossings.
Therefore, there always exists at least one crossing, which forms the center of a rhombus inscribed in the given curve. By rotating the two perpendicular lines continuously through a right angle, and applying the intermediate value theorem, he shows that at least one of these rhombi is a square.

Locally monotone curves

Stromquist has proved that every locally monotone plane simple curve admits an inscribed square. The condition is that for any point p, the curve C should be locally representable as the graph of a function y = f(x). In more precise terms, for any given point p on C, there is a neighborhood U(p) and a fixed direction n(p) (the direction of the "y-axis") such that no chord of C in this neighborhood is parallel to n(p).

Curves without special trapezoids

An even weaker condition on the curve than local monotonicity is that, for some ε > 0, the curve does not have any inscribed special trapezoids of size ε. A special trapezoid is an isosceles trapezoid with three equal sides, each longer than the fourth side, inscribed in the curve with a vertex ordering consistent with the clockwise ordering of the curve itself. Its size is the length of the part of the curve that extends around the three equal sides. If there are no such trapezoids (or an even number of them), the limiting argument for general curves can be carried to completion, showing that curves with this property always have an inscribed square.

Curves in annuli

If a Jordan curve is inscribed in an annulus whose outer radius is at most 1 + √2 times its inner radius, and it is drawn in such a way that it separates the inner circle of the annulus from the outer circle, then it contains an inscribed square. In this case, if the given curve is approximated by some well-behaved curve, then any large squares that contain the center of the annulus and are inscribed in the approximation are topologically separated from smaller inscribed squares that do not contain the center.
The limit of a sequence of large squares must again be a large square, rather than a degenerate point, so the limiting argument may be used.

Variants and generalizations

One may ask whether other shapes can be inscribed into an arbitrary Jordan curve. It is known that for any triangle T and Jordan curve C, there is a triangle similar to T and inscribed in C. Moreover, the set of the vertices of such triangles is dense in C. In particular, there is always an inscribed equilateral triangle. It is also known that any Jordan curve admits an inscribed rectangle.

Some generalizations of the inscribed square problem consider inscribed polygons for curves and even more general continua in higher-dimensional Euclidean spaces. For example, Stromquist proved that every continuous closed curve C in R^n satisfying "Condition A", that no two chords of C in a suitable neighborhood of any point are perpendicular, admits an inscribed quadrilateral with equal sides and equal diagonals. This class of curves includes all C^2 curves. Nielsen and Wright proved that any symmetric continuum K in R^n contains many inscribed rectangles. H. W. Guggenheimer proved that every hypersurface C^3-diffeomorphic to the sphere S^(n−1) contains 2^n vertices of a regular Euclidean n-cube.

References

- Toeplitz, O. (1911), "Über einige Aufgaben der Analysis situs", Verhandlungen der Schweizerischen Naturforschenden Gesellschaft in Solothurn (in German), 94: 197
- Emch, Arnold (1916), "On some properties of the medians of closed continuous curves formed by analytic arcs", American Journal of Mathematics, 38 (1): 6–18, doi:10.2307/2370541, MR 1506274
- Šnirel'man, L. G. (1944), "On certain geometrical properties of closed curves", Akademiya Nauk SSSR i Moskovskoe Matematicheskoe Obshchestvo. Uspekhi Matematicheskikh Nauk, 10: 34–44, MR 0012531
- Tao, Terence (November 22, 2016), "An integration approach to the Toeplitz square peg problem", What's New
- Bailey, Herbert; DeTemple, Duane (1998), "Squares inscribed in angles and triangles", Mathematics Magazine, 71 (4): 278–284, doi:10.2307/2690699
- Matschke, Benjamin (2014), "A survey on the square peg problem", Notices of the American Mathematical Society, 61 (4): 346–352, doi:10.1090/noti1100
- Stromquist, Walter (1989), "Inscribed squares and square-like quadrilaterals in closed curves", Mathematika, 36 (2): 187–197, doi:10.1112/S0025579300013061, MR 1045781
- Nielsen, Mark J.; Wright, S. E. (1995), "Rectangles inscribed in symmetric continua", Geometriae Dedicata, 56 (3): 285–297, doi:10.1007/BF01263570, MR 1340790
- Meyerson, Mark D. (1980), "Equilateral triangles and continuous curves", Fundamenta Mathematicae, 110 (1): 1–9, doi:10.4064/fm-110-1-1-9, MR 0600575
- Kronheimer, E. H.; Kronheimer, P. B. (1981), "The tripos problem", Journal of the London Mathematical Society, Second Series, 24 (1): 182–192, doi:10.1112/jlms/s2-24.1.182, MR 0623685
- Nielsen, Mark J. (1992), "Triangles inscribed in simple closed curves", Geometriae Dedicata, 43 (3): 291–297, doi:10.1007/BF00151519, MR 1181760
- Guggenheimer, H. (1965), "Finite sets on curves and surfaces", Israel Journal of Mathematics, 3: 104–112, doi:10.1007/BF02760036, MR 0188898
- Victor Klee and Stan Wagon, Old and New Unsolved Problems in Plane Geometry and Number Theory, The Dolciani Mathematical Expositions, Number 11, Mathematical Association of America, 1991

External links

- Mark J. Nielsen, Figures Inscribed in Curves. A short tour of an old problem
- Inscribed squares: Denne speaks at Jordan Ellenberg's blog
- Who cares about topology? (Inscribed rectangle problem): YouTube video showing a topological solution to a simplified version of the problem
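As a sanity check on the claim that highly symmetric figures admit inscribed squares, the short sketch below (my own illustration, not part of the article) verifies the symmetric inscribed square of an axis-aligned ellipse x^2/a^2 + y^2/b^2 = 1: its vertices (±s, ±s) satisfy s^2 (1/a^2 + 1/b^2) = 1.

```python
import math

# An axis-aligned ellipse x^2/a^2 + y^2/b^2 = 1 contains, by symmetry, the
# square with vertices (+-s, +-s) where s^2 (1/a^2 + 1/b^2) = 1.
a, b = 3.0, 2.0
s = 1.0 / math.sqrt(1 / a**2 + 1 / b**2)
square = [(s, s), (-s, s), (-s, -s), (s, -s)]

# All four vertices lie on the ellipse ...
for x, y in square:
    assert abs(x**2 / a**2 + y**2 / b**2 - 1) < 1e-12

# ... and they form a genuine square: four equal sides, two equal diagonals.
def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

sides = [dist(square[i], square[(i + 1) % 4]) for i in range(4)]
diags = [dist(square[0], square[2]), dist(square[1], square[3])]
assert max(sides) - min(sides) < 1e-12
assert abs(diags[0] - diags[1]) < 1e-12
print("inscribed square side length:", round(sides[0], 6))
```

Rotating the square away from this symmetric position and asking whether all four vertices can stay on the curve is exactly where the general problem becomes hard.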
During the massive oil spill from the ruptured Deepwater Horizon well in 2010, it seemed at first like there might be a quick fix: a containment dome lowered onto the broken pipe to capture the flow so it could be pumped to the surface and disposed of properly. But that attempt quickly failed, because the dome almost instantly became clogged with frozen methane hydrate. Methane hydrates, which can freeze upon contact with cold water in the deep ocean, are a chronic problem for deep-sea oil and gas wells. Sometimes these frozen hydrates form inside the well casing, where they can restrict or even block the flow, at enormous cost to the well operators. Now researchers at MIT, led by associate professor of mechanical engineering Kripa Varanasi, say they have found a solution, described recently in the journal Physical Chemistry Chemical Physics. The paper's lead author is J. David Smith, a graduate student in mechanical engineering. The deep sea is becoming "a key source" of new oil and gas wells, Varanasi says, as the world's energy demands continue to increase rapidly. But one of the crucial issues in making these deep wells viable is "flow assurance": finding ways to avoid the buildup of methane hydrates. Presently, this is done primarily through the use of expensive heating systems or chemical additives. "The oil and gas industries currently spend at least $200 million a year just on chemicals" to prevent such buildups, Varanasi says; industry sources say the total figure for prevention and lost production due to hydrates could be in the billions. His team's new method would instead use passive coatings on the insides of the pipes that are designed to prevent the hydrates from adhering. These hydrates form a cage-like crystalline structure, called clathrate, in which molecules of methane are trapped in a lattice of water molecules. 
Although they look like ordinary ice, methane hydrates form only under very high pressure: in deep waters or beneath the seafloor, Smith says. By some estimates, the total amount of methane (the main ingredient of natural gas) contained in the world's seafloor clathrates greatly exceeds the total known reserves of all other fossil fuels combined. Inside the pipes that carry oil or gas from the depths, methane hydrates can attach to the inner walls — much like plaque building up inside the body's arteries — and, in some cases, eventually block the flow entirely. Blockages can happen without warning, and in severe cases require the blocked section of pipe to be cut out and replaced, resulting in long shutdowns of production. Present prevention efforts include expensive heating or insulation of the pipes or additives such as methanol dumped into the flow of gas or oil. "Methanol is a good inhibitor," Varanasi says, but is "very environmentally unfriendly" if it escapes. Varanasi's research group began looking into the problem before the Deepwater Horizon spill in the Gulf of Mexico. The group has long focused on ways of preventing the buildup of ordinary ice — such as on airplane wings — and on the creation of superhydrophobic surfaces, which prevent water droplets from adhering to a surface. So Varanasi decided to explore the potential for creating what he calls "hydrate-phobic" surfaces to prevent hydrates from adhering tightly to pipe walls. Because methane hydrates themselves are dangerous, the researchers worked mostly with a model clathrate hydrate system that exhibits similar properties. The study produced several significant results: First, by using a simple coating, Varanasi and his colleagues were able to reduce hydrate adhesion in the pipe to one-quarter of the amount on untreated surfaces. Second, the test system they devised provides a simple and inexpensive way of searching for even more effective inhibitors. 
Finally, the researchers also found a strong correlation between the "hydrate-phobic" properties of a surface and its wettability — a measure of how well liquid spreads on the surface. The basic findings also apply to other adhesive solids, Varanasi says — for example, solder adhering to a circuit board, or calcite deposits inside plumbing lines — so the same testing methods could be used to screen coatings for a wide variety of commercial and industrial processes. The research team included MIT postdoc Adam Meuler and undergraduate Harrison Bralower; professor of mechanical engineering Gareth McKinley; St. Laurent Professor of Chemical Engineering Robert Cohen; and Siva Subramanian and Rama Venkatesan, two researchers from Chevron Energy Technology Company. The work was funded by the MIT Energy Initiative-Chevron program and Varanasi's Doherty Chair in Ocean Utilization.
Six scientists have entered a dome perched atop a remote volcano in Hawaii where they will spend the next eight months in isolation to simulate life for astronauts traveling to Mars, the University of Hawaii said. The study is designed to help NASA better understand human behavior and performance during long space missions as the U.S. space agency explores plans for a manned mission to the Red Planet. “I’m proud of the part we play in helping reduce the barriers to a human journey to Mars,” said Kim Binsted, the mission’s principal investigator. The crew will perform geological field work and basic daily tasks in the 1,200-square-foot (about 111 square meter) dome, located in an abandoned quarry 8,000 feet (about 2.4 km) above sea level on the Mauna Loa volcano on Hawaii’s Big Island. There is little vegetation and the scientists will have no contact with the outside world, said the university, which operates the dome. Communications with a mission control team will be time-delayed to match the 20-minute travel time of radio waves passing between Earth and Mars. “Daily routines include food preparation from only shelf-stable ingredients, exercise, research and fieldwork aligned with NASA’s planetary exploration expectations,” the university said. The project is intended to create guidelines for future missions to Mars, some 35 million miles (56 million km) away, a long-term goal of the U.S. human space program. The NASA-funded study, known as the Hawaii Space Exploration Analog and Simulation (Hi-SEAS), is the fifth of its kind. (Reporting by Brendan O’Brien in Milwaukee; editing by Richard Lough)
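The quoted 20-minute lag corresponds to a wide Earth–Mars separation; at the 56-million-km closest approach mentioned in the article, a radio signal needs only about three minutes one way. A quick sketch of the arithmetic (my own illustration, not part of the study; the distances are round figures):

```python
# One-way radio delay between Earth and Mars at a given separation.
C = 299_792_458  # speed of light, m/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a radio signal to cover the given distance."""
    return distance_km * 1_000 / C / 60

closest = one_way_delay_minutes(56e6)    # closest approach: ~3 minutes
farthest = one_way_delay_minutes(400e6)  # near maximum separation: ~22 minutes
```

A fixed 20-minute delay therefore models a mission near the far end of the Earth–Mars distance range.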
fungi (fŭn'jì), kingdom of heterotrophic single-celled, multinucleated, or multicellular organisms, including yeasts, molds, and mushrooms. The organisms live as parasites, symbionts, or saprobes (see saprophyte). Previously classified in the plant kingdom, fungi are nonmotile, like plants, but lack the vascular tissues (phloem and xylem) that form the true roots, stems, and leaves of plants. Most coenocytic (multinucleated) or multicellular fungi are composed of multiple filaments, called hyphae, grouped together into a discrete organism called a mycelium. The cell walls of most fungi are of chitin compounds instead of cellulose; a group of fungi known as cryptomycota lack chitinous cell walls. In many ways fungi are more closely related to animals than to plants, and they have been thought to share a common protist ancestor with animals. A recent classification system suggested by nucleic acid (genetic material) comparisons places the fungi with the animals and the plants in an overarching taxonomic group called the eukarya. The 100,000 identified species of organisms commonly classed together as fungi are customarily divided into four phyla, or divisions: Zygomycota, Ascomycota, Basidiomycota, and Deuteromycota. Zygomycota includes black bread mold and molds, such as those of the genus Glomus, that form important symbiotic relationships with plants. Most are soil-living saprobes that feed on dead animal or plant remains. Some are parasitic of plants or insects. They reproduce sexually and form tough zygospores from the fusion of neighboring gametangia. There is no distinguishable male or female. Ascomycota includes yeasts, the powdery mildews, the black and blue-green molds, edible types such as the morel and the truffle, and species that cause such diseases of plants as Dutch elm disease, chestnut blight, apple scab, and ergot. There are over 50,000 species, about 25,000 of which occur only in lichens.
In ascomycetes, the hyphae are subdivided by porous walls through which the cytoplasm and the nuclei can pass. Their life cycle is a complex combination of sexual and asexual reproduction. Basidiomycota includes the gill fungi (most mushrooms), the pore fungi (e.g., the bracket fungi, which grow shelflike on trees, and an edible type called tuckahoe), and the puffballs. It also includes the fungi that cause smut and rust in plants. As in ascomycetes, the hyphae are subdivided by porous walls. In basidiomycetes, two hyphae fuse to form a dikaryotic mycelium (a mycelium in which both nuclei remain distinct). These mycelia differentiate into reproductive structures called basidia that make up the basidiocarp (the body popularly known as the mushroom cap). The nuclei then fuse and undergo meiosis, creating spores with one nucleus each. When these spores germinate, they produce hyphae, and the process begins again. Deuteromycota comprises a miscellaneous assortment of fungi that do not fit neatly in other divisions; they have in common an apparent lack of sexual reproductive features. Also called Fungi Imperfecti, the group includes species that help create Roquefort and Camembert cheeses, that cause diseases of plants and of animals (e.g., athlete's foot and ringworm), and that produce penicillin. A number of the fungi classified as deuteromycetes have been found to be asexual stages of species in other groups, and some classification schemes consider the deuteromycetes a class under Ascomycota. Fungi are valuable economically as a source of antibiotics, of vitamins, and of various industrially important chemicals, such as alcohols, acetone, and enzymes, as well as for their role in fermentation processes, as in the production of alcoholic beverages, vinegar, cheese, and bread dough.
They are extremely important in soil renewal, through the decomposition of organic matter (see humus)—a function unwelcome when it results in the rotting of clothing and other goods and the spoilage of foods.
Balancing Chemical Equations

What is the coefficient of water when the following equation is balanced?

As(OH)3 + H2SO4 ----> As2(SO4)3 + H2O

I'll show you how to do this one ... Solution shows the steps to balancing the given equation to find the coefficient of water in it.
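One way to verify the balance (a sketch of my own, not the posted solution): conservation of As, S, O and H gives four equations in the four coefficients, and the smallest positive integer solution can be found by direct search.

```python
from itertools import product

# Balance  a As(OH)3 + b H2SO4 -> c As2(SO4)3 + d H2O
# by checking conservation of each element.

def balanced(a, b, c, d):
    as_ok = a == 2 * c                   # arsenic:  a = 2c
    s_ok = b == 3 * c                    # sulfur:   b = 3c
    o_ok = 3 * a + 4 * b == 12 * c + d   # oxygen
    h_ok = 3 * a + 2 * b == 2 * d        # hydrogen
    return as_ok and s_ok and o_ok and h_ok

def find_coefficients(limit=10):
    """Return the first (smallest) set of integer coefficients found."""
    for a, b, c, d in product(range(1, limit), repeat=4):
        if balanced(a, b, c, d):
            return a, b, c, d

print(find_coefficients())  # (2, 3, 1, 6)
```

The smallest solution is 2 As(OH)3 + 3 H2SO4 → As2(SO4)3 + 6 H2O, so the coefficient of water is 6.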
1. Assume the angle of inclination of the sun is given by Theta = (pi/12)t, where t is the number of hours after sunrise. Suppose we have a 10 meter high flagpole. a. What is the angular velocity of the sun? b. Write an equation for the length of the flagpole's shadow when the angle of the sun is theta radians. c. Write an equation for the length of the flagpole's shadow t hours after sunrise. d. If the sun rises at 6 a.m., when will the flagpole's shadow be 10 meters long? e. If you stand at the tip of the shadow, how far will you be from the top of the flagpole? 2. The height of a wave is given by the function h(t) = 4 cos(wt), where t is in seconds. Suppose the height of the wave is 3 feet at 10 seconds. a. What is the height of the wave at 20 seconds? b. What is the height of the wave at 40 seconds? c. What is the height of the wave at 5 seconds? 3. Consider triangle ABC. Suppose angle A is 45°, the side opposite B is 3 meters, and the side opposite A is 2 meters. Find all the other sides and angles. 4. Write a function to model the day to day temperature of Metropolis. Suppose that the low temperature of 62° F is at 2 a.m. and the high temperature of 82° F is at 2 p.m. a. What is the period of your function? b. What is the amplitude of your function? c. What is the temperature at 6 a.m., 8 a.m., and 10 p.m.? 5. Suppose that you kick a ball into the air with a velocity of 20 meters per second at an angle of 30°. If the ball stays airborne for 2 seconds and loses no horizontal velocity, how far does it go? 6. Suppose that you are a bug stuck in the treads of a bicycle tire. If the tire has a radius of 13 inches and the tires are spinning at 180 revolutions per minute, write a parametric equation for your position after t seconds. 7. Using parametric equations and a simple trigonometric identity, find the angle that will maximize the horizontal distance of a projectile launched from the ground. 15 Trigonometry Problems involving Angular Velocity, Shadows, Waves and Triangles are solved. The solution is detailed and well presented. The response received a rating of "5" from the student who originally posted the question.
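For problem 1, parts (b) through (e) reduce to one right-triangle relation: a pole of height h under sun elevation theta casts a shadow of length h / tan(theta). A worked sketch of my own (not the posted solution):

```python
import math

H = 10.0  # flagpole height in meters

def shadow_length(t_hours: float) -> float:
    """Shadow length t hours after sunrise, with theta = (pi/12) * t."""
    theta = math.pi / 12 * t_hours
    return H / math.tan(theta)

# (d) The shadow is 10 m when tan(theta) = 1, i.e. theta = pi/4, i.e. t = 3;
#     with a 6 a.m. sunrise that is 9 a.m.
# (e) Standing at the shadow tip, the distance to the top of the pole is the
#     hypotenuse of a 10 m by 10 m right triangle: 10*sqrt(2), about 14.14 m.
tip_to_top = math.hypot(H, shadow_length(3.0))
```

The angular velocity in part (a) is simply the coefficient pi/12 radians per hour.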
How America celebrates Pi Day

Across the country, math geeks in museums, schools, private groups and elsewhere gather to celebrate the number pi, approximately 3.14. That’s why March 14 — 3-14 — is Pi Day. What’s more, Albert Einstein was born on this day. A quick refresher: Pi is defined as the distance around a perfect circle, or the circumference, divided by the distance across it, or the diameter. It is also involved in calculating the area of a circle, the volume of a sphere, and many other mathematical formulas you might need in the sciences. Throughout history, people have been captivated by this number because there is no way to calculate it exactly by a simple division on your calculator. What’s more, its digits go on infinitely, without any pattern in the numbers. 3.1415926535897932 … etc. Even that many digits are more than most people would need for everyday use, but some folks have been inspired to memorize thousands of digits of pi, or even use the digits to create poetry or music. Math may be scary, but pi is not — as evidenced by the widespread revelry on Pi Day. One might even say — gasp! — it’s cool to like pi these days. Even the House of Representatives supported the designation of March 14 as National Pi Day in 2009. In countries where the day is written before the month, Friday is 14-3, which looks less like pi. “And so Pi Day is an acquired taste,” mathematician Jonathan Borwein, at the University of Newcastle in Australia, said in an e-mail. Conveniently, “pi” sounds like “pie,” and pies are round. You could celebrate Pi Day in a casual way by grabbing a slice of pastry, or pizza. If you’re enrolled in school, your math class or math department might be doing something special already. But if you happen to live in a particularly pi-happy place, you might be able to take part in some larger-scale, pi-inspired activities.
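For a sense of how those digits are computed: Machin's 1706 identity, pi/4 = 4·arctan(1/5) − arctan(1/239), was the workhorse of early digit hunts, and even ordinary double precision reproduces pi to the limit of the format (record computations use arbitrary-precision arithmetic with formulas of this family). A quick illustration, not part of the article:

```python
import math

# Machin's formula: pi/4 = 4*atan(1/5) - atan(1/239)
machin_pi = 4 * (4 * math.atan(1 / 5) - math.atan(1 / 239))
print(f"{machin_pi:.15f}")  # agrees with math.pi to double precision
```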
Where Pi Day began

If you want to go where the day is said to be “invented,” look no further than San Francisco’s Exploratorium. Larry Shaw, who worked in the electronics group at the museum, began the tradition in 1988. Last year was Pi Day’s 25th anniversary there. Pi Day began as a small gathering with mostly museum staff. Now it’s a public pi extravaganza featuring a “Pi procession,” whose attendees get a number — 0 to 9 — and line up in the order of pi’s digits: 3.14159265 … you get the idea. The parade ends at the “pi shrine” — a pi symbol with digits spiraling around it embedded in the sidewalk, which was unveiled last year. For those who can’t attend in person, the Exploratorium has a Second Life Pi Day event that includes “irrational exhibits, fireworks, cheerleaders, music, and dancing.” The museum also lists a bunch of educational activities to teach about the concept of pi.

Where Einstein lived

On the opposite coast, the leafy university town where Albert Einstein spent the last 22 years of his life is showing community-wide exuberance for pi. Princeton, New Jersey, kicks off Pi Day weekend on Thursday night with a reading by physicist Charles Adler, then heads into a full day of activities on Friday, including a walking tour of Einstein’s neighborhood and a pizza pie-making contest. The pie-eating contest takes place at McCaffrey’s supermarket, while an Einstein look-alike competition will match mustaches and wild gray hair at the Princeton Public Library. Pi fans who have been spending the last year memorizing digits can show off and compete at the library, where the winner among 7- to 13-year-olds can take home a cool pi-hundred (that is, $314.15). The Historical Society of Princeton will have an Einstein birthday party. Tetsuya Miyamoto, inventor of the KENKEN puzzle, will speak at the library as well.
The “brainiac town” residents “love this event because it’s a way for them to celebrate how quirky they are,” said Mimi Omiecinski, owner of the Princeton Tour Company, who started Princeton Pi Day in 2009. “A lot of them get super into it.” Last year about 9,000 people participated, she said. Along with her fascination with Albert Einstein, Omiecinski was inspired to launch a town-wide Pi Day after she heard that the Princeton University mathematics department celebrates March 14 with pie-eating and pi-reciting (As a Princeton student, I got second place for most digits in 2005 and 2006).

Even more pi

Chicago is getting into the pi business too. Lots of restaurants and bakeries are offering Pi Day specials. The Illinois Science Council and Fleet Feet Sports are hosting a 3.14-mile walk/run Friday night, with discounts for anyone named Albert, Alberta or Albertina. Philly.com highlights two options for satisfying your pie cravings in the City of Brotherly Love. Bostonians can head to Massachusetts Institute of Technology at Pi Time (3:14 p.m.) for pi-themed activities such as “Throw Pie at Your Best Friend on High-Speed Camera.” The Museum of Science in Boston has educational Pi Day events, and the Seattle Children’s Museum will celebrate too. Even the Salvador Dali Museum in St. Petersburg, Florida, will celebrate the day, as “Dali loved the irrational numbers Pi and Phi, often using them and other mathematical principles in his art,” according to the museum. If you live in the area, check out their schedule of math-inspired films and tours throughout the day. There are plenty of online resources too, such as piday.org. Outside of the physical classroom, Pi Day will be celebrated online through Google’s virtual classroom project. David Blatner, author of the comprehensive book “The Joy of Pi,” is hosting a Pi Day competition in which students from three classrooms will square off to see who can recite the most digits of pi from memory.
How did Pi Day become such a big thing? Blatner says that Pi Day has become a hit for the same reason the new “Cosmos” TV show is getting so much attention. “People all around the world are hungry to make science and math fun and interesting,” he said in an e-mail. “We know math and science is important, we know that it’s fascinating, but we often don’t know how to make it fun and interesting. Pi Day gives us a great excuse to throw away our fear of math and say ‘Hey, it IS kind of neat!’ ” If you agree, just wait until 3/14/15 — or as one popular Facebook group calls it, “The Only Pi Day of Our Lives.” That’s because pi to four digits after the decimal is 3.1415, and we’re unlikely to survive until 2115 to see that second instance of pi perfection. So get ready next year to take a picture of your digital clock on 3/14/15 at 9:26:53 a.m. That’ll be worth more than a thousand digits. By Elizabeth Landau ™ & © 2014 Cable News Network, Inc., a Time Warner Company. All rights reserved.
By exploiting the full computational power of the Japanese supercomputer, the K computer, researchers from the RIKEN HPCI Program for Computational Life Sciences, the Okinawa Institute of Science and Technology Graduate University (OIST) in Japan and Forschungszentrum Jülich in Germany have carried out the largest general neuronal network simulation to date. The simulation was made possible by the development of advanced novel data structures for the simulation software NEST. The relevance of the achievement for neuroscience lies in the fact that NEST is open-source software freely available to every scientist in the world. Using NEST, the team, led by Markus Diesmann in collaboration with Abigail Morrison, both now with the Institute of Neuroscience and Medicine at Jülich, succeeded in simulating a network consisting of 1.73 billion nerve cells connected by 10.4 trillion synapses. To realize this feat, the program recruited 82,944 processors of the K computer. The simulation took 40 minutes to reproduce 1 second of neuronal network activity in real, biological time. Although the simulated network is huge, it only represents 1% of the neuronal network in the brain. The nerve cells were randomly connected and the simulation itself was not supposed to provide new insight into the brain – the purpose of the endeavor was to test the limits of the simulation technology developed in the project and the capabilities of K. In the process, the researchers gathered invaluable experience that will guide them in the construction of novel simulation software. This achievement gives neuroscientists a glimpse of what will be possible in the future, with the next generation of computers, so-called exa-scale computers.
“If peta-scale computers like the K computer are capable of representing 1% of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exa-scale computers hopefully available within the next decade,” explains Diesmann.

Memory of 250,000 PCs

Simulating a large neuronal network and a process like learning requires large amounts of computing memory. Synapses, the structures at the interface between two neurons, are constantly modified by neuronal interaction and simulators need to allow for these modifications. More important than the number of neurons in the simulated network is the fact that during the simulation each synapse between excitatory neurons was supplied with 24 bytes of memory. This enabled an accurate mathematical description of the network. In total, the simulator coordinated the use of about 1 petabyte of main memory, which corresponds to the aggregated memory of 250,000 PCs. NEST is a widely used, general-purpose neuronal network simulation software available to the community as open source. The team ensured that their optimizations were of general character, independent of a particular hardware or neuroscientific problem. This will enable neuroscientists to use the software to investigate neuronal systems using normal laptops, computer clusters or, for the largest systems, supercomputers, and easily exchange their model descriptions.

A large, international project

Work on optimizing NEST for the K computer started in 2009 while the supercomputer was still under construction.
Shin Ishii, leader of the brain science projects on K at the time, explains: “Having access to the established supercomputers at Jülich, JUGENE and JUQUEEN, was essential to prepare for K and cross-check results.” Mitsuhisa Sato, of the RIKEN Advanced Institute for Computer Science, points out: “Many researchers at many different Japanese and European institutions have been involved in this project, but the dedication of Jun Igarashi, now at OIST, Gen Masumoto, now at the RIKEN Advanced Center for Computing and Communication, and Susanne Kunkel and Moritz Helias, now at Forschungszentrum Jülich, was key to the success of the endeavor.”

Paving the way for future projects

Kenji Doya of OIST, currently leading a project aiming to understand the neural control of movement and the mechanism of Parkinson’s disease, says: “The new result paves the way for combined simulations of the brain and the musculoskeletal system using the K computer. These results demonstrate that neuroscience can make full use of the existing peta-scale supercomputers.” The achievement on K provides new technology for brain research in Japan and is encouraging news for the Human Brain Project (HBP) of the European Union, scheduled to start this October. The central supercomputer for this project will be based at Forschungszentrum Jülich. The researchers in Japan and Germany are planning on continuing their successful collaboration in the upcoming era of exa-scale systems.
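The memory figures in the article are easy to sanity-check with back-of-envelope arithmetic. A sketch of my own (the 4 GB-per-PC figure is my assumption; the article only gives the roughly 1-petabyte aggregate):

```python
# Rough check of the article's memory arithmetic.
SYNAPSES = 10.4e12        # 10.4 trillion synapses
BYTES_PER_SYNAPSE = 24    # bytes of state per excitatory synapse

synapse_bytes = SYNAPSES * BYTES_PER_SYNAPSE  # ~0.25 PB for synapse state alone
total_bytes = 1e15                            # ~1 PB coordinated in total
pcs = total_bytes / 4e9                       # assumption: 4 GB of RAM per PC

print(synapse_bytes / 1e15, pcs)  # synapse records are a quarter of the total
```

At 4 GB per machine, 1 PB is indeed the aggregated memory of 250,000 PCs; the 24-byte synapse records alone account for about a quarter of that petabyte.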
They discovered how the needed transport protein turns up at the underside of plant cells. The discovery helps us to understand how plants grow, and how they organize themselves in order to grow. The scientific journal Nature published the news in advance on its website.

Versatile hormone

It has been known for a long time that the plant hormone auxin is transmitted from the top to the bottom of a plant, and that the local concentration of auxin is important for the growth direction of stems, the growth of roots and the sprouting of shoots, to name a few things; auxin is also relevant to, for instance, the ripening of fruit, the clinging of climbers and a series of other processes. Thousands of researchers try to understand the different roles of auxin. In many instances the distribution of auxin in the plant plays a key role, and thus the transport from cell to cell. At the bottom of plant cells, so-called PIN proteins are located on the cell membrane, helping auxin to flow through to the lower cell. However, no one thoroughly understood why the PIN proteins only showed up at the bottom of a cell. An international group of scientists from labs in five countries, headed by Jirí Friml of the VIB-department Plant Systems Biology at Ghent University, revealed a rather unusual mechanism. PIN proteins are made in the protein factories of the cell and are transported all over the cell membrane. Subsequently they are engulfed by the cell membrane, a process called endocytosis. The invagination closes to a vesicle, disconnects and moves back into the cell. Thus the PIN proteins are recycled and subsequently transported to the bottom of the cell, where they are again incorporated in the cell membrane. It is unclear why plants use such a complex mechanism, but a plausible explanation is that it enables a quick reaction when plant cells feel a change of direction of gravity, giving them a new ‘underside’.
To see the path of the protein, the researchers used gene technology to make cells in which the PIN protein was linked to fluorescent proteins. (This technology was rewarded with the Nobel Prize 2008 for chemistry.) Subsequently they produced cells in which the endocytosis was disrupted in two different ways. The PIN proteins showed up all over the cell membrane. When the researchers proceeded from single cells to plant embryos, the embryos developed deformations, because the pattern of auxin concentrations in the embryo was distorted. When these plants with disrupted endocytosis grew further, roots developed where the first leaflet should have been.
A hypervalent molecule (the phenomenon is sometimes colloquially known as an expanded octet) is a molecule that contains one or more main group elements apparently bearing more than eight electrons in their valence shells. Phosphorus pentachloride (PCl5), sulfur hexafluoride (SF6), chlorine trifluoride (ClF3), the chlorite (ClO2−) ion, and the triiodide (I3−) ion are examples of hypervalent molecules.

Definitions and nomenclature

Hypervalent molecules were first formally defined by Jeremy I. Musher in 1969 as molecules having central atoms of group 15–18 in any valence other than the lowest (i.e. 3, 2, 1, 0 for groups 15, 16, 17, 18 respectively, based on the octet rule). Several specific classes of hypervalent molecules exist:
- Hypervalent iodine compounds are useful reagents in organic chemistry (e.g. Dess–Martin periodinane)
- Tetra-, penta- and hexacoordinated phosphorus, silicon, and sulfur compounds (e.g. PCl5, PF5, SF6, sulfuranes and persulfuranes)
- Noble gas compounds (e.g. xenon tetrafluoride, XeF4)
- Halogen polyfluorides (e.g. ClF5)

Hypervalent molecules are often classified with the N-X-L notation, where:
- N represents the number of valence electrons
- X is the chemical symbol of the central atom
- L the number of ligands to the central atom

Examples of N-X-L nomenclature include 10-Cl-3 for ClF3, 10-P-5 for PCl5, and 12-S-6 for SF6.

History and controversy

The debate over the nature and classification of hypervalent molecules goes back to Gilbert N. Lewis and Irving Langmuir and the debate over the nature of the chemical bond in the 1920s. Lewis maintained the importance of the two-center two-electron (2c-2e) bond in describing hypervalence, thus using expanded octets to account for such molecules. Langmuir, on the other hand, upheld the dominance of the octet rule and preferred the use of ionic bonds to account for hypervalence without violating the rule (e.g.
"SF42+ 2F−" for SF6). In the late 1920s and 1930s, Sugden argued for the existence of a two-center one-electron (2c-1e) bond and thus rationalized bonding in hypervalent molecules without the need for expanded octets or ionic bond character; this was poorly accepted at the time. In the 1940s and 1950s, Rundle and Pimentel popularized the idea of the three-center four-electron bond, which is essentially the same concept which Sugden attempted to advance decades earlier; the three-center four-electron bond can be alternatively viewed as consisting of two collinear two-center one-electron bonds, with the remaining two nonbonding electrons localized to the ligands. The attempt to actually prepare hypervalent organic molecules began with Hermann Staudinger and Georg Wittig in the first half of the twentieth century, who sought to challenge the extant valence theory and successfully prepare nitrogen and phosphorus-centered hypervalent molecules. The theoretical basis for hypervalency was not delineated until J.I. Musher's work in 1969. In 1990, Magnusson published a seminal work definitively excluding the role of d-orbital hybridization in bonding in hypervalent compounds of second-row elements. This had long been a point of contention and confusion in describing these molecules using molecular orbital theory. Part of the confusion here originates from the fact that one must include d-functions in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result), and the contribution of the d-function to the molecular wavefunction is large. These facts were historically interpreted to mean that d-orbitals must be involved in bonding. However, Magnusson concludes in his work that d-orbital involvement is not implicated in hypervalency. 
Nevertheless, a recent study showed that although the Pimentel ionic model best accounts for the bonding of hypervalent species, the energetic contribution of an expanded octet structure is also not null. In a valence bond theory study of the bonding of xenon difluoride, it was found that ionic structures account for about 81% of the overall wavefunction, of which 70% arises from ionic structures employing only the p orbital on xenon while 11% arises from ionic structures employing a hybrid orbital on xenon. The contribution of a formally hypervalent structure employing an orbital of sp3d hybridization on xenon accounts for 11% of the wavefunction, with a diradical contribution making up the remaining 8%. The 11% sp3d contribution results in a net stabilization of the molecule by 7.2 kcal/mol, a minor but significant fraction of the total energy of the bonds holding the molecule together. Other studies have similarly found minor but non-negligible energetic contributions from expanded octet structures in SF6 (17%) and XeF6 (14%).

Criticism

Both the term and the concept of hypervalency still fall under criticism. In 1984, in response to this general controversy, Paul von Ragué Schleyer proposed replacing 'hypervalency' with the term hypercoordination, because this term does not imply any mode of chemical bonding and the question could thus be avoided altogether. The concept itself has been criticized by Ronald Gillespie who, based on an analysis of electron localization functions, wrote in 2002 that "as there is no fundamental difference between the bonds in hypervalent and non-hypervalent (Lewis octet) molecules there is no reason to continue to use the term hypervalent."

For hypercoordinated molecules with electronegative ligands such as PF5, it has been demonstrated that the ligands can pull away enough electron density from the central atom so that its net content is again 8 electrons or fewer.
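The valence bond weights quoted above for xenon difluoride can be checked for internal consistency: the two ionic contributions should sum to the stated 81%, and all four structure classes should account for the full wavefunction.

```python
# Structure weights (%) for XeF2 as quoted in the text.
ionic_p      = 70  # ionic structures using only the p orbital on Xe
ionic_hybrid = 11  # ionic structures using a hybrid orbital on Xe
sp3d         = 11  # formally hypervalent sp3d structure
diradical    = 8   # diradical contribution

assert ionic_p + ionic_hybrid == 81                      # "about 81%" ionic
assert ionic_p + ionic_hybrid + sp3d + diradical == 100  # full wavefunction
```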
Consistent with this alternative view is the finding that hypercoordinated molecules based on fluorine ligands, for example PF5, do not have stable hydride counterparts: the phosphorane PH5, for instance, is unknown. The ionic model holds up well in thermochemical calculations. It predicts the favorable exothermic formation of PF4+F− from phosphorus trifluoride PF3 and fluorine F2, whereas the analogous reaction forming PH4+H− is not favorable.

Alternative definition

Durrant has proposed an alternative definition of hypervalency, based on the analysis of atomic charge maps obtained from atoms-in-molecules theory. This approach defines a parameter called the valence electron equivalent, γ, as "the formal shared electron count at a given atom, obtained by any combination of valid ionic and covalent resonance forms that reproduces the observed charge distribution". For any particular atom X, if the value of γ(X) is greater than 8, that atom is hypervalent. Using this alternative definition, many species such as PCl5, SO42−, and XeF4, that are hypervalent by Musher's definition, are reclassified as hypercoordinate but not hypervalent, due to strongly ionic bonding that draws electrons away from the central atom. On the other hand, some compounds that are normally written with ionic bonds in order to conform to the octet rule, such as ozone O3, nitrous oxide NNO, and trimethylamine N-oxide (CH3)3NO, are found to be genuinely hypervalent. Examples of γ calculations for phosphate PO43− (γ(P) = 2.6, non-hypervalent) and orthonitrate NO43− (γ(N) = 8.5, hypervalent) are shown below.

Bonding in hypervalent molecules

Early considerations of the geometry of hypervalent molecules returned familiar arrangements that were well explained by the VSEPR model for atomic bonding. Accordingly, AB5 and AB6 type molecules would possess trigonal bipyramidal and octahedral geometries, respectively.
However, in order to account for the observed bond angles, bond lengths and apparent violation of the Lewis octet rule, several alternative models have been proposed. In the 1950s, an expanded valence shell treatment of hypervalent bonding was adduced to explain the molecular architecture, in which the central atom of penta- and hexacoordinated molecules would utilize d AOs in addition to s and p AOs. However, advances in ab initio calculations have revealed that the contribution of d-orbitals to hypervalent bonding is too small to account for the bonding properties, and this description is now regarded as much less important. It was shown that in the case of hexacoordinated SF6, d-orbitals are not involved in S-F bond formation; rather, charge transfer between the sulfur and fluorine atoms and the apposite resonance structures were able to account for the hypervalency (see below).

Additional modifications to the octet rule have been attempted to involve ionic characteristics in hypervalent bonding. As one of these modifications, the concept of the 3-center 4-electron (3c-4e) bond, which describes hypervalent bonding with a qualitative molecular orbital picture, was proposed in 1951. The 3c-4e bond is described as three molecular orbitals given by the combination of a p atomic orbital on the central atom and an atomic orbital from each of the two ligands on opposite sides of the central atom. Only one of the two pairs of electrons occupies a molecular orbital that involves bonding to the central atom; the second pair is non-bonding and occupies a molecular orbital composed of only atomic orbitals from the two ligands. This model, in which the octet rule is preserved, was also advocated by Musher.

Molecular orbital theory

A complete description of hypervalent molecules arises from consideration of molecular orbital theory through quantum mechanical methods.
An LCAO treatment of, for example, sulfur hexafluoride, taking as a basis set the one sulfur 3s orbital, the three sulfur 3p orbitals, and six octahedral-geometry symmetry-adapted linear combinations (SALCs) of fluorine orbitals, yields a total of ten molecular orbitals (four fully occupied bonding MOs of lowest energy, two fully occupied intermediate-energy non-bonding MOs, and four vacant antibonding MOs of highest energy), providing room for all 12 valence electrons. This is a stable configuration only for SX6 molecules containing electronegative ligand atoms like fluorine, which explains why SH6 is not a stable molecule. In the bonding model, the two non-bonding MOs (1eg) are localized equally on all six fluorine atoms.

Valence bond theory

For hypervalent compounds in which the ligands are more electronegative than the central, hypervalent atom, resonance structures can be drawn with no more than four covalent electron-pair bonds and completed with ionic bonds to obey the octet rule. For example, in phosphorus pentafluoride (PF5), 5 resonance structures can be generated, each with four covalent bonds and one ionic bond, with greater weight given to the structures placing ionic character in the axial bonds, thus satisfying the octet rule and explaining both the observed trigonal bipyramidal molecular geometry and the fact that the axial bond length (158 pm) is longer than the equatorial (154 pm). For a hexacoordinate molecule such as sulfur hexafluoride, each of the six bonds is the same length. The rationalization described above can be applied to generate 15 resonance structures, each with four covalent bonds and two ionic bonds, such that the ionic character is distributed equally across each of the sulfur-fluorine bonds.
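The resonance-structure counts quoted for PF5 and SF6 follow from simple combinatorics (choosing which of the bonds are drawn ionic), and the SF6 molecular orbital electron bookkeeping can be tallied the same way; a quick check:

```python
from math import comb

# PF5: five bonds, exactly one drawn ionic -> C(5,1) resonance structures
assert comb(5, 1) == 5
# SF6: six bonds, exactly two drawn ionic -> C(6,2) resonance structures
assert comb(6, 2) == 15

# SF6 MO scheme from the text: 4 bonding + 2 non-bonding MOs are filled,
# 4 antibonding MOs are vacant, giving room for all 12 valence electrons.
occupied_mos = 4 + 2
assert 2 * occupied_mos == 12
```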
Spin-coupled valence bond theory has been applied to diazomethane and the resulting orbital analysis was interpreted in terms of a chemical structure in which the central nitrogen has five covalent bonds; this led the authors to the interesting conclusion that "Contrary to what we were all taught as undergraduates, the nitrogen atom does indeed form five covalent linkages and the availability or otherwise of d-orbitals has nothing to do with this state of affairs."

Structure, reactivity, and kinetics

Hexacoordinate phosphorus molecules involving nitrogen, oxygen, or sulfur ligands provide examples of Lewis acid-Lewis base hexacoordination. For the two similar complexes shown below, the length of the C-P bond increases with decreasing length of the N-P bond; the strength of the C-P bond decreases with increasing strength of the N-P Lewis acid-Lewis base interaction. This trend is also generally true of pentacoordinated main-group elements with one or more lone-pair-containing ligands, including the oxygen-pentacoordinated silicon examples shown below. The Si-halogen bonds range from close to the expected van der Waals value in A (a weak bond) almost to the expected covalent single-bond value in C (a strong bond). (Measurements at 20 °C in anisole.)

Corriu and coworkers performed early work characterizing reactions thought to proceed through a hypervalent transition state. Measurements of the reaction rates of hydrolysis of tetravalent chlorosilanes incubated with catalytic amounts of water returned a rate that is first order in chlorosilane and second order in water. This indicated that two water molecules interacted with the silane during hydrolysis, and from this a binucleophilic reaction mechanism was proposed. Corriu and coworkers then measured the rates of hydrolysis in the presence of the nucleophilic catalysts HMPT, DMSO or DMF. It was shown that the rate of hydrolysis was again first order in chlorosilane, first order in catalyst and now first order in water.
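The empirical rate laws above can be written out explicitly; a minimal sketch (the rate constant and concentrations are arbitrary illustrative values, not data from the study):

```python
def rate_uncatalyzed(k, silane, water):
    # first order in chlorosilane, second order in water
    return k * silane * water ** 2

def rate_catalyzed(k, silane, catalyst, water):
    # first order in each of chlorosilane, catalyst, and water
    return k * silane * catalyst * water

# Doubling [H2O] quadruples the uncatalyzed rate but only doubles the
# catalyzed one (all quantities in arbitrary units).
assert rate_uncatalyzed(1, 1, 4) == 4 * rate_uncatalyzed(1, 1, 2)
assert rate_catalyzed(1, 1, 1, 4) == 2 * rate_catalyzed(1, 1, 1, 2)
```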
Appropriately, the rates of hydrolysis also exhibited a dependence on the magnitude of charge on the oxygen of the nucleophile. Taken together, this led the group to propose a reaction mechanism in which there is a pre-rate-determining nucleophilic attack on the tetracoordinated silane by the nucleophile (or water), forming a hypervalent pentacoordinated silane. This is followed by a nucleophilic attack on the intermediate by water in a rate-determining step, leading to a hexacoordinated species that quickly decomposes to give the hydroxysilane.

Silane hydrolysis was further investigated by Holmes and coworkers, in which tetracoordinated Mes2SiF2 (Mes = mesityl) and pentacoordinated Mes2SiF3− were reacted with two equivalents of water. After twenty-four hours, almost no hydrolysis of the tetracoordinated silane was observed, while the pentacoordinated silane was completely hydrolyzed after fifteen minutes. Additionally, X-ray diffraction data collected for the tetraethylammonium salts of the fluorosilanes showed the formation of a hydrogen bisilonate lattice supporting a hexacoordinated intermediate from which HF2− is quickly displaced, leading to the hydroxylated product. This reaction and the crystallographic data support the mechanism proposed by Corriu et al.

The apparent increased reactivity of hypervalent molecules, contrasted with tetravalent analogues, has also been observed for Grignard reactions. The Corriu group measured Grignard reaction half-times by NMR for related 18-crown-6 potassium salts of a variety of tetra- and pentacoordinated fluorosilanes in the presence of catalytic amounts of nucleophile. Though the half-reaction method is imprecise, the order-of-magnitude differences in reaction rates allowed for a proposed reaction scheme wherein a pre-rate-determining attack on the tetravalent silane by the nucleophile results in an equilibrium between the neutral tetracoordinated species and the anionic pentavalent compound.
This is followed by nucleophilic coordination by two Grignard reagents, as normally seen, forming a hexacoordinated transition state and yielding the expected product.

The mechanistic implications of this are extended to a hexacoordinated silicon species that is thought to be active as a transition state in some reactions. The reaction of allyl- or crotyl-trifluorosilanes with aldehydes and ketones only proceeds with fluoride activation to give a pentacoordinated silicon. This intermediate then acts as a Lewis acid to coordinate with the carbonyl oxygen atom. The further weakening of the silicon-carbon bond as the silicon becomes hexacoordinate helps drive this reaction.

Similar reactivity has also been observed for other hypervalent structures, such as the miscellany of phosphorus compounds for which hexacoordinated transition states have been proposed. Hydrolysis of phosphoranes and oxyphosphoranes has been studied and shown to be second order in water. Bel'skii et al. have proposed a pre-rate-determining nucleophilic attack by water resulting in an equilibrium between the penta- and hexacoordinated phosphorus species, which is followed by a proton transfer involving the second water molecule in a rate-determining ring-opening step, leading to the hydroxylated product. Alcoholysis of pentacoordinated phosphorus compounds, such as trimethoxyphospholene with benzyl alcohol, has also been postulated to occur through a similar octahedral transition state, as in hydrolysis, but without ring opening.

It can be understood from these experiments that the increased reactivity observed for hypervalent molecules, contrasted with analogous nonhypervalent compounds, can be attributed to the congruence of these species with the hypercoordinated activated states normally formed during the course of the reaction.

Ab initio calculations

The enhanced reactivity at pentacoordinated silicon is not fully understood.
Corriu and coworkers suggested that greater electropositive character at the pentavalent silicon atom may be responsible for its increased reactivity. Preliminary ab initio calculations supported this hypothesis to some degree, but used a small basis set. A software program for ab initio calculations, Gaussian 86, was used by Dieters and coworkers to compare tetracoordinated silicon and phosphorus to their pentacoordinate analogues. This ab initio approach is used as a supplement to determine why reactivity improves in nucleophilic reactions with pentacoordinated compounds. For silicon, the 6-31+G* basis set was used because of the pentacoordinated species' anionic character, and for phosphorus, the 6-31G* basis set was used.

Pentacoordinated compounds should theoretically be less electrophilic than tetracoordinated analogues due to steric hindrance and greater electron density from the ligands, yet experimentally they show greater reactivity with nucleophiles than their tetracoordinated analogues. Advanced ab initio calculations were performed on a series of tetracoordinated and pentacoordinated species to further understand this reactivity phenomenon. Each series varied by degree of fluorination: bond lengths and charge densities are shown as functions of the number of hydride ligands on the central atom, and for every new hydride there is one less fluoride. Bond lengths, charge densities, and Mulliken bond overlap populations were calculated for the tetra- and pentacoordinated silicon and phosphorus species by this ab initio approach. Addition of a fluoride ion to tetracoordinated silicon shows an overall average increase of 0.1 electron charge, which is considered insignificant. In general, bond lengths in trigonal bipyramidal pentacoordinate species are longer than those in tetracoordinate analogues. Si-F bonds and Si-H bonds both increase in length upon pentacoordination, and related effects are seen in phosphorus species, but to a lesser degree.
The reason for the greater magnitude of bond-length change in silicon species than in phosphorus species is the increased effective nuclear charge at phosphorus; silicon is therefore concluded to be more loosely bound to its ligands. In addition, Dieters and coworkers show an inverse correlation between bond length and bond overlap for all series. Pentacoordinated species are concluded to be more reactive because of their looser bonds as trigonal bipyramidal structures. By calculating the energies for the addition and removal of a fluoride ion in various silicon and phosphorus species, several trends were found. In particular, the tetracoordinated species have much higher energy requirements for ligand removal than do pentacoordinated species. Further, silicon species have lower energy requirements for ligand removal than do phosphorus species, which is an indication of weaker bonds in silicon.

References

- Musher, J. I. (1969). "The Chemistry of Hypervalent Molecules". Angew. Chem. Int. Ed. 8: 54–68. doi:10.1002/anie.196900541.
- Perkins, C. W.; Martin, J. C.; Arduengo, A. J.; Lau, W.; Alegria, A.; Kochi, J. K. (1980). "An Electrically Neutral σ-Sulfuranyl Radical from the Homolysis of a Perester with Neighboring Sulfenyl Sulfur: 9-S-3 Species". J. Am. Chem. Soc. 102: 7753–7759. doi:10.1021/ja00546a019.
- Jensen, W. (2006). "The Origin of the Term "Hypervalent"". J. Chem. Educ. 83 (12): 1751. Bibcode:2006JChEd..83.1751J. doi:10.1021/ed083p1751.
- Akiba, Kin-ya. Chemistry of Hypervalent Compounds. New York: Wiley-VCH. ISBN 0-471-24019-2.
- Magnusson, E. (1990). "Hypercoordinate molecules of second-row elements: d functions or d orbitals?". J. Am. Chem. Soc. 112: 7940–7951. doi:10.1021/ja00178a014.
- Braïda, Benoît; Hiberty, Philippe C. (2013). "The essential role of charge-shift bonding in hypervalent prototype XeF2". Nature Chemistry. 5 (5): 417–422. doi:10.1038/nchem.1619. ISSN 1755-4330.
- "The nature of the chemical bond in the light of an energy decomposition analysis". Theory and Applications of Computational Chemistry (2005): 291–372. doi:10.1016/B978-044451719-7/50056-1.
- Gillespie, R. (2002). "The octet rule and hypervalence: Two misunderstood concepts". Coordination Chemistry Reviews. 233–234: 53–62. doi:10.1016/S0010-8545(02)00102-9.
- Mitchell, Tracy A.; Finocchio, Debbie; Kua, Jeremy (2007). "Predicting the Stability of Hypervalent Molecules". J. Chem. Educ. 84: 629.
- Durrant, M. C. (2015). "A quantitative definition of hypervalency". Chemical Science. 6: 6614–6623. doi:10.1039/C5SC02076J.
- Curnow, Owen J. (1998). "A Simple Qualitative Molecular-Orbital/Valence-Bond Description of the Bonding in Main Group "Hypervalent" Molecules". Journal of Chemical Education. 75 (7): 910–915. Bibcode:1998JChEd..75..910C. doi:10.1021/ed075p910.
- Gerratt, Joe (1997). "Modern valence bond theory". Chemical Society Reviews. 26 (2): 87–100. doi:10.1039/CS9972600087.
- Holmes, R. R. (1996). "Comparison of Phosphorus and Silicon: Hypervalency, Stereochemistry, and Reactivity". Chem. Rev. 96 (3): 927–950. doi:10.1021/cr950243n. PMID 11848776.
- Corriu, R. J. P.; Dabosi, G.; Martineau, M. (1978). "Mécanisme de l'hydrolyse des chlorosilanes, catalysée par un nucléophile: étude cinétique et mise en evidence d'un intermediaire hexacoordonné" [Mechanism of the nucleophile-catalysed hydrolysis of chlorosilanes: kinetic study and demonstration of a hexacoordinate intermediate]. J. Organomet. Chem. 150: 27–38. doi:10.1016/S0022-328X(00)85545-X.
- Johnson, S. E.; Deiters, J. A.; Day, R. O.; Holmes, R. R. (1989). "Pentacoordinated molecules. 76. Novel hydrolysis pathways of dimesityldifluorosilane via an anionic five-coordinated silicate and a hydrogen-bonded bisilonate. Model intermediates in the sol-gel process". J. Am. Chem. Soc. 111 (9): 3250. doi:10.1021/ja00191a023.
- Corriu, R. J. P.; Guerin, Christian; Henner, Bernard J. L.; Wong Chi Man, W. W. C. (1988). "Pentacoordinated silicon anions: reactivity toward strong nucleophiles". Organometallics. 7: 237–238. doi:10.1021/om00091a038.
- Kira, M.; Kobayashi, M.; Sakurai, H. (1987). "Regiospecific and highly stereoselective allylation of aldehydes with allyltrifluorosilane activated by fluoride ions". Tetrahedron Letters. 28 (35): 4081–4084. doi:10.1016/S0040-4039(01)83867-3.
- Bel'skii, V. E. (1979). J. Gen. Chem. USSR. 49: 298.
- Ramirez, F.; Tasaka, K.; Desai, N. B.; Smith, Curtis Page (1968). "Nucleophilic substitutions at pentavalent phosphorus. Reaction of 2,2,2-trialkoxy-2,2-dihydro-1,3,2-dioxaphospholenes with alcohols". J. Am. Chem. Soc. 90 (3): 751. doi:10.1021/ja01005a035.
- Brefort, Jean Louis; Corriu, Robert J. P.; Guerin, Christian; Henner, Bernard J. L.; Wong Chi Man, Wong Wee Choy (1990). "Pentacoordinated silicon anions: Synthesis and reactivity". Organometallics. 9 (7): 2080. doi:10.1021/om00157a016.
- Dieters, J. A.; Holmes, R. R. (1990). "Enhanced Reactivity of Pentacoordinated Silicon Species. An ab Initio Approach". J. Am. Chem. Soc. 112 (20): 7197–7202. doi:10.1021/ja00176a018.
Dicyphus hesperus is a generalist predator that feeds on whiteflies, thrips, aphids, moth eggs and other pests. This predator is able to establish itself in greenhouses, keeping constant pressure on pest populations. Dicyphus is used successfully in several greenhouses in North America. Both adults and nymphs are predators and are very effective at controlling several species of whiteflies and thrips in tropical and semi-tropical ornamental crops. Because it is also a plant feeder, Dicyphus should not be used on its own to replace other biological control agents. The adult has an elongated shape (6 mm) and large red eyes. The body is green and black with semi-transparent mottling. Adult Dicyphus are able to fly. The female lays her eggs inside plant tissue. Nymphs look like the adults, but they are smaller and, in the early stages, totally green. Their wings are not developed.

Introduction rate (preventive): 0.25-0.5 per m², bi-weekly, 2 introductions in total.
So concludes a group of nearly two dozen scientists in a paper appearing this week in the journal Bioscience. The lead author is Ted Schuur, an associate professor of ecology at the University of Florida. Previous studies by Schuur and his colleagues elsewhere have estimated the carbon contained in permafrost in northeast Siberia. The new research expands that estimate to the rest of the permafrost-covered northern latitudes of Russia, Europe, Greenland and North America. The estimated 1,672 billion metric tons of carbon locked up in the permafrost is more than double the 780 billion tons in the atmosphere today. "It's bigger than we thought," Schuur said. Permafrost is frozen ground that contains roots and other soil organic matter that decompose extremely slowly. When it thaws, bacteria and fungi break down carbon contained in this organic matter much more quickly, releasing it to the atmosphere as carbon dioxide or methane, both greenhouse gases. Scientists have become increasingly concerned about this natural process as temperatures in the world's most northern latitudes have warmed. Just last week, it was announced that the amount of sea ice covering the Arctic may reach a new low this summer. Meanwhile, there is widespread consensus that the highest latitudes will warm the fastest, a process already visible in the accelerated thawing of glaciers worldwide. Two years ago, Schuur and two colleagues authored a paper in the journal Science estimating that 400,000 square miles of northeast Siberian permafrost contained 500 billion metric tons of carbon. For this new paper, scientists combined an extensive database of measurements of carbon content in different types of permafrost soils with the estimated spatial extent of those soils in Russia, Europe, Greenland and North America. Schuur said the researchers estimated the carbon contained in permafrost to a depth of three meters, two meters deeper than many earlier estimates. 
Although permafrost depths vary greatly with location, basing the estimate on three-meter depth "better acknowledges the true size of the permafrost carbon pool," Schuur said. The new estimate is important because it mirrors other climate change science suggesting that at a certain tipping point, natural processes could contribute significant amounts of greenhouse gases, supplementing the human-influenced, industrial processes that release fossil fuel carbon, Schuur said. "There are relatively few people living in the permafrost zone," Schuur said. "But we could have significant emissions of carbon from thawing permafrost in these remote regions." How fast the permafrost would release its carbon is a hugely important question. Schuur said the burning of fossil fuels contributes about 8.5 billion tons of carbon each year. Deforestation of the tropical forests and replacement of the forest with pasture or other agriculture is thought to add about 1.5 billion tons per year. How much permafrost will add will depend on how fast it thaws, but Schuur said his research indicates the figure could approach 0.8-1.1 billion tons per year in the future if permafrost continues to thaw. With the Arctic warming and permafrost thawing, shrubs and trees are likely to grow on ground formerly occupied by tundra – indeed, such a transformation has already been observed in parts of Alaska, where some arctic tundra is becoming shrubland. Because plants take in carbon dioxide and release oxygen, it might appear they could compensate for whatever carbon is released by the thawed permafrost. But Schuur said the amount of carbon stored in the permafrost is far greater than what is found in shrubs or trees. For example, he said, a mature boreal forest may contain five kilograms per square meter of stored carbon. But the same area of permafrost soil can contain 44 kilograms, and 80 percent of that could be lost over long-term warming.
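The article's headline numbers are easy to cross-check; a short sketch applying the quoted figures (the 80 percent loss fraction is applied to the quoted per-square-meter carbon stocks):

```python
permafrost_c = 1672  # billion metric tons of carbon locked in permafrost
atmosphere_c = 780   # billion metric tons of carbon in the atmosphere
assert permafrost_c > 2 * atmosphere_c  # "more than double"

forest_stock     = 5.0   # kg C per m^2, mature boreal forest
permafrost_stock = 44.0  # kg C per m^2, permafrost soil
releasable = 0.80 * permafrost_stock    # "80 percent ... could be lost"
assert abs(releasable - 35.2) < 1e-9
assert releasable > 7 * forest_stock    # far exceeds what a forest stores
```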
"The bottom line," he said, "is that you can't grow a big enough forest to offset the carbon release from the permafrost." Ted Schuur | EurekAlert!
<urn:uuid:c928334d-f579-415d-b12c-3aa0f2377bb9>
3.59375
1,410
Content Listing
Science & Tech.
41.313907
95,561,161
The location of the intrusion may give a crucial clue to the fate of the little galaxies the gas flows from, the Large and Small Magellanic Clouds. “We’re thrilled because we can determine exactly where this gas is ploughing into the Milky Way – it’s usually extremely hard to get distances to such gas features,” said the research team leader, Dr Naomi McClure-Griffiths of CSIRO’s Australia Telescope National Facility. The gas finger, called HVC306-2+230, is running into the starry disk of our Galaxy about 70 thousand light-years (21kpc) away from us. On the sky, the point of contact is near the Southern Cross. The finger is the pointy end of the so-called Leading Arm of gas that streams ahead of the Magellanic Clouds towards the Milky Way. Until last year, astronomers generally thought that the Magellanic Clouds had orbited our Galaxy many times, and were doomed to be ripped apart and swallowed by their gravitational overlord. But then new Hubble Space Telescope measurements showed the Clouds were moving much faster than previously thought. In turn, this implied that the Clouds are paying our Galaxy a one-time visit rather than being its long-term companions. Knowing where the Leading Arm is crossing the Galactic Disk may help astronomers to predict where the Clouds themselves will go in future. “We think the Leading Arm is a tidal feature, gas pulled out of the Magellanic Clouds by the Milky Way’s gravity,” said Dr McClure-Griffiths. “Where this gas goes, we’d expect the Clouds to follow, at least approximately.” The team’s measurement of where the Leading Arm intrudes into the Milky Way is more in line with the models that assume the Magellanic Clouds have been orbiting our Galaxy than with the models that have the Clouds just passing by. Dr McClure-Griffiths cautions that this is not the final word on the subject, saying that the latter models were far from ruled out. 
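As a quick aside, the quoted distance of 70 thousand light-years can be sanity-checked against the "(21kpc)" figure with a standard unit conversion (about 3.26 light-years per parsec):

```python
# Sanity check: the article quotes 70 thousand light-years as roughly 21 kpc.
LY_PER_PARSEC = 3.2616      # light-years per parsec (standard astronomical value)

distance_ly = 70_000        # distance to the contact point, as quoted
distance_kpc = distance_ly / (LY_PER_PARSEC * 1_000)

print(round(distance_kpc, 1))   # about 21.5, consistent with the article's "(21kpc)"
```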
But the new result suggests that the Magellanic Clouds will eventually merge with the Milky Way, rather than zooming past. Andrea Wild | EurekAlert!
<urn:uuid:9dfab17a-abf1-4c29-9ede-003feafb6fc5>
3.46875
1,046
Content Listing
Science & Tech.
46.235026
95,561,162
Explaining the complex structure of tropical forests is one of the great challenges in ecology. An issue of special interest is the distribution of different sizes of trees, something which is of particular relevance for biomass estimates. Modellers from the UFZ, working together with research partners, have now developed a new method which can be used to explain the tree size distribution in natural forests. To do so, the scientists use principles from stochastic geometry, as they have reported in a contribution to the Proceedings of the National Academy of Sciences of the United States of America (PNAS, Early Edition). For over one hundred years, the distribution of different sizes of trees in forests has been one of the core attributes recorded by foresters and ecologists world-wide, as it can be used to derive many other structural features, such as biomass and productivity. "We wanted to explain this important pattern", said Dr. Franziska Taubert. Working with her UFZ colleagues Dr. Thorsten Wiegand and Prof. Andreas Huth, and other research partners at the Leipzig University of Applied Sciences (HTWK) and the Karlsruhe Institute of Technology (KIT), they have applied the theory of stochastic sphere packing, which is usually used in physics or chemistry. This theory describes how spheres can be placed in an available space. To apply the theory, the scientists randomly distributed tree crowns of different sizes in forest areas. These tree crowns were not permitted to overlap – just like packing apples into a box. The distribution of the trees that were successfully placed in the packing process was then used to determine the tree size distribution. "Many forest models are based on a dynamic approach: they take into account processes such as growth, mortality, regeneration and competition between trees for light, water and soil nutrients", said Taubert.
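The packing procedure described above can be sketched in a few lines. This is a simplified, hypothetical 2D stand-in for the study's 3D crown-packing model: crown radii are drawn from an arbitrary uniform distribution rather than the paper's tree-geometry relationships, and candidates are placed by simple rejection sampling.

```python
import math
import random

def pack_crowns(n_attempts, area_size, radius_range, seed=0):
    """Randomly place non-overlapping circular tree crowns by rejection sampling.

    Each candidate crown gets a random radius and position; it is kept only
    if it does not overlap any crown placed earlier -- like packing apples
    into a box. Returns the placed crowns as (x, y, r) tuples.
    """
    rng = random.Random(seed)
    placed = []
    for _ in range(n_attempts):
        r = rng.uniform(*radius_range)
        x = rng.uniform(r, area_size - r)
        y = rng.uniform(r, area_size - r)
        # Keep the candidate only if it overlaps no previously placed crown.
        if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2
               for px, py, pr in placed):
            placed.append((x, y, r))
    return placed

crowns = pack_crowns(n_attempts=2000, area_size=100.0, radius_range=(1.0, 8.0))
# The crowns that survive the packing define the emergent size distribution;
# the covered fraction is the 2D analogue of the paper's packing density.
density = sum(math.pi * r * r for _, _, r in crowns) / 100.0 ** 2
```

In this toy version, as in the study, the size distribution of the placed crowns emerges purely from competition for space, with no growth or mortality processes involved.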
"These models are complex and data-hungry", added Thorsten Wiegand," so we decided to take a radically different approach, which is fundamentally simpler and only based on spatial structures". This model approach proved its effectiveness by enabling observed forest structures, especially the tree size distribution, to be reproduced accurately. The rules of stochastic geometry are thereby enriched by tree geometry relationships, and the resulting tree packing system is compared to inventory data from tropical forests in Panama and Sri Lanka. Although one might imagine that a tropical forest is very tightly packed, the scientists came to a surprising conclusion: the packing density of the tree crowns, which averages 15 to 20%, is astonishingly low. "In particular, the upper and lower canopy levels are less tightly packed with tree crowns", said Taubert. High packing densities of around 60%, which are also possible according to stochastic geometry, only occur at tree heights between 25 and 40 meters. The findings concerning the distribution of tree crowns are important, because they can be used to draw conclusions about, for example, the carbon content or productivity of a forest. Using this modelling approach, the researchers were also able to show that the decisive factor in shaping the tree size distribution is competition for space. "In classical forest models", said Andreas Huth, "the trees instead compete for light, or water and nutrients". The theory opens up several new perspectives. The team plans to assess how the model can be applied to natural forests in the temperate and boreal zone. They believe that the model can be used to identify disturbed forests. "That is of special interest because it will enable us to develop a disturbance index", said Taubert, “and to better interpret remote sensing observations by using the structure of natural forests as a reference”. 
Another benefit of the new theory is that this simple forest packing model takes much less effort than classical forest models. The new approach is an important step toward identifying a minimal set of processes responsible for generating the spatial structure of natural forests. Franziska Taubert, Markus Wilhelm Jahn, Hans-Jürgen Dobner, Thorsten Wiegand and Andreas Huth: "The structure of tropical forests and sphere packings". Proceedings of the National Academy of Sciences of the United States of America (PNAS). http://www.pnas.org/cgi/doi/10.1073/pnas.1513417112 Helmholtz Centre for Environmental Research – UFZ, Leipzig University of Applied Sciences (HTWK), Karlsruhe Institute of Technology (KIT), German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, University of Osnabrück. The researchers thank the Advanced Grant of the European Research Council (ERC) for their support. Dr. Franziska Taubert UFZ Department of Ecological Modelling Phone: +49 341 235-1896 Prof. Dr. Andreas Huth Head of UFZ Department of Ecological Modelling Phone: +49 341 235-1719 Susanne Hufe | Helmholtz-Zentrum für Umweltforschung - UFZ
<urn:uuid:d3eee926-551d-48e3-aac3-69bbb79eadd4>
3.609375
1,644
Knowledge Article
Science & Tech.
36.617064
95,561,175
In this lesson, we examined some of the key anthropogenic climate change projections. Some of the main findings were: - Climate models project anywhere from 0.2 to 7°C warming over the next century, depending on two critical variables: (1) what decisions society makes regarding future carbon emissions and (2) currently irreducible uncertainties regarding the sensitivity of the climate to greenhouse gas radiative forcing; - The primary source of spread in projected warming is factor #1, the uncertainty in future emissions. Were we to freeze greenhouse gas concentrations at their current levels, the average projection among models is an additional warming of 0.5°C. This scenario is highly unlikely, as it is very difficult to find a pathway to zero emissions in the near future. On the other hand, were we to pursue a business-as-usual emissions scenario (e.g., continue on the A1FI scenario), the average projection among models is for an additional warming of 4°C; - The impact of factor #2, the uncertainty in climate sensitivity, is nonetheless quite significant. In the A1FI scenario, the globe could warm anywhere from a lower bound of 2.5°C to an upper bound of 6.5°C, depending on which particular climate model is used; - The variability in temperatures in both space and time is projected to be considerable. High latitudes warm more than low latitudes owing to positive feedbacks related to the melting of ice, and land warms more than oceans due in large part to the greater thermal inertia of the oceans. Even as the globe warms, there will continue to be cold periods over particular regions related to ENSO and other sources of natural variability; - Precipitation is projected to increase in the tropics and sub-polar latitudes, while decreases are projected for sub-tropical through mid-latitude regions. 
These changes reflect a combination of the effects of shifting storm tracks and the potential for a warmer atmosphere to hold more water vapor; - Continental drought becomes more widespread over much of the continents. This results from a tendency for increased evaporation to dry out soil, even in many regions that see an increase in precipitation; - Anthropogenic climate change leads to substantial changes in atmospheric circulation, including a poleward shift of the descending branch of the Hadley Circulation and of the jet streams, polar front, and storm tracks. These changes also include a weakening of monsoonal circulations, and possible, but uncertain, impacts on the Walker Circulation pattern associated with ENSO; - The NAO/AO/NAM mode of variability, tied with variations in the position of the Northern Hemisphere winter jet stream, is projected to become more positive, associated with a northward displaced storm track and warmer, wetter winter conditions in regions such as Europe, but drier conditions in semi-arid regions such as the Mediterranean and Near and Middle East; - A modest weakening is projected for the meridional overturning ocean circulation (variously referred to as the thermohaline circulation and conveyor belt circulation). State-of-the-art models, however, project a far weaker effect than what was once considered possible; rather than leading to cold conditions over the North Atlantic and neighboring regions, what is currently projected is only a moderate decrease of the warming in a small region in the North Atlantic ocean south of Greenland; - Projected changes in the character of the El Niño/Southern Oscillation are uncertain. Model projections are divided with respect to whether the future climate state will be more like El Niño or La Niña, and whether individual El Niño and La Niña events will be larger or smaller. Reminder - Complete all of the lesson tasks! You have finished Lesson 7. 
Double-check the list of requirements on the first page of this lesson to make sure you have completed all of the activities listed there before beginning the next lesson.
<urn:uuid:d0ec900d-8d8e-4f43-ab75-bebec3f85816>
3.484375
804
Truncated
Science & Tech.
22.727071
95,561,204
The developers at NASA have one of the most challenging jobs in the programming world. They write code and develop mission-critical applications with safety as their primary concern. In such situations, it's important to follow serious coding guidelines. These rules cover different aspects of software development, such as how software should be written and which language features should be used. Even though it's difficult to establish a consensus over a good coding standard, NASA's Jet Propulsion Laboratory (JPL) follows a set of coding guidelines named "The Power of Ten – Rules for Developing Safety-Critical Code". This guide focuses mainly on code written in the C programming language due to JPL's long association with the language, but the guidelines can easily be applied to other programming languages as well. Laid down by JPL lead scientist Gerard J. Holzmann, these strict coding rules focus on safety. NASA's 10 rules for writing mission-critical code: 1. Restrict all code to very simple control flow constructs – do not use goto statements, setjmp or longjmp constructs, or direct or indirect recursion. 2. All loops must have a fixed upper bound. It must be trivially possible for a checking tool to prove statically that a preset upper bound on the number of iterations of a loop cannot be exceeded. If the loop bound cannot be proven statically, the rule is considered violated. 3. Do not use dynamic memory allocation after initialization. 4. No function should be longer than what can be printed on a single sheet of paper in a standard reference format with one line per statement and one line per declaration. Typically, this means no more than about 60 lines of code per function. 5. The assertion density of the code should average to a minimum of two assertions per function. Assertions are used to check for anomalous conditions that should never happen in real-life executions. Assertions must always be side-effect free and should be defined as Boolean tests. 
When an assertion fails, an explicit recovery action must be taken, e.g., by returning an error condition to the caller of the function that executes the failing assertion. Any assertion for which a static checking tool can prove that it can never fail or never hold violates this rule (i.e., it is not possible to satisfy the rule by adding unhelpful "assert(true)" statements). 6. Data objects must be declared at the smallest possible level of scope. 7. The return value of non-void functions must be checked by each calling function, and the validity of parameters must be checked inside each function. 8. The use of the preprocessor must be limited to the inclusion of header files and simple macro definitions. Token pasting, variable argument lists (ellipses), and recursive macro calls are not allowed. All macros must expand into complete syntactic units. The use of conditional compilation directives is often also dubious, but cannot always be avoided. This means that there should rarely be justification for more than one or two conditional compilation directives even in large software development efforts, beyond the standard boilerplate that avoids multiple inclusion of the same header file. Each such use should be flagged by a tool-based checker and justified in the code. 9. The use of pointers should be restricted. Specifically, no more than one level of dereferencing is allowed. Pointer dereference operations may not be hidden in macro definitions or inside typedef declarations. Function pointers are not permitted. 10. All code must be compiled, from the first day of development, with all compiler warnings enabled at the compiler's most pedantic setting. All code must compile with these settings without any warnings. All code must be checked daily with at least one, but preferably more than one, state-of-the-art static source code analyzer and should pass the analyses with zero warnings. 
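The rules target C, but as the article notes, they can be applied to other languages as well. Here is a hedged sketch in Python of three of them — fixed loop bounds (rule 2), assertion density (rule 5), and checked return values (rule 7). The function names and the bound are invented for the illustration, not taken from NASA's document.

```python
MAX_ITEMS = 64  # Rule 2: a preset, statically known upper bound for every loop

def bounded_search(items, target):
    """Linear search illustrating Rules 2 and 5."""
    assert items is not None          # Rule 5: assert on anomalous conditions
    assert len(items) <= MAX_ITEMS    # Rule 2: the loop bound is enforced up front
    for i in range(min(len(items), MAX_ITEMS)):  # loop can never exceed MAX_ITEMS
        if items[i] == target:
            return i
    return -1  # error sentinel: callers must not ignore this (Rule 7)

def find_or_default(items, target, default):
    """Rule 7: the return value of the callee is always checked."""
    index = bounded_search(items, target)
    if index < 0:                     # explicit recovery action on failure
        return default
    return items[index]
```

For example, `find_or_default([3, 1, 4], 9, "missing")` takes the explicit recovery path and returns the supplied default instead of silently propagating the sentinel.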
About these rules, here's what NASA has to say: The rules act like the seatbelt in your car: initially they are perhaps a little uncomfortable, but after a while their use becomes second-nature and not using them becomes unimaginable. Did you find this article helpful? Don't forget to drop your feedback in the comments section below. Originally posted on Fossbytes.
<urn:uuid:46ec0e35-02b1-419b-8be5-ea49e9c64a74>
3.1875
894
Knowledge Article
Software Dev.
45.022587
95,561,229
The researchers found a new way of measuring the activity of a group of enzymes called DNA topoisomerases that help package DNA, the molecule that stores genetic information, into cells. Chemicals that block these enzymes could be developed into new anti-cancer and anti-bacterial drugs. The previous method used for measuring the activity of topoisomerases is time consuming and labour-intensive; this new technique is faster, more accurate and could be automated with robotics to screen thousands of chemicals and identify those with the potential to be made into drugs. “This development is really exciting because it will speed up the whole discovery process for this type of drug. A quicker and more accurate screen will allow more potential drugs to be assessed and therefore aid the search for urgently needed new anti-cancer and antibacterial drugs” says Tony Maxwell. “A patent for the technique has been granted and we already have several pharmaceutical companies that are interested in licensing the technology”. The technique has been patented and will be marketed by PBL, the technology management company of the John Innes Centre, and will be further developed by Inspiralis Ltd, a spin-out company housed in the Norwich Bioincubator. The research was funded by the BBSRC and PBL and is published online in the peer-reviewed journal Nucleic Acids Research.
<urn:uuid:aaab2e5e-645f-4988-8b29-786e1c563a6f>
3.375
938
Content Listing
Science & Tech.
38.026034
95,561,240
Geometric invariants which completely characterize the size and shape of three-dimensional configurations of scattering centers on moving targets can be extracted from signals from ranging sensors such as radar. These invariants can potentially be used to fingerprint and track specific moving targets even when information about changes in the target's orientation between observations is unreliable and when no prior model exists for the target. This technology also creates new possibilities for more robust target recognition. Mark A. Stuff, "Three-dimensional invariants of moving targets", Proc. SPIE 4053, Algorithms for Synthetic Aperture Radar Imagery VII, (24 August 2000); doi: 10.1117/12.396363; https://doi.org/10.1117/12.396363
<urn:uuid:35154865-36b8-4ee4-9eae-6724089995a0>
2.578125
151
Academic Writing
Science & Tech.
44.928182
95,561,253
Mit center for global change science affiliates include: when the wind blows: the crucial importance of upwelling along boundary layers in closing the abyssal. Data science research the importance of technology for handling speech and audio noise that occurs at regular intervals when the wind blows and noise that. Wind engineering in africa this increases cross-isobaric flow and augments the relative importance of the harmattan is a hot and dry wind that blows. Do you want to become a better writer you can we have the writing support you need join thousands of other students in our online writing community and receive. 5 ways to demonstrate physically and mentally—when the children are conducting any science experiment can change when the wind blows or an. Sustainable development focuses on how to make better use of agricultural science, knowledge and technology to reduce hunger wind blows everywhere on. It's crucial to calculate precisely how the 19 million photovoltaic facilities and wind farms the technology to how much the wind blows. The science of energy: resources and power explained in the 24 lectures of the science of energy: resources and power the natural wind blows through the. Children’s wind energy book receives shining endorsements from climate shining endorsements from climate leaders, science of wind power technology. Anemometers, or windmeters, are used to determine how fast wind blows they are commonly found at weather stations windmeters are also able to measure wind's. Advantages and challenges of wind energy for as long as the sun shines and the wind blows, the the wind energy technology office's.Importance of grid energy storage on extraordinary days green technology on wind farms in texas, the wind blows almost exclusively at night while demand is. Renewable energy design: wind each teachengineering lesson or activity is correlated to one or more k-12 science, technology, as the wind blows over wind. 
The second mystery involves the solar wind the fast, hot wind blows charged particles, the integrated science investigation of the so new technology was. Shape of cities shapes the weather date: may 17 when wind blows over a what we showed in our work is the importance of taking into account the spatial. Pros & cons of wind energy electricity only when the wind blows of economic development and environmental progress or sleek icons of modern technology. Make the wind work for you science buddies, 28 july 2017, https: when the wind blows, it makes the blades of the fan, called rotors, spin around,. So we can measure what direction the wind is going go science math history literature technology a north wind comes from the north an ill wind blows no one. When the wind blows: a single season highlights the importance of research on the relationship institute of technology center for global change science. As solar wind blows, what happens when the solar wind suddenly starts to blow significantly harder technology stories science news [email protected] press. The kids ahead program is an initiative to increase the number of kids with science, technology wind energy activities fast the wind blows by knowing. Review of windcatcher technologies wind speed which reaches to approximately 50% when wind blows 1 this technology captures the external wind and induces it. As solar wind blows, important ferromagnetic semiconductor synthesized date: november 22 spintronics is of crucial importance for the storage and transport of. Exploring wind energy student guide secondary 2017-2018 blows, not the direction toward which the wind moves a north wind blows. Read chapter 6 active control in wind turbines: wind-driven power systems represent a renewable energy technology arrays of interconnected wind the wind blows. Ocean science science notes all wind turbines operate in the same basic manner as the wind blows, offshore wind energy technology.Download
<urn:uuid:c635bbef-4178-44d6-bb9b-5156c269082e>
3.28125
745
Spam / Ads
Science & Tech.
32.302801
95,561,263
Sunspots – Our Solar System A sunspot can be defined as a dark, irregularly shaped area on the surface of the Sun. The temperature of these spots is usually lower than that of the surrounding photosphere. The photosphere has a temperature of about 5,800 Kelvin, while sunspots have temperatures of about 3,800 Kelvin. They are caused by strong magnetic activity within the Sun. Their diameter can range up to 50,000 kilometers. They have a lighter outer section, known as the penumbra. The darker middle section is known as the umbra. Fast Facts: – - A sunspot is not permanent. It can last from days to weeks to months and can also travel across the solar disk. - Sunspots were telescopically observed in 1610 AD for the first time by Thomas Harriot, and by Johannes and David Fabricius. - A typical sunspot cycle lasts for 11 years. After every 11 years, their number increases and then starts to decline. - In 1890, while looking at the historical records, E. Maunder noticed that the number of sunspots fell drastically between 1645 and 1715. - This specific period has been named after E. Maunder and is known as the Maunder Minimum. - When two or more sunspots are gathered in a region, it is known as a sunspot group or an Active Region. - These spots may be several times larger than Earth, or so small that telescopic observation is difficult. Declan, Tobin. "Fun Facts for Kids about Sunspot." Easy Science for Kids, Jul 2018. Web. 21 Jul 2018. <http://easyscienceforkids.com/sunspot-facts/>.
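The temperature figures above explain why sunspots look dark. Assuming the spot and the photosphere radiate approximately as blackbodies, emitted power per unit area scales as the fourth power of temperature (the Stefan-Boltzmann law), so a cooler spot is far dimmer than its surroundings:

```python
# Stefan-Boltzmann scaling: radiated power per unit area is proportional to T**4.
T_PHOTOSPHERE = 5800.0  # kelvin, from the article
T_SUNSPOT = 3800.0      # kelvin, from the article

relative_brightness = (T_SUNSPOT / T_PHOTOSPHERE) ** 4
# roughly 0.18: a sunspot emits less than a fifth as much light per unit area,
# so it appears dark only by contrast with the brighter photosphere around it.
```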
Do you know how to save app updating time using CocoaPods? No? Don't worry, invest your next 15 minutes in this blog, and you will be able to use CocoaPods as easily as 1-2-3. But before we move on to the implementation part, let's overview CocoaPods. CocoaPods is a dependency manager for managing third-party libraries. In simple words, it is a perfect tool to manage library dependencies and scale your project elegantly. If you're developing a mobile app with the help of third-party libraries, you can use CocoaPods to save the time you would otherwise spend updating those libraries manually. It can save you the trouble of typing countless lines of code to update third-party libraries. It can directly fetch library code, resolve the dependencies, and help you set up the right environment for your projects. Open Xcode and create a new project from the file menu. Here, we've used 'Single View Application' as a template to create a short demo. After you create a new project, install CocoaPods on your Mac. Just type 'Terminal' in Spotlight search and open it. Type the following command. sudo gem install cocoapods For Mac OS X 10.11 El Capitan, enter the following command instead. sudo gem install -n /usr/local/bin cocoapods By running the command above, CocoaPods will be installed on your Mac. Now, it's time to set up a pod in your project. Again, open the terminal, if you've closed it, and move to your Xcode project directory with the following command. cd "YOUR XCODE PROJECT PATH" Create a new Podfile in your project with the following command. pod init Now, open the Podfile that we just created. open -e Podfile Next, open your browser, go to https://cocoapods.org/, search for 'AFNetworking', and get its pod with the latest version. Add this pod to the Podfile: pod 'AFNetworking', '~> 3.1' Save and close this file, and install the pod with the following command. pod install Once your pod is installed, open your workspace (the .xcworkspace file) and start coding. Congratulations! 
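Putting the steps together, the finished Podfile might look like the sketch below. The target name 'MyApp' and the iOS platform version are placeholders for illustration, not values from the post:

```ruby
# Podfile -- a minimal sketch; 'MyApp' and the platform version are
# placeholders, not taken from the post.
platform :ios, '9.0'

target 'MyApp' do
  use_frameworks!
  # The third-party library fetched by CocoaPods, as in the post:
  pod 'AFNetworking', '~> 3.1'
end
```

Running pod install against a Podfile like this generates the .xcworkspace you open instead of the .xcodeproj.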
Now you know how to use CocoaPods in your projects and you're ready to start using it. You can do a lot more with CocoaPods, such as pinning a library to a specific version, which saves you the trouble of finding and downloading that version of the library online. Nowadays, most mobile applications, like Square Sized and EmojiGram, need third-party libraries, so developing your mobile app with CocoaPods and Swift makes it cost-effective. If you have an app idea similar to Instagram or Uber, or any other type of application which uses a third-party library, you can contact us. We're a top mobile app development company, and we've recently developed an app like Uber for our client. So, if you have such an idea, you can discuss it with our product manager. You can also get a free copy of this demo on GitHub here. If you have any questions related to this blog, do comment down below. We will get back to you as soon as possible. Get your free consultation now
Supersymmetry and Gauge Theory
by Neil Lambert
Publisher: King's College London 2011
Number of pages: 54
From the table of contents: Introduction; Gauge Theory; Fermions: Clifford Algebras and Spinors; Supersymmetry; Elementary Consequences of Supersymmetry; Super-Yang-Mills; Extended Supersymmetry; Physical Features of Yang-Mills Theories.
Download or read it online for free here:
by Keith A. Olive - arXiv: These lectures on supersymmetry contain a pedagogical description of supersymmetric theories and the minimal supersymmetric standard model. Phenomenological and cosmological consequences of supersymmetry are also discussed.
by Ian J R Aitchison - arXiv.org: These notes are an expanded version of a short course of lectures given for graduate students in particle physics at Oxford. The level was intended to be appropriate for students in both experimental and theoretical particle physics.
by Neil Lambert - King's College London: Supersymmetry has been a very fruitful subject of research and has taught us a great deal about mathematics and quantum field theory. Hopefully this course will convince the student that supersymmetry is a beautiful and interesting subject.
by W. Siegel, et al. - arXiv: Supersymmetry is the supreme symmetry: It unifies spacetime symmetries with internal symmetries, fermions with bosons, and (local supersymmetry) gravity with matter. Under quite general assumptions it is the largest possible symmetry of the S-matrix.
The Energy Catalyzer (also called E-Cat) is a claimed cold fusion reactor devised by inventor Andrea Rossi with support from the late physicist Sergio Focardi. An Italian patent, which received a formal but not a technical examination, describes the apparatus as a "process and equipment to obtain exothermal reactions, in particular from nickel and hydrogen". Rossi and Focardi said the device worked by infusing heated hydrogen into nickel powder, transmuting it into copper and producing excess heat. An international patent application received an unfavorable international preliminary report on patentability in 2011 because it was adjudged to "offend against the generally accepted laws of physics and established theories". The device has been the subject of demonstrations and tests several times, and commented on by various academics and others, but no independent tests have been made, and no peer-reviewed tests have been published. Steve Featherstone wrote in Popular Science that by the summer of 2012 Rossi's "outlandish claims" for the E-Cat seemed "thoroughly debunked". Invited guests attended several demonstrations in Bologna in 2011. The device has not been independently verified. Of a January demonstration, Discovery Channel analyst Benjamin Radford wrote that "If this all sounds fishy to you, it should," and that "In many ways cold fusion is similar to perpetual motion machines. The principles defy the laws of physics, but that doesn't stop people from periodically claiming to have invented or discovered one." According to Phys.org (11 August 2011), the demonstrations held from January to April 2011 had several flaws that compromised their credibility and Rossi had refused to perform tests that could verify his claims. University of Bologna researchers have attended some E-Cat demonstrations, but only as observers. 
On 5 November 2011, the University of Bologna clarified that its researchers had not been involved in the demonstrations and that none of those took place at the university. Rossi had signed a contract with the university, but the contract was terminated and no research was done because Rossi did not make the first payment. Skeptic Ian Bryce speculated that the E-Cat was misconnected during demonstrations, and that the power attributed to fusion is supplied to the device through the earth wire. Dick Smith offered Rossi one million dollars to demonstrate that the E-Cat system worked as claimed, while the power through the earth wire was also being measured, which Rossi refused. Peter Thieberger, a senior physicist at Brookhaven National Laboratory, said it would be very difficult for this misconnection to happen by accident and that the issue could only be cleared with a fully independent test. On 28 October 2011 the unit was "customer tested" and was said to release 2,635 kWh during five and a half hours of self-sustained mode, an average power of 479 kilowatts – just under half the promised power of one megawatt. Independent observers were not allowed to watch the measurements or make their own, and the plant remained connected to a power supply during the test allegedly to supply power to the fans and the water pumps. Because of his research into cold fusion for over 15 years, Sergio Focardi was contacted by Andrea Rossi in 2007 in order to validate the apparatus at its early stage of development. After four years of work and measurements together with Rossi, Focardi concluded that nuclear fusion reactions happen inside the Energy Catalyzer. Focardi states that the nuclear process is facilitated by a secret additive, known only by Rossi and not by him. According to Focardi, the process would be much less intense without this additive. 
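The 479-kilowatt figure in the account above is simply the claimed energy output divided by the duration of the run, which is easy to check:

```python
# Average power for the 28 October 2011 "customer test", using the
# figures quoted above (2,635 kWh released over five and a half hours).
energy_kwh = 2635.0   # claimed energy released, kWh
duration_h = 5.5      # stated duration of the self-sustained run, hours

avg_power_kw = energy_kwh / duration_h
print(f"average power = {avg_power_kw:.0f} kW of the promised 1,000 kW")
```

This matches the text's "just under half the promised power of one megawatt."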
Rossi and Focardi are reported to have been unable to find a peer-reviewed scientific journal that would publish their paper describing how they claim the Energy Catalyzer operates. Their paper appears only in Rossi's self-published blog, Journal of Nuclear Physics. In May 2013 a non-peer-reviewed paper describing "results obtained from evaluations of the operation of the E-Cat HT in two test runs" was submitted to the arXiv digital archive. Although the authors of the paper wrote that they were not in control of all of the aspects of the process, they concluded that, even by the most conservative of measurements, the device produced excess heat with a resulting energy density that was at least one order of magnitude, and possibly several, higher than any other conventional energy source. The test was partly funded by the Swedish energy research consortium, Elforsk. Elforsk stated on their website that the results were very remarkable, but that it was highly questionable to speculate whether nuclear transformation had occurred when no access had been provided to the reactants. In a response to the original manuscript archived on arXiv, commentators criticized the testing as not truly independent, described the report as having "characteristics more typically found in pseudo-scientific texts", and stated that "The authors seem to jump to conclusions fitting pre-conceived ideas where alternative explanations are possible." Astrophysicist Ethan Siegel commented at ScienceBlogs saying Rossi did not allow the reactants or products to be measured on this occasion. In the previous tests there were not enough 62Ni and 64Ni (the only two nickel isotopes which can fuse with hydrogen), at 3.6% and 0.9% respectively, in the reactants to explain the 10% copper output; these isotope levels are typical of natural copper, rather than of a fusion by-product. 
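Siegel's abundance objection can be made concrete using only the two percentages quoted in the text (3.6% for 62Ni and 0.9% for 64Ni): even if every fusible nickel nucleus were converted, the yield falls well short of the reported 10% copper:

```python
# Fraction of natural nickel available for the claimed fusion route,
# using only the two abundances quoted in the text above.
abundance_62ni = 0.036  # 62Ni, per the text
abundance_64ni = 0.009  # 64Ni, per the text

fusible_fraction = abundance_62ni + abundance_64ni
copper_reported = 0.10  # copper fraction reported in the spent fuel

print(f"fusible nickel: {fusible_fraction:.1%} vs. reported copper: {copper_reported:.0%}")
```

At most about 4.5% of natural nickel could have fused, less than half of what would be needed to account for the copper claimed.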
According to Siegel, Rossi also refused to unplug the machine while it was operating despite it being an easy way to surreptitiously power the device. He also added that the supposedly independent testers had to rely on data supplied by Rossi. In October 2014 a non-peer-reviewed paper by the same authors as the May 2013 report describes results from evaluations in March 2014 of an upgraded version of the E-Cat which runs at higher temperatures. Unlike previous demonstrations, the test was carried out with monitoring equipment and in a laboratory not supplied by Rossi, and was run over an extended duration (32 days). However, as with the previous report, the authors were not in full control of the process; Rossi intervened during the insertion of the fuel charge, start up of the reactor, shut down of the reactor, and extraction of the spent fuel. Overall, the total excess heat measured was calculated to be well beyond that possible by any conventional, non-nuclear source. In this report, they present analyses of samples of spent fuel, concluding from the isotopes found that "nuclear reactions are therefore indicated to be present in the run process, which however is hard to reconcile with the fact that no radioactivity was detected outside the reactor during the run." Following fuel and ash isotopic analysis, the authors speculate that isotopes of nickel and lithium in particular are part of the reaction, with transmutation of 58Ni and 60Ni to 62Ni, and of 7Li to 6Li, through some unknown process. Particle physicist Tommaso Dorigo commented on the 2014 test, calling the isotopic measurements "startling", but he expressed deep concern about Rossi being involved in collecting the spent fuel, that the testers may have "overlooked some simple trick" and that "given the extraordinary nature of the claim… this constitutes a major flaw, which totally invalidates any conclusions one might otherwise draw." 
Astrophysicist Ethan Siegel was highly critical of the test, stating that the testers were not independent, that Rossi could have tampered with the fuel samples, that the 'open calorimeter' setup used was inappropriate, and that "it's relatively easy to fake the amount of energy being drawn through a power cord if there is a hookup to an external source."

Reactions to the claims

Theoretical astrophysicist Ethan Siegel and nuclear physicist Peter Thieberger argue that the claims for the E-Cat are incompatible with the fundamentals of nuclear physics. In particular, the Coulomb barrier for the claimed fusion reaction is so high that it is insurmountable anywhere in the known universe, including the interior of stars. The reaction also would create gamma radiation that would have penetrated the few inches of shielding apparently provided by the E-Cat, inducing acute radiation syndrome in persons in the vicinity of the purported demonstrations. Given numerous other scientific inconsistencies – such as the ratio of isotopes in the supposed copper "fusion product" being identical to that in natural copper – the authors argued that it is now time "for the E-Cat's proponents to provide the provable, testable, reproducible science that can answer these straightforward physics objections." Peter Ekström, lecturer at the Department of Nuclear Physics at Lund University in Sweden, concluded in May 2011, "I am convinced that the whole story is one big scam, and that it will be revealed in less than one year." 
He cited the unlikelihood of a chemical reaction being strong enough to overcome the Coulomb barrier, the lack of gamma rays, the lack of explanation for the origin of the extra energy, the lack of the expected radioactivity after fusing a proton with 58Ni, the unexplained occurrence of 11% iron in the spent fuel, the 10% copper in the spent fuel strangely having the same isotopic ratios as natural copper, and the lack of any unstable copper isotope in the spent fuel, as if the reactor only produced stable isotopes. Kjell Aleklett, physics professor at Uppsala University, said the percentage of copper was too high for any known reaction of nickel, and the copper had the same isotopic ratio as natural copper. He also stated, "Known chemical reactions cannot explain the amount of energy measured. A nuclear reaction can explain the amount of energy, but the knowledge we have today says that this reaction cannot take place." Scientific skeptic James Randi, discussing the E-Cat in the context of previous cold fusion claims, suggested that it will eventually be proven to be a fraud. Other reactions to the device have been mixed. In 2011 Dennis M. Bushnell, Chief Scientist at NASA Langley Research Center, described LENR as a "promising" technology and praised the work of Rossi and Focardi. Theoretical nuclear physicist Yeong E. Kim of Purdue University has proposed a potential theoretical explanation of the reported results of the device, but has stated that, for confirmation of this theory, "it is very important to carry out Rossi-type experiments independently." Kim had previously put forward this theory to explain the results of the now-discredited Fleischmann and Pons cold fusion experiment in 1989. Steve Featherstone wrote in Popular Science that by the summer of 2012 Rossi's "outlandish claims" for the E-Cat seemed "thoroughly debunked" and that Rossi "looked like a con man clinging to his story to the bitter end." 
An application in 2008 to patent the device internationally received an unfavorable preliminary report on patentability at the World Intellectual Property Organization from the European Patent Office, noting that the description of the device was based on "general statements and speculations" and citing "numerous deficiencies in both the description and in the evidence provided to support its feasibility" as well as incompatibilities with "generally accepted laws of physics and established theories." The patent application was published on 15 October 2009. On 6 April 2011 an application was approved by the Italian Patent and Trademark Office, which issued a patent for the invention, valid only in Italy. Under then-current Italian law, the examination of the application was more formal and less technical than for the corresponding PCT application. In March 2014 the US Patent Office replied to Rossi's US patent application with a provisional decision to reject it, saying "The specification is objected to as inoperable. Specifically there is no evidence in the corpus of nuclear science to substantiate the claim that nickel will spontaneously ionize hydrogen gas and therefore 'absorb' the resulting proton". In January 2014 a newly formed company, Industrial Heat LLC, announced that it had acquired rights to Rossi's E-Cat technology. In April 2016, Rossi filed a lawsuit in the USA against Industrial Heat, alleging that he was not paid an $89 million licensing fee due after a one-year test period of an E-Cat unit. Industrial Heat's comment on the lawsuit was that after three years of effort they were unable to reproduce Rossi's E-Cat test results. On July 5, 2017 the parties settled; the terms of the settlement were not released. - Patent application WO 2009125444, Andrea Rossi, "Method and Apparatus for carrying out nickel and hydrogen exothermal reactions" . - Zyga, Lisa (2011-08-11). "Controversial energy-generating system lacking credibility (w/ video)". PhysOrg. 
- Mark Gibbs (17 October 2011). "Hello Cheap Energy, Hello Brave New World". Forbes. the E-Cat is a cold fusion (CF) device (the inventor, Andrea Rossi, prefers to term the technology 'Low Energy Nuclear Reaction' which appears to be the same thing as CF but a less contentious phrasing). - Lisa Zyga (2011-01-20). "Italian Scientists claim to have demonstrated cold fusion". PhysOrg. Andrea Rossi and Sergio Focardi of the University of Bologna announced that they developed a cold fusion device - Peter Clarke (2011-01-24). "Italian scientists claim cold fusion success". EE Times. Andrea Rossi and Sergio Focardi of the physics department of the University of Bologna. The two claim to have developed a cold fusion reactor - "processo ed apparecchiatura per ottenere reazioni esotermiche, in particolare da nickel ed idrogeno" [process and equipment to obtain exothermal reactions, in particular from nickel and hydrogen]. Italian Office for Patents and Trademarks. Patent Number 0001387256, Deposited 9 April 2008, Issued 6 April 2011, Inventor: Andrea Rossi. - S. Focardi; A. Rossi (22 March 2010). "A new energy source from nuclear fusion". unpublished manuscript. - Deotto, Fabio (19 January 2011). "Fusione fredda realizzata a Bologna. Sarà vero?" (in Italian). Daily Wired. - Featherstone, Steve (2012). "Andrea Rossi's Black box infinite energy: a lone Italian inventor says he has built a machine that can power the world. Could the answer to humanity's energy troubles be so simple?". Popular Science. 281 (5): 62. - Angelo Saso (3 May 2011). La magia del signor Rossi (in Italian). Rai News. Retrieved 10 July 2011. - Benjamin Radford (21 January 2011). "Cold fusion: Cold Fusion Claims Resurface". Discovery.com. Retrieved 21 May 2011. - "E-cat: l'Università di Bologna non è coinvolta" (in Italian). UNIBO Magazine. University of Bologna. 2011-11-05. 
- Mackinson, Thomas (2011-11-09). "Fusione fredda fatta in casa Grande scoperta o grande bufala?". Il Fatto Quotidiano. "The University of Bologna – the notice states – is not involved in E-Cat experiments conducted by Leonardo Corp." - Mannella, Lorenzo (2011-10-14). "Fusione fredda a Bologna. I dubbi continuano". Daily Wired (Italian edition). Retrieved on 10 November 2011. - "E-cat: non ci sono misure in atto". Università di Bologna. 2012-08-27. - E-cat: dichiarazione del Dipartimento di Fisica, 26 January 2012, University of Bologna. - Natalie Wolchover (2012-09-02). "Fraud claims over E-Cat 'cold fusion' machine heating up". msnbc.com. - Ian Bryce. "How Rossi Cold Fusion Tests Misled the World's Scientists" (PDF). Australian Skeptics press release. Archived from the original (PDF) on 13 March 2013. - "Dick Smith: "Rossi E-CAT ... too fantastic to be true"". Forbes. 24 February 2012. Retrieved 3 October 2012. The "checking the wires" detail is in "E-Cat Proof Challenge: $1,000,000 is a "Clownerie"? (Updated)". Forbes. 14 February 2012. - "Update — Inventor Rejects Dick Smith Million Dollar Offer". Australian Skeptics. Retrieved 3 October 2012. - Brandon, John (2011-11-02). "Cold Fusion Experiment: Major Success or Complex Hoax?". Fox News. - Hambling, David (2011-10-29). "Success for Andrea Rossi's E-Cat cold fusion system, but mysteries remain". Wired. In other words, a group of unknown, unverifiable people carried out tests which cannot be checked. (...) as a demonstration it would have been more impressive for the reactor in its shipping container to be visibly disconnected while operating. - Zreick, Irene (2011-11-15). "Fusione fredda: a chi fa gola l'E-Cat?" (in Italian). Focus. Retrieved 18 November 2011. "Il cliente era rappresentato da Domenico Fioravanti, ingegnere, colonnello del Genio in pensione, che pare abbia scelto personalmente che cosa controllare, e come, durante il test. 
In conferenza Fioravanti affiancava Rossi, ma non c'è stato modo di strappare neppure un indizio sull'identità dell'azienda rappresentata." TRANSLATION: "The customer was represented by Domenico Fioravanti, engineer, retired colonel of the military engineering, who seemed to choose personally what to control, and how, during the test. In the course of the [press] conference Fioravanti was side by side with Rossi, but even a single hint concerning the identity of the represented company was impossible to get." - James Burgess (29 March 2012). "The Limitless Potential of the E-Cat: An Interview with Andrea Rossi". Retrieved 21 September 2012. - Clarke, Peter (2011-01-24). "Italian scientists claim cold fusion success". EE Times. - Jennifer Ouellette (2011). "Could starships use cold fusion propulsion?". Journal of Nuclear Physics, which is Andrea Rossi's own private journal. - Focardi, S; Rossi, A (2010-02-28). "A new energy source from nuclear fusion". Journal of Nuclear Physics (blog). Retrieved 18 November 2011. - Levi, G.; Foschi, E.; Hartman, T.; Höistad, B.; Pettersson, R.; Tegnér, L.; Essén, H. (2013). "Indication of anomalous heat energy production in a reactor device". arXiv: [physics.gen-ph]. - Mark Gibbs (20 May 2013). "Finally! Independent Testing Of Rossi's E-Cat Cold Fusion Device: Maybe The World Will Change After All". Forbes. - Lisa Zyga (23 May 2013). "Tests find Rossi's E-Cat has an energy density at least 10 times higher than any conventional energy source". PhysOrg. - Francie Diep (2013-05-21). "Cold Fusion Machine Gets Third-Party Verification, Inventor Says. The E-Cat strikes again". Popular Science. - Hambling, David. "Cold Fusion gets red hot and aims for EU". Wired UK. Retrieved 2 February 2014. - "Elforsk" (in Swedish). Retrieved 4 February 2014. - Ericsson, Göran (2013). "Comments on the report "Indications of anomalous heat energy production in a reactor device containing hydrogen loaded nickel powder"". arXiv: [physics.gen-ph]. 
- Dansie, Mark (2 July 2013). "Rossi, The Need For Third Party Validation". Revolution-Green.com. Retrieved 4 July 2013. - More On Rossi's E-Cat: Ericsson And Pomp Rebut "Independent" Test, 12 July 2013 - Siegel, Ethan (21 May 2013). "The E-Cat is back, and people are still falling for it!". ScienceBlogs. Retrieved 23 May 2013. - Levi, G.; Foschi, E.; Höistad, B.; Pettersson, R.; Tegnér, L.; Essén, H. (2014). "Observation of abundant heat production from a reactor device and of isotopic changes in the fuel" (PDF). Archived from the original (PDF) on 31 October 2014. Retrieved 11 February 2015. - Dorigo, Tommaso (11 Oct 2014). "Cold Fusion: A Better Study On The Infamous E-Cat". Science20. Retrieved 17 Feb 2015. - Siegel, Ethan (15 Oct 2014). "The E-cat, cold fusion or scientific fraud?". Medium. Retrieved 17 Feb 2015. - Ethan Siegel, 2011-12-05, The Physics of why the E-Cat's Cold Fusion Claims Collapse - Jennifer Ouellette, Could starships use cold fusion propulsion? // HowStuffWorks, () - "Cold Fusion: Is it Possible? Is it Real? – Starts With A Bang". Retrieved 22 September 2016. - Ekström, Peter (6 May 2011). "Kall Fusion på italienska (Cold fusion – Italian style)" (PDF) (in Swedish and English). Archived from the original (PDF) on 15 May 2011. - Aleklett, Kjell (11 April 2011). "Rossi energy catalyst – a big hoax or new physics?". Aleklett's Energy Mix (a WordPress blog). Retrieved on 10 July 2011. - James Randi (18 November 2011). The Randi Show – Cold Fusion and Carl Sagan. James Randi Educational Foundation. Retrieved 21 November 2011. Starting ~7:30 Randi says: "But I... 
I predict that, as I said just a moment ago there, that this man [Rossi] will probably go on the stock market and sell all kinds of shares and issue all kinds of wonderful reports left and right and, um, the reports will influence everybody—er, not everybody—but those who have money to waste and, uh, they will invest in it and then gradually it will become apparent to everybody: 'Gee, maybe it doesn't work'." - The Future of Energy: Part 1 Podcast approved Transcript. At 4 minutes and 34 seconds, Bushnell described several emerging energy technologies, but he identified LENR as "the most interesting and promising at this point". At 10 minutes and 35 seconds, Bushnell continued: "... in January of this year Rossi, backed by Focardi, who had been working on this for many years, and in fact doing some of the best work worldwide, came out and did a demonstration first in January, they re-did it in February, they re-did it in March, where for days they had one of these cells, a small cell, producing in the 10 to 15 kilowatts range, which is far more than enough heat to boil water for tea." - Kim, Yeong E. (2012), "Nuclear Reactions in Micro/Nano-Scale Metal Particles", Few-Body Systems, 52: 25–30, Bibcode:2012FBS...tmp...73K, doi:10.1007/s00601-012-0374-6 - pre-print paper "Generalized Theory of Bose-Einstein Condensation Nuclear Fusion for Hydrogen-Metal System" – Yeong E. Kim – 18 June 2011 Archived 17 February 2015 at the Wayback Machine. - Reger, Goode & Ball 2009, pp. 814–815 "After several years and multiple experiments by numerous investigators, most of the scientific community now considers the original claims unsupported by the evidence. [from image caption] Virtually every experiment that tried to replicate their claims failed. Electrochemical cold fusion is widely considered to be discredited." - Kim, Yeong E. 
(2009), "Theory of Bose–Einstein condensation mechanism for deuteron-induced nuclear reactions in micro/nano-scale metal grains and particles", Naturwissenschaften, 96 (7): 803–811, Bibcode:2009NW.....96..803K, doi:10.1007/s00114-009-0537-6, PMID 19440686 - International Preliminary Report on Patentability. World Intellectual Property Organization. Retrieved on 7 November 2011. - Alasdair Wilkins (26 January 2011), No, Italian Scientists Have Not Discovered Cold Fusion, Gizmodo. - Mannella, Lorenzo (14 October 2011). "Fusione fredda a Bologna. I dubbi continuano". Daily Wired (Italian edition). Retrieved on 10 November 2011. "il 6 aprile 2011 è stato rilasciato un brevetto in Italia a nome della Efa srl, la società di Maddalena Pascucci, moglie di Andrea Rossi. La dicitura recita " processo ed apparecchiatura per ottenere reazioni esotermiche, in particolare da nickel ed idrogeno"." TRANSLATION: On 6 April 2011 a patent was issued in Italy under the name of Efa srl, the company of Maddalena Pascucci, wife of Andrea Rossi. The heading is: "method and apparatus for carrying out nickel and hydrogen exothermal reactions". - The patent granted 6 April 2011, by the Ufficio Italiano Brevetti e Marchi Archived 16 July 2011 at the Wayback Machine.. Retrieved on 10 July 2011. - De Carolis, Roberta (2 April 2014). "Fusione fredda: all'E-cat negato anche il brevetto USA" (in Italian). NextMe. - United States Patent and Trademark Office (26 March 2014), Office communication concerning application 12/736,193 - Hoyle, Amanda (24 January 2014). "Confirmed: Raleigh's Cherokee buys into controversial nuclear tech device". Triangle Business Journal. Retrieved 15 April 2016. - Main, Douglas (25 January 2014). "Dubious Cold Fusion Machine Acquired By North Carolina Company". Popular Science. Retrieved 15 April 2016. - Dumaine, Brian (27 September 2015). "This investor is chasing a new kind of fusion". Fortune. Retrieved 15 April 2016. - Ohnesorge, Lauren K. (7 April 2016). 
"Scientist sues Raleigh cold fusion startup, Cherokee Investment Partners over $89M licensing fee". Triangle Business Journal. Retrieved 14 April 2016. - Ramesh, M (12 April 2016). "Cold fusion: This time for real?". The Hindu. Retrieved 14 April 2016. - "Case 1:16-cv-21199-CMA Document 1 Entered on FLSD Docket 04/05/2016". PACER. 5 April 2016. Retrieved 14 April 2016. - Hambling, David (20 April 2016). "In Cold Fusion 2.0, Who's Scamming Whom?". Popular Mechanics. Retrieved 20 April 2016. - Ohnesorge, Lauren. "Dispute between inventor and Raleigh investor over nuclear reaction device ends". Triangle Business Journal. American City Business Journals. Retrieved 4 August 2017. - "Case 1:16-cv-21199-CMA Document 333 Entered on FLSD Docket 07/06/2017". PACER. 6 July 2017. Retrieved 4 August 2017. - Josephson, Brian and Driscoll, Judith. "Andrea Rossi's 'E-cat' nuclear reactor: a video FAQ". University of Cambridge, 28 June 2011.
Photo- and Penning Ionization of Molecules in the Gas Phase and in the Liquid Phase

Electron spectroscopy has contributed a great deal to the understanding of the properties of matter, be it atoms, molecules or matter in the condensed phase. The common feature of these spectroscopies is that they analyse the kinetic energy of electrons that have experienced an interaction with the sample to be investigated. One way of performing the experiment is to start with a beam of electrons of known kinetic energy. If the loss of kinetic energy due to interaction with the sample is recorded, one speaks of electron energy loss spectroscopy (EELS). The energy loss spectrum is characteristic of the target atoms or molecules, but the detailed interpretation of such spectra is not always straightforward, since the theoretical description of the electron-molecule interaction is not simple. Still, many studies of this kind have been performed on gas phase molecules [1,2], and first attempts to use this experimental tool for the investigation of molecules in the liquid phase have been reported [3,4]. Another widely used technique, applicable for primary electron beams of several keV energy, is to observe electrons which originate from ions produced by the impact of the primary beam. If the ion is created by removal of a core electron, then the ion is in a highly excited state and will decay via an Auger process, emitting a second electron and leaving a doubly charged ion. The spectroscopy of these electrons (Auger electron spectroscopy, AES) has developed into a broad field in atomic and molecular physics [5] and has become a routine tool in the analysis of solid surfaces [6]. Liquid surfaces have been investigated by this technique as well [2], but its specific contribution to the understanding of liquid surfaces is yet to be understood. 
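A worked example of the energy bookkeeping behind these spectroscopies may help. In Penning ionization, He* + M → He + M⁺ + e⁻, the emitted electron carries roughly the metastable atom's excitation energy minus the target's ionization potential. The numbers below (metastable He 2³S and argon as the target) are standard textbook values, not taken from this chapter:

```python
# Penning ionization energy balance: E_kin(e-) ≈ E*(He) - IP(target).
# Standard textbook values, not figures from this chapter.
E_HE_2_3S = 19.82   # eV, excitation energy of metastable He(2^3S)
IP_AR = 15.76       # eV, first ionization potential of argon

e_kin = E_HE_2_3S - IP_AR
print(f"expected Penning electron kinetic energy for Ar: {e_kin:.2f} eV")
```

Measuring the electron's kinetic energy thus identifies which target state was ionized, which is the principle behind analysing these electron energy spectra.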
Keywords: Electron Spectroscopy, Electron Energy Loss Spectroscopy, Electron Energy Spectrum, Metastable Atom, Metastable Helium Atom
- 4. F. Eschen, M. Heyerhoff, H. Morgner and M. Wulf, unpublished results (1993); M. Heyerhoff, Diplom Thesis, University Bochum (1993).
- 5. W. Mehlhorn in Atomic Inner-Shell Physics, ed. B. Craseman, Plenum, New York (1985).
- 6. G. Ertl and J. Küppers, "Low Energy Electrons and Surface Chemistry", VCH Verlagsgesellschaft, Weinheim (1985).
- 7. K. Siegbahn, C. Nordling, G. Johansson, J. Hedman, P. F. Hedin, K. Hamrin, U. Gelius, T. Bergmark, L. O. Werme, R. Manne, Y. Baer, "ESCA Applied to Free Molecules", North Holland, Amsterdam (1969).
- 8. D. Wagner, W. M. Riggs, L. E. Davis, "Handbook of X-ray Photoelectron Spectroscopy", ed. G. E. Muilenberg, Perkin-Elmer Corp., Physical Electronics Division, Eden Prairie (1978).
- 10. D. W. Turner, A. D. Baker, C. Baker and C. R. Brundle, "Molecular Photoelectron Spectroscopy, A Handbook of 584 Å Spectra", Interscience, London - New York (1970).
- 11. K. Kimura, S. Katsumata, Y. Achiba, T. Yamaski and S. Iwata, "Handbook of HeI Photoelectron Spectra of Fundamental Organic Molecules", Japan Scientific Societies Press, Tokyo (1981).
- 17. V. Cermák, Retarding-potential measurements of the kinetic energy of electrons released in Penning ionization, J. Phys. 44: 3781–6 (1966).
- 23. H. Morgner and H. Seiberle, Transition state spectroscopy with electrons as studied by 3D-trajectory calculations of the reaction He+ + Br2 → He + Br- + Br, submitted to Can. J. Physics (Polanyi Special Issue), to appear in 1994.
- 27. A. W. Hertzner, M. Schoen and H. Morgner, The influence of long range electrostatic forces on static properties of a quasi-Stockmayer fluid, Mol. Phys. 73: 1011–29 (1991).
- 28. A. W. Hertzner and H. Morgner, 1991, unpublished results.
- 32. H. Morgner, The investigation of liquid surfaces by electron spectroscopy, 5th Int. Conf. Electr. Spectr., Kiev (1993); J. Electr. Spectr. Rel. Phen., submitted (1993).
<urn:uuid:014588aa-7d97-4284-9d41-f686234f9821>
2.828125
1,057
Academic Writing
Science & Tech.
58.557914
95,561,315
SOUTH AUSTRALIAN SUN-MOTHS
Synemon colona (Nullarbor Sun-moth)*
Typical male 26mm; dark form male 26mm (left); pale form male 26mm (right)
Typical female 26mm; dark form female 26mm (left); worn pale form female 26mm (right)
Another very small sun-moth species (20-30mm), only recently discovered. It is similar to S. theresa but is presently only known to fly in autumn on the native Austrostipa eremophila-A. scabra grassland plains of the east Nullarbor Plains. It is also closely related to S. selene. Its morphology is variable, and it occurs in both dark and pale forms. The sun-moth is active during the heat of the day. The males set up mating territories (leks) in the habitat area early in the day, usually on open ground or along narrow tracks, where they either remain settled, unless disturbed, or until an unmated female or another male enters the lek. Some males will periodically fly over the habitat area looking for unmated females. Newly emerged females are either quickly found by cruising males, or if unfound will fly into the lek area to find a male. Coupling occurs quickly, and the pair will remain stationary on the ground or on a clump of host-grass unless interrupted by another male, which will attempt to dislodge the original male. They remain coupled for a couple of hours. Unmated females (and possibly the settled males) send out pheromone signals. The female pheromones linger until coupling occurs and can be picked up by flying males from 5m or more, which will make a right-angle turn in flight when they smell the pheromones. The males continue to fly over the habitat for the rest of the day, looking for females, mated or unmated. Females are then usually active, laying eggs on the host-grasses. The primary hostplants are Austrostipa eremophila (Desert Spear-grass) and A. scabra (Rough Spear-grass) (Poaceae). This sun-moth does not have a proboscis.
The eggs are laid on the stems of the hostplant near its base, which the female usually accesses by landing on the ground near the plant and walking up to the base, then flitting onto the grass a few centimetres off the ground to test it and find a place to lay. The eggs are of the typical elongate, spindle-shaped ellipsoidal form, pale yellow to yellowish white when newly laid, tending to fade with age. They are about 1.6x0.8mm, with four (or sometimes five) prominent equi-spaced longitudinal ridges converging at each end of the egg. Eggs take 40 days to hatch. Larvae likely live within the root zone, similar to the grass-feeding larvae of other Synemon species. First instar larvae are sub-translucent pale yellow and of typical Synemon shape, with a brownish head and anterior prothoracic dorsal plate. Mature larvae and pupae are undescribed. It is considered local or rare. Its habitat is widespread, but its distribution has not been fully studied. As it is located on Aboriginal land, it should be secure.
Coupled adults, female on left
Habitat for Synemon colona, comprising the low grasses Rytidosperma and Austrostipa (Poaceae) and Lomandra effusa (Asparagaceae) (large tussocks in foreground). The track is used as a lekking area (meeting place) where newly emerged females come to meet the males.
Pale yellow egg of Synemon colona having 4 or 5 longitudinal ridges (left); newly eclosed larva from egg (right).
Photography by R. Grund
Author: R. GRUND, © copyright 30 April 2011, all rights reserved. Last update 3 December 2011.
<urn:uuid:03471575-8bbe-44fb-89a3-c5378bf8ccef>
2.96875
864
Knowledge Article
Science & Tech.
50.251994
95,561,328
Drought Outlook for the United States
In the June 2018 Climate Prediction Center update, it was determined that ENSO-neutral is favored through Northern Hemisphere summer 2018, with the chance for El Niño increasing to 50% during fall and ~65% during winter 2018-19. These forecasts are supported by the ongoing build-up of heat within the tropical Pacific Ocean. Read more on the Climate Prediction Center website. In the National Oceanic and Atmospheric Administration (NOAA) U.S. Seasonal Drought Outlook update covering June through September, existing drought conditions are expected to persist in the far western portion of the country as well as in Texas, Oklahoma and some other isolated areas throughout the northern states. Additionally, drought conditions are expected to develop in both Texas and some of the northeastern states. It is projected that the rest of the area previously under drought will either improve or see drought conditions alleviated entirely. View the latest NOAA U.S. Seasonal Drought Outlook map here. The links below will direct you to information on La Niña, ENSO, the NOAA seasonal drought outlook, and other forecasts and models.
Additional Information and Data
<urn:uuid:4e92fd9f-4d2d-4da7-b189-97f2e52a1e56>
2.703125
240
Knowledge Article
Science & Tech.
41.285627
95,561,346
"Some of the time periods in the past are analogies for what is happening today from global warming," says Jocelyn Sessa, doctoral candidate in geosciences, Penn State. "Understanding what happened with diversity in the past can help us provide some prediction on how modern organisms will fare. If we know where we have been, we know something about where it will go." Using contemporary statistical methods and the Paleobiology Database, the researchers report, in today's (July 4) issue of Science, a new diversity curve that shows that most of the early spread of invertebrates took place well before the Late Cretaceous, and that the net increase through the period since is proportionately small relative to the 65 million years that have elapsed. One key to the new curve is the Paleobiology Database (http://paleodb.org), housed at the National Center for Ecological Analysis and Synthesis, University of California, Santa Barbara. Previous research was based on databases of marine invertebrate fossils that recorded only the first and last occurrences of each organism, with no information in between. "Over 30 years ago, researchers looked at the curve they had and considered that perhaps diversity did not increase at all," says Mark E. Patzkowsky, associate professor of geosciences. "What researchers saw was the diversity curve leveled off for quite some time and then took off exponentially. However, diversity results are strongly controlled by sampling techniques." The new database allows researchers to standardize sample size because it includes multiple occurrences of each fossil. Researchers can randomly choose equal samples from equal time spans to create their diversity curve. This new curve uses 11-million-year segments, but the researchers hope to reduce the time intervals to 5 million years to match the interval of the previous curve, known as the Sepkoski curve.
The data for this study contains 284,816 fossil occurrences of 18,702 genera, representing about 3.4 million specimens from 5,384 literature sources. The old curve, developed by J. John Sepkoski Jr., used a database that contained only about 60,000 occurrences. The researchers also looked at evenness in diversity. If there are 100 specimens divided into 10 time intervals, they could be divided with 10 individual specimens in each interval, or 91 specimens could fall in one interval with one each in the remainder. The more even the distribution, the higher the evenness. "Evenness says something about resource distribution," says Patzkowsky. "Much of invertebrate diversity has been attributed to diversity increase in the tropics, but the curve is not driven by that totally. It seems that 450 million years ago was not so different from today because it also contained more diversity in the tropics." The major points of the Sepkoski curve are still seen in the new curve. Some features, such as the decrease in diversity due to the Cretaceous-Tertiary (KT) extinction 65 million years ago, are not visible because of the scale of the intervals used. The extinction and recovery in the KT took less than 11 million years and so do not show up. Some things not seen on the Sepkoski curve include a peak in the Permian. Also unexpected is that the diversity in the Jurassic (206 to 144 million years ago) is lower than diversity in the Triassic (248 to 206 million years ago), indicating a dip and rise in the diversity curve. The curve then rises in the Cretaceous and remains more or less flat after that. The previously posited exponential increase in diversity is not there. "Comparing diversity through time is about how our world works, about the origin of species and how diversity changes with temperature," says Sessa. "If we think that the net increase over time will not get much greater, things are very different from if the diversity increases exponentially."
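The two ideas in the passage above, drawing equal-size random subsamples from each time interval and measuring how evenly specimens are spread across genera, can be sketched in a few lines of Python. This is only an illustrative sketch: the function names and the toy interval data are ours, not drawn from the Paleobiology Database or the paper's actual method.

```python
import random
from collections import Counter
from math import log

def subsampled_richness(occurrences, quota, trials=200, seed=0):
    """Estimate genus richness from fixed-size random subsamples.

    `occurrences` lists one genus name per fossil occurrence in a time
    interval; drawing the same `quota` from every interval standardizes
    sampling intensity across the diversity curve.
    """
    rng = random.Random(seed)
    if len(occurrences) < quota:
        return None  # interval too poorly sampled to standardize
    total = 0
    for _ in range(trials):
        sample = rng.sample(occurrences, quota)  # without replacement
        total += len(set(sample))                # genera seen in sample
    return total / trials

def pielou_evenness(occurrences):
    """Shannon-based evenness: 1.0 when all genera are equally abundant."""
    counts = Counter(occurrences).values()
    n = sum(counts)
    h = -sum((c / n) * log(c / n) for c in counts)  # Shannon index
    s = len(counts)
    return h / log(s) if s > 1 else 0.0

# Toy intervals matching the 100-specimen example in the text:
even = ["g%d" % i for i in range(10)] * 10               # 10 genera x 10 each
uneven = ["g0"] * 91 + ["g%d" % i for i in range(1, 10)] # 91 + nine singletons
print(round(pielou_evenness(even), 3))                   # 1.0
print(pielou_evenness(uneven) < pielou_evenness(even))   # True
```

The perfectly even split scores 1.0, while the 91-plus-singletons split scores much lower, which is the sense in which "the more even the distribution, the higher the evenness."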
Andrea Elyse Messer | EurekAlert!
<urn:uuid:1a3780d8-7b6a-4839-9c4d-762be8e17674>
3.734375
1,439
Content Listing
Science & Tech.
40.198045
95,561,348
Prototype and Actor Languages
In the last chapter, I introduced some of the fundamental concepts of class-based programming languages. The task of this chapter is to introduce two competing paradigms that are closely related to each other: prototype and Actor languages. Historically, prototype languages developed out of the concepts of Actor languages, but they will be considered in the reverse order because prototype-based languages are now an area of active research while Actors tend to be somewhat (unfairly, in our view) neglected.
Keywords: Message Passing, Actor Language, Shared Structure, Message Queue, Parent Object
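As a rough illustration of the prototype idea this chapter goes on to develop, in which there are no classes, only objects that hold slots locally and delegate anything missing to a parent object, here is a minimal sketch in Python. The class and method names are ours, not taken from any particular prototype language:

```python
class ProtoObject:
    """Minimal prototype-style object: slots are stored locally and
    lookups that miss are delegated up the parent chain."""

    def __init__(self, parent=None, **slots):
        self.parent = parent
        self.slots = dict(slots)

    def lookup(self, name):
        if name in self.slots:
            return self.slots[name]
        if self.parent is not None:
            return self.parent.lookup(name)  # delegation to parent object
        raise AttributeError(name)

    def clone(self, **overrides):
        """Create a new object that shares structure with its prototype."""
        return ProtoObject(parent=self, **overrides)

point = ProtoObject(x=0, y=0)  # a prototype, not a class
p2 = point.clone(x=3)          # overrides x locally, delegates y
print(p2.lookup("x"))          # 3  (found in the local slots)
print(p2.lookup("y"))          # 0  (found via the parent chain)
```

Cloning plus delegation gives the "shared structure" the keywords mention: `p2` stores only what differs from its prototype.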
<urn:uuid:e37b295a-a31b-42c9-afc5-0cda25ee7767>
2.875
128
Truncated
Software Dev.
15.456752
95,561,355
Longstanding Problem Put to Rest
News | Jun 11, 2015
Comparing the genomes of different species — or different members of the same species — is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease. The basic algorithm for determining how much two sequences of symbols have in common — the “edit distance” between them — is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success. At the ACM Symposium on Theory of Computing (STOC), MIT researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes — or texts, or speech samples, or anything else that can be represented as a string of symbols — can’t be solved more efficiently. In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better. “This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that — without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.” Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems.
“There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says. Edit distance is the minimum number of edits — deletions, insertions, and substitutions — required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row. Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time. That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years. Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it.
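The grid-filling procedure described above is short to write down. A minimal Python sketch of the Wagner-Fischer algorithm:

```python
def edit_distance(a, b):
    """Wagner-Fischer: cell d[i][j] holds the edit distance between
    the prefixes a[:i] and b[:j]; the answer ends up in d[m][n]."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # turn a[:i] into "" by deletions
    for j in range(n + 1):
        d[0][j] = j                      # turn "" into b[:j] by insertions
    for i in range(1, m + 1):            # flood the grid row by row
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[m][n]

print(edit_distance("ACGT", "ACCT"))      # 1 (one substitution)
print(edit_distance("kitten", "sitting")) # 3
```

The nested loops touch every one of the (m+1)(n+1) cells exactly once, which is the quadratic running time the article refers to.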
In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists. The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints? In Indyk and Bačkurs’ proof, they propose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob, and Cindy go into one, but Walt, Yvonne, and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: It’s a constraint that doesn’t have to be met. At this point, the problem of reconciling the solutions for the two subgroups — factoring in constraints like Alice’s restraining order — becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time. 
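The variable-splitting idea can be caricatured in code. The sketch below is only the naive version of the split and still takes exponential time overall; the point of Indyk and Bačkurs' construction, which this does not reproduce, is that the reconciliation step behaves like an edit-distance computation, so a subquadratic edit-distance algorithm would make the whole search subexponential. All function names and the toy party constraints are illustrative:

```python
from itertools import product

def split_sat(variables, clauses):
    """Search for a satisfying assignment by splitting the variables
    into two halves, enumerating each half's assignments, and then
    reconciling every left/right pair against the constraints."""
    half = len(variables) // 2
    left, right = variables[:half], variables[half:]

    def assignments(names):
        # All 2^len(names) True/False assignments for one half.
        for bits in product([False, True], repeat=len(names)):
            yield dict(zip(names, bits))

    for la in assignments(left):
        for ra in assignments(right):       # reconciliation step
            full = {**la, **ra}
            if all(clause(full) for clause in clauses):
                return full
    return None

# Toy party constraints from the text: Alice and Bob can't both come;
# Ellen brings exactly one of Fred or George.
clauses = [
    lambda a: not (a["alice"] and a["bob"]),
    lambda a: a["fred"] != a["george"],
]
sol = split_sat(["alice", "bob", "fred", "george"], clauses)
print(sol is not None)  # True: a valid invitation list exists
```

Constraints that mention only one half (like the Alice/Bob clause) could be checked before reconciling; cross-half constraints (Alice's restraining order on Zack) are what the reconciliation step must handle.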
“This is really nice work,” says Barna Saha, an assistant professor of computer science at the University of Massachusetts at Amherst. “There are lots of people who have been working on this problem, because it has a big practical impact. But they won’t keep trying to develop a subquadratic algorithm, because that seems very unlikely to happen, given the result of this paper.” As for the conjecture that the MIT researchers’ proof depends on — that NP-complete problems can’t be solved in subexponential time — “It’s a very widely believed conjecture,” Saha says. “And there are many other results in this low-polynomial-time complexity domain that rely on this conjecture.”
<urn:uuid:b4835ded-b654-4082-be92-06ed71a3333e>
3.09375
1,591
News Article
Science & Tech.
38.766967
95,561,360
In December, astronomers checked every inch of an interstellar object that had entered our solar system for evidence of artificial technology. They spent weeks poring over the data, looking for radio signals that would suggest the object, known as ‘Oumuamua, may be something other than just a strange space rock. They didn’t find anything. The Breakthrough Listen Initiative, a $100 million effort in the search for intelligent extraterrestrial life, did not detect radio emissions from the object, according to a new paper published on Tuesday on arXiv, the online preprint repository. ‘Oumuamua caught everyone by surprise in October as the first known interstellar object to be spotted in our solar system. “From the start, we knew it would be a long shot, like any other SETI experiment,” said Emilio Enriquez, an astronomy Ph.D. student on the Breakthrough Listen team and the lead author of the paper. The decision to check ‘Oumuamua for artificial technology came from Yuri Milner, the Russian billionaire and tech investor who established and is funding Breakthrough Listen. Astronomers released their first results from the observations in mid-December, based on an analysis of just one chunk of the radio data. “Indeed, nothing has popped up, but we’re busy churning through the data we’ve collected so far,” Andrew Siemion, the director of the Berkeley SETI Research Center who leads its Breakthrough Listen Initiative, said at the time. The latest paper includes the analysis of the full dataset. Still nothing. The new paper also reports the team found no evidence of water on ‘Oumuamua, like other groups of astronomers studying the object. (Some suspect ‘Oumuamua does harbor water in the form of ice hidden deep under the crust.) Breakthrough Listen’s data comes from observations by the Green Bank Telescope, a steerable radio telescope in West Virginia.
The telescope was prepared to detect a signal, if it existed, similar to the radio waves coming from a cellphone. For eight hours in December, the GBT observed ‘Oumuamua across four bands of radio waves. Breakthrough Listen usually observes targets in one radio band, Enriquez said, but for ‘Oumuamua, they widened the search parameters to include as many frequencies as existing technology allowed. “As in any other SETI experiment, we have no prior knowledge of which frequency any civilization might be sending any kind of signal,” he noted. “So the idea is because we don’t know, basically we need to search all of the available frequencies.” In addition, GBT rotated between four receivers every 30 minutes so that each one had a shot at observing ‘Oumuamua as the object completed one full spin, which takes about seven hours and 20 minutes. This swapping allowed astronomers to study every part of the object. “Imagine there’s only one single antenna pointing in one single direction and you have these rotations,” Enriquez said. “It’s like a lighthouse. You need to wait until the lighthouse hits your direction.” Enriquez said Breakthrough Listen currently has no plans for follow-up observations, but the team will continue to examine the results of the observations. Now that they’ve looked for continuous radio signals in the data, they’ll look next for pulsating signals. When I asked whether they’re ready to call it—move along, nothing SETI to see here—Enriquez said such certainty isn’t quite possible, thanks to the nature of SETI experiments in general. Astronomers are limited not by the extent of their search, but by the capacity of current technology. The GBT and other radio telescopes are not able to study ‘Oumuamua in every frequency. “We were not able to observe at other frequencies, so we don’t know if, for instance, that it might have been a signal that is lower than the frequency that we observed or higher than the frequency that we observed,” Enriquez said. 
“And unfortunately, that’s kind of the end of the experiment.” Rumblings about ‘Oumuamua being a possible target for SETI observations started soon after its discovery in October, which was the first time humanity had spotted an object of its kind in our solar system. The more scientists learned about ‘Oumuamua, the weirder it seemed. The object appeared to be an asteroid and not a comet—the kind of object scientists had predicted would be most likely to become ejected from its own solar system, travel through interstellar space, and get deposited into ours. The shape of ‘Oumuamua—extremely elongated, like a cigar—was unlike anything they had seen, and would be difficult to create through natural, known processes of the universe. In early December, Avi Loeb, the chair of Harvard’s astronomy department and an adviser to the Breakthrough Listen Initiative, took those rumblings to Milner. Loeb suggested ‘Oumuamua could be an artificial probe dispatched by an alien civilization into the cosmos. Milner was intrigued, and within hours the Breakthrough Listen team was preparing a plan for observations using the GBT. At the time of Milner’s decision, the object was about two astronomical units (AU) from Earth, or about twice the distance between the Earth and the sun. At the time of this writing, the asteroid was about 2.23 AU from Earth, according to Karen Meech, an astronomer at the University of Hawaii Institute for Astronomy whose team discovered ‘Oumuamua. As ‘Oumuamua speeds away from us, observations by even the most powerful telescopes are becoming more difficult. But the study of interstellar objects is just beginning. Scientists predict that as more telescopes like the one that detected ‘Oumuamua come online and search the skies, they will find more interstellar objects floating in the solar system—and more targets for the search for extraterrestrial life. Thanks to the data they got from ‘Oumuamua, they’ll be more prepared for the next one.
“If you look more, everywhere, I think chances are that eventually you will find something,” Milner told me last month. We want to hear what you think. Submit a letter to the editor or write to email@example.com.
<urn:uuid:14ee412a-c574-42e8-b0ee-d60ed028c477>
3.265625
1,394
News Article
Science & Tech.
40.522392
95,561,361
The Solar Magnetic Field
by Sami Solanki, Bernd Inhester, Manfred Schussler
Publisher: arXiv 2010
Number of pages: 83
The magnetic field of the Sun is the underlying cause of the many diverse phenomena combined under the heading of solar activity. Here we describe the magnetic field as it threads its way from the bottom of the convection zone, where it is built up by the solar dynamo, to the solar surface, where it manifests itself in the form of sunspots and faculae, and beyond into the outer solar atmosphere and, finally, into the heliosphere.
Home page url
Download or read it online for free here:
by Alexander G. Kosovichev - arXiv
Helioseismology studies the structure and dynamics of the Sun's interior by observing oscillations on the surface. The basic principles, recent advances and perspectives of global and local helioseismology are reviewed in this text.
by Jay M. Pasachoff - Alpha
Everything revolves around it...and now you can learn all about the origin and history of the sun, with information on the sun's physical properties and how solar flares, sunspots, and winds on its surface affect Earth's atmosphere and environment.
by V. Antonelli, L. Miramonti, C. Pena-Garay, A. Serenelli - arXiv
After reviewing the results of the last two decades, which were decisive in solving the long-standing solar neutrino puzzle, we focus on the more recent results in this field and on the experiments presently running or planned for the near future.
by Reijo Rasinkangas - University of Oulu
This textbook studies the solar wind and other plasma environments. It is aimed at students of space physics. The text covers plasma physics, heliosphere, solar wind, magnetosphere, ionosphere, auroras, cosmic rays, space weather, etc.
<urn:uuid:2f617706-0f09-4a06-bba3-addeb58b03c1>
2.765625
408
Content Listing
Science & Tech.
40.154298
95,561,363
In his famous book entitled ‘The Physics of Blown Sand and Desert Dunes’, Bagnold (1941) focused on the studies of sand movement. At that time, saltation was the core subject of wind-erosion research. While the movement of sand remains a subject of considerable interest today, the emphasis of wind-erosion research has shifted from sand to dust. This shift is clearly seen in the recent trend of publications. In 2006, about 50% of wind-erosion-related research papers were on dust, in contrast to about 5% in the 1940s (J. Stout, personal communication, 2006). This shift is accompanied by a remarkable expansion of the wind-erosion research territory, as reflected in the following aspects.
Keywords: Dust Storm, Dust Emission, Roughness Element, Dust Devil, Threshold Friction Velocity
<urn:uuid:d824d006-ebd9-4aa4-8ba9-09c2e143609f>
3
186
Truncated
Science & Tech.
52.156046
95,561,374
The various shapes contrast with liquid drops, which can splash, spread or bounce upon hitting a surface. Successive drops freeze rapidly upon impact due to the drainage of a small fraction of liquid, literally stacking on top of each other into surprisingly slender structures known as granular towers.
Dripping a mixture of sand and water onto an absorbent surface can lead to striking structures of a wide variety of forms. Credit: Image courtesy of Julien Chopin and Arshad Kudrolli
In addition, twisted pagoda dome-like structures result upon increasing the flow rate of the damp granular mixture. Experiments show that the towers are held together by capillary and friction forces, and the shape of the towers depends on a subtle balance between dripping frequency, density of grains, and impact speed. Besides applications in surface patterning, this tower-building technique may be a new and easy way to probe the flow properties of dense granular suspensions by observing the shapes of the towers they produce.
Peering Out from Under an Invisibility Cloak
Most invisibility cloak designs have one serious drawback - they make it impossible for anyone hiding under the cloak to see what's going on in the outside world. Researchers have now come up with an approach that, in theory, should allow us to make cloaks that allow you to peek out while remaining entirely hidden. In effect, they propose making a tiny tear in the cloak, and then stitching the hole with two types of materials chosen to effectively cancel each other out when seen from the outside, while still allowing light to enter. Although the cloak design currently exists only on paper, it theoretically ensures that aspiring Harry Potters remain entirely undetectable while keeping an eye on the Voldemorts and Snapes all around them.
James Riordon | EurekAlert!
18.07.2018 | Forschungsverbund Berlin

Subaru Telescope helps pinpoint origin of ultra-high energy neutrino
16.07.2018 | National Institutes of Natural Sciences

For the first time ever, scientists have determined the cosmic origin of the highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...

For the first time, a team of researchers has discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bathtub: when the water is drained, a circular vortex is formed. Typically, such whirls are rather stable. Similar...

Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...

Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems.
Researchers from the University of Würzburg present two new approaches to coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy...

Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
mount - mount filesystem

int mount(const char *source, const char *target, const char *filesystemtype, unsigned long mountflags, const void *data);

mount() attaches the filesystem specified by source (often a pathname referring to a device, but it can also be the pathname of a directory or file, or a dummy string) to the location (a directory or file) specified by the pathname in target. Appropriate privilege (Linux: the CAP_SYS_ADMIN capability) is required to mount filesystems.

Values for the filesystemtype argument supported by the kernel are listed in /proc/filesystems (e.g., "btrfs", "ext4", "jfs", "xfs", "vfat", "fuse", "tmpfs", "cgroup", "proc", "mqueue", "nfs", "cifs", "iso9660"). Further types may become available when the appropriate modules are loaded.

The data argument is interpreted by the different filesystems. Typically it is a string of comma-separated options understood by this filesystem. See mount(8) for details of the options available for each filesystem type.

A call to mount() performs one of a number of general types of operation, depending on the bits specified in mountflags. The choice of which operation to perform is determined by testing the bits set in mountflags, with the tests being conducted in the order listed here:

- Remount an existing mount: mountflags includes MS_REMOUNT.
- Create a bind mount: mountflags includes MS_BIND.
- Change the propagation type of an existing mount: mountflags includes one of MS_SHARED, MS_PRIVATE, MS_SLAVE, or MS_UNBINDABLE.
- Move an existing mount to a new location: mountflags includes MS_MOVE.
- Create a new mount: mountflags includes none of the above flags.

Each of these operations is detailed later in this page. Further flags may be specified in mountflags to modify the behavior of mount(). The list below describes the additional flags that can be specified in mountflags. Note that some operation types ignore some or all of these flags, as described later in this page.

- MS_DIRSYNC (since Linux 2.5.19) - Make directory changes on this filesystem synchronous.
(This property can be obtained for individual directories or subtrees using chattr(1).)
- MS_LAZYTIME (since Linux 4.0) - Reduce on-disk updates of inode timestamps (atime, mtime, ctime) by maintaining these changes only in memory. The on-disk timestamps are updated only when:
  - the inode needs to be updated for some change unrelated to file timestamps;
  - the application employs fsync(2), syncfs(2), or sync(2);
  - an undeleted inode is evicted from memory; or
  - more than 24 hours have passed since the inode was written to disk.
 This mount option significantly reduces writes needed to update the inode's timestamps, especially mtime and atime. However, in the event of a system crash, the atime and mtime fields on disk might be out of date by up to 24 hours. Examples of workloads where this option could be of significant benefit include frequent random writes to preallocated files, as well as cases where the MS_STRICTATIME mount option is also enabled. (The advantage of combining MS_STRICTATIME and MS_LAZYTIME is that stat(2) will return the correctly updated atime, but the atime updates will be flushed to disk only in the cases listed above.)
- MS_MANDLOCK - Permit mandatory locking on files in this filesystem. (Mandatory locking must still be enabled on a per-file basis, as described in fcntl(2).) Since Linux 4.5, this mount option requires the CAP_SYS_ADMIN capability.
- MS_NOATIME - Do not update access times for (all types of) files on this filesystem.
- MS_NODEV - Do not allow access to devices (special files) on this filesystem.
- MS_NODIRATIME - Do not update access times for directories on this filesystem. This flag provides a subset of the functionality provided by MS_NOATIME; that is, MS_NOATIME implies MS_NODIRATIME.
- MS_NOEXEC - Do not allow programs to be executed from this filesystem.
- MS_NOSUID - Do not honor set-user-ID and set-group-ID bits or file capabilities when executing programs from this filesystem.
- MS_RDONLY - Mount filesystem read-only.
- MS_REC (since Linux 2.4.11) - Used in conjunction with MS_BIND to create a recursive bind mount, and in conjunction with the propagation type flags to recursively change the propagation type of all of the mounts in a subtree. See below for further details.
- MS_RELATIME (since Linux 2.6.20) - When a file on this filesystem is accessed, update the file's last access time (atime) only if the current value of atime is less than or equal to the file's last modification time (mtime) or last status change time (ctime). This option is useful for programs, such as mutt(1), that need to know when a file has been read since it was last modified. Since Linux 2.6.30, the kernel defaults to the behavior provided by this flag (unless MS_NOATIME was specified), and the MS_STRICTATIME flag is required to obtain traditional semantics. In addition, since Linux 2.6.30, the file's last access time is always updated if it is more than 1 day old.
- MS_SILENT (since Linux 2.6.17) - Suppress the display of certain (printk()) warning messages in the kernel log. This flag supersedes the misnamed and obsolete MS_VERBOSE flag (available since Linux 2.4.12), which has the same meaning.
- MS_STRICTATIME (since Linux 2.6.30) - Always update the last access time (atime) when files on this filesystem are accessed. (This was the default behavior before Linux 2.6.30.) Specifying this flag overrides the effect of setting the MS_NOATIME and MS_RELATIME flags.
- MS_SYNCHRONOUS - Make writes on this filesystem synchronous (as though the O_SYNC flag to open(2) was specified for all file opens to this filesystem).

From Linux 2.4 onward, the MS_NODEV, MS_NOEXEC, and MS_NOSUID flags are settable on a per-mount-point basis. From kernel 2.6.16 onward, MS_NOATIME and MS_NODIRATIME are also settable on a per-mount-point basis. The MS_RELATIME flag is also settable on a per-mount-point basis. Since Linux 2.6.16, MS_RDONLY can be set or cleared on a per-mount-point basis as well as on the underlying filesystem. The mounted filesystem will be writable only if neither the filesystem nor the mountpoint are flagged as read-only.

An existing mount may be remounted by specifying MS_REMOUNT in mountflags.
This allows you to change the mountflags and data of an existing mount without having to unmount and remount the filesystem. target should be the same value specified in the initial mount() call; the source and filesystemtype arguments are ignored. The mountflags and data arguments should match the values used in the original mount() call, except for those parameters that are being deliberately changed. Another exception is that MS_BIND has a different meaning for remount, and it should be included only if explicitly desired.

The following mountflags can be changed: MS_LAZYTIME, MS_MANDLOCK, MS_NOATIME, MS_NODEV, MS_NODIRATIME, MS_NOEXEC, MS_NOSUID, MS_RELATIME, MS_RDONLY, and MS_SYNCHRONOUS. Attempts to change the setting of the MS_DIRSYNC flag during a remount are silently ignored. Since Linux 3.17, if none of MS_NOATIME, MS_NODIRATIME, MS_RELATIME, or MS_STRICTATIME is specified in mountflags, then the remount operation preserves the existing values of these flags (rather than defaulting to MS_RELATIME).

Since Linux 2.6.26, the MS_REMOUNT flag can be used with MS_BIND to modify only the per-mount-point flags. This is particularly useful for setting or clearing the "read-only" flag on a mount point without changing the underlying filesystem. Specifying mountflags as MS_REMOUNT | MS_BIND | MS_RDONLY will make access through this mountpoint read-only, without affecting other mount points.

If mountflags contains MS_BIND (available since Linux 2.4), then perform a bind mount. A bind mount makes a file or a directory subtree visible at another point within the single directory hierarchy. Bind mounts may cross filesystem boundaries and span chroot(2) jails. The filesystemtype and data arguments are ignored. The remaining bits in the mountflags argument are also ignored, with the exception of MS_REC. (The bind mount has the same mount options as the underlying mount point.) However, see the discussion of remounting above, for a method of making an existing bind mount read-only.

By default, when a directory is bind mounted, only that directory is mounted; if there are any submounts under the directory tree, they are not bind mounted.
If the MS_REC flag is also specified, then a recursive bind mount operation is performed: all submounts under the source subtree (other than unbindable mounts) are also bind mounted at the corresponding location in the target subtree.

If mountflags includes one of MS_SHARED, MS_PRIVATE, MS_SLAVE, or MS_UNBINDABLE (all available since Linux 2.6.15), then the propagation type of an existing mount is changed. If more than one of these flags is specified, an error results. The only other flags that can be used when changing the propagation type are MS_REC and MS_SILENT. The source, filesystemtype, and data arguments are ignored. The meanings of the propagation type flags are as follows:

- MS_SHARED - Make this mount point shared. Mount and unmount events immediately under this mount point will propagate to the other mount points that are members of this mount's peer group. Propagation here means that the same mount or unmount will automatically occur under all of the other mount points in the peer group. Conversely, mount and unmount events that take place under peer mount points will propagate to this mount point.
- MS_PRIVATE - Make this mount point private. Mount and unmount events do not propagate into or out of this mount point.
- MS_SLAVE - If this is a shared mount point that is a member of a peer group that contains other members, convert it to a slave mount. If this is a shared mount point that is a member of a peer group that contains no other members, convert it to a private mount. Otherwise, the propagation type of the mount point is left unchanged. When a mount point is a slave, mount and unmount events propagate into this mount point from the (master) shared peer group of which it was formerly a member. Mount and unmount events under this mount point do not propagate to any peer. A mount point can be the slave of another peer group while at the same time sharing mount and unmount events with a peer group of which it is a member.
- MS_UNBINDABLE - Make this mount unbindable. This is like a private mount, and in addition this mount can't be bind mounted.
When a recursive bind mount (mount() with the MS_BIND and MS_REC flags) is performed on a directory subtree, any unbindable mounts within the subtree are automatically pruned (i.e., not replicated) when replicating that subtree to produce the target subtree.

By default, changing the propagation type affects only the target mount point. If the MS_REC flag is also specified in mountflags, then the propagation type of all mount points under target is also changed. For further details regarding mount propagation types (including the default propagation type assigned to new mounts), see mount_namespaces(7).

If mountflags contains the flag MS_MOVE (available since Linux 2.4.18), then move a subtree: source specifies an existing mount point and target specifies the new location to which that mount point is to be relocated. The move is atomic: at no point is the subtree unmounted. The remaining bits in the mountflags argument are ignored, as are the filesystemtype and data arguments.

If none of MS_REMOUNT, MS_BIND, MS_MOVE, MS_SHARED, MS_PRIVATE, MS_SLAVE, or MS_UNBINDABLE is specified in mountflags, then mount() performs its default action: creating a new mount point. source specifies the source for the new mount point, and target specifies the directory at which to create the mount point. The filesystemtype and data arguments are employed, and further bits may be specified in mountflags to modify the behavior of the call.

On success, zero is returned. On error, -1 is returned, and errno is set appropriately.

The error values given below result from filesystem type independent errors. Each filesystem type may have its own special errors and its own special behavior. See the Linux kernel source code for details.

- EACCES - A component of a path was not searchable. (See also path_resolution(7).)
- EACCES - Mounting a read-only filesystem was attempted without giving the MS_RDONLY flag.
- EACCES - The block device source is located on a filesystem mounted with the MS_NODEV option.
- EBUSY - An attempt was made to stack a new mount directly on top of an existing mount point that was created in this mount namespace with the same source and target.
- EBUSY - source cannot be remounted read-only, because it still holds files open for writing.
- EFAULT - One of the pointer arguments points outside the user address space.
- EINVAL - source had an invalid superblock.
- EINVAL - A remount operation (MS_REMOUNT) was attempted, but source was not already mounted on target.
- EINVAL - A move operation (MS_MOVE) was attempted, but source was not a mount point, or was '/'.
- EINVAL - mountflags includes more than one of MS_SHARED, MS_PRIVATE, MS_SLAVE, or MS_UNBINDABLE.
- EINVAL - mountflags includes MS_SHARED, MS_PRIVATE, MS_SLAVE, or MS_UNBINDABLE and also includes a flag other than MS_REC or MS_SILENT.
- EINVAL - An attempt was made to bind mount an unbindable mount.
- EINVAL - In an unprivileged mount namespace (i.e., a mount namespace owned by a user namespace that was created by an unprivileged user), a bind mount operation (MS_BIND) was attempted without specifying the recursive flag (MS_REC), which would have revealed the filesystem tree underneath one of the submounts of the directory being bound.
- ELOOP - Too many links encountered during pathname resolution.
- ELOOP - A move operation was attempted, and target is a descendant of source.
- EMFILE - (In case no block device is required:) Table of dummy devices is full.
- ENAMETOOLONG - A pathname was longer than MAXPATHLEN.
- ENODEV - filesystemtype not configured in the kernel.
- ENOENT - A pathname was empty or had a nonexistent component.
- ENOMEM - The kernel could not allocate a free page to copy filenames or data into.
- ENOTBLK - source is not a block device (and a device was required).
- ENOTDIR - target, or a prefix of source, is not a directory.
- ENXIO - The major number of the block device source is out of range.
- EPERM - The caller does not have the required privileges.

The definitions of MS_DIRSYNC, MS_MOVE, MS_PRIVATE, MS_REC, MS_RELATIME, MS_SHARED, MS_SLAVE, MS_STRICTATIME, and MS_UNBINDABLE were added to glibc headers in version 2.12. This function is Linux-specific and should not be used in programs intended to be portable.

Since Linux 2.4 a single filesystem can be mounted at multiple mount points, and multiple mounts can be stacked on the same mount point.

The mountflags argument may have the magic number 0xC0ED (MS_MGC_VAL) in the top 16 bits.
(All of the other flags discussed in DESCRIPTION occupy the low order 16 bits of mountflags.) Specifying MS_MGC_VAL was required in kernel versions prior to 2.4, but since Linux 2.4 it is no longer required and is ignored if specified. The original MS_SYNC flag was renamed MS_SYNCHRONOUS when a different MS_SYNC was added to <mman.h>.

Before Linux 2.4 an attempt to execute a set-user-ID or set-group-ID program on a filesystem mounted with MS_NOSUID would fail with EPERM. Since Linux 2.4 the set-user-ID and set-group-ID bits are just silently ignored in this case.

Starting with kernel 2.4.19, Linux provides per-process mount namespaces. A mount namespace is the set of filesystem mounts that are visible to a process. Mount-point namespaces can be (and usually are) shared between multiple processes, and changes to the namespace (i.e., mounts and unmounts) by one process are visible to all other processes sharing the same namespace. (The pre-2.4.19 Linux situation can be considered as one in which a single namespace was shared by every process on the system.)

A child process created by fork(2) shares its parent's mount namespace; the mount namespace is preserved across an execve(2).

A process can obtain a private mount namespace if: it was created using the clone(2) CLONE_NEWNS flag, in which case its new namespace is initialized to be a copy of the namespace of the process that called clone(2); or it calls unshare(2) with the CLONE_NEWNS flag, which causes the caller's mount namespace to obtain a private copy of the namespace that it was previously sharing with other processes, so that future mounts and unmounts by the caller are invisible to other processes (except child processes that the caller subsequently creates) and vice versa.

The Linux-specific /proc/[pid]/mounts file exposes the list of mount points in the mount namespace of the process with the specified ID; see proc(5) for further details.

This page is part of release 4.16 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at https://www.kernel.org/doc/man-pages/.
- List of Types
- Different Sampling Methods: How to Tell the Difference
- What is Sampling Error?
- More Articles

Samples are parts of a population. For example, you might have a list of information on 100 people (your "sample") out of 10,000 people (the "population"). You can use that list to make some assumptions about the entire population's behavior. However, it's not that simple. When you do stats, your sample size has to be ideal: not too large or too small. Then once you've decided on a sample size, you must use a sound technique to collect the sample from the population:

- Probability sampling uses randomization to select sample members. You know the probability of each potential member's inclusion in the sample. For example, 1/100. However, it isn't necessary for the odds to be equal. Some members might have a 1/100 chance of being chosen, others might have 1/50.
- Non-probability sampling uses non-random techniques (i.e. the judgment of the researcher). You can't calculate the odds of any particular item, person or thing being included in your sample.
- Bernoulli samples have independent Bernoulli trials on population elements. The trials decide whether the element becomes part of the sample. All population elements have an equal chance of being included in each choice of a single sample. The sample sizes in Bernoulli samples follow a binomial distribution.
- Poisson samples (less common): an independent Bernoulli trial decides if each population element makes it to the sample.
- Cluster samples divide the population into groups (clusters). Then a random sample is chosen from the clusters. It's used when researchers don't know the individuals in a population but do know the population subsets or groups.
- In systematic sampling, you select sample elements from an ordered frame. A sampling frame is just a list of participants that you want to get a sample from.
For example, in the equal-probability method, choose an element from a list and then choose every kth element using the equation k = N/n, where small "n" denotes the sample size and capital "N" equals the size of the population.

- SRS (simple random sampling): select items completely randomly, so that each element has the same probability of being chosen as any other element. Each subset of k elements has the same probability of being chosen as any other subset of k elements.
- In stratified sampling, you sample each subpopulation independently. First, divide the population into homogeneous (very similar) subgroups before getting the sample. Each population member belongs to only one group. Then apply simple random or systematic sampling within each group to choose the sample. Stratified randomization, a sub-type of stratified sampling used in clinical trials, first divides patients into strata, then randomizes with permuted block randomization.

Less Common Types

You'll rarely (if ever) come across these techniques in a basic stats class. However, you'll come across them in the "real world":

- Acceptance-Rejection Sampling: a way to sample from an unknown distribution using a similar, more convenient distribution.
- Accidental sampling (also known as grab, convenience or opportunity sampling): draw a sample from a convenient, readily available population. It doesn't give a representative sample for the population but can be useful for pilot testing.
- Adaptive sampling (also called response-adaptive designs): adapt your selection criteria as the experiment progresses, based on preliminary results as they come in.
- Bootstrap Sample: select a smaller sample from a larger sample with bootstrapping. Bootstrapping is a type of resampling where you draw large numbers of smaller samples of the same size, with replacement, from a single original sample.
- The Demon algorithm (physics) samples members of a microcanonical ensemble (used to represent the possible states of a mechanical system which has an exactly specified total energy) with a given energy. The “demon” represents a degree of freedom in the system which stores and provides energy. - Critical Case Samples: With this method, you carefully choose cases to maximize the information you can get from a handful of samples. - Discrepant case sampling: you choose cases that appear to contradict your findings. - Distance sample : a widely used technique that estimates the density or abundance of animal populations. - The experience sampling method samples experiences (rather than individuals or members). In this method, study participants stop at certain times and make notes of their experiences as they experience them. - Haphazard Sampling: where a researcher chooses items haphazardly, trying to simulate randomness. However, the result may not be random at all — tainted by selection bias. Additional Uncommon Types You’ll probably not come across these in a basic stats class. - Inverse Sample: based on negative binomial sampling. Take samples until a specified number of successes have happened. - Importance Sampling: a method to model rare events. - The Kish grid: a way to select members of a household for interviews and uses a random number tables for the selections. - Latin hypercube: used to construct computer experiments. It generates samples of plausible collections of values for parameters in a multidimensional distribution. - In line-intercept sampling, a method where you include an element in a sample from a particular region if a certain line segment intersects the element. - Use Maximum Variation Samples when you want to include extremes (like rich/poor or young/old). A related technique: extreme case sampling. 
- Multistage sampling: one of a variety of cluster sampling techniques where you choose random elements from a cluster (instead of every member in the cluster).
- Quota sampling: a way to select survey participants. It's similar to stratified sampling, but researchers choose members of a group based on judgment. For example, people closest to the researcher might be chosen for ease of access.
- Respondent Driven Sampling: a chain-referral sampling method where participants recommend other people they know.
- A sequential sample doesn't have a set size; take items one (or a few) at a time until you have enough for your research. It's commonly used in ecology.
- Snowball samples: where existing study participants recruit future study participants from people they know.
- Square root biased sample: a way to choose people for additional screenings at airports. A combination of SRS and profiling.

You'll come across many terms in statistics that define different sampling methods: simple random sampling, systematic sampling, stratified random sampling and cluster sampling. Telling the different sampling methods apart can be a challenge.

Different Sampling Methods: How to Tell the Difference: Steps

Step 1: Find out if the study sampled from individuals (for example, picked from a pool of people). You'll find simple random sampling in a school lottery, where individual names are picked out of a hat. But a more "systematic" way of choosing people can be found in "systematic sampling," where every nth individual is chosen from a population. For example, every 100th customer at a certain store might receive a "doorbuster" gift.

Step 2: Find out if the study picked groups of participants. For large numbers of people (like the number of potential draftees in the Vietnam war), it's much simpler to pick people by groups (cluster sampling). In the case of the draft, draftees were chosen by birth date, "simplifying" the procedure.
Step 3: Determine if your study contained data from more than one carefully defined group ("strata" or "cluster"). Some examples of strata could be: Democrats and Republicans, Renters and Homeowners, Country Folk vs. City Dwellers, Jacksonville Jaguars fans and San Francisco 49ers fans. If there are two or more very distinct, clear groups, you have a stratified sample or a "cluster sample."

- If you have data about the individuals in the groups, that's a stratified sample. In order to perform stratified sampling on this sample, you could perform random sampling of each stratum independently.
- If you only have data about the groups themselves (you may only know the location of the individuals), then that's a cluster sample.

Step 4: Find out if the sample was easy to get. Convenience samples are like convenience stores: why go out of your way to get samples, when you can nip out to the corner store? A classic example of convenience sampling is standing at a shopping mall, asking passersby for their opinion.

Sampling errors happen when you take a sample from the population rather than using the entire population. In other words, sampling error is the difference between the statistic you measure and the parameter you would find if you took a census of the entire population. If you were to survey the entire population (like the US Census), there would be no sampling error. The exact error is nearly impossible to calculate, but when you take samples at random, you can estimate it and call it the margin of error. For example, suppose you wanted to figure out what percentage of people out of a thousand were under 18, and you came up with the figure 19.357%. If the actual percentage equals 19.300%, the difference (19.357 - 19.300) of 0.057 percentage points is the sampling error. If you continued to take samples of 1,000 people, you'd probably get slightly different statistics, 19.1%, 18.9%, 19.5% etc., but they would all be around the same figure.
This is one of the reasons that you'll often see sample sizes of 1,000 or 1,500 in surveys: they produce a very acceptable margin of error of about 3%.

Formula: the margin of error is roughly 1/√n, where n is the size of the sample. For example, a random sample of 1,000 has about a 1/√1000 ≈ 3.2% margin of error.

Sampling error can only be reduced, not eliminated; it is the accepted tradeoff for not having to measure the entire population. In general, the larger the sample, the smaller the margin of error. There is a notable exception: if you use cluster sampling, this may increase the error because of the similarities between cluster members. A carefully designed experiment or survey can also reduce error.

Another Type of Error

Non-sampling error is another reason why there may be a difference between the sample and the population. It is due to poor data collection methods (like faulty instruments or inaccurate data recording), selection bias, non-response bias (where individuals don't want to or can't respond to a survey), or other mistakes in collecting the data. Increasing the sample size will not reduce these errors. The key is to avoid making the errors in the first place with a well-planned design for the survey or experiment.

- What is the Large Enough Sample Condition?
- What is a Sample?
- How to Find a Sample Size in Statistics.
- What is the 10% Condition?
- What is Direct Sampling?
- Double sampling.
- What is Efficiency?
- Latin Hypercube Sampling.
- What is an Effective Sample Size?
- Finite Population Correction Factor.
- What is Markov Chain Monte Carlo?
- What is a Typical Case?
- How to Use Slovin's Formula.
- Sample Distributions.
- What is the Samp. Distribution of the Sample Proportion?
- What is Sampling variability?
- Total Population Sampling
By: Lorus Milne and Margery Milne
989 pages, 700 col photos

Spiders, bugs, moths, butterflies, beetles, bees, flies, dragonflies, grasshoppers, and many other insects are detailed in more than 700 full-color photographs visually arranged by shape and color. Descriptive text includes measurements, diagnostic details, and information on habitat, range, feeding habits, sounds or songs, flight period, web construction, life cycle, behaviors, folklore, and environmental impact. An illustrated key to the insect orders and detailed drawings of the parts of insects, spiders, and butterflies supplement this extensive coverage.
Typhoon Soudelor has intensified into a powerful Category 5 storm, with NOAA calling this a "Super Typhoon". As shown in the CIMSS model (acquired 4 August 2015), it is clear that this storm has reached Category 5 status. The storm has formed and developed within an ideal environment. According to NOAA, it has an eye with a diameter of approximately 15 nautical miles. According to modelling, the storm has sustained winds reaching 155 knots (1 knot = 1.85 km/h), or approximately 287 km/h. Based on satellite data and modelling, it appears that the storm could have peak wind gusts of 190 knots, which would imply gusts reaching approximately 351 km/h. There are no weather stations currently in its path to verify the satellite data.

A well-developed eye is visible on the satellite photos. As shown in the satellite photo (MODIS - Worldview, acquired from NASA 4 August) with overlays, the eye is visible. Surrounding the eye are bands of cumulonimbus clouds marking thunderstorms with heavy rain. The storm takes the shape of a spiral. Using MODIS Worldview (NASA, 4 August 2015) with overlays, the storm is now passing over the Philippine Sea in a west-north-west direction; under its present trajectory, it will approach Taiwan, may even clip the northern coast of Taiwan, and then make landfall in Eastern China. The CIMSS model suggests some weakening back to a Category 4 storm in coming days. This storm has exceeded initial forecasts in terms of strength, and it will be interesting to see how it ends its life span in coming days.
<urn:uuid:d33ba423-ef70-4b59-95e7-11964de45ba3>
2.59375
357
News Article
Science & Tech.
59.289496
95,561,424
Develop a menu-driven program to input the sides and calculate the perimeter and area of a triangle, square, rectangle, pentagon, and polygon. The application should close on selecting the Exit option. Define a C++ abstract class named Shape. The class will have attributes for the sides of the shape, an accessor method getSides that returns the sides, and a mutator method setSides that sets them. The setSides method should return a Boolean value indicating whether the set was successful and update the sides accordingly. (You cannot have a 0 or negative side.) Put the class declaration in a header (.h) file and the class implementation in the .cpp file. Also add two more methods to the class, getArea and getPerimeter, to calculate the area and perimeter of the shape, and define a method Display() to print the sides of the shape on the screen. Make this class generic so that the sides of the shape may be integer or float. Use the abstract class Shape to define a TwoD shape class and add all the functionality to its methods. Define the instances triangle, square, and rectangle in the driver class to test the functionality of the TwoD class you have extended from the Shape class. Define a new class ThreeD extended from the TwoD class. The constructor of the ThreeD class should call the constructor of the TwoD class, and its print method should call the Display() of the superclass. Override the functions getArea and getPerimeter to work with the new formulas. The ThreeD class will be instantiated as a cube and a box.
I have read all of the given description, I can do it ASAP; I have very good knowledge of C++ and OOP. Thanks, Regards, Unified Architects. Relevant Skills and Experience: C++ (3 Years), OOP (2 Years). Proposed Milestones: $
Dear Client, I have read and understood your project requirements and I'm very interested and confident to write the programs. I have experience of programming in C, C++ and Java. I have coded many programs including
11 freelancers are bidding an average of $30 for this job
This inheritance and abstraction based C++ project is super easy. I can deliver it within 24 hours from now. Please, let me know! Relevant Skills and Experience: I graduated from UT Austin, and I take on C/C++ projects
I will make you the project just like your requirements, in a short period of time. I have done many projects related to your requirements because I am a pro in software development. Relevant Skills and Experience: C++
Hey, a C++ expert programmer is here. I have good advanced expertise in C++ programming. I can write the given set of programs for you in C++. Feel free to message me. Regards. Relevant Skills and Experience: C++ Programming
Hello there, read your project description and it is a pretty simple task. I can provide you the solution in no time which will get you your desired results. I'm a professional Computer Scientist. Relevant Skills and
Hello, I am very interested in your project. Please contact me for more details. Relevant Skills and Experience: I am studying software engineering and I have more than 4 years experience in this field. I have been w
<urn:uuid:325d6cc7-50a0-4f06-88ab-042c79a3ec7e>
2.890625
709
Comment Section
Software Dev.
54.083655
95,561,430
"We wanted to explore how the surrounding landscape affects people, both in terms of their perceptions and their behavior," explains Yabiku. "Since human behavior ultimately transforms the environment, the feedback people get from their surroundings is important to understand." The spectacular growth of Phoenix--which doubled twice in population size in the past 35 years--gives researchers a unique opportunity to monitor human-induced ecological transformations. "Experimental approaches are rarely used in studies of human-environment interactions,' says Casagrande. "By combining research approaches from both the social and biophysical sciences, we can gain new insights into how peoples' surroundings affect them." The study will run until at least 2010, but the results thus far suggest that even those individuals who grew up in the arid environment of Arizona prefer a more lush landscape conducive to recreation and social networking. In addition to the social interactions resulting from the different landscape designs, the researchers are also looking into residents' level of ecological knowledge, overall environmental values, and perceptions of landscapes. Yabiku and Casagrande hypothesize that residents' knowledge of flora and fauna will increase more in the mesic than in the native desert cluster. Poster Session 16 – Urban Ecology. Wednesday, August 9, 2006, 5:00 – 6:30 PM, Exhibit Hall, Ballroom Level, Cook Convention Center, Memphis, Tennessee. Presenters: David Casagrande, Western Illinois University (firstname.lastname@example.org); Scott Yabiku, Arizona State University, (email@example.com). Annie Drinkard | EurekAlert! Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany 25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF Dry landscapes can increase disease transmission 20.06.2018 | Forschungsverbund Berlin e.V. 
<urn:uuid:24b9f1b7-155d-4b86-ace1-d91ae3a351f3>
3
931
Content Listing
Science & Tech.
31.549343
95,561,432
Location, location, location. For dinosaurs, the asteroid that slammed into Earth and led to their extinction could not have hit a worse place on the planet, according to a new BBC Two documentary, “The Day the Dinosaurs Died.” Rocks recovered from under the Gulf of Mexico showcase the details of the cataclysmic event, which struck shallow waters 66 million years ago, and scientists say that the “great irony” lies in the asteroid’s point of contact with Earth. “This is where we get to the great irony of the story – because in the end it wasn’t the size of the asteroid, the scale of the blast, or even its global reach that made dinosaurs extinct – it was where the impact happened,” said evolutionary biologist Ben Garrod, who spent time on the scientists’ drill rig stationed off Mexico’s Yucatan Peninsula. “Had the asteroid struck a few moments earlier or later, rather than hitting shallow coastal waters it might have hit deep ocean…an impact in the nearby Atlantic or Pacific oceans would have meant much less vaporized rock – including the deadly gypsum. “The cloud would have been less dense and sunlight could still have reached the planet’s surface, meaning what happened next might have been avoided,” Garrod added. “In this cold, dark world food ran out of the oceans within a week and shortly after on land. With nothing to eat anywhere on the planet, the mighty dinosaurs stood little chance of survival.”
<urn:uuid:b8d92cc6-020d-47ae-a745-e894331755f9>
3.84375
328
Truncated
Science & Tech.
43.539176
95,561,438
Mature engineering disciplines have handbooks that describe successful solutions to known problems. For instance, automobile designers don't design cars using the laws of physics. Instead, they reuse standard designs with successful track records. The extra few percent of performance available by starting from scratch typically isn't worth the cost. The theme of this issue is ``patterns'' and ``pattern languages'', which are an attempt to describe successful solutions to common software problems. The long term goal is to develop handbooks for software engineers. Though we are a long way from that goal, patterns have proven useful in the short term to help people reuse successful practices. Not only do patterns teach useful techniques, they help people communicate better, and they help people reason about what they do and why. In addition, patterns are a step toward handbooks for software engineers. A pattern is a recurring solution to a standard problem. When related patterns are woven together they form a ``language'' that provides a process for the orderly resolution of software development problems. Pattern languages are not formal languages, but rather a collection of interrelated patterns, though they do provide a vocabulary for talking about a particular problem. Both patterns and pattern languages help developers communicate architectural knowledge, help people learn a new design paradigm or architectural style, and help new developers ignore traps and pitfalls that have traditionally been learned only by costly experience. Patterns have a context in which they apply. In addition, they must balance, or trade off, a set of opposing forces. The way we describe patterns must make all these things clear. Clarity of expression makes it easier to see when and why to use certain patterns, as well as when and why not to use these patterns. All solutions have costs, and pattern descriptions should state the costs clearly. 
From one point of view, there is nothing new about patterns, since by definition patterns capture experience. It has long been recognized that expert programmers don't think about programs in terms of low-level programming language elements, but in higher-order abstractions [Adelson and Soloway][Soloway and Ehrlich][Curtis][Linn and Clancy]. What is new is that people are working hard to systematically document abstractions other than algorithms and data structures. In general, most people working on patterns are not concentrating on developing formalisms for expressing patterns or tools for using them, though a few are. Instead, they are concentrating on documenting the key patterns that successful developers use, but that relatively few developers thoroughly understand and consistently apply in their daily work. Most of the people documenting patterns are motivated by the following values:
- Success is more important than novelty. The longer a pattern has been used successfully, the more valuable it tends to be. In fact, novelty can be a liability, because new techniques are often untested. Finding a pattern is a matter of discovery and experience, not invention. A new technique can be documented as a pattern, but its value is known only after it has been tried. This is why most patterns describe several uses.
- Emphasis on writing and clarity of communication. Most pattern descriptions document recurring solutions using a standard format. We look forward to the day when we will have handbooks for software engineers. Therefore, we write our patterns in a form that is like a catalog entry. In this sense, pattern descriptions are both a literary style and technical documentation. The emphasis on clear writing stems from our collective experience developing complex software systems. In many cases, projects failed because developers were unable to communicate good software designs, architectures, and programming practices to each other.
Well-written pattern descriptions improve communication by naming and concisely articulating the structure and behavior of solutions to common software problems.
- Qualitative validation of knowledge. Another part of our ethic is to qualitatively describe concrete solutions to software problems, instead of quantifying or theorizing about them. There is a place for theoretical and quantitative work, but we feel such activities are more appropriate in a context separate from discovering and documenting patterns. Our goal is to appreciate and reward the creative process that expert developers use to build high-quality software systems.
- Good patterns arise from practical experience. Every experienced developer has valuable patterns that we would like them to share. We value the experience of all software developers, and do not think that a few people have the patterns while everybody else just sits back and learns them. That is why our use of writer's workshops has been so successful at pattern conferences. In a writer's workshop, participants discuss the strengths and weaknesses of each pattern, accentuate positive aspects of the patterns, share their own experience, and suggest improvements in content and style. Writer's workshops assume that we can all learn from each other.
- Recognize the importance of human dimensions in software development. The purpose of patterns is not to replace developer creativity with rote application of rigid design rules. Neither are we trying to replace programmers with automated CASE tools. Instead, our intent is to recognize the importance of human factors in developing software. This recognition appears in design patterns when we discuss their effect on the complexity and understandability of software systems. In addition, it shows itself in patterns on effective software process and organization.
The papers in this issue are representative of the patterns being written today.
The first software patterns were written by object-oriented developers, so they focused on object-oriented design and programming [GoF] or on object-oriented modeling [Coad]. Although there is still a lot of interest in object-oriented patterns, a new trend is patterns that focus on efficient, reliable, and scalable concurrent, parallel, and distributed programming [PLoP2, Schmidt1, Siemens]. The majority of papers in this special issue follow the latter trend. McKenney's paper on ``Selecting Locking Primitives for Parallel Programs'' describes a set of patterns used to build efficient operating systems for multi-processor platforms. Islam and Devarakonda's paper on ``An Essential Design Pattern for Fault-Tolerant Distributed State Sharing'' focuses on a design pattern used to create reliable distributed software. Aarsten, Brugali, and Menga's paper on ``Designing Concurrent and Distributed Control Systems: An Approach Based on Design Patterns'' presents a pattern language for developing distributed software for large-scale control systems. Another recent trend in the patterns literature focuses on management, sociological, and organizational issues. Two papers in this issue address these topics. Cockburn's paper ``On the Interaction of Social Issues and Software Architecture'' describes a pattern language that illustrates how social forces affect the decisions that shape the structure of software designs. Goldfedder and Rising's paper on ``Patterns: A Training Experience'' discusses the organizational and sociological aspects of introducing patterns into a commercial software development environment. The study of patterns is well established in many other fields, including architecture, anthropology, music, and sociology. Early adopters of software patterns were highly influenced by Christopher Alexander, a researcher at the University of California, Berkeley, who has written extensively on patterns found in the architecture of houses, buildings, and communities.
As we have gained experience using patterns to document software expertise, new formats and new solutions have arisen to meet the unique challenges associated with developing software. For instance, many developers find it easier to understand design patterns by using software-centric visual aids such as class models and interaction graphs. Therefore, many pattern description formats use popular notations (such as Booch models and OMT) to concisely express their structure and dynamic behavior. In addition, pattern descriptions also commonly contain source code examples, written in the language of choice for the audience. Over the next few years, we expect the following aspects of patterns to receive considerable attention [Schmidt2].
- Integration of design patterns together with frameworks. Some of the most useful patterns describe frameworks. Such patterns can be viewed as abstract descriptions of frameworks that facilitate widespread reuse of software architecture. Similarly, frameworks can be viewed as concrete realizations of patterns that facilitate direct reuse of design and code. One difference between patterns and frameworks is that patterns are described in a language-independent manner, whereas frameworks are generally implemented in a particular language. However, patterns and frameworks are highly synergistic concepts, with neither subordinate to the other. The next generation of object-oriented frameworks will explicitly embody many patterns, and patterns will be widely used to document the form and contents of frameworks.
- Integration of design patterns to form pattern languages. Much of the existing literature on patterns is organized as design pattern catalogs [GoF, Siemens]. These catalogs present a collection of relatively independent solutions to common design problems. As more experience is gained using these patterns, developers and authors will increasingly integrate groups of related patterns to form pattern languages.
These pattern languages will encompass a family of related patterns that cover particular domains and disciplines, ranging from concurrency, distribution, and real-time systems to organizational design, software reuse, business and electronic commerce, and human interface design. In the same sense that comprehensive application frameworks support larger-scale reuse of design and code than stand-alone functions and class libraries do, pattern languages will support larger-scale reuse of software architecture and design than individual patterns. Developing comprehensive pattern languages is challenging and time consuming, but will provide the greatest payoff for pattern-based software development during the next few years.
- Integration with current software development methods and software process models. Patterns help to alleviate software complexity at several phases in the software lifecycle. Although patterns are not a software development method or process, they complement existing methods and processes. For instance, patterns help to bridge the abstractions in the domain analysis and architectural design phases with the concrete realizations of these abstractions in the implementation and maintenance phases. In the analysis and design phases, patterns help to guide developers in selecting from software architectures that have proven successful. In the implementation and maintenance phases, they help document the strategic properties of software systems at a level higher than source code and models of individual software modules. Ultimately, patterns are successful because people take the time to read them, learn them, use them, and write them. We encourage you to get involved with others working on patterns by attending conferences, participating in the online mailing lists, and contributing your insights and experience. To find out about books, online papers, electronic mailing lists, and conferences on patterns, see the Patterns Home Page.
[Adelson and Soloway] B. Adelson and E. Soloway, The Role of Domain Experience in Software Design. IEEE Transactions on Software Engineering, V SE-11, N 11, 1985, pp. 1351-1360.
[Coad] P. Coad, Object-Oriented Patterns. Communications of the ACM, V 35, N 9, September 1992, pp. 152-159.
[Curtis] B. Curtis, Cognitive Issues in Reusing Software Artifacts. In Software Reusability, V II, ed. T. Biggerstaff and A. Perlis, Addison-Wesley, 1989, pp. 269-287.
[GoF] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software. Reading, MA: Addison-Wesley, 1995.
[Linn and Clancy] M. Linn and M. Clancy, The Case for Case Studies of Programming Problems. Communications of the ACM, V 35, N 3, March 1992, pp. 121-132.
[PLoP2] J. O. Coplien, J. Vlissides, and N. Kerth, eds., Pattern Languages of Program Design, Vol. 2. Reading, MA: Addison-Wesley, 1996.
[Schmidt1] D. C. Schmidt, A Family of Design Patterns for Application-Level Gateways. Theory and Practice of Object Systems, Wiley and Sons, to appear 1996.
[Schmidt2] D. C. Schmidt, Using Design Patterns to Develop Reusable Object-Oriented Communication Software. Communications of the ACM (Special Issue on Object-Oriented Experiences, M. Fayad and W. T. Tsai, eds.), V 38, N 10, October 1995.
[Siemens] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal, Pattern-Oriented Software Architecture: A System of Patterns. Wiley and Sons, 1996.
[Soloway and Ehrlich] E. Soloway and K. Ehrlich, Empirical Studies of Programming Knowledge. IEEE Transactions on Software Engineering, V SE-10, N 5, September 1984.
Last modified 14:17:09 CDT 26 May 2004
<urn:uuid:be6277be-49a0-4e31-b4f8-277fb4086113>
3.0625
2,611
Academic Writing
Software Dev.
31.71989
95,561,439
Install Ruby on Rails on Ubuntu Linux Installing Ruby on Rails 4.0 on Ubuntu Linux. Up-to-date, detailed instructions on how to install Rails's newest release. 4.0 is the newest version of Rails. This in-depth installation guide is used by developers to configure their working environment for real-world Rails development. This guide doesn't cover installation of Ruby on Rails for a production server. To develop with Rails on Ubuntu, you’ll need Ruby (an interpreter for the Ruby programming language) plus gems (software libraries) containing the Rails web application development framework. Updating Rails Applications See the article Updating Rails if you already have Rails installed. For an overview of what’s changed in each Rails release, see the Ruby on Rails Release History. Ruby on Rails on Ubuntu Ubuntu is a popular platform for Rails development, as are other Unix-based operating systems such as Mac OS X. Installation is relatively easy and widespread help is available in the Rails developer community. Use a Ruby Version Manager As new versions of Ruby are released, you’ll need an easy way to switch between versions. Just as important, you’ll have a dependency mess if you install gems into the system environment. I recommend RVM to manage Ruby versions and gems because it is popular, well-supported, and full-featured. If you are an experienced Unix administrator, you can consider alternatives such as Chruby, Sam Stephenson’s rbenv, or others on this list. Conveniently, you can use RVM to install Ruby. Don’t Install Ruby from a Package Ubuntu provides a package manager system for installing system software. You’ll use this to prepare your computer before installing Ruby. However, don’t use apt-get to install Ruby. The package manager will install an outdated version of Ruby. And it will install Ruby at the system level (for all users). It’s better to use RVM to install Ruby within your user environment.
You can use Ruby on Rails without actually installing it on your computer. Hosted development, using a service such as Nitrous.io, means you get a computer “in the cloud” that you use from your web browser. Any computer can access the hosted development environment, though you’ll need a broadband connection. Nitrous.io is free for small projects. Using a hosted environment means you are no longer dependent on the physical presence of a computer that stores all your files. If your computer crashes or is stolen, you can continue to use your hosted environment from any other computer. Likewise, if you frequently work on more than one computer, a hosted environment eliminates the difficulty of maintaining duplicate development environments. For these reasons some developers prefer to “work in the cloud” using Nitrous.io. For more on Nitrous.io, see the article Ruby on Rails with Nitrous.io. Nitrous.io is a good option if you have trouble installing Ruby on Rails on your computer. Prepare Your System You’ll need to prepare your computer with the required system software before installing Ruby on Rails. You’ll need superuser (root) access to update the system software. Update Your Package Manager First: $ sudo apt-get update This must finish without error or the following step will fail. $ sudo apt-get install curl You’ll use Curl for installing RVM. Install Ruby Using RVM Use RVM, the Ruby Version Manager, to install Ruby and manage your Rails versions. If you have an older version of Ruby installed on your computer, there’s no need to remove it. RVM will leave your “system Ruby” untouched and use your shell to intercept any calls to Ruby. Any older Ruby versions will remain on your system and the RVM version will take precedence. Ruby 2.0.0-p353 was current when this was written. You can check for the current recommended version of Ruby. RVM will install the newest stable Ruby version. The RVM website explains how to install RVM. 
Here’s the simplest way: $ \curl -L https://get.rvm.io | bash -s stable --ruby Note the backslash before “curl” (it bypasses any shell alias you may have defined for curl). The “--ruby” flag will install the newest version of Ruby. RVM includes an “autolibs” option to identify and install system software needed for your operating system. See the article RVM Autolibs: Automatic Dependency Handling and Ruby 2.0 for more information. If You Already Have RVM Installed If you already have RVM installed, update it to the latest version and install Ruby: $ rvm get stable --autolibs=enable $ rvm install ruby $ rvm --default use ruby-2.0.0-p353 1. Installation Troubleshooting and Advice If you have trouble installing Ruby with RVM, see the article “Installing Ruby” for Installation Troubleshooting and Advice. If you have problems installing RVM, use Nitrous.io. 2. Install Node.js $ sudo apt-get install nodejs and set it in your $PATH. If you don’t install Node.js, you’ll need to add this to the Gemfile for each Rails application you build : 3. Check the Gem Manager RubyGems is the gem manager in Ruby. 4. Check the Installed Gem Manager Version: $ gem -v 2.1.11 You should have RubyGems 2.1.11 (check for a newer version). Use gem update --system to upgrade the Ruby gem manager if necessary. 5. RVM Gemsets Not all Rails developers use RVM to manage gems, but many recommend it. 6. Display a List of Gemsets: $ rvm gemset list gemsets for ruby-2.0.0-p353 => (default) global Only the “default” and “global” gemsets are pre-installed. If you get an error “rvm is not a function,” close your console and open it again. 7. RVM’s Global Gemset See what gems are installed in the “global” gemset: $ rvm gemset use global $ gem list A trouble-free development environment requires the newest versions of the default gems.
Several gems are installed with Ruby or the RVM default gemset: bundler (1.3.5) check for newer version bundler-unload (1.0.1) check for newer version rake (10.1.0) check for newer version rubygems-bundler (1.3.3) check for newer version rvm (126.96.36.199) check for newer version To get a list of gems that are outdated: $ gem outdated ### list not shown for brevity To update all stale gems: $ gem update ### list not shown for brevity Faster Gem Installation By default, when you install gems, documentation files will be installed. Developers seldom use gem documentation files (they’ll browse the web instead). Installing gem documentation files takes time, so many developers like to toggle the default so no documentation is installed. Here’s how to speed up gem installation by disabling the documentation step: $ echo "gem: --no-document" >> ~/.gemrc This adds the line gem: --no-document to the .gemrc file in your home directory. You can stay informed of new gem versions by creating an account at RubyGems.org and visiting your dashboard. Search for each gem you use and “subscribe” to see a feed of updates in the dashboard (an RSS feed is available from the dashboard). After you’ve built an application and set up a GitHub repository, you can stay informed with Gemnasium or VersionEye. These services survey your GitHub repo and send email notifications when gem versions change. Gemnasium and VersionEye are free for public repositories with a premium plan for private repositories. Rails Installation Options Check for the current version of Rails. Rails 4.0.2 was current when this was written. You can install Rails directly into the global gemset. However, many developers prefer to keep the global gemset sparse and install Rails into project-specific gemsets, so each project has the appropriate version of Rails. Let’s consider the options you have for installing Rails.
If you want the most recent stable release: $ gem install rails $ rails -v If you want the newest beta version or release candidate, you can install with --pre. $ gem install rails --pre $ rails -v Or you can get a specific version. For example, if you want the Rails 3.2.16 release: $ gem install rails --version=3.2.16 $ rails -v Create a Workspace Folder You’ll need a convenient folder to store your Rails projects. You can give it any name, such as code/ or projects/. For this tutorial, we’ll call it workspace/. Create a Projects Folder and Move Into The Folder: $ mkdir workspace $ cd workspace This is where you’ll create your Rails applications. New Rails 4.0 Application Here’s how to create a project-specific gemset, installing the current version of Rails 4.0, and creating a new application. $ mkdir myapp $ cd myapp $ rvm use ruby-2.0.0-p353@myapp --ruby-version --create $ gem install rails $ rails new . We’ll name the new application “myapp.” Obviously, you can give it any name you like. With this workflow, you’ll first create a root directory for your application, then move into the new directory. With one command you’ll create a new project-specific gemset. The option “--ruby-version” creates .ruby-version and .ruby-gemset files in the root directory. RVM recognizes these files in an application’s root directory and loads the required version of Ruby and the correct gemset whenever you enter the directory. When we create the gemset, it will be empty (though it inherits use of all the gems in the global gemset). We immediately install Rails. The command gem install rails installs the most recent release of Rails. Finally we run rails new . (using the Unix “dot” convention to refer to the current directory). This assigns the name of the directory to the new application. This approach is different from the way most beginners are taught to create a Rails application.
Most instructions suggest using rails new myapp to generate a new application and then entering the directory to begin work. Our approach makes it easy to create a project-specific gemset and install Rails before the application is created. The rails new command generates the default Rails starter app. If you wish, you can use the Rails Composer tool to generate a starter application with a choice of basic features and popular gems.

For a "smoke test" to see if everything runs, display a list of Rake tasks:

$ rake -T

There's no need to run bundle exec rake instead of rake when you are using RVM (see RVM and bundler integration). This concludes the instructions for installing Ruby and Rails. Read on for additional advice and tips.

Rails Starter Apps

The starter application you create with rails new is very basic. Use the Rails Composer tool to build a full-featured Rails starter app. You'll get a choice of starter applications with basic features and popular gems. Here's how to generate a new Rails application using the Rails Composer tool.

Using the conventional approach:

$ rails new myapp -m https://raw.github.com/RailsApps/rails-composer/master/composer.rb

Or, first creating an empty application root directory:

$ mkdir myapp
$ cd myapp
$ rvm use ruby-2.0.0@myapp --ruby-version --create
$ gem install rails
$ rails new . -m https://raw.github.com/RailsApps/rails-composer/master/composer.rb

You can add the -T flag to skip Test::Unit if you are using RSpec for testing. You can add the -O flag to skip Active Record if you are using a NoSQL datastore such as MongoDB. If you get an error "OpenSSL certificate verify failed" when you try to generate a new Rails app, see the article OpenSSL errors and Rails.

Rails Tutorials and Example Applications

The RailsApps project provides example apps that show how real-world Rails applications are built. Each example is known to work and can serve as your personal "reference implementation". Each is an open source project.
Dozens of developers use the apps, report problems as they arise, and propose solutions as GitHub issues. There is a tutorial for each one, so there is no mystery code. Purchasing a subscription for the tutorials gives the project financial support.

| Example Applications for Rails 4.0 | Tutorial | Comments |
| Learn Rails | coming soon | introduction to Rails for beginners |
| Rails and Bootstrap | Tutorial | starter app for Rails and Twitter Bootstrap |

| Example Applications for Rails 3.2 | Tutorial | Comments |
| Twitter Bootstrap, Devise, CanCan | Tutorial | Devise for authentication, CanCan for authorization, Twitter Bootstrap for CSS |
| Rails Membership Site with Stripe | Tutorial | site with subscription billing using Stripe |
| Rails Membership Site with Recurly | Tutorial | site with subscription billing using Recurly |
| Startup Prelaunch Signup App | Tutorial | for a startup prelaunch signup site |
| Devise, RSpec, Cucumber | Tutorial | Devise for authentication with ActiveRecord and SQLite for a database |
| Devise, Mongoid | Tutorial | Devise for authentication with a MongoDB datastore |
| OmniAuth, Mongoid | Tutorial | OmniAuth for authentication with a MongoDB datastore |
| Subdomains, Devise, Mongoid | Tutorial | Basecamp-style subdomains with Devise and MongoDB |

Adding a Gemset to an Existing Application

If you've already created an application with the command rails new myapp, you can still create a project-specific gemset. Here's how to create a gemset for an application named "myapp" and create .ruby-version and .ruby-gemset files in the application's root directory:

$ rvm use ruby-2.0.0@myapp --ruby-version --create

You'll need to install Rails and the gems listed in your Gemfile into the new gemset by running:

$ gem install rails
$ bundle install

Specifying a Gemset for an Existing Application

If you have already created both an application and a gemset, but not .ruby-version and .ruby-gemset files, here's how to add the files.
For example, if you want to use an existing gemset named "ruby-2.0.0@myapp":

$ echo "ruby-2.0.0" > .ruby-version
$ echo "myapp" > .ruby-gemset

Using .ruby-version and .ruby-gemset files means you'll automatically be using the correct Rails and gem versions when you switch to your application root directory on your local machine.

Databases for Rails

Rails uses the SQLite database by default. RVM installs SQLite and there's nothing to configure. Though SQLite is adequate for development (and even some production applications), a new Rails application can be configured for other databases. The command rails new myapp --database= will show you a list of supported databases. Supported for preconfiguration are: mysql, oracle, postgresql, sqlite3, frontbase, ibm_db, sqlserver, jdbcmysql, jdbcsqlite3, jdbcpostgresql, jdbc. For example, to create a new Rails application to use PostgreSQL:

$ rails new myapp --database=postgresql

The --database=postgresql parameter will add the pg database adapter gem to the Gemfile and create a suitable config/database.yml file. Don't use the --database= argument with the Rails Composer tool; you'll select a database from a menu instead.

If you wish to run your own servers, you can deploy a Rails application using Capistrano deployment scripts. However, unless system administration is a personal passion, it is much easier to deploy your application with a "platform as a service" provider such as Heroku.

By design, Rails encourages practices that avoid common web application vulnerabilities. The Rails security team actively investigates and patches vulnerabilities. If you use the most current version of Rails, you will be protected from known vulnerabilities. See the Ruby on Rails Security Guide for an overview of potential issues and watch the Ruby on Rails Security mailing list for announcements and discussion.
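For orientation, a PostgreSQL config/database.yml of the kind the generator produces looks roughly like the sketch below. This is an illustration, not generator output: the database names follow the application name by convention, and the username and password values are placeholders you would adjust for your local PostgreSQL setup.

```yaml
# config/database.yml (sketch; values are placeholders)
development:
  adapter: postgresql
  encoding: unicode
  database: myapp_development
  pool: 5
  username: myapp
  password:

test:
  adapter: postgresql
  encoding: unicode
  database: myapp_test
  pool: 5
  username: myapp
  password:
```

The adapter line is what ties this configuration to the pg gem that the --database=postgresql flag adds to the Gemfile.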
Your Application's Secret Token

Problems with "Segmentation Fault"

If you get a "segfault" when you try rails new, try removing and reinstalling RVM.

Problems with "Gem::RemoteFetcher::FetchError: SSL_connect"

Ruby and RubyGems (starting with Ruby 1.9.3p194 and RubyGems 1.8.23) require verification of server SSL certificates when Ruby makes an Internet connection via https. If you run rails new and get an error "Gem::RemoteFetcher::FetchError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate", see this article suggesting solutions: OpenSSL errors and Rails.

Problems with "Certificate Verify Failed"

Are you getting an error "OpenSSL certificate verify failed" when you try to generate a new Rails app from an application template? See this article suggesting solutions: OpenSSL errors and Rails.

Where to Get Help

Your best source for help with problems is Stack Overflow. Your issue may have been encountered and addressed by others. You can also try Rails Hotline, a free telephone hotline for Rails help staffed by volunteers.

Reblogged from railsapps.github.io
In a first-of-its-kind study, seismologists have used tiny "microearthquakes" along a section of California's notorious San Andreas Fault to create unique images of the contorted geology scientists will face as they continue drilling deeper into the fault zone to construct a major earthquake "observatory."

A chain of 32 seismometers recorded the small earthquakes at underground locations along a 7,100-foot-deep vertical drill hole. This eight-inch-diameter pilot hole was excavated last year about 1.1 miles southwest of the San Andreas Fault to monitor earthquake activity and assess the area's underground environment before drilling the main hole. After more vertical drilling at the same location next summer, the main hole will be angled off towards the northeast to pierce the fault zone itself.

In a paper in the Friday, Dec. 5, 2003 issue of the research journal Science, researchers from Duke University and the United States Geological Survey (USGS) described how they used seismic signals and computer analysis to derive outlines of what may be secondary faults, and perhaps fluid-filled cracks, in subterranean locations between the main fault and the pilot hole.

Monte Basgall | EurekAlert!
Climate Change, Desertification and the Mediterranean Region

Land desertification, a series of natural processes leading to gradual environmental degradation, is now considered a serious threat to the semi-arid areas of the Mediterranean, and particularly to the marginal hilly lands of the region. Soil erosion is the dominant process of land deterioration and desertification. Adverse climatic conditions, irregular terrain with steep slopes, geology and long periods of land misuse are the main factors responsible for desertification in the Mediterranean.

The climate of the region is characterised by strong seasonal and spatial variations in rainfall and large oscillations between minimum and maximum daily temperatures. Moreover, higher temperatures and more pronounced aridity are predicted to prevail during the next decades. The extensive deforestation and intensive cultivation of sloping lands since ancient times has already led to soil erosion and degradation through the progressive inability of the vegetation and soils to regenerate themselves.

Hilly soils developed on Tertiary and Quaternary formations usually have a restricted effective rooting depth for plant growth. Under hot and dry climatic conditions, the tolerance of these soils to erosion is low, and rainfed vegetation can no longer be supported. Many areas on limestone formations are already desertified, with the soil mantle eroded and the vegetation cover completely removed. Soils formed on marl deposits, for example, despite their considerable depth and high productivity in normal and wet years, are very susceptible to desertification in dry years. Intensive human interference in hilly areas vulnerable to desertification has severely damaged or totally destroyed the productivity of these lands due to the loss of soil volume beyond a critical point.

Keywords: Soil Erosion, Mediterranean Region, Hilly Area, Land Desertification, Landscape Position
Mathematics for the Practical Man
by George Howe
Publisher: Van Nostrand, 1918
Number of pages: 221

In this book mathematics, from algebra through calculus, has been treated in such a manner as to be clear to anyone. Men who wish to study a part of mathematics which they have not hitherto had, engineers who wish to refer to phases of mathematics which so easily slip from the memory, and students who desire a simple reference book will find this manual just the book for which they have been looking.

Download or read it online for free here:

by Christoph Kirsch - University of North Carolina
Topics covered: Introduction to boundary value problems for the diffusion, Laplace and wave partial differential equations; Bessel functions and Legendre functions; introduction to complex variables including the calculus of residues.

by J. Carlson, A. Jaffe, A. Wiles - American Mathematical Society
Guided by the premise that solving the most important mathematical problems will advance the field, this book offers a fascinating look at the seven unsolved Millennium Prize problems. This work describes these problems at the professional level.

by Ivan S. Sokolnikoff - McGraw Hill
The chief purpose of the book is to help to bridge the gap which separates many engineers from mathematics by giving them a bird's-eye view of those mathematical topics which are indispensable in the study of the physical sciences.

by Peter J. Mitas - Quick Reference Handbooks
This handbook, written by an experienced math teacher, lets readers quickly look up definitions, facts, and problem-solving steps. It includes over 700 detailed examples and tips to help them improve their mathematical problem solving skills.
The SAX Packages

The SAX parser is defined in the following packages.

| org.xml.sax | Defines the SAX interfaces. |
| org.xml.sax.ext | Defines SAX extensions that are used when doing more sophisticated SAX processing, for example, to process document type definitions (DTDs) or to see the detailed syntax for a file. |
| org.xml.sax.helpers | Contains helper classes that make it easier to use SAX, for example by defining a default handler that has null methods for all of the interfaces, so you only need to override the ones you actually want to implement. |

javax.xml.parsers package (the main classes needed here):

| SAXParser | Defines the API that wraps an XMLReader implementation class. |
| SAXParserFactory | Defines a factory API that enables applications to configure and obtain a SAX-based parser to parse XML documents. |

org.xml.sax package (a few key interfaces):

| ContentHandler | Receive notification of the logical content of a document. |
| DTDHandler | Receive notification of basic DTD-related events. |
| EntityResolver | Basic interface for resolving entities. |
| ErrorHandler | Basic interface for SAX error handlers. |

org.xml.sax.helpers package (the needed class):

| DefaultHandler | Default base class for SAX2 event handlers. |

Understanding the SAX Parser

First, create an instance of the factory class, which generates an instance of the parser. This parser wraps an XMLReader object. When the parser's parse() method is invoked, the reader invokes one of several callback methods implemented in the application. These callback methods are defined by the ContentHandler, ErrorHandler, DTDHandler, and EntityResolver interfaces.

Brief description of the key SAX APIs:

The SAXParser interface defines several kinds of parse() methods. Generally, an XML data source and a DefaultHandler object are passed to the parser. The parser processes the XML file and invokes the appropriate method on the handler object.

The DefaultHandler class implements the ContentHandler, ErrorHandler, DTDHandler, and EntityResolver interfaces (with null methods). You override only the ones you're interested in.
The ContentHandler methods startDocument, endDocument, startElement, and endElement are invoked when an XML tag is recognized. This interface also defines the methods characters and processingInstruction, which are invoked when the parser encounters the text in an XML element or an inline processing instruction, respectively.

The ErrorHandler methods error, fatalError, and warning are invoked in response to various parsing errors. The default error handler throws an exception for fatal errors and ignores other errors (including validation errors). To ensure correct handling, you'll need to supply your own error handler to the parser.

The EntityResolver method resolveEntity is invoked when the parser needs to identify the data referenced by a URI.
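As an illustration of these callbacks, here is a minimal, self-contained sketch (not part of the original text; the class name SaxDemo and the sample XML are invented for the demonstration). It overrides only startElement and relies on DefaultHandler's null implementations for everything else:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class SaxDemo {

    // Parse an XML string and return the qName of every element, in document order.
    static List<String> elementNames(String xml) throws Exception {
        final List<String> names = new ArrayList<>();

        // Only startElement is overridden; all other DefaultHandler callbacks
        // keep their null (do-nothing) implementations.
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attributes)
                    throws SAXException {
                names.add(qName); // invoked once per opening tag
            }
        };

        // SAXParserFactory produces a SAXParser, which wraps an XMLReader.
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                     handler);
        return names;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(elementNames("<catalog><book id=\"1\"/><book id=\"2\"/></catalog>"));
        // prints: [catalog, book, book]
    }
}
```

Because DefaultHandler supplies empty implementations of all four interfaces, the anonymous subclass needs to override only the events it cares about; production code would typically also override error and fatalError, as discussed above.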
Role of the Ocean in Climate
Kevin E. Trenberth, NCAR

The role of the climate system: ice, land, atmosphere, ocean.

The role of the atmosphere: The atmosphere is the most volatile component of the climate system.

[Slide fragment: ... of heat, freshwater and salts]

Major ice sheets, e.g., Antarctica and Greenland: Penetration of heat occurs primarily through conduction. The mass involved in changes from year to year is small but important on century time scales. Unlike sea ice, melting land ice changes sea level on longer time-scales.

Ice volumes: 28,000,000 km3 of water is in ice sheets, ice caps and glaciers. Most is in the Antarctic ice sheet which, if melted, would raise sea level by 65 m, vs. Greenland at 7 m and the other glaciers and ice caps at 0.35 m. In the Arctic, sea ice is ~3-4 m thick; around Antarctica, ~1-2 m thick.

El Niño-Southern Oscillation (ENSO): Some phenomena would not otherwise occur; ENSO is a natural mode of the coupled ocean-atmosphere system. ENSO refers to EN (ocean) and SO (atmosphere) together, the whole cycle of warming and cooling. ENSO events have been going on for centuries (records in corals, and in glacial ice in South America). ENSO arises from air-sea interactions in the tropical Pacific. El Niño is the warm phase, La Niña the cold phase; EN events occur about every 3-7 years.

The main external influence on planet Earth is from radiation. Incoming solar shortwave radiation is unevenly distributed owing to the geometry of the Earth-sun system and the rotation of the Earth. Outgoing longwave radiation is more uniform. The incoming radiant energy is transformed into various forms (internal heat, potential energy, latent energy, and kinetic energy), moved around in various ways primarily by the atmosphere and oceans, stored and sequestered in the ocean, land, and ice components of the climate system, and ultimately radiated back to space as infrared radiation.
An equilibrium climate mandates a balance between the incoming and outgoing radiation and that the flows of energy are systematic. These drive the weather systems in the atmosphere and currents in the ocean, and fundamentally determine the climate. And they can be perturbed, with climate change.

[Slide captions: F_O = -F_s, the difference due to ocean transports (Trenberth & Stepaniak, 2003); equivalent ocean heat content (ignores annual cycle in ocean heat transports), base period 1900-99; data from NOAA and the Hadley Centre, UK. Atlantic: N vs S; Indian: steady warming; Pacific: tropics leads (ENSO, PDO).]

The result is an imprint on global weather patterns.

Ocean heat content and sea level: Global warming from increasing greenhouse gases creates an imbalance in radiation at the top of the atmosphere, now of order 0.9 W m-2. Where does this heat go? The main sink is the ocean: thermosteric sea level rise associated with increasing ocean heat content. Some heat melts sea ice (no change in sea level); some melts land ice. Sea level increases much more per unit of energy from land-ice melt, by a ratio of about 30 to 90 to 1. Sea-ice melt does not change sea level.

[Slide captions: changes from the 1950s-1960s to the 1990s-2000s (IPCC 2007, Figure 5.18); 1961-2003 (blue bars), 1993-2003 (burgundy bars).]

The overturning transport at 26.5°N above 1000 m (green line), and the five snapshot estimates from hydrographic sections by Bryden et al. (2005). All time series have been smoothed with a three-day low-pass filter. As modified from Baringer and Meinen (2008). No statement on acceleration was possible in AR4.

[Slide captions: annual ocean heat content 0-700 m relative to the 1961-90 average; XBT drop-rate problems; Ishii et al. 2006 (0.3 W m-2), Willis et al. 2004, Levitus et al. 2009 (0.8 W m-2).]

Yearly time series of ocean heat content (10^22 J) for the 0-700 m layer from Levitus et al. (2009), Domingues et al. (2008) and Ishii and Kimoto (2009), with a base period of 1957-1990. Linear trends for each series for 1969-2007 are given in the upper portion of the figure.
[Slide captions: Palmer et al., OceanObs'09 (0.77 W m-2, 0.54 W m-2); sea level and thermosteric component, von Schuckmann et al., JGR 2009; 2003-2008 vs 1990-2008 at 10 m depth; difference for 2003-2008, Levitus et al. and von Schuckmann et al., JGR 2009; estimates ranging from Lyman et al. 2010 (to 700 m) to von Schuckmann et al. 2009 (to 2000 m); from Trenberth 2010, Nature.]

The ocean salinity budget: The single most important role of the oceans in climate is that they are wet!

Arctic sea ice area decreased by 2.7% per decade up to 2006. 2007 was 22% (10^6 km2) lower than 2005; 2008 was the second lowest, and 2010 the third lowest.

[Slide captions: divergences of water fluxes from E-P estimates over the oceans, values in Sv; new estimate of freshwater transport in the ocean from new values of E-P over the ocean plus new river discharge estimates from Dai and Trenberth (2002); Holfort and Siedler (2001) get -0.55 Sv at 30°S; mean E-P 1980-1993, m3/yr; linear trends, pss per 50 yr (Durack and Wijffels 2010, J. Climate).]

Subduction on isopycnals appears to account for much of the subsurface changes.

Global mean surface temperatures: 1997, 2003, 2008. Can we track energy since 1993, when we have had good sea level measurements? (Trenberth and Fasullo, Science, 2010.)

In CCSM4, during periods with no surface temperature rise, the energy imbalance at the top of the atmosphere remains about 1 W m-2 of warming. So where does the heat go? In CCSM4, during such periods the energy goes into the deep ocean, somehow. The stasis also appears in upper-ocean heat content, but not for the full-depth ocean: heat goes below 700 m. Questions regarding the mechanisms driving variability in deep ocean heat content remain. Both the CCSM4 and observations suggest that ENSO plays a necessary, if not sufficient, role. Strong recent ENSO events, including the El Niño of 1997/98 and the La Niña of 2007/08, exert a strong influence on trends in global temperature computed across this period. Similarly, cooling decades from the CCSM4 are bounded by El Niño events at their initiation and La Niña events at their termination.
Yet other intervals bounded by El Niño and La Niña are not accompanied by significant cooling. Our current work focuses on understanding this variable association between ENSO and global temperature trends.

[Slide captions: carbon inventories of reservoirs that naturally exchange carbon on time scales of decades to centuries; based on 3 million measurements since 1970, the global flux is 1.4 Pg C/yr (Takahashi et al., Deep Sea Res. II, 2009); human perturbation of the global carbon budget: fossil fuel emissions, CO2 flux (Pg C y-1), Canadell et al. (2007), Global Carbon Project (2008).]

The challenge is to better determine the heat budget at the surface of the Earth on a continuing basis. This provides for changes in heat storage of the oceans, glacier and ice sheet melt, changes in SSTs and associated changes in atmospheric circulation, some aspects of which should be predictable on decadal time scales. Several models can now simulate major changes like the sub-Saharan African drought beginning in the 1960s and the 1930s "Dust Bowl" era in North America, given global SSTs. Can coupled models predict these evolutions? Not so far. But there is hope that they will improve. In any case, models should show some skill simply based on the current state, once it becomes well known and is properly assimilated into models: we need a better observing system!
A new tool being developed by a UT Arlington assistant professor of physics could help scientists map and track the interactions between neurons inside different areas of the brain.

The journal Optics Letters recently published a paper by Samarendra Mohanty on the development of a fiber-optic, two-photon optogenetic stimulator and its use on human cells in a laboratory. The tiny tool builds on Mohanty's previous discovery that near-infrared light can be used to stimulate a light-sensitive protein introduced into living cells and neurons in the brain. This new method could show how different parts of the brain react when a linked area is stimulated.

The technology would be useful in the BRAIN mapping initiative recently championed by President Barack Obama, Mohanty said. BRAIN stands for Brain Research Through Advancing Innovative Neurotechnologies and will include $100 million in government investments in research.

"Scientists have spent a lot of time looking at the physical connections between different regions of the brain. But that information is not sufficient unless we examine how those connections function," Mohanty said. "That's where two-photon optogenetics comes into play. This is a tool not only to control the neuronal activity but to understand how the brain works."

The two-photon optogenetic stimulation described in the Optics Letters paper involves introducing the gene for ChR2, a protein that responds to light, into a sample of excitable cells. A fiber-optic infrared beam of light can then be used to precisely excite the neurons in a tissue circuit. In the brain, researchers would then observe responses in the excited area as well as other parts of the neural circuit. In living subjects, scientists would also observe the behavioral outcome, Mohanty said.

Optogenetic stimulation avoids damage to living tissue by using light to stimulate neurons instead of the electric pulses used in past research.
Mohanty's method of using low-energy near-infrared light also enables more precision and a deeper focus than the blue or green light beams often used in optogenetic stimulation, the paper said. Using fiber optics to deliver the two-photon optogenetic beam is another advance; previous methods required bulky microscopes or complex scanning beams. Mohanty's group is collaborating with UT Arlington Department of Psychology assistant professor Linda Perrotti to apply this technology in living animals.

"Dr. Mohanty's innovations continue to be recognized because of the great potential they hold," said Pamela Jansma, dean of the UT Arlington College of Science. "Hopefully, his work will one day provide researchers in other fields the tools they need to examine how the human body works and why normal processes sometimes fail."

Mohanty's co-authors on the research were members of his lab, Kamal Dhakal, Ling Gu and Bryan Black. The paper in Optics Letters is called "Fiber-optic two-photon optogenetic stimulation" and is available online at http://www.opticsinfobase.org/ol/upcoming.cfm?page=2.

University of Texas at Arlington
Authors: M. Zou, F. Yang, J. Ma, Y. Chen, L. Cao, F. Liu
Affiliation: Chinese Academy of Inspection and Quarantine, China
Pages: 510-513
Keywords: nanoscale titanium dioxide, particle size, crystal structure, light absorbance

Nanoscale titanium dioxide is an important physical sun-block material and is widely used in sunscreen. In our test, the influences of particle size and crystal structure on light absorbance were examined. The experiment shows that the absorbance (Abs) of nanoscale titanium dioxide in the UV region is much higher than in the visible region at the same concentration (0.04 g/L), as shown in Figure 1. The Abs of nanoscale rutile keeps increasing, reaches its highest value at 360 nm, and then starts to decrease as the wavelength becomes shorter. Nanoscale anatase shows a similar absorbance characteristic, with its highest Abs peak at 320 nm. In the UVA region, rutile has a stronger absorbance than anatase, while in the UVB and UVC regions the absorbance of rutile and anatase is similar. Compared with bulk titanium dioxide, nanoscale titanium dioxide absorbs UV light more effectively while letting visible light pass through easily. Since anatase is more active than rutile and thus may harm the skin, nanoscale rutile is the best choice for sunscreen among the four samples in Figure 1. However, when the particle size of rutile is too small, for example smaller than 10 nm, it has much higher Abs in the UVB and UVC regions and lower Abs in the UVA and visible regions, as shown in Figure 2. Therefore, the appropriate particle size and crystal structure must be considered when nanoscale titanium dioxide is used for its sun-block effect. The detailed optimization tests are being carried out in our lab.

Nanotech Conference Proceedings are now published in the TechConnect Briefs
It was recently reported that a political dispute between Serbia and Kosovo is sapping a small amount of energy from the local grid, causing a domino effect across the 25-nation network spanning the continent from Portugal to Poland and Greece to Germany. "The deviation from Europe's standard 50 Hz frequency has been enough to cause electric clocks that keep time by the power system's frequency, rather than by built-in quartz crystals, to fall behind by about six minutes since mid-January."

Fragility of complex systems

This is a good example of a phenomenon known as the fragility of complex systems: the tendency for multilayered and interconnected systems like electricity grids, transport networks (think gridlock) and the world health system (think pandemics and antibiotic resistance) to become increasingly prone to failure as layers of organisation and connectivity are added.

Why? Complex systems have more levels and points of vulnerability. There are more things that can go wrong. And when things do go wrong, the impacts can spread further throughout the system than would be the case with a simpler or more localised system.

To give an example, one of the advantages of electric vehicles is that electric motors have few moving parts, whereas internal combustion engines have hundreds. Electric motors are also more efficient. Simplicity doesn't always equal efficiency, but it does usually equal reliability. A system with several layers of complexity is more vulnerable to a breakdown.
The counterargument is that complex systems usually have correspondingly greater layers of inbuilt safeguards and redundancies that result in the maintenance of a high level of resilience. In the case of weather and climate or natural ecosystems, resilience is the product of millions of years of evolution. Even they have limits and breaking points, though, as we are seeing by pumping more CO2 into the atmosphere than it can handle. In human-made systems, resilience usually develops by trial and error. Fellow aficionados of aircraft crash television programs will know that it usually takes three things to go wrong before a big modern plane goes down. But mistakes often have to be made before technology, regulation or behavioural change leads to an increase in system resilience. This process – call it breakdown, learning and repair; or illness, diagnosis and treatment – has to be repeated as system complexity increases over time and space. And every such failsafe or backup system adds its own layer of cost and complexity.

Electricity, in essence, isn’t complex. But the Promethean process of turning the physical properties of electromagnetism into circuits to safely harness its properties for power, lighting, heating, electronics and so on adds a layer of complexity. It’s like putting lightning in a box. Converting from DC to AC or vice versa complicates the process.
This system becomes still more complex when we construct a geographically dispersed grid, and have to install substations to distribute energy and to manage voltage and frequency fluctuations in a system that must be synchronised over thousands of kilometres to within tens of milliseconds. And interconnectors, too, when we want to link previously separate regions. A financial market to trade energy creates yet another layer of complexity: the spot and futures markets; forecasting and settlements; inter-regional trading; hedging contracts and power purchase agreements. More recently, the rooftop solar revolution has led to two-way energy flows, including times and places where the net energy flow is back up the system to substations – something they weren’t designed for but must be adapted to suit. Even more layers of complexity are emerging, including:
- the growth of variable, non-synchronous large-scale renewable energy generation;
- the rise of home automation (often connected to global IT systems) and Internet of Things-enabled appliances and devices; and
- the need for the energy system to respond to the growing variability of Australia’s climate (e.g., by having greater strategic reserves to call on during heatwaves).*

The classic case of fragility in Australia’s grid is the blackout in South Australia on 29 September 2016.
To put it simply, the impact of a localised severe weather event (“two tornadoes with wind speeds between 190 and 260 kilometres per hour tore through a single-circuit 275-kilovolt transmission line and a double-circuit 275kV transmission line, about 170km apart”, says the ABC) led to a statewide “black system” or total shutdown because of the cascading impacts of supposedly protective responses higher up in the system. The responses were, in the case of the protection settings of the wind farms involved, apparently too conservative or sensitive. In other words, the multiple layers of technical complexity in a system undergoing rapid change, and its dispersed nature, resulted in the impacts being felt far more widely than they might otherwise have been.

Climate change responses

In Australia’s electricity system, grid resilience has generally been equated with short-term security and reliability, and maintained by tweaking the technical and operational parameters of the system. But a range of potential threats to electricity systems – climate change; cyberattacks; and an economic crisis – is increasingly likely to expose the fragility of this complex system. Let’s consider only the first of these. We know that higher summer temperatures are increasing the incidence of thermal overload on transformers; and that the greater incidence and intensity of bushfires and severe storms is increasing the risks to poles and wires. There are other impacts, though, that we need to consider, including coastal inundation, flooding and reduced rainfall in some areas. The industry has already turned its mind to these risks.
Responses to date include risk assessment planning; recommending more interstate interconnectors* to provide backup supply in case of regional outages; taking customers off grid in areas with high bushfire risk; and speeding up the rollout of smart meters, which enable easier fault identification. All well and good, but there is much more we need to think about and do. The best attempt to date to plan for a more resilient grid is the CSIRO/Energy Networks Association’s 2016 Network Transformation Roadmap. It recognised the growing complexity of this ‘system of systems’ and proposed solutions to deal with the technical challenges. It didn’t, though, specifically consider growing system complexity to be a risk in itself. Last week the Australian Energy Market Operator (AEMO) took a big step in this direction with the release of its “observations” on operational and market challenges to reliability and security in the National Electricity Market (NEM). AEMO is concerned that “the current market design is not sufficiently valuing resource characteristics of flexibility and dispatchability” to respond to growing variable renewable energy (VRE) and climate change impacts, and proposes a number of market reforms to improve system security and reliability.

Balancing independence and dependence

At a conceptual level, AEMO’s response amounts to increasing resilience by building more safeguards or levels of redundancy into the system as it becomes more complex and unpredictable. This is a valid response, but in the long run there are others worth considering. Abandoning the system – by going off grid, say – isn’t a financially viable option for most people at present. Nor is it a good use of resources, with excess solar energy going to waste when the batteries are full and the batteries sitting idle much of the time.
After Hurricane Maria ripped up Puerto Rico’s grid last September, Elon Musk claimed that Tesla’s solar and battery systems could restore power across the island. Essentially Tesla would replace the old, unreliable, oil-dependent centralised grid with a series of offgrid systems or microgrids. Going completely local isn’t always the best solution, though. Apart from the relatively high cost and inefficiency (i.e., low load factor) of offgrid systems, even an isolated microgrid may not always be the best solution. A big fat hurricane is one thing, but if the outage is caused locally (a bushfire or tornado, say), the lack of a connection to a centralised grid could be a minus, not a plus. Logically, a grid composed of interconnected microgrids that can be islanded from the main grid when a crisis strikes would seem to be the happy medium between a grid that is either totally centralised or entirely composed of offgrid systems. That might work for individual households and businesses, and for purpose-built microgrids. At present, though, most cities and towns are unable to operate in islanded mode. Being designed to be supplied from a distance, they simply shut down if the centralised supply is cut. To work differently, they need more local generation, metering and communication systems that are suited to part-time or backup microgrid use, and rules that recognise the costs and benefits of the local use of the system.
Ideally, we would eventually see a system of nested or meshed microgrids at different levels of the system, from individual buildings through individual feeders to distribution and zone substations covering whole towns, suburbs or sub-regions. It sounds expensive, but then, so are prolonged outages. These microgrids could talk to and supply one another without the need for centralised hub-and-spokes communications. Other responses to increase system resilience in the face of climate change might include:
- Less investment in areas or assets at greatest risk of climate change impacts.
- Ensuring there is a variety of scales, locations and types of renewable generation and storage available.
- Undergrounding power lines.
- Lowering reliability standards where this would favour a greater variety of supply and distribution types.
- And, as ever, ensuring that consumers are better incentivised to use less energy to meet their needs.

All fine in theory; but how do we value resilience in the grid? In other words, how do we place a dollar or other value on long-term planning and investments in infrastructure that take such threats seriously? There is nothing in the National Electricity Rules to incentivise taking climate change impacts into account when making investment decisions – particularly in relation to high-impact but relatively low-probability events like severe weather or longer-term climatic changes.** An example of the problem: in AEMO’s otherwise excellent Integrated System Plan Consultation, climate change rates only a single entry, and that relates solely to “climate and energy policy uncertainty”. Consequently, the Snowy region and adjoining areas in New South Wales and Victoria are mooted as potential Renewable Energy Zones.
If you look at the Federal Government’s Climate Analogues tool, 2050 projections for the Snowy region show warming of up to three degrees and a reduction in rainfall of up to 15 per cent. This might be the worst case scenario, but globally, climate change data are mostly tracking at the upper end of earlier IPCC projections, so we should take this prospect seriously. Yet if you then look at the Snowy 2.0 Feasibility Study, how many references are there to climate change and reduced rainfall as risks potentially affecting the engineering and financial viability of this $4 billion-plus taxpayer-funded project? As far as I can tell, none. As a potential stranded asset, that ranks up there with the National Broadband Network (NBN).

A holistic vision

Indian Buddhism spoke of a vast net of jewels hanging from the palace of the great god Indra. Each jewel reflects and is reflected in every other jewel in an infinite, multidimensional web of hologram-like connections. Whether you think of each jewel as a building, a community or a substation, the vision is one of neither total dependence nor complete independence but mutual interdependence.

* See, e.g., AEMO’s 2016 National Transmission Network Development Plan
** For instance, in the regulatory investment tests (RITs) for transmission and distribution.

This article was first published on Reneweconomy.com.au and is republished here with permission from the author.
Net ecosystem productivity of temperate grasslands in northern China: An upscaling study
USGS Staff -- Published Research
Date of this Version: 1-1-2014
Agricultural and Forest Meteorology 184 (2014) 71–81

Abstract
Grassland is a widespread biome type globally, and plays an important role in the terrestrial carbon cycle. We examined net ecosystem production (NEP) for the temperate grasslands in northern China from 2000 to 2010. We combined flux observations, satellite data, and climate data to develop a piecewise regression model for NEP, and then used the model to map NEP for grasslands in northern China. Over the growing season, northern China's grasslands had a net carbon uptake of 158 ± 25 g C m−2 during 2000–2010, with a mean regional NEP estimate of 126 Tg C. Our results showed generally higher grassland NEP at high latitudes (northeast) than at low latitudes (central and west) because of different grassland types and environmental conditions. In the northeast, which is dominated by meadow steppes, the growing season NEP generally reached 200–300 g C m−2. In the southwest corner of the region, which is partially occupied by alpine meadow systems, the growing season NEP also reached 200–300 g C m−2. In the central part, which is dominated by typical steppe systems, the growing season NEP generally varied in the range of 100–200 g C m−2. The NEP of northern China's grasslands was highly variable across years, ranging from 129 (2001) to 217 g C m−2 growing season−1 (2010). The large interannual variations of NEP could be attributed to the sensitivity of temperate grasslands to climate changes and extreme climatic events. The droughts in 2000, 2001, and 2006 reduced the carbon uptake over the growing season by 11%, 29%, and 16% relative to the long-term (2000–2010) mean.
Over the study period (2000–2010), precipitation was significantly correlated with NEP for the growing season (R2 = 0.35, p-value < 0.1), indicating that water availability is an important stressor for the productivity of the temperate grasslands in semi-arid and arid regions of northern China. We conclude that northern temperate grasslands have the potential to sequester carbon, but the capacity for carbon sequestration depends on grassland types and environmental conditions. Extreme climate events like drought can significantly reduce the net carbon uptake of grasslands.

Citation Information
Li Zhang, Huadong Guo, Gensou Jia, Bruce K. Wylie, et al. "Net ecosystem productivity of temperate grasslands in northern China: An upscaling study" (2014). Available at: http://works.bepress.com/bruce_wylie/20/
posted by sylvia

When aluminum metal reacts with iron(III) oxide to form aluminum oxide and iron metal, 429.6 kJ of heat are given off for each mole of aluminum metal consumed, under constant pressure and standard conditions. What is the correct value for the standard enthalpy of reaction in the thermochemical equation:

2Al + Fe2O3 --> 2Fe + Al2O3

If 429.6 kJ of heat are given off for each mole of Al and there are 2 mols of Al in the balanced equation, then there must be 2 x 429.6 kJ released for the 2 mols of Al.
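The arithmetic in the answer can be checked with a short script. One point the answer leaves implicit: because heat is *released* (exothermic), the standard enthalpy of reaction carries a negative sign by convention.

```python
# Standard enthalpy for 2Al + Fe2O3 -> 2Fe + Al2O3, following the
# reasoning in the answer above.
heat_released_per_mol_al = 429.6  # kJ released per mole of Al consumed
mol_al_in_equation = 2            # the balanced equation consumes 2 mol Al

total_heat_released = mol_al_in_equation * heat_released_per_mol_al
delta_h = -total_heat_released    # exothermic => negative enthalpy change

print(f"Heat released: {total_heat_released} kJ")      # 859.2 kJ
print(f"Standard enthalpy of reaction: {delta_h} kJ")  # -859.2 kJ
```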
The Search for Mars Biosignatures Up at High Altitudes
February 07, 2017 / Posted by: Miki Huynh

Summit of the Simba volcano (19,400 ft) – The summit crater lake is shallow and its water column completely transparent. The red color of the lake is from an algae that has developed special pigments in response to extreme levels of short wavelength (UVA and UVB) radiation. Source: SETI Institute/NAI High Lakes Project

From October through November 2016, Nathalie Cabrol, director of the Carl Sagan Center at the SETI Institute, with members of the NASA Astrobiology Institute team based at SETI, went on a month-long expedition to Chile, visiting Mars-analogue sites between 800 and 6,000 m above sea level to collect samples and test in situ instruments in preparation for the Mars 2020 and ExoMars science payloads. Photos and posts from the field sites written by Nathalie Cabrol are available at the SETI Institute website, and are linked to below.

The Search for Biosignatures on Mars Starts High on Earth: http://www.seti.org/seti-institute/aiming-high-the-search-for-biosignatures-on-mars-starts-high-on-earth
Hello from Salar Grande: http://www.seti.org/seti-institute/salar-grande
Welcome to Salar Grande: http://www.seti.org/seti-institute/welcome-to-salar-grande
Dr. Cabrol Visits Expedition Site 2 in the Andes: http://www.seti.org/seti-institute/Dr-Cabrol-visits-Expedition-Site-2-Andes
Chile Expedition 2016 Album 3: http://www.seti.org/seti-institute/chile-expedition-2016/album-3

The samples taken from the field study are currently being characterized by SETI and the University of Montana, with future talks planned to present the research findings.

The SETI scientist camp site at Salar Grande. Domes and tents consist of personal tents, dining/working areas, kitchen, and a lab.
Source: Victor Robles Bravo, Campoalto/SETI Institute
Found most commonly in these habitats: 6 times found in rainforest, 5 times found in mixed deciduous forest, 4 times found in evergreen forest, 1 time found in moist evergreen forest, 2 times found in secondary forest, 2 times found in dry dipterocarp forest, 2 times found in ecotone between mixed deciduous/dry dipterocarp, 1 time found in garden, 1 time found in deciduous dipterocarp forest, 1 time found in dipterocarp forest, ...

Found most commonly in these microhabitats: 4 times ex rotten log, 1 time under log, 1 time in soil under log, 1 time ground nest.

Collected most commonly using these methods: 62 times Malaise trap, 20 times Malaise traps, 1 time Malaise, 2 times pan traps, 1 time Malaise trap 3-B, 1 time Winkler #19, 1 time Winkler #8.

Elevations: collected from 100 - 1700 meters, 575 meters average

AntWeb content is licensed under a Creative Commons Attribution License. We encourage use of AntWeb images. In print, each image must include attribution to its photographer and "from www.AntWeb.org" in the figure caption. For websites, images must be clearly identified as coming from www.AntWeb.org, with a backward link to the respective source page. See How to Cite AntWeb.

AntWeb is funded from private donations and from grants from the National Science Foundation, DEB-0344731, EF-0431330 and DEB-0842395.
Monarch butterfly on butterflyweed. Photo by Mara Koenig/USFWS. We have developed a database to capture information about recently (i.e. since 2014) completed, ongoing and planned conservation efforts for the monarch butterfly. Conservation efforts are on-the-ground actions designed to improve the population status of monarchs. This includes improving and creating habitat by enhancing milkweed and blooming nectar plant resources. The database will help the U.S. Fish and Wildlife Service and conservation partners assess conditions for the monarch now and into the future, across the United States. The first step to register for access to the database web application is to send an email to FW3_monarchconservation@fws.gov. In the email include the following information: Before providing us with your information, you may prefer to read the disclaimers that describe how we use your information and your privacy rights. The disclaimers are available below. After sending the registration request, you will receive an email from the Fish and Wildlife Service’s Environmental Conservation Online System (ECOS) Helpdesk with: Please see the User Guide (below) for instructions on how to continue using the Monarch Conservation Database. Tutorial web conference join instructions Phone: 888-593-8438 (audio through your computer is not available) Participant passcode: 4911869 1. Join the meeting: 2. Enter the required fields - the Conference/Meeting Passcode is not required. If requested, the conference meeting number is 743130248. 4. Click on Proceed. User Guide (PDF) Quick Reference guide (PDF) Database Fields (PDF) updated June 7, 2018 Land Use and Activities Table (PDF) updated June 13, 2018 Simplified Version of Data Model (1.3 MB PDF) Excel Bulk Upload Template (XLSX) Required Workflow Diagram (PDF) updated May 31, 2018 Database FAQs (PDF) Information provided during MCD development
Researchers at Imperial College London have just begun a 5-year project to design and build tiny earthquake measuring devices to go to Mars on the 2007 NetLander mission.

A microseismometer machined out of a single piece of silicon. The central rectangular weight is attached by two springs to a surrounding frame. © Imperial College of Science, Technology and Medicine

Unlike the instruments on next year's European Mars Express/Beagle II mission, the Marsquake sensors will be the first to look deep inside the planet. The internal structure of Mars is a key to understanding some fundamental questions about the planet, including whether life ever existed there. The sensors are capable of detecting liquid water reservoirs hidden below the surface, where life could possibly survive on Mars today. The recent discovery by the Mars Odyssey orbiter of large amounts of ice at the poles opens up the possibility of liquid water existing in the warmer conditions underground near the Martian equator.

Tom Miller | alphagalileo
Genetically encoded optical tools have revolutionized modern biology by allowing detection and control of biological processes with exceptional spatiotemporal precision and sensitivity. Natural photoreceptors provide researchers with a vast source of molecular templates for engineering of fluorescent proteins, biosensors, and optogenetic tools. Here, we give a brief overview of natural photoreceptors and their mechanisms of action. We then discuss fluorescent proteins and biosensors developed from light-oxygen-voltage-sensing (LOV) domains and phytochromes as well as their properties and applications. These fluorescent tools possess unique characteristics not achievable with green fluorescent protein-like probes, including near-infrared fluorescence, independence of oxygen, small size, and photosensitizer activity. We next provide an overview of available optogenetic tools of various origins, such as LOV and BLUF (blue-light-utilizing flavin adenine dinucleotide) domains, cryptochromes, and phytochromes, enabling control of versatile cellular processes. We analyze the principles of their function and practical requirements for use. We focus mainly on optical tools with demonstrated use beyond bacteria, with a specific emphasis on their applications in mammalian cells. Expected final online publication date for the Annual Review of Biochemistry Volume 84 is June 02, 2015. Please see http://www.annualreviews.org/catalog/pubdates.aspx for revised estimates.
The maturation of the brain of unborn infants is given a gentle “prod” by its mother. A protein messenger from the mother’s blood is transferred to the embryo and stimulates the growth and wiring of the neurons in the brain. Neuroscientists in Bochum (Prof. Petra Wahle, Developmental Neurobiology at the Ruhr University), Magdeburg (Dr. Peter Landgraf, Prof. Michael R. Kreutz) and in Münster (Prof. Hans-Christian Pape) performed a detailed investigation of this signal transduction pathway and identified those molecules in the brain of the embryo that interact with the maternal messenger. This achievement delivers an important step towards the comprehension of this signal transduction pathway. Their research work is published in the current volume of the Journal of Biological Chemistry.

The maternal immune system produces a signal molecule

In previous studies, the scientists had already managed to isolate the polypeptide messenger that plays a decisive role in the brain development of embryos and newborn infants, namely the “survival promoting peptide” Y-P30. Y-P30 enhances the survival of thalamic (diencephalic) neurons and promotes the neuritogenic activity of cerebellar and thalamic neurons. Prof. Wahle explained that it is “interesting to note that Y-P30 is not synthesized directly within the developing infant brain, but is produced by specific immune cells of the mother’s blood during pregnancy. From there it passes the blood-placenta barrier and accumulates - inter alia - in neurons of the cerebral cortex of the embryo.” (Landgraf P, Sieg F, Wahle P, Meyer G, Kreutz MR, Pape HC (2005) “A maternal blood-borne factor promotes survival of the developing thalamus”. FASEB Journal 19:225-227.) The scientists were able to provide evidence of the peptide in the brain of fetuses of mice and humans, and of postnatal rats.
Messengers need receptors to be effective

It was of particular interest to identify possible receptors for Y-P30 to enable investigation of the biological role of the messenger and to clarify its mechanisms of action. The research team has succeeded in identifying the molecules that interact with Y-P30, namely pleiotrophin, a protein within the extracellular space, and so-called syndecans, i.e. proteins on the cell surface. It was known that both binding partners could promote the growth of neurons. The scientists were now able to show that Y-P30 enhances the development of the pleiotrophin/syndecan signaling complex and stabilizes it. The signaling activity within the neurons is increased and enhances the neuritogenic activity. Prof. Petra Wahle and Suvarna Wagh, PhD student in research training group 736, were able to demonstrate a direct action of the Y-P30 peptide on the growth of axons (neurites). The signal-receptor complex comprised of Y-P30, pleiotrophin and syndecan thus appears to enhance the development of the axonal projection tracts and the wiring of the brain.

Prof. Dr. Petra Wahle | alfa
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. 
Researchers can use these flashes to resolve structures with diameters on the...
- Bacteria - Wikipedia: Bacteria (common noun bacteria, singular bacterium) are a type of biological cell. They constitute a large domain of prokaryotic microorganisms.
- Bacteria: Definition, Types & Infections - Live Science: Bacteria are microscopic single-celled organisms that can be helpful, such as those that live in our guts, or harmful, such as flesh-eating bacteria.
- Bacteria: What you need to know - Medical News Today: Bacteria are single-celled organisms that exist in their millions, in every environment, inside or outside other organisms. Some are harmful, but others support life.
- bacteria | Cell, Evolution, & Classification | Britannica.com: Bacteria, microscopic single-celled organisms that inhabit virtually all environments on Earth, including the bodies of multicellular animals.
- Introduction to the Bacteria - UCMP: Bacteria are often maligned as the causes of human and animal disease (like this one, Leptospira, which causes serious disease in livestock). However, certain bacteria, the actinomycetes, produce antibiotics such as streptomycin and nocardicin; others live symbiotically in the guts of animals...
- About Microbiology – Bacteria: What are bacteria? Find out about the different groups of bacteria, how they reproduce and their survival skills.
- Bacteria - definition of bacteria by The Free Dictionary: Define bacteria. bacteria synonyms, bacteria pronunciation, bacteria translation, English dictionary definition of bacteria. n. Plural of bacterium.
- Bacteria - ScienceDaily: Bacteria (singular: bacterium) are a major group of living organisms. Most are microscopic and unicellular, with a relatively simple cell structure lacking a cell nucleus, and organelles such as mitochondria and chloroplasts. Bacteria are the most abundant of all organisms.
- Bacteria | Define Bacteria at Dictionary.com: Bacteria definition, ubiquitous one-celled organisms, spherical, spiral, or rod-shaped and appearing singly or in chains, comprising the Schizomycota, a phylum of the kingdom Monera (in some classification systems the plant class Schizomycetes), various species of which are involved in fermentation, putrefaction, infectious diseases, or...
- Introduction to Bacteria - YouTube: This is a high-definition (HD) video choreographed to music that introduces the viewer/student to bacteria.
Diffusion is the net movement of molecules or atoms from a region of high concentration (or high chemical potential) to a region of low concentration (or low chemical potential) as a result of random motion of the molecules or atoms. Diffusion is driven by a gradient in chemical potential of the diffusing species. A gradient is the change in the value of a quantity, e.g. concentration, pressure, or temperature, with the change in another variable, usually distance. A change in concentration over a distance is called a concentration gradient, a change in pressure over a distance is called a pressure gradient, and a change in temperature over a distance is called a temperature gradient. The word diffusion derives from the Latin word diffundere, which means "to spread out". A distinguishing feature of diffusion is that it depends on particle random walk, and results in mixing or mass transport without requiring directed bulk motion. Bulk motion, or bulk flow, is the characteristic of advection. The term convection is used to describe the combination of both transport phenomena.
- 1 Diffusion vs. bulk flow
- 2 Diffusion in the context of different disciplines
- 3 History of diffusion in physics
- 4 Basic models of diffusion
- 4.1 Diffusion flux
- 4.2 Fick's law and equations
- 4.3 Onsager's equations for multicomponent diffusion and thermodiffusion
- 4.4 Nondiagonal diffusion must be nonlinear
- 4.5 Einstein's mobility and Teorell formula
- 4.6 Jumps on the surface and in solids
- 4.7 Diffusion in porous media
- 5 Diffusion in physics
- 6 Random walk (random motion)
- 7 See also
- 8 References

Diffusion vs. bulk flow
An example of a situation in which bulk motion and diffusion can be differentiated is the mechanism by which oxygen enters the body during external respiration, known as breathing. The lungs are located in the thoracic cavity, which expands as the first step in external respiration.
This expansion leads to an increase in volume of the alveoli in the lungs, which causes a decrease in pressure in the alveoli. This creates a pressure gradient between the air outside the body at relatively high pressure and the alveoli at relatively low pressure. The air moves down the pressure gradient through the airways of the lungs and into the alveoli until the pressure of the air and that in the alveoli are equal, i.e. the movement of air by bulk flow stops once there is no longer a pressure gradient. The air arriving in the alveoli has a higher concentration of oxygen than the "stale" air in the alveoli. The increase in oxygen concentration creates a concentration gradient for oxygen between the air in the alveoli and the blood in the capillaries that surround the alveoli. Oxygen then moves by diffusion, down the concentration gradient, into the blood. The other consequence of the air arriving in the alveoli is that the concentration of carbon dioxide in the alveoli decreases. This creates a concentration gradient for carbon dioxide to diffuse from the blood into the alveoli, as fresh air has a very low concentration of carbon dioxide compared to the blood in the body. The pumping action of the heart then transports the blood around the body. As the left ventricle of the heart contracts, the volume decreases, which increases the pressure in the ventricle. This creates a pressure gradient between the heart and the capillaries, and blood moves through blood vessels by bulk flow down the pressure gradient. As the thoracic cavity contracts during expiration, the volume of the alveoli decreases and creates a pressure gradient between the alveoli and the air outside the body, and air moves by bulk flow down the pressure gradient.

Diffusion in the context of different disciplines
The concept of diffusion is widely used in: physics (particle diffusion), chemistry, biology, sociology, economics, and finance (diffusion of people, ideas and of price values). However, in each case, the object (e.g., atom, idea, etc.)
that is undergoing diffusion is "spreading out" from a point or location at which there is a higher concentration of that object. There are two ways to introduce the notion of diffusion: either a phenomenological approach starting with Fick's laws of diffusion and their mathematical consequences, or a physical and atomistic one, by considering the random walk of the diffusing particles. In the phenomenological approach, diffusion is the movement of a substance from a region of high concentration to a region of low concentration without bulk motion. According to Fick's laws, the diffusion flux is proportional to the negative gradient of concentrations. It goes from regions of higher concentration to regions of lower concentration. Some time later, various generalizations of Fick's laws were developed in the frame of thermodynamics and non-equilibrium thermodynamics. From the atomistic point of view, diffusion is considered as a result of the random walk of the diffusing particles. In molecular diffusion, the moving molecules are self-propelled by thermal energy. Random walk of small particles in suspension in a fluid was discovered in 1827 by Robert Brown. The theory of Brownian motion and the atomistic backgrounds of diffusion were developed by Albert Einstein. The concept of diffusion is typically applied to any subject matter involving random walks in ensembles of individuals. Biologists often use the terms "net movement" or "net diffusion" to describe the movement of ions or molecules by diffusion. For example, oxygen can diffuse through cell membranes so long as there is a higher concentration of oxygen outside the cell. However, because the movement of molecules is random, occasionally oxygen molecules move out of the cell (against the concentration gradient). Because there are more oxygen molecules outside the cell, the probability that oxygen molecules will enter the cell is higher than the probability that oxygen molecules will leave the cell.
Therefore, the "net" movement of oxygen molecules (the difference between the number of molecules either entering or leaving the cell) is into the cell. In other words, there is a net movement of oxygen molecules down the concentration gradient.

History of diffusion in physics
In the scope of time, diffusion in solids was used long before the theory of diffusion was created. For example, Pliny the Elder had previously described the cementation process, which produces steel from the element iron (Fe) through carbon diffusion. Another example is well known for many centuries, the diffusion of colours of stained glass or earthenware and Chinese ceramics. The systematic experimental study of the diffusion of gases was performed by Thomas Graham, who wrote: "...gases of different nature, when brought into contact, do not arrange themselves according to their density, the heaviest undermost, and the lighter uppermost, but they spontaneously diffuse, mutually and equally, through each other, and so remain in the intimate state of mixture for any length of time." The measurements of Graham contributed to James Clerk Maxwell deriving, in 1867, the coefficient of diffusion for CO2 in air. The error rate is less than 5%. In 1855, Adolf Fick, the 26-year-old anatomy demonstrator from Zürich, proposed his law of diffusion. He used Graham's research, stating his goal as "the development of a fundamental law, for the operation of diffusion in a single element of space". He asserted a deep analogy between diffusion and conduction of heat or electricity, creating a formalism that is similar to Fourier's law for heat conduction (1822) and Ohm's law for electric current (1827). Robert Boyle demonstrated diffusion in solids in the 17th century by penetration of zinc into a copper coin. Nevertheless, diffusion in solids was not systematically studied until the second part of the 19th century. William Chandler Roberts-Austen, the well-known British metallurgist and former assistant of Thomas Graham, studied systematically solid state diffusion on the example of gold in lead in 1896: "...
My long connection with Graham's researches made it almost a duty to attempt to extend his work on liquid diffusion to metals." In 1858, Rudolf Clausius introduced the concept of the mean free path. In the same year, James Clerk Maxwell developed the first atomistic theory of transport processes in gases. The modern atomistic theory of diffusion and Brownian motion was developed by Albert Einstein, Marian Smoluchowski and Jean-Baptiste Perrin. Ludwig Boltzmann, in the development of the atomistic backgrounds of the macroscopic transport processes, introduced the Boltzmann equation, which has served mathematics and physics with a source of transport process ideas and concerns for more than 140 years. Yakov Frenkel (sometimes Jakov/Jacov Frenkel) proposed, and elaborated in 1926, the idea of diffusion in crystals through local defects (vacancies and interstitial atoms). He concluded that the diffusion process in condensed matter is an ensemble of elementary jumps and quasichemical interactions of particles and defects. He introduced several mechanisms of diffusion and found rate constants from experimental data. Some time later, Carl Wagner and Walter H. Schottky developed Frenkel's ideas about mechanisms of diffusion further. Presently, it is universally recognized that atomic defects are necessary to mediate diffusion in crystals. Henry Eyring, with co-authors, applied his theory of absolute reaction rates to Frenkel's quasichemical model of diffusion. The analogy between reaction kinetics and diffusion leads to various nonlinear versions of Fick's law.

Basic models of diffusion
Each model of diffusion expresses the diffusion flux through concentrations, densities and their derivatives. Flux is a vector J. The transfer of a physical quantity N through a small area ΔS with normal ν per time Δt is ΔN = (J, ν) ΔS Δt + o(ΔS Δt). The dimension of the diffusion flux is [flux] = [quantity]/([time]·[area]).
The diffusing physical quantity may be the number of particles, mass, energy, electric charge, or any other scalar extensive quantity. For its density n, the diffusion equation has the form

∂n/∂t = ∇·(D ∇n) + W,

where W is the intensity of any local source of this quantity (the rate of a chemical reaction, for example). For the diffusion equation, the no-flux boundary conditions can be formulated as (J(x), ν(x)) = 0 on the boundary, where ν is the normal to the boundary at point x.

Fick's law and equations
Fick's first law: the diffusion flux is proportional to the negative of the concentration gradient:

J = −D ∇n.

The corresponding diffusion equation (Fick's second law) is

∂n/∂t = ∇·(D ∇n) = D Δn,

where Δ is the Laplace operator.

Onsager's equations for multicomponent diffusion and thermodiffusion
Fick's law describes diffusion of an admixture in a medium. The concentration of this admixture should be small and the gradient of this concentration should be also small. The driving force of diffusion in Fick's law is the antigradient of concentration, −∇n. In linear non-equilibrium thermodynamics (Onsager, 1931), multicomponent transport is written as

J_i = Σ_j L_ij X_j,

where J_i is the flux of the ith physical quantity (component) and X_j is the jth thermodynamic force. The thermodynamic forces for the transport processes were introduced by Onsager as the space gradients of the derivatives of the entropy density s (he used the term "force" in quotation marks or "driving force"): X_i = grad(∂s/∂z_i), where z_i are the "thermodynamic coordinates". For the heat and mass transfer one can take z_0 = u (the density of internal energy) and z_i as the concentration of the ith component. The corresponding driving forces are the space vectors X_0 = grad(1/T) and X_i = −grad(μ_i/T) (i > 0), where T is the absolute temperature and μ_i is the chemical potential of the ith component. It should be stressed that the separate diffusion equations describe the mixing or mass transport without bulk motion. Therefore, the terms with variation of the total pressure are neglected. It is possible for diffusion of small admixtures and for small gradients.
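Fick's second law can be integrated numerically. A minimal sketch with an explicit finite-difference scheme, assuming a constant diffusion coefficient and no-flux boundaries (all grid parameters are illustrative):

```python
import numpy as np

def diffuse_1d(n0, D, dx, dt, steps):
    """Explicit finite-difference integration of Fick's second law,
    dn/dt = D * d2n/dx2, with no-flux (reflecting) boundaries."""
    # Stability of the explicit scheme requires D*dt/dx**2 <= 0.5.
    assert D * dt / dx**2 <= 0.5, "explicit scheme unstable"
    n = np.asarray(n0, dtype=float).copy()
    for _ in range(steps):
        # Padding with edge values makes the boundary flux zero.
        padded = np.pad(n, 1, mode="edge")
        lap = (padded[:-2] - 2 * n + padded[2:]) / dx**2
        n += dt * D * lap
    return n

# Example: an initial spike spreads out while the total amount is conserved.
n0 = np.zeros(101)
n0[50] = 1.0 / 0.1          # unit amount concentrated in one cell of width dx = 0.1
n = diffuse_1d(n0, D=1e-3, dx=0.1, dt=1.0, steps=500)
```

The no-flux boundaries conserve the total amount of the diffusing quantity, so the integral of n over the domain stays constant while the peak flattens.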
For the linear Onsager equations, we must take the thermodynamic forces in the linear approximation near equilibrium:

X_i = Σ_k (∂²s/∂z_i ∂z_k)|_eq ∇z_k.

The transport equations are

∂z_i/∂t = Σ_k [ −Σ_j L_ij (∂²s/∂z_j ∂z_k)|_eq ] Δz_k.

Here, all the indexes i, j, k = 0, 1, 2, ... are related to the internal energy (0) and the various components. The expression in the square brackets is the matrix of the diffusion (i, k > 0), thermodiffusion (i > 0, k = 0 or k > 0, i = 0) and thermal conductivity (i = k = 0) coefficients. Under isothermal conditions T = constant. The relevant thermodynamic potential is the free energy (or the free entropy). The thermodynamic driving forces for the isothermal diffusion are antigradients of chemical potentials, −(1/T)∇μ_j, and the matrix of diffusion coefficients is D_ik = (1/T) Σ_j L_ij (∂μ_j/∂c_k)|_eq (i, k > 0). There is intrinsic arbitrariness in the definition of the thermodynamic forces and kinetic coefficients because they are not measurable separately and only their combinations can be measured. For example, in the original work of Onsager the thermodynamic forces include an additional multiplier T, whereas in the Course of Theoretical Physics this multiplier is omitted but the sign of the thermodynamic forces is opposite. All these changes are supplemented by the corresponding changes in the coefficients and do not affect the measurable quantities.

Nondiagonal diffusion must be nonlinear
The formalism of linear irreversible thermodynamics (Onsager) generates the systems of linear diffusion equations in the form

∂c_i/∂t = Σ_j D_ij Δc_j.

If the matrix of diffusion coefficients is diagonal, then this system of equations is just a collection of decoupled Fick's equations for the various components. Assume that diffusion is non-diagonal, for example D_12 ≠ 0, and consider the state with c_2 = ... = c_n = 0. At this state, ∂c_2/∂t = D_12 Δc_1. If D_12 Δc_1(x) < 0 at some points, then c_2 becomes negative at these points in a short time. Therefore, linear non-diagonal diffusion does not preserve positivity of concentrations. Non-diagonal equations of multicomponent diffusion must be non-linear.
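The positivity argument can be checked numerically: in a hypothetical two-component system with a constant off-diagonal coefficient, the first component is driven negative wherever the Laplacian of the second component is negative. A sketch with made-up coefficients, using a simple explicit scheme:

```python
import numpy as np

# Linear two-component diffusion: dc1/dt = D11*Lap(c1) + D12*Lap(c2),
# with c1 = 0 initially and a smooth bump in c2. D12 != 0 is off-diagonal.
D11, D12, dx, dt = 1.0, 0.5, 1.0, 0.1

x = np.arange(50)
c1 = np.zeros(50)
c2 = np.exp(-0.5 * ((x - 25) / 3.0) ** 2)   # a smooth bump, Lap(c2) < 0 at its top

def lap(c):
    p = np.pad(c, 1, mode="edge")
    return (p[:-2] - 2 * c + p[2:]) / dx**2

for _ in range(10):
    c1, c2 = (c1 + dt * (D11 * lap(c1) + D12 * lap(c2)),
              c2 + dt * D11 * lap(c2))

# At the top of the bump Lap(c2) < 0, so c1 has been driven negative there,
# even though it started at exactly zero.
```

This is precisely the failure described above: the linear off-diagonal coupling produces a negative "concentration", which is unphysical, so real multicomponent diffusion equations must be nonlinear.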
Einstein's mobility and Teorell formula
Below, to combine in the same formula the chemical potential μ and the mobility, we use for the mobility the notation u. The mobility-based approach was further applied by T. Teorell. In 1935, he studied the diffusion of ions through a membrane. He formulated the essence of his approach in the formula:

the flux is equal to mobility × concentration × force per gram-ion.

The force under isothermal conditions consists of two parts:
- Diffusion force caused by the concentration gradient: −RT ∇(ln(n/n^eq)).
- Electrostatic force caused by the electric potential gradient: −q ∇φ.

Here R is the gas constant, T is the absolute temperature, n is the concentration, the equilibrium concentration is marked by a superscript "eq", q is the charge and φ is the electric potential. The simple but crucial difference between the Teorell formula and the Onsager laws is the concentration factor in the Teorell expression for the flux. In the Einstein–Teorell approach, if for a finite force the concentration tends to zero, then the flux also tends to zero, whereas the Onsager equations violate this simple and physically obvious rule. The general formulation of the Teorell formula for non-perfect systems under isothermal conditions is

J = −u c ∇μ,

where μ is the chemical potential and μ0 is the standard value of the chemical potential. The expression a = exp((μ − μ0)/RT) is the so-called activity. It measures the "effective concentration" of a species in a non-ideal mixture. In this notation, the Teorell formula for the flux has the very simple form

J = −u RT c ∇(ln a).

The standard definition of the activity includes a normalization factor, and for small concentrations a ≈ n/n0, where n0 is the standard concentration. Therefore, this formula for the flux describes the flux of the normalized dimensionless quantity n/n0:

J = −u RT n0 ∇(n/n0).

Teorell formula for multicomponent diffusion
The Teorell formula combined with Onsager's definition of the diffusion force gives

J_i = u_i a_i Σ_j L_ij X_j,

where u_i is the mobility of the ith component, a_i is its activity, L_ij is the matrix of the coefficients, and X_j is the thermodynamic diffusion force, X_j = −∇(μ_j/T).
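The contrast between the Teorell flux and a linear Onsager-style flux can be made concrete: with flux = mobility × concentration × force, the flux vanishes as the concentration tends to zero, while a force-only linear law does not. A sketch with illustrative numbers (all values are made up):

```python
# Teorell: J = u * c * force (mobility x concentration x force per gram-ion).
def teorell_flux(u, c, force):
    return u * c * force

# A linear, Onsager-style law has no concentration factor.
def linear_flux(L, force):
    return L * force

force = 2.0                      # some fixed, finite force
# The Teorell flux shrinks with the concentration and vanishes at c = 0 ...
teorell = [teorell_flux(1.0, c, force) for c in (1.0, 0.1, 0.0)]
# ... while the linear flux stays finite for the same finite force.
linear = linear_flux(1.0, force)
```

This is exactly the "simple and physically obvious rule" noted above: no particles, no flux, regardless of the applied force.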
For the isothermal perfect systems, X_j = −R ∇c_j / c_j. Therefore, the Einstein–Teorell approach gives the following multicomponent generalization of Fick's law for multicomponent diffusion:

J_i = −Σ_j D_ij (c_i/c_j) ∇c_j,

where D_ij is the matrix of coefficients. The Chapman–Enskog formulas for diffusion in gases include exactly the same terms. Earlier, such terms were introduced in the Maxwell–Stefan diffusion equation.

Jumps on the surface and in solids
Diffusion of reagents on the surface of a catalyst may play an important role in heterogeneous catalysis. The model of diffusion in the ideal monolayer is based on the jumps of the reagents on the nearest free places. This model was used for CO on Pt oxidation under low gas pressure. The system includes several reagents on the surface. Their surface concentrations are c_1, ..., c_n. The surface is a lattice of the adsorption places. Each reagent molecule fills a place on the surface. Some of the places are free. The concentration of the free places is z = c_0. The sum of all c_i (including the free places) is constant, the density of adsorption places b. The jump model gives for the diffusion flux of the ith reagent (i = 1, ..., n):

J_i = −D_i [z ∇c_i − c_i ∇z].

The corresponding diffusion equation is:

∂c_i/∂t = −div J_i = D_i div[z ∇c_i − c_i ∇z].

Due to the conservation law, z = b − Σ_{i=1}^n c_i, and we have a system of n diffusion equations. For one component we get Fick's law and linear equations, because ∇(c_1 + z) = 0. For two and more components the equations are nonlinear. If all particles can exchange their positions with their closest neighbours, then a simple generalization gives

J_i = −Σ_j D_ij [c_j ∇c_i − c_i ∇c_j],

where D_ij = D_ji ≥ 0 is a symmetric matrix of coefficients that characterize the intensities of jumps. The free places (vacancies) should be considered as special "particles" with concentration c_0. Various versions of these jump models are also suitable for simple diffusion mechanisms in solids.

Diffusion in porous media
For diffusion in porous media the basic equations are:

J = −D ∇n^m,  ∂n/∂t = D Δn^m,

where D is the diffusion coefficient, n is the concentration, and m > 0 (usually m > 1; the case m = 1 corresponds to Fick's law).
For diffusion of gases in porous media this equation is the formalisation of Darcy's law: the velocity of a gas in the porous media is

v = −(k/μ) ∇p,

where k is the permeability of the medium, μ is the viscosity and p is the pressure. For underground water infiltration, the Boussinesq approximation gives the same equation with m = 2. For plasma with a high level of radiation, the Zeldovich–Raizer equation gives m > 4 for the heat transfer.

Diffusion in physics

Elementary theory of diffusion coefficient in gases
The diffusion coefficient D is the coefficient in Fick's first law

J = −D ∂n/∂x,

where J is the diffusion flux (amount of substance) per unit area per unit time, n (for ideal mixtures) is the concentration, and x is the position [length]. Let us consider two gases with molecules of the same diameter d and mass m (self-diffusion). In this case, the elementary mean free path theory of diffusion gives for the diffusion coefficient

D = (1/3) ℓ v_T = (2/3) √(k_B³/(π³ m)) · T^{3/2}/(P d²),

where ℓ is the mean free path and v_T is the mean thermal speed. We can see that the diffusion coefficient in the mean free path approximation grows with T as T^{3/2} and decreases with P as 1/P. If we use for P the ideal gas law P = R n T with the total concentration n, then we can see that for a given concentration n the diffusion coefficient grows with T as T^{1/2}, and for a given temperature it decreases with the total concentration as 1/n. For two different gases, A and B, with molecular masses m_A, m_B and molecular diameters d_A, d_B, the mean free path estimate of the diffusion coefficient of A in B and B in A is:

D_AB = (2/3) √(k_B³/π³) √(1/(2m_A) + 1/(2m_B)) · T^{3/2} / (P ((d_A + d_B)/2)²).

The theory of diffusion in gases based on Boltzmann's equation
In Boltzmann's kinetics of the mixture of gases, each gas has its own distribution function f_i(x, c, t), where t is the time moment, x is position and c is the velocity of a molecule of the ith component of the mixture. Each component has its mean velocity C_i(x, t). If the velocities C_i do not coincide then there exists diffusion. The distribution functions determine the densities of the conserved quantities:
- individual concentrations of particles, n_i = ∫ f_i(x, c, t) dc (particles per volume),
- density of momentum, Σ_i m_i n_i C_i(x, t) (m_i is the ith particle mass),
- density of kinetic energy.
The kinetic temperature T and pressure P are defined in 3D space through the density of thermal kinetic energy, with P = n k_B T, where n = Σ_i n_i is the total density.
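The mean-free-path estimate predicts D ∝ T^{3/2}/P at fixed pressure, and D ∝ T^{1/2}/n at fixed total concentration. The scaling (not the prefactor, which depends on molecular mass and diameter) can be checked with a short sketch; the prefactor A = 1 here is purely illustrative:

```python
# Mean-free-path scaling of the self-diffusion coefficient: D = A * T**1.5 / P.
def D_gas(T, P, A=1.0):
    return A * T**1.5 / P

# At fixed total concentration n, the ideal gas law P = R*n*T
# turns the same estimate into D ~ T**0.5 / n.
def D_fixed_n(T, n, R=8.314, A=1.0):
    return D_gas(T, R * n * T, A)

ratio_T_fixed_P = D_gas(600.0, 1.0) / D_gas(300.0, 1.0)          # 2**1.5
ratio_P = D_gas(300.0, 2.0) / D_gas(300.0, 1.0)                  # 1/2
ratio_T_fixed_n = D_fixed_n(600.0, 1.0) / D_fixed_n(300.0, 1.0)  # 2**0.5
```

Doubling the temperature at fixed pressure multiplies D by 2^{3/2} ≈ 2.83, doubling the pressure halves it, and doubling the temperature at fixed concentration multiplies D by only √2, as stated in the text.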
For two gases, the difference between velocities, C_1 − C_2, is given by an expression proportional to the diffusion coefficient D_12 and to the sum of four driving terms (in Chapman and Cowling's notation, a concentration-, pressure- and force-driven part plus a thermal-diffusion term k_T ∇ln T, where F_i is the force applied to the molecules of the ith component and k_T is the thermodiffusion ratio). The coefficient D_12 is positive. This is the diffusion coefficient. The four terms in the formula for C_1 − C_2 describe four main effects in the diffusion of gases:
- the term with ∇(n_1/n) describes the flux of the first component from the areas with a high ratio n_1/n to the areas with lower values of this ratio (and, analogously, the flux of the second component from high n_2/n to low n_2/n, because n_2/n = 1 − n_1/n);
- the term with the pressure gradient describes the flux of the heavier molecules to the areas with higher pressure and of the lighter molecules to the areas with lower pressure; this is barodiffusion;
- the term with F_1 − F_2 describes diffusion caused by the difference of the forces applied to molecules of different types. For example, in the Earth's gravitational field the heavier molecules should go down, or in an electric field the charged molecules should move, until this effect is equilibrated by the sum of the other terms. This effect should not be confused with barodiffusion caused by the pressure gradient;
- the term with ∇T describes thermodiffusion, the diffusion flux caused by the temperature gradient.
All these effects are called diffusion because they describe the differences between velocities of different components in the mixture. Therefore, these effects cannot be described as a bulk transport and differ from advection or convection. In the first approximation,
- D_12 = (3/(2n(d_1 + d_2)²)) [k_B T (m_1 + m_2)/(2π m_1 m_2)]^{1/2} for rigid spheres;
- for a repulsing force proportional to r^{−ν}, the temperature exponent changes from 1/2 to 1/2 + 2/(ν − 1), with a numerical constant A_1(ν).
The number A_1(ν) is defined by quadratures (formulas (3.7), (3.9), Ch. 10 of the classical Chapman and Cowling book). We can see that the dependence on T for the rigid spheres is the same as for the simple mean free path theory, but for the power repulsion laws the exponent is different. Dependence on a total concentration n for a given temperature has always the same character, 1/n. In applications to gas dynamics, the diffusion flux and the bulk flow should be joined in one system of transport equations.
The bulk flow describes the mass transfer. Its velocity V is the mass average velocity. It is defined through the momentum density and the mass concentrations:

V = Σ_i ρ_i C_i / ρ,

where ρ_i = m_i n_i is the mass concentration of the ith species and ρ = Σ_i ρ_i is the mass density. By definition, the diffusion velocity of the ith component is v_i = C_i − V, with Σ_i ρ_i v_i = 0. The mass transfer of the ith component is described by the continuity equation

∂ρ_i/∂t + ∇·(ρ_i V) + ∇·(ρ_i v_i) = W_i,

where W_i is the net mass production rate in chemical reactions, Σ_i W_i = 0. In these equations, the term ∇·(ρ_i V) describes advection of the ith component and the term ∇·(ρ_i v_i) represents diffusion of this component. In 1948, Wendell H. Furry proposed to use the form of the diffusion rates found in kinetic theory as a framework for the new phenomenological approach to diffusion in gases. This approach was developed further by F.A. Williams and S.H. Lam. For the diffusion velocities in multicomponent gases (N components) they used expressions of the kinetic-theory form, combining a diffusion coefficient matrix D_ij, a thermal diffusion coefficient D_i^(T) multiplying ∇(ln T), the body force f_i per unit mass acting on the ith species, the partial pressure fraction X_i of the ith species (with P_i the partial pressure), and the mass fraction Y_i of the ith species.

Diffusion of electrons in solids
When the density of electrons in solids is not in equilibrium, diffusion of electrons occurs. For example, when a bias is applied to two ends of a chunk of semiconductor, or a light shines on one end, electrons diffuse from high density regions (center) to low density regions (two ends), forming a gradient of electron density. This process generates a current, referred to as diffusion current. Diffusion current can also be described by Fick's first law

J = −D ∂n/∂x,

where J is the diffusion current density (amount of substance) per unit area per unit time, n (for ideal mixtures) is the electron density, and x is the position [length].

Diffusion in geophysics
Analytical and numerical models that solve the diffusion equation for different initial and boundary conditions have been popular for studying a wide variety of changes to the Earth's surface.
Diffusion has been used extensively in erosion studies of hillslope retreat, bluff erosion, fault scarp degradation, wave-cut terrace/shoreline retreat, alluvial channel incision, coastal shelf retreat, and delta progradation. Although the Earth's surface is not literally diffusing in many of these cases, the process of diffusion effectively mimics the holistic changes that occur over decades to millennia. Diffusion models may also be used to solve inverse boundary value problems in which some information about the depositional environment is known from paleoenvironmental reconstruction and the diffusion equation is used to figure out the sediment influx and time series of landform changes.

Random walk (random motion)
One common misconception is that individual atoms, ions or molecules move randomly, which they do not. A single ion may appear to have a "random" motion, but this motion is not random, as it is the result of "collisions" with other ions. As such, the movement of a single atom, ion, or molecule within a mixture just appears random when viewed in isolation. The movement of a substance within a mixture by "random walk" is governed by the kinetic energy within the system, which can be affected by changes in concentration, pressure or temperature.

Separation of diffusion from convection in gases
While Brownian motion of multi-molecular mesoscopic particles (like pollen grains studied by Brown) is observable under an optical microscope, molecular diffusion can only be probed in carefully controlled experimental conditions. Since Graham's experiments, it is well known that avoiding convection is necessary, and this may be a non-trivial task. Under normal conditions, molecular diffusion dominates only on length scales between nanometer and millimeter. On larger length scales, transport in liquids and gases is normally due to another transport phenomenon, convection, and to study diffusion on the larger scale, special efforts are needed.
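The random-walk picture described above can be illustrated with a simple simulation: for an unbiased lattice walk, the mean squared displacement grows linearly with the number of steps, which is the signature of diffusive (rather than ballistic) transport. A sketch (walker count and step numbers are illustrative):

```python
import random

random.seed(0)

def msd(steps, walkers=20000):
    """Mean squared displacement of unbiased 1D random walkers (unit steps)."""
    total = 0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += random.choice((-1, 1))   # one unbiased step left or right
        total += x * x
    return total / walkers

# For unit steps the exact expectation is <x^2> = steps, i.e. linear in time;
# the sample averages should land close to 25 and 100 respectively.
m25, m100 = msd(25), msd(100)
```

A ballistic process would instead give displacement proportional to the number of steps, hence a mean squared displacement growing quadratically; the linear growth seen here is what defines diffusive spreading.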
Therefore, some often cited examples of diffusion are wrong: if cologne is sprayed in one place, it can soon be smelled in the entire room, but a simple calculation shows that this can't be due to diffusion. Convective motion persists in the room because of the temperature inhomogeneity. If ink is dropped in water, one usually observes an inhomogeneous evolution of the spatial distribution, which clearly indicates convection (caused, in particular, by this dropping). In contrast, heat conduction through solid media is an everyday occurrence (e.g. a metal spoon partly immersed in a hot liquid). This explains why the diffusion of heat was explained mathematically before the diffusion of mass.

Other types of diffusion
- Anisotropic diffusion, also known as the Perona–Malik equation, enhances high gradients
- Anomalous diffusion, in porous medium
- Atomic diffusion, in solids
- Eddy diffusion, in coarse-grained description of turbulent flow
- Effusion of a gas through small holes
- Electronic diffusion, resulting in an electric current called the diffusion current
- Facilitated diffusion, present in some organisms
- Gaseous diffusion, used for isotope separation
- Heat equation, diffusion of thermal energy
- Itō diffusion, mathematisation of Brownian motion, continuous stochastic process
- Kinesis (biology), an animal's non-directional movement activity in response to a stimulus
- Knudsen diffusion of gas in long pores with frequent wall collisions
- Levy flights and walks
- Momentum diffusion, e.g.
the diffusion of the hydrodynamic velocity field
- Photon diffusion
- Plasma diffusion
- Random walk, model for diffusion
- Reverse diffusion, against the concentration gradient, in phase separation
- Rotational diffusion, random reorientations of molecules
- Surface diffusion, diffusion of adparticles on a surface
- Turbulent diffusion, transport of mass, heat, or momentum within a turbulent fluid
- Diffusion-limited aggregation
- Darken's equations
- False diffusion
- Isobaric counterdiffusion

References
- J.G. Kirkwood, R.L. Baldwin, P.J. Dunlop, L.J. Gosting, G. Kegeles (1960). Flow equations and frames of reference for isothermal diffusion in liquids. The Journal of Chemical Physics 33(5):1505–13.
- J. Philibert (2005). One and a half century of diffusion: Fick, Einstein, before and beyond. Diffusion Fundamentals, 2, 1.1–1.10.
- S.R. De Groot, P. Mazur (1962). Non-equilibrium Thermodynamics. North-Holland, Amsterdam.
- A. Einstein (1905). "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen". Ann. Phys. 17 (8): 549–60. doi:10.1002/andp.19053220806.
- Diffusion Processes, Thomas Graham Symposium, ed. J.N. Sherwood, A.V. Chadwick, W.M. Muir, F.L. Swinton, Gordon and Breach, London, 1971.
- L.W. Barr (1997). In: Diffusion in Materials, DIMAT 96, ed. H. Mehrer, Chr. Herzig, N.A. Stolwijk, H. Bracht, Scitec Publications, Vol. 1, pp. 1–9.
- H. Mehrer; N.A. Stolwijk (2009). "Heroes and Highlights in the History of Diffusion". Diffusion Fundamentals. 11 (1): 1–32.
- S. Chapman, T.G. Cowling (1970). The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, Cambridge University Press (3rd edition), ISBN 052140844X.
- J.F. Kincaid; H. Eyring; A.E. Stearn (1941).
"The theory of absolute reaction rates and its application to viscosity and diffusion in the liquid state". Chem. Rev. 28 (2): 301–65. doi:10.1021/cr60090a005.
- A.N. Gorban, H.P. Sargsyan and H.A. Wahab (2011). "Quasichemical Models of Multicomponent Nonlinear Diffusion". Mathematical Modelling of Natural Phenomena. 6 (5): 184–262. doi:10.1051/mmnp/20116509.
- Onsager, L. (1931). "Reciprocal Relations in Irreversible Processes. I". Physical Review. 37 (4): 405–26. doi:10.1103/PhysRev.37.405.
- L.D. Landau, E.M. Lifshitz (1980). Statistical Physics. Vol. 5 (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-3372-7.
- S. Bromberg, K.A. Dill (2002). Molecular Driving Forces: Statistical Thermodynamics in Chemistry and Biology, Garland Science, ISBN 0815320515.
- T. Teorell (1935). "Studies on the "Diffusion Effect" upon Ionic Distribution. Some Theoretical Considerations". Proceedings of the National Academy of Sciences of the United States of America. 21 (3): 152–61. doi:10.1073/pnas.21.3.152. PMID 16587950.
- J.L. Vázquez (2006). The Porous Medium Equation. Mathematical Theory, Oxford Univ. Press, ISBN 0198569033.
- S.H. Lam (2006). "Multicomponent diffusion revisited". Physics of Fluids. 18 (7): 073101. doi:10.1063/1.2221312.
- Pasternack, Gregory B.; Brush, Grace S.; Hilgartner, William B. (2001-04-01). "Impact of historic land-use change on sediment delivery to a Chesapeake Bay subestuarine delta". Earth Surface Processes and Landforms. 26 (4): 409–27. doi:10.1002/esp.189. ISSN 1096-9837.
- Gregory B. Pasternack. "Watershed Hydrology, Geomorphology, and Ecohydraulics: TFD Modeling". pasternack.ucdavis.edu. Retrieved 2017-06-12.
- D. Ben-Avraham and S. Havlin (2000). Diffusion and Reactions in Fractals and Disordered Systems. Cambridge University Press. ISBN 0521622786.
- Weiss, G. (1994).
Aspects and Appwications of de Random Wawk. Norf-Howwand. ISBN 0444816062.
If you are looking for articles on the C# programming language, then this page is the right place for you. C# is a widely used programming language in the .NET framework. Apart from the .NET framework, you can also use C# in the Unity3D game engine to edit your scripts. This page provides a series of articles to help you learn the concepts of C#. - Data Type Conversion :- This article will help you to understand implicit and explicit conversion in C#. It will also give you insight into the is and as operators in C#. - Boxing and Unboxing :- This article explains boxing and unboxing concepts in C#. - Nullable Value Types :- This article explains nullable value types in C# and the null-coalescing operator. - String and StringBuilder :- This article explains the difference between string and StringBuilder in C#. - Indexers in C# :- This article will help you to understand the basic concepts of indexers in C#. - Delegates in C# :- This article explains the basic concepts of delegates in C#. It also explains multicast delegates. - Asynchronous method calling in C# :- This article explains how you can call a method asynchronously in C#. - Anonymous Methods :- This article explains what an anonymous method is and where it is used. - Threading :- This article explains the basic concepts of threading in C#. - Multithreading :- This article explains how you can create threads in C# and call a method asynchronously with a newly created thread. - Thread Pooling :- This article explains the basics of the thread pool. - Task Parallelism :- This article gives an idea about how tasks can be used to achieve parallelism. - Tuples in C# :- This article explains tuples in C#, including C# 7.0.
Astronomy News -- ScienceDaily Astronomy news. New! Earth-like extrasolar planet found; double helix nebula; supermassive black holes, astronomy articles, astronomy pictures. Updated daily. Traveling to the sun: Why won't Parker Solar Probe melt? Thursday, 19.07.2018, 22:50:26 This summer, NASA's Parker Solar Probe will launch to travel closer to the Sun, deeper into the solar atmosphere, than any mission before it. Cutting-edge technology and engineering will help it beat the heat. CALET succeeds in direct measurements of cosmic-ray electron spectrum up to 4.8 TeV Thursday, 19.07.2018, 15:44:22 Researchers have succeeded in the direct, high-precision measurements of cosmic-ray electron spectrum up to 4.8 TeV, based on observations with the Calorimetric Electron Telescope (CALET). Observations by CALET are expected to reveal the mysteries of cosmic rays and the nature of dark matter in the future. NASA's new mini satellite will study Milky Way's halo Wednesday, 18.07.2018, 23:03:15 A new mission called HaloSat will help scientists search for the universe's missing matter by studying X-rays from hot gas surrounding the Milky Way galaxy. Solar corona is more structured, dynamic than previously thought Wednesday, 18.07.2018, 18:48:11 Scientists have discovered never-before-detected, fine-grained structures in the Sun's outer atmosphere, or corona. The team imaged this critical region in detail using sophisticated software techniques and longer exposures from the COR-2 camera on board NASA's Solar and Terrestrial Relations Observatory-A (STEREO-A). X-ray data may be first evidence of a star devouring a planet Wednesday, 18.07.2018, 17:33:32 An analysis of X-ray data suggests the first observations of a star swallowing a planet, and may also explain the star's mysterious dimming.
Planck: Final data from the mission lends support to the standard cosmological model Wednesday, 18.07.2018, 16:47:57 With its increased reliability and its data on the polarization of relic radiation, the Planck mission corroborates the standard cosmological model with unrivaled precision for these parameters, even if some anomalies still remain. Finding a planet with a 10-year orbit in just a few months Wednesday, 18.07.2018, 15:25:07 To discover the presence of a planet around stars, astronomers wait until it has completed three orbits. However, this effective technique has its drawbacks since it cannot confirm the presence of planets at relatively long periods. To overcome this obstacle, astronomers have developed a method that makes it possible to ensure the presence of a planet in a few months, even if it takes 10 years to circle its star. Supersharp images from new VLT adaptive optics Wednesday, 18.07.2018, 14:22:20 ESO's Very Large Telescope (VLT) has achieved first light with a new adaptive optics mode called laser tomography -- and has captured remarkably sharp test images of the planet Neptune and other objects. The MUSE instrument, working with the GALACSI adaptive optics module, can now use this new technique to correct for turbulence at different altitudes in the atmosphere. It is now possible to capture images from the ground at visible wavelengths that are sharper than those from the NASA/ESA Hubble Space Telescope. A dozen new moons of Jupiter discovered, including one 'oddball' Tuesday, 17.07.2018, 16:12:56 Twelve new moons orbiting Jupiter have been found -- 11 'normal' outer moons, and one that astronomers are calling an 'oddball.' Astronomers first spotted the moons in the spring of 2017 while they were looking for very distant solar system objects as part of the hunt for a possible massive planet far beyond Pluto.
Astronomers find a famous exoplanet's doppelganger Tuesday, 17.07.2018, 15:48:07 A new planet has been imaged, and it appears nearly identical to one of the best-studied gas-giant planets. But this doppelganger differs in one very important way: its origin. One object has long been known: the 13-Jupiter-mass planet beta Pictoris b, one of the first planets discovered by direct imaging, back in 2009. The new object, dubbed 2MASS 0249 c, has the same mass, brightness, and spectrum as beta Pictoris b. Disruption tolerant networking to demonstrate Internet in space Monday, 16.07.2018, 17:45:26 The interplanetary Internet may soon become a reality. NASA is about to demonstrate Delay/Disruption Tolerant Networking, or DTN -- a technology that sends information through space and ground networks to its destination. How might dark matter interact with ordinary matter? Friday, 13.07.2018, 15:35:45 Scientists have imposed conditions on how dark matter may interact with ordinary matter. In the search for direct detection of dark matter, the experimental focus has been on WIMPs, or weakly interacting massive particles, the hypothetical particles thought to make up dark matter. But the research team invokes a different theory to challenge the WIMP paradigm: the self-interacting dark matter model, or SIDM. VERITAS supplies critical piece to neutrino discovery puzzle Thursday, 12.07.2018, 17:45:44 The VERITAS array has confirmed the detection of gamma rays from the vicinity of a supermassive black hole. While these detections are relatively common for VERITAS, this black hole is potentially the first known astrophysical source of high-energy cosmic neutrinos, a type of ghostly subatomic particle.
Breakthrough in the search for cosmic particle accelerators Thursday, 12.07.2018, 17:45:25 In a global observation campaign, scientists have for the first time located a source of high-energy cosmic neutrinos, ghostly elementary particles that travel billions of light years through the universe, flying unaffected through stars, planets and entire galaxies. Hubble and Gaia team up to fuel cosmic conundrum Thursday, 12.07.2018, 17:44:48 Using the power and synergy of two space telescopes, astronomers have made the most precise measurement to date of the universe's expansion rate. Could gravitational waves reveal how fast our universe is expanding? Thursday, 12.07.2018, 17:44:34 A new study finds black holes and neutron stars are key to measuring our expanding universe. Centenary of cosmological constant lambda Wednesday, 11.07.2018, 16:57:07 Physicists are now celebrating the 100th anniversary of the cosmological constant. On this occasion, two recent articles highlight its role in modern physics and cosmology. Before becoming widely accepted, the cosmological constant had to undergo many discussions about its necessity, its value and its physical essence. Today, there are still unresolved problems in understanding the deep physical nature of the phenomena associated with the cosmological constant. Colorful celestial landscape Wednesday, 11.07.2018, 15:31:37 New observations show the star cluster RCW 38 in all its glory. This image was taken during testing of the HAWK-I camera with the GRAAL adaptive optics system. It shows RCW 38 and its surrounding clouds of brightly glowing gas in exquisite detail, with dark tendrils of dust threading through the bright core of this young gathering of stars. Rocky planet neighbor looks familiar, but is not Earth's twin Tuesday, 10.07.2018, 18:28:24 Last autumn, the world was excited by the discovery of an exoplanet called Ross 128 b, which is just 11 light years away from Earth.
New work has for the first time determined detailed chemical abundances of the planet's host star, Ross 128. Plasma-spewing quasar shines light on universe's youth, early galaxy formation Monday, 09.07.2018, 16:11:47 Astronomers found a quasar with the brightest radio emission ever observed in the early universe, because it spews out a jet of extremely fast-moving material. Scientists have revealed in unprecedented detail the jet shooting out of a quasar that formed within the universe's first billion years of existence. The Gaia Sausage: The major collision that changed the Milky Way galaxy Wednesday, 04.07.2018, 17:20:33 Astronomers have discovered an ancient and dramatic head-on collision between the Milky Way and a smaller object, dubbed the 'Sausage' galaxy. The cosmic crash was a defining event in the early history of the Milky Way and reshaped the structure of our galaxy, fashioning both its inner bulge and its outer halo, the astronomers report in a series of new papers. Superstar Eta Carinae shoots cosmic rays Tuesday, 03.07.2018, 17:28:24 NASA's NuSTAR space telescope shows that Eta Carinae, the most luminous and massive stellar system within 10,000 light-years, is accelerating cosmic rays. Milky Way-type dust particles discovered in a galaxy 11 billion light years from Earth Tuesday, 03.07.2018, 16:59:53 An international research team has found the same type of interstellar dust that we know from the Milky Way in a distant galaxy 11 billion light years from Earth. This type of dust has been found to be rare in other galaxies, and the new discovery plays an important role in understanding what it takes for this particular type of interstellar dust to be formed. Molecular oxygen in comet's atmosphere not created on its surface Tuesday, 03.07.2018, 16:54:39 Scientists have found that molecular oxygen around comet 67P is not produced on its surface, as some suggested, but may be from its body.
Beam of light from first confirmed neutron star merger emerges from behind the sun Monday, 02.07.2018, 17:11:25 Astronomers had to wait over 100 days for the sight of the first confirmed neutron star merger to reemerge from behind the glare of the sun. First confirmed image of newborn planet caught with ESO's VLT Monday, 02.07.2018, 15:40:40 SPHERE, a planet-hunting instrument on ESO's Very Large Telescope, has captured the first confirmed image of a planet caught in the act of forming in the dusty disc surrounding a young star. The young planet is carving a path through the primordial disc of gas and dust around the very young star PDS 70. The data suggest that the planet's atmosphere is cloudy. Astronomers observe the magnetic field of the remains of supernova 1987A Friday, 29.06.2018, 16:26:17 For the first time, astronomers have directly observed the magnetism in one of astronomy's most studied objects: the remains of Supernova 1987A (SN 1987A), a dying star that appeared in our skies over thirty years ago. In addition to being an impressive observational achievement, the detection provides insight into the early stages of the evolution of supernova remnants and the cosmic magnetism within them. More clues that Earth-like exoplanets are indeed Earth-like Thursday, 28.06.2018, 21:17:08 Researchers suggest that two Earth-like exoplanets (Kepler-186f and 62f) have very stable axial tilts, much like the Earth, making it likely that each has regular seasons and a stable climate. Mars dust storm may lead to new weather discoveries Thursday, 28.06.2018, 18:44:12 Mars is experiencing an estimated 15.8-million-square-mile dust storm, roughly the size of North and South America. This storm may not be good news for NASA's solar-powered Opportunity rover, but one professor sees this as a chance to learn more about Martian weather.
Scientists develop new strategies to discover life beyond Earth Thursday, 28.06.2018, 16:51:13 Scientists now think we may be able to detect signs of life on planets beyond our solar system in the next few decades, but to do so new tools and techniques will be required. Researchers from around the world just produced a roadmap to develop the techniques that may finally answer the question of whether we are alone in the Universe. Meteorite 'Black Beauty' expands window for when life might have existed on Mars Thursday, 28.06.2018, 16:50:25 New evidence for a rapid crystallization and crust formation on Mars has just been published. The study, based on the analysis of the rare Mars meteorite Black Beauty, significantly expands the window for when life might have existed on Mars. Scientists find evidence of complex organic molecules from Enceladus Wednesday, 27.06.2018, 22:04:56 Using mass spectrometry data from NASA's Cassini spacecraft, scientists found that large, carbon-rich organic molecules are ejected from cracks in the icy surface of Saturn's moon Enceladus. Scientists think chemical reactions between the moon's rocky core and warm water from its subsurface ocean are linked to these complex molecules. Milky Way is rich in grease-like molecules Wednesday, 27.06.2018, 22:03:57 Our galaxy is rich in grease-like molecules, according to new research. Astronomers used a laboratory to manufacture material with the same properties as interstellar dust and used their results to estimate the amount of 'space grease' found in the Milky Way. Why bacteria survive in space Wednesday, 27.06.2018, 22:02:49 Earth germs could be contaminating other planets. Despite extreme decontamination efforts, bacteria from Earth still manage to find their way into outer space aboard spacecraft. Biologists are working to better understand how and why some spores elude decontamination.
`Oumuamua gets a boost Wednesday, 27.06.2018, 22:02:40 `Oumuamua, the first interstellar object discovered in the Solar System, is moving away from the Sun faster than expected. This anomalous behavior was detected by a worldwide astronomical collaboration. The new results suggest that `Oumuamua is most likely an interstellar comet and not an asteroid. A galactic test will clarify the existence of dark matter Tuesday, 26.06.2018, 01:27:45 Researchers used sophisticated computer simulations to devise a test that could answer a burning question in astrophysics: is there really dark matter? Or does Newton's gravitational law need to be modified? The new study shows that the answer is hidden in the motion of the stars within small satellite galaxies swirling around the Milky Way. Recipe for star clusters Tuesday, 26.06.2018, 01:26:59 Clusters of stars across the vast reaches of time and space of the entire universe were all created the same way, researchers have determined. NASA's James Webb Space Telescope to target Jupiter's Great Red Spot Tuesday, 26.06.2018, 01:26:54 NASA's James Webb Space Telescope, the most ambitious and complex space observatory ever built, will use its unparalleled infrared capabilities to study Jupiter's Great Red Spot, shedding new light on the enigmatic storm and building upon data returned from NASA's Hubble Space Telescope and other observatories. Scientists developing guidebook for finding life beyond Earth Tuesday, 26.06.2018, 01:26:48 Some of the leading experts in the field, including a UC Riverside team of researchers, have written a major series of review papers on the past, present, and future of the search for life on other planets. Einstein proved right in another galaxy Thursday, 21.06.2018, 20:10:43 Astronomers have made the most precise test of gravity outside our own solar system.
By combining data taken with NASA's Hubble Space Telescope and the European Southern Observatory's Very Large Telescope, the researchers show that gravity in this galaxy behaves as predicted by Albert Einstein's general theory of relativity, confirming the theory's validity on galactic scales. Nearly 80 exoplanet candidates identified in record time Thursday, 21.06.2018, 18:19:01 Scientists have analyzed data from K2, the follow-up mission to NASA's Kepler Space Telescope, and have discovered a trove of possible exoplanets amid some 50,000 stars. The scientists report the discovery of nearly 80 new planetary candidates, including a particular standout: a likely planet that orbits the star HD 73344, which would be the brightest planet host ever discovered by the K2 mission. Old star clusters could have been the birthplace of supermassive stars Thursday, 21.06.2018, 16:10:34 Astrophysicists may have found a solution to a problem that has perplexed scientists for more than 50 years: why are the stars in globular clusters made of material different to other stars found in the Milky Way? Martian dust storm grows global: Curiosity captures photos of thickening haze Wednesday, 20.06.2018, 23:09:56 A storm of tiny dust particles has engulfed much of Mars over the last two weeks and prompted NASA's Opportunity rover to suspend science operations. But across the planet, NASA's Curiosity rover, which has been studying Martian soil at Gale Crater, is expected to remain largely unaffected by the dust. The Martian dust storm has grown in size and is now officially a 'planet-encircling' (or 'global') dust event. Last of universe's missing ordinary matter Wednesday, 20.06.2018, 21:00:53 Researchers have helped to find the last reservoir of ordinary matter hiding in the universe.
Surgery in space Wednesday, 20.06.2018, 15:48:01 With renewed public interest in manned space exploration comes the potential need to diagnose and treat medical issues encountered by future space travelers. Best evidence of rare black hole captured Monday, 18.06.2018, 20:18:34 Scientists have been able to prove the existence of small black holes and those that are super-massive, but the existence of an elusive type of black hole, known as intermediate-mass black holes (IMBHs), is hotly debated. New research shows the strongest evidence to date that this middle-of-the-road black hole exists, by serendipitously capturing one in action devouring an encountering star. Hunting molecules to find new planets Monday, 18.06.2018, 17:30:30 It has been impossible to obtain images of an exoplanet, so dazzling is the light of its star. However, astronomers have the idea of detecting molecules that are present in the planet's atmosphere in order to make it visible, provided that these same molecules are absent from its star. Thanks to this innovative technique, the device is sensitive to the selected molecules, making the star invisible and allowing the astronomers to observe the planet. Explosive volcanoes spawned mysterious Martian rock formation Monday, 18.06.2018, 16:25:53 Explosive volcanic eruptions that shot jets of hot ash, rock and gas skyward are the likely source of a mysterious Martian rock formation, a new study finds. The new finding could add to scientists' understanding of Mars's interior and its past potential for habitability, according to the study's authors. Astronomers see distant eruption as black hole destroys star Friday, 15.06.2018, 03:38:19 Scientists get first direct images showing a fast-moving jet of particles ejected as a supermassive black hole at the core of a galaxy shreds a passing star.
Dust clouds can explain puzzling features of active galactic nuclei Friday, 15.06.2018, 03:36:15 Many large galaxies have a bright central region called an active galactic nucleus, powered by matter spiraling into a supermassive black hole. Gas clouds around the AGN emit light at characteristic wavelengths, but the complexity and variability of these emissions has been a longstanding puzzle. A new study explains these and other puzzling features of active galactic nuclei as the result of small clouds of dust that can partially obscure the innermost regions of AGNs. Distant moons may harbor life Friday, 15.06.2018, 03:36:00 Researchers have identified more than 100 giant planets that potentially host moons capable of supporting life. Their work will guide the design of future telescopes that can detect these potential moons and look for tell-tale signs of life, called biosignatures, in their atmospheres. Short gamma-ray bursts do follow binary neutron star mergers Thursday, 14.06.2018, 07:51:28 Researchers have confirmed that last fall's union of two neutron stars did in fact cause a short gamma-ray burst. The true power of the solar wind Tuesday, 12.06.2018, 16:57:53 The planets and moons of our solar system are continuously being bombarded by particles from the Sun. On the Moon or on Mercury, the uppermost layer of rock is gradually eroded by the impact of solar particles. New results show that previous models of this process are incomplete. The effects of solar wind bombardment are much more drastic than previously thought. Diamond dust shimmering around distant stars Monday, 11.06.2018, 19:38:02 Some of the tiniest diamonds in the universe -- bits of crystalline carbon hundreds of thousands of times smaller than a grain of sand -- have been detected swirling around three infant star systems in the Milky Way.
These microscopic gemstones are neither rare nor precious; they are, however, exciting for astronomers, who identified them as the source of a mysterious cosmic microwave 'glow' emanating from several protoplanetary disks in our galaxy. Mineralogy on Mars points to a cold and icy ancient climate Friday, 08.06.2018, 06:31:55 The climate throughout Mars' early history has long been debated -- was the Red Planet warm and wet, or cold and icy? New research published in Icarus provides evidence for the latter. NASA finds ancient organic material, mysterious methane on Mars Thursday, 07.06.2018, 20:29:16 NASA's Curiosity rover has found new evidence preserved in rocks on Mars that suggests the planet could have supported ancient life, as well as new evidence in the Martian atmosphere that relates to the search for current life on the Red Planet. The disc of the Milky Way is bigger than we thought Thursday, 07.06.2018, 17:27:46 A team of researchers suggests that if we could travel at the speed of light it would take us 200,000 years to cross the disc of our Galaxy. One of the most massive neutron stars ever discovered Thursday, 07.06.2018, 17:27:37 Using a pioneering method, researchers have found a neutron star of about 2.3 solar masses -- one of the most massive ever detected. Multiple alkali metals in unique exoplanet Thursday, 07.06.2018, 17:27:35 Scientists have observed a rare gaseous planet, with partly clear skies, and strong signatures of alkali metals in its atmosphere. How solar prominences vibrate Thursday, 07.06.2018, 17:27:30 Researchers have cataloged around 200 oscillations of solar prominences during the first half of 2014. Their detection has been possible thanks to the GONG network of telescopes, one of which is located in the Teide Observatory.
The authors of a paper in last week's Proceedings of the Royal Society of London Section B, who say their 7.9 mm-long fish from a peat swamp in Southeast Asia is the smallest fish and vertebrate known, have failed to note work published last fall that describes sexually mature male anglerfishes measuring 6.2 mm to 7.4 mm in length. The 6.2 mm specimen is by far the smallest of any vertebrate, beating the recent claim by a full 1.7 mm, according to Ted Pietsch, a University of Washington professor of aquatic and fisheries sciences, who has described the specimen. Pietsch includes information about the tiny specimen, collected in the Philippines, in a review of what's known about reproduction in anglerfishes, so called because they have bioluminescent lures growing from their heads that they wave or cause to blink in order to attract prey to their mouths. The work appeared in the September issue of Ichthyological Research, published by the Ichthyological Society of Japan. Sandra Hines | EurekAlert!
The photo shows a sky region imaged with the multi-mode FORS2 instrument on the 8.2-m VLT YEPUN telescope, in which a number of galaxies in the redshift range from 4.8 to 5.8 were discovered. They are accordingly located at a distance of about 12,600 million light-years from the Earth. Nowadays, the Universe is pervaded by energetic ultraviolet radiation, produced by quasars and hot stars. The short-wavelength photons liberate electrons from the hydrogen atoms that make up the diffuse intergalactic medium, and the latter is therefore almost completely ionised. There was, however, an early epoch in the history of the Universe when this was not so. The Universe emanated from a hot and extremely dense initial state, the so-called Big Bang. Astronomers now believe that it took place about 13,700 million years ago. During the first few minutes, enormous quantities of protons, neutrons and electrons were produced. The Universe was so hot that protons and electrons were floating freely: the Universe was fully ionised. Richard West | alfa
A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 
Working with counters

This article is obsolete and may no longer be valid!

1. The principle of counters

When you write checks that deal with performance data (CPU usage, network traffic, disk IO), you will often be confronted with counters. As an example, look into the file /proc/stat on your Linux box and grep for the line beginning with "processes":

    user@host:~$ grep processes /proc/stat
    processes 205458

What does that mean? It is the number of processes created since the system booted (not the number of processes currently running!). Now let's do it again:

    user@host:~$ grep processes /proc/stat
    processes 206160

What do we learn from this? The number of process creations has risen from 205458 to 206160, i.e. there have been 702 new process creations. If we assume that exactly 10 seconds passed between the two calls, then processes have been created at a rate of 70.2 per second. So if we have a counter and want to compute a rate, we need to compare the value of the counter at two points in time and know how much time has passed between the first and the second sample.

2. Counters in Check_MK

The good news is: Check_MK supports check programmers with the handling of counters. It can keep a memory of counter values and compute rates for you. Counter values are stored in /var/lib/check_mk/counters (OMD: tmp/check_mk/counters). The key to this is the helper function get_counter, which is called like this:

    timedif, rate_per_sec = get_counter("some.unique.name", this_time, counter_value)

It is important to call this function with a unique name for each separate counter. This is usually done by using the check type and the item as a prefix (unless your check always uses None as its item).
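To make the principle concrete, here is a minimal sketch of the sample-and-compare logic described above. This is not the real get_counter implementation (which persists its state in /var/lib/check_mk/counters across check executions); the function and store names are illustrative, and the in-memory dict only stands in for that persistent store:

```python
# Illustrative stand-in for Check_MK's persistent counter store.
_counter_store = {}  # counter name -> (timestamp, value) of the last sample

def compute_rate(name, this_time, value):
    """Return (timedif, rate_per_sec) for a monotonically growing counter,
    or (None, None) if there is no previous sample yet."""
    previous = _counter_store.get(name)
    _counter_store[name] = (this_time, value)  # remember the new sample
    if previous is None:
        return None, None
    last_time, last_value = previous
    timedif = this_time - last_time
    return timedif, (value - last_value) / timedif

# Reproducing the /proc/stat example: 702 process creations in 10 seconds
compute_rate("proc.creations", 1000.0, 205458)
print(compute_rate("proc.creations", 1010.0, 206160))  # (10.0, 70.2)
```

Note that the new sample is stored unconditionally, so the next call always compares against the most recent value, matching the two-sample scheme the article describes.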
Here is an example of how the check winperf.diskstat uses get_counter. It uses a separate counter for read IO and write IO:

    read_timedif, read_per_sec = get_counter("diskstat.read", this_time, read_bytes_ctr)
    write_timedif, write_per_sec = get_counter("diskstat.write", this_time, write_bytes_ctr)

The check if is a bit more complex, since it deals with many switch ports and, for each port, with several counters. It uses the counter name (name) and the port number/description (item) to make the counter unique:

    timedif, rate = get_counter("if.%s.%s" % (name, item), this_time, saveint(counter))

The get_counter function returns two values: the time (in seconds) that has passed since the previous sample, and the per-second rate at which the counter grew during that interval.

3. Counter wraps and resets

There are two situations where you have to be careful when working with counters: wraps and resets. A wrap occurs when a counter with limited precision overflows. The most prominent example are the 32-bit counters in the SNMP IF-MIB that are used for network traffic, for example ifOutOctets. After 4 GB of traffic over a port, the counter wraps back to 0. Such a case must be detected and handled; otherwise you would get negative values for your rate.

Another situation is a reboot of the target device. In that case all counters start again from 0. This must also be handled correctly in order to avoid anomalies.

get_counter makes a simple but effective wrap detection: if the new counter value is lower than the old one, a wrap or reset is assumed. Now something important happens: get_counter does not return any value, but raises a Python exception of the type MKCounterWrapped. This exception immediately aborts the execution of the check, and no check result will be sent to Nagios this turn. You might think of this as a bug, but it is the only way to do it right: if a counter anomaly occurs, it is not possible to compute the rate in a reliable way. Even if we knew the maximum value of the counter, we would still have no way to distinguish between a wrap and a reset.
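The wrap/reset handling can be sketched as follows. This is a simplified model of the behaviour the article describes, not the real Check_MK code; MKCounterWrapped is modeled as a plain exception class and the store is an in-memory dict:

```python
class MKCounterWrapped(Exception):
    """Raised when no reliable rate can be computed (first sample,
    counter wrap, or counter reset)."""
    pass

_counter_store = {}  # counter name -> (timestamp, value)

def get_counter_sketch(name, this_time, value):
    previous = _counter_store.get(name)
    _counter_store[name] = (this_time, value)  # always remember the new sample
    if previous is None:
        # First sample ever: there is no previous value to compare against
        raise MKCounterWrapped("Counter initialized")
    last_time, last_value = previous
    if value < last_value:
        # Could be a 32-bit wrap or a device reboot; we cannot tell which,
        # so abort this check execution instead of guessing a rate
        raise MKCounterWrapped("Counter wrap or reset")
    timedif = this_time - last_time
    return timedif, (value - last_value) / timedif
```

Because the new sample is stored before the exception is raised, the counter is initialized even when no result is produced, which is exactly why a freshly added host leaves the pending state after one extra check cycle.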
And returning a rate of 0 would also be wrong: it might trigger false alarms, create invalid RRD graphs, and would send the user a check result that does not reflect reality. Also, when get_counter is called for the very first time for a specific counter, an MKCounterWrapped will be raised, since no rate can be computed without a previous value. This is the reason why some checks take a bit longer to leave the pending state when a new host is added to the monitoring.

3.1. Checks based on multiple counters

If your check uses more than one or two counters, the time until the check first produces results might not be acceptable. The problem is that the MKCounterWrapped exception will immediately abort the check execution as soon as the first counter is initialized. The second time the check is called, the second counter will be initialized, and so on. If you want all counters to be initialized at the first check, then you need to either a) catch the exception for each counter, continue with the remaining counters, and re-raise a wrap only after all counters have been fed their new values, or b) catch the exception and simply omit the affected data.

The following pseudo-code illustrates how to do this correctly (variant a):

    wrapped = False
    for ...: # loop over counters
        try:
            timedif, rate = get_counter(.....)
            # process resulting rate...
        except MKCounterWrapped:
            wrapped = True
            # continue, other counters might wrap as well
    # after all counters are handled
    if wrapped:
        raise MKCounterWrapped("Counter wrap")

Variant b) is only possible if the counters merely supply additional performance data and do not influence the check result. In any case, make sure that your check always outputs the same number of performance variables. If some would be missing due to counter wraps, then output none at all: graphing tools such as PNP4Nagios may break if the number of performance variables varies.
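A runnable version of the variant a) pattern might look like the following. The get_counter stub and the check name "mycheck" are made up so the example is self-contained; in a real check you would use Check_MK's own get_counter and your check type as the prefix:

```python
class MKCounterWrapped(Exception):
    pass

_counter_store = {}

def get_counter(name, this_time, value):
    # Minimal stub with the same contract as Check_MK's get_counter
    previous = _counter_store.get(name)
    _counter_store[name] = (this_time, value)
    if previous is None or value < previous[1]:
        raise MKCounterWrapped("Counter wrap")
    timedif = this_time - previous[0]
    return timedif, (value - previous[1]) / timedif

def compute_all_rates(item, this_time, counters):
    """counters: dict mapping counter name -> current raw value."""
    wrapped = False
    rates = {}
    for name, value in counters.items():
        try:
            timedif, rate = get_counter("mycheck.%s.%s" % (name, item),
                                        this_time, value)
            rates[name] = rate
        except MKCounterWrapped:
            wrapped = True  # keep going: the other counters must be fed too
    if wrapped:
        # Raise only after every counter has stored its new value, so all
        # of them are initialized within a single check cycle
        raise MKCounterWrapped("Counter wrap")
    return rates
```

The first call initializes every counter and raises once at the end; from the second call on, all rates are available at the same time.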