A brief history of negative numbers throughout the ages
Investigate different ways of making £5 at Charlie's bank.
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
Five numbers added together in pairs produce: 0, 2, 4, 4, 6, 8, 9, 11, 13, 15. What are the five numbers?
Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers?
Different combinations of the weights available allow you to make different totals. Which totals can you make?
An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.
Play this game to learn about adding and subtracting positive and negative numbers
This article - useful for teachers and learners - gives a short account of the history of negative numbers.
This article suggests some ways of making sense of calculations involving positive and negative numbers.
Can you be the first to complete a row of three?
In this game the winner is the first to complete a row of three. Are some squares easier to land on than others?
How can we help students make sense of addition and subtraction of negative numbers?
Imagine a very strange bank account where you are only allowed to do two things...
Substitute -1, -2 or -3 into an algebraic expression and you'll get three results. Is it possible to tell in advance which of those three will be the largest?
What is the smallest number of answers you need to reveal in order to work out the missing headers?
How many intersections do you expect from four straight lines? Which three lines enclose a triangle with negative co-ordinates for every point?
The classic vector racing game.
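One of the puzzles listed above (the five numbers whose pairwise sums are 0, 2, 4, 4, 6, 8, 9, 11, 13, 15) yields to brute force. The sketch below assumes the five numbers are distinct integers; the search range is bounded by noting that the ten pairwise sums count each number four times, so the five numbers must total 72 / 4 = 18.

```python
from itertools import combinations

# Sorted multiset of the ten pairwise sums given in the puzzle.
TARGET = sorted([0, 2, 4, 4, 6, 8, 9, 11, 13, 15])

# Try every set of five distinct integers in a generous range and keep
# those whose pairwise sums reproduce TARGET exactly.
solutions = [
    combo
    for combo in combinations(range(-15, 16), 5)
    if sorted(a + b for a, b in combinations(combo, 2)) == TARGET
]
print(solutions)   # → [(-1, 1, 3, 5, 10)]
```

The search confirms the solution is unique: once the total (18), the two smallest sums (a+b, a+c) and the two largest (c+e, d+e) are pinned down, every value is forced.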
Half a century ago, the pioneers of chaos theory discovered that the “butterfly effect” makes long-term prediction impossible. Even the smallest perturbation to a complex system (like the weather, the economy or just about anything else) can touch off a concatenation of events that leads to a dramatically divergent future. Unable to pin down the state of these systems precisely enough to predict how they’ll play out, we live under a veil of uncertainty.
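The butterfly effect is easy to reproduce in a toy chaotic system. The sketch below uses the logistic map, a standard one-variable example (not anything from the studies discussed here), to show a perturbation of one part in ten billion growing to order one:

```python
# Two runs of the chaotic logistic map x -> 4x(1 - x), started a
# hair apart, diverge until they are effectively uncorrelated.
def f(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10      # perturbation: one part in ten billion
max_gap = 0.0
for step in range(60):
    x, y = f(x), f(y)
    max_gap = max(max_gap, abs(x - y))
# within a few dozen iterations the gap has grown from 1e-10 to order one
```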
But now the robots are here to help.
In a series of results reported in the journals Physical Review Letters and Chaos, scientists have used machine learning—the same computational technique behind recent successes in artificial intelligence—to predict the future evolution of chaotic systems out to stunningly distant horizons. The approach is being lauded by outside experts as groundbreaking and likely to find wide application.
“I find it really amazing how far into the future they predict” a system’s chaotic evolution, said Herbert Jaeger, a professor of computational science at Jacobs University in Bremen, Germany.
The findings come from veteran chaos theorist Edward Ott and four collaborators at the University of Maryland. They employed a machine-learning algorithm called reservoir computing to “learn” the dynamics of an archetypal chaotic system called the Kuramoto-Sivashinsky equation. The evolving solution to this equation behaves like a flame front, flickering as it advances through a combustible medium. The equation also describes drift waves in plasmas and other phenomena, and serves as “a test bed for studying turbulence and spatiotemporal chaos,” said Jaideep Pathak, Ott’s graduate student and the lead author of the new papers.
Jaideep Pathak, Michelle Girvan, Brian Hunt and Edward Ott of the University of Maryland, who (along with Zhixin Lu, now of the University of Pennsylvania) have shown that machine learning is a powerful tool for predicting chaos.
Faye Levine/University of Maryland
After training itself on data from the past evolution of the Kuramoto-Sivashinsky equation, the researchers’ reservoir computer could then closely predict how the flamelike system would continue to evolve out to eight “Lyapunov times” into the future, eight times further ahead than previous methods allowed, loosely speaking. The Lyapunov time represents how long it takes for two almost-identical states of a chaotic system to exponentially diverge. As such, it typically sets the horizon of predictability.
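For a concrete system such as the logistic map, the Lyapunov time can be estimated directly from a trajectory. The snippet below is a standard textbook computation (not part of the study): average the log-stretching rate along a long orbit.

```python
import math

# Lyapunov exponent of the logistic map x -> 4x(1 - x): the mean of
# ln|f'(x)| = ln|4 - 8x| along a trajectory.  Theory gives exactly ln 2.
x, total, n = 0.2, 0.0, 100_000
for _ in range(n):
    total += math.log(abs(4.0 - 8.0 * x))
    x = 4.0 * x * (1.0 - x)

lyapunov = total / n             # ≈ 0.693 nats per step
lyapunov_time = 1.0 / lyapunov   # ≈ 1.44 steps for errors to grow by e
```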
“This is really very good,” Holger Kantz, a chaos theorist at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, said of the eight-Lyapunov-time prediction. “The machine-learning technique is almost as good as knowing the truth, so to say.”
The algorithm knows nothing about the Kuramoto-Sivashinsky equation itself; it only sees data recorded about the evolving solution to the equation. This makes the machine-learning approach powerful; in many cases, the equations describing a chaotic system aren’t known, crippling dynamicists’ efforts to model and predict them. Ott and company’s results suggest you don’t need the equations—only data. “This paper suggests that one day we might be able perhaps to predict weather by machine-learning algorithms and not by sophisticated models of the atmosphere,” Kantz said.
Besides weather forecasting, experts say the machine-learning technique could help with monitoring cardiac arrhythmias for signs of impending heart attacks and monitoring neuronal firing patterns in the brain for signs of neuron spikes. More speculatively, it might also help with predicting rogue waves, which endanger ships, and possibly even earthquakes.
Ott particularly hopes the new tools will prove useful for giving advance warning of solar storms, like the one that erupted across 35,000 miles of the sun’s surface in 1859. That magnetic outburst created aurora borealis visible all around the Earth and blew out some telegraph systems, while generating enough voltage to allow other lines to operate with their power switched off. If such a solar storm lashed the planet unexpectedly today, experts say it would severely damage Earth’s electronic infrastructure. “If you knew the storm was coming, you could just turn off the power and turn it back on later,” Ott said.
He, Pathak and their colleagues Brian Hunt, Michelle Girvan and Zhixin Lu (who is now at the University of Pennsylvania) achieved their results by synthesizing existing tools. Six or seven years ago, when the powerful algorithm known as “deep learning” was starting to master AI tasks like image and speech recognition, they started reading up on machine learning and thinking of clever ways to apply it to chaos. They learned of a handful of promising results predating the deep-learning revolution. Most importantly, in the early 2000s, Jaeger and fellow German chaos theorist Harald Haas made use of a network of randomly connected artificial neurons—which form the “reservoir” in reservoir computing—to learn the dynamics of three chaotically coevolving variables. After training on the three series of numbers, the network could predict the future values of the three variables out to an impressively distant horizon. However, when there were more than a few interacting variables, the computations became impossibly unwieldy. Ott and his colleagues needed a more efficient scheme to make reservoir computing relevant for large chaotic systems, which have huge numbers of interrelated variables. Every position along the front of an advancing flame, for example, has velocity components in three spatial directions to keep track of.
It took years to strike upon the straightforward solution. “What we exploited was the locality of the interactions” in spatially extended chaotic systems, Pathak said. Locality means variables in one place are influenced by variables at nearby places but not by places far away. “By using that,” Pathak explained, “we can essentially break up the problem into chunks.” That is, you can parallelize the problem, using one reservoir of neurons to learn about one patch of a system, another reservoir to learn about the next patch, and so on, with slight overlaps of neighboring domains to account for their interactions.
Parallelization allows the reservoir computing approach to handle chaotic systems of almost any size, as long as proportionate computer resources are dedicated to the task.
If we have ignorance we should use the machine learning to fill in the gaps where the ignorance resides. Edward Ott
Ott explained reservoir computing as a three-step procedure. Say you want to use it to predict the evolution of a spreading fire. First, you measure the height of the flame at five different points along the flame front, continuing to measure the height at these points on the front as the flickering flame advances over a period of time. You feed these data streams into randomly chosen artificial neurons in the reservoir. The input data triggers the neurons to fire, triggering connected neurons in turn and sending a cascade of signals throughout the network.
The second step is to make the neural network learn the dynamics of the evolving flame front from the input data. To do this, as you feed data in, you also monitor the signal strengths of several randomly chosen neurons in the reservoir. Weighting and combining these signals in five different ways produces five numbers as outputs. The goal is to adjust the weights of the various signals that go into calculating the outputs until those outputs consistently match the next set of inputs—the five new heights measured a moment later along the flame front. “What you want is that the output should be the input at a slightly later time,” Ott explained.
To learn the correct weights, the algorithm simply compares each set of outputs, or predicted flame heights at each of the five points, to the next set of inputs, or actual flame heights, increasing or decreasing the weights of the various signals each time in whichever way would have made their combinations give the correct values for the five outputs. From one time-step to the next, as the weights are tuned, the predictions gradually improve, until the algorithm is consistently able to predict the flame’s state one time-step later.
“In the third step, you actually do the prediction,” Ott said. The reservoir, having learned the system’s dynamics, can reveal how it will evolve. The network essentially asks itself what will happen. Outputs are fed back in as the new inputs, whose outputs are fed back in as inputs, and so on, making a projection of how the heights at the five positions on the flame front will evolve. Other reservoirs working in parallel predict the evolution of height elsewhere in the flame.
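The three steps can be sketched in a few dozen lines. The toy below is a minimal echo state network: a logistic-map series stands in for the flame heights (one measurement point instead of five), and the reservoir size, spectral radius, and ridge penalty are illustrative guesses, not the Maryland group's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: a scalar chaotic series from the logistic map.
def logistic_series(n, x=0.4):
    out = np.empty(n)
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        out[i] = x
    return out

data = logistic_series(3000)
train, test = data[:2000], data[2000:]

# Step 1: feed the data into a fixed, randomly connected reservoir.
N = 300                                          # reservoir neurons
W_in = rng.uniform(-1.0, 1.0, N)                 # input weights, never trained
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

def step(r, u):
    return np.tanh(W @ r + W_in * u)

r = np.zeros(N)
states = np.empty((len(train), N))
for t, u in enumerate(train):
    r = step(r, u)
    states[t] = r

# Step 2: learn output weights so output(t) ≈ input(t + 1), via ridge
# regression, discarding the first 100 steps as a warm-up transient.
X, y = states[100:-1], train[101:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

# Step 3: predict by feeding each output back in as the next input.
preds = np.empty(len(test))
for t in range(len(test)):
    u = W_out @ r          # the network's guess for the next value
    preds[t] = u
    r = step(r, u)
```

Run closed-loop like this, the predictions should track the true series for a few steps and then drift apart, which is exactly the behavior described above.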
In a plot in their PRL paper, which appeared in January, the researchers show that their predicted flamelike solution to the Kuramoto-Sivashinsky equation exactly matches the true solution out to eight Lyapunov times before chaos finally wins, and the actual and predicted states of the system diverge.
The usual approach to predicting a chaotic system is to measure its conditions at one moment as accurately as possible, use these data to calibrate a physical model, and then evolve the model forward. As a ballpark estimate, you’d have to measure a typical system’s initial conditions 100,000,000 times more accurately to predict its future evolution eight times further ahead.
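That 100,000,000 figure can be sanity-checked with the standard exponential-error-growth estimate (a back-of-envelope gloss, not a calculation from the paper):

```latex
% An initial measurement error \delta_0 grows roughly exponentially,
\delta(t) \approx \delta_0 \, e^{\lambda t},
% so the prediction horizon (the time until the error reaches a
% tolerance \Delta) is
t_h \approx \frac{1}{\lambda} \ln\frac{\Delta}{\delta_0}.
% Multiplying the horizon by 8 requires
% \ln(\Delta/\delta_{\text{new}}) = 8 \ln(\Delta/\delta_{\text{old}}),
% i.e. an accuracy improvement of
\frac{\delta_{\text{old}}}{\delta_{\text{new}}}
  = \left(\frac{\Delta}{\delta_{\text{old}}}\right)^{7}
  \approx 10^{8}
\quad \text{when} \quad \frac{\Delta}{\delta_{\text{old}}} \approx 14.
```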
The machine-learning technique is almost as good as knowing the truth. Holger Kantz
That’s why machine learning is “a very useful and powerful approach,” said Ulrich Parlitz of the Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany, who, like Jaeger, also applied machine learning to low-dimensional chaotic systems in the early 2000s. “I think it’s not only working in the example they present but is universal in some sense and can be applied to many processes and systems.” In a paper soon to be published in Chaos, Parlitz and a collaborator applied reservoir computing to predict the dynamics of “excitable media,” such as cardiac tissue. Parlitz suspects that deep learning, while being more complicated and computationally intensive than reservoir computing, will also work well for tackling chaos, as will other machine-learning algorithms. Recently, researchers at the Massachusetts Institute of Technology and ETH Zurich achieved results similar to the Maryland team’s using a “long short-term memory” neural network, which has recurrent loops that enable it to store temporary information for a long time.
Since the work in their PRL paper, Ott, Pathak, Girvan, Lu and other collaborators have come closer to a practical implementation of their prediction technique. In new research accepted for publication in Chaos, they showed that improved predictions of chaotic systems like the Kuramoto-Sivashinsky equation become possible by hybridizing the data-driven, machine-learning approach and traditional model-based prediction. Ott sees this as a more likely avenue for improving weather prediction and similar efforts, since we don’t always have complete high-resolution data or perfect physical models. “What we should do is use the good knowledge that we have where we have it,” he said, “and if we have ignorance we should use the machine learning to fill in the gaps where the ignorance resides.” The reservoir’s predictions can essentially calibrate the models; in the case of the Kuramoto-Sivashinsky equation, accurate predictions are extended out to 12 Lyapunov times.
The duration of a Lyapunov time varies for different systems, from milliseconds to millions of years. (It’s a few days in the case of the weather.) The shorter it is, the touchier or more prone to the butterfly effect a system is, with similar states departing more rapidly for disparate futures. Chaotic systems are everywhere in nature, going haywire more or less quickly. Yet strangely, chaos itself is hard to pin down. “It’s a term that most people in dynamical systems use, but they kind of hold their noses while using it,” said Amie Wilkinson, a professor of mathematics at the University of Chicago. “You feel a bit cheesy for saying something is chaotic,” she said, because it grabs people’s attention while having no agreed-upon mathematical definition or necessary and sufficient conditions. “There is no easy concept,” Kantz agreed. In some cases, tuning a single parameter of a system can make it go from chaotic to stable or vice versa.
Wilkinson and Kantz both define chaos in terms of stretching and folding, much like the repeated stretching and folding of dough in the making of puff pastries. Each patch of dough stretches horizontally under the rolling pin, separating exponentially quickly in two spatial directions. Then the dough is folded and flattened, compressing nearby patches in the vertical direction. The weather, wildfires, the stormy surface of the sun and all other chaotic systems act just this way, Kantz said. “In order to have this exponential divergence of trajectories you need this stretching, and in order not to run away to infinity you need some folding,” where folding comes from nonlinear relationships between variables in the systems.
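The dough analogy has a one-line mathematical counterpart: the tent map, which stretches every pair of nearby points apart by a factor of two and then folds the line back onto the unit interval so nothing escapes to infinity. The snippet below (a standard illustration, not taken from either researcher) shows both effects:

```python
def tent(x):
    # stretch: slope ±2 everywhere; fold: both halves map back onto [0, 1]
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

a, b = 0.2, 0.2001            # two nearby points of "dough"
gaps = []
for _ in range(8):
    a, b = tent(a), tent(b)
    gaps.append(abs(a - b))
# the gap doubles every step (0.0002, 0.0004, ..., ≈0.0256),
# yet a and b never leave the unit interval
```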
The stretching and compressing in the different dimensions correspond to a system’s positive and negative “Lyapunov exponents,” respectively. In another recent paper in Chaos, the Maryland team reported that their reservoir computer could successfully learn the values of these characterizing exponents from data about a system’s evolution. Exactly why reservoir computing is so good at learning the dynamics of chaotic systems is not yet well understood, beyond the idea that the computer tunes its own formulas in response to data until the formulas replicate the system’s dynamics. The technique works so well, in fact, that Ott and some of the other Maryland researchers now intend to use chaos theory as a way to better understand the internal machinations of neural networks.
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
The discovery resulted from an ongoing project led by researchers at Northern Arizona University using the Spitzer Space Telescope. Through a lot of focused attention and a little bit of luck, they found evidence of cometary activity that had evaded detection for three decades.
Don Quixote’s coma and tail (left) as seen in infrared light by NASA’s Spitzer Space Telescope. After image processing (right), the tail is more apparent. Image courtesy NASA/JPL-Caltech/DLR/NAU
“Its orbit resembled that of a comet, so people assumed it was a comet that had gotten rid of all its ice deposits,” said Michael Mommert, a post-doctoral researcher at NAU who was a Ph.D. student of professor Alan Harris at the German Aerospace Center (DLR) in Berlin at the time the work was carried out.
What Mommert and an international team of researchers discovered, though, was that Don Quixote was not actually a dead comet—one that had shed the carbon dioxide and water that give comets their spectacular tails.
Instead, the third-biggest near-Earth asteroid out there, skirting Earth with an erratic, extended orbit, is “sopping wet,” said NAU associate professor David Trilling. The implications have less to do with potential impact, which is extremely unlikely in this case, and more with “the origins of water on Earth,” Trilling said. Comets may be the source of at least some of it, and the amount on Don Quixote represents about 100 billion tons of water—roughly the same amount found in Lake Tahoe.
Mommert said it’s surprising that Don Quixote hasn’t been depleted of all of its water, especially since researchers had assumed that it lost its ice thousands of years ago. But finding evidence of CO2, and presumably water, wasn’t easy.
During an observation of the object using Spitzer in August 2009, Mommert and Trilling found that it was far brighter than they expected. “The images were not as clean as we would like, so we set them aside,” Trilling said.
Much later, though, Mommert prompted a closer look, and partners at the Harvard-Smithsonian Center for Astrophysics found something unusual when comparing infrared images of the object: something, that is, where an asteroid should have shown nothing. The “extended emission,” Mommert said, indicated that Don Quixote had a coma—a comet’s visible atmosphere—and a faint tail.
Mommert said this discovery implies that carbon dioxide and water ice also might be present on other near-Earth objects.
This study confirmed Don Quixote’s size and the low, comet-like reflectivity of its surface. Mommert is presenting the research team’s findings this week at the European Planetary Science Congress in London.
Eric Dieterle | EurekAlert!
All electronic devices in our daily lives - computers, smartphones, etc. - consist of billions of transistors, the key building block invented at Bell Labs in the late 1940s. The transistor started out about 1 cm across, but thanks to advances in technology it has shrunk to an astonishing 14 nanometers, thousands of times thinner than a human hair. At the same time, there has been a parallel race to shrink the devices that control and guide light. Light can serve as an ultra-fast communication channel, for example between different sections of a computer chip, but it can also be used for ultra-sensitive sensors or novel on-chip nanoscale lasers.
New techniques have emerged in the search for ways to confine light into extremely tiny spaces, millions of times smaller than current ones. Researchers had earlier found that metals can compress light below the wavelength scale (the diffraction limit), but tighter confinement always came at the cost of greater energy losses. Using graphene, this paradigm has now been overturned.
In a recent study published in Science, ICFO researchers have been able to reach the ultimate level of confinement of light. That is, they have been able to confine light down to a space one atom thick in dimension, the smallest confinement ever possible. The work was led by ICREA Prof at ICFO Frank Koppens and carried out by David Alcaraz, Sebastien Nanot, Itai Epstein, Dmitri Efetov, Mark Lundeberg, Romain Parret, and Johann Osmond from ICFO, and performed in collaboration with University of Minho (Portugal) and MIT (USA).
The team of researchers used stacks (heterostructures) of 2D materials, and built up a completely new nano-optical device, as if it were atom-scale Lego. They took a graphene monolayer (semi-metal), and stacked onto it a hexagonal boron nitride (hBN) monolayer (insulator), and on top of this deposited an array of metallic rods. They used graphene because this material is capable of guiding light in the form of "plasmons", which are oscillations of the electrons, interacting strongly with light.
They sent infrared light through their devices and observed how the plasmons propagated in between the metal and the graphene. To reach the smallest space conceivable, they decided to reduce as much as possible the gap between the metal and the graphene to see if the confinement of light remained efficient, e.g. without additional energy losses.
Strikingly enough, they saw that even when a monolayer of hBN was used as a spacer, the plasmons were still excited by the light, and could propagate freely while being confined to a channel just one atom thick. They managed to switch this plasmon propagation on and off, simply by applying an electrical voltage, demonstrating the control of light guided in channels smaller than one nanometer in height.
The results of this discovery enable a completely new world of opto-electronic devices that are just one nanometer thick, such as ultra-small optical switches, detectors and sensors. Due to the paradigm shift in optical field confinement, extreme light-matter interactions can now be explored that were not accessible before. What is really exciting is that the atom-scale lego-toolbox of 2d materials has now also proven to be applicable for many types of completely new materials devices where both light and electrons can be controlled even down to the scale of a nanometer.
This research has been partially supported by the European Research Council, the European Graphene Flagship, the Government of Catalonia, Fundació Cellex and the Severo Ochoa Excellence program of the Government of Spain.
Probing the Ultimate Plasmon Confinement Limits with a van der Waals heterostructure
David Alcaraz Iranzo, Sebastien Nanot, Eduardo J. C. Dias, Itai Epstein, Cheng Peng, Dmitri K. Efetov, Mark B. Lundeberg, Romain Parret, Johann Osmond, Jin-Yong Hong, Jing Kong, Dirk R. Englund, Nuno M. R. Peres, Frank H.L. Koppens
Science, DOI 10.1126/science.aar8438 (2018)
Link to the research group led by ICREA Prof. at ICFO Frank Koppens: https:/
Link to the Graphene @ ICFO: http://graphene.
ICFO - The Institute of Photonic Sciences, member of The Barcelona Institute of Science and Technology, is a research center located in a specially designed, 14.000 m2-building situated in the Mediterranean Technology Park in the metropolitan area of Barcelona. It currently hosts 400 people, including research group leaders, post-doctoral researchers, PhD students, research engineers, and staff. ICFOnians are organized in 26 research groups working in 60 state-of-the-art research laboratories, equipped with the latest experimental facilities and supported by a range of cutting-edge facilities for nanofabrication, characterization, imaging and engineering.
The Severo Ochoa distinction awarded by the Ministry of Science and Innovation, as well as 15 ICREA Professorships, 30 European Research Council grants and 6 Fundació Cellex Barcelona Nest Fellowships, demonstrate the centre's dedication to research excellence, as does the institute's consistent appearance in top worldwide positions in international rankings. From an industrial standpoint, ICFO participates actively in the European Technological Platform Photonics21 and is also very proactive in fostering entrepreneurial activities and spin-off creation. The center participates in incubator activities and seeks to attract venture capital investment. ICFO hosts an active Corporate Liaison Program that aims at creating collaborations and links between industry and ICFO researchers. To date, ICFO has created 6 successful start-up companies.
Alina Hirschmann | EurekAlert!
Random Variables and Distribution Functions
Section 3.1 introduces the formal definitions of random variable and its distribution, illustrated by several examples. The main properties of distribution functions, including a characterisation theorem for them, are presented in Sect. 3.2. This is followed by listing and briefly discussing the key univariate distributions. The second half of the section is devoted to considering the three types of distributions on the real line and the distributions of functions of random variables. In Sect. 3.3 multivariate random variables (random vectors) and their distributions are introduced and discussed in detail, including the two key special cases: the multinomial and the normal (Gaussian) distributions. After that, the concepts of independence of random variables and that of classes of events are considered in Sect. 3.4, establishing criteria for independence of random variables of different types. The theorem on independence of sigma-algebras generated by independent algebras of events is proved with the help of the probability approximation theorem. Then the relationships between the introduced notions are extensively discussed. In Sect. 3.5, the problem of existence of infinite sequences of random variables is solved with the help of Kolmogorov’s theorem on families of consistent distributions, which is proved in Appendix 2. Section 3.6 is devoted to discussing the concept of integral in the context of Probability Theory (a formal introduction to Integration Theory is presented in Appendix 3). The integrals of functions of random vectors are discussed, including the derivation of the convolution formulae for sums of independent random variables.
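The convolution formula mentioned at the end of the section has a simple discrete form: if X and Y are independent with pmfs p and q, then P(X + Y = s) = Σ_k p(k) q(s − k). The sketch below illustrates it with two fair dice (a toy example, not one from the book):

```python
from fractions import Fraction

die = {k: Fraction(1, 6) for k in range(1, 7)}   # pmf of one fair die

def convolve(p, q):
    # Discrete convolution: P(X + Y = s) = sum over k of p(k) * q(s - k).
    out = {}
    for i, pi in p.items():
        for j, qj in q.items():
            out[i + j] = out.get(i + j, 0) + pi * qj
    return out

two_dice = convolve(die, die)
# P(X + Y = 7) = 6/36 = 1/6, the most likely total; the pmf sums to 1
```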
Keywords: Random Vector, Probability Space, Independent Random Variable, Infinite Sequence, Jump Point
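The convolution formulae mentioned at the end of the section have a particularly transparent discrete form: for independent integer-valued X and Y, P(X + Y = n) = Σ_k P(X = k) P(Y = n − k). A minimal sketch, using two fair dice as an illustrative example:

```python
import numpy as np

# P(X = k) = 1/6 for k = 1..6 (one fair die); index 0 corresponds to k = 1.
die = np.full(6, 1 / 6)

# For independent X and Y, P(X + Y = n) = sum_k P(X = k) P(Y = n - k):
# exactly the discrete convolution of the two probability vectors.
sum_dist = np.convolve(die, die)  # index 0 corresponds to the total 2

print(sum_dist.sum())   # still a probability distribution (sums to 1)
print(sum_dist[7 - 2])  # P(X + Y = 7) = 6/36, the most likely total
```

The same convolution structure carries over to densities of sums of independent continuous random variables, with the sum replaced by an integral.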
The simplest configuration for a flow with heat transfer is a uniform external flow over a flat surface, part or all of which is at a temperature different from that of the oncoming fluid (Fig. 1.1). In slightly more complicated cases the surface may be curved and the external-flow velocity u_e may be a function of the longitudinal coordinate x, but in a large number of practical heat-transfer problems the variation of u_e with y in the external flow is negligibly small compared with the variation of velocity in a region very close to the surface. Within this region, called the boundary layer, the x-component velocity u rises from zero at the surface to an asymptotic value equal to u_e; in practice one defines the thickness of this layer as the value of y at which u has reached, say, 0.995 u_e. The temperature also varies rapidly with y near the surface, changing from the surface value T_w (subscript w means "wall") to the external-flow value T_e, which, like u_e, can often be taken independent of y.
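The 99.5% criterion just described translates directly into a computation on a sampled velocity profile. A minimal sketch, using an illustrative exponential-approach model profile (not a solution of the boundary-layer equations) purely to show how the thickness is extracted:

```python
import numpy as np

u_e = 10.0      # external-flow velocity, m/s (illustrative value)
delta0 = 0.002  # length scale of the model profile, m (illustrative value)

# Model profile that rises from u = 0 at the wall to u -> u_e far away.
y = np.linspace(0.0, 0.02, 2001)
u = u_e * (1.0 - np.exp(-y / delta0))

# Boundary-layer thickness: smallest y at which u has reached 0.995 u_e.
delta = y[np.argmax(u >= 0.995 * u_e)]
print(delta)  # close to -delta0 * ln(0.005), about 0.0106 m here
```

A measured or computed profile would replace the model arrays `y` and `u`; the thresholding step is unchanged.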
Keywords: Heat Transfer, Shear Layer, Prandtl Number, Momentum Transfer, External Flow
Carnegie Mellon University and Columbia University collaborators discover the cause of vastly different thermal conductivities in superatomic structural analogues
Researchers found that the thermal conductivity of superatom crystals is directly related to the rotational disorder within those structures. The findings were published in an article in Nature Materials this week.
Carnegie Mellon University's Associate Professor of Mechanical Engineering Jonathan A. Malen was a corresponding author of the paper titled "Orientational Order Controls Crystalline and Amorphous Thermal Transport in Superatomic Crystals."
Superatom crystals are periodic (regular) arrangements of C60 fullerenes and similarly sized inorganic molecular clusters. The nanometer-sized C60s look like soccer balls, with C atoms at the vertices of each hexagon and pentagon.
"There are two nearly identical formations, one that has rotating (i.e. orientationally disordered) C60s and one that has fixed C60s," said Malen. "We discovered that the formation that contained rotating C60s has low thermal conductivity while the formation with fixed C60s has high thermal conductivity."
Although rotational disorder is known in bulk C60, this is the first time that the process has been leveraged to create very different thermal conductivities in structurally identical materials.
Imagine a line of people passing sandbags from one end to the other. Now imagine a second line where each person is spinning around--some clockwise, some counter clockwise, some fast, and some slow. It would be very difficult to move a sandbag down that line.
"This is similar to what is happening with thermal conductivity in the superatoms," explained Malen. "It is easier to transfer heat energy along a fixed pattern than a disordered one."
Columbia University's Assistant Professor of Chemistry Xavier Roy, the other corresponding author of the study, created the superatom crystals in his laboratory by synthesizing and assembling the building blocks into the hierarchical superstructures.
"Superatom crystals represent a new class of materials with potential for applications in sustainable energy generation, energy storage, and nanoelectronics," said Roy. "Because we have a vast library of superatoms that can self-assemble, these materials offer a modular approach to create complex yet tunable atomically precise structures."
The researchers believe that these findings will lead to further investigation into the unique electronic and magnetic properties of superstructured materials. One future application might include a new material that could change from being a thermal conductor to a thermal insulator, opening up the potential for new kinds of thermal switches and transistors.
"If we could actively control rotational disorder, we would create a new paradigm for thermal transport," said Malen.
For more information, read the Nature Materials article: "Orientational Order Controls Crystalline and Amorphous Thermal Transport in Superatomic Crystals."
Additional Carnegie Mellon investigators included postdoctoral researcher and alumnus Wee-Liat Ong, Patrick S. M. Dougherty, Alan J. H. McGaughey, and C. Fred Higgs. Ong is jointly advised by Malen and Roy as part of a National Science Foundation MRSEC grant led by Columbia University. Other Columbia University researchers included E. O'Brien and D. Paley.
Malen, director of the Malen Laboratory at Carnegie Mellon, received the College of Engineering's Outstanding Research Award in 2016.
About the College of Engineering: The College of Engineering at Carnegie Mellon University is a top-ranked engineering college that is known for our intentional focus on cross-disciplinary collaboration in research. The College is well-known for working on problems of both scientific and practical importance. Our acclaimed faculty have a focus on innovation management and engineering to yield transformative results that will drive the intellectual and economic vitality of our community, nation and world.
Lisa Kulick | EurekAlert!
This is the first time scientists have pinpointed a star that eventually exploded as a stripped-envelope supernova, called a type Ib, said David Sand, an assistant professor in the Department of Physics who developed the camera.
The bottom inlay shows the star prior to the supernova. The top inlay shows the latest image a day after the star exploded.
The global team of astrophysicists, led by Yi Cao of Caltech, found the supernova on June 16. Their research was published online in the peer-reviewed journal The Astrophysical Journal Letters.
“It is very rare to catch a supernova within a day or two of explosion,” Sand said. “Up until now, it has happened at most about a dozen times. It is equally rare that we actually have Hubble Space Telescope imaging of the location of the supernova before it happened, and we were able to see the star that eventually exploded.”
Sand said the light from the star's explosion traveled 73 million light-years to reach Earth.
“This star was quite far away in the galaxy NGC 6805, although we would consider it a ‘local’ galaxy,” he said. “There is no way of knowing if something so far away has any planets around it. However, it is unlikely. We found that the supernova came from what is called a Wolf-Rayet star. It is very massive and very young. It likely did not live long enough to form planets.”
Wolf-Rayet stars are known to have stellar winds where they eject some of the material off their surface and spew it out into space. Observations indicate they are devoid of hydrogen, but contain helium in the remaining outer layer of the star.
Their massive size leads to a speedy demise, Sand said. While our sun is roughly 5 billion years old, this star was only tens of millions of years old. Wolf-Rayet stars tend to burn through their fuel quickly because the nuclear burning must balance gravity to support the star's own weight.
Cues from the spectroscopic camera images led researchers to classify their discovery as a type Ib supernova, which are thought to be the explosions of these massive stars that have lost their outer layers right before their death due to a stellar wind.
Exact details of what happens in these supernovae are murky, he said. When they do explode, they burn roughly as bright as five billion of our suns.
The Intermediate Palomar Transient Factory project, which is a scientific collaboration with California Institute of Technology, Los Alamos National Laboratory, University of Wisconsin and several others, is an automated survey of the night sky dedicated to finding transient supernova events. The survey finds hundreds of new supernovae annually, and scientists here try to understand what types of stars become which types of supernova.
Sand led the development and operations of the special camera, the FLOYDS spectrograph, which was used to help identify the specific kind of supernova. Taking a spectroscopic image helps scientists to tell what kind of supernova they’re looking at by splitting the supernova’s light up into the colors of the rainbow.
A normal photograph isn’t enough to tell, he said.
The FLOYDS spectrographs, of which there are only two in the world, are attached to two-meter telescopes located in Hawaii and Australia. The cameras operate completely robotically, allowing scientists to confirm supernovae earlier than ever before.
In the last six months, Sand and others have confirmed 25 different supernovae with the new camera. This particular supernova is one of the first published results.
“This is where FLOYDS comes in, and its robotic nature, which lets us study supernovae young,” Sand said. “That’s the first story. The second story is this lucky Hubble imaging from 2005. Someone took an image with Hubble of the galaxy where this supernova happened. Just sheer luck – nothing to do with the supernova or seeing into the future or anything. Zoom to 2013, and we discover the supernova within a day of its explosion. We look in the Hubble data archive and notice the image from eight years prior, and we just match it up with our most recent data to see if there is a star in the old image at the exact same position as the supernova today.
“The second story really is luck, but it is happening more and more these days as the Hubble telescope collects more images of nearby galaxies.”
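Checking whether the archival Hubble source and the new supernova sit "at the exact same position" boils down to an angular-separation test between two sky coordinates. A sketch with made-up coordinates and a made-up 0.5-arcsecond match tolerance; the haversine form is used because it stays numerically stable at the tiny angles involved:

```python
from math import radians, degrees, sin, cos, asin, sqrt

def angular_separation_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation between two (RA, Dec) positions given in degrees,
    returned in arcseconds, via the haversine formula."""
    ra1, dec1, ra2, dec2 = map(radians, (ra1, dec1, ra2, dec2))
    h = sin((dec2 - dec1) / 2) ** 2 + cos(dec1) * cos(dec2) * sin((ra2 - ra1) / 2) ** 2
    return degrees(2 * asin(sqrt(h))) * 3600

# Hypothetical 2005 catalog position vs. hypothetical 2013 supernova position.
sep = angular_separation_arcsec(295.7010, 31.5260, 295.7011, 31.5261)

# Declare a match if the positions agree within an illustrative 0.5 arcsec.
print(sep, sep < 0.5)
```

In practice the tolerance is set by the astrometric accuracy of the two images, and libraries such as Astropy provide this calculation ready-made.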
Sand said scientists can take another Hubble image at the location of the supernova after it has faded away. If the star that he and others identified as the progenitor to the supernova has disappeared, then they will know which star died. Otherwise, if the star is still there, then the supernova came from some other object too faint for researchers to see, and the mystery continues.
For a copy of the report, contact John Davis.
CONTACT: David Sand, assistant professor, Department of Physics, Texas Tech University, (806) 742-2264 or firstname.lastname@example.org.
John Davis | Newswise
Microalgal biotechnology could generate substantial amounts of biofuels with minimal environmental impact if the economics can be improved by increasing the rate of biomass production. Chlorella kessleri was grown in a small-scale raceway pond and in flask cultures with the entire volume, 1% (v/v) at any instant, periodically exposed to static magnetic fields to demonstrate increased biomass production and investigate physiological changes, respectively. The growth rate in flasks was maximal at a field strength of 10 mT, increasing from 0.39 ± 0.06 per day for the control to 0.88 ± 0.06 per day. In the raceway pond the 10 mT field increased the growth rate from 0.24 ± 0.03 to 0.45 ± 0.05 per day, final biomass from 0.88 ± 0.11 to 1.56 ± 0.18 g/L per day, and maximum biomass production from 0.11 ± 0.02 to 0.38 ± 0.04 g/L per day. Increased pigment, protein, Ca, and Zn content made the biomass produced with magnetic stimulation nutritionally superior. An increase in oxidative stress was measured indirectly as a decrease in antioxidant capacity from 26 ± 2 to 17 ± 1 µmol antioxidant/g biomass. Net photosynthetic capacity (NPC) and respiratory rate were increased by factors of 2.1 and 3.1, respectively. Loss of NPC enhancement after the removal of magnetic field fit a first-order model well (R(2) = 0.99) with a half-life of 3.3 days. Transmission electron microscopy showed enlarged chloroplasts and decreased thylakoid order with 10 mT treatment. By increasing daily biomass production about fourfold, 10 mT magnetic field exposure could make algal oil cost competitive with other biodiesel feedstocks. Bioelectromagnetics. © 2011 Wiley Periodicals, Inc.
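The first-order fit reported for the loss of NPC enhancement implies a rate constant k = ln 2 / t½. A sketch of that arithmetic; treating the excess NPC above baseline as the decaying quantity is one plausible reading, not something the abstract states:

```python
import math

t_half = 3.3              # half-life of the NPC enhancement, days (from the abstract)
k = math.log(2) / t_half  # first-order rate constant, per day

# One plausible reading: the excess NPC above baseline (2.1-fold -> excess 1.1)
# decays as excess(t) = excess0 * exp(-k t) after the magnetic field is removed.
excess0 = 1.1
def npc_factor(t_days):
    return 1.0 + excess0 * math.exp(-k * t_days)

print(round(k, 3))                # about 0.21 per day
print(round(npc_factor(3.3), 2))  # excess halved after one half-life
```

The same two lines of arithmetic recover the rate constant for any first-order process quoted as a half-life.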
I don't have time for a detailed analysis of GCR influx compared to earthquakes. That's Galactic Cosmic Rays, by the way.
But my hypothesis is that cosmic rays nucleate gases within magma, causing more earthquake and volcanic activity. As the sun enters a "quiet period" it functions as a less effective shield against GCRs, thus more nucleation and more activity.
Controls on sonic velocity in carbonates
Compressional and shear-wave velocities (Vp and Vs) of 210 minicores of carbonates from different areas and ages were measured under variable confining and pore-fluid pressures. The lithologies of the samples range from unconsolidated carbonate mud to completely lithified limestones. The velocity measurements enable us to relate velocity variations in carbonates to factors such as mineralogy, porosity, pore types and density and to quantify the velocity effects of compaction and other diagenetic alterations.
Pure carbonate rocks show, unlike siliciclastic or shaly sediments, little direct correlation of acoustic properties (Vp and Vs) with age or burial depth of the sediments, so that velocity inversions with increasing depth are common. Rather, sonic velocity in carbonates is controlled by the combined effect of depositional lithology and several post-depositional processes, such as cementation or dissolution, which result in fabrics specific to carbonates. These diagenetic fabrics can be directly correlated to the sonic velocity of the rocks.
At 8 MPa effective pressure, Vp ranges from 1700 to 6500 m/s, and Vs ranges from 800 to 3400 m/s. This range is mainly caused by variations in the amount and type of porosity and not by variations in mineralogy. In general, the measured velocities show a positive correlation with density and an inverse correlation with porosity, but departures from the general trends of correlation can be as high as 2500 m/s. These deviations can be explained by the occurrence of different pore types that form during specific diagenetic phases. Our data set further suggests that commonly used correlations like "Gardner's Law" (Vp-density) or the "time-average equation" (Vp-porosity) should be significantly modified towards higher velocities before being applied to carbonates.
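For reference, the two correlations named here have simple closed forms: Wyllie's time-average equation, 1/V = phi/V_fluid + (1 - phi)/V_matrix, and Gardner's rho = a * Vp^0.25. A sketch using commonly quoted constants and illustrative calcite- and brine-like end-member velocities; the abstract's point is precisely that measured carbonate velocities often depart substantially from these predictions:

```python
def wyllie_vp(phi, v_matrix=6500.0, v_fluid=1500.0):
    """Wyllie time-average Vp (m/s): 1/V = phi/V_fluid + (1 - phi)/V_matrix.
    Default end members are illustrative calcite-like and brine-like values."""
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

def gardner_density(vp):
    """Gardner's relation rho = 0.31 * Vp**0.25 (Vp in m/s, rho in g/cm^3),
    with the commonly quoted constants, calibrated mainly on siliciclastics."""
    return 0.31 * vp ** 0.25

vp = wyllie_vp(0.20)          # time-average prediction at 20% porosity
print(round(vp))              # 3900 m/s with these end members
print(round(gardner_density(vp), 2))
```

A carbonate sample with moldic porosity can measure well above `wyllie_vp(phi)` at the same porosity, which is why the authors recommend shifting these correlations upward for carbonates.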
The velocity measurements of unconsolidated carbonate mud at different stages of experimental compaction show that the velocity increase due to compaction is lower than the velocity increase observed at decreasing porosities in natural rocks. This discrepancy shows that the diagenetic changes that accompany compaction influence velocity more than compaction alone under increasing overburden pressure.
The susceptibility of carbonates to diagenetic changes, that occur far more quickly than compaction, causes a special velocity distribution in carbonates and complicates velocity estimations. By assigning characteristic velocity patterns to the observed diagenetic processes, we are able to link sonic velocity to the diagenetic stage of the rock.
Keywords: Sonic velocity, carbonates, physical properties, porosity, diagenesis, compaction
- Anselmetti, F. S., Eberli, G. P., Sellami, S., and Bernoulli, D., From outcrops to seismic profiles: An attempt to model the carbonate platform margin of the Maiella, Italy. In Abstracts with Programs (Geological Society of America, Annual Meeting, San Diego 1991).
- Biddle, K. V., Schlager, W., Rudolph, K. W., and Bush, T. L. (1992), Seismic Model of a Progradational Carbonate Platform, Picco di Vallandro, the Dolomites, Northern Italy, American Association of Petroleum Geologists Bull. 76, 14–30.
- Biot, M. A. (1956), Theory of Propagation of Elastic Waves in a Fluid-saturated Porous Solid, I. Low Frequency Range, II. Higher Frequency Range, J. Acoust. Soc. Am. 28, 168–191.
- Birch, F. (1960), The Velocity of Compressional Waves in Rocks to 10 Kilobars, Part 1, J. Geophys. Res. 65, 1083–1102.
- Burns, S. J., and Swart, P. K. (1992), Diagenetic Processes in Holocene Carbonate Sediments: Florida Bay Mudbanks and Islands, Sedimentology 39, 285–304.
- Campbell, A. E., and Stafleu, J. (1992), Seismic Modelling of an Early Jurassic, Drowned Platform: The Djebel Bou Dahar, High Atlas, Morocco, American Association of Petroleum Geologists Bull. 76, 1760–1777.
- Christensen, N. I., and Szymanski, D. L. (1991), Seismic Properties and the Origin of Reflectivity from a Classic Paleozoic Sedimentary Sequence, Valley and Ridge Province, Southern Appalachians, Geol. Soc. Am. Bull. 103, 277–289.
- Coyner, K. B. (1984), Effects of Stress, Pore Pressure, and Pore-fluids on Bulk Strain, Velocity and Permeability in Rocks (Ph.D. Thesis, Massachusetts Institute of Technology).
- Crescenti, U., Crostella, A., Donzelli, G., and Raffi, G. (1969), Stratigrafia della serie calcarea dal Lias al Miocene nella regione Marchigiano-Abruzzese, Parte II—Litostratigrafia, Biostratigrafia, Paleogeografia, Mem. Soc. Geol. It. 8, 343–420.
- Dawans, J. M., and Swart, P. K. (1988), Textural and Geochemical Alterations in Late Cenozoic Bahamian Dolomites, Sedimentology 35, 385–403.
- Eberli, G. P., Physical properties of carbonate turbidite sequences surrounding the Bahamas: Implications for slope stability and fluid movements. In Proceedings of the Ocean Drilling Program, Scientific Results 101 (eds. Austin, J. A., Jr., and Schlager, W.) (1988) pp. 305–314.
- Eberli, G. P., and Ginsburg, R. N., Cenozoic progradation of Northwestern Great Bahama Bank, a record of lateral platform growth and sea-level fluctuations. In Controls on Carbonate Platform and Basin Development (SEPM Special Publication No. 44, 1989) pp. 339–351.
- Eberli, G. P., Growth and demise of isolated carbonate platforms: Bahamian controversies. In Controversies in Modern Geology (Academic Press Limited 1991) pp. 231–248.
- Eberli, G. P., Bernoulli, D., Sanders, D., and Vecsei, A. (1993), From aggradation to progradation: The Maiella platform (Abruzzi, Italy). In Atlas of Cretaceous Carbonate Platforms (eds. Simo, J. T., Scott, R. W., and Masse, J.-P.), Amer. Assoc. of Petroleum Geologists Memoir 56, 213–232.
- Enos, P., and Sawatsky, L. H. (1981), Pore Networks in Holocene Carbonate Sediments, J. Sed. Petrol. 51, 961–985.
- Enos, P., and Perkins, R. D. (1979), Evolution of Florida Bay from Island Stratigraphy, Geol. Soc. Am. Bull. 90, 59–83.
- Gardner, G. H. F., Gardner, L. W., and Gregory, A. R. (1974), Formation Velocity and Density: The Diagnostic Basics for Stratigraphic Traps, Geophysics 39, 770–780.
- Gassmann, F. (1951), Elastic Waves through a Packing of Spheres, Geophysics 16, 673–685.
- Hamilton, E. L. (1971), Elastic Properties of Marine Sediments, J. Geophys. Res. 76/2, 579–604.
- Hamilton, E. L. (1980), Geoacoustic Modeling of the Sea-floor, J. Acoust. Soc. Am. 68, 1313–1340.
- Japsen, P. (1993), Influence of Lithology and Neogene Uplift on Seismic Velocities in Denmark: Implications for Depth Conversion of Maps, American Association of Petroleum Geologists Bull. 77, 194–211.
- Kenter, J. A. M., Ginsburg, R. N., Eberli, G. P., McNeill, D. F., and Lidz, B. H. (1991), Mio-Pliocene Sea-level Fluctuations Recorded in Core Borings from the Western Margin of Great Bahama Bank, Abstract, GSA Annual Meeting, San Diego, California.
- Laughton, A. S. (1957), Sound Propagation in Compacted Ocean Sediments, Geophysics 22, 233–260.
- Marion, D., Nur, A., Yin, H., and Han, D. (1992), Compressional Velocity and Porosity in Sand-clay Mixtures, Geophysics 57, 554–563.
- Milholland, P., Manghani, M. H., Schlanger, S. O., and Sutton, G. H. (1980), Geoacoustic Modeling of Deep-sea Carbonate Sediments, J. Acoust. Soc. Am. 68/5, 1351–1360.
- Nur, A., and Simmons, G. (1969), The Effect of Saturation on Velocity in Low Porosity Rocks, Earth and Planet. Sci. Lett. 7, 183–193.
- Nur, A., Marion, D., and Yin, H., Wave velocities in sediments. In Shear Waves in Marine Sediments (Kluwer Academic Publishers 1991) pp. 131–140.
- Rafavich, F., Kendall, C. H. St. C., and Todd, T. P. (1984), The Relationship between Acoustic Properties and the Petrographic Character of Carbonate Rocks, Geophysics 49, 1622–1636.
- Sanders, D. G. K. (1994), The Cenomanian to Miocene Evolution of a Carbonate Platform to Basin Transition: Montagna della Maiella, Abruzzi, Italy (unpubl. Diss. ETH Zürich, Switzerland).
- Schlanger, S. O., and Douglas, R. G., The pelagic ooze-chalk-limestone transition and its implications for marine stratigraphy. In Pelagic Sediments (eds. Hsu, K. J., and Jenkyns, H. C.) (Special Publication Int. Assoc. of Sedimentologists 1, 1974) pp. 117–148.
- Sellami, S., Barblan, F., Mayerat, A.-M., Pfiffner, O. A., Risnes, K., and Wagner, J.-J. (1990), Compressional Wave Velocities of Samples from the NFP-20 East Seismic Reflection Profile, Mém. Soc. Géol. Suisse 1, 77–84.
- Urmos, J., and Wilkens, R. H. (1993), In situ Velocities in Pelagic Carbonates: New Insights from Ocean Drilling Program Leg 130, Ontong Java Plateau, J. Geophys. Res. 98/B5, 7903–7920.
- Vecsei, A. (1991), Aggradation und Progradation eines Karbonatplattform-Randes: Kreide bis Mittleres Tertiär der Montagna della Maiella, Abruzzen, Mitteilungen aus dem Geologischen Institut der Eidgenössischen Technischen Hochschule und der Universität Zürich, 294.
- Vernik, L., and Nur, A. (1992), Petrophysical Classification of Siliciclastics for Lithology and Porosity Prediction from Seismic Velocities, American Association of Petroleum Geologists Bull. 76, 1295–1309.
- Vidlock, S. (1983), The Stratigraphy and Sedimentation of Cluett Key, Florida Bay, M.S. Thesis, University of Connecticut.
- Wang, Z., Hirsche, W. K., and Sedgwick, G. (1991), Seismic Velocities in Carbonate Rocks, J. Can. Petr. Tech. 30, 112–122.
- Wilkens, R. H., Fryer, G. F., and Karsten, J. (1991), Evolution of Porosity and Seismic Structure of Upper Oceanic Crust: Importance of Aspect Ratios, J. Geophys. Res. 96, 17981–17995.
- Wilson, J. L., Carbonate Facies in Geologic History (Springer, New York 1975).
- Wood, A. B. (1941), A Textbook of Sound (Macmillan, New York 1941).
- Wyllie, M. R., Gregory, A. R., and Gardner, G. H. F. (1956), Elastic Wave Velocities in Heterogeneous and Porous Media, Geophysics 21/1, 41–70.
How a Molecular Motor Untangles Protein
News Oct 08, 2015
A marvelous molecular motor that untangles protein in bacteria may sound interesting, yet perhaps not so important. Until you consider the hallmarks of several neurodegenerative diseases — Huntington’s disease has tangled huntingtin protein, Parkinson’s disease has tangled α-synuclein, and Alzheimer’s disease has tangles of tau and β-amyloid. In fact, a similar untangling motor from yeast has already shown effectiveness in mouse and nematode models of Huntington’s disease.
So Aaron Lucius, Ph.D., professor in the University of Alabama at Birmingham Department of Chemistry, is studying the bacterial protein ClpB of E. coli as a steppingstone to expanded research on medically significant models in coming years. The question is: how does ClpB actually do its job of untangling proteins?
“We don’t know how proteins get tangled, but if we can study how proteins get disaggregated, it may have clinical relevance,” Lucius said.
ClpB is one of a vast assortment of similar molecular machines found in all living cells, known as hexameric AAA+ enzymes. They have six subunits that form a hexagon with a hole in the middle, and they burn ATP for energy. While the machines are all similar, the kinds of work they do vary widely — examples include unwinding DNA, helping digest proteins, untangling proteins, cutting microtubules, helping shape plant cells and driving membrane fusion.
ClpB is closely related to the ClpA enzyme of E. coli. Unlike ClpB — which has the job of untangling a protein that has lost its proper shape — ClpA helps to digest unnecessary proteins into small peptide fragments. Proteins are chains of amino acids, linked together like beads on string, and then folded into a precise shape. ClpA is able to grab one end of a protein that has been marked for recycling, and pull it through the central hole of ClpA, like an anchor chain winched in through the hawse hole of a ship. ATP hydrolysis powers that processive pulling, and the unraveled protein chain is pushed into an attached ClpP enzyme, which cuts up the chain “like a molecular paper shredder,” Lucius said.
A previous lab group had garnered evidence that ClpB is also a processive translocase, meaning that it pulls the protein chain all the way through that central hole in a long series of stepwise tugs, but they were forced to introduce an artefact into the ClpB enzyme to do their experiments. Lucius and his fellow UAB researchers are now challenging that model. After finding a way to test ClpB without introducing the artefact, their experimental results show that the ClpB enzyme makes only one or two tugs on the tangled protein, and then lets go.
“Our results support a molecular mechanism where ClpB catalyzes protein disaggregation by tugging and releasing exposed tails or loops,” they wrote in a paper recently published in the Biochemical Journal, similar to how someone would tug at the loose strands of a tangled ball of yarn.
This proposed new paradigm of how ClpB functions may apply to other untangling enzymes. “It will take time to see if it is accepted,” Lucius said.
The study of hexameric AAA+ enzyme function requires sophisticated experimental approaches. “We can’t see the proteins; we have to come up with clever ways to infer what they are doing,” Lucius said.
His lab discovered such a clever technique in 2010 while working with ClpA. But when graduate student Tao Li tried to apply it to ClpB studies, expecting to find the processive translocation that the earlier group had reported, “she did three years of every possible experiment to see if it translocates and found no evidence in support of translocation,” Lucius said. So the UAB lab began to consider alternatives, which led to the finding that the ClpB enzyme made only one or two tugs before releasing the substrate protein. They also tested ClpB with the artefact inserted and found evidence that what had appeared to the previous researchers to be translocation was only nonspecific protein degradation without translocation through the central hole of ClpB.
The experimental approach
ClpB binds to the substrate protein in the presence of an ATP analog that promotes binding but cannot power the enzyme. The bound substrate has a fluorescent label attached to its far end, but that fluorescence is dampened by the binding to ClpB. The mixture is put into one syringe, and a high concentration of ATP and unlabeled substrate is put into another syringe. With the press of a trigger, a piston powered by 120-pounds-per-square-inch nitrogen gas mixes the contents of the two syringes within two-thousandths of a second, and now, in the presence of ATP, the ClpB machine can go to work. This technique is known as fluorescence stopped-flow.
The UAB researchers look for the increased fluorescence when ClpB releases the labeled substrate. If the enzyme is pulling the labeled substrate protein through the central hole of the hexamer, there will be a time lag before the fluorescence increases. That lag will increase as longer substrate proteins are tested. But if the ClpB only tugs once or twice, and then releases the substrate, there will be no lag. Conditions are set so that, after each single ClpB hexamer releases its fluorescently labeled substrate, the enzyme will not bind another because there is an excess of unlabeled substrate. Thus, this fluorescence stopped-flow method shows only a single turnover for each enzyme complex.
When this system was used with ClpA, there was a lag before fluorescence increase, and that lag increased with increased length of the substrate protein. Both those results are consistent with the ClpA enzyme, powered by ATP, pulling the substrate protein through its central hole. With a 127-amino-acid substrate, that lag lasted 10 seconds. When this system was used with ClpB, there was no lag, and the length of the substrate made no difference in how quickly the fluorescent signal increased. Thus, ClpB was releasing the substrate quickly.
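The lag argument lends itself to a small back-of-the-envelope calculation. The 127-amino-acid substrate and 10-second lag are from the text; the per-residue rate and the doubled-length prediction below are illustrative inferences, not reported results:

```python
# Single-turnover lag analysis: a processive translocase shows a lag
# before the fluorescence rise that grows with substrate length; a
# "tug and release" enzyme shows essentially no lag.

def translocation_rate(substrate_length_aa, lag_s):
    """Apparent pulling rate (amino acids per second) implied by a lag."""
    return substrate_length_aa / lag_s

def predicted_lag(substrate_length_aa, rate_aa_per_s):
    """Lag a processive motor would show for a given substrate length."""
    return substrate_length_aa / rate_aa_per_s

# ClpA-like case from the text: 127-amino-acid substrate, 10-second lag
rate = translocation_rate(127, 10.0)
print(f"apparent rate: {rate:.1f} aa/s")        # 12.7 aa/s

# A processive motor would then lag ~20 s on a twice-as-long substrate;
# the ClpB data instead showed no lag at any length.
print(f"predicted lag, 254 aa: {predicted_lag(254, rate):.1f} s")
```

The contrast is the whole point of the assay: length-dependent lag implies translocation, while a length-independent, immediate fluorescence rise implies tug-and-release.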
PISCES : TORPEDINIFORMES : Torpedinidae | CARTILAGINOUS FISH
Description: The marbled electric ray has a rounded, disc-like body with smooth skin and a short, thick tail with a large tail fin. The two dorsal fins are located on the tail; they are almost equal in size and close together. The upper surface is pale brown with darker brown mottling, and the underside is creamy-white. The marbled electric ray is generally much smaller than the electric ray (Torpedo nobiliana), reaching a maximum length of 60cm.
Habitat: The marbled electric ray is usually found on sandy or muddy seabeds at depths between 10-30m, although in the Mediterranean it has been recorded at depths down to 100m. It feeds in a similar way to the electric ray (Torpedo nobiliana), catching bottom-living fish and stunning or killing them with an electric shock before eating them.
Distribution: This is a southern species in the British Isles, and so far it has only been recorded from the southern coasts of Britain and Ireland. Most of the records have been in the summer or autumn suggesting that there is a northward migration from the Mediterranean earlier in the summer.
Similar Species: The only other electric ray that occurs in the waters around Britain and Ireland is the electric ray (Torpedo nobiliana). The two species are easily distinguished by their coloration, which is marbled pale and dark brown in the marbled electric ray and dark greyish-blue to brown in Torpedo nobiliana.
Key Identification Features:
Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record : World Register of Marine Species.
Picton, B.E. & Morrow, C.C. (2016). Torpedo marmorata (Risso, 1810). [In] Encyclopedia of Marine Life of Britain and Ireland. http://www.habitas.org.uk/marinelife/species.asp?item=ZF1220. Accessed on 2018-07-19.
Copyright © National Museums of Northern Ireland, 2002-2015
The subject of this chapter is Einstein’s special relativity theory and what it says about the geometry of flat spacetime. This is not so difficult or abstruse as it sounds; it involves little beyond high school mathematics. A spacetime is simply the mathematical version of a universe that, like our own physical universe, has dimensions both of space and of time. A flat spacetime is a spacetime with no gravity, since gravitation tends to “bend” a spacetime. Flat spacetimes are the simplest kind of spacetimes; they stand in the same relation to curved spacetimes as a flat Euclidean plane does to a curved surface.
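As a reminder of the standard formulas the chapter builds on (textbook results, not quoted from the chapter itself): the geometry of flat spacetime is encoded in the Minkowski interval, and inertial coordinate systems are related by Lorentz transformations.

```latex
% Minkowski interval between two events in flat spacetime:
\Delta s^2 = -c^2\,\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2

% Lorentz transformation relating two inertial coordinate systems
% (boost with speed v along the x-axis), with
% \gamma = 1/\sqrt{1 - v^2/c^2}:
t' = \gamma\left(t - \frac{v\,x}{c^2}\right), \qquad
x' = \gamma\,(x - v\,t), \qquad y' = y, \qquad z' = z
```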
Keywords: Special Relativity, Minkowski Space, Lorentz Transformation, Timelike Vector, Inertial Coordinate System
Planet Earth experienced a global climate shift in the late 1980s on an unprecedented scale, fuelled by anthropogenic warming and a volcanic eruption, according to new research published this week.
Scientists say that a major step change, or ‘regime shift’, in the Earth’s biophysical systems, from the upper atmosphere to the depths of the ocean and from the Arctic to Antarctica, was centred around 1987, and was sparked by the El Chichón volcanic eruption in Mexico five years earlier.
Their study, published in Global Change Biology, documents a range of associated events caused by the shift, from a 60% increase in winter river flow into the Baltic Sea to a 400% increase in the average duration of wildfires in the Western United States.
“We suggest that climate change is not a gradual process, but one subject to sudden increases, with the 1980s shift representing the largest in an estimated 1,000 years”, said co-author Rita Adrian, Professor at the Leibniz-Institute of Freshwater Ecology and Inland Fisheries in Berlin (Germany).
Philip C. Reid, Professor of Oceanography at Plymouth University’s Marine Institute (UK), and Senior Research Fellow at the Sir Alister Hardy Foundation for Ocean Science (UK), is the lead author of the report. “We demonstrate, based on 72 long time series, that a major change took place in the world centred on 1987 that involved a step change and move to a new regime in a wide range of Earth systems”, he said.
“Our work contradicts the perceived view that major volcanic eruptions just lead to a cooling of the world. In the case of the regime shift it looks as if global warming has reached a tipping point where the cooling that follows such eruptions rebounds with a rapid rise in temperature in a very short time. The speed of this change has had a pronounced effect on many biological, physical and chemical systems throughout the world, but is especially evident in the Northern temperate zone and Arctic.”
Over the course of three years, the scientists - drawing upon a range of climate models, using data from nearly 6,500 meteorological stations, and consulting innumerable scientists and their studies round the world - found evidence of the shift across a wide range of biophysical indicators, such as the temperature and salinity of the oceans, the pH level of rivers, the timing of land events, including the behaviour of plants and birds, the amount of ice and snow in the cryosphere (the frozen world), and wind speed changes.
They detected a marked decline in the growth rate of CO2 in the atmosphere after the regime shift, coinciding with a sudden growth in land and ocean carbon sinks – such as new vegetation spreading into polar areas previously under ice and snow. And they found that the annual timing of the regime shift appeared to have moved regionally around the world from west to east, starting with South America in 1984, North America (1985), North Atlantic (1986), Europe (1987), and Asia (1988).
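The paper’s central notion of a “step change” in a long time series can be illustrated with a toy change-point detector that scans for the split maximizing the difference in means before and after it. This is a sketch only; the study’s actual analysis of its 72 time series is far more sophisticated:

```python
# Toy regime-shift detector: pick the split point that maximizes the
# absolute difference between the segment means on either side.

def find_step(series, min_seg=3):
    """Return (index, gap): the split point with the largest jump in mean."""
    best_idx, best_gap = None, 0.0
    for i in range(min_seg, len(series) - min_seg):
        left_mean = sum(series[:i]) / i
        right_mean = sum(series[i:]) / (len(series) - i)
        gap = abs(right_mean - left_mean)
        if gap > best_gap:
            best_idx, best_gap = i, gap
    return best_idx, best_gap

# A stable regime followed by an abrupt shift upward
record = [10.0] * 8 + [13.0] * 8
idx, gap = find_step(record)
print(idx, gap)  # split detected at index 8 with a mean shift of 3.0
```

Real change-point methods add significance testing and allow for trends and autocorrelation, but the core idea is the same: a regime shift is a split point where the series behaves differently before and after.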
These dates coincide with significant shifts to earlier flowering dates for cherry trees in Washington DC, Switzerland, and Japan, and with the first evidence of amphibian extinctions linked to global warming, such as the harlequin frog and golden toad in Central and South America.
Second author Renata E. Hari, Eawag, Dübendorf, Switzerland, said: “The 1980s regime shift may be the beginning of the acceleration of the warming shown by the IPCC. It is an example of the unforeseen compounding effects that may occur if unavoidable natural events like major volcanic eruptions interact with anthropogenic warming.”
The full paper is available to download at: http://onlinelibrary.wiley.com/doi/10.1111/gcb.13106/abstract
Professor Rita Adrian
Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB), Berlin
Phone: 001 775 200 4231
Nadja Neumann/Angelina Tittmann
Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB), Berlin
Phone: 0049 (0)30 64181-975/ -631
The Leibniz-Institute of Freshwater Ecology and Inland Fisheries, IGB, is an independent and interdisciplinary research centre dedicated to the creation, dissemination, and application of knowledge about freshwater ecosystems. Working in close partnership with the scientific community, government agencies, as well as the private sector, guarantees the development of innovative solutions to the most pressing challenges facing freshwater ecosystems and human societies.
Angelina Tittmann | idw - Informationsdienst Wissenschaft
Similarity of squares
Solving this example requires the following knowledge from mathematics:
Next similar examples:
- Similarity n-gon
9-gons ABCDEFGHI and A'B'C'D'E'F'G'H'I' are similar. The area of 9-gon ABCDEFGHI is S1 = 190 dm2 and the diagonal GD is 32 dm long. Calculate the area of 9-gon A'B'C'D'E'F'G'H'I' if G'D' = 13 dm.
The area of a regular 10-gon is 563 cm2. The area of a similar 10-gon is 606 dm2. What is the coefficient of similarity?
A one-meter pole perpendicular to the ground casts a shadow 40 cm long; at the same moment, a house casts a shadow 6 meters long. What is the height of the house?
The area of a square garden is 6/4 of the area of a triangular garden with sides 56 m, 35 m and 35 m. How many meters of fencing are needed to fence the square garden?
The area of a trapezoid is 135 cm2. Sides a, c and height h are in the ratio 6:4:3. How long are a, c and h?
A cube's edge is increased 5 times. How many times larger will its surface area and volume become?
- Car factory
A carmaker now produces 4 more cars a day than last year, so producing 388128 cars will take one full working day less. How many working days were needed to manufacture 388128 cars last year?
- The perimeter
The perimeter of equilateral △PQR is 12. The perimeter of regular hexagon STUVWX is also 12. What is the ratio of the area of △PQR to the area of STUVWX?
In a local supermarket, 3/5 of a kilogram of squid costs 156.00. How much do 4 kilograms of squid cost?
A route is 147 km long. On the first day, the first regiment traveled it at an average speed of 12 km/h and returned at 21 km/h. On the second day, the second regiment traveled the same route there and back at an average speed of 22 km/h. Which regiment took longer?
- Motion problem
From Levíc to Košíc a car travels at a speed of 81 km/h. From Košíc to Levíc another car travels at a speed of 69 km/h. How many minutes before they meet will the cars be 27 km apart?
Ricky bought 9 identical chocolates for 9 Eur. How many euros will he pay for 26 chocolates?
- Forestry workers
43 laborers are employed in the forest planting trees in nurseries. Working 6-hour days, they would finish the job in 35 days. After 11 days, 8 laborers leave. How many days do the remaining laborers need to complete the planting if they work 10 hours a day?
The gross wage was 1430 USD, including a 23% bonus. How many USD was the bonus?
Mr. Billy calculated that excavating a water connection would take him 12 days. His friend could do it in 10 days. Billy worked 3 days alone; then his friend came to help, starting from the other end. On what day after the beginning of the excavation did they meet?
A cyclist covers 88 km in 4 hours. How many kilometers does he cover in 8 hours?
If water flows into the pool through two inlets, it fills completely in 18 hours. The first inlet alone takes 10 hours longer to fill the pool than the second. How long does each inlet take to fill the pool separately?
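Several of these problems reduce to a single similarity ratio. For instance, the pole-and-shadow problem above can be checked numerically (the helper name below is a hypothetical choice; the principle is that heights of objects are proportional to the lengths of the shadows they cast at the same moment):

```python
def height_from_shadow(ref_height_m, ref_shadow_m, shadow_m):
    """Similar triangles: heights scale with shadow lengths."""
    return ref_height_m * (shadow_m / ref_shadow_m)

# A 1 m pole casts a 0.40 m shadow; the house casts a 6 m shadow
house_height = height_from_shadow(1.0, 0.40, 6.0)
print(house_height)  # 15 m (to within floating-point rounding)
```

The same ratio idea gives the coefficient of similarity between two similar polygons from their areas: k = sqrt(S2 / S1), after converting both areas to the same units.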
A car gets 9.2 kilometers to a liter of gasoline. Assuming that gasoline is 100% octane (C8H18), which has a density of 0.69 g/cm3, how many liters of air (21% oxygen by volume at STP) will be required to burn the gasoline for a 1250-km trip? Assume complete combustion.
© BrainMass Inc. brainmass.com
The detailed solution finds the volume of air required to burn the gasoline for the 1250-km trip.
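A hedged sketch of the arithmetic, using standard constants that are assumptions on my part rather than values given in the problem (molar mass of octane ≈ 114.23 g/mol, molar volume ≈ 22.414 L/mol at STP, balanced equation C8H18 + 12.5 O2 → 8 CO2 + 9 H2O):

```python
# Liters of air (21% O2 by volume, STP) needed to burn pure octane
# for a 1250 km trip at 9.2 km per liter.

MOLAR_MASS_OCTANE = 114.23   # g/mol for C8H18 (assumed constant)
MOLAR_VOLUME_STP = 22.414    # L/mol of ideal gas at STP (assumed constant)
O2_PER_OCTANE = 12.5         # mol O2 per mol C8H18 from the balanced equation

fuel_L = 1250 / 9.2                  # liters of octane burned on the trip
fuel_g = fuel_L * 1000 * 0.69        # density 0.69 g/cm^3, 1000 cm^3 per L
mol_octane = fuel_g / MOLAR_MASS_OCTANE
mol_o2 = mol_octane * O2_PER_OCTANE
o2_L = mol_o2 * MOLAR_VOLUME_STP
air_L = o2_L / 0.21                  # air is 21% O2 by volume

print(f"air required: {air_L:.3g} L")  # about 1.1 million liters
```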
Recent Global CH4
Whole-atmosphere monthly mean CH4 concentration based on GOSAT observations

- Recent data -
Monthly mean CH4
CH4 growth over the past year (**): March 2018 - March 2017

(*) The CH4 trend is the value on the CH4 trend line derived by removing averaged seasonal fluctuations from the monthly CH4 time series. Please note that with each new addition of monthly mean CH4 data, the estimated seasonal fluctuations may vary; accordingly, past CH4 trend values can also change slightly.
(**) CH4 growth refers to the increase of the CH4 level on the trend line over the past year.
Showing monthly mean GHG concentrations - global distribution and variations
The project of the Greenhouse gases Observing SATellite "IBUKI" (GOSAT), the world’s first satellite designed specifically for monitoring greenhouse gases from space, is jointly promoted by the Ministry of the Environment, Japan, the National Institute for Environmental Studies, and Japan Aerospace Exploration Agency. The satellite has been in operation since its launch on January 23, 2009.
The above chart shows the whole-atmosphere monthly mean concentration of methane (CH4), calculated using GOSAT data that reflect CH4 levels in all layers of the atmosphere, together with its seasonal oscillation and yearly rise over the analyzed period. It also confirms that the trend line of the whole-atmosphere CH4 mean (average seasonal cycle removed) increases monotonically. The value and the growth of the trend line are important for discussing global warming issues.
Whole-atmosphere monthly mean methane concentration
Characteristics and significance of greenhouse gas observation by GOSAT
Greenhouse gas changes at surface-level monitoring sites and the global CH4 mean based on those observations have long been reported by the World Meteorological Organization and several other meteorological agencies around the world. However, greenhouse gas levels, and thus CH4 levels, are known to vary with altitude from past measurements taken by aircraft. Therefore, for a further understanding of overall trends for CH4 in the atmosphere, it is necessary to know “whole-atmosphere” CH4 mean. Model predictions of whole-atmosphere CH4 mean have appeared in the fifth assessment report by the Intergovernmental Panel on Climate Change, as they are important for predicting the risk of global warming due to rising greenhouse gas levels. This is where CH4 observation by GOSAT comes in useful, as the satellite measurement encompasses levels from the surface to the top of the atmosphere and provides CH4 concentration averaged over an entire atmospheric column (this is referred to as column-averaged CH4 concentration).
Whole-atmosphere monthly mean CH4 concentration based on GOSAT data
The whole-atmosphere mean CH4 concentration was calculated from GOSAT measurements, using observational data collected by the satellite over a period of almost nine years, from May 2009 to January 2018. Over the analyzed period, the monthly mean CH4 concentration rose continuously, with a seasonal fluctuation characterized by a minimum in early- to mid-summer (May to July) and a maximum in late autumn to winter (November to February). In January 2018, the whole-atmosphere mean CH4 concentration reached a record 1824 ppb, and the trend line likewise reached a record 1817 ppb. We note that the whole-atmosphere mean CH4 concentration calculated here is an estimate made with a model-based approach using GOSAT data, since GOSAT can only observe parts of the globe where the local solar altitude is above a specific threshold and the sky is free of clouds1). The figure shown here represents the monthly mean CH4 concentration and the trend line calculated from it2). Although the values shown here are about 40 ppb smaller than those based on surface-level CH4 measurements, this is consistent with the fact that CH4 concentration becomes lower as it is dispersed in the air3).
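Footnote (*) above describes deriving the trend by removing averaged seasonal fluctuations from the monthly series. A minimal stand-in for that step (an illustrative sketch, not NIES's actual algorithm) is a centered 12-month moving average, which cancels any fixed annual cycle:

```python
import math

def trend_12mo(values):
    """Centered 12-month moving average (half weight on the two end months)."""
    out = []
    for i in range(6, len(values) - 6):
        window = (0.5 * values[i - 6]
                  + sum(values[i - 5:i + 6])
                  + 0.5 * values[i + 6])
        out.append(window / 12.0)
    return out

# Two years of synthetic CH4-like data: linear rise plus an annual cycle
raw = [1800 + 0.3 * i + 5 * math.sin(2 * math.pi * i / 12) for i in range(24)]
trend = trend_12mo(raw)
# The annual wiggle cancels exactly, leaving the underlying linear rise
# (about 0.3 per month in this synthetic series).
```

The half-weights on the two end months keep the 13-point window symmetric while giving each calendar month a total weight of one, which is why a strictly periodic annual cycle drops out.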
1) We used the same estimation approach as of the CO₂. More details are in the following link:
2) The result for January 2015 is missing because data were lacking due to an instrument adjustment.
3) Follow the link below for further information on data released by the US National Oceanic and Atmospheric Administration:
About GOSAT data and analysis results:
NIES Satellite Observation Center, GOSAT Project
About GOSAT, onboard sensors, and observation status:
JAXA Space Technology Directorate I
GOSAT-2 Project Team
The GOSAT Project is a joint effort of the Ministry of the Environment (MOE), the National Institute for Environmental Studies (NIES), and the Japan Aerospace Exploration Agency (JAXA)
For further information | <urn:uuid:48c2d388-eefb-4de7-af0e-d7596e61a4e1> | 2.96875 | 1,024 | Knowledge Article | Science & Tech. | 19.889983 | 95,495,588 |
Compile Linux Kernel
At the heart of every Linux distribution is the kernel. A great way to get more familiar with the system is to compile a customized version of the kernel for your own machine. The kernel is relatively easy to build, thanks to the many customization options users can choose from.
To customize the kernel, the command make menuconfig brings up a menu that allows the addition or removal of included drivers. If a system is not recognizing a Bluetooth, Ethernet, wireless, or any other kind of adapter, a quick Google search may tell you exactly where the driver lives in the kernel menus. Just press “Space” to enable the driver and finish building the kernel.
Don’t use this script as a guaranteed and supported way of creating a kernel. Many users will tell you NOT to compile the kernel as root for various reasons, but if you just want a basic understanding, the code should work with minor changes. I would run each line of code individually to ensure there are no errors. I’ve commented out other ways of doing the same task and left them in the code for reference.
Note: If care is not taken on the steps below, the current kernel may be overwritten or the system may fail to boot. As always, please test any of the scripts on this site on a virtual machine. VMWare Server (which is free and web based) is preferred, but VirtualBox is a great alternative. I am currently using Hyper-V, but it requires Windows Server 2008 and integration driver support is very weak.
#!/bin/bash
### -------------------------------------------------------------------- ###
# This script compiles the Linux kernel.
# Location: http://virtualparadise.wordpress.com/2010/04/29/compile-linux-kernel
# Date Created: 4/29/2010
# Date Modified: 4/29/2010
# Permissions Required: Non-root
# Notes: Not every link will be valid or the latest; double
#        check before relying on the code
### -------------------------------------------------------------------- ###

# Kernel version to build (set this to the 2.6.x release you want)
KVER=2.6.x

# Download the kernel source
wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-${KVER}.tar.bz2

# Uncompress the source
tar xvf linux-${KVER}.tar.bz2
#gzip -dc linux-${KVER}.tar.gz | tar xvf -

# Move the kernel source folder
mv linux-${KVER} /usr/src

# Navigate to the kernel folder
cd /usr/src/linux-${KVER}

# Install the required packages
apt-get install kernel-package ncurses-devel mkinitrd-tools
apt-get install initramfs-tools
#yum -y install kernel kernel-package ncurses-devel

# Clean the kernel source
make mrproper

# Configure the kernel
make menuconfig
#make oldconfig

# Build the kernel
make bzImage modules
make modules_install

# This step moves the kernel, bzImage, and System.map to the /boot folder
# and adds an entry to the /boot/grub/menu.lst file
make install

# Create the RAM drive
#mkinitrd -f --preload hv_vmbus --preload hv_storvsc --preload hv_netvsc --preload hv_blkvsc /boot/initrd-${KVER}.img ${KVER}   # (Hyper-V only)
mkinitrd -f /boot/initrd-${KVER}.img ${KVER}
#mkinitramfs -o /boot/initrd-${KVER}.img

# Copy the config file to the boot folder
cp -v .config /boot/config-${KVER}

echo Please reboot the system now.
The book provides theoretical and phenomenological insights on the structure of matter, presenting concepts and features of elementary particle physics and fundamental aspects of nuclear physics.
Starting with the basics (nomenclature, classification, acceleration techniques, detection of elementary particles), the properties of fundamental interactions (electromagnetic, weak and strong) are introduced with a mathematical formalism suited to undergraduate students. Some experimental results (the discovery of neutral currents and of the W± and Z0 bosons; the quark structure observed using deep inelastic scattering experiments) show the necessity of an evolution of the formalism. This motivates a more detailed description of the weak and strong interactions, of the Standard Model of the microcosm with its experimental tests, and of the Higgs mechanism. The open problems in the Standard Model of the microcosm and macrocosm are presented at the end of the book.
Based on the celebrated lectures of the influential particle physicist Giorgio Giacomelli, this volume, now in a new edition, aims to provide the basic theoretical foundations, and phenomenological knowledge of, the structure of matter at the subatomic level.
Author: Sylvie Braibant
Author: Giorgio Giacomelli
Author: Maurizio Spurio
Release Date: 16.11.2011
Original Title: Particelle e interazioni fondamentali
Original Language: Italian
Product type: Paperback
Dimensions: 9.25 x 6.10 x 6.10 inches
Product Weight: 27.41 ounces
An Introduction to Particle Physics
Seller: Dodax EU
Delivery date: between Wednesday, July 25 and Friday, July 27
This discussion is intended to exchange information between customers. If your question is not answered within 48 hours, an expert from Dodax will answer it.
A small, secretive creature with unlikely qualifications for defying gravity may hold the answer to an entirely new way of getting off the ground.
Salamanders—or at least several species of the Plethodontidae family—can do something humans would like to know a lot more about.
“This particular jump is unique in the world,” said graduate researcher Anthony Hessel. “That’s why I think a lot of people are finding this very interesting.”
The Northern Arizona University student calls the move a “hip-twist jump” that powers a “flat catapult,” describing the biomechanics in language the public can access. But the work has caught the attention of a highly technical crowd.
Hessel, who studies muscle physiology and biomechanics, recalled the moment he fully grasped the reach of his findings. An email from a premier journal reached him over the holiday break with the subject line “Science is interested in your work.” The contact arose from his presentation at the Society for Integrative and Comparative Biology symposium. There will likely be more who are interested.
“It’s a new way to get vertical lift for animals,” Hessel said. “Something that is flat on the ground, that is not pushing directly down on the ground, can still get up in the air. I’d say that hundreds of engineers will now toy with the idea and figure out what cool things can be built from it.”
Hessel used high-speed film, a home-built cantilever beam apparatus, some well-established engineering equations and biomechanical analysis to produce the details of how a slippery little amphibian with short legs can propel itself six to 10 times its body length into the air.
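To put “six to 10 times its body length” in perspective, a simple ballistic estimate gives the takeoff speed such a jump requires. The body length and launch angle below are assumptions for illustration, not values from the study, and drag is ignored:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def takeoff_speed(jump_range_m, angle_deg=45.0):
    """Ballistic launch speed needed to cover jump_range_m, ignoring drag.

    Uses the projectile range formula R = v^2 * sin(2a) / g.
    """
    a = math.radians(angle_deg)
    return math.sqrt(G * jump_range_m / math.sin(2 * a))

# Assumed: a 6 cm salamander clearing 8 body lengths (0.48 m)
v = takeoff_speed(8 * 0.06)
print(f"takeoff speed: {v:.2f} m/s")  # roughly 2 m/s
```

Even this crude estimate shows why the energy-transfer question matters: reaching a couple of meters per second from a standing start, without a leg-driven push, demands an efficient catapult-like release of stored elastic energy.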
The key is that the salamander’s legs don’t provide the push that most creatures would require.
“They transfer energy from their torso into the ground in a very special way,” Hessel said. “It’s all about how the energy is transferred into the ground efficiently.”
In describing the movement frame-by-frame from the high-speed film, Hessel said the salamander bends its body, then rapidly pushes that bend, a “C” shape, down through the torso, and this movement can “create a lot of elastic energy.”
“One of the interesting things about the salamander is that the mechanism moves the center of mass in a way that allows this really inefficient-looking mechanism to have a lot of efficiency,” Hessel said.
The next stage of the research is “getting down to the structures of the stiffness properties,” Hessel said. “When you see that there’s more power in the jump that can come from the muscles, then you know there are other places where you have to look, like stored elastic energy, connective tissue stretching and bones moving.”
One of those factors may be the protein titin, an active loader mechanism that is the focus of research by Hessel’s mentor, Regents’ Professor Kiisa Nishikawa. Her interdisciplinary lab group has provided valuable input throughout the project, Hessel said.
For now, the student from Long Island, N.Y., will write and publish his findings to complete his master’s degree, with plans to pursue a doctorate at NAU. Although the salamanders he brought with him from Allegheny College, his undergraduate institution, are not making a return trip to Pennsylvania, the same species is being studied at a lab there to continue the research, which Hessel will oversee himself this summer.
Eric Dieterle | Newswise
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
456 pages, b/w photos, b/w illustrations, b/w maps, tables
Invasive non-native species are a major threat to global biodiversity. Often introduced accidentally through international travel or trade, they invade and colonize new habitats, often with devastating consequences for the local flora and fauna. Their environmental impacts range from damage to resource production (e.g. agriculture and forestry) and infrastructure (e.g. buildings, roads and water supply) to effects on human health, and they consequently can have major economic impacts. It is therefore a priority to prevent their introduction and spread, as well as to control them. Freshwater ecosystems are particularly at risk from invasions and act as landscape corridors that facilitate the spread of invasive species.
A Handbook of Global Freshwater Invasive Species reviews the current state of knowledge of the most notable global invasive freshwater species or groups, based on their severity of economic impact, geographic distribution outside of their native range, extent of research, and recognition of the ecological severity of the impact of the species by the IUCN. As well as some of the very well-known species, A Handbook of Global Freshwater Invasive Species also covers some invasives that are emerging as serious threats. Examples covered include a range of aquatic and riparian plants, insects, molluscs, crustacea, fish, amphibians, reptiles and mammals, as well as some major pathogens of aquatic organisms. This book also includes overview chapters synthesizing the ecological impact of invasive species in fresh water and summarizing practical implications for the management of rivers and other freshwater habitats.
A comprehensive compendium of the major alien plant, animal and pathogen species that threaten riparian, freshwater and brackish ecosystems around the world. Each chapter, authored by leading international experts, ensures a thorough and up-to-date treatment of the history, ecology, distribution, impacts and management of over 30 key species. Taken together, this volume provides an outstanding resource for invasion ecologists, freshwater biologists and catchment scientists.
- Philip E Hulme, Professor of Plant Biosecurity, The Bio-Protection Research Centre, Lincoln University, Christchurch, New Zealand
"Freshwater ecosystems are among the most susceptible to invasion by introduced species. Invasive plants, invertebrates, fishes, amphibians and reptiles, mammals, and pathogens are interacting with other agents of change to threaten biodiversity and disrupt the provision of ecosystem services in these ecosystems worldwide. This important volume brings together insights from 63 contributors to review available knowledge and define new challenges and priorities for research and management. It is sure to stand as the primary source of information on invasive species in freshwater ecosystems for many years to come."
- David M. Richardson, Centre for Invasion Biology, Stellenbosch University, South Africa
"This Handbook is a must-read for anyone seriously concerned with aquatic invasions. Its detailed yet readable accounts of impacts and management approaches for important introduced plants, animals, and parasites will serve as authoritative references for years to come."
- Daniel Simberloff, Nancy Gore Hunger Professor of Environmental Studies, University of Tennessee and Editor-in-Chief, Biological Invasions
1. Invasive Alien Species in Freshwater Ecosystems: A Brief Overview
Part 1: Aquatic and Riparian Plants
2. Alternanthera philoxeroides (Martius) Grisebach (Alligator Weed)
3. Crassula helmsii (T. Kirk) Cockayne (New Zealand Pygmyweed)
4. Eichhornia crassipes (Mart.) Solms-Laubach (Water Hyacinth)
5. Heracleum mantegazzianum Sommier and Levier (Giant Hogweed)
6. Impatiens glandulifera Royle (Himalayan Balsam)
7. Lagarosiphon major (Ridley) Moss ex Wager (Curly Water Weed)
8. Lythrum salicaria L. (Purple Loosestrife)
9. Myriophyllum aquaticum (Vell.) Verdcourt (Parrot Feather)
10. Spartina anglica C.E. Hubbard (English Cord-Grass)
11. Tamarix spp. (Tamarisk, Saltcedar)
Part 2: Aquatic Invertebrates
12. Aedes albopictus Skuse (Asian Tiger Mosquito)
13. An Overview of Invasive Freshwater Cladocerans: Bythotrephes longimanus Leydig As a Case Study
14. Invasive Freshwater Copepods of North America
15. Corbicula fluminea Muller (Asian Clam)
16. Eriocheir sinensis H. Milne-Edwards (Chinese Mitten Crab)
17. Pacifastacus leniusculus Dana (North American Signal Crayfish)
18. Apple Snails
19. Potamopyrgus antipodarum J. E. Grey (New Zealand Mudsnail)
Part 3: Fish
20. Bigheaded Carps of the Genus Hypophthalmichthys
21. Cyprinus carpio L. (Common carp)
22. Gambusia affinis (Baird and Girard) and Gambusia holbrooki Girard (Mosquitofish)
23. Pseudorasbora parva Temminck and Schlegel (Topmouth Gudgeon)
24. Salmo trutta L. (Brown Trout)
Part 4: Amphibians and Reptiles
25. Rhinella marina L. (Cane Toad)
26. Eleutherodactylus coqui Thomas (Caribbean Tree Frog)
27. Rana (Lithobates) catesbeiana Shaw (American Bullfrog)
28. Trachemys scripta (Slider Terrapin)
Part 5: Aquatic and Riparian Mammals
29. Castor canadensis Kuhl (North American Beaver)
30. Myocastor coypus Molina (Coypu)
31. Neovison vison Schreber (American Mink)
Part 6: Aquatic Pathogens
32. Bothriocephalus acheilognathi Yamaguti (Asian Tapeworm)
33. Centrocestus formosanus Nishigori (The Asian Gill-Trematode)
34. Myxobolus cerebralis Hofer (Whirling Disease)
35. Management of Freshwater Invasive Alien Species
Robert A. Francis is a Senior Lecturer in Ecology at King's College London, UK. He has broad research interests in aquatic, riparian and urban ecology and has been secretary of the British Ecological Society special interest group on invasive species since 2008.
A Pragmatic View of the Four-Color Theorem: A Resurrection of the Most Famous False Proof in History
Macalester College, St. Paul, Minnesota
February 2, 1998
Refreshments will be served at 3:45 in Humanities 019.
Perhaps the most famous false proof ever is Kempe's 1879 "proof" of the 4-color theorem. That result states that any map in the plane can be colored using 4 colors so that adjacent countries use different colors. There was a hole in Kempe's reasoning and a proper proof was not found until 1976. But what about the question: Can one program a computer to take a given map in the plane and four-color it? I will show how the ideas of Kempe's proof are really not so bad at all, from a computer's perspective: they lead to some nice algorithms for coloring maps.
|Union College Math Department Home Page|
Learn how to write to a file from a Java program. This tutorial shows how to use the FileWriter and BufferedWriter classes to write data to a file (or, when you need to control the character encoding yourself, an OutputStreamWriter constructed on a FileOutputStream).
In this section of the Java Tutorial you will learn how to write a Java program that writes text to a file. We will use the classes FileWriter and BufferedWriter.
The FileWriter is a class used for writing character files. The constructors of this class assume that the default character encoding and the default byte-buffer size are acceptable. To specify these values yourself, construct an OutputStreamWriter on a FileOutputStream.
The BufferedWriter class is used to write text to a character-output stream, buffering characters so as to provide for the efficient writing of single characters, arrays, and strings.
Here is the code of the Java program that writes text to a file:
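The code listing did not survive in this copy of the tutorial, so the following is a minimal sketch consistent with the surrounding description (writing "Hello Java" to out.txt with a FileWriter wrapped in a BufferedWriter); the class name WriteToFile is illustrative, not from the original:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class WriteToFile {
    public static void main(String[] args) {
        try {
            // FileWriter opens (or creates) out.txt using the default encoding
            FileWriter fileWriter = new FileWriter("out.txt");
            // BufferedWriter buffers characters for efficient writing
            BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
            bufferedWriter.write("Hello Java");
            // Closing the writer flushes the buffer and releases the file
            bufferedWriter.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```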
If you execute the above code, the program will write "Hello Java" into the out.txt file. While creating the object of the FileWriter class, we passed the name of the file, e.g. "out.txt", as a constructor argument.
The BufferedWriter class takes the FileWriter object as a parameter. The write() method of the BufferedWriter class actually writes the data into the text file.
thallophyta (as defined in 1951)
thallophyta - Division of the plant kingdom containing the most primitive forms of plant life. Characterized by a simple plant body (thallus), varying from unicellular, microscopic forms to multicellular forms such as large seaweeds 60 to 70 metres in length. Although the latter show internal tissue differentiation, there is no differentiation of root, stem and leaf as in higher plants. Asexual reproduction is by spores and (except in bacteria) sexual reproduction is by fusion of gametes produced in sexual organs of various types but consisting essentially of single cells. Includes algae, bacteria, fungi, lichens and slime fungi. A very diverse group which in modern systems of classification has been replaced by a number of divisions representing distinct evolutionary lines amongst organisms of simple structure.
Secondary organic aerosol is formed by oxidation of biogenic volatile organic compounds. Besides constitutive emissions such as monoterpenes, trees emit sesquiterpenes, methyl salicylate, green leaf volatiles and other volatile organic compounds when they are exposed to stressors such as biotic attack.
As climate warming may deteriorate the living conditions of trees, which could lead to altered stress-induced emissions, it is important to understand how these emissions affect secondary organic aerosol formation and aerosol-climate couplings. Because the resulting aerosol scatters sunlight and seeds clouds, biotic stress that increases biogenic emissions supports a negative climate feedback, which may already be effective today. Heat and drought, however, can turn the negative feedback into a positive one in forests dominated by de-novo emitters.
Climate change will affect stress-induced emissions from vegetation; these emissions and the secondary organic aerosol they induce have to be considered in future climate scenarios.
Invited by Andreas Held, Atmospheric Chemistry
Invited by G. Gebauer.
Almost four years ago, Keppler et al. (2006) reported from laboratory experiments that living plants, plant litter and the structural plant component pectin emit CH4 to the atmosphere under aerobic conditions. These observations caused considerable controversy amongst the scientific community and the general public because of their far-reaching implications. This was mainly for two reasons: firstly, it is generally accepted knowledge that the reduced compound CH4 can only be produced naturally from organic matter by methanogens in the absence of oxygen, or at high temperatures, e.g. in biomass burning. The fact that no mechanism for an ‘aerobic’ production of CH4 had been identified at the molecular level in plants added to the consternation. Secondly, the first extrapolations from laboratory measurements to the global scale indicated that these emissions could constitute a substantial fraction of the total global emissions of CH4.
After publication of the findings of Keppler et al., their extrapolation procedure was severely criticised, and other up-scaling calculations suggested a lower, though still potentially significant, plant source of CH4 emissions. However, it became clear that without further insight into the mechanism of the 'aerobic' production of CH4, any up-scaling approach would have considerable uncertainties and thus be of questionable value. Therefore, the principal scientific questions are now: whether, by how much, and by what mechanisms CH4 is emitted from dead plant matter and living vegetation. Some subsequent studies could not confirm the original findings of Keppler et al.; however, several more recent studies, including stable isotope studies, have now confirmed CH4 formation from both dead plant tissues and living intact plants. An overview of the current state of the art and the most recent findings will be given in this presentation.
Washington: NASA's venerable Kepler space telescope's count of verified exoplanets has passed the magic 1,000 mark with eight newly validated planets, and the mission has also added 554 candidate planets.
Of more than 1,000 verified planets found, eight are less than twice Earth-size and in their stars' habitable zone, the US space agency said.
Kepler continuously monitored more than 150,000 stars beyond our solar system, and to date has offered scientists an assortment of more than 4,000 candidate planets for further study - the 1,000th of which was recently verified.
Using Kepler data, scientists reached this milestone after validating that eight more candidates spotted by the planet-hunting telescope are, in fact, planets.
The Kepler team also has added another 554 candidates to the roll of potential planets, six of which are near-Earth-size and orbit in the habitable zone of stars similar to our Sun.
"Each result from the planet-hunting Kepler mission's treasure trove of data takes us another step closer to answering the question of whether we are alone in the universe," said John Grunsfeld, associate administrator of NASA's Science Mission Directorate at the agency's headquarters in Washington.
"The Kepler team and its science community continue to produce impressive results with the data from this venerable explorer," said Grunsfeld.
To determine whether a planet is made of rock, water or gas, scientists must know its size and mass. When its mass can't be directly determined, scientists can infer what the planet is made of based on its size.
"With each new discovery of these small, possibly rocky worlds, our confidence strengthens in the determination of the true frequency of planets like Earth," said co-author Doug Caldwell, SETI Institute Kepler scientist at NASA's Ames Research Centre at Moffett Field, California.
"The day is on the horizon when we'll know how common temperate, rocky planets like Earth are," said Caldwell.
With the detection of 554 more planet candidates from Kepler observations conducted May 2009 to April 2013, the Kepler team has raised the candidate count to 4,175.
"We're closer than we've ever been to finding Earth twins around other Sun-like stars. These are the planets we're looking for," said Fergal Mullally, SETI Institute Kepler scientist at Ames who led the analysis of a new candidate catalogue.
The finding was published in The Astrophysical Journal.
Summers are getting hotter and this is coming with a cost — cooling demand.
See how the number of warm summer nights is trending in these cities.
See the hottest, wettest, and coolest Independence Days in these cities.
Summer is the season with the most obvious climate change impact — extreme heat.
Golf courses are adapting to climate change by researching, developing, and installing turfgrasses that are more tolerant of extremes.
Here’s how climate change may affect rapidly intensifying hurricanes.
The 30-year average temperature, known as the meteorological normal, is rising in most locations in the U.S.
A family of key pollinators, hummingbirds, is at risk from climate change.
13 Things That Don't Make Sense
The Most Baffling Scientific Mysteries of Our Time
Book - 2008
Based on Michael Brooks's popular article for New Scientist--one of the most forwarded articles in the magazine's online history--13 Things That Don't Make Sense tackles the most hotly debated topics in science today, from the placebo effect to life on Mars, and shows how these conundrums are changing the way scientists approach their work and why these issues will define science in the twenty-first century. Brooks covers such topics as: the missing universe: Ninety percent of the universe simply does not exist--at least, not in any detectable form. Will we find a way to identify this dark matter, or will Isaac Newton's laws of universal gravitation be proven incorrect? ; the wow signal: In 1977, an astronomer detected a radiation blast, with no known origin, that may have been a transmission from an alien civilization. Debate has raged ever since, but is there any way to know for certain? ; cold fusion: Theoretically, it's impossible. Experimentally, it works. It might also solve our energy crisis for good. How can we harness it?
Publisher: New York : Doubleday, c2008
Edition: 1st ed
Branch Call Number: 500 B7916t 2008
Characteristics: 240 p. ; 25 cm
In another advance at the far frontiers of timekeeping by National Institute of Standards and Technology (NIST) researchers, the latest modification of a record-setting strontium atomic clock has achieved precision and stability levels that now mean the clock would neither gain nor lose one second in some 15 billion years*--roughly the age of the universe.
Precision timekeeping has broad potential impacts on advanced communications, positioning technologies (such as GPS) and many other technologies. Besides keeping future technologies on schedule, the clock has potential applications that go well beyond simply marking time. Examples include a sensitive altimeter based on changes in gravity and experiments that explore quantum correlations between atoms.
JILA's strontium lattice atomic clock now performs better than ever because scientists literally "take the temperature" of the atoms' environment. Two specialized thermometers, calibrated by NIST researchers and visible in the center of the photo, are inserted into the vacuum chamber containing a cloud of ultracold strontium atoms confined by lasers.
As described in Nature Communications,** the experimental strontium lattice clock at JILA, a joint institute of NIST and the University of Colorado Boulder, is now more than three times as precise as it was last year, when it set the previous world record.*** Precision refers to how closely the clock approaches the true resonant frequency at which the strontium atoms oscillate between two electronic energy levels. The clock's stability-- how closely each tick matches every other tick--also has been improved by almost 50 percent, another world record.
The JILA clock is now good enough to measure tiny changes in the passage of time and the force of gravity at slightly different heights. Einstein predicted these effects in his theories of relativity, which mean, among other things, that clocks tick faster at higher elevations. Many scientists have demonstrated this, but with less sensitive techniques.****
"Our performance means that we can measure the gravitational shift when you raise the clock just 2 centimeters on the Earth's surface," JILA/NIST Fellow Jun Ye says. "I think we are getting really close to being useful for relativistic geodesy."
Relativistic geodesy is the idea of using a network of clocks as gravity sensors to make 3D precision measurements of the shape of the Earth. Ye agrees with other experts that, when clocks can detect a gravitational shift at 1 centimeter differences in height--just a tad better than current performance--they could be used to achieve more frequent geodetic updates than are possible with conventional technologies such as tidal gauges and gravimeters.
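As a rough check on the 2-centimetre figure, the weak-field gravitational time-dilation formula Δf/f ≈ gΔh/c² gives a fractional frequency shift at the level of the clock's reported uncertainty. A minimal sketch (the constants are standard textbook values, not taken from the article):

```java
public class GravitationalShift {
    public static void main(String[] args) {
        double g = 9.81;        // surface gravity, m/s^2
        double deltaH = 0.02;   // 2 cm height change, in metres
        double c = 2.998e8;     // speed of light, m/s
        // Fractional frequency shift between the two heights
        double shift = g * deltaH / (c * c);
        System.out.printf("%.2e%n", shift); // on the order of 2e-18
    }
}
```

That the result, about 2 × 10⁻¹⁸, sits right at the clock's total uncertainty is exactly why a 2 cm height change is at the edge of detectability.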
In the JILA/NIST clock, a few thousand atoms of strontium are held in a 30-by-30 micrometer column of about 400 pancake-shaped regions formed by intense laser light called an optical lattice. JILA and NIST scientists detect strontium's "ticks" (430 trillion per second) by bathing the atoms in very stable red laser light at the exact frequency that prompts the switch between energy levels.
The JILA group made the latest improvements with the help of researchers at NIST's Maryland headquarters and the Joint Quantum Institute (JQI). Those researchers contributed improved measurements and calculations to reduce clock errors related to heat from the surrounding environment, called blackbody radiation. The electric field associated with the blackbody radiation alters the atoms' response to laser light, adding uncertainty to the measurement if not controlled.
To help measure and maintain the atoms' thermal environment, NIST's Wes Tew and Greg Strouse calibrated two platinum resistance thermometers, which were then installed in the clock's vacuum chamber in Colorado. Researchers also built a radiation shield to surround the atom chamber, which allowed clock operation at room temperature rather than much colder, cryogenic temperatures.
"The clock operates at normal room temperature," Ye notes. "This is actually one of the strongest points of our approach, in that we can operate the clock in a simple and normal configuration while keeping the blackbody radiation shift uncertainty at a minimum."
In addition, JQI theorist Marianna Safronova used the quantum theory of atomic structure to calculate the frequency shift due to blackbody radiation, enabling the JILA team to better correct for the error.
Overall, the clock's improved performance tracks NIST scientists' expectations for this area of research, as described in "A New Era in Atomic Clocks" at http://www.
The JILA research is supported by NIST, the Defense Advanced Research Projects Agency and the National Science Foundation.
* For the general public, NIST converts an atomic clock's systematic or fractional total uncertainty to an error expressed as 1 second accumulated over a certain minimum length of time. That is calculated by dividing 1 by the clock's systematic uncertainty, and then dividing that result by the number of seconds in a year (31.5 million) to find the approximate minimum number of years it would take to accumulate 1 full second of error. The JILA clock has reached a higher level of precision (smaller uncertainty) than any other clock.
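The conversion described in this footnote can be made concrete. A minimal sketch, using the 2 × 10⁻¹⁸ total uncertainty reported for this clock; the seconds-per-year value follows the footnote's own approximation:

```java
public class ClockYears {
    public static void main(String[] args) {
        double fractionalUncertainty = 2e-18; // reported systematic uncertainty
        double secondsPerYear = 31.5e6;       // approximate seconds in a year
        // Seconds of operation per accumulated second of error, converted to years
        double years = (1.0 / fractionalUncertainty) / secondsPerYear;
        System.out.printf("~%.1f billion years%n", years / 1e9);
    }
}
```

Dividing 1 by the uncertainty gives about 5 × 10¹⁷ seconds per accumulated second of error, i.e. roughly 15.9 billion years, consistent with the "some 15 billion years" quoted in the article.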
** T.L. Nicholson, S.L. Campbell, R.B. Hutson, G.E. Marti, B.J. Bloom, R.L. McNally, W. Zhang, M.D. Barrett, M.S. Safronova, G.F. Strouse, W.L. Tew and J. Ye. Systematic evaluation of an atomic clock at 2 × 10⁻¹⁸ total uncertainty. Nature Communications, April 21, 2015.
*** See 2014 NIST Tech Beat article, "JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability," at http://www.
**** Another NIST group demonstrated this effect by raising the quantum logic clock, based on a single aluminum ion, about 1 foot. See 2010 NIST news release, "NIST Pair of Aluminum Atomic Clocks Reveal Einstein's Relativity at a Personal Scale," at http://www.
Laura Ost | EurekAlert!
In a key step to defend Earth from potentially devastating risks of near-Earth objects (NEOs), asteroids and comets, whose orbits come within 30 million miles of Earth, NASA has released a federal planning document.
The US space agency along with the Office of Science and Technology Policy, the Federal Emergency Management Agency and several other governmental agencies have collaborated on this federal planning document for NEOs, and have charted five overarching strategic goals to reduce the risk of NEO impacts through improved understanding, forecasting, prevention and emergency preparedness.
The 20-page document, titled “National Near-Earth Object Preparedness Strategy and Action Plan”, aims to organise and coordinate NEO-related efforts within the federal government during the next 10 years to ensure that the nation can more effectively respond to such an event, a low-probability but very high-consequence natural disaster.
“The nation already has significant scientific, technical and operational capabilities that are relevant to asteroid impact prevention,” Lindley Johnson, NASA’s planetary defence officer said in a statement. “Implementing the National Near-Earth Object Preparedness Strategy and Action Plan will greatly increase our nation’s readiness and work with international partners to respond effectively, should a new potential asteroid impact be detected,” Johnson added.
The action plan includes enhancing NEO detection, tracking, and characterisation capabilities; improving NEO modelling prediction, and information integration. It will also develop technologies for NEO deflection and disruption missions, increase international cooperation on NEO preparation, as well as establish NEO impact emergency procedures and action protocol, the statement said.
Achieving these five goals will, for a very modest government endeavour, dramatically increase the nation’s preparedness for addressing the NEO hazard and mitigating any threat, the statement said. NASA has been studying NEOs since the 1970s. The agency initiated its impact hazard mitigation efforts with a project commonly called “Spaceguard” in the late 1990s to begin to search for them.
NASA now participates as a key member in both the International Asteroid Warning Network (IAWN) and the asteroid Space Mission Planning and Advisory Group, endorsed by the United Nations Committee on the Peaceful Uses of Outer Space (UN-COPUOS) as the combined response for all space-capable nations to address the NEO impact hazard. To better organise US efforts, NASA also established the Planetary Defence Coordination Office in 2016. To date, NASA-sponsored NEO surveys have provided over 95 per cent of all NEO discoveries. | <urn:uuid:95696ba0-22b0-41a5-981b-a23839f04afe> | 3.203125 | 525 | News Article | Science & Tech. | 6.86151 | 95,495,776 |
Using macrophyte growth forms as a detection and interpretation tool for aquatic plant communities was developed by Mäkirinta (1978) and Den Hartog & Van der Velde (1988). In this system, macrophytes are divided into groups according to the form and structure of leaves and roots, along with their aquatic status (emergent or submerged). Although species-based assessments are more commonly used, growth forms that summarize diverse taxa could be a better alternative for understanding the habitat structure of stream reaches, since they may provide added value for the analysis and interpretation of aquatic plant communities (Cadotte, Carscadden & Mirotchnick 2011). We investigated 40 restored stream reaches in Germany (see Appendix S1, Supporting Information). Stream sizes varied between 9 and 2530 km² in catchment area, apart from the Rhine, with over 152,000 km². Sixteen reaches are located in the German lowlands (altitudes below 200 m above sea level), and 24 reaches are located in the lower mountainous areas (altitudes between 200 and 400 m a.s.l.). The reaches had been subject to morphological restoration (see Appendix S1, Supporting Information) that targets the “(morphological) natural state prior to human pressure”. This reference state (Leitbild) is defined in LUA NRW (1999, 2001). Guided by this reference, the physical restoration measures focused on an improvement of the in-stream habitats at the reach scale. The restoration measures did not address the catchments of the rivers nor water quality issues. The lengths of the restored reaches varied between 100 and 8000 m, and restoration had been carried out between 1 and 13 years previously (mean 5 years). Each restored reach was compared to an unrestored reach in the same river, several hundred metres upstream. The unrestored reach was selected under the precondition that it was morphologically similar to the restored reach before restoration. Assessed by the German standard river habitat survey (LUA NRW 1998), the restored reaches are more natural than the unrestored reaches (Appendix S1, Supporting Information). This sampling design allows for a space-for-time substitution, because no before-restoration data are available for these sites. In the Hase, Niers, Inde and Rur rivers, two restoration measures were investigated and compared to a single unrestored reach upstream. According to German standards, the water quality of all sites was in good condition, and no point sources occur between the unrestored and restored paired reaches. Macrophyte sampling was performed according to the German standard method (Schaumburg et al. 2005a,b) in summer 2007, and in a few rivers in the summers of 2006 and 2009. Here, a 100-m reach was surveyed for macrophytes by wading the river in transects and walking along the riverbank.
Though it is not being communicated, the accelerating data suggest that the state of the climate is set up for feedback-driven runaway global climate change.
Runaway climate change is not a scientifically recognized term, but it is by far the greatest danger of global warming. A runaway state is the result of greater locked-in climate change commitment, plus many amplifying feedbacks that are triggered by global warming.
If planetary feedback emissions are increasing to the extent of driving atmospheric GHG levels up faster, because of added locked-in commitment, then for policy making we should consider that a state of committed climate change runaway exists (or at least an extreme, zero-tolerance risk exists). A sudden jump or accelerating global temperature increase at the same time makes the situation more definite. We are in that state in 2016.
The situation in which 'runaway' is generally used in the science is the runaway greenhouse effect, which results in a dead planet and applies to Venus. The 'runaway' climate change situation in the science is 'runaway carbon dynamics', and this is only to be found in the IPCC 2001 3rd assessment under 'large scale singularities' (above). It is not included in the 2014 IPCC 5th assessment. Runaway carbon dynamics means amplifying carbon feedbacks of CO2 and methane from the heated-up planet, or weakening of the land or ocean carbon sinks. However, feedback emissions of nitrous oxide are also caused by global warming, so the more complete term would be runaway GHG dynamics.
An Oct 2005 presentation (at Yale) by the IPCC Chair R Pachauri included 'runaway carbon dynamics' in a list of singular events.
'Instances of possible singular events
• Breakdown of the thermohaline circulation
• Disintegration of the West Antarctic ice sheet
• Shift in mean climate towards an El Nino like state
• Runaway carbon dynamics - reduced sink capacity, release of methane from hydrates, carbon from permafrost
• Rearrangement of biome distribution
Such events can overwhelm our response strategies'
From the very start the big concern about global warming has been the possibility of a 'runaway' global warming and climate change.
'Runaway' (self-accelerating) global heating and climate change is the planetary tipping point of many tipping points combined, and the many Arctic amplification tipping points are self- and inter-reinforcing. Ultimate vicious cycles.
This is the greatest single danger from global warming to the survival of humanity and also the survival of potentially almost all life on the planet. With all climate and ocean indicators accelerating (2016) humanity and life are in extreme peril from runaway.
A global heating feedback event wiping out almost all life we know is possible, because it happened 250 million years ago in the End Permian extinction event and 55 million years ago with the Paleocene-Eocene Thermal Maximum (PETM). Current research confirms both of these extinction events were driven by very large emissions of carbon to the atmosphere.
The PETM is the closest distant past analog to our GHG emissions global warming situation today. Research published October 2013 by Morgan Schaller and James Wright leads to their definite finding that following a doubling in carbon dioxide levels, the surface of the ocean turned acidic over a period of weeks or months and global temperatures rose by 5 degrees centigrade – all in the space of about 13 years. Scientists had previously thought this process happened over 10,000 years.
These mass extinction events involved the emission of an enormous amount of carbon as CO2 and methane. The Arctic stores several times the atmospheric amount of methane, frozen in permafrost and in sub-seafloor solid methane gas hydrate. The permafrost is thawing as the Arctic temperature rapidly increases (Arctic amplification). Arctic methane hydrate is destabilizing in at least three locations, mainly a process that has been going on for a long time, but one that ocean warming will make worse.
It is, or should be, treated as a zero-tolerance risk over the very long time frame into the future.
Right now we are at very high risk of a committed runaway situation, meaning we are committing ourselves and all life to a rapid, accelerating global heating that we could not possibly stop. James Hansen has been warning about this for many years.
Runaway results from the combination of multiple triggered amplifying large feedbacks and climate change commitment.
Runaway includes methane feedback emissions- especially the enormous stores of Arctic carbon as methane emitters. Carbon dioxide and nitrous oxide feedback GHG sources are also huge. In runaway all three would be self reinforcing and accelerating.
Methane has a global warming effect 86 times that of CO2 for 20 years after emission.
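That multiplier (a global warming potential, or GWP) is how methane emissions get converted into CO2-equivalent figures. A minimal sketch of the arithmetic, assuming the 20-year GWP of 86 quoted above (the function name is my own):

```python
# Convert a methane emission to CO2-equivalent using a global warming
# potential (GWP) multiplier. The 20-year GWP for methane is ~86.
def co2_equivalent(methane_tonnes, gwp=86):
    """Return the CO2-equivalent (tonnes) of a methane emission."""
    return methane_tonnes * gwp

# 1 tonne of methane counts as 86 tonnes of CO2 over a 20-year horizon.
print(co2_equivalent(1))    # 86
print(co2_equivalent(2.5))  # 215.0
```

Swapping in a 100-year GWP (a smaller multiplier) shows why the chosen time horizon matters so much in these comparisons.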
Methane, having increased two and a half times since industrialization, stalled around 2000, but since 2006 it has been on a renewed sustained increase, and scientists think this increase is due to planetary methane feedback emissions. They think the methane is coming from warming tropical and subarctic wetland peat.
It seems most of the methane is being emitted from warming tropical wetland, but the largest increase is from Far North wetland peat.
The highest atmospheric methane concentration on the planet is recorded at Lac La Biche, Alberta, right on the southern edge of Canada's vast wetlands.
This use of the word runaway is not a scientific term, but its basis in science is definite.
This 'runaway' means the situation caused by positive (bad) feedbacks in which global warming accelerates the rate of global warming totally beyond any capacity of human control.
When describing this situation the scientists may use the terms of rapid global warming and abrupt global climate change.
Tipping points, irreversible impacts, and singularities are also scientific terms that apply.
James Hansen writes: '"Runaway greenhouse effect" has several meanings ranging from, at the low end, global warming sufficient to induce out-of-control amplifying feedbacks such as ice sheet disintegration and melting of methane hydrates, to, at the high end, a Venus-like hothouse with crustal carbon baked into the atmosphere and surface temperature of several hundred degrees. Between these extremes is the "moist greenhouse", which occurs if the climate forcing is large enough to make H2O a major atmospheric constituent (Kasting, 1988). In principle, an extreme moist greenhouse might cause an instability with water vapor preventing radiation to space of all absorbed solar energy, resulting in very high surface temperature and evaporation of the ocean (Ingersoll, 1969). Our simulations indicate that no plausible human-made greenhouse gas forcing can cause an instability and runaway greenhouse effect as defined by Ingersoll (1969).' (Climate Sensitivity, Sea Level, and Atmospheric CO2, Sept 2013)
The runaway climate change that we're talking about results from multiple Arctic positive feedbacks. We are talking about methane, which is 72 times more powerful as a global warming greenhouse gas than CO2 over a 20-year period. In the case of a large sustained methane emission from the Arctic, the methane has 100 times the effect of CO2 over a 10-year time frame.
The runaway Arctic positive feedbacks:
- Loss of Arctic snow and summer sea ice cooling albedo
- Methane emissions from warming sub Arctic peat rich wetlands
- Methane and carbon dioxide emissions from thawing permafrost
- Nitrous oxide emissions from thawing permafrost
- Methane emissions from sub sea floor frozen solid methane gas hydrate
Each one of these is a powerful positive feedback in increasing the rate of global warming, as well as of course Arctic warming.
Because they all occur in the same region of the Arctic, they will each reinforce all the others, a domino effect called cascading feedbacks.
The Arctic warms more rapidly than the rest of the planet. (Arctic polar amplification).
The Arctic summer sea ice has been collapsing past its tipping point since 2007.
The rapidly warming Arctic is already emitting methane from all the above sources. As global warming is committed to increase and Arctic warming increases faster, we are now in a runaway planetary emergency situation.
CLIMATE EMERGENCY INSTITUTE
The Health and Human Rights Approach to Climate Change
Feb 2017 dynamics of runaway global warming explained (page 15)
IPCC TAR 2001 Ch 19, Large-Scale Singularities
Dec 2017 Runaway dynamic | <urn:uuid:3374991f-3ec1-4ec2-802b-1e2e72d81873> | 3.328125 | 1,713 | Knowledge Article | Science & Tech. | 30.316151 | 95,495,819 |
Two recent studies on past temperatures… Pliocene Warmth, Polar Amplification, and Stepped Pleistocene Cooling Recorded in NE Arctic Russia Understanding the evolution of Arctic polar climate from the protracted warmth of the middle Pliocene into the earliest glacial cycles in the Northern Hemisphere has been hindered by the lack of continuous, highly resolved Arctic time…
Peter Sinclair notes:
Dear Mainstream Media,
Does the phrase “Biggest Story of the Millennium” mean anything to you?
Let’s play word association. For instance, if I say, “trees”, you might say…. “forest”.
C’mon guys. I can’t do this all by myself.
This video is to promote general awareness of the science of climate change. It was edited and narrated by @ryanlcooper, using illustrations from around the web. Find more of my stuff at http://www.ryanlouiscooper.com. It was inspired largely by something David Roberts wrote: http://grist.org/article/2010-08-09-e… Find David at http://grist.org/author/david-roberts/ and @drgrist.
BBC: The Arctic seas are being made rapidly more acidic by carbon-dioxide emissions, according to a new report. Scientists from the Arctic Monitoring and Assessment Programme (AMAP) monitored widespread changes in ocean chemistry in the region. They say even if CO2 emissions stopped now, it would take tens of thousands of years for Arctic Ocean chemistry to…
Dr. Ralph Keeling on atmospheric Oxygen and Carbon Dioxide. Ralph Keeling is the current program director of the Scripps CO2 Program. He is also a Professor and the Principal Investigator for the Atmospheric Oxygen Research Group at SIO.
Oxygen levels are decreasing globally due to fossil-fuel burning. The changes are too small to have an impact on human health, but are of interest to the study of climate change and carbon dioxide. These plots show the atmospheric O2 concentration relative to the level around 1985. The observed downward trend amounts to 19 ‘per meg’ per year. This corresponds to losing 19 O2 molecules out of every 1 million O2 molecules in the atmosphere each year. http://scrippso2.ucsd.edu/
Station Daily Averages of CO2 and O2/N2 http://scrippso2.ucsd.edu/plots
A documentary on Al Gore’s campaign to make the issue of global warming a recognized problem worldwide. http://www.imdb.com/title/tt0497116
Since Paramount has taken down the video Thu Sep 5, 2013, this video is no longer available. Alternative watch TED Talk Al Gore New Thinking On The Climate Crisis.
“Local officials and enviros are making plans for a post-global warming America. And so are profit-seeking companies.”
On the opening morning of the inaugural National Adaptation Forum, I was eating breakfast at a stand-up table in the exhibition hall when a mustachioed man of middle age plopped his cherry Danish next to my pile of conference literature, a mess of pamphlets and reports with titles like Getting Climate Smart: A Water Preparedness Guide for State Action, and Successful Adaptation: Linking Science and Policy in a Rapidly Changing World. The nametag dangling above the Danish identified the man as Michael Hughes, director of public works for the Chicago suburb of Elmhurst. Like many attendees, Hughes was part of a new national emergency-response team without being fully aware of it. He had arrived in Denver knowing little about “adaptation,” the anemic catchall for attempts to fortify our natural and built environments against the epochal temperature spike in progress.
“I hadn’t even heard the term ‘adaptation’ a month ago,” he told me, taking a bite.
Follow National Geographic photographer James Balog across the Arctic as he deploys time-lapse cameras designed for one purpose: to capture a multi-year record of the world’s changing glaciers. Visit IMDB for more film details. Donate to the project chasingice.com Update The old movie version is no longer available, updated the video URL to a…
November 06, 2008: SFU Canada Research Chairs Seminar Series “Glacier and ice-sheet dynamics in a warming world”
Dr. Gwenn Flowers, Canada Research Chair in Glaciology / Department of Earth Sciences
The fossil fuel industry is the 1% of the 1%, the richest enterprise in human history
Via Memo Share: You’ve probably wondered why we as a nation cannot act on climate change given that at least 98 percent of the world’s non-big-oil-financed scientists agree that it is manmade and we may be only two summers from an ice-free Arctic.
By James West “I don’t see what all those environmentalists are worried about,” sneers your great uncle Joe. “Carbon dioxide is harmless, and great for plants!” OK. Take a deep breath. If you’re not careful, comments like this can result in dinner-table screaming matches. Luckily, we have a secret weapon: A flowchart that will help you calmly…
Dr. Jennifer Francis – Rutgers University “Extreme weather events are increasing in frequency and intensity all around the northern hemisphere.” Concurrently, Arctic sea ice is in an accelerating decline, the entire surface of Greenland melted for the first time in at least 150 years, glaciers are disappearing around the world, and snow cover on Arctic…
Public lecture by Distinguished Senior Scientist at the Climate Analysis Section at the National Centre for Atmospheric Research (NCAR), Kevin Trenberth held at UNSW on October 16, 2012. “Heavy precipitation days are increasing even in places where precipitation is decreasing.” Framing the way to relate climate extremes to climate change Abstract The atmospheric and ocean environment…
SkepticalScience: Human greenhouse gas emissions have continued to warm the planet over the past 16 years. However a persistent myth has emerged in the mainstream media challenging this. As a simple illustration of why the myth is wrong this video clarifies how the interplay of natural and human factors have affected the short-term temperature trends,… | <urn:uuid:1fe7eadb-4800-461e-9013-3eb9d2ec626b> | 3.015625 | 1,333 | Content Listing | Science & Tech. | 44.291447 | 95,495,832 |
By José G Vargas
This is a book that the author wishes had been available to him when he was a student. It reflects his interest in knowing (like expert mathematicians) the most relevant mathematics for theoretical physics, but in the style of physicists. This means that one is not dealing with the study of a collection of definitions, remarks, theorems, corollaries, lemmas, etc. but a narrative — much like a story being told — that does not impede sophistication and deep results.
It covers differential geometry far beyond what general relativists realize they need to know. And it introduces readers to other areas of mathematics that are of interest to physicists and mathematicians, but are largely overlooked. Among these is Clifford Algebra and its uses in conjunction with differential forms and moving frames. It opens new research vistas that expand the subject matter.
In an appendix on the classical theory of curves and surfaces, the author slashes not only the main proofs of the traditional approach, which uses vector calculus, but even existing treatments that also use differential forms for the same purpose.
Read or Download Differential Geometry for Physicists and Mathematicians:Moving Frames and Differential Forms: From Euclid Past Riemann PDF
Best geometry & topology books
Based on a series of lectures for adult students, this lively and entertaining book proves that, far from being a dusty, dull subject, geometry is in fact full of beauty and fascination. The author's infectious enthusiasm is put to use in explaining many of the key concepts in the field, starting with the Golden Number and taking the reader on a geometrical journey via Shapes and Solids, through the Fourth Dimension, winding up with Einstein's Theories of Relativity.
This unique book on modern topology looks well beyond traditional treatises and explores spaces that may, but need not, be Hausdorff. This is essential for domain theory, the cornerstone of semantics of computer languages, where the Scott topology is almost never Hausdorff. For the first time in a single volume, this book covers basic material on metric and topological spaces, advanced material on complete partial orders, Stone duality, stable compactness, quasi-metric spaces and much more.
Differential geometry and topology are essential tools for many theoretical physicists, particularly in the study of condensed matter physics, gravity, and particle physics. Written by physicists for physics students, this text introduces geometrical and topological methods in theoretical physics and applied mathematics.
Stiefel manifolds are an interesting family of spaces much studied by algebraic topologists. These notes, which originated in a course given at Harvard University, describe the state of knowledge of the subject, as well as the outstanding problems. The emphasis throughout is on applications (within the subject) rather than on theory.
Additional info for Differential Geometry for Physicists and Mathematicians:Moving Frames and Differential Forms: From Euclid Past Riemann
Differential Geometry for Physicists and Mathematicians:Moving Frames and Differential Forms: From Euclid Past Riemann by José G Vargas | <urn:uuid:3ccc1015-0d40-4672-80c2-a0f1e9da2d7e> | 2.53125 | 679 | Product Page | Science & Tech. | 18.80413 | 95,495,839 |
Mars could soon become a realistic place to grow plants and fresh food. Scientists and researchers at the University of Arizona have developed inflatable greenhouses that could be used in space. This could enable longer explorations of potential new territory on other planets.
The basic necessities for fueling these greenhouses in space are water and carbon dioxide. Mars' atmosphere is a little over 95 percent CO2. However, because it is much thinner than Earth's, the plants would get a supplement of CO2 from what the astronauts exhale. That thin atmosphere has also left surface water frozen. The ice would have to be melted, or scientists might be able to find water deeper beneath the planet's surface.
In the same vein, greenhouses likely wouldn't last on Mars' surface. Earth's atmosphere protects plants as they grow, and Mars doesn't offer the same protection. Therefore, these greenhouses would need to be housed in protective environments, with a way to constantly deliver light to the plants. They would likely be built underground, with LED lighting to mimic sunlight.
Despite Mars’ atmosphere being well below the optimal conditions for growing plants and food, these greenhouses will attempt to copy exactly how they’re grown on Earth. NASA built its first-ever prototype of the Mars greenhouses back in the fall of 2015. Some of the questions they had back then they’re still trying to figure out.
Would there be enough water on the planet for both human consumption and plant growth? If astronauts are able to find more water deeper in the planet, there’s certainly a possibility. What type of plants and other crops should be grown in these greenhouses? Could there be potential issues when it comes to balancing out human life and the amount of plants needed for fresh oxygen and food? While there’s plenty to ponder, the ability to create inflatable greenhouses years later could provide an abundance of growth on Mars.
Building greenhouses is not the first attempt astronauts and scientists have made at growing fresh food in space. Back in 2014, NASA created the "Veggie" plant growth system with success, producing fresh romaine lettuce that was ready to eat. The greenhouses on Mars would be an expansion of this project, giving astronauts more areas to grow plants. It also creates a more automatic and natural process of farming.
Astronaut Peggy Whitson continues to break records in space. She took part in the first-ever 4K live stream from her spacecraft, which came just a few days after her 534th day in space. That mark, on April 24th, broke the record for most cumulative days in space by a US astronaut. With the potential to harvest fresh food on Mars, scientists and other astronauts could carry out long stints of research, and the planet would become that much more habitable.
Researchers from marine life advocates Oceana have discovered a surprising new world under the sea near Sicily.
Sweden's aggressive target of generating over 40 terawatt-hours of renewable energy by 2030 could be reached nearly a decade early. A massive amount of wind power projects could hit a snag in market value with subsidies, but SWEA could push to close those up by the end of the year.
Starbucks is ramping up its sustainability efforts with a plan to eradicate the use of plastic straws in its assembly line. | <urn:uuid:889dc2ff-3d1d-40fa-8723-6369c50c0e92> | 4.0625 | 710 | News Article | Science & Tech. | 50.868311 | 95,495,848 |
A growing lava stream threatening homes on Hawaii's Big Island is expanding as it heads toward a small rural town.
Back when he was working on his Ph.D. in geophysics at the University of Chicago in the 1980s, Dork Sahagian took a break one day from studying lava flows to attend a lecture on how raindrops form in clouds.
An unconventional research method allows for a new look at geologic features on Earth, revealing that some of the things we see on Mars and other planets may not be what they seem.
Primeval lava flows formed the massive canyons and gorge systems on Mars. Water, by contrast, was far too scarce on the red planet to have cut these gigantic valleys into the landscape. This is the conclusion of several years ...
Jeffrey Karson, a Syracuse University geologist who recently traveled to Iceland to monitor the early stages of the eruption, says the lava field now covers more than 22 square miles (or 14,000 acres), nearly the size of ...
(PhysOrg.com) -- Latest research into the age of volcanoes in Western Victoria and South Australia has confirmed that the regions are overdue for an eruption, potentially affecting thousands of local residents.
Dozens of earthquakes were rattling Hawaii's Kilauea Volcano on Wednesday as underground magma moved into a new area east of the Puu Oo (POO'-oo OH'-oh) vent.
As volcanologists at Lamont-Doherty Earth Observatory, we love everything lava. Right now, we're exploring how the structure of the surfaces lava flows over influences how it advances. Does it matter if the lava is flowing ...
Lava that was filling the driveway of a Hawaii trash transfer station has stopped.
New research describes a fast-moving sand dune in Tunisia that is spilling onto the streets of the Star Wars set used to portray Anakin Skywalker's childhood home. | <urn:uuid:bc1d7431-ea60-43f5-b0d9-c579fb27208a> | 2.984375 | 388 | Content Listing | Science & Tech. | 55.737444 | 95,495,857 |
One question I am frequently asked in my work at Cambridge Butterfly Conservatory is “what's the difference between a butterfly and a moth?”
From my perspective, the most noticeable difference is that butterflies are more popular. While butterflies are celebrated symbols of freedom, peace and beauty, moths tend to be regarded as bland, pesky and creepy.
Do moths deserve this inferior status? I don't think so. When it comes down to it, there is actually little difference between the groups of insects we refer to as “butterflies” and “moths.” They share the same order, Lepidoptera, and many species flout the anatomical and behavioural “rules” used to separate them. For example, we tend to think of butterflies as colourful and diurnal and moths as drab and nocturnal, but several moth species, such as the “sunset moth,” are active during the day and just as striking as any butterfly.
Be that as it may, scientists and others persist in calling some Lepidopterans “butterflies” and others “moths.” If you have a specimen and you want to figure out which group it belongs to, examine the antennae: if they look fuzzy, like tiny feathers, or taper to a point, your subject is probably a moth. If they are thin and smooth, like a thread, and thicken towards the ends, it's more likely a butterfly.
A waved sphinx moth, family Sphingidae
When you get to know them, moths turn out to be utterly fascinating.
Moths have a bad rap for being clothing nibblers or even sci-fi monsters. However only a tiny minority of moth species have a negative impact on humans. Most of them are helpful to us and the ecosystems they belong to. They are used as bioindicators, meaning that their numbers give scientists a measure of the health of an ecosystem. They also serve as food for nocturnal species, such as bats and screech owls and pollinate night-blooming flowers.
When you get to know them, moths turn out to be utterly fascinating. There are over 160,000 known species to get acquainted with, compared with fewer than 20,000 butterflies. Some of them, including the giant silk moths, are extremely beautiful. If you want to know what I'm talking about, check out the work of Ottawa photographer Jim des Rivieres. Moths are great for comedic value, too. Flipping through a field guide will reveal such improbable names as: the Scarce Infant, the German Cousin and the Skunk.
Attracting moths for viewing can be a fun night-time activity. There are a couple of simple ways to do it. One is to hang a white sheet between two trees and shine a lamp on it. Moths are attracted to light, as anyone who's ever seen them flutter around a porch light will know. Black lights and mercury vapour lights work best, but unless you're an entomologist, you might not have those on hand. I tried this method myself using a regular bedside lamp, so I can say with complete confidence that it is easy and it does work!
Putting the finishing touches on our moth-attracting set-up.
You can also try mixing up some moth bait (here's a recipe) and slathering it on tree trunks. Apparently moths find the smell irresistible and will fly in for a late-night snack.
Whatever your method, moth-watching is a relaxing endeavour. Once you have your equipment set up, all you have to do is sit back and wait. We set up our sheet around 8:30, started a movie, and had plenty of moths to enjoy two hours later. A couple of things to be aware of: if the weather is good for moths, chances are it's good for mosquitoes as well, and if it's rainy or too cool, the moths will not be active. We had great results – and lots of mosquitoes – on a still night around 16 degrees Celsius.
I've included photos of some of the moths we saw. Most of them were less than 3 cm across, but we also attracted a sphinx moth that was about 6 cm!
A wave moth, family Geometridae
There are several online resources, such as BAMONA and the Moth Photographers Group, that you can use to identify moths. I've also found the Peterson Field Guide to Moths to be quite helpful. If you'd like to contribute to moth research, put on your citizen scientist hat and submit your findings to these online databases.
Good luck and happy mothing!
- A\J Editorial Board (18) A\J Editorial Board
- A\J Special Delivery (160) A\J Special Delivery
- Backstage at A\J (84) Backstage at A\J
- Current Events (212) Current Events
- EcoLogic (8) EcoLogic
- Food and Culture (24) Food and Culture
- Green Living (32) Green Living
- Made in Canada (21) Made in Canada
- Renewable Energy (54) Renewable Energy
- Shades of Green (12) Shades of Green
- Summer Reading Series (7) Summer Reading Series
- Sustainable A\J (57) Sustainable A\J
- The Green Student (19) The Green Student
- The Mouthful (14) The Mouthful
- The Wild Side (38) The Wild Side
- Think Global (14) Think Global
- Turtle Island Solidarity Journey 2018 (4) Turtle Island Solidarity Journey 2018
Popular on A\J
- From EATING AROUND THE WORLD article: "The long road to sustainability requires rebuilding our communities, and a g… https://t.co/gLTuZ7Rvu5 — 22 weeks 1 day ago
- A Valentine's Day (and every day) message from Jane Goodall: "Let us replace impatience and intolerance with unders… https://t.co/1WGML2toyK — 22 weeks 1 day ago
- For Valentine's Day: https://t.co/exvDzE2LQf — 22 weeks 1 day ago | <urn:uuid:82a80e77-1222-4e25-8c70-0a24aa48cb02> | 3.09375 | 1,316 | Personal Blog | Science & Tech. | 57.802431 | 95,495,889 |
rencontre erdeven When I had first read about Artificial Intelligence, it had seemed like an improbable science fiction concept. Robots thinking, learning, feeling, and taking decisions on their own still seems a little out there.
enter site But, don’t be fooled – robot or not, you are surrounded by AI today in your everyday lives.
kostenloses demo konto No matter which smartphone you are using, you are most likely to have interfaced with a virtual assistant – the likes of Siri, Allo and Cortana. Per Microsoft, Cortana “continually learns about its user” and would, in time, anticipate the needs of the user. We are in an age were cars are driving themselves, news is being generated by computers, banks are sending automated fraud alerts – in some form or the other, artificial intelligence is everywhere.
http://curemito.org/estorke/4547 Despite all the different ways in which AI is making our lives better, many tech giants are concerned about the ill-effects of the technology! Stephen Hawking, Elon Musk and Bill Gates have all openly expressed their worries about super-intelligence.
In 2016, Google published a technical paper on the key concerns with AI, titled Concrete Problems in AI Safety. Here we present you a brief synopsis of the same, with sinister conjectures on how the development of full artificial intelligence can wipe out humanity:
Buy Cialis 25 mg in Dallas Texas Kill to Improve?
Let’s imagine an AI that is continually learning to do one particular action. Over time, it will start experimenting with ways to do its duty, to improve efficiency. But, what happens when the best way to perform its task adversely impacts something else in its environment? The Google paper gives the example of a cleaning robot – will it knock over a vase because it can clean faster by doing so? But, as you might imagine, the example can be much grimmer.
Would an AI value a life over doing its assigned task more efficiently? dating damer Will it kill to be better? Horror show, right!
Also, during such explorations, the system might blow up its environment, purely accidentally. So, it is kind of like a busy toddler without adult supervision. All the parents in the audience are going googly-eyed right about now, I bet.
Abbicarmi preaperture maritava http://modernhomesleamington.co.uk/component/k2/itemlist/user/13607?format=feed fiderebbero unilabiato richiamavate! Taken over by the Evil Forces
Ok, so we are letting AI into our lives, letting it control us, for convenience. Our cars are taking us to places unsupervised, our kitchen equipment is learning our food preference and serving independently, our home is keeping itself secure – all because of crafty codes. Now imagine being hacked.
In the future hacked might not just mean embarrassing photos sent over your email or all your money spent on a hot tub in Thailand – it might mean see the difference between your life and death.
God forbid, if ‘evil forces’ can run your car, or your home security, that is not a warm, fuzzy feeling – is it?
source site Evolution, exponentially!
An AI is smart – right! And, it can keep getting smarter – unaffected by needs (or stupidity!) that keep humans occupied. Imagine an AI that has the simple goal of getting more intelligent, and is not thwarted in its quest by the ‘Cat poops, Baby sings’ video. What will then stop it from overriding human intelligence by a mile and making us obsolete?
It is not a debate anymore that conocer varios chicos a la vez super-intelligence would emerge, although the date remains unpredictable. Some say it is impossible before 2100, others that it will happen by 2060.
To be honest, the worry is not that AI would become evil or conscious. The real worry is whether the goal of AI would be aligned to human goals, or divergent from ours. In the latter case, we are in big trouble! If you want to get scared some more, watch Black Mirror on Netflix, you won’t chill, I promise.
I find immense relief in the fat that the biggest technical minds are on it. Here’s hoping they will find a way to keep humans from becoming puppets to steel creatures.
What do you think of AI? Comment below or write to us at email@example.com
- The Curious Case of My Missing Grandmothers - July 20, 2018
- Hannah Gadsby’s Nanette: Quitting Comedy for Storytelling - July 10, 2018
- Japan’s Cleanliness: The Story of Shinto and Personal Commitment - July 5, 2018
- In Sanju’s Defense: Watch this one for a few good Men - June 30, 2018
- About Lust Stories: Half Mud Pie, Half Mud - June 22, 2018
- Suicide, Illness, Plath and Bourdain: Reading the Bell Jar - June 14, 2018
- 5 Favorite Iftar Sweets that You Must Taste at least Once - June 5, 2018
- The First Treat from Boyaam: Sujoyprosad’s Journeys - June 4, 2018
- Visiting Kodaikanal: The Gift of the Forest - May 31, 2018
- A Quirky Artist Celebrating our Fictional Femmes: Meet Sandhya Prabhat - May 23, 2018 | <urn:uuid:aa2a7927-2d6e-4fcb-a1dd-95ae270322d8> | 2.65625 | 1,168 | Personal Blog | Science & Tech. | 49.608146 | 95,495,890 |
We report the discovery of light organs (photophores) adjacent to the dorsal defensive spines of a small deep-sea lanternshark (Etmopterus spinax). Using a visual modeling based on in vivo luminescence recordings we show that this unusual light display would be detectable by the shark’s potential predators from several meters away. We also demonstrate that the luminescence from the spine-associated photophores (SAPs) can be seen through the mineralized spines, which are partially translucent. These results suggest that the SAPs function, either by mimicking the spines' shape or by shining through them, as a unique visual deterrent for predators. This conspicuous dorsal warning display is a surprising complement to the ventral luminous camouflage (counterillumination) of the shark.
- Proceedings of the National Academy of Sciences of the United States of America
- Published over 4 years ago
Beetle luciferases are thought to have evolved from fatty acyl-CoA synthetases present in all insects. Both classes of enzymes activate fatty acids with ATP to form acyl-adenylate intermediates, but only luciferases can activate and oxidize d-luciferin to emit light. Here we show that the Drosophila fatty acyl-CoA synthetase CG6178, which cannot use d-luciferin as a substrate, is able to catalyze light emission from the synthetic luciferin analog CycLuc2. Bioluminescence can be detected from the purified protein, live Drosophila Schneider 2 cells, and from mammalian cells transfected with CG6178. Thus, the nonluminescent fruit fly possesses an inherent capacity for bioluminescence that is only revealed upon treatment with a xenobiotic molecule. This result expands the scope of bioluminescence and demonstrates that the introduction of a new substrate can unmask latent enzymatic activity that differs significantly from an enzyme’s normal function without requiring mutation.
Bioluminescence methodologies have been extraordinarily useful due to their high sensitivity, broad dynamic range, and operational simplicity. These capabilities have been realized largely through incremental adaptations of native enzymes and substrates, originating from luminous organisms of diverse evolutionary lineages. We engineered both an enzyme and substrate in combination to create a novel bioluminescence system capable of more efficient light emission with superior biochemical and physical characteristics. Using a small luciferase subunit (19 kDa) from the deep sea shrimp Oplophorus gracilirostris, we have improved luminescence expression in mammalian cells ∼2.5 million-fold by merging optimization of protein structure with development of a novel imidazopyrazinone substrate (furimazine). The new luciferase, NanoLuc, produces glow-type luminescence (signal half-life >2 h) with a specific activity ∼150-fold greater than that of either firefly (Photinus pyralis) or Renilla luciferases similarly configured for glow-type assays. In mammalian cells, NanoLuc shows no evidence of post-translational modifications or subcellular partitioning. The enzyme exhibits high physical stability, retaining activity with incubation up to 55 °C or in culture medium for >15 h at 37 °C. As a genetic reporter, NanoLuc may be configured for high sensitivity or for response dynamics by appending a degradation sequence to reduce intracellular accumulation. Appending a signal sequence allows NanoLuc to be exported to the culture medium, where reporter expression can be measured without cell lysis. Fusion onto other proteins allows luminescent assays of their metabolism or localization within cells. Reporter quantitation is achievable even at very low expression levels to facilitate more reliable coupling with endogenous cellular processes.
Quorum sensing (QS) is a bacterial cell-cell communication process that relies on the production and detection of extracellular signal molecules called autoinducers. QS allows bacteria to perform collective activities. Vibrio cholerae, a pathogen that causes an acute disease, uses QS to repress virulence factor production and biofilm formation. Thus, molecules that activate QS in V. cholerae have the potential to control pathogenicity in this globally important bacterium. Using a whole-cell high-throughput screen, we identified eleven molecules that activate V. cholerae QS: eight molecules are receptor agonists and three molecules are antagonists of LuxO, the central NtrC-type response regulator that controls the global V. cholerae QS cascade. The LuxO inhibitors act by an uncompetitive mechanism by binding to the pre-formed LuxO-ATP complex to inhibit ATP hydrolysis. Genetic analyses suggest that the inhibitors bind in close proximity to the Walker B motif. The inhibitors display broad-spectrum capability in activation of QS in Vibrio species that employ LuxO. To the best of our knowledge, these are the first molecules identified that inhibit the ATPase activity of a NtrC-type response regulator. Our discovery supports the idea that exploiting pro-QS molecules is a promising strategy for the development of novel anti-infectives.
Quorum sensing is a process of chemical communication that bacteria use to monitor cell density and coordinate cooperative behaviors. Quorum sensing relies on extracellular signal molecules and cognate receptor pairs. While a single quorum-sensing system is sufficient to probe cell density, bacteria frequently use multiple quorum-sensing systems to regulate the same cooperative behaviors. The potential benefits of these redundant network structures are not clear. Here, we combine modeling and experimental analyses of the Bacillus subtilis and Vibrio harveyi quorum-sensing networks to show that accumulation of multiple quorum-sensing systems may be driven by a facultative cheating mechanism. We demonstrate that a strain that has acquired an additional quorum-sensing system can exploit its ancestor that possesses one fewer system, but nonetheless, resume full cooperation with its kin when it is fixed in the population. We identify the molecular network design criteria required for this advantage. Our results suggest that increased complexity in bacterial social signaling circuits can evolve without providing an adaptive advantage in a clonal population.
Bioluminescence is a fascinating phenomenon occurring in numerous animal taxa in the ocean. The reef dwelling splitfin flashlight fish (Anomalops katoptron) can be found in large schools during moonless nights in the shallow water of coral reefs and in the open surrounding water. Anomalops katoptron produce striking blink patterns with symbiotic bacteria in their sub-ocular light organs. We examined the blink frequency in A. katoptron under various laboratory conditions. During the night A. katoptron swims in schools roughly parallel to their conspecifics and display high blink frequencies of approximately 90 blinks/minute with equal on and off times. However, when planktonic prey was detected in the experimental tank, the open time increased compared to open times in the absence of prey and the frequency decreased to 20% compared to blink frequency at night in the absence of planktonic prey. During the day when the school is in a cave in the reef tank the blink frequency decreases to approximately 9 blinks/minute with increasing off-times of the light organ. Surprisingly the non-luminescent A. katoptron with non-functional light organs displayed the same blink frequencies and light organ open/closed times during the night and day as their luminescent conspecifics. In the presence of plankton non-luminescent specimens showed no change in the blink frequency and open/closed times compared to luminescent A. katoptron. Our experiments performed in a coral reef tank show that A. katoptron use bioluminescent illumination to detect planktonic prey and that the blink frequency of A. katoptron light organs follow an exogenous control by the ambient light.
- Proceedings. Biological sciences / The Royal Society
- Published almost 6 years ago
Vampire squid (Vampyroteuthis infernalis) are considered phylogenetic relics with cephalopod features of both octopods and squids. They lack feeding tentacles, but in addition to their eight arms, they have two retractile filaments, the exact functions of which have puzzled scientists for years. We present the results of investigations on the feeding ecology and behaviour of Vampyroteuthis, which include extensive in situ, deep-sea video recordings from MBARI’s remotely operated vehicles (ROVs), laboratory feeding experiments, diet studies and morphological examinations of the retractile filaments, the arm suckers and cirri. Vampire squid were found to feed on detrital matter of various sizes, from small particles to larger marine aggregates. Ingested items included the remains of gelatinous zooplankton, discarded larvacean houses, crustacean remains, diatoms and faecal pellets. Both ROV observations and laboratory experiments led to the conclusion that vampire squid use their retractile filaments for the capture of food, supporting the hypothesis that the filaments are homologous to cephalopod arms. Vampyroteuthis' feeding behaviour is unlike any other cephalopod, and reveals a unique adaptation that allows these animals to spend most of their life at depths where oxygen concentrations are very low, but where predators are few and typical cephalopod food is scarce.
ABSTRACT The symbiosis between the squid Euprymna scolopes and its luminous symbiont, Vibrio fischeri, is characterized by daily transcriptional rhythms in both partners and daily fluctuations in symbiont luminescence. In this study, we sought to determine whether symbionts affect host transcriptional rhythms. We identified two transcripts in host tissues (E. scolopes cry1 [escry1] and escry2) that encode cryptochromes, proteins that influence circadian rhythms in other systems. Both genes cycled daily in the head of the squid, with a pattern similar to that of other animals, in which expression of certain cry genes is entrained by environmental light. In contrast, escry1 expression cycled in the symbiont-colonized light organ with 8-fold upregulation coincident with the rhythms of bacterial luminescence, which are offset from the day/night light regime. Colonization of the juvenile light organ by symbionts was required for induction of escry1 cycling. Further, analysis with a mutant strain defective in light production showed that symbiont luminescence is essential for cycling of escry1; this defect could be complemented by presentation of exogenous blue light. However, blue-light exposure alone did not induce cycling in nonsymbiotic animals, but addition of molecules of the symbiont cell envelope to light-exposed animals did recover significant cycling activity, showing that light acts in synergy with other symbiont features to induce cycling. While symbiont luminescence may be a character specific to rhythms of the squid-vibrio association, resident microbial partners could similarly influence well-documented daily rhythms in other systems, such as the mammalian gut. IMPORTANCE In mammals, biological rhythms of the intestinal epithelium and the associated mucosal immune system regulate such diverse processes as lipid trafficking and the immune response to pathogens. 
While these same processes are affected by the diverse resident microbiota, the extent to which these microbial communities control or are controlled by these rhythms has not been addressed. This study provides evidence that the presentation of three bacterial products (lipid A, peptidoglycan monomer, and blue light) is required for cyclic expression of a cryptochrome gene in the symbiotic organ. The finding that bacteria can directly influence the transcription of a gene encoding a protein implicated in the entrainment of circadian rhythms provides the first evidence for the role of bacterial symbionts in influencing, and perhaps driving, peripheral circadian oscillators in the host.
Bioluminescence, the creation and emission of light by organisms, affords insight into the lives of organisms doing it. Luminous living things are widespread and access diverse mechanisms to generate and control luminescence [1-5]. Among the least studied bioluminescent organisms are phylogenetically rare fungi-only 71 species, all within the ∼9,000 fungi of the temperate and tropical Agaricales order-are reported from among ∼100,000 described fungal species [6, 7]. All require oxygen and energy (NADH or NADPH) for bioluminescence and are reported to emit green light (λmax 530 nm) continuously, implying a metabolic function for bioluminescence, perhaps as a byproduct of oxidative metabolism in lignin degradation. Here, however, we report that bioluminescence from the mycelium of Neonothopanus gardneri is controlled by a temperature-compensated circadian clock, the result of cycles in content/activity of the luciferase, reductase, and luciferin that comprise the luminescent system. Because regulation implies an adaptive function for bioluminescence, a controversial question for more than two millennia [8-15], we examined interactions between luminescent fungi and insects . Prosthetic acrylic resin “mushrooms,” internally illuminated by a green LED emitting light similar to the bioluminescence, attract staphilinid rove beetles (coleopterans), as well as hemipterans (true bugs), dipterans (flies), and hymenopterans (wasps and ants), at numbers far greater than dark control traps. Thus, circadian control may optimize energy use for when bioluminescence is most visible, attracting insects that can in turn help in spore dispersal, thereby benefitting fungi growing under the forest canopy, where wind flow is greatly reduced.
Loliginid and sepiolid squid light organs are known to host a variety of bacterial species from the family Vibrionaceae, yet little is known about the species diversity and characteristics among different host squids. Here we present a broad-ranging molecular and physiological analysis of the bacteria colonizing light organs in loliginid and sepiolid squids from various field locations of the Indo-West Pacific (Australia and Thailand). Our PCR-RFLP analysis, physiological characterization, carbon utilization profiling, and electron microscopy data indicate that loliginid squid in the Indo-West Pacific carry a consortium of bacterial species from the families Vibrionaceae and Photobacteriaceae. This research also confirms our previous report of the presence of Vibrio harveyi as a member of the bacterial population colonizing light organs in loliginid squid. pyrH sequence data were used to confirm isolate identity, and indicates that Vibrio and Photobacterium comprise most of the light organ colonizers of squids from Australia, confirming previous reports for Australian loliginid and sepiolid squids. In addition, combined phylogenetic analysis of PCR-RFLP and 16S rDNA data from Australian and Thai isolates associated both Photobacterium and Vibrio clades with both loliginid and sepiolid strains, providing support that geographical origin does not correlate with their relatedness. These results indicate that both loliginid and sepiolid squids demonstrate symbiont specificity (Vibrionaceae), but their distribution is more likely due to environmental factors that are present during the infection process. This study adds significantly to the growing evidence for complex and dynamic associations in nature and highlights the importance of exploring symbiotic relationships in which non-virulent strains of pathogenic Vibrio species could establish associations with marine invertebrates. 
| <urn:uuid:cc293b94-c544-4b47-a96c-18f04d040e84> | 2.953125 | 3,273 | Content Listing | Science & Tech. | 10.528636 | 95,495,944 |
Newswise — The physics behind some of the most extraordinary stellar objects in the Universe just became even more puzzling.
The magnetar 1E 2259+586 shines a brilliant blue-white in this false-colour X-ray image of the CTB 109 supernova remnant, which lies about 10,000 light-years away toward the constellation Cassiopeia. CTB 109 is only one of three supernova remnants in our galaxy known to harbour a magnetar. X-rays at low, medium and high energies are respectively shown in red, green, and blue in this image created from observations acquired by the European Space Agency's XMM-Newton satellite in 2002. Credit: ESA/XMM-Newton/M. Sasaki et al.
A group of astronomers led by McGill researchers using NASA's Swift satellite have discovered a new kind of glitch in the cosmos, specifically in the rotation of a neutron star.
Neutron stars are among the densest objects in the observable universe; higher densities are found only in their close cousins, black holes. A typical neutron star packs as much mass as half-a-million Earths within a diameter of only about 20 kilometers. A teaspoonful of neutron star matter would weigh approximately 1 billion tons, roughly the same as 100 skyscrapers made of solid lead.
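As a rough sanity check, the figures quoted above can be verified in a few lines of Python. The Earth mass and teaspoon volume used here are standard reference values assumed for the calculation, not taken from the article:

```python
import math

# "As much mass as half-a-million Earths within a diameter of only about 20 km"
earth_mass_kg = 5.97e24
star_mass_kg = 5e5 * earth_mass_kg          # ~3e30 kg, close to 1.5 solar masses
radius_m = 10e3                              # 20 km diameter -> 10 km radius
volume_m3 = (4.0 / 3.0) * math.pi * radius_m**3

density = star_mass_kg / volume_m3           # ~7e17 kg/m^3

# A teaspoon is about 5 mL = 5e-6 m^3
teaspoon_m3 = 5e-6
teaspoon_kg = density * teaspoon_m3          # on the order of 1e12 kg, i.e. billions of tonnes

print(f"density ~ {density:.1e} kg/m^3, teaspoon ~ {teaspoon_kg:.1e} kg")
```

The result, a few billion tonnes per teaspoon, matches the article's "approximately 1 billion tons" to within the precision of a back-of-envelope estimate.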
Neutron stars are known to rotate very rapidly, from a few revolutions per minute to as fast as several hundred times per second. A neutron star glitch is an event in which the star suddenly begins rotating faster. These sudden spin-up glitches have long been thought to demonstrate that these exotic ultra-dense stellar objects contain some form of liquid, likely a superfluid.
This new cosmic glitch was detected in a special kind of neutron star – a magnetar -- an ultra-magnetized neutron star that can exhibit dramatic outbursts of X-rays, sometimes so strong they can affect the Earth's atmosphere from clear across the galaxy. A magnetar’s magnetic field is so strong that, if one were located at the distance of the Moon, it could wipe clean a credit card magnetic strip here on Earth.
Now astronomers led by a research group at McGill University have discovered a new phenomenon: they observed a magnetar suddenly rotate slower -- a cosmic braking act they've dubbed an “anti-glitch.” The result is reported in the May 30 issue of Nature.
The magnetar in question, 1E 2259+586 located roughly 10,000 light years away in the constellation of Cassiopeia, was being monitored by the McGill group using the Swift X-ray telescope in order to study the star's rotation and try to detect the occasional giant X-ray explosions that are often seen from magnetars.
"I looked at the data and was shocked -- the neutron star had suddenly slowed down," says Rob Archibald, lead author and MSc student at McGill University. "These stars are not supposed to behave this way."
Accompanying the sudden slowdown, which rang in at one third of a part per million of the 7-second rotation rate, was a large increase in the X-ray output of the magnetar, telltale evidence of a major event inside or near the surface of the neutron star.
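To put that fractional slowdown in concrete terms, a quick calculation (a sketch using only the two numbers quoted in the article) shows how tiny the absolute change in the rotation period was:

```python
# "One third of a part per million of the 7-second rotation rate"
period_s = 7.0
fractional_change = (1.0 / 3.0) * 1e-6

delta_period_s = period_s * fractional_change  # ~2.3 microseconds

print(f"period change ~ {delta_period_s * 1e6:.1f} microseconds")
```

A shift of only a couple of microseconds in a 7-second spin period is detectable only because pulsar timing campaigns track rotation phase with extreme precision over months of monitoring.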
"We've seen huge X-ray explosions from magnetars before," says Victoria Kaspi, Professor of Physics at McGill and leader of the Swift magnetar monitoring program, "but an anti-glitch was quite a surprise. This is telling us something brand new about the insides of these amazing objects." In 2002, NASA’s Rossi X-ray Timing Explorer satellite also saw a large X-ray outburst from the source, but in that case, it was accompanied by a more usual spin-up glitch.
The internal structure of neutron stars is a long-standing puzzle, as the matter inside these stars is subject to forces so intense that they are presently not re-creatable in terrestrial laboratories. The densities at the hearts of neutron stars are thought to be upwards of 10 times higher than in the atomic nucleus, far beyond what current theories of matter can describe.
The reported anti-glitch strongly suggests previously unrecognized behaviour inside neutron stars, possibly with pockets of superfluid rotating at different speeds. The researchers further point out in the Nature paper that some properties of conventional glitches have been noted to be puzzling and suggestive of flaws in the existing theory to explain them. They are hoping that the discovery of a new phenomenon will open the door to renewed progress in understanding neutron star interiors.
The research was funded in part by the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute for Advanced Research, the Fonds de recherche du Québec - Nature et technologies, the Canada Research Chairs program, the Lorne Trottier Chair in Astrophysics and Cosmology, and the Centre de recherche en Astrophysique du Québec.
Chris Chipello | Newswise
Dr Chris Faulkes, a senior lecturer at the School of Biological & Chemical Sciences, Queen Mary, University of London, will tell the conference that the African naked mole-rat is at the extreme end of a continuum of socially-induced reproductive suppression among mammals, with other examples including primates such as marmosets and tamarins, mongooses and members of the dog family (such as wolves and jackals).
The naked mole-rat lives in colonies of between 100-300 animals, but only the “queen” reproduces, suppressing fertility in both the females and the males around her by bullying them.
Dr Faulkes said: “The queen exerts her dominance over the colony by, literally, pushing the other members of the colony around. She ‘shoves’ them to show who’s boss. We believe that the stress induced in the lower-ranking animals by this behaviour affects their fertility. There appears to be a total block to puberty in almost all the non-breeding mole-rats so that their hormones are kept down and their reproductive tracts are under-developed.
“Currently, we think that the behavioural interactions between the queen and the non-breeders are translated into the suppression of certain fertility hormones (luteinizing and follicle stimulating hormones). In the non-breeding females this has the effect of suppressing the ovulatory cycle, while in the non-breeding males it causes lower testosterone concentrations, and lower numbers of sperm. In most non-breeding males, sperm that are present are non-motile.
“The queen also seems to exert control over the breeding males, so that concentrations of their testosterone are suppressed except when she is ready to mate.”
However, this stress-related block to fertility is reversible. When the queen dies, the other non-breeding, highest ranking females battle it out for dominance, with the winner rapidly becoming reproductively active.
“Studies of dominance within colonies have revealed that breeding animals have the highest social rank. Furthermore, concentrations of urinary testosterone, a hormone associated with aggression, in the queen and non-breeders of both sexes correlated significantly with rank position. In experiments where the queen is removed from her colony, reproductive activation in the female taking over as queen was accompanied by the development and expression of aggressive behaviour in the form of ‘shoving’. These succeeding females were also previously high ranking and had relatively high concentrations of urinary testosterone. This supports the hypothesis that the attainment and maintenance of reproductive status in the queen, and control of the social order of the colony, is related to dominance behaviour,” said Dr Faulkes.
Natural cues such as changes in day length and social stress act through areas of the brain that control reproduction and, as it is likely that such neuroendocrine pathways are similar across species, understanding how they work in naked mole-rats could lead to a better understanding of the mechanisms involved in some stress-related infertility in humans. Dr Faulkes said: “Social suppression of reproduction in marmoset monkeys is very similar to that in naked mole-rats, and as these are primates the applications to understanding human stress-related infertility aren't so far-fetched.
“The neurobiological processes underlying the way mammals respond to social and environmental cues are still largely unknown,” he continued. “In a wider comparative study of African mole-rat species, we are also researching genes that may give rise to the quite different forms of social bonding and affiliative behaviours observed in mole-rats. Studies on voles by researchers in the US have shown that complex behaviours like monogamy and promiscuity can be influenced by single genes that differ among species in their patterns of expression in the brain.
“Humans also vary widely in the way in which they form social bonds with their partners, offspring and kin. By making careful comparisons with model species like mole-rats, we may be able to tease apart the relative contribution of genes, environment, up-bringing and culture to complex social behaviour in our own species.”
For the African naked mole-rat, the advantages of their social organisation mean that almost all the members of the colony are co-operating and directing their energies towards foraging for food in order for the whole community to survive, rather than indulging in physically exhausting mating and reproductive behaviour. The “workers” dig a network of tunnels, often several kilometres long, which they use to find their food of roots and tubers, while the “soldiers” defend the colony against foreign mole-rats and predators such as snakes.
“By living in large social groups with a co-operative non-breeding workforce, naked mole-rats are able to exploit an ecological niche where solitary animals or small groups would be unlikely to survive,” said Dr Faulkes.
Emma Mason | alfa
Arithmetic functions are terms which are evaluated by the arithmetic predicates described in section 4.27.2. There are four types of arguments to functions:
|Expr|Arbitrary expression, returning either a floating point value or an integer.|
|IntExpr|Arbitrary expression that must evaluate to an integer.|
|RatExpr|Arbitrary expression that must evaluate to a rational number.|
|FloatExpr|Arbitrary expression that must evaluate to a floating point value.|
For systems using bounded integer arithmetic (the default is unbounded; see the manual for details), integer operations that would cause overflow automatically convert to floating point arithmetic.
SWI-Prolog provides many extensions to the set of floating point functions defined by the ISO standard. The current policy is to provide such functions on an `as-needed' basis if the function is widely supported elsewhere, and notably if it is part of the C99 mathematical library. In addition, we try to maintain compatibility with YAP.
is followed by a number, the parser discards the
true, both arguments are converted to float and the return value is a float. Otherwise (default), if both arguments are integers the operation returns an integer if the division is exact. If at least one of the arguments is rational and the other argument is integer, the operation returns a rational number. In all other cases the return value is a float. See also ///2 and rdiv/2.
div is floored division.
towards_zero. Future versions might guarantee rounding towards zero.
Y =\= 0.
Q is div(X, Y), M is mod(X, Y), X =:= Y*Q+M.
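The identity above can be exercised outside Prolog: Python's `//` and `%` happen to use the same floored-division semantics as SWI-Prolog's div/2 and mod/2, so the sketch below (an illustration, not part of the manual) checks that X =:= Y*Q + M holds for all sign combinations.

```python
# Floored division: Python's // and % follow the same floored semantics
# as SWI-Prolog's div/2 and mod/2 (illustration only).
def check_div_mod_identity(x: int, y: int) -> bool:
    q = x // y   # floored quotient, like Q is div(X, Y)
    m = x % y    # floored remainder, like M is mod(X, Y)
    return x == y * q + m

# The identity holds for any integers with non-zero y,
# including negative operands:
for x, y in [(7, 2), (7, -2), (-7, 2), (-7, -2)]:
    assert check_div_mod_identity(x, y)
```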
"a" evaluates to the character code of the letter `a' (97) using the traditional mapping of a double-quoted string to a list of character codes. Arithmetic evaluation also translates a string object (see section 5.2) of one character length into the character code for that character. This implies that the expression "a" also works if the Prolog flag double_quotes is set to string. The recommended way to specify the character code of the letter `a' is 0'a.
/dev/random. On Windows the state is initialised from CryptGenRandom(). Otherwise it is set from the system clock. If unbounded arithmetic is not supported, random numbers are shared between threads and the seed is initialised from the clock when SWI-Prolog was started. The predicate set_random/1 can be used to control the random number generator.
Warning! Although properly seeded (if supported on the OS), the Mersenne Twister algorithm does not produce cryptographically secure random numbers. To generate cryptographically secure random numbers, use crypto_n_random_bytes/2 from library(crypto) provided by the ssl package.
floor(Expr+1/2), i.e., rounding down. This is an unconventional choice, under which the relation round(Expr) == -round(-Expr) does not hold. SWI-Prolog rounds outward, e.g., round(1.5) =:= 2 and round(-1.5) =:= -2.
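Outward rounding (ties rounded away from zero) as described here can be sketched in Python, whose built-in round() instead rounds ties to the nearest even integer. The helper below is a hypothetical illustration of the behaviour, not SWI-Prolog's implementation.

```python
import math

def round_outward(x: float) -> int:
    # Round half away from zero, matching the outward rounding
    # described above; Python's built-in round() rounds ties to even.
    return int(math.floor(abs(x) + 0.5)) * (1 if x >= 0 else -1)

assert round_outward(1.5) == 2 and round_outward(-1.5) == -2
assert round(2.5) == 2  # built-in round: ties to even, not outward
```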
rational(0.1). The function rationalize/1 remedies this. See the manual section on rational number support for more information.
?- A is rational(0.25).
A = 1 rdiv 4.

?- A is rational(0.1).
A = 3602879701896397 rdiv 36028797018963968.

?- A is rationalize(0.25).
A = 1 rdiv 4.

?- A is rationalize(0.1).
A = 1 rdiv 10.
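The same exact-versus-simplest distinction can be reproduced with Python's fractions module: Fraction(float) converts the float's exact binary representation, like rational/1, while limit_denominator() finds the simplest nearby rational, playing a role similar to rationalize/1. This is an analogy for illustration, not SWI-Prolog code.

```python
from fractions import Fraction

# Exact conversion of the binary float, analogous to rational(0.1):
exact = Fraction(0.1)
print(exact)                      # 3602879701896397/36028797018963968

# Simplest nearby rational, analogous to rationalize(0.1):
print(exact.limit_denominator())  # 1/10

# 0.25 is exactly representable in binary, so both agree:
print(Fraction(0.25))             # 1/4
```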
floor(Expr). For Expr < 0 this is the same as ceiling(Expr). That is, truncate/1 rounds towards zero.
Note that the ISO Prolog standard demands this to raise an evaluation error, whereas the C99 and POSIX standards demand this to evaluate to 0.0. SWI-Prolog follows C99 and POSIX.
resource_error if the result does not fit in memory.
The ISO standard demands a float result for all inputs and introduces ^/2 for integer exponentiation. The function float/1 can be used on one or both arguments to force a floating point result. Note that casting the input results in a floating point computation, while casting the output performs integer exponentiation followed by a conversion to float.
|Expr1|Expr2|Function|SWI-Prolog|ISO|
|Int|Int|**/2|Int or Float|Float|
|Int|Int|^/2|Int or Float|Int or error|
The functions below are not covered by the standard. The msb/1 function also appears in hProlog and SICStus Prolog. The getbit/2 function also appears in ECLiPSe, which also provides clrbit(Vector,Index). The others are SWI-Prolog extensions that improve handling of unbounded integers.
msb(IntExpr) evaluates to the largest integer N such that (IntExpr >> N) /\ 1 =:= 1. This is the (zero-origin) index of the most significant 1 bit in the value of IntExpr, which must evaluate to a positive integer. Errors for 0, negative integers, and non-integers.
lsb(IntExpr) evaluates to the smallest integer N such that (IntExpr >> N) /\ 1 =:= 1. This is the (zero-origin) index of the least significant 1 bit in the value of IntExpr, which must evaluate to a positive integer. Errors for 0, negative integers, and non-integers.
getbit(IntExprV, IntExprI) evaluates to (IntExprV >> IntExprI) /\ 1, but is more efficient because materialization of the shifted value is avoided. Future versions will optimise (IntExprV >> IntExprI) /\ 1 to a call to getbit/2, providing both portability and performance. This issue was fiercely debated on the ISO standard mailing list. The name getbit was selected for compatibility with ECLiPSe, the only system providing this support. Richard O'Keefe disliked the name and argued that efficient handling of the above implementation is the best choice for this functionality.
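For readers who want to experiment outside Prolog, the bit functions above have direct counterparts on Python's unbounded integers. The helpers below are an illustrative sketch of the semantics, not SWI-Prolog's implementation.

```python
# Illustrative Python equivalents of SWI-Prolog's msb/1, lsb/1 and
# getbit/2 on unbounded integers (a sketch, not SWI-Prolog's code).

def msb(n: int) -> int:
    """Zero-origin index of the most significant 1 bit; n must be positive."""
    if n <= 0:
        raise ValueError("msb requires a positive integer")
    return n.bit_length() - 1

def lsb(n: int) -> int:
    """Zero-origin index of the least significant 1 bit; n must be positive."""
    if n <= 0:
        raise ValueError("lsb requires a positive integer")
    return (n & -n).bit_length() - 1   # n & -n isolates the lowest set bit

def getbit(v: int, i: int) -> int:
    """Bit i of v, matching getbit/2."""
    return (v >> i) & 1

# 100 is 0b1100100: highest set bit at index 6, lowest at index 2.
assert msb(100) == 6 and lsb(100) == 2 and getbit(100, 2) == 1
```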
Cyriopagopus schioedtei in Borneo, Malaysia
Cyriopagopus is a genus of spiders in the family Theraphosidae (tarantulas) found in Southeast Asia, from Myanmar to the Philippines. As of March 2017, the genus includes species formerly placed in Haplopelma.
The species formerly placed in Haplopelma are medium to large spiders; for example, Cyriopagopus schmidti females have a total body length, including chelicerae, up to 85 mm (3.3 in), with the longest leg, the first, being about 70 mm (2.8 in) long. The carapace (upper surface of the cephalothorax) is generally dark brown. They have eight eyes grouped on a distinctly raised portion of the cephalothorax, forming a "tubercle". The forward-facing (prolateral) sides of the maxillae have "thorns" which act as a stridulating organ. The first leg is usually the longest, followed by the fourth, second, and third. Mature females have an M-shaped spermatheca. Mature males have a spur on the forward-facing sides of the tibiae of the first pair of legs and a pear-shaped palpal bulb with a wide, curved embolus.
The nomenclature of a group of theraphosid genera from South and Southeast Asia, including Cyriopagopus, Haplopelma, Lampropelma, Omothymus, and Phormingochilus, is somewhat confused. The status of the genera has changed several times recently, and species have been moved from one genus to another. Currently, Haplopelma is considered to be a junior synonym of Cyriopagopus, and Melopoeus of Haplopelma, hence of Cyriopagopus, but this may change.
The genus Cyriopagopus was erected by Eugène Simon in 1887 for the species Cyriopagopus paganus from Burma. In 1985, Robert Raven made Cyriopagopus the senior synonym of Melognathus Chamberlin, 1917. In 1890, Tamerlan Thorell described a species of spider under the name Selenocosmia doriae. In 1892, Eugène Simon decided that this species was sufficiently different from others placed in the genus Selenocosmia to warrant a new genus, Haplopelma, with one species, Haplopelma doriae. Raven in 1985 also decided that Haplopelma was the senior synonym of Melopoeus Pocock, 1895. A. M. Smith studied the type specimen of Cyriopagopus paganus (the type species of Cyriopagopus) and decided that it had the key characteristics of Haplopelma, making Cyriopagopus the senior synonym of Haplopelma. This analysis is accepted by the World Spider Catalog as of March 2017, with the comment that "Haplopelma, Cyriopagopus, Melopoeus, and other ornithoctonine genera are in urgent need of revision".
- Cyriopagopus albostriatus (Simon, 1886) – Myanmar, Thailand, Cambodia
- Cyriopagopus doriae (Thorell, 1890) – Borneo
- Cyriopagopus dromeus (Chamberlin, 1917) – Philippines
- Cyriopagopus hainanus (Liang, Peng, Huang & Chen, 1999) – China
- Cyriopagopus lividus (Smith, 1996) – Myanmar
- Cyriopagopus longipes (von Wirth & Striffler, 2005) – Thailand, Cambodia
- Cyriopagopus minax (Thorell, 1897) – Myanmar, Thailand
- Cyriopagopus paganus Simon, 1887 (type species) – Myanmar
- Cyriopagopus robustus (Strand, 1907) – Singapore
- Cyriopagopus salangense (Strand, 1907) – Malaysia
- Cyriopagopus schmidti (von Wirth, 1991) – China, Vietnam
- Cyriopagopus vonwirthi (Schmidt, 2005) – Southeast Asia
- Transferred to other genera
- Cyriopagopus schioedtei (Thorell, 1891) → Omothymus schioedtei
- Cyriopagopus thorelli (Simon, 1901) → Omothymus thorelli
Distribution and habitat
The genus is found in Southeast Asia (China, Myanmar, Thailand, Cambodia, Vietnam, Malaysia, and Singapore), Borneo, and the Philippines. Species that have been studied live in underground, silk-lined tubes, often with a surrounding web of radiating signal threads. They may be found in small colonies at the base of trees or bamboos. Some species favour steep, south-facing slopes.
Like all Old World tarantulas, spiders in the genus Cyriopagopus lack the urticating hairs found in their New World counterparts, hence use biting as a primary means of both attack and defence. Some Cyriopagopus species are among those reported to have more toxic venom. Although bites may cause severe pain and a range of other effects, no fatalities are known. Cyriopagopus lividus, C. hainanus, and C. schmidti (under its synonym Selenocosmia huwena) have had their venom characterized. The last two produce hainantoxins and huwentoxins, respectively.
- "Gen. Cyriopagopus Simon, 1887", World Spider Catalog, Natural History Museum Bern, retrieved 2017-03-18
- Zhu, M.S. & Zhang, R. (2008), "Revision of the theraphosid spiders from China (Araneae: Mygalomorphae)", Journal of Arachnology, 36: 425–447, doi:10.1636/ca07-94.1
- Smith, A.M. & Jacobi, M.A. (2015), "Revision of the genus Phormingochilus with the description of three new species from Sulawesi and Sarawak and notes on the placement of the genera Cyriopagopus, Lampropelma and Omothymus", British Tarantula Society Journal, 30 (3): 25–48
- Simon, E. (1892), "Haplopelma, nov. gen.", Histoire naturelle des araignées, I, Paris: Roret, p. 151, retrieved 2016-05-18
- Bertani, Rogério & Guadanucci, José Paulo Leite (2013), "Morphology, evolution and usage of urticating setae by tarantulas (Araneae: Theraphosidae)", Zoologia (Curitiba), 30 (4): 403–418, doi:10.1590/S1984-46702013000400006
- Escoubas, Pierre & Rash, Lachlan (2004), "Tarantulas: eight-legged pharmacists and combinatorial chemists", Toxicon, 43 (5): 555–574, doi:10.1016/j.toxicon.2004.02.007, PMID 15066413
Newtonian Atom Optics and its Applications
Atom optics is a new field that has emerged as a result of the capabilities of laser cooling. Devices depending on both material components and carefully arranged electromagnetic fields have been demonstrated. However, neutral atoms do not penetrate matter, so the only material devices that can be used for atom optics must function as masks, gratings, zone plates, and slits. Apart from simple masking, the principal effect of these intensity modulators is deBroglie wave diffraction, and so their discussion is left to Chapter 15. By contrast, atoms traveling in in-homogeneous electromagnetic fields, for example an optical standing wave, can experience a dipole force as discussed in Chapter 9. Thus the trajectories of atoms can be altered by the fields so that it becomes possible to control the motion of atoms using devices analogous to those in optics, including mirrors, lenses, beam splitters, retardation plates, etc.
Keywords: Atomic Beam; Laser Cool; Atomic Clock; Phase Space Density; Light Shift
- Mostly it is used in science to describe how much potential a physical system has to change. In physics, energy is a property of matter and space, objects and fields. It can be transferred between objects and can also be converted in form. It cannot, however, be created or destroyed.
- In economics it may mean the "energy industry," the harnessing and sale of energy itself as in fuel or electric power distribution.
- In ordinary language, the word is used to describe someone acting or speaking in a lively and vigorous way.
- It is a major part of physics and other sciences.
Scientific energy
In science, energy is the capacity to do work; the influence required to perform an action. The amount of energy in a system is the amount of changes that can be made to it.
Basic forms of energy include:
- Kinetic energy - energy of an object in motion, which acts as the capacity to undergo change in position over time.
- Potential energy - stored energy, which acts as the potential to do work.
- Heat - thermal energy which is used to vibrate atoms and molecules.
- Electrical energy which is energy that relates to electrical interactions.
Conservation of Energy
Energy is a property that is not created or destroyed, although it can change in detectable form. This rule is commonly known as the "conservation law of energy". According to this rule, the total amount of energy that exists within an isolated system will always be the same, no matter what changes have been made to it.
In the early 20th century, scientists discovered that matter itself can be created from energy and vice versa. This is just another change of form. After these discoveries, the conservation law of energy was extended to become the conservation law of matter and energy: matter and energy can neither be created from nothing nor destroyed to the point of complete erasure from reality. Albert Einstein was the first to derive this mathematically, in the formula E = mc².
Example
A stone is thrown upwards and falls to the ground.
- human throws the stone using energy stored in muscles = chemical energy
- stone moves upwards = kinetic energy
- stone at the highest point = potential energy
- stone falls to ground = kinetic energy
- stone hits ground = thermal energy/sonic energy
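The stone example's energy bookkeeping can be checked numerically. A minimal sketch, assuming an illustrative 0.1 kg stone, a 5 m peak height, and g = 9.81 m/s²:

```python
import math

# Energy bookkeeping for the thrown stone (illustrative values).
m, g, h = 0.1, 9.81, 5.0          # mass (kg), gravity (m/s^2), peak height (m)

potential_at_top = m * g * h            # E_p = m*g*h; kinetic energy is zero
speed_at_ground = math.sqrt(2 * g * h)  # speed when E_p has become E_k
kinetic_at_ground = 0.5 * m * speed_at_ground**2

# Conservation of energy: the totals match (ignoring air resistance).
assert math.isclose(potential_at_top, kinetic_at_ground)
print(f"{potential_at_top:.3f} J at the top -> {kinetic_at_ground:.3f} J at the ground")
```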
Types of energy
Scientists have identified many types of energy, and found that they can be changed from one kind into another. For example:
- Light energy
- Sound energy
- Renewable energy
- Solar energy
- Nuclear energy
- Elastic energy
- Gravitational potential energy
- Kinetic energy
- Dark energy
- Hamiltonian mechanics
- Internal energy
Measuring energy
As with other kinds of measurement, energy has units of measurement, which make the numbers meaningful.
The SI unit for both energy and work is the joule (J). It is named after James Prescott Joule. 1 joule is equal to 1 newton-metre. In terms of SI base units, 1 J is equal to 1 kg m² s⁻². It is most often used in science, though particle physics often uses the electronvolt.
What is the peak month for tornadoes where you live? In the slideshow above, our Severe Weather Expert Dr. Greg Forbes has provided the month-by-month analysis of where and when the chances for tornadoes are highest along with the 30-year average number of tornadoes for each month.
As you click through the maps, you will see that the risk of tornadoes is mainly confined to the South during the first two months of the year. The South then typically sees its greatest threat of tornadoes as we head into the months of March and April.
From April through June, the biggest tornado threat shifts to the Plains, Upper Midwest and Great Lakes as the jet stream retreats northward with time. The main tornado risk then stays along the northern tier of the country through much of the summer, however tropical storms and hurricanes can increase the threat in the South as they move inland.
As we head into the months of November and December, the greatest chance of tornadoes moves back to the South.
Phycology journals cover the full range of research in phycology, the study of algae, which is a branch of botany. These journals provide information to researchers such as ecologists and molecular biologists, and also cover the interactions of algae. The main subject areas of phycology journals are botany, ecology, molecular biology and microbiology.

Phycology journals also cover diatoms, cyanobacteria, microalgae and phytoplankton, which are major groups of algae. Algae are primary producers in ecosystems; many are single-celled and microscopic.

Phycology journals span major classifications such as Prokaryota, Eukaryota, Cyanobacteria and Xanthophyceae. They accept research, review, mini-review, commentary, short commentary, public health reports, case reports, images and other article types.
Habitat destruction may have profound negative impacts on the biodiversity of a given environment. The presence of Allee effects compounds these problems. A sound understanding of the interplay between habitat destruction, Allee effects, and subsequent extinctions is needed to better guide conservation efforts. This study incorporates Allee effects into an extinction debt model, and the analyses provide upper bounds on the strength of those effects that are consistent with species persistence. These results differ from those of a previous study by up to four orders of magnitude assuming the same published parameter values. These new results suggest that if sufficiently strong Allee effects are present, efforts to prevent extinction through habitat restoration may be futile. © 2010 Elsevier B.V.
Interaction of Radiation with Matter
All particulate and electromagnetic radiations can interact with the atoms of an absorber during their passage through it, producing ionization and excitation of the absorber atoms. These radiations are called ionizing radiations. Because particulate radiations have mass and electromagnetic radiations do not, the latter travel a longer distance through matter before losing all their energy than the former of the same energy. Electromagnetic radiations are therefore called penetrating radiations and particulate radiations non-penetrating radiations. The mechanisms of interaction with matter, however, differ for the two types of radiation, and therefore they are discussed separately.
Keywords: Charged Particle; Pair Production; Compton Scattering; Linear Energy Transfer; Scattered Photon
How Many Neutron Stars Are Born Rapidly Rotating?

Nikolaos Stergioulas, Department of Physics, Aristotle University of Thessaloniki
ENTAPP, 23/1/2006

Why do we need rapid rotation?
Several GW emission mechanisms during NS formation rely on rapid rotation:
Several GW emission mechanisms during NS formation rely on rapid rotation:
But, are NS born rapidly rotating?
A large fraction of progenitor stars are initially rapidly rotating:
The average rotation of OB type stars on the main sequence is 25% of break up speed.
About 0.3% of B stars have Ω > 67% of breakup, e.g. Regulus in Leo: 86% of breakup.
But: Magnetic Torques can Spin Down the Core!
Spruit & Phinney 1998, Spruit 2002, Heger, Woosley & Spruit 2004
When the progenitor passes through the Red Supergiant (RSG) phase it has a huge envelope of several hundred times the initial radius.
The core’s differential rotation produces a magnetic field by dynamo action that couples the core to the outer layers, transferring away angular momentum. This leads to slowly rotating neutron stars at birth (~10-15ms).
Is there a way out of this?
By-Passing the RSG Phase (e.g. Wheeler et al. 2000)
Massive Stars (M>25Msun) evolve very rapidly. Two advantages:
a) There is not sufficient time to slow down the core effectively!
b) A strong wind (WR phase) will expel the envelope, preventing slow down of core by magnetic torques.
A strong wind (high mass-loss rate) allows NS to be formed instead of a BH, but could also carry away a lot of angular momentum.
Mass-loss rate is lower if the star has low metallicity.
In addition, rapidly rotating WR stars may lose mass mainly at the poles (temperature is higher there) => angular momentum loss is lower.
Rapidly rotating cores produced by the right mixture of high mass and low metallicity
Observational evidence: 1) a magnetar produced by a 30-40 Msun progenitor (Gaensler et al. 2005); 2) a magnetar with a > 40 Msun progenitor in a star cluster (Muno et al. 2005).
Suggested as ms pulsar formation mechanism in globular clusters.
Also suggested as an alternative magnetar formation mechanism, with an event rate of 0.3/year at ~40 Mpc (Levan et al. 2006).

Additional Paths to Rapid Rotation
1) Rotational mixing in OB stars:
Woosley & Heger 2005
Rapid rotation in massive OB stars can induce deep rotational mixing, preventing the RSG phase (stars stay on main sequence).
Woosley & Heger (2005) estimate that 1% of all stars with mass >10Msun will produce rapidly rotating cores.
2) Loss of envelope in binary evolution:
If a binary companion strips the outer envelope of a massive star before core collapse, the RSG phase is avoided.
(see Fryer & Kalogera 2001, Pfahl et al. 2002, Podsiadlowski et al. 2003, Ivanova & Podsiadlowski 2003)
(see e.g. Watts & Andersson, 2002)
A few decades ago, monthly heat extremes in summer were practically absent. Today, due to man-made climate change, they are already observed on 5 percent of the land area.
This is projected to double by 2020 and quadruple by 2040, according to a study by scientists of the Potsdam Institute for Climate Impact Research (PIK) and the Universidad Complutense de Madrid (UCM). A further increase of heat extremes in the second half of our century could be stopped if global greenhouse-gas emissions would be reduced substantially.
“In many regions, the coldest summer months by the end of the century will be hotter than the hottest experienced today – that’s what our calculations show for a scenario of unabated climate change,” says Dim Coumou of PIK. “We would enter a new climatic regime.” The scientists focus on heat waves that exceed the usual natural variability of summer month temperatures in a given region by a large margin, namely so-called 3-sigma events. These are periods of several weeks that are three standard deviations warmer than the normal local climate – often resulting in harvest losses, forest fires, and additional deaths in heat-struck cities.
Such heat extremes might cover 85 percent of the global land area in summer by 2100, if CO2 continues to be emitted as it is today, the study shows. In addition to this, even hotter extremes that are virtually non-existent today would affect 60 percent of the global land area.
While climate change mitigation could prevent this, the projected increase up to mid-century is expected to happen regardless of the emissions scenario. “There are already so many greenhouse gases in the atmosphere today that the near-term increase of heat extremes seems to be almost inevitable,” Coumou says. This is important information for developing adaptation measures in the affected sectors.
As the study defines a heat extreme based on the natural variability a region has experienced in the past, the absolute temperatures of this type of event will differ in different regions of the world. For instance the observed Russian heat wave brought an increase of the monthly average temperature by 7 degrees Celsius in Moscow and daily peak temperatures above 40 degrees. In tropical regions like e.g. Southern India or Brazil, natural variability is much smaller than in the moderate zones, hence 3-sigma events are not as large a deviation in absolute temperatures.
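The 3-sigma construction can be made concrete with a short calculation using only the standard library. The temperature series below is entirely made up for illustration; only the mean + 3·sigma threshold reflects the definition used in the study.

```python
import statistics

# Hypothetical series of July mean temperatures in deg C
# (made-up numbers, for illustration only).
july_means = [21.8, 22.4, 21.1, 23.0, 22.2, 21.6, 22.9, 21.4,
              22.7, 21.9, 22.1, 22.5, 21.7, 22.3, 22.0]

mu = statistics.mean(july_means)
sigma = statistics.pstdev(july_means)   # population standard deviation

# A month would count as a 3-sigma heat extreme if it exceeds this:
threshold = mu + 3 * sigma
print(f"mean {mu:.2f} C, sigma {sigma:.2f} C -> 3-sigma threshold {threshold:.2f} C")
```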
“In general, society and ecosystems have adapted to extremes experienced in the past and much less so to extremes outside the historic range,” Alexander Robinson of UCM says. “So in the tropics, even relatively small changes can yield a big impact – and our data indicates that these changes, predicted by earlier research, in fact are already happening.”
The scientists combined the results of a comprehensive set of state-of-the-art climate models, the CMIP5 ensemble, thereby reducing the uncertainty associated with each individual model. “We show that these simulations capture the observed rise in heat extremes over the past 50 years very well,” Robinson points out. “This makes us confident that they’re able to robustly indicate what is to be expected in the future.”
Article: Coumou, D., Robinson, A. (2013): Historic and future increase in the global land area affected by monthly heat extremes. Environmental Research Letters 8, 034018. [doi:10.1088/1748-9326/8/3/034018]
For further information please contact:
Jonas Viering | PIK Potsdam
Global study of world's beaches shows threat to protected areas
19.07.2018 | NASA/Goddard Space Flight Center
NSF-supported researchers to present new results on hurricanes and other extreme events
19.07.2018 | National Science Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of the highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time, a team of researchers has discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows researchers to investigate the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches to coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences
Scientists in the past decade have discovered that remnants of ancient germ line infections called human endogenous retroviruses make up a substantial part of the human genome. Once thought to be merely "junk" DNA and inactive, many of these elements, in fact, perform functions in human cells.
Now, a new study by John McDonald of the University of Georgia and King Jordan at the National Center for Biotechnology Information at the National Institutes of Health suggests for the first time that a burst of transpositional activity occurred around the time humans and chimps are believed to have diverged from a common ancestor, about 6 million years ago. These new results implicate retroelements, a particular type of transposable element that is abundant in the human genome, in the actual shift from more rudimentary primates to modern human beings. The research was just published in the journal Genome Letters.
"There is a growing body of evidence that transposable elements have contributed to the evolution of genome structure and function in many species," said McDonald, a molecular evolutionist and head of the genetics department at UGA. "Our results suggest that a burst of transposable element activity may well have contributed to the genetic changes that led to the emergence of the human species." Jordan received his doctoral degree at UGA working with McDonald.
Kim Carlyle | EurekAlert!
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
Physics is all about observation, but how much can we actually see? These articles explore some of the limits of observation — be they natural, scientific, political, or down to the jelly-like quantum nature of reality.
What can we see? — The human eye is a marvellous thing: it can see pretty much as far as you like. But what are its limits when it comes to distinguishing objects and seeing colours?
What can science see? — Observing the smallest building blocks of nature — such as the famous Higgs boson — isn't about seeing in the ordinary sense. It's about careful mathematical detective work and statistical analysis.
What can we agree to look for? — Even if the science and technology you need to observe something exist, there can still be political and economical limits to what can be done.
Heisenberg's uncertainty principle — One of the most famous results from quantum mechanics puts a limit on the accuracy with which we can observe fundamental particles. It's like squeezing jelly!
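The limit in question is usually written as the position-momentum uncertainty relation (the standard textbook form, stated here for context rather than taken from the article itself):

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

Squeezing down the position spread $\Delta x$ (the "jelly") necessarily inflates the momentum spread $\Delta p$, and vice versa.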
This package is part of our Who's watching? The physics of observers project, run in collaboration with FQXi. Click here to see more articles and videos about questions to do with observers in physics.
Scientists think it was an asteroid that crashed into Earth, causing the extinction of the dinosaurs. Comets have been more benign, and may even have delivered most of the water found on our planet today. As relics of the creation of our solar system 4.6 billion years ago, comets and asteroids may be very different “space rocks,” but they both rotate around their own axes, just like Earth.
Asteroids and comets rotate, but not exactly like the Earth. Because Earth is a sphere, its mass is distributed relatively evenly, so it rotates smoothly. Asteroids and comets aren’t uniformly shaped, so their rotation can be more of a tumble. NASA equates their rotation to the spin you see on a badly thrown football. The direction of rotation can differ for each individual asteroid or comet.
Asteroid Rotation Speed
The fastest rotation scientists have recorded is that of Asteroid 2008 HJ. This asteroid is oblong and completes a rotation every 42.7 seconds. It is able to turn so fast because it is only 12 meters by 24 meters (39.4 feet by 78.7 feet), about the size of a tennis court. Other asteroids usually take between one hour and one day to rotate. There is more to be discovered about how fast asteroids rotate. In fact, scientists at Cornell recently discovered that the force from light particles colliding with asteroids can make them rotate more quickly.
Comet Rotation Speed
The nucleus of the comet Wirtanen has a period of 7.6 hours; in other words, it takes that long to complete one rotation. Hale-Bopp, a well-known comet, takes 11.47 hours to rotate, but the comet Phaethon spins around in only 3.6 hours. Other comets range from a few hours to 15, but they usually spin more quickly than asteroids. Rotation speeds of comets can be calculated by photometry, which measures the brightness of the comet as it turns. Scientists track the rotation of the nucleus of the comet, which is rock rather than the ice that surrounds it.
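Photometric period-finding can be illustrated with a toy example: in a brightness time series, the rotation period appears as the dominant peak of the periodogram. A minimal sketch on a synthetic light curve (the 7.6-hour period matches the Wirtanen figure quoted above; the sampling and the 10% brightness swing are invented):

```python
import numpy as np

# Synthetic light curve of a rotating nucleus; the 7.6-hour period matches
# the Wirtanen value quoted above, the brightness amplitude is made up.
true_period_h = 7.6
t = np.arange(0.0, 76.0, 0.05)             # 10 rotations, one sample per 3 min
flux = 1.0 + 0.1 * np.sin(2 * np.pi * t / true_period_h)

# The rotation period is the inverse of the dominant periodogram frequency
spectrum = np.abs(np.fft.rfft(flux - flux.mean()))
freqs = np.fft.rfftfreq(t.size, d=0.05)    # cycles per hour
best = np.argmax(spectrum[1:]) + 1         # skip the zero-frequency bin
estimated_period_h = 1.0 / freqs[best]
print(round(estimated_period_h, 2))
```

Real comet photometry is unevenly sampled and noisy, so in practice astronomers use methods such as Lomb-Scargle periodograms rather than a plain FFT.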
Close passes through the gravitational fields of planets such as Earth can change the rotation or spin of asteroids. A change in rotation can affect the course of the asteroid, potentially bringing it closer to Earth. NASA monitors both comets and asteroids when they come within collision range, so understanding how they rotate is especially important. According to a paper in the "Annual Review of Earth and Planetary Sciences," there is still a lot that scientists don't understand about the rotation of comets, including their direction of spin.
The Space Age is little more than half a century old, but it clearly marks a pivotal era in world history. It has profoundly shifted humanity’s view of itself, from a society bound by competing political blocs toward a more holistic panorama. The global economic and social benefits of space technology and science are immense: The study of Earth from space allows human beings to analyze and help conserve the planet’s dwindling natural resources. Space technology provides valuable information about weather and navigation, and satellite communications bring people closer together. The space industry – one of the world’s fastest growing sectors – exerts a powerful gravitational force, pulling many other commercial and societal aspects into its orbit. This comprehensive and nuanced report from the World Economic Forum is a fascinating must-read that getAbstract highly recommends to all policy makers, executives, NGO leaders and environmentalists.
In this summary, you will learn
- How the space industry contributes to global social and economic welfare;
- Why governments need to protect space-based technology; and
- How space assets affect global communications, education and health, food, human rights, Arctic governance, natural resources, the study and monitoring of climate change, and more.
About the Author
The World Economic Forum is an independent global organization that engages leaders of business, politics, academia and society to improve the state of the world.
Rice University scientists create patterned graphene onto food, paper, cloth, cardboard
Rice University scientists who introduced laser-induced graphene (LIG) have enhanced their technique to produce what may become a new class of edible electronics.
The Rice lab of chemist James Tour, which once turned Girl Scout cookies into graphene, is investigating ways to write graphene patterns onto food and other materials to quickly embed conductive identification tags and sensors into the products themselves.
“This is not ink,” Tour said. “This is taking the material itself and converting it into graphene.”
The process is an extension of the Tour lab’s contention that anything with the proper carbon content can be turned into graphene. In recent years, the lab has developed and expanded upon its method to make graphene foam by using a commercial laser to transform the top layer of an inexpensive polymer film.
The foam consists of microscopic, cross-linked flakes of graphene, the two-dimensional form of carbon. LIG can be written into target materials in patterns and used as a supercapacitor, an electrocatalyst for fuel cells, radio-frequency identification (RFID) antennas and biological sensors, among other potential applications.
The new work reported in the American Chemical Society journal ACS Nano demonstrated that laser-induced graphene can be burned into paper, cardboard, cloth, coal and certain foods, even toast.
“Very often, we don’t see the advantage of something until we make it available,” Tour said. “Perhaps all food will have a tiny RFID tag that gives you information about where it’s been, how long it’s been stored, its country and city of origin and the path it took to get to your table.”
He said LIG tags could also be sensors that detect E. coli or other microorganisms on food. “They could light up and give you a signal that you don’t want to eat this,” Tour said. “All that could be placed not on a separate tag on the food, but on the food itself.”
Multiple laser passes with a defocused beam allowed the researchers to write LIG patterns into cloth, paper, potatoes, coconut shells and cork, as well as toast. (The bread is toasted first to “carbonize” the surface.) The process happens in air at ambient temperatures.
“In some cases, multiple lasing creates a two-step reaction,” Tour said. “First, the laser photothermally converts the target surface into amorphous carbon. Then on subsequent passes of the laser, the selective absorption of infrared light turns the amorphous carbon into LIG. We discovered that the wavelength clearly matters.”
The researchers turned to multiple lasing and defocusing when they discovered that simply turning up the laser’s power didn’t make better graphene on a coconut or other organic materials. But adjusting the process allowed them to make a micro supercapacitor in the shape of a Rice “R” on their twice-lased coconut skin.
Defocusing the laser sped the process for many materials as the wider beam allowed each spot on a target to be lased many times in a single raster scan. That also allowed for fine control over the product, Tour said. Defocusing allowed them to turn previously unsuitable polyetherimide into LIG.
“We also found we could take bread or paper or cloth and add fire retardant to them to promote the formation of amorphous carbon,” said Rice graduate student Yieu Chyan, co-lead author of the paper. “Now we’re able to take all these materials and convert them directly in air without requiring a controlled atmosphere box or more complicated methods.”
The common element of all the targeted materials appears to be lignin, Tour said. An earlier study relied on lignin, a complex organic polymer that forms rigid cell walls, as a carbon precursor to burn LIG in oven-dried wood. Cork, coconut shells and potato skins have even higher lignin content, which made it easier to convert them to graphene.
Tour said flexible, wearable electronics may be an early market for the technique. “This has applications to put conductive traces on clothing, whether you want to heat the clothing or add a sensor or conductive pattern,” he said.
Learn more: Graphene on toast, anyone?
An astronomical object or celestial object is a naturally occurring physical entity, association, or structure that exists in the observable universe. In astronomy, the terms object and body are often used interchangeably. However, an astronomical body or celestial body is a single, tightly bound, contiguous entity, while an astronomical or celestial object is a complex, less cohesively bound structure, which may consist of multiple bodies or even other objects with substructures.
Examples of astronomical objects include planetary systems, star clusters, nebulae, and galaxies, while asteroids, moons, planets, and stars are astronomical bodies. A comet may be identified as both body and object: It is a body when referring to the frozen nucleus of ice and dust, and an object when describing the entire comet with its diffuse coma and tail.
The universe can be viewed as having a hierarchical structure. At the largest scales, the fundamental component of assembly is the galaxy. Galaxies are organized into groups and clusters, often within larger superclusters, that are strung along great filaments between nearly empty voids, forming a web that spans the observable universe.
Galaxies have a variety of morphologies, with irregular, elliptical and disk-like shapes, depending on their formation and evolutionary histories, including interaction with other galaxies, which may lead to a merger. Disc galaxies encompass lenticular and spiral galaxies with features, such as spiral arms and a distinct halo. At the core, most galaxies have a supermassive black hole, which may result in an active galactic nucleus. Galaxies can also have satellites in the form of dwarf galaxies and globular clusters.
The constituents of a galaxy are formed out of gaseous matter that assembles through gravitational self-attraction in a hierarchical manner. At this level, the resulting fundamental components are the stars, which are typically assembled in clusters from the various condensing nebulae. The great variety of stellar forms is determined almost entirely by the mass, composition and evolutionary state of these stars. Stars may be found in multi-star systems that orbit about each other in a hierarchical organization. A planetary system and various minor objects, such as asteroids, comets and debris, can form in a hierarchical process of accretion from the protoplanetary disk that surrounds a newly formed star.
The various distinctive types of stars are shown by the Hertzsprung-Russell diagram (H-R diagram), a plot of absolute stellar luminosity versus surface temperature. Each star follows an evolutionary track across this diagram. If this track takes the star through a region containing an intrinsic variable type, then its physical properties can cause it to become a variable star. An example of this is the instability strip, a region of the H-R diagram that includes Delta Scuti, RR Lyrae and Cepheid variables. Depending on the initial mass of the star and the presence or absence of a companion, a star may spend the last part of its life as a compact object: either a white dwarf, a neutron star, or a black hole.
The table below lists the general categories of bodies and objects by their location or structure.
| Simple bodies | Compound objects | Extended objects |
Focus: Computer Chooses Quantum Experiments
The notoriously counterintuitive features of quantum mechanics make it hard to design experiments to study the quantum fundamentals and to develop quantum computing and quantum cryptography. So a team of researchers has developed an algorithm for combining the building blocks of quantum optics experiments, such as beam splitters and mirrors, to achieve a particular goal, such as a certain photonic quantum state. The experimental arrangements generated so far are ones the researchers say they were unlikely to have thought of themselves, and some work in ways that are hard to understand.
Experiments in quantum optics, whether for fundamental or practical ends, tend to use a rather limited set of components to manipulate the quantum states of photons. Beam splitters can send laser light along two different paths with certain probabilities and generate so-called superposition states in which photons seem to take two paths at once. Nonlinear crystals generate pairs of quantum-connected (entangled) photons, and the usual mirrors and lenses guide laser beams.
The algorithm designed by Anton Zeilinger of the University of Vienna and his co-workers, called Melvin, takes elements like these and shuffles them to find an experimental arrangement that will produce specified quantum properties in the photon beams. For example, many experiments require entanglement, where two photons have some property, such as polarization or angular momentum, that is correlated—measuring the value for one photon tells you the value for the other. Researchers might also want to manipulate single photons.
Melvin begins by arranging the elements randomly; it then calculates the resulting photon beams and checks whether any of them comes close to the specified goals. If not, the process is repeated. But if the output meets the design criteria, Melvin then does some further shuffling to simplify the setup before reporting it to the user. The algorithm can also learn from experience, remembering certain configurations that achieve particular goals, so that it can build on them in the future.
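The loop just described (assemble elements at random, evaluate the result, test it against the goal, then simplify) is a generic search strategy. The sketch below is a toy stand-in: plain integers and addition take the place of optical elements and the quantum-state calculation, and the greedy simplification heuristic is our own guess, not the paper's:

```python
import random

def melvin_style_search(elements, evaluate, meets_goal, max_tries=10_000, seed=1):
    """Draw random sequences from the toolbox, keep the first one whose
    evaluated output meets the goal, then greedily drop redundant elements."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        setup = [rng.choice(elements) for _ in range(rng.randint(1, 6))]
        if meets_goal(evaluate(setup)):
            i = 0
            while i < len(setup):              # simplification pass
                trial = setup[:i] + setup[i + 1:]
                if trial and meets_goal(evaluate(trial)):
                    setup = trial              # element was redundant, drop it
                else:
                    i += 1
            return setup
    return None

# Stand-in "physics": each element contributes a number; the goal is a sum of 5
toolbox = [1, 2, 3]
found = melvin_style_search(toolbox, evaluate=sum, meets_goal=lambda v: v == 5)
print(found, sum(found))
```

In the real Melvin, the evaluation step is a symbolic calculation of the output photonic state, and remembered successful sub-configurations let the search build on past results.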
Doctoral student Mario Krenn came up with the idea of automating the design process when he found himself struggling to devise a way of creating a particular kind of quantum state. He realized that he was just guessing—and that a computer could do it much faster. “So I defined the goal, made an algorithm, and let it run overnight,” he says. “In the morning there was indeed a ‘solution.txt’ file. That was quite an exciting day.”
The Vienna team has demonstrated their method using so-called Greenberger-Horne-Zeilinger (GHZ) states, in which more than two photons are entangled. When the entanglement involves more than one of the particles’ variables—multiple components of orbital angular momentum, say—they are said to be high-dimensional and can be challenging to produce. Melvin delivered 51 different experiments that could realize entangled states using combinations of various optical elements, and one of the states was the high-dimensional GHZ state they were seeking.
Quantum theory verifies that Melvin’s arrangements should work, but that doesn’t mean it's easy to understand why. “I still find it quite difficult to understand intuitively what exactly is going on,” Krenn says. Such a lack of understanding, even for experiments with only a few components, “is unique to quantum physics,” says Krenn.
What’s more, the setups look rather different from what a human might try. All but one of them include a photon path that, after being initially entangled, doesn’t interact with any other beam path before it reaches the detector. The researchers are now seeking to test these predictions experimentally and have already managed to create the multiphoton, high-dimensional entangled state.
In a second proof of principle, Melvin found several sequences of operations that were cyclic—they could transform photon properties through a series of changes that end up back with the initial state. And these transformations were also high-dimensional. Such sequences could be useful in quantum computing or cryptography.
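A cyclic sequence in this sense is one whose repeated application returns the system to its starting state. A minimal abstract check (the 4-level shift is an invented stand-in, not one of Melvin's actual photonic transformations):

```python
def cycle_length(transform, state, max_steps=100):
    """Apply `transform` repeatedly; return how many applications it takes
    for the initial state to recur, or None if it never does."""
    s = transform(state)
    for n in range(1, max_steps + 1):
        if s == state:
            return n
        s = transform(s)
    return None

# Invented stand-in: a cyclic shift on a 4-level mode label
shift = lambda k: (k + 1) % 4
print(cycle_length(shift, 0))
```

The high-dimensional transformations Melvin found act on photon properties such as orbital angular momentum rather than on a simple integer label, but the recurrence test is the same in spirit.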
“This is an original and unusual paper,” says quantum physicist Nicolas Gisin of the University of Geneva. But he thinks automation of experimental design will only augment, not replace, human agency in the foreseeable future. Quantum physicist Terry Rudolph of Imperial College in London says that the work “has the potential to uncover new tensions between quantum and classical descriptions of the world.” He expects few of Melvin’s solutions to be understandable once it gets past the simplest combinations of experimental elements.
This research is published in Physical Review Letters.
Philip Ball is a freelance science writer in London; his latest book is Beyond Weird, a survey of interpretations of quantum mechanics.
- M. Malik, M. Erhard, M. Huber, M. Krenn, R. Fickler, and A. Zeilinger, “Multi-Photon Entanglement in High Dimensions,” Nature Photon. advance online publication (2016).
Lecture title: Boron Laser Fusion without the nuclear radiation problem, for clean, economic and unlimited energy
Professor Heinrich Hora (born 1931 in Bodenbach, Czechoslovakia) has been an internationally respected authority on laser-driven fusion energy since its beginnings in 1962 at the Max Planck Institute of Plasma Physics in Garching, Germany. From 1975 he held the chair of Theoretical Physics at the University of New South Wales in Sydney, Australia, continuing in an Emeritus position from 1992. His computations led to the first formula for optimized fusion gains and to optical constants in plasmas based on the classical collision frequency for absorption. Analyzing measurements of laser-produced ion emission from plasmas led him to conclude that nonlinear effects were at work: the generation of anomalously energetic ions led to the formulation of the nonlinear force describing the direct conversion of laser energy into mechanical motion of plasma. These forces cover the optical plasma properties and are a generalization of the long-known ponderomotive force. With this he derived ponderomotive self-focusing, explaining why laser powers above a megawatt cause the change into the nonlinear regime (1970). His relativistic derivation of the optical constants led to the first general derivation of relativistic self-focusing (1975). For his academic contributions he has been awarded the Ritter von Gerstner Medal (1985), the Edward Teller Medal (1991), the Dirac Medal (2001), and the Ernst Mach Medal (2002).
A long-standing question in Earth science is the extent to which seismic and volcanic activity can be regulated by tidal stresses, a repeatable and predictable external excitation induced by the Moon-Sun gravitational force. Fortnightly tides, a ~14-day amplitude modulation of the daily tidal stresses that is associated with lunar cycles, have been suggested to affect volcano dynamics. However, previous studies found contradictory results and remain mostly inconclusive. Here we study how fortnightly tides have affected Ruapehu volcano (New Zealand) from 2004 to 2016 by analysing the rolling correlation between lunar cycles and the seismic amplitude recorded close to the crater. The long-term (~1-year) correlation is found to increase significantly (up to a confidence level of 5 sigma) during the ~3 months preceding the 2007 phreatic eruption of Ruapehu, revealing that the volcano is sensitive to fortnightly tides when it is prone to explode. We show through a mechanistic model that real-time monitoring of seismic sensitivity to lunar cycles may help to detect the clogging of active volcanic vents, and thus to better forecast phreatic volcanic eruptions.
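The rolling luni-seismic correlation described in the abstract can be sketched as follows: within a sliding ~1-year window, correlate the daily seismic amplitude with a fortnightly (~14.77-day) tidal proxy. This is a schematic reconstruction on synthetic data, not the authors' actual processing chain:

```python
import numpy as np

def rolling_lunar_correlation(days, amplitude, window=365, period=14.77):
    """Rolling Pearson correlation between a daily seismic-amplitude series
    and a fortnightly (~14.77-day) tidal proxy; NaN until the window fills."""
    proxy = np.cos(2 * np.pi * np.asarray(days) / period)
    amp = np.asarray(amplitude, dtype=float)
    out = np.full(amp.size, np.nan)
    for i in range(window, amp.size + 1):
        out[i - 1] = np.corrcoef(amp[i - window:i], proxy[i - window:i])[0, 1]
    return out

# Synthetic check: an amplitude series locked to the tidal cycle plus noise
rng = np.random.default_rng(42)
days = np.arange(3 * 365)
amp = np.cos(2 * np.pi * days / 14.77) + 0.3 * rng.standard_normal(days.size)
corr = rolling_lunar_correlation(days, amp)
print(round(float(np.nanmax(corr)), 2))
```

Assessing whether such a correlation is significant at the 5-sigma level quoted above requires a null model for the seismic noise, which this sketch omits.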
The possibility that Moon-Sun gravitational forces can influence terrestrial volcanism has been widely debated over the last century1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21. The most intriguing debate is whether fortnightly tides, a ~14-day amplitude modulation of the daily tidal stresses that is related to lunar phases, affect volcanic activity or even force eruptions to occur on some specific days rather than others. For example, Johnston and Mauk22 suggested that major eruptions at Stromboli (Italy) start preferentially close to the fortnightly tidal minima (neap tides), that is, when the moon phase is close to the first or third quarter. The same correlation with neap tides was reported for five of the six dome extrusions occurring between 1879 and 1880 at Islas Quemadas volcano23 (El Salvador). In contrast, Sottili and Palladino24 suggested that the frequency of small explosive events at Stromboli increases during fortnightly tidal maxima (spring tides), that is, close to full or new moon. The same correlation with spring tides was reported for Kilauea (Hawai’i, US), Fuego (Guatemala), Ngauruhoe (New Zealand), and Mayon (Philippines) volcanoes25,26,27,28. On the other hand, fortnightly tidal modulation does not apparently affect the onset of eruptions at Mauna Loa25,29 (Hawai’i).
The aforementioned studies appear to yield contradictory conclusions as to the nature or existence of correlation between lunar phases (i.e., fortnightly tidal modulation) and volcanic activity. This may be a statistical bias due to the unknown start time of many historical eruptions, and therefore to the small number of events considered30. To overcome these limitations, we analyse whether fortnightly tides affect volcanoes by addressing the following questions: is the persistent seismicity recorded around active volcanic centres sensitive to fortnightly tidal modulation? If so, can we use this sensitivity to detect when a volcano is in a critical state and prone to erupt? We tackle these questions by exploring the correlation between lunar phases and seismic amplitude (hereafter called luni-seismic correlation) at Ruapehu volcano, New Zealand (Fig. 1). In particular, we use data from a seismic station installed at the summit of Ruapehu because: (a) the processes that are more likely to respond to tidal forcing are expected to take place at shallow levels beneath the volcanic crater5,31, and (b) nearly-continuous data are openly available for the last 13 years (we use data from 22-February-2004 to 15-November-2016, Geonet-GNS archive, DRZ station, vertical component32). Ruapehu is a good candidate for this study as it has displayed a broad spectrum of behaviour over the last decade, including several episodes of unrest, periods of quiescence, a small gas explosion (October 4, 2006), and a large phreatic eruption which occurred without warning32,33 (September 25, 2007).
The analysis we undertake here consists of five main steps. First, we compute the logarithm of the daily seismic amplitude ln(y sam ) from the raw seismic data, after applying a 20-day high-pass median filter to remove the influence of potential processes occurring over timescales greater than the average periodicity of spring tides, T = 14.7653 days (i.e., the average time between full moon and the next new moon) (Supplementary Fig. S1). Second, we create a synthetic periodic time series to parameterize lunar cycles: y lun = −cos(2π(t − t low )/T), where t is time in days and t low is a reference day of the calendar with neap tide (i.e., quarter moon). With this approach, y lun = 1 during high tides (full or new moon) and y lun = −1 during low tides (quarter moon) (Supplementary Fig. S2). Third, we calculate the Pearson product-moment correlation coefficient (ρ) among different subsets of the time series ln(y sam ) and y lun (Supplementary Fig. S3). We use backward windows of 1 year (i.e., >300 data pairs capturing more than 20 spring/neap tides), which allow us to explore the existence of long-term luni-seismic correlation and preserve a statistically relevant sample size; the analysis is also performed with backward windows from 100 to 600 days for the sake of comparison. The Pearson coefficient ρ evaluates if two datasets are linearly correlated, such that 0 < ρ ≤ 1 implies a positive correlation between ln(y sam ) and y lun , while −1 ≤ ρ < 0 implies a negative correlation; the magnitude of ρ indicates the strength of the correlation between the time series. Fourth, we test if ρ obtained for every moving window is significantly different from zero by calculating the probability p value of obtaining by chance the observed (or more extreme) values of ρ.
Fifth, we combine a set of Monte Carlo simulations with a binomial test to analyse the likelihood for a stochastic seismic amplitude time series to produce the correlation coefficients observed in the natural data (Supplementary Fig. S4). A detailed explanation of the data processing and analysis can be found in Methods (Sections 1 and 2).
Our study reveals that the 1-year rolling correlation between lunar cycles and seismic amplitude increases during the ~3 months preceding the phreatic eruption of September 25, 2007 (Fig. 2, Supplementary Fig. S5). During this period, the Pearson coefficient ρ is negative (i.e., daily median seismic amplitude tends to be lower towards full/new moon), its magnitude |ρ| ≥ 0.21, and it is statistically significant at a confidence level of 4-sigma (i.e., the probability of obtaining the observed ρ value -or greater- by chance is lower than 0.006%); the Pearson coefficient reaches a maximum of |ρ| max = 0.26 with confidence level of 5-sigma about 2 months prior to the eruption (Figs 2a,b). Moreover, we find through Monte Carlo simulations that the probability of obtaining by chance 1 day with |ρ| ≥ |ρ| max is ~10−5, whereas the binomial test reveals that the probability of obtaining by chance |ρ| ≥ 0.21 over a period extending for ~3 months with a dataset of 12 years is extremely low (<10−36). Interestingly, the significant luni-seismic correlation episode arises at a time when Ruapehu did not show signs of unrest, and it disappears after the phreatic eruption. During the rest of the 12-year period analysed, the luni-seismic correlation is not significant beyond the 4-sigma level and the Pearson coefficient ρ alternates between positive and negative values that are always lower than ~0.15 in absolute value (Fig. 2a,c). These oscillating low values of ρ reveal that the sensitivity of Ruapehu to lunar cycles is independent of the state of unrest, including anomalous episodes with earthquakes, elevated tremor activity, high degassing rates, and abnormally high or low lake temperatures. Our results remain similar if one carries out the same analysis with backward windows in the range 290–430 days (Supplementary Fig. S6), so we are statistically confident in claiming that Ruapehu was sensitive to lunar cycles during the ~9–15 months prior to the 2007 paroxysm.
Smaller time windows (<290 days) obscure the luni-seismic correlation prior to the phreatic eruption and lead to some short-term peaks of correlation that emerge sporadically; this is probably due to the instability introduced by the small sample size and the noisy nature of the data34. Larger windows (>430 days) also obscure the sensitivity of Ruapehu to lunar cycles, probably because the mechanism behind the seismic response to fortnightly tidal modulation does not operate on such large timescales (Supplementary Fig. S7).
The emergence of long-term (~9–15 months) sensitivity of Ruapehu to fortnightly tides calls for a model to (a) quantify the seismic response of the volcanic system to tidal forcing, and (b) identify the link between luni-seismic correlation and the physical processes that lead to phreatic eruptions (Fig. 3). Based on a recent mechanistic model developed by Girona et al.35,36, we propose that the persistent seismicity recorded around the crater of Ruapehu (i.e., shallow tremor37) is induced by pressure oscillations emerging in gas pockets trapped below the active vent; these pressure oscillations arise in response to two concurrent processes: the permeable flow of gases through the shallowest part of the volcanic edifice and the intermittent supply of volatiles from deeper levels (Fig. 3a). Moreover, we argue that tidal stresses squeeze magma reservoirs, thus inducing harmonic ascent and retreat of magma in the shallow plumbing system1,2,5,6,31 (Fig. 3b). Combining the aforementioned approaches, we generate synthetic tremor signals that are analysed in a similar way as the natural data (see details in Methods, Section 3). Our analysis reveals that, for small tidally-induced ascent/retreat of magma (~1 cm amplitude), the magnitude of the luni-seismic correlation increases when the permeability decreases below a threshold value (Fig. 4). In other words, fortnightly tides can modulate the daily median amplitude of volcanic tremor, but only if the permeability of the shallow volcanic edifice is low enough. For example, using realistic values for the different parameters involved in the model, we find that the Pearson coefficient ρ differs from zero (at confidence level of 4-sigma) if the permeability κ is below ~10−9 m2, reaching ρ = −0.34 ± 0.20 (note that these values of permeability are realistic for shallow volcanic edifices because they are highly fractured38) (Fig. 4).
Our model also suggests that the correlation is negative because the response time39,40 of the volcano to tidal stresses is between ~3 h and ~9 h. In turn, this implies that the magma plumbing system of Ruapehu is predominantly under compression when closer to quarter moon and predominantly under extension when closer to full/new moon (see Supplementary Discussion).
We therefore suggest that Ruapehu was sensitive to lunar cycles during 2006/2007 because the shallow vent was highly clogged, which likely triggered the 25 September 2007 phreatic eruption because clogging favours pressurization (due to gas over-accumulation) below the active crater33. Vent clogging may be induced by two potential mechanisms. First, clogging could have occurred gradually since the 1995/1996 vent-clearing eruptions due to mineral precipitation in pore networks33, decreasing the permeability of the vent below threshold values from ~June 2006 (i.e., ~15 months prior to the phreatic eruption). In such a case, gradual clogging may also have triggered the small 4 October 2006 gas explosion, although no significant luni-seismic correlation could be detected due to the short time window (~90–120 days since June 2006) and the noisy nature of the data34. Second, clogging could have begun after the October 2006 gas explosion (~9–11 months prior to the phreatic eruption) due to gravitational compaction and restructuring of pore networks. In such a case, the small gas explosion was probably triggered by other processes not related to vent clogging (e.g., infiltration of meteoric water and subsequent expansion of steam). On the other hand, our model also suggests that Ruapehu has not been sensitive to lunar phases since the end of 2007 because the porous vent has been building back and its permeability has remained above threshold levels after the phreatic eruption. Hence, the unrest episodes detected during the last 10 years were probably not related to pore clogging of the vent but rather to magma migrations at depth, tectonic processes, or shallower hydrothermal phenomena.
The response of the seismicity recorded around active craters to repeated and predictable tidal excitations has the potential to reveal the state of criticality of the shallowest part of a volcano. In particular, our study reveals that: (a) the persistent seismicity of Ruapehu volcano was not modulated by fortnightly tides over the last 13 years, except for the 9 to 15 months preceding the unpredicted 2007 phreatic eruption; and (b) analysing the correlation between lunar cycles and seismic amplitude offers exciting perspectives to detect the sustained clogging of gas pathways, and thus to better forecast phreatic eruptions at volcanoes whose shallow magma plumbing system is subjected to tidal deformation. Future work should explore whether other volcanoes besides Ruapehu are sensitive to lunar cycles prior to eruptions, although it is worth highlighting that tidal deformation may be more prominent in New Zealand volcanoes than in other volcanoes of the world. This is so because ocean-tide loading is larger in New Zealand than in most other places, and its location in mid-latitudes implies that Earth tides are also strong41,42. The only requirement to apply our statistical approach to other volcanoes is to have mostly continuous and long-lasting (several years) seismic data recorded near volcano craters. Our interpretation of the luni-seismic correlation is applicable to any volcano with prominent outgassing during quiescence, and thus when tremor is controlled by the permeable flow of gases through the shallowest part of the volcanic edifice35,36.
Data processing consists of the following steps:
Seismic data are obtained from a permanent short-period seismic station installed on top of Ruapehu volcano (~700 m from the crater lake). In particular, we use the vertical component waveform over a period spanning from February 22, 2004, to November 15, 2016 (downloaded from the open access Geonet-GNS archive, ref.32). This time period was selected because the same type of seismometer was consistently used; we note that the seismometer was destroyed during the 2007 eruption, although it was replaced a few days later by the same type of sensor.
The daily median seismic amplitude y sam (t) is computed through the following standard procedure: we read files containing velocity data over a duration of 1 day, subtract the mean of the 1-day time series, apply a high-pass filter (0.1 Hz, butterworth, 4 corners), integrate to obtain the displacement waveform, apply the same high-pass filter (0.1 Hz) followed by a low-pass filter (40 Hz, butterworth, 4 corners), take the absolute value of the displacement waveform, calculate the median of the absolute values in every window of 90 s (our results are essentially the same when using window durations ranging between 30 s and 1800 s; Supplementary Fig. S8), and export the median of all the windows contained in each day (960 windows when using window durations of 90 s). This procedure, which is repeated for every day of the 12-year period, allows capturing the persistent seismicity of Ruapehu (i.e., tremor37) by minimizing the potential contamination due to meteorological perturbations, tectonic earthquakes, or other unwanted non-volcanic effects. For our analysis, we finally calculate the logarithm of the daily median seismic amplitude ln(y sam ).
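A minimal Python sketch of this processing chain follows (the sampling rate, zero-phase filtering, and function name are assumptions for illustration; the filter corners, orders, and 90-s windows come from the text above):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def daily_median_amplitude(vel, fs, win_s=90.0):
    """Reduce a 1-day velocity trace to a single median amplitude.
    Steps: demean, 0.1 Hz high-pass, integrate to displacement,
    0.1 Hz high-pass + 40 Hz low-pass, median of |disp| per 90-s window,
    then the median over all windows."""
    vel = np.asarray(vel, dtype=float)
    vel = vel - vel.mean()                          # subtract the mean
    hp = butter(4, 0.1, btype="highpass", fs=fs, output="sos")
    vel = sosfiltfilt(hp, vel)                      # high-pass the velocity
    disp = np.cumsum(vel) / fs                      # integrate to displacement
    disp = sosfiltfilt(hp, disp)                    # high-pass again
    lp = butter(4, 40.0, btype="lowpass", fs=fs, output="sos")
    disp = sosfiltfilt(lp, disp)                    # then low-pass at 40 Hz
    n = int(win_s * fs)                             # samples per window
    wins = np.abs(disp[: (len(disp) // n) * n]).reshape(-1, n)
    return np.median(np.median(wins, axis=1))       # median of window medians
```

Repeating this per day and taking the logarithm yields the ln(y sam ) series used throughout the analysis.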
Gaps (e.g., after the phreatic eruption of 2007, which destroyed the seismic station) and spikes (probably due to storms or electronic problems), which only represent ~5% of the ln(y sam ) time series, are replaced by random values obtained from a normal distribution with the same mean and standard deviation as the data. The replacement of gaps and spikes allows us to apply a 20-day high-pass median filter to remove variations of the seismic amplitude occurring in timescales larger than the average periodicity of high tides (Supplementary Fig. S1).
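The gap/spike replacement and the 20-day high-pass median filter might be implemented as below (the 21-day kernel — `medfilt` requires an odd window — and the fixed RNG seed are assumptions):

```python
import numpy as np
from scipy.signal import medfilt

def fill_and_highpass(ln_amp, window_days=21, seed=0):
    """Replace NaN-marked gaps/spikes with noise drawn from the data's own
    mean and standard deviation, then subtract a ~20-day running median so
    only fluctuations shorter than the spring-tide period remain."""
    x = np.asarray(ln_amp, dtype=float).copy()
    bad = ~np.isfinite(x)
    good = x[~bad]
    rng = np.random.default_rng(seed)
    x[bad] = rng.normal(good.mean(), good.std(), bad.sum())
    return x - medfilt(x, kernel_size=window_days)   # high-pass median filter
```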
The correlation between lunar cycles and seismic amplitude (defined here as luni-seismic correlation) is analysed as follows:
We create a synthetic periodic time series to describe lunar phases (Supplementary Fig. S2): y lun = −cos(2π(t − t low )/T), where t is an integer representing time in days, t low is a reference day of the calendar with low tide (i.e., quarter moon), and T = 14.7653 days is the average periodicity of high or low tides (i.e., the average time between a full moon and the next new moon). With this approach, y lun = 1 during high tides and y lun = −1 during low tides. We choose February 28, 2004, as our reference day with low tide (t low ). Note that we produce a value of y lun per day; the time of the day at which y lun is generated is the same as the peak of quarter moon on the reference day.
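The synthetic lunar series is straightforward to generate; in this sketch `t_low` defaults to 0 for illustration (in the paper it is February 28, 2004):

```python
import numpy as np

T = 14.7653  # mean time from one spring tide to the next, in days

def y_lun(t, t_low=0.0):
    """Synthetic lunar-phase series: +1 at spring tide (full/new moon),
    -1 at neap tide (quarter moon). t and t_low are in days."""
    return -np.cos(2 * np.pi * (t - t_low) / T)
```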
We calculate the Pearson correlation coefficient (ρ) with different subsets of the time series ln(y sam ) and y lun (Supplementary Fig. S3). These subsets are backward windows of L days, such that the value of ρ corresponding to a given day is calculated with the L data pairs preceding that day. It is worth noting that: I) ρ evaluates how well two datasets are linearly correlated, such that 0 < ρ ≤ 1 implies a positive correlation (i.e., ln(y sam ) tends to increase with y lun ) and −1 ≤ ρ < 0 implies a negative correlation (i.e., ln(y sam ) tends to decrease with y lun ). II) The subsets of ln(y sam ) and y lun are standardized, i.e., the mean of each time series is subtracted and the result is divided by the standard deviation. This standardization process does not affect the value of ρ and makes it match with the slope of the line that best fits ln(y sam ) and y lun (t) (Fig. 2b,c). III) The random values previously added to replace gaps and spikes are not taken into account to calculate ρ. IV) ρ is calculated for backward windows whose number of gaps and spikes is less than 20% of the number of days L. V) Larger values of L allow analyzing sustained long-term correlations while ensuring the applicability of statistical methods34. In our analysis we focus on L = 1 year, although the results are similar for backward window sizes in the range 290–430 days (Supplementary Fig. S6).
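A plain sketch of the backward-window correlation (the window size and the 20% missing-data threshold follow points IV–V above; NaN marks the days excluded per point III):

```python
import numpy as np

def rolling_pearson(x, y, L=365, max_missing=0.2):
    """Backward-window Pearson coefficient: rho[i] correlates the L pairs
    ending at day i. NaNs (former gaps/spikes) are excluded, and a window
    with more than max_missing*L missing days yields NaN."""
    rho = np.full(len(x), np.nan)
    for i in range(L - 1, len(x)):
        xs, ys = x[i - L + 1 : i + 1], y[i - L + 1 : i + 1]
        ok = np.isfinite(xs)
        if (~ok).sum() > max_missing * L:
            continue                                  # too many gaps/spikes
        rho[i] = np.corrcoef(xs[ok], ys[ok])[0, 1]
    return rho
```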
We test whether ρ obtained for every moving window is significantly different from zero. To do this, we perform a significance test consisting of calculating the probability (p value ) of obtaining the observed correlation coefficient (ρ) if the data pairs of a given window are not correlated (null hypothesis); the probability p value is calculated with a two-tailed test (MATLAB algorithm). If p value ≤ P, where P is a given threshold value, the correlation coefficient ρ is said to be statistically significant at a given confidence level. For example, if p value ≤ 0.006% for a given window, the correlation between ln(y sam ) and y lun in that window is different from zero at a confidence level of 4-sigma. In other words, the probability that the seismic amplitude and lunar cycles can be correlated by chance in that window is lower than 0.006%. Here, we use 4-sigma as confidence level to reject or not the null hypothesis, a much stricter condition than has been considered so far in other studies focusing on the influence of moon cycles on volcanic activity (usually 2-sigma or lower3,20,25).
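The two-tailed test can be reproduced with the standard t-statistic for Pearson's coefficient (this is, to our understanding, the test behind MATLAB's corrcoef p-values; SciPy is used here):

```python
import numpy as np
from scipy import stats

def pearson_pvalue(rho, n):
    """Two-tailed p-value for a Pearson coefficient rho computed from n
    pairs, under the null hypothesis of no correlation: the statistic
    t = rho*sqrt((n-2)/(1-rho^2)) follows a t-distribution with n-2 dof."""
    t = rho * np.sqrt((n - 2) / (1.0 - rho**2))
    return 2.0 * stats.t.sf(abs(t), df=n - 2)
```

For example, |ρ| = 0.26 over a 1-year window (n ≈ 365) gives p well below the 0.006% (4-sigma) threshold used in the paper.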
We test the extent to which a stochastic seismic amplitude time series, which is known to be unrelated to lunar cycles, may give rise to the significant correlation coefficients (ρ) observed in the natural data. In other words, we quantify whether the values and duration of significant correlation obtained (with confidence 4-sigma) can be produced with a random seismic amplitude time series. This is done by following the next steps: I) we compute the number of days N that satisfies |ρ| ≥ |ρ sig | with the natural data, where |ρ sig | is the minimum correlation coefficient (in absolute value) from which the level of confidence is 4-sigma. II) We build a random seismic amplitude time series with the same mean and standard deviation as the natural data ln(y sam ). III) We calculate the Pearson correlation coefficient between this random time series and the moon phase time series y lun (t), exactly as we did for the natural data (i.e., for moving windows of size L and assuming the same gaps and spikes). IV) We repeat the previous step 500 times (Monte Carlo simulations) to obtain the probability density function of correlation coefficients. For example, for L = 1 year, we obtain a Gaussian probability density function with mean equal to 0 and standard deviation equal to 0.0659 (Supplementary Fig. S4). V) Using the aforementioned probability density function, we calculate the probability of obtaining by chance one day with a correlation coefficient that is equal to or larger than the minimum significant correlation obtained with the data (in absolute value; |ρ sig |). VI) We use the previous result to perform a binomial test; this test allows us to calculate the probability of obtaining, in the more than 12 years of data, N days (or more) with correlation coefficient satisfying |ρ| ≥ |ρ sig |.
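A compact version of this Monte Carlo plus binomial procedure is sketched below. Non-overlapping windows are used for speed, which is an assumption that differs slightly from the paper's daily rolling windows, as are the function name and default arguments:

```python
import numpy as np
from scipy import stats

def sustained_correlation_pvalue(n_days, n_obs, rho_sig,
                                 L=365, n_sims=500, seed=0):
    """Probability that a random amplitude series produces n_obs or more
    days with |rho| >= rho_sig against the lunar series over n_days."""
    t = np.arange(n_days)
    y_moon = -np.cos(2 * np.pi * t / 14.7653)        # lunar-phase series
    rng = np.random.default_rng(seed)
    rhos = []
    for _ in range(n_sims):                          # Monte Carlo simulations
        x = rng.normal(size=n_days)                  # stochastic amplitudes
        for i in range(L - 1, n_days, L):            # non-overlapping windows
            rhos.append(np.corrcoef(x[i - L + 1 : i + 1],
                                    y_moon[i - L + 1 : i + 1])[0, 1])
    sigma = np.std(rhos)                             # null spread of rho
    p_day = 2.0 * stats.norm.sf(rho_sig, scale=sigma)  # P(|rho| >= rho_sig)
    return stats.binom.sf(n_obs - 1, n_days, p_day)    # P(>= n_obs such days)
```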
We propose a mechanical model for tremor based on Girona et al.35,36. We use this forward model to generate synthetic seismic datasets and study the factors that can cause a change in the correlation between lunar cycles and the observed seismicity.
Mechanical model of shallow tremor
We propose that shallow tremor at Ruapehu arises from pressure oscillations ΔP occurring in a shallow gas pocket embedded beneath the volcanic crater (called steam zone by other authors37). These pressure changes ΔP do not result from elastic oscillations of the shallow conduit, but emerge in response to two concurrent processes: the permeable flow of gases through the shallow cap and the intermittent supply of volatiles from deeper levels. To first order, ΔP beneath volcanic craters is given, in the frequency domain, by35,36:
where R g is the ideal gas constant, T g is the gas temperature, R is the radius of the uppermost part of the volcanic conduit, D is the thickness of the gas pocket embedded beneath the crater, M is the molecular weight of water (H2O is the main component of volcanic gas emissions), Q0 is the mean outgassing flux, δ(ω) is the Dirac delta function (and thus δ(ω) = 0 for ω > 0), t s is the seepage time (i.e., the time required for the gas to pass through the permeable cap), q n is the mass of gas introduced into the gas pocket at the instant t n (e.g., through bubble bursting at the top of a fluid-like magma column), N* is the total number of mass impulses (e.g., number of bubbles that burst at the top of the magma column) occurring in the gas pocket during the simulation time, ω is the angular frequency, and j is the imaginary unit. The parameters Γ1 and Γ are defined as Γ1 = 2μR g T g φ/[(P0 + P ex )Mκ] and Γ = Γ1 + 2R g T g Q0/[(P0 + P ex )πR2DM], where μ is the gas viscosity, P0 is the mean pressure in the gas pocket, P ex is the pressure at the exit vent (i.e., hydrostatic pressure at the bottom of the crater lake), L c is the cap thickness, and φ and κ are the connected porosity and permeability of the cap, respectively. In turn, the mean pressure in the gas pocket (P0) and the seepage time (t s ) can be calculated from:
where τ is the tortuosity (i.e., ratio of the actual path length for the gas to escape through the cap to the cap thickness) and the other parameters have been previously defined. The parameters φ and κ are assumed to be related through the following empirical function:
By convolving equation (1) with the Green’s function describing the propagation of Rayleigh waves along the path to the receiver, we can compute the vertical ground displacement that would be recorded at nearby stations. The vertical component of the ground displacement u z is given, in the frequency domain, by35,36:
where v c is the phase velocity (we use v c = 1295(ω/2π)−0.374 m/s), r is the distance from the source to the receiver, v u is the group velocity (we use v u = 0.73v c ), Q f is the dimensionless quality factor (a parameter that accounts for the attenuation of seismic waves), and ρ s is the density of the medium. The parameters v c , v u , Q f , and ρ s are related to the propagation of the seismic waves through the crust and not to the seismic source. The ground displacement described by equation (5) was shown to explain the main features of shallow volcanic seismicity, particularly monochromatic tremor as typically recorded around Ruapehu (Supplementary Fig. S9).
Effect of tidal stresses on the parameters of the model
The continuous compressions-extensions induced by tidal stresses in the shallow crust are thought to squeeze magma reservoirs1,2,5,6,31. This, in turn, is expected to induce harmonic ascent and retreat of magma in shallow plumbing systems (as sometimes observed in Kilauea1,2,5,6 and Villarrica lava lakes31), thus changing the thickness of the gas pocket (D) with time. The effect of tidal stresses on the gas pocket thickness is parameterized with a sum of harmonic time series:
where D0 is the mean gas pocket thickness, the parameters T i and D i represent the period and amplitude, respectively, of each tidal constituent i, and δ is a phase shift that accounts for the response time of the volcano to tidal stresses. In particular, δ is the phase of the daily oscillation of the gas pocket thickness at a specific time t and moon phase, which depends on how the combination of Earth tides and ocean-tide loading in New Zealand deforms the magma plumbing system of Ruapehu (see Supplementary Discussion). For simplicity, we consider only the tidal constituents responsible for generating fortnightly tidal modulation, i.e., the principal lunar semidiurnal (M2), with periodicity ~12.42 hours; and the principal solar semidiurnal (S2), with periodicity 12.00 hours. We also assume tidal strains to be small, with constituent amplitudes D i on the order of a centimetre. With these values, the gas pocket thickness D(t) reproduces the classic fortnightly spring/neap cycle (Fig. 3b), with maximum amplitude of the fluctuations on the order of 0.01 m. This amplitude is considered a minimum end-member; tidally-induced magma level oscillations of up to 30–60 cm were sometimes observed at Halemaumau lava lake1,2,5,6. Finally, we explore values of the phase shift in the range δ = 0 to 2π. The simple approach used in this study allows us to analyse how small harmonic variations of the gas pocket thickness due to tidal stresses affect the amplitude of the synthetic seismic time series. It is worth noting that, as revealed by equation (6), we also expect semi-diurnal modulation of the seismic amplitude (Supplementary Fig. S10); this has been observed at Fogo volcano, Cape Verde Republic42, and can sometimes be detected by simple eye inspection at Ruapehu (Supplementary Fig. S11).
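To see how the two semidiurnal constituents produce the fortnightly spring/neap cycle used throughout the paper, one can evaluate the thickness modulation numerically (the amplitudes and phase shift below are illustrative assumptions; the constituent periods are the standard M2 and S2 values):

```python
import numpy as np

# Gas-pocket thickness modulated by the M2 and S2 tidal constituents.
T_M2, T_S2 = 12.4206, 12.0000              # constituent periods in hours
D0, amp, delta = 0.1, 0.005, 0.0           # mean thickness (m), cm-scale amps
t = np.arange(0.0, 60 * 24, 0.25)          # 60 days sampled every 15 minutes
D = D0 + amp * (np.sin(2 * np.pi * t / T_M2 + delta)
                + np.sin(2 * np.pi * t / T_S2 + delta))

# The two constituents beat with period T_M2*T_S2/(T_M2 - T_S2) hours,
# i.e. the ~14.77-day fortnightly spring/neap modulation.
beat_days = T_M2 * T_S2 / (T_M2 - T_S2) / 24.0
```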
Synthetic data processing
To analyse the correlation between lunar cycles and the logarithm of the modelled daily seismic amplitude, we proceed as follows:
We assign the following realistic values to the parameters of the model: external pressure P ex equal to the atmospheric pressure plus the hydrostatic pressure of a 200 m water lake (consistent with historical lake depths at Ruapehu43), cap thickness L c = 10 m (this implies that tremor is sourced beneath the crater lake at a depth consistent with previous studies44), radius of the shallow magma conduit and cap R = 10 m, mean thickness of the gas pocket D0 = 0.1 m, mean outgassing flux Q0 = 50 kg/s, volcanic gas temperature T g = 900 °C, gas viscosity μ = 10−5 Pa s, molecular weight of gas (mostly water vapour) M = 0.018 kg/mol, cap porosity φ = φ0(κ/κ0)1/α, with κ0 = 10−8 m2 and α = 3 (realistic for highly fractured caps and permeable flow dominated by open cracks and channels35,36), and tortuosity τ ≈ 1. Besides, we impose the random supply of N* = 1,000 gas bubbles in 90 s of simulation, and we use distance source-receiver r = 700 m, density of the medium ρ s = 3,000 kg/m3, frequency-dependent phase velocity v c = 1295(ω/2π)−0.374 m/s, group velocity v u = 0.73v c , and dimensionless quality factor Q f = 5. These values of the parameters generate mean pressures in the gas pocket P0 on the order of 106 Pa (slightly greater than the external pressure P ex ) and tremor signals with dominant frequency around 2 Hz (Supplementary Fig. S9), as observed on Ruapehu37.
For the given mean gas pocket thickness D0 (and for a given value for the phase shift δ), we calculate D at t = 0 using equation (6). Then, we simulate a 90 s time series of ground displacement u z (t) by calculating the inverse Fourier transform of equation (5) (details on the calculation in ref.35). This mimics the time windows that were used in the analysis of the natural data (see 1. Data Processing). Later, we calculate the seismic amplitude of the 90 s simulation by taking the absolute value of the synthetic displacement waveform and computing the median. Note that the gas pocket thickness D is assumed to remain constant during the 90 s simulation because it varies over longer timescales.
We repeat the aforementioned calculations after recalculating the gas pocket thickness D over steps of 15 minutes for a duration of t = 1 year (the time step is limited to 15 minutes to make the problem more tractable in terms of computer run time). This gives a total of 96 seismic amplitudes per day (instead of the 960 values per day that we have with the data), from which we take the median to obtain the daily seismic amplitude and take its logarithm. We also create a synthetic periodic time series to describe lunar cycles, exactly as we did to analyse the natural data (see 2. Data Analysis).
The procedure described above allows us to generate model data that we can treat statistically to test the conditions that are required to explain an increase of luni-seismic correlation prior to the 2007 phreatic eruption of Ruapehu. In particular, we conduct the same statistical analysis as with the natural data, i.e., we calculate the Pearson correlation coefficient ρ between the logarithm of the modelled daily seismic amplitude and the lunar cycles (Supplementary Fig. S12). This is done for 365 data pairs (1 year of synthetic data).
Finally, we repeat steps b–d with different values of the cap permeability κ and porosity φ (related through equation (4)) to explore how overall cap sealing (e.g., due to pore mineralization or subsidence of the crater floor after the 2006 gas explosion) affects the luni-seismic correlation. This analysis reveals that the luni-seismic correlation is significant when the permeability of the cap κ is below a threshold value; and it is negative as long as the phase shift δ meets π/2 < δ < 3π/2 and hence if the response time of the volcano to tidal stresses ranges between ~3 h and ~9 h (see Supplementary Discussion and Supplementary Fig. S13).
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The authors thank GNS Science for the availability of their data (raw data are openly available in the Geonet-GNS archive [http://magma.geonet.org.nz/resources/network/netmap.html, DRZ station]). The authors also thank Greg Steenbeeke for allowing us to use his photo of Ruapehu (Fig. 1a). T. Girona would like to thank F. Schwandner and E. Llewellin for discussions and comments on the topic of this paper. T. Girona and C. Huber were supported by a National Science Foundation CAREER Grant (1454821). T. Girona is currently supported by an appointment to the NASA Postdoctoral Program at the NASA Jet Propulsion Laboratory, administered by Universities Space Research Association under contract with NASA. C. Caudron is supported by an FNRS postdoctoral grant.
Equations are true if both sides are the same. Properties of equations illustrate different concepts that keep both sides of an equation the same, whether you're adding, subtracting, multiplying or dividing. In algebra, letters stand for numbers that you don't know, and properties are written in letters to prove that whatever numbers you plug into them, they will always work out to be true.
Associative and Commutative
Associative and commutative properties both have formulas for addition and multiplication. The commutative property of addition says that if you add two numbers, it doesn't matter what order you put them in. For example, 4 + 5 is the same as 5 + 4. The formula is: a + b = b + a. Any numbers you plug in for a and b will still make the property true.
The commutative property of multiplication formula reads a x b = b x a. This means that when multiplying two numbers, it doesn't matter what number you type in first. You will still get 10 if you multiply 2 x 5 or 5 x 2.
The associative property of addition says that if you group two numbers and add them, and then add a third number, it doesn't matter what grouping you use. In formula form, it looks like (a + b) + c = a + (b + c). For example, if (2 + 3) + 4 = 9, then 2 + (3 + 4) will still be 9.
Similarly, if you multiply two numbers and then multiply that product by a third number, it doesn't matter which two numbers you multiply first. In formula form, the associative property of multiplication looks like (a x b) x c = a x (b x c). For example, (2 x 3) x 4 simplifies to 6 x 4, then 24. If you group it as 2 x (3 x 4) instead, you will have 2 x 12, and this will also give you 24.
Transitive and Distributive
The transitive property says that if a = b and b = c, then a = c. This property is used often in algebraic substitution. For example, if 4x - 2 = y, and y = 3x + 4, then 4x - 2 = 3x + 4. If you know that these two values are equal to each other, you can solve for x. Once you know x, you can solve for y if necessary.
The *distributive property* allows you to get rid of parentheses when there is a term outside them, like 2(x - 4). Parentheses in math indicate multiplication, and to distribute something means to pass it out. So, to use the distributive property to eliminate parentheses, multiply the term outside of them by every term inside them. Here, you would multiply 2 and x to get 2x, and you would multiply 2 and -4 to get -8. Simplified, this looks like: 2(x - 4) = 2x - 8. The formula for the distributive property is a(b + c) = ab + ac.
You can also use the distributive property to pull out a common factor from an expression. This formula is ab + ac = a(b + c). For example, in the expression 3x + 9, both terms are divisible by 3. Pull the factor to the outside of the parentheses and leave the rest inside: 3(x + 3).
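As a quick numeric sanity check (our illustration, not part of the original article), you can confirm both directions of the distributive property with a few sample values:

```javascript
// Check that a(b + c) === ab + ac for sample numbers.
// The property holds for any numbers; these values are arbitrary.
function distributiveHolds(a, b, c) {
  return a * (b + c) === a * b + a * c;
}

// 2(x - 4) = 2x - 8, checked at x = 10: both forms give 12.
const x = 10;
const expanded = 2 * x - 8;
const withParens = 2 * (x - 4);

// Factoring 3x + 9 as 3(x + 3), checked at x = 5: both forms give 24.
const factored = 3 * (5 + 3);
const unfactored = 3 * 5 + 9;
```

Any values you substitute give the same result on both sides, which is exactly what the formulas claim.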
Properties of Negative Numbers
The additive inverse property says that if you add one number with its inverse, or negative version, you will get zero. For example, -5 + 5 = 0. In a real world example, if you owe someone $5, and then you receive $5, you still won't have any money because you have to give that $5 to pay the debt. The formula is a + (−a) = 0 = (−a) + a.
The multiplicative inverse property says that if you multiply a number by a fraction with a one in the numerator and that number in the denominator, you will get one: a(1/a) = 1. If you multiply 2 by 1/2, you will get 2/2. Any number over itself is always 1.
Properties of negation dictate multiplication of negative numbers. If you multiply a negative and a positive number, your answer will be negative: (-a)(b) = -ab, and -(ab) = -ab.
If you multiply two negative numbers, your answer will be positive: -(-a) = a, and (-a)(-b) = ab.
If you have a negative sign outside of parentheses, that negative is attached to an invisible 1. That -1 is distributed to every term inside the parentheses. The formula is -(a + b) = -a + -b. For example, -(x - 3) would be -x + 3, because multiplying -1 and -3 gives you 3.
Properties of Zero
The identity property of addition states that if you add any number and zero, you will get the original number: a + 0 = a. For example, 4 + 0 = 4.
The multiplicative property of zero states that when you multiply any number by zero, you will always get zero: a(0) = 0. For example, (4)(0) = 0.
Using the zero product property, you can know for sure that if the product of two numbers is zero, then one of the factors is zero. The formula states that if ab = 0, then a = 0 or b = 0.
Properties of Equalities
Properties of equalities state that whatever you do to one side of an equation, you must do to the other. The addition property of equality states that if you add a number to one side, you must add it to the other. For example, if 5 + 2 = 3 + 4, then 5 + 2 + 3 = 3 + 4 + 3.
The subtraction property of equality states that if you subtract a number from one side, you must subtract it from the other. For example, if x + 2 = 2x - 3, then x + 2 - 1 = 2x - 3 - 1. This would give you x + 1 = 2x - 4, and x would equal 5 in both equations.
The multiplication property of equality states that if you multiply one side by a number, you must multiply the other side by the same number. This property allows you to solve division equations. For example, if x/4 = 2, multiply both sides by 4 to get x = 8.
The division property of equality allows you to solve multiplication equations, because whatever you divide one side by, you must also divide the other side by. For example, divide both sides of 2x = 8 by 2, yielding x = 4.
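All four equality properties boil down to applying the same operation to both sides; a small sketch (the helper name is ours, not from the article) makes that concrete:

```javascript
// An "equation" here is just a pair [leftValue, rightValue].
// Each property of equality applies one operation to both sides,
// which preserves the equality.
const applyToBothSides = ([left, right], op) => [op(left), op(right)];

// x/4 = 2: multiply both sides by 4 (multiplication property) -> x = 8.
// With x = 8, the left side starts as 8/4 = 2.
let eq1 = applyToBothSides([8 / 4, 2], (side) => side * 4);

// 2x = 8: divide both sides by 2 (division property) -> x = 4.
let eq2 = applyToBothSides([2 * 4, 8], (side) => side / 2);
```

In both cases the two sides stay equal after the operation, which is why the solved value of x satisfies the original equation.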
Captured! Radio telescope records a rare ‘glitch’ in a pulsar’s regular pulsing beat
Pulsars are rapidly rotating neutron stars and sometimes they abruptly increase their rotation rate. This sudden change of spin rate is called a “glitch” and I was part of a team that recorded one happening in the Vela Pulsar, with the results published today in Nature.
Approximately 5-6% of pulsars are known to glitch. The Vela pulsar is perhaps the most famous – a very southern object that spins about 11.2 times per second and was discovered by scientists in Australia in 1968.
It is 1,000 light-years away, its supernova occurred about 11,000 years ago and roughly once every three years this pulsar suddenly speeds up in rotation.
These glitches are unpredictable, and one has never been observed with a radio telescope large enough to see individual pulses.
To understand what the glitch may be, first we need to understand what makes a pulsar.
At the end of a typical star’s life, one of three things can happen.
A small star, similar to the size of our Sun, will just quietly expire like a fire going out.
If the star is sufficiently large, a supernova will occur. After this massive explosion the remains will collapse. If the object is sufficiently large then its escape velocity will be greater than the speed of light, and a black hole will be formed.
If the collapsed remnant is not massive enough to form a black hole, a neutron star is created instead. The gravity is so strong that the electrons orbiting the atoms are forced into the nucleus. They combine with protons in the nucleus to form neutrons.
These objects are estimated to have a mass of about 1.4 times the mass of our Sun, and a diameter of 20km. The density is such that a cupful of this material would weigh as much as Mt Everest.
They also rotate quite quickly (and very gradually slow down over time) as well as having a massive magnetic field, three trillion times that of the Earth. Electromagnetic radiation emits from both ends of this huge rotating magnet.
Now if one of the poles of this rotating magnet happens to sweep past Earth, we see a brief “flash” in radio waves (and other frequencies too) once every rotation. This is called a pulsar.
The hunt for a ‘glitch’
In 2014 I started a serious observing campaign with the University of Tasmania’s 26m radio telescope, at the Mount Pleasant Observatory, with a goal to catch the Vela Pulsar’s glitch live in action.
I collected data at the rate of 640MB for each 10 second file, for 19 hours a day, for most days over nearly four years. This resulted in over 3PB of data (1 petabyte is a million gigabytes) that was collected, processed and analysed.
On December 12, 2016, at approximately 9:36pm, my phone went off with a text message telling me that Vela had glitched. The automated process I had set up wasn't completely reliable – radio frequency interference (RFI) had been known to set it off in error.
So sceptically I logged in, and ran the test again. It was genuine! The excitement was incredible and I stayed up all night analysing the data.
What surfaced was quite surprising and not what was expected. Right as the glitch occurred, the pulsar missed a beat. It didn’t pulse.
The pulse before this “null” was broad and weird. Nothing like I’d ever seen or heard of before.
The two pulses following turned out to have no linear polarisation which was also unheard of for Vela. This meant the glitch had affected the strong magnet that drives the emission that comes from the pulsar.
Following the null, a train of 21 pulses arrived early and the variance in their timings was a lot smaller than normal – also very weird.
The glitch explained, sort of
So what causes glitches? The hypothesis that is best supported is that the neutron star has a hard crust and a superfluid core. The outer crust is what slows down, while the superfluid core rotates separately and does not slow down.
This is a very simplified explanation. What really happens is quite complex and involves microscopic superfluid vortices unpinning from the crust’s lattice.
After about three years the difference in rotation between the core and crust gets too great and the core “grips” the crust and speeds it up. The data seems to show that it took about five seconds for this speed-up to occur. This is on the faster end of the scale that the theorists had predicted.
All this and other information could help us understand what is called the “equation of state” – how matter behaves at different temperatures and pressures – in a laboratory that we simply cannot create here on Earth.
It also gives us, for the first time, a glimpse into the inner workings of a neutron star.
The emulator executes the bitcode generated by the assembler.

The emulator works in ticks. Each tick() executes one instruction, which is fetched from the ROM. Ticks are executed by the tick wrapper, which is invoked by a simple setTimeout. Because there is a limit on how frequently setTimeout can fire, a number of ticks is executed in each wrapper call. Each tick performs the following steps:

- check if the emulator is running; if not, exit
- check if the number of instructions to be executed has been executed; if yes, exit
- check if the emulator should be waiting (DELAY executed); if yes, set a timeout of the desired time and exit. This is skipped if the emulator is stepping.
- check for interrupts
- fetch the next instruction from the ROM
- check if the instruction is cached; if not, disassemble and cache it
- fetch the instruction from the cache
- load the needed values for its parameters
- call the exec()-function of the instruction
- check and set the zero flag, if specified in the instruction file
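The tick steps above can be sketched roughly as follows; all names (emu, TICKS_PER_CALL, the emulator fields) are illustrative placeholders, not the actual API:

```javascript
// Rough sketch of the tick loop described above. Because setTimeout
// cannot fire once per instruction, each wrapper call runs a batch of ticks.
const TICKS_PER_CALL = 1000; // illustrative batch size

function tickWrapper(emu) {
  for (let i = 0; i < TICKS_PER_CALL; i++) {
    if (!emu.running) return;                          // emulator stopped
    if (emu.executed >= emu.instructionLimit) return;  // instruction limit hit
    if (emu.waiting && !emu.stepping) {                // DELAY was executed
      setTimeout(() => tickWrapper(emu), emu.delayMs);
      return;
    }
    emu.checkInterrupts();
    const word = emu.rom[emu.pc];            // fetch from ROM
    let instr = emu.cache.get(word);         // disassemble + cache on miss
    if (!instr) {
      instr = emu.disassemble(word);
      emu.cache.set(word, instr);
    }
    const params = emu.loadParams(instr);    // load parameter values
    instr.exec(emu, params);                 // execute the instruction
    if (instr.setsZeroFlag) emu.updateZeroFlag();
    emu.executed++;
  }
  setTimeout(() => tickWrapper(emu), 0);     // schedule the next batch
}
```

Batching the ticks inside one wrapper call is what lets the emulator run faster than the minimum setTimeout interval would otherwise allow.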
The timeout which is set by setTickWrapperTimeout() can be interrupted by a number of things, such as a CONTINUE-message (to allow faster stepping). In addition, the timeout will be longer when the delay flag (set by the DELAY-instruction) is active.
The emulator pins and their state are handled by an IOBank instance; please refer to its documentation for more information. The pin-related functions of the emulator API, such as getIO, are essentially wrapper functions for the actual pin objects.
The stack is located in the RAM. sp gets incremented before a byte is pushed (pre-increment), so sp always points to the last byte. Words are pushed with the least significant byte first.
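A minimal sketch of that push behaviour (the state shape and names are illustrative, not the emulator's real API):

```javascript
// sp is pre-incremented, so after a push it points at the last byte.
function pushByte(state, byte) {
  state.sp += 1;
  state.ram[state.sp] = byte & 0xff;
}

// Words are pushed least significant byte first.
function pushWord(state, word) {
  pushByte(state, word & 0xff);          // LSB ends up at the lower address
  pushByte(state, (word >> 8) & 0xff);   // MSB pushed second; sp points here
}
```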
An example can be found in the tutorial.
Data structures which are used inside the emulator and its messages and broadcasts.
Details about the Communicator-Implementation for the emulator, including the messages and broadcasts.
Nanoluminous pearls developed by scientists at UMMS
Breakthrough holds potential for developing new bioimaging technologies
Date Posted: Monday, August 10, 2015
A microscopic crystal created by researchers at UMass Medical School brings the potential use of persistent luminescence nanoparticles for bioimaging one step closer to both the research laboratory and the clinic. The successful development of these long-lasting, light-emitting nanocrystals would provide much improved, noninvasive imaging technology for evaluating structural and functional biological processes in living animals and patients.
Compared to existing in vivo optical imaging probes, these new nanoparticles possess an outstanding signal-to-noise-ratio with no need for an excitation resource (light) during imaging and they can be directly detected with existing imaging systems. Published in the Journal of the American Chemical Society, the study was selected as an editor’s choice and a spotlight article.
“Ultra-small luminescent phosphors are playing an increasingly vital role in medicine and science,” said Gang Han, PhD, assistant professor of biochemistry & molecular pharmacology and principal investigator of the study. “Our straightforward method for producing these tiny near-infrared persistent luminescence nanoparticles, coupled with their superior performance and luminescence renewability, is groundbreaking. It opens up opportunities for developing a new generation of technologies to use in medical imaging diagnosis and therapy, as well as other applications in photonics and biophotonics.”
Persistent luminescence is the term given to materials that continue to emit light for minutes or hours, and in some cases days, after the excitation source is turned off. These materials have been used by humans for 1,000 years and are common today in traffic signs, emergency signage, watches and clocks, luminous paints, electronic displays and textile printing.
Biomedical researchers are striving to develop similar persistent luminescence nanoparticles that are safe to inject into live biological tissues for imaging. A persistent luminescence material that can last for hours or days combined with current imaging technologies such as magnetic resonance imaging (MRI), micro-computed tomography (micro CT), positron emission tomography (PET), optical coherence tomography (OCT), electron tomography (ET), ultrasonic imaging and X-ray imaging, could greatly improve diagnostic capabilities available to both biomedical researchers and physicians.
Current methods for producing these light-emitting particles are complex. They require synthesis with extremely high temperature annealing (>1,000°C) and a complicated physical process to transform large, bulk crystals into nanoparticles. This often creates heterogeneous particles that quickly agglomerate in solution and are too large for use in biological tissues, as their size could potentially disrupt cellular systems and cause harm.
Using a new production methodology, Dr. Han and colleagues have overcome this key developmental roadblock. The resulting nanocrystals were dubbed “luminous pearls” by Han after the legendary Chinese tale of the seven fairies, which used luminous pearls to store the daytime sunlight, and then released it “to weave the rose clouds of the dawn.”
“This image is particularly apt, because the methods described in the paper produce near-infrared luminescence nanoparticles that in effect have renewable luminescence,” said Han.
The new aqueous production method described in the research paper uses a convenient chemical approach to generate ultra small and uniform nanoparticles the size of a protein. Han and colleagues used hydrothermal synthesis, making use of the chemical reactions of substances in a sealed heated aqueous solution to produce zinc-gallium-chromium nanoparticles. During this process, they found that the molar ratio of zinc to gallium was the secret to producing uniformly ultrasmall, near-infrared persistent luminescence nanoparticles.
Measuring the performance of their “luminous pearls,” the researchers found that they provide vivid images through deep tissue of a live mouse after only a brief LED light irradiation prior to injection. This signal gradually decreased after 30 minutes and it could also be reactivated repeatedly at any desired time. The hydrothermally produced nanoparticles also showed good stability, which is necessary for biomedical applications. They remained viable for one month after insertion.
“It’s likely these hydrothermally produced nanocrystals are smaller but brighter than similar materials because they have fewer defects, with more regular shapes and complete crystal facets. They can also be repeatedly recharged within deep tissues and have the potential to be directly adapted for use in commercially available imaging systems,” said Han.
The article describes features of the syntax of the SQL data definition language in modern DBMSs, such as Access, Microsoft SQL Server, Oracle and DB2, in terms of the portability of scripts that create the main database objects when migrating to another platform. The main points to be taken into account when implementing such a practical task are discussed.
SQL language, data type, referential integrity, surrogate key
"Osoblyvosti vykorystannia movy vyznachennia danykh SQL u suchasnykh SKBD" ,
Information Processing Systems,
In this video, Emmanuel Henri adds the four endpoints needed to get your API started. He adds the GET, POST, PUT and DELETE calls to the server.
- [Instructor] Any application, back end or front end needs routes in order to be able to call a URL and get something back in a web application. You can call a route and go to a specific page or you can also use routes to define your endpoints in an application. We'll start working on end points. And we'll also start using postman to test our end points are working properly. So let's get to it. So the first thing I want you to do is go inside of our routes and we're already inside of it if you don't have it open, it's inside of the routes folder, inside of the source folder and it's called crmRoutes.
And what we'll do is create all the routes that we need. So in order to create a route, we'll create a function called routes. Like so. And we'll pass app, inside of our routes function. And the reason why we're doing that is because we're going to use the routes function inside of our index in order to pass the endpoints that we'll just create. Simple as that, okay. So we'll create a few routes. The first one and we're leveraging the variable that we just created.
So app first one will be for the specific address or URL contact. Like so. And inside of that URL we'll get a specific get call. So we'll do a dot notation like that. So just for your information, this is the same as if I did this, but because we're going to create multiple calls for that specific route, we'll just put them on the second line, like so, and create another one if we need another one.
So let's go ahead and do get call, which will take a request and a response as the variable, and then do something specific with that. So what are we going to do with that specific call and let's just minimize the amount of space that we have in between our parentheses. We're going to do a response and we'll send the response GET request successful!!! So for the time being, what we're going to do is print out a message whatever you call it from, postman or from the web, we're going to simply do a response like this and send that response to whatever devices calling for that get call.
Okay. And then we'll do the same for a post call. And the post is still related to the contact URL. And I'm just moving stuff around. So basically, when we do a contact or get call for the contact URL or a post call, this is what we're going to get as a response from the get and the response that we're going to code in the second will be the one that we're getting when we do the post call here. So let's do the same and let's just copy some of that code so we don't repeat.
So copy from basically both parentheses, like so, and just post it, right after post, like so. And the only thing we're going to change is POST request successfull!!! Like so. Okay. So we need now to do also a second URL. So in our application we're going to make several types of call. So we're going to make a get call which will give us all the contacts or will post a new contact which will use this URL as well.
But we'll also need sometimes to have specific contacts. Or to update a specific contact. Or to even delete a specific contact. So this is what the next route will be used for. And we'll use the same syntax and let's just copy and paste some of that above and when you copy and paste, please be careful to change all the code accordingly because this is where most mistakes happen. When you copy and paste and you don't change something then, your application will be um, impacted by this.
So let's go ahead and do that. And instead of just contact, we'll do a forward slash and then we'll do colon contactId. And for that, we'll get the put and we'll do a delete call. Like so. And, let's go ahead and change. This is a put request and this is a delete request. Let's do a semi colon here and there.
Again, don't forget that this particular function here is related to this and this particular function is related to this and so on so forth. So this one is related to this particular route. So make sure that you put the semi colon at the end of this block and at the end of this block here. Like so. Okay. So the last thing that we need to do is make sure that we export that particular function otherwise we won't be able to use it anywhere else in our application.
So let's go ahead and export default routes. And again this is ES6 syntax. Save that and let's go ahead inside of our index and import our function. So we need to import routes from and we are importing from the source folder inside of the routes folder and then crmRoutes.
Like so. And then all we need to do at this point is to run it. So we are running the function routes and guess what, we're passing app to it. So therefore, once we run this application, we're going to pass express to it and be able to basically have those api calls available. So let's go ahead and save that. We don't have any errors here. So let's go ahead and test it within Postman.
So if you don't have Postman open, go ahead and open it. And all we have to do at this point is go ahead and enter the URL. So in this case it's localhost and as you can see, I've done this before with my account, with various tests here. So the one that we need to do is localhost:3000/contact and do a get call. So let's just go back to our code very quickly to understand what we're doing. So we're calling and let's go inside of crmRoutes.
And what we're doing, we're doing a get call, through the contact URL and the base URL is localhost:3000, which is what the server created for us and we'll do the get call. We should get a response, GET request successfull!!! So let's go ahead and do that. So get and then send and this is how postman works. It's very simple and we'll use different ways of doing get calls, put calls when we create more stuff inside of our application.
But for the time being, this is all we need to do. Send, and then we get GET request successful!!! Okay. So let's go ahead and do a post. Same thing. Send again, we get a POST request successful!!! Let's go ahead and do the same thing for put and let's just enter whatever we want right now, we can enter anything and it's still going to get us the right call, PUT request successful!!! So let's do a delete call as well with a made-up ID, again click send and DELETE request successful!!! Congrats! So now you've got four working end points.
There are very basic responses when you call them, we don't have anything special or data attached to those requests, but it's a start.
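Putting the transcript together, the routes file built above looks roughly like this (a sketch, assuming Express; in the course the file ends with the ES6 export, export default routes):

```javascript
// crmRoutes.js - the four endpoints built above, with placeholder responses.
// (In the course this file ends with: export default routes;)
const routes = (app) => {
  app.route('/contact')
    .get((req, res) => res.send('GET request successful!!!'))
    .post((req, res) => res.send('POST request successful!!!'));

  app.route('/contact/:contactId')
    .put((req, res) => res.send('PUT request successful!!!'))
    .delete((req, res) => res.send('DELETE request successful!!!'));
};
```

In index.js you would then import routes from the crmRoutes file and call routes(app) after creating the Express app, exactly as described in the video.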
- Setting up a project and a server
- Setting up a database and schema
- Creating POST, GET, PUT, and DELETE endpoints
- Serving files
We know that metals are good conductors of electricity as they have a pool of free electrons that flow under the influence of a potential gradient and cause the electric current. But what do we mean by electrolytic conductance? To understand electrolytic conductance, let us first understand electrolytes. Electrolytes are those substances that dissolve in a solvent and dissociate into charged ions; the positive ions are known as cations, and the negative ions are called anions. In the case of metals, the conduction is due to the flow of charge that is electrons. In the electrolytic solution, the charged particles present are the ions and hence an electrolytic solution is capable of conducting electric current. The ability of electrolytic solutions to allow the passage of electric current through them is known as the electrolytic conductance. This ability is rendered by ions that are present in the solution due to dissociation of the electrolyte. The electrolytes can conduct electricity only in the molten or aqueous state and not in any solid form. Some of the typical examples of electrolytes that conduct electricity either in molten form or aqueous state are KNO3, NaCl, KCl etc. There are various parameters such as the concentration of ions and type of electrolyte that affect the conductivity of electrolytes.
Factors affecting electrolytic conductance:
- Concentration of ions
The sole reason for the conductivity of electrolytes is the ions present in them. The conductivity of electrolytes increases with increase in the concentration of ions as there will be more charge carriers if the concentration of ions is more and hence the conductivity of electrolytes will be high.
- Nature of electrolyte
Electrolytic conduction is significantly affected by the nature of the electrolyte. The degree of dissociation of an electrolyte determines the concentration of ions in the solution and hence the conductivity. Substances such as CH3COOH, with a small degree of dissociation, will have fewer ions in solution, and hence their conductivity will also be low; these are called weak electrolytes. Strong electrolytes such as KNO3 have a high degree of dissociation, so their solutions have a high concentration of ions and they are good electrolytic conductors.
- Temperature
Temperature affects the degree to which an electrolyte dissolves in solution. It has been seen that a higher temperature enhances the solubility of electrolytes and hence the concentration of ions, which results in increased electrolytic conduction.
The conductivity of electrolytes is of great importance; its study has been the basis for the development of many devices, such as batteries. Join Byju's to explore the wonders of science and make learning simple, and download Byju's App for easy learning.
Image caption: Schematic image of a pulsar, falling in the gravitational field of the Milky Way. The two arrows indicate the direction of the attractive forces, towards the standard matter—stars, gas, etc. (yellow arrow) and towards the spherical …

Is dark matter a source of a yet unknown force in addition to gravity? The mysterious dark matter is little understood and trying to understand its properties is an important challenge in modern physics and astrophysics. Researchers at the Max Planck Institute for Radio Astronomy in Bonn, Germany, have proposed a new experiment that makes use of super-dense stars to learn more about the interaction of dark matter with standard matter. This experiment already provides some improvement in constraining dark matter properties, but even more progress is promised by explorations in the centre of our Milky Way that are underway. The findings are published in the journal Physical Review Letters.
Around 1600, Galileo Galilei’s experiments brought him to the conclusion that in the gravitational field of the Earth all bodies, independent of their mass and composition feel the same acceleration. Isaac Newton performed pendulum experiments with different materials in order to verify the so-called universality of free fall and reached a precision of 1:1000. More recently, the satellite experiment MICROSCOPE managed to confirm the universality of free fall in the gravitational field of the Earth with a precision of 1:100 trillion.
These kinds of experiments, however, could only test the universality of free fall towards ordinary matter, like the Earth itself, whose composition is dominated by iron (32 percent), oxygen (30 percent), silicon (15 percent) and magnesium (14 percent). On large scales, however, ordinary matter seems to be only a small fraction of the matter and energy in the universe.
It is believed that the so-called dark matter accounts for about 80 percent of the matter in our universe. Until today, dark matter has not been observed directly. Its presence is only indirectly inferred from various astronomical observations like the rotation of galaxies, the motion of galaxy clusters, and gravitational lenses.
A new study suggests that the common belief that the Earth’s rigid tectonic plates stay strong when they slide under another plate, known as subduction, may not be universal.
Typically during subduction, plates slide down at a constant rate into the warmer, less-dense mantle at a fairly steep angle. However, in a process called flat-slab subduction, the lower plate moves almost horizontally underneath the upper plate.
The research, published in the journal Nature Geoscience, found that the Earth’s largest flat slab, located beneath Peru, where the oceanic Nazca Plate is being subducted under the continental South American Plate, may be relatively weak and deforms easily.
By studying the speed at which seismic waves travel in different directions through the same material, a phenomenon called seismic anisotropy, the researchers found that interior of the Nazca plate had been deformed during subduction.
Lead author of the study, Dr Caroline Eakin, Research Fellow in Ocean and Earth Science at the University of Southampton, said: “The process of consuming old seafloor at subduction zones, where great slabs of oceanic material are swallowed up, drives circulation in the Earth’s interior and keeps the planet going strong.
"One of the most crucial but least known aspects of this process is the strength and behavior of oceanic slabs once they sink below the Earth's surface. Our findings provide some of the first direct evidence that subducted slabs are not only weaker and softer than conventionally envisioned, but also that we can peer inside the slab and directly witness their behavior as they sink."
When oceanic plates form at mid-ocean ridges, their movement away from the ridge causes olivine (the most abundant mineral in the Earth's interior) to align with the direction of plate growth. This olivine structure is then 'frozen' into the oceanic plate as it travels across the Earth's surface. The olivine fabric causes seismic waves to travel at different speeds in different directions, depending on whether they are going 'with the grain' or 'against the grain'.
The scientists measured seismic waves at 15 local seismic stations over two and a half years, from 2010 to 2013, and seven further stations located on different continents. They found that the original olivine structure within the slab had vanished and been replaced by a new olivine alignment in an opposing orientation to before.
Dr Eakin said: “The best way to explain this observation is that the slab’s interior must have been stretched or deformed during subduction. This means that slabs are weak enough to deform internally in the upper mantle over time.”
The researchers believe that deformation associated with stretching of the slab as it bends to take on its flat-slab shape was enough to erase the frozen olivine structure and create a new alignment, which closely follows the contours of the slab's bends.
“Imaging Earth’s plates once they have sunk back into the Earth is very difficult,” said Lara Wagner, from the Carnegie Institution for Science and a principal investigator of the PULSE Peruvian project. “It’s very exciting to see results that tell us more about their ultimate fate, and how the materials within them are slowly reworked by the planet’s hot interior. The original fabric in these plates stays stable for so long at the Earth’s surface, that it is eye opening to see how dramatically and quickly that can change,” Lara added.
Glenn Harris | newswise
The Large Hadron Collider at CERN
If their assertion were confirmed, it would unhinge Albert Einstein's Theory of Special Relativity, which states that nothing can travel faster than the speed of light because it is a "cosmic constant." The speed of light is a cornerstone of our understanding of time and space. The results of this experiment could alter the basic foundation of causality, which states that cause comes before effect.
According to their data, over three years the neutrinos that have been sent from Switzerland to Italy arrived 60 nanoseconds earlier than light would have. The international team of scientists claims they have double-checked their work and have offered their findings to colleagues for review.
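To put the reported 60 nanoseconds in perspective, a quick back-of-the-envelope calculation helps. The CERN-to-Gran-Sasso baseline of roughly 730 km is an assumed round figure widely quoted for this experiment, not a number from the article:

```python
# Rough scale of the claimed anomaly: how long light needs for the
# CERN -> Gran Sasso trip, and what fraction of that 60 ns represents.
C = 2.99792458e8          # speed of light, m/s
DISTANCE_M = 730e3        # assumed baseline, ~730 km

light_time_s = DISTANCE_M / C       # ~2.44 milliseconds
early_s = 60e-9                     # reported early arrival, 60 ns
fractional_excess = early_s / light_time_s

print(light_time_s * 1e3)   # ~2.435 ms
print(fractional_excess)    # ~2.5e-5, i.e. a few parts in 100,000
```

The claimed effect is thus tiny in relative terms, which is why systematic timing errors were the first suspect in the ensuing scrutiny.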
The mood online seems to be one of excitement and wit.
In the past year, various reports from the United Nations' Intergovernmental Panel on Climate Change (IPCC) have warned the world in no uncertain terms that in order to achieve a stable climate on our planet by the end of this century, any increase in CO2 emissions in the coming decades must be curbed before the emissions can be appreciably reduced. According to the IPCC, the maximum amount of CO2 emissions that can be tolerated globally by the end of the 21st century amounts to roughly 2000 gigatons. This will mean a considerable reduction in the emission of CO2 per capita.
The per capita emission of carbon dioxide in Switzerland is currently 9 tons per year, approximately twice the global average. "Our objective for the climate and energy policy for the century has to be to induce each member of the human race to produce not more than 1 ton of carbon dioxide per year", Professor Ralph Eichler, President ETH Zurich, explained to the media today.
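As a rough consistency check on these numbers, one can spread the roughly 2000-gigaton budget evenly over the rest of the century. The population figure below is an illustrative assumption, not taken from the article:

```python
# Back-of-the-envelope check of the "1 tonne per person per year" target.
BUDGET_GT = 2000.0        # remaining CO2 budget to 2100, from the article
YEARS = 90                # roughly 2010 -> 2100
population_billion = 9.0  # assumed average world population (illustrative)

# Tonnes of CO2 per person per year if the budget is spread evenly:
per_capita_t = BUDGET_GT * 1e9 / (YEARS * population_billion * 1e9)
print(per_capita_t)  # ~2.5 t/yr
```

An even spread allows about 2.5 tonnes per person per year; a 1-tonne target is stricter, which leaves room for the higher emissions expected in the near-term decades before reductions take hold.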
Systematic implementation of the 3E strategy

This proposed emission target for carbon dioxide may seem ambitious by today's standards, but it can be achieved by the end of the century both in Switzerland and throughout the world. This is reflected in the calculations made by ETH Zurich's own Energy Science Center (ESC). In order to reach the target, an energy strategy will have to be consistently implemented. As stated by Professor Konstantinos Boulouchos, the proposed strategy is based on three pillars: 1) the exhaustion of efficiency potential, 2) the extended use of renewable energy sources and 3) the increased share of electricity in the energy mix.
Exhausting the efficiency potential will mean increasing efficiency in every link of the energy conversion chain, from extraction at the energy source, through storage and distribution, up to energy usage. This alone would harbour great savings potential, especially when combined with market-based instruments to influence the demand side.
The second E of the strategy focuses on the use of renewable energy sources, such as photovoltaics, water, and wind. It is important to note that economic as well as ecological aspects must be taken into consideration when using renewable energy sources.
Electricity as the backbone of the energy system
The newcomer to the 3E strategy is the third E: electrification. According to ETH Zurich researchers, CO2-poor electricity will in future establish itself as the backbone of a sustainable energy system. It is increasingly being used in heating and cooling buildings (with heat pumps, for example), and is expected to extend to individual mobility (moving, in the long run, from hybrid vehicles to fully electric cars).
A reorientation of the energy system, however, will not happen overnight. It is likely to take several decades. All the more reason that steps be taken today: infrastructure in industrialized countries (transmission networks, power plants) needs to be renewed, and in threshold countries, erected.
Innovative research at ETH Zurich
ETH Zurich conducts intensive research with a mind to finding new solutions and methods to face the CO2 problem. Professor Marco Mazzotti from the Institute for Process Engineering is researching the possibilities of eliminating CO2 in fossil-fueled power stations and combining it with stable and solid substances. This so-called mineralization thus facilitates the permanent and secure storage of greenhouse gases. Power electronics are becoming increasingly smaller and more efficient: the research group headed by Professor Johann Kolar from the Power Electronic Systems Laboratory is devoted to developing such components that are deployed, for example, in hybrid vehicles. Efficient control of the drive system of such cars makes a significant contribution towards environmentally-friendly private transport.
Promising ETH Zurich research is also being carried out in the field of building systems engineering. The technology at our fingertips today would already enable us to replace CO2-emitting heating and boiler systems with a combination of innovative wall insulation and heat pumps - with free renewable energy from the ground. This ingenious concept is also just the ticket for existing buildings. "We just need to get cracking", explains Professor HansJürg Leibundgut from the Institute for Building Systems. Within five to six years it should be possible to produce the necessary components on an industrial level so that for the price of a mid-range car, a four-room apartment can be refurbished, with the effect that practically all of the CO2 previously generated by heating and warm water can be prevented.
Renata Cosby | idw
DNA Replication Made Frizzy
Paul Robert Johnson
“DNA Replication Made Frizzy” animates the process of DNA replication using legos as nucleotides and gloved hands as enzymes.
Each enzyme is represented using a different color glove, while each nucleotide is represented by a different color lego. This tutorial specifically focuses on replication and the essential enzymes involved. The sequences of nucleotide addition and enzymatic joining or cleavage proceed in specific directions and in specific orders. Ms. Frizzie, a whimsical, funny character, guides the audience through the sequence of events that occur during replication, beginning with a single strand of DNA and resulting in a replicated strand of DNA.
NOTE: This project was created with assistance from the former Engage Program.
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy. This contrasts with classical particles, which can have any energy. These discrete values are called energy levels. The term is commonly used for the energy levels of electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized.
In chemistry and atomic physics, an electron shell, or a principal energy level, may be thought of as an orbit followed by electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, …).
Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n² electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration.
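The pattern quoted here (2, then 2 + 6, then 2 + 6 + 10, ...) follows from the capacities 2(2l + 1) of the s, p, d, ... subshells within each shell. A short sketch:

```python
# Electron capacity of the nth shell, built up from its subshells:
# a subshell with azimuthal quantum number l holds 2 * (2l + 1) electrons,
# and shell n contains subshells l = 0 .. n-1, summing to 2 * n**2.
def subshell_capacities(n):
    """Capacities of the s, p, d, ... subshells within shell n."""
    return [2 * (2 * l + 1) for l in range(n)]

def shell_capacity(n):
    """Total electron capacity of shell n (equals 2 * n**2)."""
    return sum(subshell_capacities(n))

for n in range(1, 5):
    print(n, subshell_capacities(n), shell_capacity(n))
# n=3 gives [2, 6, 10] -> 18, matching the text above.
```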
If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. If more than one quantum mechanical state is at the same energy, the energy levels are "degenerate". They are then called degenerate energy levels.
- 1 Explanation
- 2 History
- 3 Atoms
- 4 Molecules
- 5 Energy level transitions
- 6 Crystalline materials
- 7 See also
- 8 References
Quantized energy levels result from the relation between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave function has the form of standing waves. Only stationary states with energies corresponding to an integral number of wavelengths can exist; for other states the waves interfere destructively, resulting in zero probability density. Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator.
The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926.
Intrinsic energy levels
In the formulas for energy of electrons at various levels given below in an atom, the zero point for energy is set when the electron in question has completely left the atom, i.e. when the electron's principal quantum number n = ∞. When the electron is bound to the atom in any closer value of n, the electron's energy is lower and is considered negative.
Orbital state energy level: atom/ion with nucleus + one electron
Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by:

E_n = −hcR∞ · Z²/n²

(typically between 1 eV and 10³ eV), where R∞ is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is Planck's constant, and c is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n.
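These level energies are easy to evaluate numerically. The sketch below uses the standard hydrogen-like formula E_n = −hcR∞ Z²/n², with hcR∞ ≈ 13.6057 eV:

```python
# Bound-state energies of a hydrogen-like atom/ion.
RYDBERG_EV = 13.605693  # h * c * R_infinity expressed in eV

def energy_level(Z, n):
    """Energy (eV) of level n in a hydrogen-like atom of atomic number Z.

    Zero is set at n = infinity (electron removed), so bound levels
    are negative, as described in the text.
    """
    return -RYDBERG_EV * Z**2 / n**2

print(energy_level(1, 1))   # hydrogen ground state: about -13.6 eV
print(energy_level(1, 2))   # first excited level: about -3.4 eV
print(energy_level(2, 1))   # He+ ground state: Z^2 = 4 times deeper, ~-54.4 eV
```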
This equation is obtained from combining the Rydberg formula for any hydrogen-like element (shown below) with E = h ν = h c / λ assuming that the principal quantum number n above = n1 in the Rydberg formula and n2 = ∞ (principal quantum number of the energy level the electron descends from, when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data.
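The Rydberg formula mentioned here gives the emitted wavelengths directly. The sketch below uses R∞ and ignores the finite-nuclear-mass (reduced-mass) correction, so it slightly underestimates the laboratory wavelengths for hydrogen:

```python
R_INF = 1.0973731568e7  # Rydberg constant R_infinity, m^-1

def rydberg_wavelength(n_low, n_high, Z=1):
    """Vacuum wavelength (m) of the photon emitted when an electron
    drops from n_high to n_low in a hydrogen-like atom."""
    inv_lambda = R_INF * Z**2 * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1.0 / inv_lambda

print(rydberg_wavelength(1, 2) * 1e9)  # Lyman-alpha, ~121.5 nm (ultraviolet)
print(rydberg_wavelength(2, 3) * 1e9)  # H-alpha, ~656.1 nm (red)
```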
An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants.
Electron-electron interactions in atoms
If there is more than one electron around the atom, electron-electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low.
For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as Zeff that depends strongly on the principal quantum number.
In such cases, the orbital types (determined by the azimuthal quantum number ℓ) as well as their levels within the molecule affect Zeff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. For filling an atom with electrons in the ground state, the lowest energy levels are filled first and consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule.
Fine structure splitting
Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact-term interaction of s-shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10⁻³ eV.
Hyperfine structure

This even finer structure is due to electron–nucleus spin–spin interaction, resulting in a change in the energy levels by a typical order of magnitude of 10⁻⁴ eV.
Energy levels due to external fields
There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by

E = −μL · B, with μL = −(e/2mₑ) L.

Additionally, the magnetic moment arising from the electron spin must be taken into account. Due to relativistic effects (Dirac equation), there is a magnetic moment, μS, arising from the electron spin:

μS = −gS (e/2mₑ) S,

with gS the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ,

μ = μL + μS = −(e/2mₑ)(L + gS S).

The interaction energy therefore becomes

E = −μ · B = (e/2mₑ)(L + gS S) · B.
Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the sum energy level for the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs. In polyatomic molecules, different vibrational and rotational energy levels are also involved.
The molecular energy levels are labelled by the molecular term symbols.
The specific energies of these components vary with the specific energy state and the substance.
Energy level diagrams
There are various types of energy level diagrams for bonds between atoms in a molecule.
Energy level transitions
Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels. Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away so as to have practically no more effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. Energy in corresponding opposite quantities can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved.
If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon equal to the energy difference. A photon's energy is equal to Planck's constant (h) times its frequency (f) and thus is proportional to its frequency, or inversely to its wavelength (λ).
- ΔE = h f = h c / λ,
since c, the speed of light, equals f λ
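The relation ΔE = hf = hc/λ converts directly between a level gap and the matching photon's frequency or wavelength. A minimal sketch using SI constants:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per eV

def photon_wavelength_nm(delta_e_ev):
    """Wavelength (nm) of a photon matching a level gap given in eV."""
    return H * C / (delta_e_ev * EV) * 1e9

def photon_frequency_hz(delta_e_ev):
    """Frequency (Hz) of a photon matching a level gap given in eV."""
    return delta_e_ev * EV / H

# A 2 eV gap corresponds to red light near 620 nm:
print(photon_wavelength_nm(2.0))
print(photon_frequency_hz(2.0))
```

A handy shortcut follows from these constants: hc ≈ 1240 eV·nm, so wavelength in nm is roughly 1240 divided by the gap in eV.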
Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum.
An asterisk is commonly used to designate an excited state. An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π* meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital. Reverse electron transitions for all these types of excited molecules are also possible to return to their ground states, which can be designated as σ* → σ, π* → π, or π* → n.
A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics.
Higher temperature causes fluid atoms and molecules to move faster increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide transferring the heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly colored glow.
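How strongly a given kind of level is thermally populated can be quantified with the Boltzmann factor exp(−ΔE/kT). The sketch below assumes equal statistical weights for the two levels, which the paragraph above does not specify:

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def boltzmann_ratio(delta_e_ev, temperature_k):
    """Equilibrium population ratio N_upper/N_lower for two levels
    separated by delta_e_ev, assuming equal statistical weights."""
    return math.exp(-delta_e_ev / (K_B_EV * temperature_k))

# A rotational-scale gap (~1e-3 eV) is well populated at room temperature,
# while an electronic-scale gap (~2 eV) is essentially empty:
print(boltzmann_ratio(1e-3, 300))   # ~0.96
print(boltzmann_ratio(2.0, 300))    # ~2.5e-34, essentially none
```

This is why rotational and vibrational modes contribute to heat capacity at ordinary temperatures while electronic excitation only matters at much higher ones.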
An electron farther from the nucleus has higher potential energy than an electron closer to the nucleus, thus it becomes less bound to the nucleus, since its potential energy is negative and inversely dependent on its distance from the nucleus.
Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal.
Astronomers estimate that at the time the Solar system formed, its proto-planetary disk contained the equivalent of about twenty Jupiter-masses of gas and dust. This so-called "minimum mass solar nebula (MMSN)" is derived from the current masses of the rocky planets and calculations of how they formed; a minimum mass is used in case the planet formation mechanism is somehow less efficient than expected. (Some earlier estimates had MMSN values up to about 100 Jupiter-masses.) As a nebula ages and its planets develop, its disk mass naturally decreases; current models estimate that a planetary system can form in under five million years.
CfA astronomer Sean Andrews and his colleagues have been studying the early stages of planet-forming nebulae around other stars using the fact that such disks are cool and emit radiation primarily in the infrared and submillimeter regimes. The team used the submillimeter camera on the James Clerk Maxwell Telescope in Hawaii to map the emitting dust in a cluster of young stars known as IC348 located in the Perseus molecular cloud about a thousand light-years away from us. The cluster is estimated to be about two to three million years old, and its planetary systems should therefore be partially developed.
The scientists found thirteen submillimeter point sources in the cloud indicative of disks, in a total population of about three hundred and seventy known objects. From its emitted luminosity the scientists can estimate the mass of a disk, and they find these disks range in mass between 1.5 and 16 Jupiter-masses—smaller than a MMSN. Their results imply that disks as massive as the early solar system's are, at least by this age, very rare. Furthermore, expecting that the undetected sources all have smaller and fainter disks, the team combined the observations of all the sources to estimate the average disk mass: one-half a Jupiter-mass. The astronomers conclude that fewer than about 1% of stars have a MMSN disk. If most disks start off with the solar minimum mass value, therefore, they must have evolved very rapidly in order to have depleted most of the mass after a few million years.
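The article does not give the team's actual conversion, but a standard optically-thin estimate relates submillimetre flux density to disk mass via M = Fν d² / (κν Bν(T)). All numerical assumptions below (opacity, 20 K dust temperature, 300 pc distance, 30 mJy flux) are illustrative, not values from the paper:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K
PC = 3.0857e16       # parsec, m
M_JUP = 1.898e27     # Jupiter mass, kg

def planck(nu, T):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2 * H * nu**3 / C**2 / (math.exp(H * nu / (K_B * T)) - 1)

def disk_mass_mjup(flux_jy, dist_pc, nu=353e9, T=20.0, kappa=3.5e-3):
    """Optically-thin disk (gas+dust) mass in Jupiter masses.

    kappa is an assumed opacity in m^2 per kg of gas+dust at frequency nu
    (a common choice near 850 um with a gas-to-dust ratio of 100).
    """
    flux = flux_jy * 1e-26              # Jy -> W m^-2 Hz^-1
    d = dist_pc * PC
    return flux * d**2 / (kappa * planck(nu, T)) / M_JUP

# e.g. a 30 mJy source at ~300 pc comes out near 8 Jupiter masses,
# comfortably inside the 1.5-16 Jupiter-mass range quoted above:
print(disk_mass_mjup(0.030, 300))
```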
L. Cieza et al. A SCUBA-2 850-μm survey of protoplanetary discs in the IC 348 cluster, Monthly Notices of the Royal Astronomical Society (2015). DOI: 10.1093/mnras/stv2044
In May, residents of Tornado Alley braced themselves as the region's usual spring thunderstorms began popping up across the plains, bringing heavy rainfall that caused flash floods, high winds, hail and tornadoes. On May 20, the town of Moore, Okla., seemingly a perennial tornado target, was struck by a monster storm that was 1.3 miles (2 kilometers) wide at its peak, carving a 17-mile-long (27 km) path of destruction through the Oklahoma City suburb with winds that reached 210 mph (338 km/h).
While severe thunderstorms can happen anywhere that atmospheric conditions become ripe, there are areas like Tornado Alley where these conditions come together more often. But as human activity spews more and more greenhouse gases, such as carbon dioxide, into the atmosphere, causing the world to warm, there are concerns that global warming could substantially increase the risk of severe thunderstorms and the damage they can bring. A new study, detailed online today (Sept. 23) in the journal Proceedings of the National Academy of Sciences, suggests that this risk could increase for the eastern United States in a warming world.
"These severe thunderstorms can be very damaging events," said researcher Noah Diffenbaugh, a climate scientist at Stanford University in California. [In Images: Extreme Weather Around the World]
The study comes out in the run-up to a new report from the Intergovernmental Panel on Climate Change, the international body that reviews the most recent research on climate change and releases reports that summarize the current science and expected impacts for the world's policymakers.
Climate change conundrum
The issue of whether global warming would lead to an increased or decreased risk of severe thunderstorms has been a long-standing one among scientists who examine the potential impacts of climate change. The problem with answering this question lies in understanding the way warming alters the behavior of the atmosphere. While a warmer atmosphere can hold more moisture, creating the possibility of higher rainfall amounts, it could also lead to a reduction in the wind shear that helps drive these storms. Wind shear is a change of wind speed or direction with height in the atmosphere; strong wind shear is needed to generate the kinds of storms that spawn tornadoes.
The lack of a reliable long-term record of severe thunderstorms makes it difficult to systematically analyze trends in where and when thunderstorms occur as the climate changes, the kind of analysis that might otherwise help clarify the issue.
"There's been this conundrum of competing effects that have been theorized for global warming in terms of severe thunderstorm environments," Diffenbaugh told LiveScience.
To help see what global warming might bring for the continental United States, Diffenbaugh and his colleagues tested an ensemble of global climate models to investigate how global warming might influence the kind of atmospheric environment known to support the formation of severe thunderstorms in the current climate: namely, that of strong wind shear and high convective energy. (Convection, like that in a boiling pot of water, is the engine that fuels storms.)
The researchers found this suite of global climate models suggested that even relatively moderate global warming could lead to a substantial increase in the kind of atmospheric environment linked with severe thunderstorms over the eastern United States. Overall, the scientists discovered global warming boosted the number of days with both high levels of convective energy and strong wind shear, suggesting that more of the country could see severe thunderstorms like the one that created the Moore tornado.
The climate models suggested global warming would also cause days with lower wind shear, and that overall, average wind shear would decrease. However, the researchers discovered these days of lower wind shear often coincided with days of low convective energy levels. This means the average reduction in wind shear wouldn't diminish the likelihood of severe storms because it did not hinder the potential for storm formation on the days with high convective energy.
The scientists caution their models do not simulate the emergence of severe thunderstorms, only the atmospheric environments where they are known to arise. "It's a tough challenge to perform climate model experiments that resolve individual storms," Diffenbaugh said.
Copyright 2013 LiveScience, a TechMediaNetwork company. All rights reserved.
Until recently, the northern part of the Great Russian forest—which is the size of the continental U.S.—was populated with larch trees, while the slightly warmer southern part was populated by evergreen conifers. UVA environmental sciences researchers say that now evergreen conifers are encroaching on the larch-dominated boreal forest due to global warming. Unlike larch trees, evergreen conifers, such as spruce and fir, retain their needles year-round and absorb sunlight, which will make the forest even warmer. “What we’re seeing is a system kicking into overdrive,” says Hank Shugart, a professor of environmental sciences. “Warming creates more warming.”
Edmund Halley was an English mathematician and astronomer best known for predicting the return and calculating the orbit of the comet that now bears his name. His father was a wealthy merchant who could well afford to provide his eldest son with a good education. Halley undertook his early studies at St. Paul's School in London and demonstrated a youthful interest in astronomy that was promoted by his father, who provided him with suitable instruments. He entered Queen's College at Oxford in 1673, where he came into contact with John Flamsteed, the Astronomer Royal at the Greenwich Observatory. Encouraged by Flamsteed, Halley left school without a degree in 1676 to survey the stars in the southern hemisphere, in order to supplement the charts compiled in the northern hemisphere by Flamsteed and others.
Halley spent 18 months on the island of St. Helena in the South Atlantic, the most southern territory ruled by the British. The location made astronomical observations difficult because the island was often shrouded in clouds, but Halley persevered. Before returning to London in 1678, he had catalogued almost 350 stars and observed a variety of other phenomena, such as the apparent decrease in brightness of some of the stellar bodies observed in antiquity. His findings, the first star charts of the southern hemisphere, were published by the end of the year. He was rewarded for the achievement with election into the Royal Society and a degree from Oxford granted through royal mandate.
For the next two years, Halley traveled the continent before resettling in London, where he married and began a series of protracted lunar observations in an effort to correct tables of the moon's position. However, his interests were diverse and in 1684 his fascination with planetary motion brought him into contact with several of his prominent contemporaries. Robert Hooke, Christopher Wren, and Halley often conversed with each other in their attempts to determine how gravitational forces affect the orbits of the planets. Although they made some progress, Halley decided to visit Isaac Newton and was surprised to find that he had already worked out the problem and believed that planetary motion was elliptical. Extremely impressed, Halley encouraged Newton to publish his theories and even provided the financial backing for his opus, Principia.
In turn, Newton had a large effect upon the life of Halley, which later turned to the study of comets. Though they at first appeared to follow different laws of motion than the planets, Halley believed that comets must also be affected by gravitational pulls. In his analysis of comet observations, he realized that certain aspects of three (1531, 1607, and 1682) were so similar that they must be the successive returns of a single entity whose orbit was an elongated ellipse. He determined that the periodicity of the comet was approximately 76 years and in his Synopsis of the Astronomy of the Comets, published in 1705, predicted the return of the body in 1758. Though he did not live long enough to see the comet a second time, his successful prediction resulted in his name becoming permanently associated with the comet.
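The interval arithmetic behind that identification is simple enough to sketch, using the apparition dates from the paragraph above:

```python
# The three apparitions Halley matched, and the gaps between them.
apparitions = [1531, 1607, 1682]
intervals = [b - a for a, b in zip(apparitions, apparitions[1:])]
print(intervals)  # [76, 75] -- nearly equal, consistent with one returning body

# Adding the ~76-year period to the last apparition gives his famous prediction.
predicted_return = apparitions[-1] + 76
print(predicted_return)  # 1758
```

The one-year spread between the intervals is real: planetary perturbations, chiefly from Jupiter, shift the comet's return by a year or more, an effect Halley himself anticipated in his analysis.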
Halley made several other scientific contributions throughout his lifetime, although many of them are less well known. He discovered relative motion among the stars, which had previously been believed to be fixed. He devised the first meteorological weather map and established accurate quantitative mortality tables. He also commanded the first sea voyage undertaken purely for scientific purposes, noting any compass variations that could be caused by the Earth's magnetic field. These achievements, along with his other tremendous scientific efforts, contributed to his crowning honor. In 1720, he succeeded Flamsteed, the man who had once inspired him to reach for the stars, as Astronomer Royal. Halley continued his admirable quest for knowledge at the Greenwich Observatory for his remaining years, dying at the age of eighty-six on January 14, 1742.
© 1995-2018 by Michael W. Davidson and The Florida State University.
Researchers at the Royal Melbourne Institute of Technology (RMIT University) in Australia have created two-dimensional materials that are only a few atoms thick by dissolving metals in liquid metals to create oxide layers that can be easily peeled away.

This image of a liquid metal "slug" and its clear atom-thick "trail" shows the breakthrough in action. (RMIT University)

According to a report from the institute, these 2D materials are not found in nature and have the potential to enhance data storage and make faster electronics.

"Once extracted, these oxide layers can be used as transistor components in modern electronics," the article continued. "The thinner the oxide layer, the faster the electronics are. Thinner oxide layers also mean the electronics need less power. Among other things, oxide layers are used to make the touch screens on smart phones."

Researchers used non-toxic alloys of gallium as a reaction medium, covering the surface of the liquid metal with thin oxide layers of the added metal rather than naturally occurring gallium oxide. The layers are removed by touching the liquid metal with a smooth surface.

Larger amounts of the thin layers can be produced by injecting air into the liquid metal, according to a researcher.

The article added, "It's a process so cheap and simple that it could be done on a kitchen stove by a non-scientist." The researchers predict that this technology could be used with one-third of the periodic table and that many of the thin oxides are semiconducting or dielectric materials.

The research was recently published in Science. The abstract stated:

"Two-dimensional (2D) oxides have a wide variety of applications in electronics and other technologies. However, many oxides are not easy to synthesize as 2D materials through conventional methods. We used nontoxic eutectic gallium-based alloys as a reaction solvent and co-alloyed desired metals into the melt.

"On the basis of thermodynamic considerations, we predicted the composition of the self-limiting interfacial oxide. We isolated the surface oxide as a 2D layer, either on substrates or in suspension. This enabled us to produce extremely thin subnanometer layers of HfO2, Al2O3, and Gd2O3.

"The liquid metal-based reaction route can be used to create 2D materials that were previously inaccessible with preexisting methods. The work introduces room-temperature liquid metals as a reaction environment for the synthesis of oxide nanomaterials with low dimensionality."
Catherine Reid's task, supervised by team member Dr. Robert Hermes, is to look for new ways to preserve rhino sperm so that it can be used after months of preservation with minimal loss of fertility. A far-fetched topic for scientific work? Not at all, stresses Catherine Reid. The cryo-conservation of rhino sperm is very important for the conservation of threatened species. Catherine Reid: "Statistics show that the reproduction rate of captive White Rhinos is only about 8 per cent." In European zoos, 55 per cent of all female animals are of reproductive age, but so far, only 15 per cent of these animals have reproduced; most of them only once. "These numbers show that the population is not self-sustaining," says Reid. All the more important is a way to conserve the sperm of male rhinos.

Asked whether the team could not continue its work with fresh sperm, Reid answers: "This would limit the breeding possibilities." If the team used only fresh sperm, a suitable bull, fertile and not related to the female, would have to be nearby. That is not always the case, so rhinos have to be transported over long distances for breeding projects. That is expensive and risky for the animals, and the chances of success are not very high. So far, artificial fertilization has been successful only twice worldwide, and natural mating is rare. Particularly worrisome: the rarest rhino species, for instance the Sumatran Rhino and the Northern White Rhino, reproduce rarely in captivity. All the more important is successful preservation of sperm cells by freezing. Thus, the sperm could be used after the death of a bull, livestock transports for mating could be abandoned, and large distances could be bridged by sperm transport. "Additionally, we could introduce new genes from wildlife populations into the breeding programmes of zoos without taking animals from the wilderness," says Catherine Reid.

Now, what exactly does Ms. Reid investigate? In simple words, she is studying two different methods of freezing. "You can freeze sperm cells very fast with liquid nitrogen, or you can freeze cells in two steps by first cooling them gradually," says Reid. The fast method requires small quantities of sperm and can lead to crystallization, which destroys cells or impairs their fertility. Therefore, she is working, together with her colleagues and supervisors at the IZW, on a slower freezing method that avoids crystallization and can freeze larger quantities. First, the sperm cells are cooled down to 5 degrees Celsius; then the temperature is lowered to minus 50 degrees Celsius. "Beyond that, we are testing different additives to improve the fertility of the sperm cells after thawing."
The scientist is working in Berlin with a scholarship from the local parliament (“Abgeordnetenhaus”). Recently, she presented her research project to experts from the foundation. It convinced the experts, and the foundation called “Studienstiftung des Berliner Abgeordnetenhauses” extended the scholarship.
Josef Zens | alfa
CHLOROPHYTA : BRYOPSIDALES : Bryopsidaceae (GREEN ALGAE)
Description: Thallus small and erect to 10 cm long. Branches arranged distichously on the main axes. Feather-like.
Habitat: Not uncommon. In low littoral rock pools and in the sublittoral.
Distribution: Widespread in the British Isles. Europe: Mediterranean and the Black Sea, Azores, Portugal, Spain, France, Norway, Faroes and Iceland. Atlantic coast of North America: Canada, Maine, New Hampshire, Massachusetts, Rhode Island, Connecticut and Long Island, New Jersey, Delaware, Maryland and Virginia. Further afield: west coast of Greenland, Jamaica, Canary Islands, Senegal, Ghana, Mauritania, South Africa, British Columbia to California, Australia and Japan.
Similar Species: Bryopsis hypnoides has irregular branching.
Key Identification Features:
Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record : World Register of Marine Species.
Morton, O. & Picton, B.E. (2016). Bryopsis plumosa (Hudson) C. Agardh. In: Encyclopedia of Marine Life of Britain and Ireland. http://www.habitas.org.uk/marinelife/species.asp?item=ZS3920 Accessed on 2018-07-18.
Copyright © National Museums of Northern Ireland, 2002-2015
An unusual stellar object named ROXs 42Bb may represent a new kind of planets or it may be a very rare planet-mass brown dwarf, according to a group of astronomers from Canada, Germany, Japan and the United States.
This image, taken with the Keck telescope, shows the ROXs 42B system: ROXs 42Bb orbits at about 150 astronomical units; ROXs 42Bc is a candidate companion or an unrelated background star. Image credit: Thayne Currie et al.
ROXs 42Bb is located near or orbiting the very young star ROXs 42B about 440 light-years away. The star is a member of the ρ Ophiuchus star-forming region located in the constellation Ophiuchus.
“We have very detailed measurements of this object spanning 7 years, even a spectrum revealing its gravity, temperature, and molecular composition,” said Dr Thayne Currie from the University of Toronto, the lead author of the paper published in the Astrophysical Journal Letters (arXiv.org).
“Still, we can’t yet determine whether it is a planet or a failed star – what we call a ‘brown dwarf’. Depending on what measurement you consider, the answer could be either.”
ROXs 42Bb is about 9 times the mass of Jupiter, below the limit most astronomers use to separate planets from brown dwarfs, which are more massive.
However, it is located 30 times further away from the star than Jupiter is from the Sun.
Dr Currie said: “This situation is a little bit different than deciding if Pluto is a planet. For Pluto, it is whether an object of such low mass amongst a group of similar objects is a planet. Here, it is whether an object so massive yet so far from its host star is a planet. If so, how did it form?”
Most astronomers believe that gas giant planets such as Jupiter and Saturn formed by core accretion, whereby the planets form from a solid core that then develops a massive gaseous envelope. Core accretion operates most efficiently closer to the parent star due to the length of time required to first form the core.
An alternate theory proposed for forming gas giant planets is disk instability – a process by which a fragment of a disk gas surrounding a young star directly collapses under its own gravity into a planet. This mechanism works best farther away from the parent star.
Of the dozen or so other young objects with masses of planets observed by astronomers, some have planet-to-star mass ratios less than about 10 times that of Jupiter and are located within about 15 times Jupiter’s separation from the Sun.
Others have much higher mass ratios and/or are located more than 50 times Jupiter’s orbital separation, properties that are similar to much more massive objects widely accepted to not be planets.
The first group would be planets formed by core accretion, and the second group probably formed just like stars and brown dwarfs.
In between these two populations is a big gap separating true planets from other objects.
“The new object starts to blur this distinction between planets and brown dwarfs, and may lie within and begin to fill the gap,” Dr Currie said.
“It’s very hard to understand how this object formed like Jupiter did. However, it’s also too low mass to be a typical brown dwarf; disk instability might just work at its distance from the star. It may represent a new class of planets or it may just be a very rare, very low-mass brown dwarf formed like other stars and brown dwarfs: a planet-mass brown dwarf.”
This story originally appeared at sci-news
Thayne Currie et al. 2014. Direct Imaging and Spectroscopy of a Candidate Companion Below/Near the Deuterium-burning Limit in the Young Binary Star System, ROXs 42B. ApJ 780, L30; doi: 10.1088/2041-8205/780/2/L30
With various space agencies eyeing up moon missions in the not-too-distant future, Japanese scientists may have just spotted the perfect place for Man’s first moon colony.
A fissure beneath the moon’s surface was spotted by a radar instrument on Japan’s Selenological and Engineering Explorer (SELENE) probe, and is about 50 kilometres long.
Japan Aerospace Exploration Agency (JAXA) said that the fissure, located beneath the domes of the moon’s Marius Hills, may be an ancient subterranean lava tube.
JAXA senior researcher Junichi Haruyama said that such lava tubes ‘might be the best candidate sites for future lunar bases’, shielding future explorers from the moon’s wildly fluctuating temperatures and the barrage of cosmic radiation at the surface.
Haruyama said, "We’ve known about these locations that were thought to be lava tubes … but their existence has not been confirmed until now."
These underground networks of “lava tubes” could protect future astronauts from the harsh conditions on the moon.
Unlike Earth, the moon does not have a thick atmosphere and magnetic field, so it is unprotected against cosmic radiation, and receives frequent meteorite impacts.
The moon’s temperature also varies wildly, going up and down by several hundred degrees Celsius during one lunar day. | <urn:uuid:80b2a2b2-8815-4919-b205-07a495bcc7ca> | 3.5625 | 339 | News Article | Science & Tech. | 26.648819 | 95,496,621 |
PHP is a recursive acronym for "PHP: Hypertext Preprocessor", and it is a favourite language of developers who love to write for the web. PHP is an open-source general-purpose language which enables the user to embed code in HTML. PHP is a very widely used programming language, and it is best suited for web development. Instead of confusing yourself with lots of HTML output commands, you can use simple embedded PHP code to generate the HTML. PHP code is enclosed in special start and end tags that let you jump easily in and out of PHP mode. Well, if you are thinking of learning this prosperous language, then you should look at the five top reasons to learn PHP before embracing it.
- The language of the beginners.
PHP is one of the easiest programming languages, and beginners can adapt to it with only some basic knowledge. The coding in PHP is so simple that a person with little background can follow PHP code. Any non-programmer can easily pick up the language, though a true gentleman of a programmer would never copy somebody else's code. Especially since the latest version, PHP 7, fixes many errors itself, it is even easier for beginners to work with. You can easily learn PHP from the numerous available video tutorials.
- Super Speed with the PHP.
PHP is very fast to load compared to ASP. PHP loads any website very fast and can work very quickly. PHP code runs in its own space, which results in fast loading of websites. Patience is considered the greatest virtue, but if you lack it, then PHP is the ideal choice for you.
- PHP for the tightly budgeted fellas.
PHP software and hosting are available at zero cost. PHP is very easy on the pocket, as almost all PHP tools are open source, so there is no need to spend a dime on them. For example, WordPress is open-source software for PHP programmers, and PHP only requires a Linux server to run, which is available at no extra cost from most hosting providers. In short, working with PHP will not cost you a penny.
- Furiously flexible.
PHP is a user-friendly language which gives programmers enough space to go a little wild with their creativity. There are few hard and fast rules for coding in PHP, so programmers can work as they please; moreover, PHP smooths over many errors in the code automatically. All in all, PHP is a dynamic programming language.
- Bright Career in the PHP.
PHP programmers have a bright future in the field, and PHP is a very good tool for freelancers. They can easily make a living out of it, and PHP web developers have a very lucrative future ahead. With companies like Facebook investing in PHP, the bright future of the language is clear.
PHP is a very progressive programming language, as the launch of PHP 7 shows. So it is a good choice for non-technical people to learn this easy web-development language and try their luck with PHP.
Check the most advanced course in PHP at LearnCodeOnline.in: PHP Super Series.
Hello, I am Arpit. I am a content writer who usually loves to write technical stuff. I always try to write something very related to latest technologies and trend. I hope you like my blogs and have a good time reading them. | <urn:uuid:0d471a69-2518-4880-9538-34dfc1416df7> | 2.8125 | 721 | Product Page | Software Dev. | 63.793145 | 95,496,631 |
Principles of Oxidative Degradation
Organic materials undergo degradation reactions in the presence of oxygen. As a result, numerous oxidation products are formed, such as peroxides, alcohols, ketones, aldehydes, acids, peracids, peresters, and γ-lactones. Elevated temperatures, heat, and catalysts, such as metals and metal ions, assist oxidation. Degradation products arising from the oxidation of defined low-molecular-weight hydrocarbons can be relatively easily isolated and analyzed.
Keywords: Oxidative Degradation; Chain Scission; Peroxy Radical; Oven Aging; Hydroperoxide Group
Early Earth Was Flat and Ocean-Covered, Secular Scientists Claim
May 9, 2017
No mountains, covered with water — does that sound vaguely familiar?
Archive Classic: The Mystery of the Ultra-Pure Sandstones
May 6, 2017
Reprinted from June 27, 2003, this mystery is worth considering still. What caused these global deposits that are not being formed today? R. H. Dott, Jr (Univ. of Wisconsin) has a problem. He’s been trying to explain a geological puzzle for 50 years, and it is still unresolved.
The Human Evolution Textbook Has to be Rewritten Yet Again!
May 3, 2017
This post by Dr. Jerry Bergman exposes the shoddy interpretations drawn from controversial flakes of rock that are turned by reporters into world-shattering news.
Ancient DNA Recovered from Caves
May 3, 2017
New techniques are allowing scientists to extract ancient DNA from cave soil. But is it really as old as claimed?
Bubbles Scream Life
May 1, 2017
It might be a fungus. It might be half a billion years older than previously thought. It might rewrite the evolutionary history of complex life, including humans. What is it?
Are Hobbits Human?
April 25, 2017
Are the little people of Indonesia and South Africa just small versions of us? Wherever they came from, and whatever they were doing in the caves in which they were found, they don't fit evolutionary expectations.
Enceladus Pumps Imagination into the Vacuum
April 17, 2017
NASA astrobiologists abandon scientific restraint in a naked push to titillate taxpayers for another vain quest to find life beyond Earth.
OOL Foolishness Is Out of Control
April 12, 2017
Most scientists working on origin of life (OOL) have lost all semblance of respect for empiricism.
OOL’s Gold and Animism
April 6, 2017
What is the spark that turns molecules into life? For the materialist, it's the spirit of imagination.
Tick Talk: Mammal Blood Found in Amber
April 5, 2017
Can intact blood be preserved for 15 to 45 million years, give or take 50 million?
Dry Titan Has Static Cling
March 31, 2017
Imagine putting a cat in box of packing peanuts. Something like that happens when Titan's sand dunes grow, some scientists think.
Aussie Dino Stampede Tramples Long Ages
March 29, 2017
Researchers just published findings on a decade-long study of one of the biggest dinosaur trackways on earth, located on the west coast of Australia.
Small Planetary Bodies Unexpectedly Active
March 27, 2017
You would think things would have cooled down after 4.5 billion years. That's not what planetary scientists are observing.
The Great Dinosaur Mix-up
March 25, 2017
Evolutionists seem to enjoy rearranging branches on the Darwin tree, not to find the truth, but to fool the public into thinking they're getting warmer.
More Soft Tissue Found in Cretaceous Fossil Bird
March 24, 2017
Unrepentant over extreme falsification, evolutionary paleontologists are just taking it for granted that soft tissue can survive millions of years. | <urn:uuid:65bda781-ecb6-41dd-af72-1412fae0b4c7> | 2.78125 | 679 | Content Listing | Science & Tech. | 59.945616 | 95,496,687 |
UCSB researchers develop a novel device to image the minute forces and actions involved in cell membrane hemifusion
Cells are biological wonders. Throughout billions of years of existence on Earth, these tiny units of life have evolved to collaborate at the smallest levels in promoting, preserving and protecting the organism they comprise.
Among these functions is the transport of lipids and other biomacromolecules between cells via membrane adhesion and fusion -- processes that occur in many biological functions, including waste transport, egg fertilization and digestion.
At the University of California, Santa Barbara, chemical engineers have developed a way to directly observe both the forces present and the behavior that occurs during cell hemifusion, a process by which only the outer layers of the lipid bilayer of cell membranes merge.
While many different techniques have been used to observe membrane hemifusion, simultaneous measurements of membrane thickness and interaction forces present a greater challenge, according to Dong Woog Lee, lead author of a paper that appears in the journal Nature Communications.
'It is hard to simultaneously image hemifusion and measure membrane thickness and interaction forces due to the technical limitations,' he said.
However, by combining the capabilities of the Surface Forces Apparatus (SFA) -- a device that can measure the tiny forces generated by the interaction of two surfaces at the sub-nano scale -- and simultaneous imaging using a fluorescence microscope, the researchers were able to see in real time how the cell membranes rearrange in order to connect and open a fusion conduit between them. The SFA was developed in Professor Jacob Israelachvili's Interfacial Sciences Lab at UCSB. Israelachvili is a faculty member in the Department of Chemical Engineering at UCSB.
To capture real time data on the behavior of cell membranes during hemifusion, the researchers pressed together two supported lipid bilayers on the opposing surfaces of the SFA. These bilayers consisted of lipid domains -- collections of lipids that in non-fusion circumstances are organized in more or less regularly occurring or mixed arrangements within the cell membrane.
'We monitored these lipid domains to see how they reorganize and relocate during hemifusion,' said Lee. The SFA measured the forces and distances between the two membrane surfaces as they were pushed together, visualized at the Ångstrom (one-tenth of a nanometer) level. Meanwhile, fluorescent imaging made it possible to see the action as the more ordered-phase (more solid) domains reorganized and allowed the more disordered-phase (more fluid) domains to concentrate at the point of contact.
'This is the first time observing fluorescent images during a hemifusion process simultaneously with how the combined thickness of the two bilayers evolve to form a single layer,' said Lee. This rearrangement of the domains, he added, lowers the amount of energy needed during the many processes that require membrane fusion. At higher pressures, according to the study, the extra energy activates faster hemifusion of the lipid layers.
Lipid domains have been seen in many biological cell membranes, and have been linked to various diseases such as multiple sclerosis, Alzheimer's disease and lung diseases. According to the researchers, this novel device could be used to diagnose, provide a marker for, or study dynamic transformations in situations involving lipid domains in pathological membranes. The fundamental insights provided by this device could also prove useful for other materials in which dynamic changes occur between membranes, including surfactant monolayers and bilayers, biomolecules, colloidal particles, surfactant-coated nanoparticles and smart materials.
Sonia Fernandez | EurekAlert!
The History of Astrometry
by Michael Perryman
Publisher: arXiv 2012
Number of pages: 52
The history of astrometry, the branch of astronomy dealing with the positions of celestial objects, is a lengthy and complex chronicle, having its origins in the earliest records of astronomical observations more than two thousand years ago, and extending to the high accuracy observations being made from space today.
Home page url
Download or read it online for free here:
by Claus Tøndering
An overview of the Christian, Hebrew, Persian, and Islamic calendars in common use. It gives a historical background for the Christian calendar, plus an overview of the French Revolutionary calendar, the Maya calendar, and the Chinese calendar.
by George Forbes
This book starts with the ancient Chinese, the Chaldeans, Greeks, and Arabs, then Copernicus and others of the Renaissance, and lastly the 18th and 19th centuries. Topics included are the telescope, the sun, moon, planets and the stars.
by Nick Kaiser - University of Hawaii
These are the notes for an introductory graduate course. They are meant to be a 'primer' for students embarking on a Ph.D. in astronomy. The level is somewhat shallower than standard textbook courses, but quite a broad range of material is covered.
by Kenneth R. Koehler - University of Cincinnati
Table of contents: Distance vs. Direction; Electromagnetic Waves; Astronomical Observation; The Solar System; The Sun; Stellar Populations; Elementary Particles; Nuclear Reactions; Stellar Evolution; Spacetime; Black Holes; Galaxies; etc. | <urn:uuid:79202b22-0e53-4fc5-9e79-02383543ace8> | 3.078125 | 338 | Content Listing | Science & Tech. | 31.009762 | 95,496,699 |
A View from Emerging Technology from the arXiv
New Theory Explains Superrotation on Venus
As a Japanese weather satellite heads to Venus, a new theory tackles one of the outstanding mysteries of the planet.
Akatsuki, the first extraterrestrial weather satellite, began its journey to Venus this morning after a successful launch from the Tanegashima Space Centre in Japan.
The spacecraft should help answer one of the great mysteries of the Solar System: why the winds on Venus blow faster than the planet itself rotates.
Venus rotates once every 243 days but it takes a mere 4 days for clouds in the Venusian atmosphere to go all the way round the planet at a whopping 200 metres per second. This phenomenon is known as superrotation.
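A quick back-of-the-envelope check, using only the figures quoted above, shows just how extreme this mismatch is:

```python
# Superrotation factor: how many times faster the cloud deck circles
# Venus than the solid planet rotates. Numbers taken from the article.
planet_rotation_days = 243   # one Venusian rotation period
cloud_circuit_days = 4       # time for clouds to circle the planet

superrotation_factor = planet_rotation_days / cloud_circuit_days
print(f"Clouds circle Venus about {superrotation_factor:.0f}x faster than it rotates")
```

The atmosphere laps the planet roughly sixty times per Venusian day, which is why superrotation demands a sustained energy source.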
Astrophysicists have long speculated that the difference in temperature between the day and night sides of Venus, at 300 K and 100 K respectively, is what drives these winds. But there is a problem with this calculation.
The puzzle is that the Venusian atmosphere has a certain viscosity and so, by itself, ought to dissipate energy at a rate of 10^9 W and slow down. Something else must be injecting energy into the system at this rate. How does this happen?
Today, Héctor Javier Durand-Manterola and pals from the Universidad Nacional Autónoma de México say they have solved the puzzle. They point out that in addition to the ordinary atmospheric winds, there is another much faster flow higher above the planet. These are ionic winds in the ionosphere between 150 and 800 km above the surface and were discovered by the Pioneer Venus Orbiter in the early 80s.
Known as the transterminator flow, these winds travel at supersonic speeds of several kilometres per second, probably driven by the planet’s interaction with the solar wind.
The question that Durand-Manterola and co address is what happens when the supersonic winds in the ionosphere interact with the slower winds in the atmosphere. Their answer is that the interaction generates turbulence in the atmosphere, and that dissipation of this turbulence creates sound waves that inject a significant amount of energy into the atmosphere.
How much? Durand-Manterola and pals calculate that the process injects energy at a rate of 10^10 W, more than enough to account for the amount lost due to viscosity. In fact, one prediction they make is that the sound waves created by the energy injection process have an intensity of 84 dB. That's a significant roar that ought to be measurable in the future.
To back up the idea, the team have performed a simple experiment with water to show how the energy transfer occurs, albeit in rather different conditions.
That’s an interesting idea but one that will need more observations of Venus itself before it can be claimed as a home run. The fact that this process could replace the dissipated energy doesn’t mean that it does.
As it happens, Akatsuki might be able to help. It will arrive at Venus in December and should start sending data back soon after that. Durand-Manterola and others will be watching.
Ref: arxiv.org/abs/1005.3488: Superrotation on Venus: Driven By Waves Generated By Dissipation of the Transterminator Flow
- Open Access
Validity on radar observation of middle- and upper-atmosphere dynamics
© The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences; TERRAPUB. 2009
Received: 27 July 2007
Accepted: 12 December 2008
Published: 14 May 2009
Doppler radar observations have made a large contribution towards improving our understanding of middle- and upper-atmosphere dynamics. The radar echo is mainly due to radiowave scattering by atmospheric turbulence, which occurs almost universally throughout the atmosphere as a result of gravity wave (GW) breaking. Strictly speaking, the radar pulse is back-scattered by radio refractive index (RRI) perturbations caused by turbulence whose size along the radar beam is half of the radar wavelength. Understanding the validity of MST (mesosphere, stratosphere and troposphere) radar observations therefore requires precise knowledge of how RRI perturbations are formed and behave. Two basic properties matter here: the RRI of the atmosphere has a static vertical gradient, and the turbulence velocity is divergence-free. The RRI depends on three atmospheric properties—humidity, air density and electron density. Each of these elements has a static vertical gradient along which turbulence transports it, thereby perturbing its density distribution; that is to say, turbulence produces the RRI irregularities that scatter the radar pulse. Since the turbulence motion and winds are divergence-free, the RRI perturbation co-moves with the total air flow, i.e. turbulence motion plus winds. This holds even in the mesosphere, where the perturbation is controlled electromagnetically. Co-movement of turbulence with local winds has been demonstrated by comparing observations from radars and radiosondes. In addition to tracking turbulence as a wind tracer, MST-radar observations provide important data for the study of atmospheric turbulence dynamics.
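The half-wavelength selection described above is the Bragg condition for monostatic backscatter: structures of size λ/2 along the beam dominate the returned signal. A minimal sketch of the arithmetic follows; the 50 MHz operating frequency is an assumed illustrative value, typical of VHF MST radars, not one taken from this paper:

```python
# Bragg scale for monostatic radar backscatter: turbulence structures of
# size lambda/2 along the beam dominate the returned echo.
C = 299_792_458.0  # speed of light in vacuum, m/s

def bragg_scale(freq_hz: float) -> float:
    """Return half of the radar wavelength, in metres."""
    wavelength = C / freq_hz
    return wavelength / 2.0

# Illustrative 50 MHz VHF frequency (an assumption, not from the paper)
print(f"Bragg scale at 50 MHz: {bragg_scale(50e6):.1f} m")
```

For a 50 MHz radar the wavelength is about 6 m, so the echo is dominated by roughly 3 m turbulent structures along the beam.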
In physics, a string is a physical phenomenon that appears in string theory and related subjects. Unlike elementary particles, which are zero-dimensional or point-like by definition, strings are one-dimensional extended objects. A major reason for interest in string theories is that theories in which the fundamental objects are strings rather than point particles automatically have many properties that some physicists expect to hold in a fundamental theory of physics. Most notably, a theory of strings that evolve and interact according to the rules of quantum mechanics will automatically describe quantum gravity.
In string theory, the strings may be open (forming a segment with two endpoints) or closed (forming a loop like a circle) and may have other special properties. Prior to 1995, there were five known versions of string theory incorporating the idea of supersymmetry, which differed in the type of strings and in other aspects. Today these different string theories are thought to arise as different limiting cases of a single theory called M-theory.
In string theories of particle physics, the strings are very tiny objects; much smaller than can be observed in today's particle accelerators. The characteristic length scale of strings is typically on the order of the Planck length, about 10−35 meter, the scale at which the effects of quantum gravity are believed to become significant. Therefore on much larger length scales, such as the scales visible in physics laboratories, such objects would appear to be zero-dimensional point particles. Strings are able to vibrate as harmonic oscillators, and different vibrational states of the same string would appear to be a different type of particle. In string theories, strings vibrating at different frequencies constitute the multiple fundamental particles found in the current Standard Model of particle physics. Strings are also sometimes studied in nuclear physics where they are used to model flux tubes.
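As a quick numerical check (not part of the original text), the Planck length quoted above follows from the fundamental constants via l_P = sqrt(ħG/c³):

```python
import math

# Planck length from fundamental constants: l_P = sqrt(hbar * G / c**3)
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
G = 6.674_30e-11           # Newtonian gravitational constant, m^3 kg^-1 s^-2
C = 2.997_924_58e8         # speed of light in vacuum, m/s

planck_length = math.sqrt(HBAR * G / C**3)
print(f"Planck length ~ {planck_length:.2e} m")  # on the order of 1e-35 m
```

The result, about 1.6 × 10⁻³⁵ m, matches the order of magnitude cited in the text.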
As it propagates through spacetime, a string sweeps out a two-dimensional surface called its worldsheet. This is analogous to the one-dimensional worldline traced out by a point particle. The physics of a string is described by means of a two-dimensional conformal field theory associated with the worldsheet. The formalism of two-dimensional conformal field theory also has many applications outside of string theory, for example in condensed matter physics and parts of pure mathematics.
Types of strings
Closed and open strings
Strings can be either open or closed. A closed string is a string that has no end-points, and therefore is topologically equivalent to a circle. An open string, on the other hand, has two end-points and is topologically equivalent to a line interval. Not all string theories contain open strings, but every theory must contain closed strings, as interactions between open strings can always result in closed strings.
The oldest superstring theory containing open strings was type I string theory. However, the developments in string theory in the 1990s have shown that the open strings should always be thought of as ending on a new type of objects called D-branes, and the spectrum of possibilities for open strings has increased greatly.
Open and closed strings are generally associated with characteristic vibrational modes. One of the vibration modes of a closed string can be identified as the graviton. In certain string theories the lowest-energy vibration of an open string is a tachyon and can undergo tachyon condensation. Other vibrational modes of open strings exhibit the properties of photons and gluons.
Strings can also possess an orientation, which can be thought of as an internal "arrow" which distinguishes the string from one with the opposite orientation. By contrast, an unoriented string is one with no such arrow on it. | <urn:uuid:def77e66-0437-4700-9685-97c1e25ceef2> | 3.890625 | 740 | Knowledge Article | Science & Tech. | 35.373225 | 95,496,721 |
There are many different types of science in today's society. A few of these sciences include biology, chemistry, and physics. There are subjects within these subjects, also. One of these subjects is known as quantum physics. “Quantum physics is the theory that underlies nearly all our current understanding of the physical universe” (Rae xi). Without quantum physics, which will be referred to as quantum mechanics, and a few important quantum physicists, such as Max Planck, Niels Bohr, Erwin Schrödinger, Werner Heisenberg, and Albert Einstein, especially, many, if not all, things in the physical universe would not be understood.
Max Planck was one of the founders of quantum mechanics. He was born on April 23, 1858 in Kiel, Germany, and developed his interests in mathematics and physics when he was nine years old. Planck enrolled at Munich University in 1874, but three years later he transferred to the University of Berlin to study physics, and he was awarded his doctoral degree in 1879. He began as a lecturer, was appointed associate professor at the University of Kiel in 1885, became a professor at the University of Berlin in 1889, and was promoted to full professor in 1892. He was recognized as a theoretical physicist before theoretical physics was fully recognized as its own discipline (Barron para. 2-4).
Without Max Planck, quantum mechanics might never have been created. In 1900, he presented the law which brought about quantum mechanics (Charap 3). There was a major error in what physicists believed about the structure of the atom, which Planck demonstrated when he showed that as the electron moves around the nucleus, it accelerates. Because of this accele...
... middle of paper ...
...e Great Debate About the Nature of Reality. New York: W. W. Norton & Company, Inc., 2008. Print.
Landshoff, Peter, Metherell, Allen, and Gareth Rees. Essential Quantum Physics. New York: Cambridge UP, 1997. Print.
Laughlin, Robert B. A Different Universe: Reinventing Physics from the Bottom Down. New York: Basic Books, 2005. Print.
O'Connor, J.J. and Robertson, E.F. “Erwin Rudolf Josef Alexander Schrödinger.” The MacTutor History of Mathematics archive, 2003. Web. 27 April 2013.
“Planck's Constant.” Quantum Physics, n.d. Web. 17 April 2013.
Qualitative Reasoning Group of Northwestern University. “Propulsion.” What Is An Atom?, n.d. Web. 24 April 2013.
Rae, Alastair. Quantum Physics Illusion or Reality? New York: Cambridge UP, 2004. Print.
Rigden, John. Einstein 1905. Cambridge: Harvard UP, 2005. Print
- Threads, Quanta, and Technostuff The Theory of Loop Quantum Gravity is the lone, valid competition for the Superstring Theory. This theory quantifies Einstein’s Theory of Relativity by breaking down spacetime into sections called quanta at the Plank Scale. The Plank Scale, named after Max Plank, is the absolute most minuscule, microscopic measurement in the fabric of spacetime that physicists can only dream to observe. At this level, the very large and very small collide in what is known as quantum gravity.... [tags: Physics, Creation]
841 words (2.4 pages)
- Quantum mechanics is the study of the behavior of energy and matter at the atomic, molecular and nuclear levels and sometimes even microscopic levels. The first initial information on quantum mechanics was first discovered in the early 20th century by a pioneering scientist Max Planck, because of this early knowledge of quantum energy it led to the first invention of the transistor. Scientist Max Planck discovered an equation that explained the results of these tests. The equation is as follows, E=Nhf, with E=energy, N=integer, h=constant, f=frequency.... [tags: Quantum Mechanics]
1077 words (3.1 pages)
- Of the many counter intuitive quirks of quantum mechanics, the strangest quirk is perhaps the notion of quantum entanglement. Very roughly, quantum entanglement a phenomenon where the state of a large system cannot be described by the state of the smaller systems that compose it. On the standard metaphysical interpretation of quantum entanglement, this is taken to show that there exists emergent properties1. If this standard interpretation is correct, it seems that physics paints a far different picture of the world then commonsense leads one to believe.... [tags: Quantum Mechanics, science, Marc Lange, Introducti]
2570 words (7.3 pages)
- Stephen Hawking and the World of Physics Dr Stephen Hawking was born January 8, 1942, the 300th anniversary of the day Galileo died. Although today he is totally paralyzed from ALS, he was born healthy. His work on the physics of black holes and the beginning of the universe revolutionized modern physics and our understanding of the universe. His biggest discoveries were Hawking radiation, mini black holes, and the no boundary theory. He started out as an averagely bright student at St. Albans private school.... [tags: biography, quantum physics, quantum mechanics]
2213 words (6.3 pages)
- Through a personal intrigue in 2002 physicist Andre K. Geim and a new Phd student were working on a late night project to discover how thin a sample of graphite they could extract from rocks. First using tape to clean dust and small debris from the rocks, they would polish the rocks down and measure them. Noticing that small flakes of graphite on the tape were actually thinner than anything they had previously measured, they shifted their focus to the remnants of graphite on the tape. A young physicist by the name of Kostya Novoselov then stepped into study the thin layers of graphite on the tape.... [tags: carbon, graphite, electronics, physics, 3D]
1491 words (4.3 pages)
- introduction A quantum computer is one which exploits quantum-mechanical interactions in order to function; this behavior, found in nature, possesses incredible potential to manipulate data in ways unattainable by machines today. The harnessing and organization of this power, however, poses no small difficulty to those who quest after it. Subsequently, the concept of quantum computing, birthed in the early 80's by physicist Richard Feynman, has existed largely in the realm of theory. Miraculous algorithms which potentially would take a billionth of the time required for classical computers to perform certain mathematical feats, and are implementable only on quantum computers, as such ha... [tags: quantum physics computer]
1351 words (3.9 pages)
- Quantum Teleportation is one of the newest areas of study in the field of quantum physics. It is the stuff of science fiction, which is fast becoming reality, where solid objects can be moved vast distances instantly. It has been the subject of books and movies for years but it wasn’t until recently that physicists at IBM’s laboratories made it a reality. The ideas that formed the basis of these experiments came about from previous research by scientists such as Albert Einstein and Heisenberg. This essay will explore the research done on this subject, the theories behind it, and the possible applications.... [tags: quantum physics teleport]
782 words (2.2 pages)
- Magical Realism and Quantum Physics The term Magical Realism is said to have started with the German art critic Franz Roh, who used the trem to describe the return of art to Realism from Expressionism. The term Magical Realism has also been used to categorize some the novels and short stories of authors such as Gabriel Garcia Marquez, Gunter Grass, and John Fowls. These writers use techniques that combine the real and unreal in ways that make them believable and acceptable by both the reader and characters in the stories.... [tags: Magical Realism Literature]
437 words (1.2 pages)
- Missing figures With today's technology we are able to squeeze millions of micron wide logic gates and wires onto the surface of silicon chips. It is only a matter of time until we come to a point at which the gates themselves will be made up of a mere handful of atoms. At this scale, matter obeys the rules of quantum mechanics. If computers are to become smaller and more powerful in the future, quantum technology must replace or reinforce what we have today. Quantum computers aren't limited by the binary nature of the classical physical world.... [tags: quantum physics computer]
924 words (2.6 pages)
- Albert Einstein One of the greatest heroes of American(and international) science and culture in the past century has been German physicist Albert Einstein. Born in 1879, Einstein used his early years to educate himself and began to think up his own methods for solving his newly found inquiries into science and higher-level mathematics. In a short time during the beginning of the twentieth century, Einstein pulled together his research and incredible intellect for unprecedented gains in science and theory used throughout the world.... [tags: physics science mathematics]
1868 words (5.3 pages) | <urn:uuid:f9be3ff0-5ca0-4c41-a379-9cfb0ed4deab> | 3.15625 | 1,961 | Content Listing | Science & Tech. | 55.352349 | 95,496,758 |
+44 1803 865913
Edited By: Peter Grant
334 pages, 52 b/w photos, 25 illus
The study of patterns and processes of evolution on islands has played an important role in the development of an understanding of how and why evolution occurs. Islands are small, discrete pieces of the environment: frequently isolated from continental processes of gene flow, often inhabited by unique species, and notable for the rapidity of their diversifying evolution. It is easy to see why they have been referred to as 'natural experiments.' This book surveys our current knowledge and understanding of island evolution in chapters written by experts on various aspects of microevolution, speciation, and adaptive radiation.
1. Patterns on islands and microevolution
2. The reproductive biology and genetics of island plants
3. Evolution of small mammals
4. The maintenance of genetic polymorphism in small island populations: large mammals in the Hebrides
5. Molecular and morphological evolution within small islands
6. Speciation
7. Natural selection and random genetic drift as causes of evolution on islands
8. Island hopping in Drosophila: genetic patterns and speciation mechanisms
9. Speciation and hybridization of birds on islands
10. Ecological speciation in postglacial fishes
11. How 'molecular leakage' can mislead us about island speciation
12. Radiations, communities and biogeography
13. Ecological and evolutionary determinants of the species-area relation in Caribbean anoline lizards
14. Lake level fluctuations and speciation in rock dwelling cichlid fish in Lake Tanganyika, East Africa
15. Islands in Amazonia
16. Biotic drift or the shifting balance - did forest islands drive the diversity of warningly coloured butterflies?
17. Adaptive plant evolution on islands: classical patterns, molecular data, new insights
18. Epilogue and questions
BOSTON – Even those who follow science may be surprised by how quickly international collaboration in scientific studies is growing.
The number of multiple-author scientific papers with collaborators from more than one country more than doubled from 1990 to 2015, from 10 to 25 percent, one study found. And 58 more countries participated in international research in 2015 than did so in 1990.
“Those are astonishing numbers,” said Caroline Wagner, associate professor in the John Glenn College of Public Affairs at Ohio State University, who helped conduct these studies.
Photo courtesy of Ohio State University.
“In the 20th century, we had national systems for conducting research. In this century, we increasingly have a global system.”
Wagner presented her research Feb. 17 in Boston at the annual meeting of the American Association for the Advancement of Science.
Even though Wagner has studied international collaboration in science for years, the way it has grown so quickly and widely has surprised even her.
One unexpected finding was that international collaboration has grown in all fields she has studied. One would expect more cooperation in fields like physics, where expensive equipment (think supercolliders) encourages support from many countries. But in mathematics?
“You would think that researchers in math wouldn’t have a need to collaborate internationally—but I found they do work together, and at an increasing rate,” Wagner said.
“The methods of doing research don’t determine patterns of collaboration. No matter how scientists do their work, they are collaborating more across borders.”
In a study published online last month in the journal Scientometrics, Wagner and two co-authors (who are both from The Netherlands) examined the growth in international collaboration in six fields: astrophysics, mathematical logic, polymer science, seismology, soil science, and virology.
Their findings showed that all six specialties added between 18 and 60 new nations to the list of collaborating partners between 1990 and 2013. In two of those fields, the number of participating nations doubled or more.
The researchers expected astrophysics would grow the most in collaboration, given the need to use expensive equipment. But it was soil science that grew the most, with a 550 percent increase in the links between research groups in different countries in that time period.
“We certainly didn’t expect to see soil science have the fastest growth,” she said.
“But we saw strong increases in all areas. It appears that all the fields of science that we studied are converging toward similar levels of international activity.”
The study found that virology had the highest rate of collaboration, with the most countries involved. “They aren’t working together because they need to share expensive equipment. They’re collaborating because issues like HIV/AIDS, Ebola, and Zika are all international problems and they need to share information across borders to make progress.”
Wagner has started a new line of research that attempts to determine how much nations benefit from their scientific work with other countries. For this work, she is looking at all the scientific articles that a nation’s scientists published with international collaborators in 2013. She is looking at each article’s “impact factor”—a score that measures how much other scientists mentioned that study in their own work.
“How much recognition a study gets from other scientists is a way to measure its importance,” Wagner said.
She compared each nation’s combined impact factor for its international collaborations to how much money the same country spent on scientific research. This is a way to determine how much benefit in terms of impact each nation gets for the money it spends.
The United States has the highest overall spending and shows proportional returns. However, smaller, scientifically advanced nations are far outperforming the United States in the relationship between spending and impact. Switzerland, the Netherlands, and Finland outperform other countries in high-quality science compared to their investment. China is significantly underperforming its investment.
Wagner said this isn’t the only way to measure how a country is benefiting from international science collaboration. But it can be one way to determine how efficiently a country is using its science dollars.
In any case, Wagner said her findings show that international science collaboration is becoming the way research gets done in nearly all scientific fields.
“Science is a global enterprise now,” Wagner said.
Polarization-Sensitive Optomotor Reaction in Invertebrates
If a pattern of vertical black and white stripes is rotated around an animal, it usually displays a turning reaction. The tendency for the animal to turn in the direction of motion of a pattern is called “optomotor response”, which demonstrates that the animal is able to detect the movement of the optical environment on the basis of brightness cues. This behaviour serves to stabilize the animal’s orientation with respect to the environment, and helps it to maintain a straight course during locomotion. The optomotor reaction of insects to black-and-white (B&W) patterns has been intensively studied (e.g. Hassenstein and Reichardt 1956; Varjú 1959). Studies of the dependence of motion perception on the wavelength of light demonstrated that the visual subsystems performing directionally-selective movement detection are usually colour-blind (e.g. Kaiser 1974; Lehrer et al. 1990).
Keywords: Polarization Pattern, Polarization Sensitivity, Open Loop Gain, Optomotor Response, Green Receptor
Describe the importance of rare earth elements in science and technology. Assess the most common uses of these elements, particularly as encountered in your daily life, as well as projections for future demand of these mineral resources.
Hi and thank you for using Brainmass. The solution below should get you started. In this particular task you are being asked to talk about rare earth elements. I suggest using this simple outline:
1. About rare earth elements & use in technology - 100 words
2. Samples in daily tech - 150 words
3. Future Demand - 100 words
The outline should yield 350 words. You can use the listed resources to further explore the topic. All the best with your studies.
AE 105878/Xenia Jones
Rare Earth Elements
Contrary to what the name suggests, rare earth elements (REE) are actually plentiful on earth (for example, cerium is the 25th most abundant element on earth). REEs are (Periodni, n.d.), "a collection of seventeen chemical elements in the periodic table, specifically the fifteen lanthanides plus scandium and yttrium. Scandium and yttrium are considered rare earth elements since they tend to occur in the same ore deposits as the lanthanides and exhibit similar chemical properties. They are divided into light REE (Scandium, Yttrium, Lanthanum, Cerium, Praseodymium, Neodymium, Promethium, Samarium, Europium and ...
The solution provides information, assistance and advice in tackling the task (see above) on the topic of rare earth elements. Resources are listed for further exploration of the topic.
Why aren't lisp keywords protected?
(define a 3)
(define define +)   ; makes define useless
(define a 1)        ; outputs 4, instead of assigning 1 to a
Is this flexibility so important? Or even worse:
(define + -)
(+ 1 1)   ; outputs 0
Where can this be useful, and can this behavior be blocked? AFAIK, all programming languages have reserved keywords.
Why aren't lisp keywords protected?
Because they aren't keywords, they are just library functions like any other.
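A quick illustration (a Scheme sketch, assuming a standard R7RS-style environment): since + is an ordinary binding rather than syntax, it can be stored, passed around, and shadowed locally like any other value.

```scheme
;; + is just a variable bound to a procedure; we can store it...
(define plus +)
(plus 2 3)               ; => 5

;; ...pass it to higher-order functions like any other value...
(map + '(1 2) '(10 20))  ; => (11 22)

;; ...or shadow it locally without touching the global binding.
(let ((+ *))
  (+ 2 3))               ; => 6
(+ 2 3)                  ; => 5 again outside the let
```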
Where can this be useful,
It means that you, as a user, have the same power as the language designer. Making a change to a language can take several decades and is a massive undertaking. Writing a new library function only takes seconds and can be done by anyone.
AFAIK, all programming languages have reserved keywords.
No, not all. Lisp doesn't, for example. In fact, Lisp doesn't really have syntax at all.
Io has no keywords, and neither do Ioke or Seph. Poison and Potion also don't have keywords.
Smalltalk also has only a very small number of keywords: nil, true, false, self, super, and thisContext. Newspeak has almost the same list (minus super, plus outer). In both cases, nil, true, and false could just as easily be method calls instead, leaving only 3 keywords. (Actually, Gilad Bracha, the designer of Newspeak, said that all of them could be made method calls, but it wouldn't be worth it.)
Symbols are not protected because they don't need to be. Some languages have reserved names because their syntax cannot tolerate having a class named "class", even though technically you could avoid it. Sometimes it is done on purpose to avoid ambiguity in the programmer's mind, and truly, redefining - to mean + is the kind of thing that really would be disastrous in theory. However, this is part of having a dynamic environment, where this kind of flexibility is sometimes useful.
Those problems vanish when you use different namespaces, so that mypackage:+ is different from yourpackage:+ (unqualified uses of + can be inspected during development to know which symbol is actually used).
By the way, your example is in Scheme (you tagged it as lisp originally), but Common Lisp is a little bit different in that regard.
The 978 symbols in the standardized "COMMON-LISP" package shouldn't be redefined in a portable program (undefined behavior if you do this). An implementation like SBCL provides a way to "lock" packages, to prevent accidental changes. Of course, it gives you a way to unlock them.
This is not a problem in practice because you generally don't want to redefine the standard operators; instead, you provide your own operators, defined in your own package.
Packages being first class, you can even modify which symbols are imported in a given package, so that you can sometimes change the semantics of an existing program without touching its files, just by changing what an unqualified + means (and recompiling).
This is useful, for example, to perform some kind of instrumentation, like automatic differentiation. You replace a set of operators by your own because you want to compute information about your program, for example.
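For instance, here is a toy forward-mode automatic-differentiation sketch in Common Lisp (all names are illustrative, not a real library): by shadowing + and * with versions that operate on value/derivative pairs, unmodified arithmetic expressions compute their own derivatives.

```lisp
(defpackage :autodiff
  (:use :common-lisp)
  (:shadow #:+ #:*))
(in-package :autodiff)

;; A dual number carries a value and its derivative.
(defstruct (dual (:constructor dual (val der))) val der)

(defun lift (x)
  (if (dual-p x) x (dual x 0)))   ; constants have derivative 0

;; Shadowed operators implement the sum and product rules.
(defun + (&rest args)
  (let ((args (mapcar #'lift args)))
    (dual (apply #'cl:+ (mapcar #'dual-val args))
          (apply #'cl:+ (mapcar #'dual-der args)))))

(defun * (a b)
  (let ((a (lift a)) (b (lift b)))
    (dual (cl:* (dual-val a) (dual-val b))
          (cl:+ (cl:* (dual-val a) (dual-der b))
                (cl:* (dual-der a) (dual-val b))))))

;; f(x) = x*x + 3x, so f'(x) = 2x + 3 and f'(2) = 7.
(let ((x (dual 2 1)))               ; seed derivative dx/dx = 1
  (dual-der (+ (* x x) (* 3 x))))   ; => 7
```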
In Common Lisp, all symbols in the COMMON-LISP package are protected by the standard: it says that redefining the package or the symbols in it has undefined consequences. Which means anything can happen: nothing, the change is made, the program crashes, the program runs into an error, ..., or even the sun comes up in the west the next morning.
Real Common Lisp implementations will often warn the user:
CL-USER 1 > (defun defun () 3)

Error: Redefining function DEFUN visible from package COMMON-LISP.
  1 (continue) Redefine it anyway.
  2 (abort) Return to level 0.
  3 Restart top-level loop.
* (defun defun () 3)

debugger invoked on a SYMBOL-PACKAGE-LOCKED-ERROR:
  Lock on package COMMON-LISP violated when proclaiming DEFUN
  as a function while in package COMMON-LISP-USER.
See also:
  The SBCL Manual, Node "Package Locks"
  The ANSI Standard, Section 11.1.2.1.2
Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.

restarts (invokable by number or by possibly-abbreviated name):
  0: [CONTINUE      ] Ignore the package lock.
  1: [IGNORE-ALL    ] Ignore all package locks in the context of this operation.
  2: [UNLOCK-PACKAGE] Unlock the package.
  3: [ABORT         ] Exit debugger, returning to top level.
Since many Common Lisp implementations are largely programmed in themselves, there needs to be a mechanism to allow changes and then later prevent them.
Some parts of the language can't even be modified by redefining a function or macro. There is no way for the user to change, add, or remove special forms (like LET) or special syntax like argument lists and lambda expressions. The syntax for argument lists and lambda expressions is hardwired into the language. You can't change it; you would need to change the interpreter and/or the compiler.
28 Nov 2014
Mukash Kaldarov, Chief Technical Adviser, UNDP Kyrgyzstan
A boy looking at an eroded canal in Jalal-Abad province, Kyrgyzstan. Credit: Kairatbek Murzakimov/UNDP
It is fair to say that disasters, whether natural or technological, are not limited or restrained by borders. Floods, storms, environmental degradation and the ramifications of industrial or radiological waste affect multiple countries at once when they occur. National and local efforts to prepare for this, while necessary, are simply not sufficient or efficient.
The reality, however inconvenient at times, is that regional threats require an equally regional effort to prepare and respond. Preparing communities along a river or waterway for possible flooding should not stop simply because of a political boundary; efforts, therefore, must be made to integrate and coordinate actions for optimum results.
This understanding is quickly taking root in the Central Asian region; between 1988 and 2007 at least 177 disasters affected the region, causing more than 36,000 deaths. In 2000 alone, at least 3 million people regionally were affected by droughts that caused serious economic losses.
Looking ahead, the threat of climate change means that weather-related disasters may only increase in severity and frequency. Equally threatening, though thankfully rare, are technological or industrial disasters stemming from aging but critical infrastructure, such as dams, irrigation networks and uranium mines. Reducing and managing these …
Marine biology is the study of ocean plants and animals and their ecological relationships. Marine organisms may be classified (according to their mode of life) as nektonic, planktonic, or benthic. Nektonic animals are those that swim and migrate freely, e.g., adult fishes, whales, and squid. Planktonic organisms, usually very small or microscopic, have little or no power of locomotion and merely drift or float in the water. Benthic organisms live on the sea bottom and include sessile forms (e.g., sponges, oysters, and corals), creeping organisms (e.g., crabs and snails), and burrowing animals (e.g., many clams and worms). Seafloor areas called hydrothermal vents, with giant tube worms and many other unusual life forms, have been intensively studied by marine biologists in recent years.
The distribution of marine organisms depends on the chemical and physical properties of seawater (temperature, salinity, and dissolved nutrients), on ocean currents (which carry oxygen to subsurface waters and disperse nutrients, wastes, spores, eggs, larvae, and plankton), and on penetration of light. Photosynthetic organisms (plants, algae, and cyanobacteria), the primary sources of food, exist only in the photic, or euphotic, zone (to a depth of about 300 ft/90 m), where light is sufficient for photosynthesis. Since only about 2% of the ocean floor lies in the photic zone, photosynthetic organisms in the benthos are far less abundant than photosynthetic plankton (phytoplankton), which is distributed near the surface oceanwide. Very abundant phytoplankton include the diatoms and dinoflagellates (see Dinoflagellata). Heterotrophic plankton (zooplankton) include such protozoans as the foraminiferans; they are found at all depths but are more numerous near the surface. Bacteria are abundant in upper waters and in bottom deposits.
The scientific study of marine biology dates from the early 19th cent. and now includes laboratory study of organisms for their usefulness to humans and the effects of human activity on marine environments. Important marine biological laboratories include those at Naples, Italy; at Plymouth and Millport in England; and at Woods Hole, Mass., La Jolla, Calif., and Coral Gables, Fla. Research has been furthered by unmanned and manned craft, such as the submersible Alvin.
See also oceanography.
- See R. Carson, The Sea Around Us (rev. ed. 1961)
- Exploring Our Living Planet (1983)
- Ocean Wildlife (1989)
- The Universe Below (1997)