Mars attacked? Scientist suggests probes from Earth might have accidentally liquidated microscopic Martians

The public fascination with the possibility of life on our planetary neighbor Mars began when early telescopes spotted features fancifully interpreted as canals. It has spawned countless stories, books and films featuring visitors from the Red Planet, most famously those highly unpleasant illegal aliens created by H.G. Wells in his novel War of the Worlds. Mars mania has recently been boosted by the ongoing three-year adventure of the seemingly indestructible pair of robotic rovers poking and photographing the Martian landscape, and by new orbiting probes that have documented the apparent, and recent, presence of surface water there. NASA's decision to dispatch a manned mission in the coming decades will likely keep Mars exploration in the headlines.

In a distinctly Earth-bites-Mars twist, a paper given at an astronomy conference this week by planetary scientist Dirk Schulze-Makuch theorizes that the first victims of contact between the two worlds might have been Martian microbes. NASA successfully landed two Viking spacecraft on Mars in 1976, equipped with instruments to analyze soil samples for indications of living matter. The Washington State University professor argues that the Viking craft were programmed to search for salt-water-based cells, while the strong probability is that life in the dry, cold Martian environment would have evolved using a mixture of water and hydrogen peroxide. That combination would not only remain liquid at much colder temperatures but could freeze and thaw without destroying cell membranes, and it can absorb water vapor from a very thin atmosphere. Unfortunately, the methods the Viking landers used to search for life would not only have missed such microbes but, by soaking samples of Martian dirt in water, would have drowned or cooked any life forms present.
If Schulze-Makuch's theory is on target, NASA will need to alter some of the life-detection experiments aboard the next planned Martian lander, Phoenix, scheduled for launch later this year. The discovery of extraterrestrial life would be one of the greatest scientific achievements in human history. If or when that momentous find occurs, let's hope our cosmic neighbors don't end up as road kill in a NASA probe's petri dish.
Breakthrough discovery offers new perspectives for research on the immune and nervous system / Publication in Nature Methods

When it comes to analyzing cell components or body fluids or developing new medications, there is no way around mass spectrometry, a highly sensitive method of measurement that has been used for many years for the analysis of chemical and biological materials. Scientists at the Institute of Immunology of the University Medical Center of Johannes Gutenberg University Mainz (JGU) have now significantly improved this analytical method, which is widely employed within their field. They have also developed a software program called ISOQuant for the integrated analysis of measurement data. Their optimized mass spectrometric workflow makes it possible to identify and quantify significantly more proteins than before. The development of this enhanced method of measurement and the specially designed software is described in an article recently published in the prestigious journal Nature Methods.

A proteome represents the entire set of proteins expressed by a cell. Through analysis of proteomes, it is thus possible to obtain a comprehensive picture of the proteins and peptides present in cells or body fluids. However, many of the traditional mass spectrometric methods used to date for proteomic analysis are relatively slow and do not always provide reproducible results. Dr. Stefan Tenzer of the Institute of Immunology and his colleagues have perfected a relatively new, data-independent technique that facilitates a very accurate and reproducible quantitative analysis. With its help, many more proteins can be identified than before. "Figuratively speaking, the equipment we use is as exact as a scale that can tell whether a two-euro coin is present in a VW Beetle or not," explains Tenzer.
Tenzer's work group focuses in particular on developing novel techniques for quantitative proteomic analysis with the aid of so-called ion mobility mass spectrometry. This technique makes it possible not only to measure the mass of a molecule but also to determine its cross section. This additional analytical dimension renders the technique optimally suited for the comprehensive investigation of highly complex samples. Tenzer and his colleagues have also managed to enhance the technique known as label-free quantification. This eliminates the need for samples to be labeled in the laboratory before being analyzed, an otherwise complex procedure. "We are now able to directly analyze patient samples and specific immune cells without prior cost-intensive preparation," says Tenzer. The Mainz-based scientists developed their ISOQuant software program specifically for this purpose. It provides for standardized analysis of complex data material and generally simplifies the technique of quantitative mass spectrometric analysis. These innovations were developed under the aegis of the technology platforms "Quantitative Proteomic Analysis" of the JGU Research Center Immunology (FZI) and "ProTIC" of the Research Unit Translational Neurosciences (FTN) at the Mainz University Medical Center. The results have now been published in Nature Methods, one of the most respected international journals. This was already the third article published in the Nature journal group in 2013 by Dr. Stefan Tenzer and his colleagues. "The years of work within the technology platform have paid off in terms of a quantum leap forward with regard to the improvement of the technique of proteomic analysis mass spectrometry," stated Professor Hansjörg Schild, Director of the Institute of Immunology and Coordinator of the Research Center Immunology (FZI) at the Mainz University Medical Center. "The results obtained by Dr. Stefan Tenzer and his colleagues reflect the quality of achievement of this team. 
I think we can look forward to new and exciting collaborations in future," said Schild. "Mass spectrometry is a technique that has now become indispensable within the field of the neurosciences. In this area, we specifically need highly sensitive analytical techniques, and Dr. Tenzer has opened up new perspectives in this regard," emphasized Professor Robert Nitsch, Coordinator of the Research Unit Translational Neurosciences and of the Collaborative Research Center 1080 on "Molecular and Cellular Mechanisms of Neuronal Homeostasis" at the Mainz University Medical Center. "The collaboration between the Research Center Immunology and the Research Unit Translational Neurosciences in the field of mass spectrometry represents an excellent opportunity for us to gain new insights into the way the brain functions," said Nitsch.

Publication: Ute Distler, Jörg Kuharev, Pedro Navarro, Yishai Levin, Hansjörg Schild & Stefan Tenzer, "Drift time-specific collision energies enable deep-coverage data-independent acquisition proteomics", Nature Methods, 15 December 2013.

Contact:
Dr. Stefan Tenzer, Head of the Core Facility for Mass Spectrometry, Institute of Immunology, Mainz University Medical Center, D-55131 Mainz, phone +49 6131 17-6199, fax +49 6131 17-6202
Professor Dr. Hansjörg Schild, Director of the Institute of Immunology and Coordinator of the Research Center Immunology, Mainz University Medical Center, D-55131 Mainz, phone +49 6131 39-32401, fax +49 6131 39-35688

Petra Giegerich | idw - Informationsdienst Wissenschaft
The humble and over-worked string literal is an object complete with methods and properties. For example, taking the length of a four-character quoted string displays 4. This almost seems against the laws of programming when you first see it, but yes, raw data in the form of a quoted string is an example of a string object and has the methods and properties of a string object.

With this in mind, consider a simple utility function that tests whether a string is equal to a target string literal. Notice that this is an exact test: s has to be equal to "Yes" and not merely convertible to "Yes" from some other data type. In this case, choosing between ==, which is a looser test for equality, and ===, which is strict equality, doesn't seem to be a big issue. After all, a string either is a string or it isn't, and it either equals "Yes" or it doesn't, so you could use either operator and expect the same results. But it is generally held (for example, by JSLint) that you should avoid == and !=, so the strict-equality version was the one used.

The utility function was used as part of a much larger program, and eventually a bug was tracked down to it sometimes returning false when the string passed to it was indeed "yes". The function was carefully checked and two debugging commands were added around the line

return s === "yes";

so that the value of s could be seen along with the result. During one run the parameter passed in stored "yes", and yet the result of the comparison s === "yes" was false. This cannot be! What is going on? Turn to the next page when you are ready to find out.
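The elided examples can be reconstructed as a minimal sketch. The function name isYes is assumed here (the article never names its utility function), and the String-object call in the last line is one assumed way a variable can display as "yes" and still fail the strict test:

```javascript
// A quoted string behaves like a String object,
// so even raw data has methods and properties:
console.log("ABCD".length);   // displays 4

// The article's utility function (the name isYes is assumed):
function isYes(s) {
    return s === "yes";   // strict equality: no type conversion
}

console.log(isYes("yes"));                 // true
console.log("yes" == new String("yes"));   // true  - loose equality converts
console.log(isYes(new String("yes")));     // false - a String object is not
                                           //         the same type as a primitive
```

Run under Node.js, the sketch shows the subtle cost of preferring ===: strict equality distinguishes a String object from a string primitive, while loose equality does not.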
Extra Dimensions Showing Hints Of Scientific Revolution
Posted on 02/19/2003 9:18:15 AM PST by RightWhale

Chicago - Feb 19, 2003 - The concept of extra dimensions, dismissed as nonsense even by one of its earliest proponents nearly nine decades ago, may soon help solve seemingly unrelated problems in particle physics, cosmology and gravitational physics, according to a panel of experts who spoke Feb. 15 at the American Association for the Advancement of Science annual meeting in Denver. "It doesn't happen often that you get a confluence of ideas and experiments that come together and it's something that obviously would change your whole way of looking at the universe," said one of the panelists, Joseph Lykken, Professor in Physics at the University of Chicago and a scientist at Fermi National Accelerator Laboratory. Even though scientists lack direct evidence of extra dimensions, "we have a number of hints from experiments and theoretical ideas that make us think they're probably out there. That's why we're so excited about looking for them," Lykken said. On the theoretical side, string theory, developed over the past two decades, requires that space-time has extra dimensions if it is to include gravity. "It's just built into the way that string theory works," Lykken said. Experiments, meanwhile, have produced the standard model of physics to describe the most elementary particles and the forces that hold them together. Physicists have come to suspect that something is missing from the standard model. "There seems to be more particles and forces than we really need, and they operate in more complicated ways than they need to," Lykken said. But extra dimensions may ultimately help explain these data complications. "That standard model itself may be our biggest hint that there's this world of extra dimensions," he said. 
New experiments at Fermi National Accelerator Laboratory are producing data that just don't fit the standard model, said Maria Spiropulu an Enrico Fermi Fellow at the University of Chicago. "We have things in the data that leave our mouths hanging," she said. Whether extra dimensions or some other phenomenon emerges to clarify these murky data, scientists seem certain that they stand only a few years away from a scientific revolution. "What's going on right now in particle physics, gravitational physics and cosmology is like when quantum mechanics started coming together," Spiropulu said. Quantum mechanics, developed in the 1920s, describes the physics of objects at the atomic level and dominates the concepts of modern physics. Spiropulu, who organized the AAAS session on the physics of extra dimensions, spoke at the session along with scientists from Fermi National Accelerator Laboratory, Harvard University and the universities of Chicago and Washington. Another panelist, Sean Carroll, Assistant Professor in Physics at the University of Chicago, said that extra dimensions could help solve two mysteries in cosmology: what were the initial conditions of the universe and what is the mysterious dark energy that is accelerating the expansion of the universe. The idea of an inflationary universe, one that expanded rapidly just moments after the big bang, has gained wide acceptance among cosmologists to explain how conditions in the early universe could be unexpectedly different from what they later came to be. But inflationary cosmology tells scientists nothing about the initial conditions of the universe. This is where extra dimensions come in, even though they might be microscopically small. "If you had extra dimensions, then when the universe is very small at early times, the extra dimensions weren't small compared to the rest of the universe," Carroll said. "They must have played a big role. What was that role? 
Could the role have something to do with how we perceive the initial conditions?" Extra dimensions may also explain dark energy. Physicists conjecture that dark energy is governed partly by occurrences in the familiar four dimensions and partly by occurrences in the extra dimensions, Carroll explained. "There is the tantalizing possibility that a complete change of perspective makes all of the problems collapse at once," he said.

Bumping for discussion... ..hey, anybody seen the glue?

Beyond this door lies another dimension. It's a dimension of sight. It's a dimension of sound. [Crash!] It's a dimension of mind. Alright, I don't know what the heck dimension it is!
The new research finds that agriculture's environmental impacts vary depending on location, even for producers of the same crops. This creates opportunities for targeted mitigation, making an immense problem more manageable. The new council hopes to champion science globally and bring findings outside the laboratory to help society fight global problems, including climate change, poverty, and the rise of anti-science sentiment. Emma Marsden – Countries in Asia Pacific are now starting to integrate environmental sustainability into their development goals, but this needs to scale up. Here are three things they need to do to make this happen, writes ADB's Emma Marsden. Carlo Ratti – Around the world, architects and urban planners are working to bring nature back into our cities. Carlo Ratti traces the rocky relationship between cities and the natural world, and the best attempts to reconcile the two. For urban planners, data and technology are valuable tools in the drive to improve administration and services. But while these innovations are making urban environments more livable, they come with a hidden cost: the potential to deepen inequality among digitally marginalized groups. Ellen Gray, The Third Pole – Using data from twin satellites over a 14-year period, from 2002 to 2016, NASA comes up with stunning images of how wet areas are getting wetter and groundwater is disappearing in some areas, and why. Genevieve Belmaker, Mongabay.com – Experts say that if fragmented sections of forest are linked in the Eastern Arc Mountains in Tanzania and the Atlantic Forest in Brazil, it could significantly increase the odds in favor of ensuring species survival – and it's a financially safe bet. The latest addition to our publications series "Informal Waste Management", a report that we have constructed after interviewing dozens of stakeholders, visiting waste management sites, and following the paths of ... 
The Centre for Sustainable Environmental Sanitation (CSES) at the University of Science and Technology Beijing (USTB) was appointed by UPM Umwelt-Projekt-Management GmbH (UPM) to evaluate critically the actual and potential ...
Magnetic bacteria are found in a variety of aquatic environments, such as ponds and lakes. The strain of bacterium the research team studied, Magnetospirillum magneticum, was originally found in a pond in Tokyo, Japan. Magnetic bacteria typically live far below the surface, where oxygen is scarce. (They do not grow well where oxygen is plentiful.) What makes them fascinating is that they naturally grow strings of microscopic magnetic particles called magnetosomes. When placed in a magnetic field, the bacteria align like tiny swimming compass needles, a phenomenon called magnetotaxis. The research team is using genetic engineering to create a strain of the bacteria that becomes magnetic only when exposed to specific toxic chemicals, with the goal of using them as living chemical sensors. As a first step, they have created a strain that cannot make magnetosomes and therefore is not magnetic. Dr. Lloyd Whitman from NRL, who led the research team, explains that "during the course of our research, we realized that nobody had ever really demonstrated that being magnetic actually helps the bacteria." "Genetic modification allowed us to directly observe differences in behavior between magnetic and non-magnetic versions of the same bacterium," adds Professor Bruce Applegate. Professor Applegate directed the genetic engineering at Purdue, with the assistance of Professor Lazlo Csonka, Dr. Lynda Perry, and Ms. Kathleen O'Connor. In the past, scientists had suspected that being magnetic helps a bacterium find the oxygen concentrations it prefers more quickly by swimming only up and down in the earth's magnetic field rather than randomly in all directions. An analogy would be a blind-folded mountain climber searching for a specific altitude. If she only climbs straight up or down the mountain, she should find it more quickly. "But by observing how the bacteria moved away from oxygen that we added to their environment," reports Dr. 
McRae Smith, a member of the team while a postdoctoral researcher at NRL, "we directly measured how much magnetotaxis helps." NRL researcher Dr. Paul Sheehan adds, "by mathematically modeling their motion, we determined that being magnetic actually makes the bacteria much more sensitive to oxygen when in a magnetic field, so that they swim away from oxygen at much lower concentrations." It is as if the climber gets tired and turns around sooner when heading up the mountain, keeping her from heading too far in the wrong direction. And the stronger the magnetic field, the bigger the effect. The scientists do not yet know how the magnetic field has this effect on the bacteria, and are currently conducting additional experiments to help answer that question. What was particularly interesting to the scientists was that the effect of being magnetic was too small for them to measure in the earth's natural, but weak, magnetic field. "Therefore," concludes Dr. Whitman, "the advantage to these bacteria in nature must be very small. But over millions of years, this very subtle advantage has somehow produced bacterial magnetism."

NRL Public Affairs | EurekAlert!
Researchers from the Technological Institute for Superhard and Novel Carbon Materials in Troitsk, MIPT, MISiS, and MSU have developed a new method for the synthesis of an ultrahard material that exceeds diamond in hardness. An article recently published in the journal Carbon describes in detail a method that allows for the synthesis of ultrahard fullerite, a polymer composed of fullerenes, or spherical molecules made of carbon atoms.

In their work, the scientists note that diamond hasn't been the hardest material for some time now. Natural diamonds have a hardness of nearly 150 GPa, but ultrahard fullerite has surpassed diamond to become first on the list of hardest materials, with values that range from 150 to 300 GPa. All materials that are harder than diamond are called ultrahard materials. Materials softer than diamond but harder than boron nitride are termed superhard. Boron nitride, with its cubic lattice, is almost three times harder than the well-known corundum.

Fullerites are materials that consist of fullerenes, which in turn are carbon molecules in the form of spheres consisting of 60 atoms. Fullerene was first synthesized more than 20 years ago, and a Nobel Prize was awarded for that work. The carbon spheres within fullerite can be arranged in different ways, and the material's hardness largely depends on just how interconnected they are. In the ultrahard fullerite discovered by the workers at the Technological Institute for Superhard and Novel Carbon Materials (FSBI TISNCM), C60 molecules are interconnected by covalent bonds in all directions, a material scientists call a three-dimensional polymer. However, methods for producing this promising material on an industrial scale are not yet available. Practically, the superhard carbon form is of primary interest for specialists in the field of metals and other materials processing: the harder a tool is, the longer it works, and the more precisely parts can be machined.
What makes synthesizing fullerite in large quantities so difficult is the high pressure required for the reaction to begin. Formation of the three-dimensional polymer begins at a pressure of 13 GPa, or 130,000 atm, but modern equipment cannot provide such pressure on a large scale. The scientists in the current study have shown that adding carbon disulfide (CS2) to the initial mixture of reagents can accelerate fullerite synthesis. This substance is produced on an industrial scale, is actively used in various enterprises, and the technologies for working with it are well developed. According to the experiments, carbon disulfide normally appears as an end product, but here it acts as an accelerator. Using CS2, the formation of the valuable superhard material becomes possible at a pressure as low as 8 GPa. In addition, while previous efforts to synthesize fullerite at a pressure of 13 GPa required heating to 1100 K (more than 820 degrees Celsius), in the present case synthesis occurs at room temperature.

"The discovery described in this article (the catalytic synthesis of ultrahard fullerite) will create a new research area in materials science because it substantially reduces the pressure required for synthesis and allows for manufacturing the material and its derivatives on an industrial scale," explained Mikhail Popov, the leading author of the research and the head of the laboratory of functional nanomaterials at FSBI TISNCM.

Note: Ultrahard fullerite is described in greater detail in the following scientific publications:

Is C60 fullerite harder than diamond? V. Blank, M. Popov, S. Buga, V. Davydov, V.N. Denisov, A.N. Ivlev, B.N. Mavrin, V. Agafonov, R. Ceolin, H. Szwarc, A. Rassat. Physics Letters A, Vol. 188 (1994), pp. 281-286.

Structures and physical properties of superhard and ultrahard 3D polymerized fullerites created from solid C60 by high pressure high temperature treatment. V.D. Blank, S.G. Buga, N.R. Serebryanaya, G.A. Dubitsky, B. Mavrin, M.Yu. Popov, R.H. 
Bagramov, V.M. Prokhorov, S.A. Sulynov, B.A. Kulnitskiy and Ye.V. Tatyanin. Carbon, Vol. 36 (1998), pp. 665-670.

Ultrahard and superhard phases of fullerite C60: comparison with diamond on hardness and wear. V. Blank, M. Popov, G. Pivovarov, N. Lvova, K. Gogolinsky, V. Reshetov. Diamond and Related Materials, Vol. 7, No. 2-5 (1998), pp. 427-431.

MIPT's press service would like to thank the scientists for their invaluable help in writing this article.

Alexandra O. Borissova | EurekAlert!
They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 18.07.2018 | Life Sciences 18.07.2018 | Materials Sciences 18.07.2018 | Health and Medicine
The Piscis Austrinids meteor shower takes place within the boundaries of the constellation Piscis Austrinus. The shower occurs between 15 Jul and 10 Aug, with the peak occurring on 28 Jul every year. The solar longitude (abbrev. S.L., λ☉) at maximum is 136 degrees; this value pinpoints the date of maximum activity. Solar longitude is measured in degrees, with zero indicating the spring equinox (roughly March 21st/22nd), 90 the summer solstice, 180 the autumn equinox and 270 the winter solstice, which makes it independent of the calendar. The Piscis Austrinids are a faint meteor shower that radiates from an area to the west of Fomalhaut, the brightest star in the constellation of Piscis Austrinus, the southern fish. If you're expecting something spectacular, you will be severely disappointed: the most you could probably see is about 10 (1). The Southern Delta Aquariids, another faint meteor shower, are active at the same time, so a given meteor can sometimes be credited to the wrong shower. The closest star to the radiant point of the meteor shower is Fomalhaut. The radiant can also be located by its Right Ascension (352.5) and Declination (-20.5). The Zenith Hourly Rate, the number of meteors you can expect to see during an hour under ideal conditions, is 5. The ZHR can radically increase if the parent comet or associated object is close by. The velocity of the meteor shower particles is 44 km/s. The population index of the shower is 3. The population index describes the magnitude distribution of the meteors: the smaller the index, the brighter the meteors; the larger the index, the dimmer. For this particular shower, bright meteors are relatively frequent.

|Closest Star to Radiant Point|Fomalhaut|
|Max Activity Date|28 Jul|
|Activity Period|15 Jul - 10 Aug|
|Solar Longitude / λ☉|136°|
|Zenith Hourly Rate|5|
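The relationship between the ZHR and what a single observer actually sees can be made concrete with the standard visual-observing formula HR = ZHR · sin(radiant elevation) / r^(6.5 − limiting magnitude). The sketch below is my own illustration, not from the text; the ZHR of 5 and population index r = 3 come from the table above, while the sky conditions are example assumptions.

```python
import math

def hourly_rate(zhr, r, limiting_magnitude, radiant_elevation_deg):
    """Meteors per hour a single observer can expect to see."""
    dimming = r ** (6.5 - limiting_magnitude)            # penalty for a brighter sky
    geometry = math.sin(math.radians(radiant_elevation_deg))  # low radiant hides meteors
    return zhr * geometry / dimming

# Piscis Austrinids from mid-northern latitudes: low radiant, decent rural sky.
print(round(hourly_rate(zhr=5, r=3, limiting_magnitude=6.0, radiant_elevation_deg=30), 1))  # 1.4
```

Because the population index appears as an exponent, faint showers like this one fade quickly under light-polluted skies.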
Medicine Hat, Alberta, is branching out. The city has approved a proposal to install the country's first concentrated solar power (CSP) project. CSP typically employs an array of mirrors to reflect sunlight onto a single point, often at the top of a central tower. The concentrated light quickly becomes heat, which in turn is converted into electricity by way of an electric power generator.

The technology sounds promising. Some research indicates that filling even five per cent of the world's deserts with CSP could match the world's entire current energy needs. That's still a lot of space and the electricity would need to be transported over great distances, but there's no denying that it's an impressive statistic. Canada is a little short on deserts, but we do have the prairies. So is Medicine Hat on to something big? Maybe, but not necessarily. There are three not-insubstantial hurdles that they need to clear first:

1. Indirect sunlight and latitude

Solar energy production works best in direct sunlight, i.e. with no cloud cover, but even in the blazing heat of a summer day in Regina, the sun will only hold so much potential. High latitudes receive more diffuse sunlight than do equatorial regions. Though the potential shifts slightly according to the time of year, northern countries are generally less fertile grounds for solar production. A Natural Resources Canada map shows the prairies to be as good a spot as any within Canada for a CSP project, but maps of worldwide solar resources place our country at the lower end of global potential. The German Aerospace Center has already concluded that "most world regions except Canada, Japan, Russia and South Korea have significant potential areas for CSP at an annual solar irradiance higher than 2000 kWh/m²/y." None of this means that CSP can't work in Canada, but it does mean that results from other countries are probably not strong indicators of Medicine Hat's maximum potential.

2. Risk of an Unproven Technology

Concentrated solar power projects are excluded from consideration under Ontario's Feed-In Tariff (FIT) program, due to the unproven nature of the technology in the Canadian climate. Is there anything to lose in supporting CSP? Yes, insofar as a technology with no support will lower economies of scale and make regular maintenance a difficult prospect, while a technology on the rise would result in lowered costs. Some in the energy industry are already pronouncing the death of CSP. Jigar Shah of Inerjys Ventures, for example, has left the bandwagon: "I think in 2012, CSP basically died. [...] I don't think anyone believes that they're going to get a second contract. CSP is dead."

3. Competition for land and resources

The prairies are not quite empty. Fertile farmland stretches in nearly endless fields from Alberta across Manitoba, punctuated by occasional rivers and lakes. Securing available farmland is one issue, but CSP is also competing for limited geographical and financial resources with other technologies, including photovoltaic solar power.

I'm not necessarily all that pessimistic about the prospects for this interesting form of renewable energy, but I raise these concerns in order to lend context to a bigger question worthy of our consideration: Is it better to concentrate our alternative energy efforts on a small handful of technologies, or would we do better to diversify our renewable portfolio? One might think that there is no downside to employing as many technologies as possible, but I hope I've shown that introducing new technologies may come with a few problems and some measure of financial risk. It's something to consider as we decide on the shape our renewable future should take.

The Renewable Energy blog showcases weekly posts by Stu Campana on current renewable energy issues.
With fascinating projects underway across the country – from community solar power in Milton, Ontario to wind farms in Pictou County, Nova Scotia – Stu connects these stories through attention to the broader scientific perspective, international political climate, and social variables they involve. Stu is an international environmental consultant, currently working with Fern Ridge Landscaping and Eco-Consulting in Milton, Ontario.
A NASA scientist explains why it's only now worthwhile. Between 1968 and 1972, 24 humans flew to the Moon and orbited it. Twelve of those Apollo astronauts set foot on its surface. Since then, not a single human has ventured beyond low-Earth orbit. And for all the amazing work and research astronauts have done in the intervening decades, the question remains: why didn't humans push further when they had already come so far? Why, a half-century later, are we still not appreciably closer to our next great milestone, a trip to Mars? That's the question Star Trek: Voyager star Robert Picardo posed while moderating a Sunday panel of NASA scientists at the Star Trek: Mission New York event. He asked Jeffrey Sheehy, the senior technology officer of NASA's Space Technology Mission Directorate, whether it would have been possible to immediately follow up the Apollo missions with a trip to Mars. Sheehy's answer was unequivocal. "Fifty years on, should we have been able to go to Mars? Yeah, we could have gone to Mars," Sheehy replied. "We could have mounted a mission to put someone in the Mars vicinity or a couple people in the Mars vicinity with the technologies and extensions of the technologies we had in the '70s." But just because we could have reached Mars back then doesn't mean it would have been a good idea to do so, according to Sheehy. The issue is that a Mars mission has different parts. While reaching Mars was doable using 1970s-era tech, actually staying on our planetary neighbor long enough to justify the mission has only recently become possible. "You can't come and go just anytime you want because of the energy expenditures required and the propulsion capabilities that would be needed," Sheehy said. As he pointed out, it takes about 180 days each way to travel to Mars, and such trips are only possible when the two planets are properly aligned.
And six months of travel in the weightlessness of space poses its own, often overlooked challenges. Sheehy pointed to the experiences of fellow panelist Kjell Lindgren, who spent 141 days on the International Space Station. “Kjell was telling me when he got back from the space station after being up there for five, six months, the way the body adjusts to being in gravity again is such that when we put people on Mars they’re not going to be able to operate at full capability right away,” Sheehy continued. “They’re going to have to readjust for a while. So once we get to Mars, we’ve got to leave them on Mars for a while, and so to really set up shop on Mars takes technology capabilities that we’ve been developing over the last few decades.” Sheehy ended his answer on an optimistic note, saying the kinds of advances in propulsion, power, and life support capabilities — not to mention the potential to extract resources from the Martian landscape itself — have all been developed in the last few decades and make a worthwhile mission to Mars a genuine possibility. “It’s really technology that drives exploration,” he said. “And so that’s why this is the right time to start mounting a plan to go to Mars.” Indeed, that’s NASA’s plan: Their Journey to Mars program calls for landing astronauts on an asteroid by 2025 and a trip to Mars by the 2030s.
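A back-of-the-envelope calculation (my own, not from the article) shows why the alignment Sheehy mentions is so constraining: the same Earth-Mars geometry repeats once per synodic period, which follows directly from the two planets' sidereal orbital periods.

```python
# Sidereal orbital periods in days (standard astronomical values).
earth_year = 365.256
mars_year = 686.980

# The planets' relative angular speed sets how often the same alignment recurs:
# 1/synodic = 1/earth_year - 1/mars_year.
synodic_days = 1 / (1 / earth_year - 1 / mars_year)

print(round(synodic_days))           # about 780 days between launch windows
print(round(synodic_days / 30.44))   # i.e. roughly every 26 months
```

Miss a window, and the next practical opportunity to depart is more than two years away, which is why mission plans must budget for long surface stays.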
The tuple +, *, and slicing operations return new tuples. To sort a tuple, convert it to a list to gain access to the sort method, or use the sorted built-in, which accepts any sequence object.

T = ('cc', 'aa', 'dd', 'bb')
tmp = list(T)      # Make a list from the tuple's items
tmp.sort()         # Sort the list in place
print(tmp)
T = tuple(tmp)     # Make a tuple from the list's items
print(T)
print(sorted(T))   # Or use the sorted built-in, and save two steps

Here, the list and tuple built-in functions are used to convert the object to a list and then back to a tuple.
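Since sorted always returns a list regardless of the sequence type it receives, the round trip back to a tuple can also be collapsed into one step. This variant is my addition, not part of the original example:

```python
T = ('cc', 'aa', 'dd', 'bb')
T = tuple(sorted(T))   # sorted() yields a list; convert it straight back to a tuple
print(T)               # ('aa', 'bb', 'cc', 'dd')
```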
Positions, Motions and Distances of the Stars — Concepts and Methods

The bodies of the planetary system, including the interplanetary matter, can nowadays be investigated by direct measurements at the locality of the object using space probes. Our knowledge of the structure of the universe beyond the limits of the solar system, on the other hand, depends exclusively on the radiation fields of the cosmic objects (including particle streams and possibly gravitational waves) observable at the location of a terrestrial or "extra-terrestrial" observer.

Keywords: Main Sequence, Proper Motion, Star Cluster, Astronomical Unit, Interstellar Extinction
MIT professor Kerry Emanuel, a climate scientist, calls man-made global warming "perhaps the most consequential problem ever confronted by mankind." In this primer, Emanuel details the science underlying the causes and impact of global warming. He explains why warming is taking place and discusses options for mitigating its impact. This 2012 update of Emanuel's 2006 book provides more recent information about current scientific findings. getAbstract recommends his analysis to anyone who wants to stay informed on this vital topic.

In this summary, you will learn
- What the causes of global warming are,
- What factors contribute to climate change,
- What its possible consequences might be and
- What society can do to address the problem.

About the Author
Kerry Emanuel, professor of atmospheric science in the Department of Earth, Atmospheric and Planetary Science at MIT, studies moist convection in the atmosphere and tropical cyclones.
According to ecological theory, populations whose dynamics are entrained by environmental correlation face increased extinction risk as environmental conditions become more synchronized spatially. This prediction is highly relevant to the study of ecological consequences of climate change. Recent empirical studies have indicated, for example, that large-scale climate synchronizes trophic interactions and population dynamics over broad spatial scales in freshwater and terrestrial systems. Here, we present an analysis of century-scale, spatially replicated data on local weather and the population dynamics of caribou in Greenland. Our results indicate that spatial autocorrelation in local weather has increased with large-scale climatic warming. This increase in spatial synchrony of environmental conditions has been matched, in turn, by an increase in the spatial synchrony of local caribou populations toward the end of the 20th century. Our results indicate that spatial synchrony in environmental conditions and the populations influenced by them are highly variable through time and can increase with climatic warming. We suggest that if future warming can increase population synchrony, it may also increase extinction risk.
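Spatial synchrony of the kind the abstract describes is commonly quantified as the pairwise correlation between local population time series. The toy sketch below is my own illustration of that idea (not the paper's method or data): two populations driven by a shared large-scale environmental signal plus independent local noise end up strongly correlated.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
climate = [random.gauss(0, 1) for _ in range(200)]      # shared large-scale driver
pop_a = [c + random.gauss(0, 0.5) for c in climate]     # local population + local noise
pop_b = [c + random.gauss(0, 0.5) for c in climate]

print(round(pearson(pop_a, pop_b), 2))  # strong positive correlation from the shared driver
```

Strengthening the shared driver relative to the local noise, as the paper argues warming does for weather, pushes this correlation toward 1.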
Aronson's research lands on the cover of the August 2017 issue! Urban vegetation provides important ecological services, but only certain plants can survive these harsh environments. Understanding how urban environments select for or against particular plant species would help in managing urban biodiversity, planning and executing sound ecological restoration, and predicting how climate change will […]

Search Results for: "Myla Aronson"

Most cities plan to protect biodiversity yet lack mechanisms to measure success. Myla Aronson, assistant professor in the Department of Ecology, Evolution and Natural Resources, Lauren Frazee, doctoral candidate in the Graduate Program in Ecology and Evolution, Karen O'Neill, associate professor in the Department of Human Ecology, Rutgers alum Dr. Emilie Stander and an international […]

Source: Department of Ecology, Evolution and Natural Resources

For more than 60 years, Mettler's Woods, an old growth forest at Hutcheson Memorial Forest Center, has served Rutgers Department of Ecology, Evolution and Natural Resources students and faculty as a classroom, research site, and natural wonder. While ancient oaks and hickory still tower above visitors, much […]

Are cities unnatural? Are urban landscapes disturbed or damaged? "There is no right answer. We can think of cities in many ways," says Dr. Paige S. Warren of the University of Massachusetts. "Cities are sources of novelty, hotspots of resource inputs, and drivers of evolutionary change."

And what about the plants? With access to floras from 112 cities including both natural and spontaneous vegetation since 1975, Dr. Myla Aronson of Rutgers University, along with the Urban Biodiversity Research Coordination Network (UrBioNet), is asking questions about the ways in which cities influence global, regional, and local patterns in plant diversity.
"The mosses were just labelled Moss 1, Moss 2 - it just struck me how much mosses are overlooked," says Eliana Geretz, ecology, evolution and natural resources major. At the time, she was helping conduct research in Hutcheson Memorial Forest in nearby Somerset County. One of the last uncut forests in the Mid-Atlantic States, the […]

Urban biodiversity isn't just limited to buzzing insects. Last year, a study found that 54 cities are home to 20 percent of the world's bird species. In the city of Lyon, scientists found nearly a third of all the bee species native to France. It turns out that cities are a good place for some animals to live - and how humans decide to manage their cities can make those habitats better or worse for the local fauna…

Anyone who has ever walked through a flock of pigeons knows birds do pretty well in cities, too. That isn't to say that birds prefer cities - urban areas retain only about 8 percent of the bird species that otherwise would have lived in the area, according to a study led by Myla Aronson of Rutgers University… But cities are still filled with a rich variety of birds. Aronson and her team looked at 54 cities around the world and found that 20 percent of known bird species can be found flying in urban centers… "From city to city, across the world, maintaining natural habitat within a city is important for biodiversity," Aronson told NBC News.

Myla Aronson (GSNB '07 Ph.D.), research scientist in the Department of Ecology, Evolution, and Natural Resources, has conducted far-reaching research that shows cities are not concrete jungles but instead harbor a variety of native birds and plants. Her work supports the argument that planning greenspaces in cities with biodiversity in mind benefits both people and nature. […]

I'm strolling the grounds of the New York Botanical Garden, a quiet green space in the noisy heart of the Bronx.
The sun is hot, but once I leave the neat, conventional garden beds and enter the Thain Family Forest, the air is cool under old-growth oak and hickory trees. And when the roar of a JFK-bound jet dies away, I can hear catbirds, white-eyed vireos and a kingbird running through their vocal repertoires… I'm walking the Spicebush Trail with Myla Aronson, a Rutgers University scientist who is the lead researcher on a groundbreaking study of biodiversity in cities across the globe - a study that refutes what she calls the myth of biotic homogenization. "Everyone assumes that because of globalization, cities are all the same in terms of the plants and animals you find there - mostly rats and pigeons," Aronson says as we stroll.

We congratulate these SEBS and NJAES faculty and staff on their accomplishments, appointments and awards below. For university-wide announcements, please visit the Rutgers Faculty and Staff Bulletin. 2018 Christopher Obropta, associate professor in the Department of Environmental Sciences and extension specialist in water resources, received a one-year award totaling $92,335 from the U.S. Geological Survey […]

High above southwest Pennsylvania, it's not unusual to look up and spot a bald eagle flying thousands of feet in the sky, on the hunt for prey. What's especially surprising is that its nest isn't in a secluded forest; it's in the industrial heart of Pittsburgh, in a neighborhood called Hays… "Cities have lost a lot of biodiversity, but they support a lot more than we normally expect them to," agrees Myla Aronson, a visiting professor of ecology at Rutgers University in New Jersey. She and several other researchers around the world recently published a comprehensive study that found while urbanization does decrease the abundance of plants and animals, the mixture of species continues to resemble the mix of the region.
One of Prolog's most powerful features is its built-in pattern-matching algorithm, unification. For all of the examples we have seen so far, unification has been relatively simple. We will now examine unification more closely.

The full definition of unification is similar to the one given in chapter 3, with the addition of a recursive definition to handle data structures. The following summarizes the unification process.

- A variable unifies with, and is bound to, any term, including another variable.
- Two primitive terms (atoms or integers) unify only if they are identical.
- Two structures unify if they have the same functor and arity, and if each pair of corresponding arguments unifies.

In order to experiment with unification we will introduce the built-in predicate =/2, which succeeds if its two arguments unify and fails if they do not. It can be written in operator syntax as follows.

arg1 = arg2

which is equivalent to

=(arg1, arg2)

WARNING: The equal sign (=) does not cause assignment as in most programming languages, nor does it cause arithmetic evaluation. It causes Prolog unification. (Despite this warning, if you are like most mortal programmers, you will be tripped up by this difference more than once.)

Unification between the two sides of an equal sign (=) is exactly the same as the unification that occurs when Prolog tries to match goals with the heads of clauses. On backtracking, the variable bindings are undone, just as they are when Prolog backtracks through clauses.

The simplest form of unification occurs between two structures with no variables. In this case, either they are identical and unification succeeds, or they are not, and unification fails.

?- a = a.
yes

?- a = b.
no

?- location(apple, kitchen) = location(apple, kitchen).
yes

?- location(apple, kitchen) = location(pear, kitchen).
no

?- a(b,c(d,e(f,g))) = a(b,c(d,e(f,g))).
yes

?- a(b,c(d,e(f,g))) = a(b,c(d,e(g,f))).
no

Another simple form of unification occurs between a variable and a primitive. The variable takes on a value that causes unification to succeed.

?- X = a.
X = a

?- 4 = Y.
Y = 4

?- location(apple, kitchen) = location(apple, X).
X = kitchen

In other cases multiple variables are simultaneously bound to values.

?- location(X,Y) = location(apple, kitchen).
X = apple
Y = kitchen

?- location(apple, X) = location(Y, kitchen).
X = kitchen
Y = apple

Variables can also unify with each other. Each instance of a variable has a unique internal Prolog value. When two variables are unified to each other, Prolog notes that they must have the same value. In the following examples, it is assumed Prolog uses '_nn', where 'n' is a digit, to represent unbound variables.

?- X = Y.
X = _01
Y = _01

?- location(X, kitchen) = location(Y, kitchen).
X = _01
Y = _01

Prolog remembers the fact that the variables are bound together and will reflect this if either is later bound.

?- X = Y, Y = hello.
X = hello
Y = hello

?- X = Y, a(Z) = a(Y), X = hello.
X = hello
Y = hello
Z = hello

The last example is critical to a good understanding of Prolog and illustrates a major difference between unification with Prolog variables and assignment with variables found in most other languages. Note carefully the behavior of the following queries.

?- X = Y, Y = 3, write(X).
3
X = 3
Y = 3

?- X = Y, tastes_yucky(X), write(Y).
broccoli
X = broccoli
Y = broccoli

When two structures with variables are unified with each other, the variables take on values that make the two structures identical. Note that a structure bound to a variable can itself contain variables.

?- X = a(b,c).
X = a(b,c)

?- a(b,X) = a(b,c(d,e)).
X = c(d,e)

?- a(b,X) = a(b,c(Y,e)).
X = c(_01,e)
Y = _01

Even in these more complex examples, the relationships between variables are remembered and updated as new variable bindings occur.

?- a(b,X) = a(b,c(Y,e)), Y = hello.
X = c(hello, e)
Y = hello

?- food(X,Y) = Z, write(Z), nl, tastes_yucky(X), edible(Y), write(Z).
food(_01,_02)
food(broccoli, apple)
X = broccoli
Y = apple
Z = food(broccoli, apple)

If a new value assigned to a variable in later goals conflicts with the pattern set earlier, the goal fails.

?- a(b,X) = a(b,c(Y,e)), X = hello.
no

The second goal failed since there is no value of Y that will allow hello to unify with c(Y,e). The following will succeed.

?- a(b,X) = a(b,c(Y,e)), X = c(hello, e).
X = c(hello, e)
Y = hello

If there is no possible value the variable can take on, then unification fails.

?- a(X) = a(b,c).
no

?- a(b,c,d) = a(X,X,d).
no

The last example failed because the pattern asks that the first two arguments be the same, and they aren't.

?- a(c,X,X) = a(Y,Y,b).
no

Did you understand why this example fails? Matching the first argument binds Y to c. The second argument causes X and Y to have the same value, in this case c. The third argument asks that X bind to b, but it is already bound to c. No values of X and Y will allow these two structures to unify.

The anonymous variable (_) is a wildcard variable that does not bind to values. Multiple occurrences of it do not imply equal values.

?- a(c,X,X) = a(_,_,b).
X = b

Unification occurs explicitly when the equal (=) built-in predicate is used, and implicitly when Prolog searches for the head of a clause that matches a goal pattern.

Predict the results of these unification queries.

?- a(b,c) = a(X,Y).
?- a(X,c(d,X)) = a(2,c(d,Y)).
?- a(X,Y) = a(b(c,Y),Z).
?- tree(left, root, Right) = tree(left, root, tree(a, b, tree(c, d, e))).

Copyright ©1990,1996-97 Amzi! inc. All Rights Reserved
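The three unification rules above (a variable binds to any term; primitives must be identical; structures match recursively by functor and arity) can be sketched outside Prolog as well. The Python sketch below is my own illustration, not part of the Amzi! tutorial; it models variables as `Var` objects, primitives as strings or ints, and structures as tuples of `(functor, arg1, arg2, ...)`, and it omits the occurs-check, as many Prologs do by default.

```python
class Var:
    """A logic variable; object identity plays the role of Prolog's internal '_nn' value."""
    _count = 0
    def __init__(self):
        Var._count += 1
        self.name = f"_{Var._count:02d}"

def walk(term, bindings):
    # Follow variable bindings until reaching a non-variable or an unbound variable.
    while isinstance(term, Var) and term in bindings:
        term = bindings[term]
    return term

def unify(a, b, bindings):
    """Return an extended bindings dict if a and b unify, else None."""
    a, b = walk(a, bindings), walk(b, bindings)
    if isinstance(a, Var):                       # a variable unifies with any term
        return bindings if a is b else {**bindings, a: b}
    if isinstance(b, Var):
        return {**bindings, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple):
        if len(a) != len(b) or a[0] != b[0]:     # same functor and arity required
            return None
        for x, y in zip(a[1:], b[1:]):           # unify corresponding arguments
            bindings = unify(x, y, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if a == b else None          # primitives: identical or fail

X, Y = Var(), Var()
# ?- location(X, Y) = location(apple, kitchen).
b = unify(('location', X, Y), ('location', 'apple', 'kitchen'), {})
print(b[X], b[Y])  # apple kitchen
# ?- a(c, X, X) = a(Y, Y, b).   -- fails, as in the tutorial's example
print(unify(('a', 'c', X, X), ('a', Y, Y, 'b'), {}))  # None
```

Note how failure propagates: binding Y to c, then X to Y's value, leaves no way for X to also match b, exactly mirroring the tutorial's walkthrough.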
A long slit of infinitesimal width which is illuminated by light diffracts the light into a series of circular waves, and a wave front emerges from the slit as a cylindrical wave of uniform intensity. A slit which is wider than a wavelength produces interference effects in the space downstream of the slit. The slit is assumed to behave as though it contains a large number of point sources spaced evenly across its width. The analysis also simplifies if we consider light of a single wavelength. Light arriving at a given point in the space downstream of the slit is made up of contributions from each of these point sources. If the relative phases of these contributions vary by 2π or more, we may expect to find minima and maxima in the diffracted light. These phase differences are caused by differences in the path lengths over which the contributing rays reach the point from the slit. We can calculate the angle at which the first minimum is obtained in the diffracted light. Consider light from a source located at the top edge of the slit. It interferes destructively with light from a source located at the middle of the slit when the path difference between them is equal to λ/2. Furthermore, the source just below the top of the slit will interfere destructively with the source located just below the middle of the slit at the same angle. We can therefore reason, pairing sources down the entire height of the slit in this way, that the condition for destructive interference for the entire slit is the same as the condition for destructive interference between two narrow slits a distance apart that is half the width of the slit. © BrainMass Inc. brainmass.com
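The pairing argument above yields the standard single-slit condition: with slit width a, the top-edge and middle sources differ in path by (a/2)·sin θ, and setting that to λ/2 gives a·sin θ = λ for the first minimum. A small numerical illustration (the specific values are my own, not from the text):

```python
import math

def first_minimum_angle(slit_width_m, wavelength_m):
    """Angle (radians) of the first single-slit diffraction minimum: a*sin(theta) = lambda."""
    return math.asin(wavelength_m / slit_width_m)

theta = first_minimum_angle(10e-6, 500e-9)   # 10 micron slit, 500 nm green light
print(f"{math.degrees(theta):.2f} degrees")  # 2.87 degrees
```

As the slit narrows toward the wavelength, the first minimum swings out toward 90 degrees and the central diffraction lobe fills the whole downstream region, consistent with the infinitesimal-slit case described at the start.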
IT'S often said there are plenty more fish in the sea - now a massive global census of marine life proves it's true. Scientists conducting the most comprehensive census ever of the world's sea life estimate there are more than 230,000 known species in the sea, and somewhere between four and six times that number still unknown. In total, it's believed there are somewhere between 1 million and 1.4 million species of marine life on the planet, most of which we know little or nothing about. ''At the end of the census of marine life, most ocean organisms still remain nameless and their numbers unknown,'' said biologist Nancy Knowlton, leader of the census's coral reef project. ''The ocean is so vast that, after 10 years of hard work, we still have only snapshots, though sometimes detailed, of what the sea contains.'' But despite this abundance of life, the census authors warn that the world's sea creatures are under attack on many fronts. Overfishing, lost habitat, pollution and invasive species are common threats, while rising water temperature and sea acidification loom as future dangers. ''The sea today is in trouble,'' Dr Knowlton said. The census found that Australian waters, along with Japan's, are the most biodiverse on the planet, with almost 33,000 recorded species. It is also estimated that more than 80 per cent of species in Australian waters are unknown or unnamed. The census's lead author in Australia, CSIRO marine biologist Alan Butler, said Australia's deep seas remained a mysterious place. ''Most of marine Australia is 5000 metres or more deep, and huge areas of that we haven't sampled at all; we have no idea of what's there,'' Dr Butler said. The census, which mapped 25 regions across the globe, also found that Australia has among the greatest proportion of endemic species, with about a quarter unique to our seas. By contrast the Baltic Sea has the least, with just one unique species, a seaweed. The census will be released on October 4.
Results so far are online at the Public Library of Science.
Scientists have concluded that the culprits behind the Earth's first episode of global warming were the ancient relatives of earthworms, which ate through the stocks of organic matter on the floor of the early oceans. "At the bottom of the ocean there are many animals that constantly 'plough' through it, much like the earthworms you have in the garden. Their appearance, in an era when bioturbation of the sediment did not yet exist, must have radically changed the face of the Earth as a whole," commented Tim Lenton, an expert from Britain's University of Exeter, on the theory. According to it, about 540-520 million years ago these early burrowers triggered a sharp jump in temperature on the planet, which in turn led to the emergence of the ancestors of the majority of species existing on Earth today.
We illustrate the fundamental importance of fluctuations in natural water flows to the long-term sustainability and productivity of riverine ecosystems and their riparian areas. Natural flows are characterized by temporal and spatial heterogeneity in the magnitude, frequency, duration, timing, rate of change, and predictability of discharge. These characteristics, for a specific river or a collection of rivers within a defined region, shape species life histories over evolutionary (millennial) time scales as well as structure the ecological processes and productivity of aquatic and riparian communities. Extreme events - uncommon floods or droughts - are especially important in that they either reset or alter physical and chemical conditions underpinning the long-term development of biotic communities. We present the theoretical rationale for maintaining flow variability to sustain ecological communities and processes, and illustrate the importance of flow variability in two case studies - one from a semi-arid savanna river in South Africa and the other from a temperate rainforest river in North America. We then discuss the scientific challenges of determining the discharge patterns needed for environmental sustainability in a world where rivers, increasingly harnessed for human uses, are experiencing substantially altered flow characteristics relative to their natural states. © 2008 Académie des sciences.
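The flow characteristics named above (magnitude, frequency, duration, timing, rate of change) each reduce to simple statistics of a discharge record. A minimal sketch, with an invented daily series and a hypothetical high-flow threshold (none of these numbers come from the paper):

```python
# Hypothetical illustration: summarizing components of a natural flow
# regime from a daily discharge series (m^3/s).

def flow_regime_stats(discharge, high_flow):
    """Summarize magnitude, flood frequency, and rate of change."""
    magnitude = sum(discharge) / len(discharge)          # mean daily flow
    high_days = [q for q in discharge if q > high_flow]  # days above threshold
    rate_of_change = max(
        abs(b - a) for a, b in zip(discharge, discharge[1:])
    )                                                    # fastest rise or fall
    return {"magnitude": magnitude,
            "high_flow_days": len(high_days),
            "max_daily_change": rate_of_change}

stats = flow_regime_stats([10, 12, 50, 45, 11, 9, 10], high_flow=30)
```

Real environmental-flow assessments use richer indicators (including timing and predictability), but the principle is the same: each component is a statistic of the hydrograph.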
Each wolf calls with its own 'voice.' Tracking wild animals can provide lots of valuable data. New research suggests audio recordings of wild wolves can replace the typical radio collars, which can be expensive and intrusive.
The universal sign for 'Look over there!' isn't so common in some cultures. It was long thought that humans everywhere favor pointing with the index finger. But some fieldwork out of Papua New Guinea identified a group of people who prefer to scrunch their noses.
There's no blueprint for excellence, but some building blocks are crucial. Research institutes and "centres of excellence" exist around the world to draw talent and to share resources - all with the aim of solving important problems.
Author Tom Iliffe leads scientists on a cave dive. Scientific fieldwork that happens underground and underwater in spectacular but dangerous caves opens a window on a largely unknown world.
Crews clean up debris in a neighborhood flooded by Hurricane Harvey in Beaumont, Texas, Sept. 26, 2017. Epidemiologists study disease outbreaks in populations to determine who gets sick and why. In the wake of this year's hurricanes, they are assessing impacts from mold, toxic leaks and other threats.
Science is a human approach to understanding the world. Science provides a useful way to explore and understand the natural world. But it also has a richness, diversity and creativity that is often overlooked.
Hiscox and students practice for the big day with a weather balloon. Meteorology researchers across the country are prepping experiments for the mini-night the eclipse will bring on August 21 – two minutes and 36 seconds without the sun in the middle of the day.
Into the unknown. In this episode of The Anthill podcast we are off exploring: land, sea and space.
Intensified rice production in Cambodia's dry season is wreaking havoc on local bird populations.
Polysaccharide molecules such as cellulose, seen here, are long chains of sugars that are very hard to break apart. Enzymes – proteins that can degrade polysaccharides – have many industrial uses. Bio-prospecting is the search for useful materials from natural sources. A biologist explains what we can learn from bacteria about breaking down plant material, and how we can use that knowledge.
The new sub allows scientists to access some of the most remote and hazardous environments in the ocean.
Muskoxen group together for security. How is rapid warming in the Arctic affecting animals that are adapted to cold? A wildlife biologist is using many techniques to find out, including stalking muskoxen in a polar bear costume.
A PhD candidate retells the moving stories of Syrian women, as they try to find a place in their new neighbourhoods.
Our citizen science project was designed to record bird sounds but produced some surprisingly funny impressions.
The crew of scientists prepare to put the drill stem into the Greenland ice sheet to probe water flows about half a mile below. A glaciologist develops a lightweight method for probing the depths of Greenland's ice sheet to answer a crucial question: How fast is it melting?
Public park in Manhattan, home to a rat population with over 100 visible burrows. Rats foul our food, spread disease and damage property, but we know very little about them. A biologist explains how he tracks wild rats in New York City, and what he's learned about them so far.
Archaeologists on the front lines. Cultural resource management archaeologists don't choose where they dig. Instead they identify, evaluate and preserve cultural heritage sites in locations slated for development.
Glamorous award ceremonies and popular TV shows can only get you so far – finding the time to do the science is still the most important thing.
Ice cold physics: hunting for neutrinos in Antarctica. A cubic kilometer of clear, stable ice could help physicists answer big questions about cosmic rays and neutrinos. Hardy scientists collect data via a unique telescope at the frozen bottom of the world.
Louis Monroy Santander has been looking at how locals in the town of Sanski Most are moving on after a brutal conflict.
Basic Math Quick Reference Handbook
by Peter J. Mitas
Publisher: Quick Reference Handbooks, 2009. Number of pages: 83.
This handbook, written by an experienced math teacher, lets readers quickly look up definitions, facts, and problem-solving steps. It includes over 700 detailed examples and tips to help them improve their mathematical problem-solving skills. Download or read it online for free here:
by Florentin Smarandache - viXra
This book is addressed to college honor students, researchers, and professors. It contains 136 original problems published by the author in various journals. The problems can be used to prepare for courses, exams, and Olympiads in mathematics.
by MacGregor Campbell - Annenberg Foundation
Mathematics Illuminated is a text for adult learners and high school teachers. It explores major themes of mathematics, from humankind's earliest study of prime numbers to the cutting-edge mathematics used to reveal the shape of the universe.
This book is about the topic of mathematical analysis, particularly in the field of engineering. It builds on topics covered in Probability, Algebra, Linear Algebra, Calculus, Ordinary Differential Equations, and others.
by David B. Surowski - Kansas State University
An advanced mathematics textbook accessible to, and interesting for, a relatively advanced high-school student. Topics: geometry, discrete mathematics, abstract algebra, series and ordinary differential equations, and inferential statistics.
Open Access
With respect to coefficient of linear thermal expansion, bacterial vegetative cells and spores resemble plastics and metals, respectively
© Nakanishi et al.; licensee BioMed Central Ltd. 2013
Received: 13 August 2013 Accepted: 2 October 2013 Published: 9 October 2013
If a fixed stress is applied to the three-dimensional z-axis of a solid material, followed by heating, the amount of thermal expansion increases according to a fixed coefficient of thermal expansion. When expansion is plotted against temperature, the transition temperature at which the physical properties of the material change is at the apex of the curve. The composition of a microbial cell depends on the species and condition of the cell; consequently, the rate of thermal expansion and the transition temperature also depend on the species and condition of the cell. We have developed a method for measuring the coefficient of thermal expansion and the transition temperature of cells using a nano thermal analysis system in order to study the physical nature of the cells. Among vegetative cells, the Gram-negative Escherichia coli and Pseudomonas aeruginosa tended to have higher coefficients of linear expansion and lower transition temperatures than the Gram-positive Staphylococcus aureus and Bacillus subtilis. On the other hand, spores, which have low water content, overall showed lower coefficients of linear expansion and higher transition temperatures than vegetative cells. Compared with non-microbial materials, vegetative cells behaved similarly to plastics, and spores similarly to metals, with regard to the coefficient of linear thermal expansion. 
Cells may be characterized by the coefficient of linear expansion as a physical index; the coefficient of linear expansion may also characterize cells structurally since it relates to volumetric changes, surface area changes, the degree of expansion of water contained within the cell, and the intensity of the internal stress on the cellular membrane. The coefficient of linear expansion holds promise as a new index for furthering the understanding of the characteristics of cells. It is likely to be a powerful tool for investigating changes in the rate of expansion and also in understanding the physical properties of cells. When solid materials are heated, they expand and generally exhibit a thermal creep curve. If a fixed stress is applied to the three-dimensional z-axis of a material and the material is then heated, the amount of expansion resulting from thermal stress increases according to a fixed coefficient of thermal expansion. As the temperature approaches the transition temperature, at which the physical properties of the material change, the rate of expansion decreases and the material reaches its maximum expansion. If expansion is plotted against temperature, the transition temperature is at the apex of the curve. In the case of a solid, the transition temperature may be determined as the melting point of the solid. Different materials, such as metal and plastic, differ in their melting points and their patterns of thermal expansion, so they provide different curves. Microbial cells are not made of a single solid material, but rather of various constituent materials, and the composition varies according to the species and the condition of the cell. The rate of expansion and the transition temperature determined from the amount of expansion thus differ depending on the species and the condition of the cell, and may therefore offer a new approach for the study of cellular structure. 
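The analysis described above, taking the slope of the expansion curve for the coefficient and the apex of the curve for the transition temperature, can be sketched numerically. The function and data points below are invented for illustration and are not the paper's own analysis code:

```python
# Sketch: estimate the coefficient of linear expansion (slope of the
# initial region, normalized by initial length) and the transition
# temperature (temperature at the apex of the expansion curve).

def analyse_expansion(temps, lengths):
    """Return (alpha, transition_temp) from an expansion-vs-temperature curve."""
    l0, t0 = lengths[0], temps[0]
    alpha = (lengths[1] - l0) / (l0 * (temps[1] - t0))   # initial slope / L0
    apex = max(range(len(lengths)), key=lambda i: lengths[i])
    return alpha, temps[apex]

# invented curve: the sample expands up to ~130 °C, then contracts
temps   = [25, 50, 75, 100, 130, 160]
lengths = [1.000, 1.005, 1.010, 1.014, 1.016, 1.012]
alpha, t_trans = analyse_expansion(temps, lengths)   # alpha ≈ 200e-6 per °C
```

A real nano-TA trace would be fit over many points rather than two, but the counting of slope and apex is the same.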
However, currently there are no valid methods for determining the rate of expansion and the transition temperature from the amount of expansion of a single cell, and to date there have been no studies conducted on this topic. To address this, we have developed a method for measuring the coefficient of thermal expansion and the transition temperature of microbial samples using a nano thermal analysis (nano-TA) system.
Results and discussion
Change in the coefficient of linear expansion and its relationship to the transition temperature, determined by nano-TA-SPM
The coefficient of linear expansion and transition temperature of bacteria and yeast
[Table: transition temperature (°C) and coefficient of linear expansion (×10⁻⁶/°C) for the bacteria, yeast, and plastic materials tested; the row labels were lost in extraction, leaving only paired values such as 58 ± 0.7 and 190 ± 10.5.]
Comparison of the coefficient of linear expansion and transition temperature between bacteria, yeast, and materials
Cells may be characterized by the coefficient of linear expansion as a physical index, and the coefficient of linear expansion may also characterize cells structurally, since it relates to volumetric changes, surface area changes, the degree of expansion of water contained within the cell, and the intensity of the internal stress on the cellular membrane. The coefficient of linear expansion holds promise as a new index for furthering the understanding of the characteristics of cells. One open question for future work is the difference in water content between vegetative cells and spores; this difference is likely to contribute to the difference in the coefficient of linear thermal expansion. 
We intend to investigate whether the quantity of water contained in a cell affects the coefficient of linear thermal expansion. It is likely to be a powerful tool for investigating changes in the rate of expansion and also in understanding the physical properties of cells. Bacterial and yeast strains The spores used were prepared from Geobacillus stearothermophilus NBRC 13737, Bacillus coagulans DSM 1, B. subtilis NBRC 13719T, B. megaterium NBRC 15308T, B. licheniformis NBRC 12200, Thermoanaerobacter mathranii DSM 11426, and Moorella thermoacetica DSM521T. The vegetative cells were Staphylococcus aureus NBRC 100910, Escherichia coli IFO 3301, B. subtilis NBRC 13719T, and Pseudomonas aeruginosa ATCC 10145, and the yeast Saccharomyces pastorianus RIB 2010. Culture and pretreatment methods Bacteria were cultured in nutrient broth (Difco, Becton Dickinson and Co., Franklin Lakes, NJ, USA); G. stearothermophilus was cultured at 60°C; all other bacteria were cultured at 35°C. For vegetative cells, the log phase (OD600 = 0.8 – 1.0) after 4 to 12 h of culture was used. Where spores were used, the bacteria were cultured under the same conditions for 96 h. For T. mathranii and M. thermoacetica spores, the bacteria were cultured in modified TGC culture medium (Nissui Pharmaceutical Co., Ltd., Tokyo, Japan) at 60°C for 72 h. The yeast S. pastorianus was cultured in YM Broth (Difco, Becton Dickinson) at 25°C for 48 h. Spores were collected from the culture fluid as reported previously. The plastic materials used were polycaprolactone (PCL; Tm = 55°C, Wako, Tokyo, Japan) and polyethylene (PE; Tm = 116°C, Wako, Tokyo, Japan), which were processed into thin films, as well as polyethylene terephthalate (PET; Tm = 235°C, Pana Chemical Co., Ltd., Tokyo, Japan) and polyamide 66 (nylon 66; Tm = 256°C, Murakami Dengyo Co., Ltd., Yokohama, Japan). 
Measurement of transition temperature and coefficient of linear thermal expansion of bacteria A Nano Search Microscope type SFT-3500 (Shimadzu Corporation, Kyoto, Japan) was combined with a nano-TA (nano thermal analysis) system (Anasys Instruments, Santa Barbara, CA, USA) [2, 3]. The cantilever was brought into contact with a single microbial cell at a constant stress of 200 nN and heated continuously from 25°C, at 10°C/s, to a temperature of 100°C or 400°C. The measurement point was the highest point, determined as reported previously. We would like to thank Dr. Daisuke Imamura of Okayama University for his invaluable advice.
- Timoshenko SP, Young DH: Elements of Strength of Materials. 5th edition. Princeton: D. Van Nostrand Co., Inc; 1968.
- Gotzen NA, Assche GV: nano-TA: Nano Thermal Analysis, Application Note #5. Santa Barbara, CA, U.S.A.: Anasys Instruments Corp.; 2005.
- Zhanxin J: Nanoscale thermal analysis of pharmaceutical solid dispersions. Int J Pharm. 2009, 380: 170-173. 10.1016/j.ijpharm.2009.07.003.
- Kirby RK: Thermal Expansion. American Institute of Physics Handbook. Volume 4. 3rd edition. New York: McGraw-Hill; 1972:119.
- Kumke K: Thermal Properties of Inorganic Substances. 2nd edition. Berlin: Springer Verlag; 1991.
- Paredes-Sabja D, Setlow B, Setlow P, Sarker R: Characterization of Clostridium perfringens spores that lack SpoVA proteins and dipicolinic acid. J Bacteriol. 2008, 190: 4648-4659. 10.1128/JB.00325-08.
- Nakanishi K, Kogure A, Fujii T, Kokawa R, Deuchi K: Development of method for evaluating cell hardness and correlation between bacterial spore hardness and durability. J Nanobiotechnology. 2012, 10: 22-34. 10.1186/1477-3155-10-22.
This article is published under license to BioMed Central Ltd. 
This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Publication date: 2018-05-27 18:30 Insights on this page were contributed by David W. Thomson of the Quantum AetherDynamics Institute and are explored in more depth at his page at http:///. 17 Mar 2014 - The BICEP2 experiment has announced a detection of a primordial polarization signal from the inflationary epoch. Could it be that all interactions occur in cycles of both the positive and negative results of phi, the occasional offset being whole-number interactions? This would suggest the only times we can measure any data are during interactions; the rest is gravity. The WSM cosmology solves this problem by showing how our observable universe is just a finite spherical region of infinite eternal space. And the second law of thermodynamics does not apply to infinite space. His theorem relates to the results of an experiment like the one shown in the figure (see above): a source of two paired photons, obtained from the simultaneous decay of two excited atomic states, is at the center. At opposite sides are located two detectors of polarized photons. The polarization filters of each detector can be set parallel to each other, or at some other angle, freely chosen. It is known that the polarizations of paired photons are always parallel to each other, but random with respect to their surroundings. So, if the detector filters are set parallel, both photons will be detected simultaneously. If the filters are at right angles, the two photons will never be detected together. The detection pattern for settings at intermediate angles is the subject of the theorem. Quantum electrodynamics (QED) is the study of how electrons and photons interact. It was developed in the late 1940s by Richard Feynman, Julian Schwinger, Sin-Itiro Tomonaga, and others. The predictions of QED regarding the scattering of photons and electrons are accurate to eleven decimal places. 
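The limiting cases in the paragraph above (always coincide when the filters are parallel, never when they are crossed) follow from the standard quantum prediction that the coincidence probability for parallel-polarized photon pairs varies as the squared cosine of the angle between the filters. This is the textbook result, not a claim made in the post itself:

```python
# Standard quantum prediction for paired-photon coincidence detection:
# P(coincidence) = cos^2(angle between the two polarization filters).
import math

def coincidence_probability(angle_deg):
    theta = math.radians(angle_deg)
    return math.cos(theta) ** 2

parallel      = coincidence_probability(0)    # filters aligned: always coincide
perpendicular = coincidence_probability(90)   # filters crossed: never coincide
intermediate  = coincidence_probability(30)
```

The detection pattern at these intermediate angles is exactly what the theorem constrains.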
On the other hand, Wave-Amplitude is both positive and negative, thus interacting Wave-Amplitudes can either increase or decrease (i.e. combine or cancel out), causing either an increase or decrease in the velocity of the In-Waves, and a consequent moving together, or moving apart, of the Wave-Centers. It is this property of Space that causes Charge / Electromagnetic Fields and, in a slightly more complex manner, Light. The conjecture has not been proved, but there are a lot of interesting partial results so far. For a nice review of this work see: Superstrings represent one example of a class of attempts, generically classified as superunification theory, to explain the four known forces of nature - gravitational, electromagnetic, weak, and strong - on a single unifying basis. Common to all such schemes are the postulates that quantum mechanics and special relativity underlie the theoretical framework. Another common feature is supersymmetry, the notion that particles with half-integer values of the spin angular momentum (fermions) can be transformed into particles with integer spins (bosons). Einstein introduced dark energy to physics under the name of "the cosmological constant" when he was trying to explain how a static universe could fail to collapse. This constant simply said what the density of dark energy was supposed to be, without providing any explanation for its origin. When Hubble observed the redshift of light from distant galaxies, and people concluded the universe was expanding, the idea of a cosmological constant fell out of fashion and Einstein called it his "greatest blunder". But now that the expansion of the universe seems to be accelerating, a cosmological constant or some other form of dark energy seems plausible.
GNU.WIKI: The GNU/Linux Knowledge Base
vref — increment the use count for a vnode
vref(struct vnode *vp);
Increment the v_usecount field of a vnode.
vp: the vnode to increment
Each vnode maintains a reference count of how many parts of the system are using the vnode. This allows the system to detect when a vnode is no longer being used, so that it can be safely recycled for a different file. Any code in the system which is using a vnode (e.g. during the operation of some algorithm, or to store it in a data structure) should call vref().
SEE ALSO: vget(9), vnode(9), vput(9), vrefcnt(9), vrele(9)
This manual page was written by Doug Rabson.
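The reference-counting discipline the page describes can be illustrated in plain userspace C. This is a sketch of the pattern with a hypothetical minimal struct, not the actual kernel implementation:

```c
/*
 * Userspace sketch (not kernel code) of the vref(9)/vrele(9)
 * reference-counting pattern: every part of the system that uses a
 * vnode bumps v_usecount, and the vnode may only be recycled once
 * the count has dropped back to zero.
 */
#include <assert.h>

struct vnode {
    int v_usecount;   /* how many parts of the system use this vnode */
};

/* take a reference before using or storing the vnode */
void vref(struct vnode *vp)
{
    vp->v_usecount++;
}

/* drop a reference; a return value of 0 means safe to recycle */
int vrele(struct vnode *vp)
{
    return --vp->v_usecount;
}
```

The real kernel functions also handle locking and interact with vnode recycling; the sketch shows only the counting contract.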
Uses simple supplies to effectively demonstrate convection currents for middle school students. Supports NGSS MS-ESS2-1 and MS-ESS2-2. This lab is very effective as either a student investigation or a teacher demonstration. The supplies are simple - hot and cold tap water, large beaker, small cup, rubber band, food coloring, and plastic wrap. The worksheet leads students (or the teacher performing the demonstration) step-by-step through a simple process, and then provides a section for observations and analysis questions. Year after year, this lab proves to be a "light bulb" moment when teaching middle school students about convection currents. There are two very similar versions of this lab provided - one has more of a focus on convection in the mantle and the other is strictly focused on convection currents. Take a look at both and use the one that fits your needs better. SAVE 30% with our Plate Tectonics ACTIVITIES BUNDLE! Includes THIS resource PLUS 8 MORE Plate Tectonics activities! Also check out our best-selling, standards-based Plate Tectonics Test. You'll find lots of related resources in the EARTH & PLATE TECTONICS SECTION OF OUR STORE! Highlights of Our Resources: Plate Tectonics Quiz Graphing Sea Floor Spreading Plot the Ring of Fire Using Earthquakes & Volcanoes Continental Drift / Sea Floor Spreading Quiz Plate Tectonics FINAL PROJECT Activity: Model Earth’s Layers TO SCALE On Paper Plate Tectonics GROUP Project Need something different? Search all of our NGSS-Aligned middle school materials in Our Store! Easy to follow, effective, and always standards-based!
Pentagon: “In geometry, a pentagon is any...”
Nonagon: “In geometry, a nonagon (or enneagon) is a nine-sided polygon. The name "nonagon" is a hybrid...”
Hexagon: “In geometry, a hexagon is a polygon with six edges and six vertices. A regular hexagon has...”
Scalene triangle: “Scalene triangles are defined as a triangle where the interior angles are all different. Most...”
This category has the following 43 subcategories, out of 43 total.
Pages in category "Geometry": the following 136 pages are in this category, out of 136 total.
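The truncated definitions above all concern polygons, whose interior angles follow a standard formula (not stated on the category page itself): the interior angles of an n-sided polygon sum to (n - 2) × 180°, and a regular n-gon divides that total evenly.

```python
# Interior-angle formulas for the polygons listed in this category.

def interior_angle_sum(n):
    """Sum of interior angles of an n-sided polygon, in degrees."""
    return (n - 2) * 180

def regular_interior_angle(n):
    """Each interior angle of a regular n-gon, in degrees."""
    return interior_angle_sum(n) / n

pentagon = regular_interior_angle(5)   # 108.0
hexagon  = regular_interior_angle(6)   # 120.0
nonagon  = regular_interior_angle(9)   # 140.0
```

A scalene triangle, by contrast, is defined by all its interior angles differing, so no single angle is fixed by the side count alone.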
Whilst experimenting with nanospheres and perfluorodecalin, a liquid used in the production of synthetic blood, researchers at Germany's University of Ulm have stumbled across a phenomenon that could ultimately help remove ozone-harming chemicals from the atmosphere. The perfluorodecalin, against all expectations, was taken up by a water-based suspension of 60 nm diameter polystyrene particles. The scientists believe that this occurred because nanoscopic perfluorodecalin droplets became encapsulated by self-assembled polystyrene nanospheres. Perfluorodecalin has very similar properties to chlorofluorocarbons (CFCs), the inert liquids that are known to destroy the Earth's protective ozone layer. And the Ulm team reckons that aerosol particle-carrying water droplets or ice crystals in clouds may be able to collect up chlorofluorocarbons in the same way, eventually returning them harmlessly to Earth as rain, hail or snow. "I realized that I had developed a useful model system for the simulation of microphysical processes in the stratosphere," Andrei Sommer of the University of Ulm told nanotechweb.org. "In particular, for [simulating] the very complicated interplay between cloud droplets, nanoscopic aerosols emitted by man-made and natural sources, and chlorofluorocarbons - the principal ozone killers." Joanne Aslett
The fly is one of a dozen species of Drosophila to have recently had their genomes sequenced, information that should provide abundant opportunities for identifying genetic changes that cause females of this species, and not others, to retain their fertilized eggs until they are ready to hatch. The result was so surprising that the scientists initially thought it was a mistake. “The student who was timing things came and said ‘wow, these eggs in this species really develop quickly,’ sometimes in less than an hour. That’s not possible,” said Therese Markow, a professor of biology who led the project. “When I went and actually looked at them I saw that they were depositing something that was very advanced, that hatched into a larva right away. In several cases they were hatching as they were being laid.” Even those Seychelles fly eggs that emerged unhatched were at an advanced state of development, the team reports in a forthcoming issue of the Journal of Evolutionary Biology. Most larvae emerged within two hours, compared to an average of nearly 23 hours for the other 10 species in the study. The Seychelles flies also laid larger eggs--nearly double the average volumes found for the other species--and their ovaries have fewer threadlike structures called ovarioles, in which insect eggs mature before fertilization. Live birth could result from changes to the male reproductive strategy as well. Proteins found in the semen of the well-known lab fruit fly, Drosophila melanogaster, stimulate egg laying in the female. A modification of these signals could be responsible for the switch. “That signaling mechanism between the male and the female has changed. We don’t know the basis for it, but we ought to be looking,” Markow said. “It’s very interesting. It tells you who’s really going to be able to control reproduction.” Early hatching offers advantages, the authors say. 
Mobile larvae can burrow into the ground to avoid becoming inadvertent hosts to the eggs of parasitic insects, or a predator’s meal. But harboring offspring for a longer period of time costs the female. The opportunity to take that risk may come with specialization: the Seychelles flies feed only on the fruit of the morinda tree, a tropical plant that produces fruit year-round but is toxic to other fruit flies, giving this single species exclusive access.

One other fly in the study, Drosophila yakuba, also occasionally laid larvae instead of eggs, and its eggs also hatched fairly quickly, most in under 14 hours. It too specializes in a particular fruit, that of the Pandanus tree.

The Seychelles fly, Drosophila sechellia, and D. yakuba are two of about 250 species held by the Drosophila Species Stock Center, which moved to UC San Diego this fall.

Contact: Therese Markow (firstname.lastname@example.org)

Susan Brown | Newswise Science News
Machine learning has returned with a vengeance. I still remember the dark days of the late ’80s and ’90s, when it was pretty clear that the current generation of machine-learning algorithms didn’t seem to actually learn much of anything. Then big data arrived, computers became far more powerful, and the field took off. It now seems that quantum machine learning might provide a further advantage, as a recent paper on searching for Higgs bosons in particle physics data seems to hint.

Learning from big data: In the case of chess, and the first edition of the Go-conquering algorithm, the computer wasn’t just presented with the rules of the game. I’ll annoy every expert in the field by saying that the computer essentially correlated board arrangements and moves with future success. I’m not saying this to disrespect machine learning, but to point out that computers use their ability to gather and search for correlations in truly vast amounts of data to become experts: the machine played 5 million games against itself before it was unleashed on an unsuspecting digital opponent. A human player would have to complete a game roughly every seven minutes, around the clock, for 70 years to gather a similar data set.

The same holds for evaluating Higgs boson observations. The LHC generates data at inconceivable rates, even after lots of pre-processing to remove most of the uninteresting stuff. But even in the filtered data set, collisions that generate a Higgs boson are pretty rare. Sometimes, then, you have a situation that would be perfect for this sort of big-data machine learning, except that the relevant data is actually pretty small. That makes it quite difficult to apply machine learning, let alone train the algorithm in the first place.

To test whether quantum machine learning might be good at sorting through these combinations, the researchers programmed a quantum computer to try to optimize the 36 parameters to fit the given data and subsequently classify the data as either containing a Higgs or not.
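The optimization just described, choosing binary parameter settings so that a global "energy" is minimized, can be emulated classically at toy scale. Everything below (three binary spins selecting weak classifiers, energy defined as training error, all data invented) is an illustrative stand-in, not the paper's actual 36-parameter encoding:

```python
import itertools

def energy(spins, votes, labels):
    """Energy of a spin configuration: the number of training events the
    resulting weighted majority vote misclassifies."""
    errors = 0
    for event_votes, label in zip(votes, labels):
        score = sum(s * v for s, v in zip(spins, event_votes))
        prediction = 1 if score > 0 else -1
        errors += prediction != label
    return errors

# Toy training set: three weak classifiers voting +/-1 on four events.
votes = [(+1, +1, -1), (-1, +1, +1), (-1, -1, +1), (+1, -1, -1)]
labels = [+1, +1, -1, +1]

# An annealer searches this space physically; with 3 spins we can brute-force it.
best = min(itertools.product([0, 1], repeat=3),
           key=lambda s: energy(s, votes, labels))
print(best, energy(best, votes, labels))
```

With 36 binary parameters the search space has 2^36 configurations, which is where annealing hardware (or classical heuristics) replaces brute force.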
The parameters reach their optimum values when the sum of all the energies of the magnets is at a minimum, and the value of that minimum energy is used to decide Higgs/no Higgs. In the end, the researchers identified a set of three parameters that were most sensitive and several that were completely insensitive, including the transverse mass of one of the emitted photons. So far so good. But there are a bunch of non-quantum machine-learning algorithms that should be able to do the same.

Searching for needles having never seen a needle: The important difference between the classical and quantum algorithms was the size of the training data set. For algorithms trained on around 200 collisions, the quantum algorithm significantly outperforms the classical algorithms. On the other hand, the quantum algorithm is significantly worse than the classical algorithms after training on large data sets. This, however, is probably a product of the performance of the underlying hardware rather than of the algorithm itself.

I must admit to being faced with the unenviable task of changing my mind. In the past, I have been highly skeptical of machine learning and artificial intelligence in general. I was astonished at a recent conference when a speaker claimed that current AI was about the equivalent of a cat, a claim I find hard to credit. Still, it is inevitable that AI systems will do more and more tasks, even if they are limited to the role of assistants. At this point, there is not a single profession that I would say is safe from artificial intelligence, except those jobs that are too boring for an AI to be interested in learning.
Manual Reference Pages - ENCODING (n)

NAME
encoding - Manipulate encodings

SYNOPSIS
encoding option ?arg arg ...?

DESCRIPTION
Strings in Tcl are encoded using 16-bit Unicode characters. Different operating system interfaces or applications may generate strings in other encodings, such as Shift-JIS. The encoding command helps to bridge the gap between Unicode and these other formats. It performs one of several encoding-related operations, depending on option. The legal options are:

encoding convertfrom ?encoding? data
    Convert data to Unicode from the specified encoding. The characters in data are treated as binary data, where the lower 8 bits of each character are taken as a single byte. The resulting sequence of bytes is treated as a string in the specified encoding. If encoding is not specified, the current system encoding is used.

encoding convertto ?encoding? string
    Convert string from Unicode to the specified encoding. The result is a sequence of bytes that represents the converted string. Each byte is stored in the lower 8 bits of a Unicode character. If encoding is not specified, the current system encoding is used.

encoding dirs ?directoryList?
    Tcl can load encoding data files from the file system that describe additional encodings for it to work with. This command sets the search path for *.enc encoding data files to the list of directories directoryList. If directoryList is omitted, the command returns the current list of directories that make up the search path. It is an error for directoryList not to be a valid list. If, while searching for an encoding data file, an element in directoryList does not refer to a readable, searchable directory, that element is ignored.

encoding names
    Returns a list containing the names of all of the encodings that are currently available.

encoding system ?encoding?
    Set the system encoding to encoding. If encoding is omitted, the command returns the current system encoding. The system encoding is used whenever Tcl passes strings to system calls.
EXAMPLE
It is common practice to write script files using a text editor that produces output in the euc-jp encoding, which represents the ASCII characters as single bytes and Japanese characters as two bytes. This makes it easy to embed literal strings that correspond to non-ASCII characters by simply typing the strings in place in the script. However, because the source command always reads files using the current system encoding, Tcl will only source such files correctly when the encoding used to write the file is the same. This tends not to be true in an internationalized setting. For example, if such a file was sourced in North America (where the ISO8859-1 encoding is normally used), each byte in the file would be treated as a separate character that maps to the 00 page in Unicode. The resulting Tcl strings will not contain the expected Japanese characters. Instead, they will contain a sequence of Latin-1 characters that correspond to the bytes of the original string. The encoding command can be used to convert such a string to the expected Japanese Unicode characters. For example,

    set s [encoding convertfrom euc-jp "\xA4\xCF"]

would return the Unicode string "\u306F", which is the Hiragana letter HA.
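The same byte sequence can be examined outside Tcl. Here is the example reproduced in Python (the Python codec names are the only assumption; Tcl's [encoding convertfrom euc-jp ...] does the equivalent of the first decode):

```python
raw = b"\xA4\xCF"                # the man page's EUC-JP byte sequence

correct = raw.decode("euc_jp")   # interpret with the right encoding
mangled = raw.decode("latin-1")  # what an ISO8859-1 system would produce

print(correct)   # Hiragana letter HA (U+306F)
print(mangled)   # two Latin-1 characters, one per byte
```

The second decode shows exactly the failure mode the man page warns about: each byte becomes a separate Latin-1 character instead of one Japanese character.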
A water wave is called a deep-water wave if the water's depth is more than 1/4 of the wavelength. The speed of a deep-water wave depends on its wavelength: v = sqrt(g*lambda/(2*pi)). Longer wavelengths travel faster. Let's apply this to a standing wave.

Consider a diving pool that is 5.0 m deep and 10.0 m wide. Standing water waves can be set up across the width of the pool. Because water sloshes up and down at the sides of the pool, the boundary conditions require antinodes at x = 0 and x = L. Thus a standing water wave resembles a standing sound wave in an open-open tube.

a) What are the wavelengths of the first three standing-wave modes for water in the pool? Do they satisfy the condition for being deep-water waves?
b) What are the wave speeds for each of these waves?
c) Derive a general expression for the frequencies f_m of the possible standing waves. Your expression should be in terms of m, g, and L.
d) What are the oscillation periods of the first three standing-wave modes?

© BrainMass Inc. brainmass.com, July 17, 2018
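A quick numerical pass over parts (a), (b), and (d), assuming g = 9.80 m/s². This is a sketch of the computation only; part (c) still requires the derivation, whose result f_m = sqrt(m*g/(4*pi*L)) is used below:

```python
import math

g = 9.80      # gravitational acceleration, m/s^2
L = 10.0      # pool width, m
depth = 5.0   # pool depth, m

for m in (1, 2, 3):
    lam = 2 * L / m                           # antinodes at both ends: lambda_m = 2L/m
    deep = depth > lam / 4                    # deep-water condition
    v = math.sqrt(g * lam / (2 * math.pi))    # deep-water dispersion relation
    f = math.sqrt(m * g / (4 * math.pi * L))  # equivalent to v / lambda
    T = 1 / f                                 # oscillation period
    print(f"m={m}: lambda={lam:.2f} m, deep-water? {deep}, v={v:.2f} m/s, T={T:.2f} s")
```

Note that the m = 1 mode (lambda = 20 m) sits exactly at the margin: lambda/4 = 5.0 m equals the pool depth, so the strict inequality fails, while the m = 2 and m = 3 modes satisfy the deep-water condition comfortably.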
List of impact craters on Earth

This list of impact craters on Earth contains a selection of the 190 confirmed craters given in the Earth Impact Database. To keep the lists manageable, only the largest craters within a time period are included. The complete list is divided into separate articles by geographical region.

Confirmed impact craters listed by size and age

These features were caused by the collision of meteors (consisting of large fragments of asteroids) or comets (consisting of ice, dust particles and rocky fragments) with the Earth. For eroded or buried craters, the stated diameter typically refers to the best available estimate of the original rim diameter, and may not correspond to present surface features. Time units are either in thousands (ka) or millions (Ma) of years.

Young craters (10 ka or less)

Less than ten thousand years old, and with a diameter of 0.1 km (100 meters) or more. The EID lists only 7 or 8 such craters, and the largest in the last 100,000 years (100 ka) is the 4.5 km Rio Cuarto crater in Argentina. However, there is some uncertainty regarding its origins and age, with some sources giving it as < 10 ka while the EID gives a broader < 100 ka. The Kaali impacts (c. 2000 BC) during the Iron Age may have influenced Estonian and Finnish mythology, the Campo del Cielo impacts (c. 2000 BC) could be in the legends of some Native American tribes, while Henbury (c. 2200 BC) has figured in Australian Aboriginal oral traditions.

|Name||Location||Country||Diameter (approx. in km)||Age (ka)||Approx. date|
|Wabar||Rub' al Khali desert||Saudi Arabia||0.1||0.2||~1800 AD|
|Campo del Cielo||Chaco||Argentina||0.1||4.0||2000 BC|
|Henbury||Northern Territory||Australia||0.2||4.2||2200 BC|
|Boxhole||Northern Territory||Australia||0.2||5.4||3400 BC|
|Macha||Sakha Republic||Russia||0.3||7.3||5300 BC|
|Rio Cuarto (disputed)||Córdoba Province||Argentina||4.5||<10?||<8000 BC|

Large craters (10 ka to 1 Ma)

From between 10 thousand years and 1 million years ago, and with a diameter of 1 km or more. The largest in the last one million years is the 14-km Zhamanshin crater in Kazakhstan, which has been described as being capable of producing a nuclear-winter-like event.

|Name||Location||Country||Diameter (km)||Age (thousand years)|
|Meteor Crater||Arizona||United States||1.2||49|
|Tswaing||Pretoria Saltpan||South Africa||1.1||220|
|Zhamanshin||Kazakhstan||Kazakhstan||14.0||900 ± 100|

Larger craters (1 Ma to 10 Ma)

From between 1 and 10 million years ago, and with a diameter of 5 km or more. If uncertainties regarding its age are resolved, then the largest in the last 10 million years would be the 52-km Karakul crater, which is listed in the EID with an age of less than 5 Ma, or the Pliocene. The large but apparently craterless Eltanin impact (2.5 Ma) into the Pacific Ocean has been suggested as contributing to the glaciations and cooling during the Pliocene.

|Name||Location||Country||Diameter (km)||Age (million years)|
|Elgygytgyn||Chukotka Autonomous Okrug||Russia||18||3.5|

Largest craters (10 Ma or more)

Craters with a diameter of 20 km or more are all older than 10 Ma, with the exception of Karakul, whose age is uncertain. There are more than forty such craters. The largest two within the last hundred million years have been linked to two extinction events: Chicxulub for the Cretaceous–Paleogene and the Popigai impact for the Eocene–Oligocene extinction event.
Large unconfirmed craters

The largest unconfirmed craters, 200 km or more in diameter, are significant not only for their size but also for the possible coeval events associated with them. For example, the Wilkes Land crater has been connected to the massive Permian–Triassic extinction event. The sortable table below is arranged by diameter.

|Name||Location||Country||Diameter (km)||Age (million years)|
|Mistassini-Otish impact crater||Quebec||Canada||600||2100|
|Australian impact structure||Northern Territory||Australia||600||545|
|Shiva crater||offshore of India||India||500||65|
|Wilkes Land crater||Wilkes Land||Antarctica||480-500||250-500|
|Czech Crater||Central Europe||Czech Republic||300-500||2000|
|Ishim impact structure||Akmola Region||Kazakhstan||300||460-430|
|Bedout||offshore of Western Australia||Australia||250||250|
|Falkland (Malvinas) Plateau anomaly||offshore of South America||Falkland Islands||250||250 (uncertain; estimated to be Late Palaeozoic)|
|East Warburton Basin||Southern Australia||Australia||200+||300-360|

All craters listed alphabetically

As of 2017, the Earth Impact Database (EID) contains 190 confirmed craters. The table below is arranged by each continent's percentage of the Earth's land area, with Asian and Russian craters grouped together per EID convention. The global distribution of known impact structures apparently shows a surprising asymmetry, with the small but well-funded European continent having a large percentage of confirmed craters. It is suggested that this situation is an artifact, highlighting the importance of intensifying research in less studied areas like Antarctica, South America and elsewhere.
|Continent||% of land area||% of the 190 craters||Number|
|Asia & Russia||30%||16%||31|

See also
- List of impact craters in Asia and Russia
- List of impact craters in Africa
- List of impact craters in North America
- List of impact craters in South America
- List of impact craters in Antarctica
- List of impact craters in Europe
- List of impact craters in Australia
- Earth Impact Database
- Extinction event
- Impact events
- Impact Field Studies Group
- List of unconfirmed impact craters on Earth
- Traces of Catastrophe book from Lunar and Planetary Institute - comprehensive reference on impact crater science
- Giant-impact hypothesis
A Relativist's Toolkit: The Mathematics of Black-Hole Mechanics
eBook - 2004

This textbook fills a gap in the existing literature on general relativity by providing the advanced student with practical tools for the computation of many physically interesting quantities. The context is provided by the mathematical theory of black holes, one of the most successful and relevant applications of general relativity. Topics covered include congruences of timelike and null geodesics, the embedding of spacelike, timelike and null hypersurfaces in spacetime, and the Lagrangian and Hamiltonian formulations of general relativity.

Publisher: Cambridge, UK ; New York : Cambridge University Press, 2004.
Branch Call Number: ELECTRONIC BOOK
Characteristics: 1 online resource (xvi, 233 p.) : ill.
Call Number: ELECTRONIC BOOK
angular-openlayers is a set of AngularJS directives for OpenLayers 3. Unlike alternatives (angular-openlayers-directive, ngeo, etc.), angular-openlayers is a very thin abstraction layer on top of OpenLayers 3, meaning that angular-openlayers directives have the same attributes as the underlying OpenLayers objects. As a result, anyone familiar with OpenLayers 3 will be proficient with angular-openlayers in an instant.

Documentation && getting started
- A tutorial is available in
- Examples are available in
- An exhaustive API documentation is available in

bower install angular-openlayers

Code: Mozilla Public License Version 2.0 (MPL-2.0). Documentation: CC-BY-SA 3.0

Copyright 2015 Orange
Cloning methods rely on molecular biological processes that occur in nature. The techniques are continually being refined and simplified; as a result, many strategies now permit more efficient cloning of sequences of interest from their sources. These cloning strategies include the following.

PCR cloning

PCR cloning is a method in which double-stranded DNA fragments amplified by PCR are ligated directly into a vector. PCR cloning offers some advantages over traditional cloning, which relies on digesting double-stranded DNA inserts with restriction enzymes to create compatible ends, purifying and isolating sufficient amounts, and ligating into a similarly treated vector of choice (see Insert preparation). With PCR amplification, this cloning technique requires much less starting template material, which may be cDNA, genomic DNA, or another insert-carrying plasmid (see Subcloning basics). Furthermore, PCR cloning provides a simpler workflow by circumventing the requirement for suitably located restriction sites and their compatibility between the vector and insert. Nevertheless, there are a number of considerations related to: PCR primers and amplification conditions; the cloning method of choice and the cloning vectors used; and, finally, confirmation of successful cloning and transformation.

With respect to PCR amplification of a sequence of interest, primers must be designed and PCR conditions (components and cycling) optimized for efficient and specific amplification of the template. Primer design tools are available to bioinformatically evaluate and select suitable target-specific primer sequences for amplification. Ligation requires that either the insert or the vector has 5′-phosphorylated termini; therefore, if the cloning vector lacks 5′-phosphorylated ends, 5′-phosphate groups must be added to the PCR primers during synthesis or by T4 polynucleotide kinase for successful ligation.
For PCR optimization, reaction component concentrations, annealing temperatures, and template amounts are of importance. TA cloning and blunt-end cloning represent two of the simplest PCR cloning methods. The choice between them depends upon the nature of the vector and the type of PCR enzyme used in cloning.

TA cloning employs a thermostable Taq DNA polymerase capable of amplifying short DNA sequences. This enzyme lacks 3′→5′ proofreading activity and features a terminal transferase activity that adds an extra deoxyadenine at the 3′ end of the amplicons (3′ dA). The resulting PCR products with 3′ dA overhangs are readily cloned into a linearized TA cloning vector containing complementary 3′ deoxythymine (3′ dT) overhangs (Figure 1). While relatively straightforward, the limitations of this method include the length of the insert (up to 5 kb), the inability to clone inserts directionally, and the high error rate associated with Taq DNA polymerase (approximately 1.1 x 10^-4 to 8.9 x 10^-5 errors/bp).

Blunt-end cloning involves the ligation of an insert into a linearized vector where both DNA fragments lack overhangs. Blunt-end inserts can be produced using high-fidelity DNA polymerases with 3′→5′ exonuclease, or proofreading, activity. Their proofreading activity improves the sequence accuracy of the amplified products; however, limitations include lower ligation efficiencies when inserting into blunt-end cloning vectors and the inability to clone directionally. Ligation efficiency can be improved by incubating the amplicons with a Taq DNA polymerase and dATP in a procedure called “3′ dA tailing” (incubate 20–30 minutes at 72°C), then purifying the 3′ dA-tailed products (Figure 1).

Figure 1. Common PCR cloning strategies.

To further simplify and streamline the cloning workflow, specialized vectors have been developed that place an insert into a vector, for example, without using a ligase.
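As a quick back-of-the-envelope check on why the Taq error rate matters at the TA-cloning size limit: the figures below come from the text, while the Poisson error-free estimate is my own assumption of independent errors.

```python
import math

taq_error_rate = 1.1e-4   # errors per bp (upper figure quoted above)
insert_length = 5000      # bp, the TA cloning insert limit mentioned above

expected_errors = taq_error_rate * insert_length   # mean errors per amplified copy
p_error_free = math.exp(-expected_errors)          # Poisson probability of zero errors

print(expected_errors)            # 0.55 expected errors in a full-length 5 kb insert
print(round(p_error_free, 2))     # ~0.58 chance a given clone is error-free
```

Roughly two in five full-length clones would carry at least one mutation, which is why proofreading polymerases (and blunt-end cloning) are preferred for long or sequence-critical inserts.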
One such class of vectors includes the Invitrogen™ TOPO™ cloning vectors, which contain covalently linked DNA topoisomerase I that functions as both a restriction enzyme and a ligase (learn more about TOPO cloning technology). Compared to conventional PCR cloning vectors, these vectors offer shorter ligation reaction times (e.g., 5 minutes), greater cloning efficiencies (e.g., >95% positive clones), and a much simpler protocol. Furthermore, directional cloning of the PCR products can be achieved with a specially designed TOPO vector using a specific primer design. Regardless of the cloning method chosen, cloning efficiencies are significantly improved by purification of the PCR amplicons prior to the ligation reaction. Gel purification helps remove salts, nucleotides, nonspecific amplicons, and primer-dimers. After ligation and transformation into the appropriate competent cells, the resulting colonies need to be screened carefully for the correct insert, as well as its proper frame and orientation, for subsequent studies to analyze gene fusions and/or protein expression.

Subcloning

Subcloning refers to moving one fragment of a plasmid into another plasmid that can serve as a vector. There are a variety of reasons why it may be necessary to transfer the fragment of interest into a different vector backbone. For instance, the new vector may possess a specific marker for antibiotic selection or fluorescent expression. Subcloning may also be performed to move a cloned fragment to an expression vector of a more suitable host for the study (e.g., bacteria, mammals, insects, plants, etc.); to place the gene of interest under a different expression promoter (e.g., from a constitutive to an inducible promoter); or to tag or fuse the experimental gene with another protein or a marker. Whatever the goal of the experiment may be, the two most common approaches to subcloning rely on restriction digestion and/or PCR cloning.
Subcloning by restriction digestion is the more traditional of the two methods. In this workflow, fragments from the vector and the insert are double-digested with two restriction enzymes that generate sticky or cohesive ends (Figure 2). Since the vector and the insert can ligate in only one orientation (i.e., directionally) and the digested ends of the vector are incompatible for self-ligation, this is arguably the preferred and most common method among the possible restriction enzyme options (see Insert preparation for some options). For subcloning in protein expression or gene regulation studies, the selected restriction enzyme(s) should allow in-frame cloning of the fragment of interest in close proximity to the start codon, as appropriate.

A second popular approach uses PCR to amplify the region of interest from the plasmid. The resulting PCR product is then cloned into the desired vector. TA cloning or blunt-end cloning methods can be used as described in the PCR cloning section, but neither approach maintains the directionality of the insert. To achieve directional cloning, restriction sites that are present in the destination vector can be incorporated into the PCR amplicons by designing PCR primers with the restriction sites at their 5′ ends. Following the PCR reaction, the PCR products are restriction digested, purified, and subcloned into the restriction sites of the vector.

There are a few considerations when designing PCR primers with restriction enzyme sites. It is imperative that the introduced restriction sites are unique and not present within the sequence of the fragment to be subcloned. The restriction sites should also be carefully designed to allow in-frame expression of the subcloned DNA. The cleavage efficiency of most restriction enzymes is greatly reduced when their recognition sites are close to the termini of linear DNAs.
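A minimal sketch of these primer-design rules, using a hypothetical primer built from a 5′ spacer, an EcoRI site (GAATTC), and a template-matching region. All sequences are invented, and Tm uses the simple Wallace rule (2 °C per A/T, 4 °C per G/C) as a rough stand-in for a real primer-design tool:

```python
def wallace_tm(seq):
    """Rough Tm estimate (Wallace rule): 2 C per A/T, 4 C per G/C."""
    seq = seq.upper()
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

spacer = "GCGC"                       # extra 5' bases so the enzyme can cut near the end
ecori_site = "GAATTC"                 # EcoRI recognition sequence
annealing = "ATGGCTAGCAAGGAGAAGACT"   # template-matching 3' region (hypothetical)

primer = spacer + ecori_site + annealing

# Only the template-matching region counts toward the annealing Tm.
print(len(primer), wallace_tm(annealing))
```

Computing wallace_tm(primer) instead would overestimate binding to the template, since the spacer and the restriction site do not anneal during the first PCR cycles.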
To ensure proper digestion of the PCR fragments, a sequence with an extra 4–8 nucleotides (sometimes called a “leader” or “spacer” sequence) is recommended at the 5′ end of the restriction sites on the primers (Figure 3). Although there is no consensus on the optimal spacer sequence, a general recommendation is to avoid sequences that may result in primer-dimers or secondary structure formation (e.g., palindromes and inverted repeats). Furthermore, the primer recognition sequence should be longer than the restriction site and the spacer combined, to ensure specificity and proper binding to the target. When calculating the Tm of the primers, only sequences that are perfect matches to the template should be included. Finally, purification of the primers may be necessary to ensure full-length DNA oligonucleotides when using long primer sequences.

Figure 3. Schematic workflow of PCR subcloning in combination with restriction digestion (RE = restriction enzyme site).

Other subcloning strategies have been devised to take advantage of special vectors that do not require the use of restriction enzymes or a ligase. One such example is Invitrogen™ Gateway™ cloning, which exploits the unique recombination activities of the family of Invitrogen™ Clonase™ enzymes (Figure 4). This method involves the use of specially designed Gateway-specific plasmids and Gateway-compatible insert ends (att sites) for recombination.

Figure 4. Gateway cloning strategies. ccdB is a toxic gene used in bacterial cell selection.

DNA library construction

In molecular cloning, DNA library construction refers to the creation of clones that carry DNA fragments representing the complete genomic DNA (gDNA) of a species, or the complementary DNA (cDNA) of RNA transcripts representing the expressed genome. By constructing DNA libraries, thousands of genetic fragments can be conveniently archived and expanded for downstream applications, such as genotyping and phenotypic screening.
gDNA libraries serve as helpful tools to study the genetic composition of different species, or gene mutations that occur in diseases such as cancer. cDNA libraries, on the other hand, are useful for expression analyses of genes and transcript variants based on cell type and tissue origin (spatial), as well as time point (temporal).

The construction of gDNA and cDNA libraries shares many similarities but also some important differences. Both strategies include nucleic acid purification, sample preparation (e.g., restriction digestion), vector cloning, vector introduction into a suitable host (e.g., transformation or transduction), and clonal selection. Because the starting materials differ between a gDNA library and a cDNA library, their purification and preparation employ different approaches; however, once the gDNA or cDNA fragments are cloned into the desired vector, the same workflow may be followed.

For genomic library preparation, gDNA is purified from the organism, tissues, or cells of interest. Extracted gDNA is then digested, isolated, and ligated into the vector of interest with compatible ends. Partial digestion of the genome is often carried out with a restriction enzyme that has prevalent cutting sites, so that sequence overlaps between fragments allow mapping of the cloned inserts (Figure 5).

Figure 5. Schematic diagram of complete vs. partial digestion of a fragment by a restriction enzyme with four cutting sites. Partial digestion results in overlapping sequences among fragments for mapping. (Only some possible partially digested fragments are shown here for simplicity.)

Vector selection for gDNA libraries is an important consideration because the gene fragments used in library construction are often large (e.g., >20 kb). The choice of cloning vector, in turn, determines the method used to deliver insert-carrying vectors into the host (Table 1).

Table 1.
Common vector types, cloned fragment lengths, and vector delivery methods in library construction. (The table lists, for each vector type, the length of cloned DNA in kb and the vector delivery method; vector types include BAC (bacterial artificial chromosome) and YAC (yeast artificial chromosome).)

Ligation products or recombinant DNA can be introduced directly into bacterial cells via transformation, or packaged into bacteriophage for infection, or “transduction,” of the host cells (Figure 6). The transformed or transduced cells are then archived, expanded, and sequenced in downstream experiments. Whole-genome sequences of many organisms, including the first whole human genome sequence, were determined using this basic strategy in the early 2000s.

Figure 6. Schematic workflow of genomic library preparation using a λ phage vector. A genomic DNA sample is partially digested with Sau3AI, after which ~20-kb fragments (the ideal size for viral packaging) are isolated for ligation with the viral gene fragments. The left and right arms of the λ vector comprise essential components for viral growth in the bacterial cells.

For cDNA library preparation, total RNA is extracted from a biological source (e.g., cells or tissue), after which mRNA is reverse transcribed into complementary DNA (cDNA). This process is known as first-strand cDNA synthesis. The second strand is then synthesized to obtain double-stranded cDNA. The resulting double-stranded fragments may be ligated directly into a blunt-end cloning vector (random cloning), or “tagged” at the ends with restriction sites for directional cloning (Figure 7). cDNA libraries that faithfully represent the expressed genome depend on several factors, including the quality and integrity of the source mRNA population.
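The complete-versus-partial digestion idea of Figures 5 and 6 can also be illustrated computationally. This toy sketch treats a sequence as a coordinate interval and a digest as a set of cut positions; the sequence length and site positions are invented, and real digests of course act on double-stranded DNA and leave sticky ends.

```python
from itertools import combinations

def fragments(seq_len, cuts):
    """Split the interval [0, seq_len) at the given cut positions; return (start, end) pairs."""
    points = [0] + sorted(cuts) + [seq_len]
    return [(points[i], points[i + 1]) for i in range(len(points) - 1)]

def partial_digest(seq_len, sites):
    """Union of fragments over every subset of sites actually cut (a partial digest)."""
    out = set()
    for r in range(len(sites) + 1):
        for chosen in combinations(sites, r):
            out.update(fragments(seq_len, chosen))
    return sorted(out)

sites = [10, 25, 40, 70]           # four hypothetical recognition sites in a 100-unit sequence
print(fragments(100, sites))       # complete digestion: 5 adjacent, non-overlapping pieces
print(partial_digest(100, sites))  # also overlapping pieces such as (0, 25) and (10, 40)
```

Complete digestion yields only the five adjacent, non-overlapping fragments, while the partial digest additionally yields pieces such as (0, 25) and (10, 40), whose overlap is exactly what allows cloned inserts to be ordered during mapping.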
For the reverse transcription step, it is also crucial that the reverse transcriptase be capable of synthesizing cDNA from a mixed and complex population, including long RNA templates and rare RNA transcripts, for adequate coverage within the libraries (see reverse transcriptase choices). Using the basic strategy outlined in Figure 7, many cDNA library preparations were used to construct comprehensive collections, including the Mammalian Gene Collection (MGC), the largest NIH-sponsored public collection of cDNA clone libraries of mammalian species, including human, mouse, and rat.

Figure 7. cDNA cloning strategies using mRNA with a poly-A tail. In random (non-directional) cloning, double-stranded cDNAs are ligated directly to a blunt-end cloning vector. In directional cloning, adapters with rare restriction sites (e.g., NotI and SalI) are ligated to the double-stranded cDNA ends to clone into a vector with compatible ends.

Following library construction, one of the goals is to characterize the clones by sequencing the inserts. Insert sizes represented within these libraries can often range from 25 kb to 300 kb, depending on the type of vector and the genome size of the organism of interest. For Sanger sequencing, once the most widespread method of DNA sequencing, the upper limit of a sequencing reaction with good-quality reads is generally less than 1 kb. To overcome this dilemma, researchers can turn to shotgun cloning and sequencing. In this approach, the large cloned inserts are further fragmented by physical or enzymatic means and subcloned into another vector; the smaller cloned fragments are then sequenced. These sequences are thereafter reassembled based on sequence overlaps (termed contiguous sequences, or “contigs”) using bioinformatics programs to ultimately recover the original long sequence (Figure 8).

Figure 8. Schematic workflow of shotgun cloning.
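The overlap-based reassembly step in Figure 8 is normally performed by assembly software; a greatly simplified greedy sketch conveys the idea. This is a toy illustration with made-up reads — it ignores sequencing errors, repeats, and reverse complements, all of which real assemblers must handle.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is also a prefix of b (at least min_len)."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads, min_len=3):
    """Repeatedly merge the pair of reads with the largest overlap into a growing contig."""
    reads = list(reads)
    while len(reads) > 1:
        n, a, b = max(
            ((overlap(a, b, min_len), a, b)
             for a in reads for b in reads if a is not b),
            key=lambda t: t[0],
        )
        if n == 0:          # no overlaps left: stop with multiple contigs
            break
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[n:])  # merge, keeping the overlap only once
    return reads

# Overlapping "reads" covering the string GENOMESEQUENCE
reads = ["GENOMES", "OMESEQU", "EQUENCE"]
print(greedy_assemble(reads))   # ['GENOMESEQUENCE']
```

Each round merges the pair of reads with the longest suffix–prefix overlap, which is the essence of contig building.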
Shotgun sequencing has been instrumental in whole-genome sequencing of many organisms, ranging from viruses and bacteria to humans. The method can be used to sequence a genome de novo, as well as to improve the quality of an already-sequenced genome by verifying reads and filling in gaps. During the first sequencing of the human genome, the publicly funded Human Genome Project employed shotgun sequencing of large gene fragments that had been cloned into a bacterial artificial chromosome, or BAC, vector. The genomic positions of the cloned fragments had been defined prior to shotgun cloning, making their shotgun sequence assembly easier. Hence, this method is known as hierarchical shotgun sequencing (Figure 9A). It is also called clone-by-clone sequencing due to the use of BAC clones as a source [3,4].

Concurrent with the Human Genome Project, another, privately funded whole-genome sequencing project, led by Craig Venter, applied shotgun sequencing directly to human genomic DNA (instead of to cloned fragments that had already been mapped). This approach is known as whole-genome shotgun sequencing (Figure 9B). In theory, shotgun sequencing requires no prior information about the genome or genetic maps, saving time and resources. Nevertheless, it is helpful to have reference genetic maps during sequence assembly, because a large amount of computational power is required in the whole-genome shotgun approach, especially for organisms with sizable genomes. Genetic mapping, or fingerprinting, is routinely carried out using restriction enzymes, as in the RFLP and AFLP methods.

Figure 9. Schematic workflow of two shotgun sequencing approaches used in whole human genome sequencing.
Hurricane - A Natural Disaster

A hurricane is a huge storm. Hurricanes rotate in a counter-clockwise direction around an "eye." The center of the storm, or "eye," is the calmest part.

When does it occur? Hurricanes are usually accompanied by electrical storms and typically occur during summer and early autumn. In a single day, a hurricane can release the amount of energy necessary to satisfy the electrical needs of the entire United States for about six months.

How does it form? Hurricanes are among the most powerful and deadliest forces in nature, and they bring various kinds of effects. Floods, high winds, and heavy rain are some of the common effects of a hurricane.
The tiny single-celled ‘diatom’, which first evolved hundreds of millions of years ago, has a hard silica shell which is iridescent – in other words, the shell displays vivid colours that change depending on the angle at which it is observed. This effect is caused by a complex network of tiny holes in the shell which interfere with light waves. UK scientists have now found an extremely effective way of growing diatoms in controlled laboratory conditions, with potential for scale-up to industrial level. This would enable diatom shells to be mass-produced, harvested and mixed into paints, cosmetics and clothing to create stunning colour-changing effects, or embedded into polymers to produce difficult-to-forge holograms. Manufacturing consumer products with these properties currently requires energy-intensive, high-temperature, high-pressure industrial processes that create tiny artificial reflectors. But farming diatom shells, which essentially harnesses a natural growth process, could provide an alternative that takes place at normal room temperature and pressure, dramatically reducing energy needs and so cutting carbon dioxide emissions. The process is also extremely rapid – in the right conditions, one diatom can give rise to 100 million descendants in a month. This ground-breaking advance has been achieved by scientists at the Natural History Museum and the University of Oxford, with funding from the Engineering and Physical Sciences Research Council (EPSRC). The project involved a range of experts from disciplines including biology, chemistry, physics, engineering and materials science. “It’s a very efficient and cost-effective process, with a low carbon footprint,” says Professor Andrew Parker, who led the research. “Its simplicity and its economic and environmental benefits could in future encourage industry to develop a much wider range of exciting products that change colour as they or the observer move position. 
What’s more, the shells themselves are completely biodegradable, aiding eventual disposal and further reducing the environmental impact of the process life cycle.” The new technique basically lets nature do the hard work. It involves taking a diatom or other living cells such as those that make iridescent butterfly scales, and immersing them in a culture medium – a solution containing nutrients, hormones, minerals etc that encourage cell subdivision and growth. By changing the precise make-up of the culture medium, the exact iridescent properties of the diatoms or butterfly scales (and therefore the final optical effects that they create) can be adjusted. The researchers estimate that up to 1 tonne/day of diatoms could be produced in the laboratory in this way, starting from just a few cells. Within as little as two years, an industrial-scale process could be operational. “It’s a mystery why diatoms have iridescent qualities,” says Professor Parker. “It may have something to do with maximising sunlight capture to aid photosynthesis in some species; on the other hand, it could be linked with the need to ensure that sunlight capture is not excessive in others. Whatever the case, exploiting their tiny shells’ remarkable properties could make a big impact across industry. They could even have the potential to be incorporated into paint to provide a water-repellent surface, making it self-cleaning.” Natasha Richardson | alfa
Authors: George Rajna

Researchers at Tokyo Institute of Technology (Tokyo Tech) have brought the worlds of physics and finance one step closer to each other. Such chirping signals a loss of heat that can slow fusion reactions, a loss that has long puzzled scientists. Physicists from the Institute of Applied Physics of the Russian Academy of Sciences, researchers from Chalmers University of Technology and computer scientists from Lobachevsky University have developed a new software tool called PICADOR for numerical modeling of laser plasmas on modern supercomputers.

Comments: 28 Pages. [v1] 2018-03-29 08:57:21
Humans Have Drastic Effect On Sediment Transfer To World's Coasts, According To CU-Boulder Study April 14, 2005 Note to Editors: Contents embargoed until 2 p.m. EST April 14. A new analysis of data from more than 4,000 rivers around the world indicates humans are having profound and conflicting effects on the amount of sediment carried by rivers to coastal areas, with consequences for marine life and pollution control, according to a University of Colorado at Boulder environmental scientist. The report, "Impact of Humans on the Flux of Terrestrial Sediment to the Global Coastal Ocean," appears in the April 15 edition of Science Magazine. Lead author James Syvitski, director of the Institute of Arctic and Alpine Research at CU-Boulder, said 15 billion tons of sediment are transferred to coastal areas around the world every year. "In other words, the world's rivers are carrying and depositing enough sediment each year to cover the state of Texas with one inch of sediment, or the state of Colorado with three inches of sediment," Syvitski said. The report found that humans are stirring up much more sediment than expected, about 2.3 billion metric tons annually, through agriculture and other soil erosion activities. However, manmade reservoirs are simultaneously reducing the flux of sediment reaching the world's coasts by about 1.4 billion metric tons per year. "We're churning up our landmasses, and if not for the reservoirs we'd be flooding the coastlines with sediment," Syvitski said. "As an example, the Nile reservoir holds back 98 percent of natural sediment from that river's coastal region." The report estimated that more than 100 billion metric tons of sediment and carbon are now sequestered in reservoirs built mostly in the last 50 years. "Take either part of the two-sided human influence away, and there would be drastic consequences for sediment transfer," Syvitski said. Sediment transfer levels have many effects on coastal zones, according to Syvitski. 
"If we add to the sediment load, land mass can grow and accumulate at river mouths. This affects harbors and makes more dredging necessary to keep shipping lines open. Also, coastal fish farms and coral reefs can be severely impacted by too much sediment. The same concept applies to coastal wetlands and sea grass communities," he said. "Natural productivity could go way down if these areas were drowned in sediment. "On the other hand, if too little sediment reaches the coast, the coastline will retreat from ocean storms," said Syvitski. "Sediment flux is also tied to nutrient transfer, particularly carbon, so reduced sediment transfer means less nourishment for marine communities in coastal areas. Additionally, if new sediment isn't deposited in coastal areas, buried pollutants can be churned up by wave action." Syvitski worked with scientists at the University of New Hampshire and the Netherlands' Delft University of Technology to create a scientific model capable of globally consistent estimates of sediment flux near river mouths. The team was able to develop the model so they could determine what sediment loads would naturally occur without human activity. "It was particularly useful to be able to 'strip away' dams, reservoirs and other human impacts to make comparisons," Syvitski said. The study also highlighted geographical differences in human influence on sediment transfer. "Some regions of the world have a bigger problem than others. There have been huge changes since ancient times near inland seas like the Mediterranean and the Black Sea. Areas like the Arctic haven't seen much change. The greatest impacts have occurred where more people live," Syvitski said. African and Asian rivers carry a greatly reduced sediment load, while Indonesian rivers deliver much more sediment, according to the report. 
The report was completed for the International Geosphere Biosphere Programme, a large-scale effort by scientists to study how humans have been and will continue to affect the entire planet. "It's one of the largest science programs in the world, with representatives from nearly every country," Syvitski said. He has been a member of the steering committee for the IGBP's Land Ocean Interaction in the Coastal Zone division for the last six years. To view the entire report, visit http://www.sciencemag.org.

Contact: James Syvitski, (303) 492-7909; Mike Liguori, (303) 492-3117
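The "one inch of sediment over Texas" comparison is a unit-conversion exercise, and a back-of-the-envelope check lands at the same order of magnitude. The area and bulk-density figures below are approximate assumptions made for this sketch, and the result shifts by roughly a factor of two depending on the density assumed.

```python
# Rough check of "15 billion tons of sediment ~ one inch over Texas".
# All inputs are approximate assumptions for illustration only.
TEXAS_AREA_M2 = 695_662 * 1e6   # ~695,662 km^2
SEDIMENT_KG = 15e9 * 1000       # 15 billion metric tons
BULK_DENSITY = 1500             # kg/m^3, a typical-ish dry sediment value (assumed)

volume_m3 = SEDIMENT_KG / BULK_DENSITY
depth_m = volume_m3 / TEXAS_AREA_M2
print(f"depth ~ {depth_m / 0.0254:.2f} inches")
```

With these assumptions, 15 billion tons correspond to roughly half an inch to an inch of sediment over an area the size of Texas, which is the same order of magnitude as the article's comparison.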
Report the coordinates of the patches at the given distances of the turtles in the direction of their headings.

Usage

patchAhead(world, turtles, dist, torus = FALSE)

# S4 method for worldNLR,agentMatrix,numeric
patchAhead(world, turtles, dist, torus = FALSE)

Arguments

world: WorldMatrix or worldArray object.
turtles: AgentMatrix object representing the moving agents.
dist: Numeric. Vector of distances from the turtles.
torus: Logical to determine if the world is wrapped. Default is torus = FALSE.

Value

Matrix (ncol = 2) with the first column "pxcor" and the second column "pycor", representing the coordinates of the patches at the distances dist of the turtles, in the direction of the turtles' headings. The order of the patches follows the order of the turtles. If torus = FALSE and the patch at distance dist of a turtle is outside the world's extent, NA are returned for the patch coordinates. If torus = TRUE, the patch coordinates from a wrapped world are returned.

References

Wilensky, U. 1999. NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University. Evanston, IL.

Examples

w1 <- createWorld(minPxcor = 0, maxPxcor = 9, minPycor = 0, maxPycor = 9)
t1 <- createTurtles(n = 10, coords = randomXYcor(w1, n = 10))
patchAhead(world = w1, turtles = t1, dist = 1)
#>       pxcor pycor
#>  [1,]     2     6
#>  [2,]     4     4
#>  [3,]     8     5
#>  [4,]     7     7
#>  [5,]     4     6
#>  [6,]     4     2
#>  [7,]     4     7
#>  [8,]     5     0
#>  [9,]     7     4
#> [10,]     7     2
Experimental NASA research models based on NASA’s Solar Terrestrial Relations Observatory show that the CME was not Earth-directed and it left the sun at around 570 miles per second. On July 1, 2013, the sun erupted with a coronal mass ejection, or CME – shown here as the lighter-colored gas moving off to the left – which soared off in the direction of Venus and Mars. This image was captured by the joint ESA/NASA Solar and Heliospheric Observatory. Image Credit: ESA and NASA/SOHO

The CME may, however, pass by NASA’s Messenger, Spitzer and STEREO-B satellites, and their mission operators have been notified. There is only very slight particle radiation associated with this event, which is what would normally concern operators of interplanetary spacecraft, because the particles can trip computer electronics aboard interplanetary spacecraft. If warranted, operators can put spacecraft into safe mode to protect the instruments from the solar material.

NOAA's Space Weather Prediction Center (http://swpc.noaa.gov) is the U.S. government's official source for space weather forecasts, alerts, watches and warnings. Updates will be provided as needed.

Karen C. Fox | EurekAlert!
Here is a link to Dr. Sandy Ganzell’s previous FOM mid-term: here it is

Enjoy and good studying!

By the start of class on Friday, September 25th, you should post on your blogs an attempt at proving the following: I should also point out that the definition of even is this: an integer n is even if there exists an integer k such that n = 2k. Note that this is not an assignment — you are not required to post this. However, if everyone posts an attempt at this proof by the start of class on Friday, then your mid-term will be a very lovely experience.

A fitting article as we begin our journey into Proof Land.

Exercises 1, 2, 7, 8, 13 from Section 2.3
Exercises 1, 3, 4 from Section 2.4
Exercises 3, 7, 10 from Section 2.5
Exercises A2, A4, A6, B11 from Section 2.6
Exercises 1, 2, 3, 5, 9 from Section 2.7
Exercises 2, 7, 10, 13 from Section 2.9
Read all of these sections (including pages 54 – 55, Section 2.8).

On Monday’s FOM class we discussed some homework problems, all of them dealing with sets. One question was raised by Alexis (I believe), and sounded something like this: Which integers are in the set ? For starters, we should note that this set is, indeed, a subset of (why?). However, the actual question at hand does not have an obvious answer. Setting , we see that . Similarly, by choosing and , we see that . For the purposes of comparison, consider the set . Several sample values for the integers suggest that not all integers are in . In fact, it is pretty clear that No such pattern appears to emerge when we work with the original set , though, and so what might this suggest? I think — and this is only an opinion — this suggests a slight rephrasing of the original question: Which integers can I build using combinations of and ? If you can answer this question (or at least develop an intuitive answer), then the original question is all done, too.

Another point of discussion yesterday concerned an indexed union.
Although this is described quite well in our textbook (and you should re-read it for yourselves!), it seems like it’s worth exploring a bit further. In fact, I’ll even do this in well-organized sections (including the one containing this very paragraph). Note, however, that our focus will be on indexed unions, and you should keep in mind that we also care about indexed intersections and indexed other-operations.

A Finite Amount of Unions

Suppose I have five sets. “Sets of what?” you ask, to which I reply “Sets of anything!” Each individual set may be an interval of real numbers or a set of cats or a set of pictures or whatever. The important part is that they’re sets, and since I have five of them I’ll name them like this: A_1, A_2, A_3, A_4, and A_5. Then, of course, we can form the union of these sets, which we like to notate as

A_1 ∪ A_2 ∪ A_3 ∪ A_4 ∪ A_5.

We can notate this new set more efficiently (i.e. using less horizontal space) by adopting the following notation:

⋃_{i=1}^{5} A_i

This notation should remind us of so-called “sigma summation” notation that one first encounters in calculus (often when learning about Riemann sums). However, the notation is a bit incomplete, at least technically. The reader has to infer that the expression i runs through the first five natural numbers, starting at 1 and continuing through 5. If we wanted to be clearer, we could spell this out a bit more by using the notation

⋃_{i ∈ {1, 2, 3, 4, 5}} A_i

The expression above can be translated into the following English: “for every natural number i between (and including) 1 and 5 we have a set, A_i, and now we are union-ing them all together.” The expression i is referred to as the index, and the set of elements it “runs through” is referred to as the index set; that is, the set {1, 2, 3, 4, 5} above is the index set. In slightly shorter English, then, the above set is the union of a collection of sets that is indexed by the first five natural numbers.

Example (1). Suppose and . Then it follows that

Two important observations to make about this (finite) indexed union.
First, the index is not required or used in the description of the final set. Second, the sets being unioned, the A_i’s, have absolutely nothing to do with the index set.

Example (2). Here is an example, like your homework problems, where the description of the sets being unioned depends on the index. Let’s use for each set so that and so on. (Note that since the index is a natural number, it makes sense to write down .) It then follows

Example (3). Here is a strange-looking example, one that uses a bizarre (and rather arbitrary) index set. This example is important but silly, I should point out, but let’s discuss that after it’s all done. We’ll use the index set . Note that this is a three-element set, and so it would have made much more notational sense to instead use the set {1, 2, 3}, but we’re being weird on purpose here. We use this 3-element index set to keep track of three sets which we shall notate using the given indices; that is, we will use a set , a set , and, finally, a set . We then have

To write this out in any more meaningful or concrete way, I’d have to tell you what each set is, but before doing anything like that, take note: again, there is no relationship (necessarily) between the kinds of sets being unioned and the index set. How could there be? We haven’t even specified which three sets we’re union-ing above! We’ve only (bizarrely) named them according to the index. And, indeed, this is the important part and main point of this example: the index set, no matter what it is, is used to keep track of the sets we are union-ing (or intersecting or whatever-else-ing); it is just a way to help name or (sometimes) describe the sets. If the index is a natural number, that doesn’t stop the corresponding set from containing real numbers, rational numbers, abstract symbols or whatever else. If the index is an arbitrary symbol, the set is still free to contain whatever kinds of elements we wish.
For the sake of completing this example, then, let's go ahead and say that the first of the three sets is $\left[-\frac{1}{2}, 1\right]$, i.e. the closed interval of real numbers between negative one half and one, and pick any two concrete sets you like for the other two indices. We then have a completely concrete, if strangely indexed, union.

Of course, this process of union-ing over a finitely-indexed collection of sets works similarly for any sized index set, and, indeed, even for index sets that are not finite. Suppose we have a large collection of sets, one for each element in the counting or natural numbers $\mathbb{N}$. We can then use our index, $n$, to name each set as, say, $A_n$, and then we can form the infinite union

$\displaystyle\bigcup_{n=1}^{\infty} A_n = A_1 \cup A_2 \cup A_3 \cup \cdots$

While certain flavors of philosophers and logicians may contest this process, we, as budding mathematicians, are perfectly comfortable with it. Union-ing together an infinite collection of sets results in a new set, one with elements that came from at least one of the indexed sets (it could have also come from multiple). Again, it is important to note that the sets may have absolutely nothing to do with the index set $\mathbb{N}$. The $A_n$'s may be sets of real numbers, sets of rationals, sets of matrices, sets of words, etc.; the index set is simply keeping track of these sets, it is not (necessarily) telling us anything about the elements in each set. Let's explore two examples, one where the sets have nothing to do with the index $n$, and one where the sets are, in fact, described in terms of the parameter $n$.

Example (4). Let's use three fixed sets repeated over and over, so that $A_4 = A_1$, $A_5 = A_2$, $A_6 = A_3$, and so on. That is, our infinite collection of sets repeats itself according to the indicated pattern. We then have that the infinite union is just $A_1 \cup A_2 \cup A_3$. In some ways this was a very silly example. After all, we didn't have infinitely many distinct sets to union, we just unioned together the same three sets over and over again. Still, this example helps solidify our two main points, which are worth repeating here.

- The index parameter $n$ is not (necessarily) needed to write out the final set.
- The index set (in this case $\mathbb{N}$) does not (necessarily) determine the elements of any of the sets $A_n$.

Example (5).
Let us create an infinite family of sets each of which is described using the index $n$. Similar to our homework problems, let's use intervals of real numbers whose endpoints are given by a formula in $n$, so that the intervals shrink as $n$ grows. In this example, the index still has the responsibility of keeping track of the individual sets being unioned, but it also serves another purpose: each set is not just labelled with an $n$, but its elements are also described in terms of the value of $n$. The index here is a natural number, $n$, that is being used to label our sets and to define or describe our sets. However, the fact that $n$ is a whole number does not change or impact the fact that the sets being unioned contain elements that are not just whole numbers. It is not too difficult to convince yourself that the larger $n$ becomes, the smaller the interval of real numbers $A_n$ becomes. Indeed, one can see that for any value of our index, it follows that $A_{n+1} \subseteq A_n$ (perhaps the easiest way to see this is to draw some pictures of these intervals). It then follows that the whole union collapses to the single largest interval: $\bigcup_{n=1}^{\infty} A_n = A_1$.

A Super-Infinite Union?

We now come to what I think is an especially tricky example, at least upon first glance. Although we are blessed / cursed with finite minds, thinking of an infinite collection of sets indexed by the counting numbers is not altogether too terrible. In Examples (4) and (5) above, we could have used the words "countably infinite union" to describe the process by which we produced the new, big-unioned set. Another way to talk about it is to say that we performed a discretely infinite union; after all, the natural numbers can be thought of as a discrete (albeit infinite) set of points that go on forever. What happens, though, when our index set is even more complicated? For instance, what happens when we have a collection of sets not only for each natural number but for each real number $r$? In this instance we could say something like this:

$\displaystyle\bigcup_{r \in \mathbb{R}} A_r$

We could also describe it as a continuous union of infinitely many sets.
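Before worrying about how to write such a union down, note what membership in it means: an element belongs to a real-indexed union exactly when some index $r$ puts it inside $A_r$. Here is a sketch with the hypothetical family $A_r = (r - 1, r + 1)$ (my own choice, just for illustration):

```python
# Hypothetical real-indexed family A_r = (r - 1, r + 1).
def in_A(r, x):
    return r - 1 < x < r + 1

# Membership in the union means "some index r witnesses x";
# for this particular family the witness r = x itself always works ...
def in_union(x):
    return in_A(x, x)

# ... so the union over every real index is the whole real line:
assert all(in_union(x) for x in (-1e9, -3.5, 0.0, 2.0, 7e12))
```

The design point: for an uncountable index set we cannot loop over indices, so membership is phrased as an existence test rather than an enumeration.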
The main issue we have with this union / indexing is that we cannot write out something like $A_1 \cup A_2 \cup A_3 \cup \cdots$ since doing so excludes the real-number-indexed sets like $A_{1/2}$ and $A_{\pi}$. In other words, because we cannot (yet? ever?) list out the real numbers like we can the naturals, we are not able to write this union as an infinite list of unions. This, of course, is troublesome, but depending on the sets being used, the difficulties can be avoided.

Example (6). If we use, for example, $A_r = S$ for every value of $r$ — that is, we have a "constant set" $S$ — then it follows that $\bigcup_{r \in \mathbb{R}} A_r = S$. As in our examples that use finitely many indices and/or a natural-number's worth of indices, the final set can be written out explicitly (and also without use of the index $r$).

Example (7). Let's consider a real line's worth of one-element sets $A_r = \{r\}$. For example, $A_0 = \{0\}$, $A_{-3} = \{-3\}$, and $A_{\pi} = \{\pi\}$. Can you explain why $\bigcup_{r \in \mathbb{R}} A_r = \mathbb{R}$?

Example (8). In this example we'll use a real line's worth of interval-subsets of the real numbers, specifically letting each $A_r$ be an interval of real numbers containing the index $r$ itself. Then $\bigcup_{r \in \mathbb{R}} A_r = \mathbb{R}$. You should be able to convince yourself that this is true by thinking about the facts that (1) $r \in A_r$ for every value of the index $r$, and (2) $A_r \subseteq \mathbb{R}$ for every $r$. A picture of several of these intervals will likely help.

Example (9). We can, of course, also use not an entire real line's worth of sets, but, say, an interval's worth of sets. If we use the index set $[0, 1]$, then we can use any of the $A_r$'s from the previous examples to form $\bigcup_{r \in [0,1]} A_r$. For example, if we use the sets from Example (8) so that each $A_r$ is itself an interval of real numbers, you should be able to convince yourself that the resulting union is again an interval.

Homework: Finish reading chapter 1 and begin reading pages 31-60 (Chapter 2)
- Exercises 2, 4, 5, 6, 8, 12 from 1.7
- Exercises 1, 4, 5, 9, 10a from 1.8
- Exercises 1, 6, 12, 14 from 2.1
- Exercises 1, 2, 3, 5, 14 from 2.2
- Bonus 1: Is it true that ? Is it true that ?
- Bonus 2: Given any two sets, $A$ and $B$, is it true that ?
- Bonus 3: When are Casey's office hours?

Also, here is a link to an excellent paper on the crises in mathematics.
Also, check out this video as it may give you an even deeper, more comprehensive view of mathematics:

Our second day of class went fairly well, I thought. Students appeared to be working well together, building consensus (or at least near-consensus) on the truth or falsity of four statements. As a refresher, here they are:

- Given any triangle, there is always a circle that can be inscribed in it.
- For every integer $n$ it is possible to choose $n$ points on a circle in such a way that if you connect every pair of points with straight line segments, the circle will be divided into $2^{n-1}$ regions.
- Let $P_n$ be the product of the first $n$ primes. For every positive integer $n$, $P_n + 1$ is prime.
- Every even number greater than 2 can be written as $p + q$, where $p$ and $q$ are both prime numbers.

Since this class is all about proving things, I offer the following image as proof that, indeed, students were working on these true or false statements:

As we discussed in class, no one yet has an answer to the fourth statement. I don't mean no one in our class, I mean no one, anywhere. (I suppose it's possible that someone out there has an argument that demonstrates whether this is true or false and has chosen not to share it... let's ignore that unlikely possibility.) Indeed, this is a famous conjecture, one called Goldbach's Conjecture. I think there is a good lesson here. Perhaps even multiple lessons. For instance, I think it's important for FOM students to keep in mind that I might be trolling you. Yes, I had heard of this conjecture before compiling this assignment. Yes, I intentionally included it amongst other statements that can, actually, be argued true or false. Another lesson (hopefully?) learned here: mathematics is incomplete. There are lots and lots of mathematical statements that we can't yet decide are true or false. In fact, this is what mathematical research is all about, finding new statements and proving whether or not they are true or false.
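One plausible reading of the third statement is that the product of the first $n$ primes, plus one, is always prime (that reading is my assumption; the statement's symbols did not survive transcription). Under that reading, a quick computation finds the classic counterexample at $n = 6$:

```python
# Trial-division primality test, fine for numbers this small.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Collect the first 6 primes and take their product.
primes, k = [], 2
while len(primes) < 6:
    if is_prime(k):
        primes.append(k)
    k += 1

P = 1
for p in primes:
    P *= p  # 2 * 3 * 5 * 7 * 11 * 13 = 30030

# 30031 = 59 * 509, so "product of the first n primes, plus one"
# is not always prime.
assert P == 30030
assert not is_prime(P + 1)
```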
What's even better — and we'll talk about this later in the semester — there are even mathematical statements that can never be proven true or false! It's crazy, I know, but there it is: sometimes a statement can never be proven true nor can it ever be proven false. I will probably post another entry concerning my Blue Eyed Faculty story, but for now I raise my glass to this year's FOM class(es). Cheers to an intoxicating semester!
Vindeln station in northern Sweden reported a similar story, with measurements showing a reading of 437 Dobson units, 11 more than Norrköping and a record there since the first measurements were taken in 1991. The "Dobson unit" or DU, named after the British scientist G.M.B. Dobson, indicates how much ozone there is in the air above a certain point on Earth. In a statement, SMHI said: "We have to go as far back as the measurements taken in Uppsala between 1951 and 1966 to find levels that high." In Uppsala, the highest level for February was back in 1957, when a value of 439 DU was recorded. Over the Earth's surface, the ozone layer's average thickness is about 300 Dobson units; compressed to the surface, that would be a layer 3 millimeters thick. So in these days of thinning ozone layers, does this mean that the ozone is improving in general? The Swedish weather service, which only last year recorded the second thinnest levels of ozone ever, says that it is too early to tell. SMHI says: "We would need to see more high values before we can say with certainty that the ozone layer is growing thicker. However we are now in a period where the decrease appears to have halted and we expect to see a thickening." The thickening of the ozone during February was attributed to the absence of the low temperatures which normally cause a rapid depletion of the ozone layer. They weren't there because the high-pressure column of cold air from the Arctic, which develops during the long polar night, disappeared very quickly in mid-January. The ozone layer over Sweden usually reaches its thickest level during the spring, before thinning during the summer and reaching a minimum during the winter.
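For reference, one Dobson unit corresponds to a layer of pure ozone 0.01 millimeters thick at standard temperature and pressure, which is how the article's figures line up:

```python
# 1 DU = 0.01 mm of pure ozone at standard temperature and pressure.
def du_to_mm(du):
    return du / 100

assert du_to_mm(300) == 3.0   # the global average: about 3 mm
assert du_to_mm(437) == 4.37  # the record reading at Vindeln
```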
+44 1803 865913 Edited By: Patrick Wu 637 pages, Bw photos, figs, tabs, maps Although the last ice age ended about 10,000 years ago, its effects are still influencing human activities today - for example: coastal engineering, siting of nuclear waste repositories, intraplate earthquake mitigation, inaccuracy of global positioning due to changes in the geodetic reference frame, and more. The recognition of ice ages and glacial isostasy led to the first scientific revolution in earth science. During the last few decades, studies of the dynamics of the ice age earth have brought together various disciplines - including geomorphology, geodynamics, rock and ice rheology, geodesy, glaciology, oceanography, climatology, astronomy, engineering and archaeology. Recent interest in the subject has surged forward due to new advances in space-age geodetic techniques and new developments in modelling methods. The purpose of this volume is to bring the reader up-to-date on the latest developments and to foster contributions, from various branches of science, to the understanding of ice age geodynamics. There are currently no reviews for this book. Be the first to review this book! Your orders support book donation projects I have always been MOST impressed by the efficiency, courtesy, integrity and professionalism of NHBS! Search and browse over 110,000 wildlife and science products Multi-currency. Secure worldwide shipping Wildlife, science and conservation since 1985
This volume deals with the Superfamily Empidoidea as defined by Chvála (1983), which totals 673 British species in the latest Diptera check list (Chandler 1998a), now increased to 677 species (as of March 2003; Stubbs (2003)). The Empidoidea comprise five families (Atelestidae, Dolichopodidae, Empididae, Hybotidae, and Microphoridae), the species included representing approximately 10% of our Diptera fauna. The remaining families of Diptera outside of the Empidoidea that were not dealt with by Falk (1991) are reviewed in three further parts within the JNCC Species Status Review series. Although less well-known than some of the more popular families of Diptera, the Empidoidea has attracted the interest of a growing number of dipterists in recent years. This has resulted in greatly increased recording effort, which is continuing under the auspices of the national recording scheme for Empidoidea (see the Biological Records Centre website at: www.brc.ac.uk). The Empidoidea are found as adults throughout the spring, summer and autumn, with the greatest number of Empididae and Hybotidae found in early June (Plant 2003). The phenology can differ greatly between individual species and is summarised in the identification guides, but is not considered in this review. The adults are typically predators of other small insects, but they may also feed at flowers, with some species apparently showing preferences for certain plants (for instance, see Allen, 1994). Stark (1994) reviewed the prey composition and hunting behaviour of Platypalpus species. Pollet and Grootaert (1994) investigated the consequences of using different colours and heights of water traps upon the species collected. The status of many species as proposed by Falk (1991) has been revised during the preparation of this volume. Initially, the Red Data Book and Notable categories (as defined by Parsons 1993) were used for this revision.
Subsequently, following the adoption of the revised IUCN Guidelines (IUCN 1994) by JNCC in 1995, a further revision of the status for all species was carried out by Ian McLean (JNCC) in 2003. At the same time the nomenclature was brought up to date in accordance with the latest checklist for British Diptera (Chandler 1998a) and recent literature up to 2004 has been incorporated within the introductory sections and in the species accounts.
5 Species You Might Have Mistakenly Thought Were California Natives

Some of the animal species we introduced, on purpose or by accident, have made themselves right at home in our cities and our wildlands. Some have seemed to fit in without causing obvious harm to other species. Others have caused serious damage to the landscape. Most of them fall somewhere in between. Did you know that all of these animals were introduced to California? Some of the entries on this list may surprise you. Some people love them, with their nearly harmless demeanor, quiet habits, and ability to rid a vegetable garden of snails overnight. Others can't get past the snaggle-toothed grin and ratlike tail. Whatever your viewpoint, you may well have assumed that the Virginia opossum, Didelphis virginiana, with its primeval appearance, has been in California since time immemorial. The truth is that there were no opossums in California before the 1890s, when a small population of the marsupial was moved to Los Angeles County, where their descendants spread across the South Coast. A second population was brought into San Jose from Tennessee in 1910 by migrants nostalgic for the wild food of home, and a third population of South Carolina opossums was released into the southern Sierra Nevada after a fur farming venture failed. Between those three main introductions and a host of smaller ones, opossums have spread throughout California and the rest of the Pacific Coast. They have a supremely generalist diet, eating a range of foods from live lizards and snakes to rotting tree fruit, and that habit has allowed them to thrive in the widely varied habitats available to them in California.
That diet has probably also kept the opossum from becoming a walking environmental disaster, in that it doesn't seek out specific kinds of food such as the eggs of native birds, and it doesn't seem to have badly displaced other native animals through competition.

Eastern fox squirrels

The bad news is that depending on the kind of habitat they move into, eastern fox squirrels can displace a closely related California native, the western gray squirrel. That's not true of every kind of habitat: as of 2009, the two species seemed to be coexisting in parts of Griffith Park. But in other places, the victory of eastern fox squirrels over their western cousins is inexorable. In 2005, biologists first noticed eastern fox squirrels on the Cal Poly Pomona campus, at the time a stronghold of western gray squirrels. By 2009 there were no gray squirrels left on campus.

Honeybees are Old World insects, and the most commonly domesticated species, Apis mellifera or the European honeybee, probably originated in Africa, spreading across Europe and Asia before being imported into North America in the 17th Century. Shelton's first California hive prospered, and was followed by others. Now, European honeybees are crucial to the Golden State's agricultural sector, and we regard their future with fear. Ironically, the presence of introduced honeybees may be suppressing populations of native bees, which you don't hear much about in the "save the honeybees" brochures.

And unlike opossums and honeybees, those who brought parrots to the state and released them didn't keep public records. That's probably because the raucous birds now caucusing in the queen palm down the block descend from either escaped pets or other fugitives from the captive parrot trade.
One species of parrot, the monk parakeet or Quaker parrot, was deemed such a potential threat to California agriculture that its importation and possession have been banned in the state, and the California Department of Fish and Wildlife describes an eradication campaign against the species as a success. Other species haven't turned out to be quite so disruptive, focusing on non-native food sources like urban fruit trees and taking time out to star in the occasional documentary film.

That may come as a surprise to visitors to Santa Catalina Island, where the local bison herd seems to blend wonderfully into the landscape. But aside from the possibility that those eastern Modoc bison may have visited once in a while, and aside from a couple vague second-hand reports of "buffaloes" related by early California explorers like Juan Crespí, there is essentially no evidence of modern bison in California. It kind of makes sense: bison don't climb steep mountains if they can avoid it, and California is well-defended along its eastern border with steeply tilted fault-block mountain ranges. Why cross the Warners when there's perfectly good grass in the sagebrush steppe below? Of course that's modern bison, with the easy-to-remember Latin binomial "Bison bison." There were other species of bison in California back in the Pleistocene, even bigger and crankier and more dangerous than the kind we have now. Bison antiquus, the most common large herbivore found in the La Brea Tar Pits, lived here up until about 10,000 years ago. Bison latifrons, which had a truly fearsome set of horns that could span seven feet from tip to tip, died out in California somewhere between 30,000 and 21,000 years ago. But their inheritors, the modern bison? Not so much with the California territory. Something to remember as you take in the iconic California countryside on Catalina.
KCET's award-winning environment news project Redefine ran from July 2012 through February 2017.
This book covers the more basic aspects of carbonate minerals and their interaction with aqueous solutions; modern marine carbonate formation and sediments; carbonate diagenesis (early marine, meteoric and burial); the global cycle of carbon and human intervention; and the role of sedimentary carbonates as indicators of stability and changes in the Earth's surface environment. The selected subjects are presented with sufficient background information to enable the non-specialist to understand the basic chemistry involved. Tested on classes taught by the authors, and approved by the students, this comprehensive volume will prove itself to be a valuable reference source to students, researchers and professionals in the fields of oceanography, geochemistry, petrology, environmental science and petroleum geology. Publisher: Elsevier Science & Technology Number of pages: 706 Weight: 1330 g Dimensions: 246 x 189 x 37 mm "This fine book ... provides an excellent overview of low-temperature carbonate geochemistry... It will become a best seller." American Association Petroleum Geologists "This book fills a gap in currently available textbooks and can be recommended to a wide audience... to paraphrase the authors, whether your work involves monitoring atmospheric chemistry from Antarctic ice cores, performing diagenetic studies on carbonate sediments from the Alps, the deep sea or tropical reefs, or carrying out complex laboratory experiments in carbonate geochemistry, then there is something in this book for you!" Journal of Sedimentary Petrology "A bibliographic reference list containing more than one thousand citations completes a thorough and authoritative compilation which presents a very readable text backed by chemical equations, diagrams and case studies." Australian Mineral Foundation Journal; ERISTAT "...the book will be of great value to both the established researchers and non-specialists with an interest in sedimentary carbonates." Geochimica et Cosmochimica Acta
POLITICALLY INCORRECT UTAH DINOSAURS! :)

Ray Stanford <email@example.com> wrote:

Could it be that the animal (if it was a carnivore) was dragging its prey in between its legs - as large cats do quite often. That would produce a drag mark, especially on soft mud. Tigers often hunt in the marshy or muddy flood plains but then drag the prey on to the dry land.

For those interested in the 'problem' of tail dragging in dinosaurs, the current (June 23) issue of SCIENCE NEWS, page 397, column 1, tells us that on a 50-acre parcel of land in St. George, Utah, owned by Sheldon Johnson, there are "...More than 100 footprints of meat-eating theropod dinosaurs which have been uncovered here, as well as grooves where the creatures' tails dragged in the mud." The fossil tracks are said to range from 5 to 18 inches in length. Note that they were made in deep mud, and some went so deep that a hallux (digit I) imprint is recorded. Perhaps the mud depth might have resulted in tail drag marks where, normally, the tail might have been suspended somewhat above the surface.

Gautam Majumdar
firstname.lastname@example.org
Selective Sample Preparation Techniques for Trace Analysis

The isolation and determination of very small quantities of materials in complex mixtures is a major problem for analytical chemists in many fields. Historically [cf. R.P. Maickel, #A-1 -Ed.], solvent extraction has been the principal technique used for sample preparation; now solid-phase materials such as bonded silicas and small-particle resins are becoming accepted as convenient and efficient extraction sorbents. Two types of these solid-phase materials are now discussed and compared: lipophilic sorbents such as C-18 bonded silica, and anion-exchange resins.

Keywords: Salicylic Acid, Peak Identity, Elution Solvent, Polystyrene Resin, Acidic Analytes
Cloud-seeding may take edge off bad weather Saturday, June 17, 2006 OTTAWA -- During St. Petersburg's 300th anniversary celebrations three years ago, Russian President Vladimir Putin ordered 10 cloud-seeding planes into the air, to induce rain about 50 kilometres outside the city. "Our aim is to empty all clouds of rain before they hit the city borders," the physicist in charge of the project told a reporter. Rain on Putin's parade would have been a minor inconvenience. In New Orleans, on the other hand, last fall's rain was nothing short of catastrophic -- and with the arrival of this year's hurricane season, scientists are wondering: if we can change the weather, can we head off hurricanes? It's not a new idea. In 1947, researchers from General Electric and the U.S. government used an airplane to "seed" a hurricane 500 kilometres off the coast of North Carolina. The hurricane reversed direction and sped toward Savannah, Georgia, where it caused $2-million damage. The U.S. Department of Defence promptly classified details of the experiment to avoid litigation. The field of "weather modification" has come a long way since then, but experts still aren't sure humans can affect something as massive as a hurricane, which can release energy equivalent to a 10-megaton nuclear bomb every 20 minutes, according to the U.S. Hurricane Research Division. A later analysis of the 1947 Georgia hurricane suggested the seeding experiment actually had nothing to do with its sudden change of direction. In fact, the evidence for cloud-seeding itself, widely used to enhance rainfall or reduce hail damage, is hotly debated. "The problem is, you never know what you would have got if you hadn't seeded," says Terry Krauss, the Red Deer, Alta.-based chief scientist of Weather Modification, Inc. 
The North Dakota company has been contracted by a group of insurance companies to seed the clouds in "Hail Alley" between Calgary and Red Deer for the past 10 years, flying about 100 sorties between June 1 and Sept. 15 each year. Before the program started, Krauss says, the insurance companies were paying about $100 million a year in insurance for hail damage, and a single storm in 1991 cost $400 million. With cloud seeding in place, payouts are down 50 per cent -- a handsome return on the $2-million annual investment. "In spite of the lack of scientific proof-positive, the financial indicators are very good," Krauss says. That's good enough for the insurance companies, who have signed on for another five years. But insurance records aren't a reliable way to judge a project, since they may be spun for marketing purposes, says George Isaac, a senior scientist with Environment Canada's cloud physics and severe weather section. "There's no hard evidence for hail suppression," he says. A 2003 report by the U.S. National Research Council agreed, concluding that "there still is no convincing scientific proof of the efficacy of intentional weather modification efforts." Pioneered in the 1940s, cloud-seeding involves dropping particles of silver iodide or dry ice into gathering clouds. Raindrops coalesce around the seeds, and -- so the theory goes -- rain falls. But is it just "borrowing" rain from the next day, or from a neighbouring region? And can the seeds tie up moisture that would otherwise grow into damaging hailstones? These are the questions that about 100 scientists from around the world were confronting recently in San Antonio, Texas, at the annual meeting of the Weather Modification Association. "In the semi-arid areas where I live, we're more interested in increasing the rainfall," said Tommy Shearer, a Texan who helped organize the conference. Hurricanes are just too big to modify, he said. "The military did that back in the '60s. 
They abandoned it, classified it, said never again." The U.S. government's Project Stormfury, an attempt to weaken hurricanes by seeding, actually ran from 1962 all the way to 1983, with ambiguous and ultimately disappointing results. But others at the meeting, such as Krauss, were more optimistic. The key would be to use today's more sophisticated weather-tracking satellites and computer-modelling abilities to attack hurricanes before they grow too large to handle. "You can't go head to head with a Category 4 or 5," Krauss said. "But what can you do at an earlier stage?" This approach has spawned a series of suggestions in recent years, and gained support with an article published in Scientific American in 2004 by Ross Hoffman of Atmospheric and Environmental Research, a research firm in Boston. Hoffman showed that a change of just two or three degrees Celsius near the eye of a hurricane, applied early enough, could radically alter the course of the storm. "It turns out the very thing that makes forecasting any weather difficult -- the atmosphere's extreme sensitivity to small stimuli -- may well be key to achieving the control we seek," he wrote. But how to produce that small stimulus? Scientists have proposed a number of ambitious ideas in the last few years. The U.S. Hurricane Research Division in Florida gets about 120 such proposals a year, and has a form letter to respond to perennial favourites like dropping a nuclear bomb into the heart of the storm, or dragging icebergs from the North Pole to cool the water around a hurricane. "You give everyone a fair shake," says Frank Marks, Jr., the division's director. "As a scientist you're always questioning everything." While some of the ideas, such as floating jet engines in a storm's path to remove moist air, are considered fairly farfetched, other work such as Hoffman's is well-respected by Marks and other hurricane experts. 
The problem, though, is that current hurricane models simply aren't good enough to make an accurate prediction about what might happen if you heat up a storm with an infrared beam. "You don't want to modify something unless you know what the impact will be," Marks says. And all of this raises an even bigger question: if we could change hurricanes, would we? "If man in his hubris makes a change, it may be favourable to me, but it may not be favourable to someone upstream or downstream from me," Marks says. Isaac at Environment Canada has dealt with irate farmers who worry that the hail-suppression project in Alberta is causing drought in other parts of the province. He doesn't know whether the claims are true, and if he did, he's not sure how the dispute could be resolved. "It's a difficult question, not handled easily," he says. Given the carnage wrought by last year's record 28 hurricanes, it's tempting to think that the benefits would outweigh the risks. But those considering undertaking the challenge should bear in mind an episode from weather modification's early, less reputable history. In 1915, San Diego city council offered Charles Hatfield $10,000 if he could attract enough rain to fill a large reservoir, using his patented mixture of 23 chemicals. Hatfield built his contraption, and sure enough, the rain started. It filled the reservoir, and kept raining, eventually bursting a dam and killing 20. In court cases that stretched on for more than two decades, Hatfield was able to avoid being held liable for the $3.5 million in damages caused by the flood. But he never did get his $10,000. © The StarPhoenix (Saskatoon) 2006 CanWest Interactive, a division of CanWest MediaWorks Publications, Inc.. All rights reserved.
Read this tip to make your life smarter, better, faster and wiser. LifeTips is the place to go when you need to know about Developing New Products and other Invention topics. The latest 'big thing' in new product technology is actually a very small thing – nanotechnology. And it is opening up a floodgate of opportunities in the invention and innovation of new products and processes. As the global economy's fastest growing information sector, nanotechnology offers applications for nearly every conceivable industry. What is nanotechnology? Nanotechnology is the engineering of functional systems at the molecular scale. It provides the means to develop systems and materials built to exacting specifications at the molecular level. Every inventor and innovator should follow the development of nanotechnology because it represents the state of the art of what technology can accomplish. Still in its infancy, nanotechnology is bringing about rapid advancements in biology, chemistry, engineering, computer science and physics. Sheri Ann Richerson
This week, researchers from the University of Hawai'i, Norway, and the UK have shown with innovative experiments that a rise in jellyfish blooms near the ocean's surface may lead to jellyfish falls that are rapidly consumed by voracious deep-sea scavengers. Previous anecdotal studies suggested that deep-sea animals might avoid dead jellyfish, causing dead jellyfish from blooms to accumulate and undergo slow degradation by microbes, depleting oxygen at the seafloor and depriving fish and invertebrate scavengers, including commercially exploited species, of food. Globally there are huge numbers of jellyfish in the oceans. In some parts of the ocean, jellyfish "blooms" are increasing, apparently due to nutrient enrichment and climate change caused by human activities. In recent years, studies have suggested that when jellyfish blooms die off, massive quantities of jellyfish sink out of surface waters and can deposit as "jelly-lakes" at the seafloor, choking seafloor habitats of oxygen and reducing biodiversity. This latest research shows that the accumulation of dead jellyfish lakes may be unusual, with jellyfish carcasses normally being rapidly consumed by a host of typical deep-sea scavengers such as hagfish and crabs. "We just had a hunch that dead jellyfish were important to deep-sea ecosystems in some way, even though they are made up largely of water. We therefore decided to film what the fate of jellyfish carcasses was at the seafloor, so we deployed deep-sea lander systems with jellyfish bait. When we later retrieved the landers and found no jellyfish attached to the bait plates we were pleasantly surprised. However, our surprise jumped to another level when we looked at the camera images and saw just how fast the jellyfish baits were consumed and the sheer number of scavengers that were consuming the baits. It just blew our minds," lead author Andrew K. Sweetman said. 
Sweetman is a chief senior scientist and research coordinator for deep-sea ecosystem research at the International Research Institute of Stavanger in Norway. Published October 15 in the prestigious journal Proceedings of the Royal Society: Biological Sciences, the research looked at the response by scavengers to jellyfish and fish baits in the deep sea along the Norwegian margin. The researchers found that jellyfish and fish baits were consumed equally fast and attracted similar densities of a diversity of scavengers. "The speed of the jellyfish scavenging was totally unexpected because previous observations seemed to suggest that jellyfish carcasses would just rot very slowly at the seafloor. It was also really interesting that the hagfish targeted the most energy-rich parts of the jellyfish, burrowing into the jellyfish carcasses to eat the gonads!" said Craig R. Smith, co-author, designer of the deep-sea camera-lander systems used in the study, and a Professor of Oceanography and Pew Fellow in Marine Conservation at the University of Hawai'i at Mānoa, USA. The study further revealed that the role of jellyfish material could be seriously underestimated in global carbon budgets in the ocean, because jellyfish were removed so quickly that they fail to accumulate at the seafloor, causing scientists to overlook their role in deep-sea food webs. "Our work shows that previous assessments of the ocean carbon cycle may have missed an important component. Until we saw these photos we thought that the massive amount of jellyfish material was deposited on the seafloor and was essentially taken out of the system – removing carbon rapidly. Our results show that much of this carbon could, in fact, make it into deep-sea food webs, fueling these systems. This is especially important when other food sources to deep-sea ecosystems may be decreasing as our oceans warm," said co-author Daniel Jones, a scientist at the National Oceanography Center in Southampton, UK. 
Ultimately, this new research reveals that jellyfish blooms could provide far-reaching, potentially important, food supplements to normal deep-sea food webs, rather than having purely negative impacts on fisheries and marine ecosystem function. Link to video and interview (more information below): http://bit.ly/ZYsSNS BROLL (45 seconds followed by soundbites): Video of jellyfish being eaten by ocean scavengers Craig Smith - Oceanography professor, University of Hawaiʻi at Mānoa (11 seconds) "And this is real, actually quite important. As the climate warms, as humans change the climate of the earth, and as they put nutrients in the ocean, there's an increase in the abundance of jellyfish." Smith (14 seconds) "It may mean that these changes that are occurring in the ocean where jellyfish are becoming more abundant are not as significant, not as bad as we thought they might be. The ocean may be more able to adjust to these changes than we expected." Smith (13 seconds) "We've only been able to do these experiments in one location. The scavengers that come are typical of the deep sea but it would be nice to replicate or repeat these experiments in other parts of the ocean to show that the scavenging processes are similar." Sweetman AK, Smith CR, Dale T, Jones DOB. 2014. Rapid scavenging of jellyfish carcasses reveals the importance of gelatinous material to deep-sea food webs. Proceedings of the Royal Society B 281: 20142210. http://dx.doi.org/10.1098/rspb.2014.2210 The School of Ocean and Earth Science and Technology at the University of Hawaii at Manoa was established by the Board of Regents of the University of Hawai'i in 1988 in recognition of the need to realign and further strengthen the excellent education and research resources available within the University. 
SOEST brings together four academic departments, three research institutes, several federal cooperative programs, and support facilities of the highest quality in the nation to meet challenges in the ocean, earth and planetary sciences and technologies. Marcie Grabowski | EurekAlert!
AVIL is an open source (GPL) interpreter (and programming language) originally designed to run on Arduino. AVIL programs are just plain text files stored on a micro SD card, so you have GBs of space available! The AVIL project was initially inspired by this project. See this website to learn more about it! With AVIL you can:
- Read/write digital I/O.
- Read inputs from serial or via telnet.
- Write output to the serial line or via ethernet.
- Perform file operations (read, write, create, delete...).
- Easily create your custom instructions and call them from a program.
- Transform your Arduino into a standalone terminal with an interactive shell.
- Transform your Arduino into a remote I/O board for your PC.
- Remotely upload/download files on your Arduino.
AVIL was created just for fun as a personal challenge, but I hope it could be useful in your project or that it will inspire you to create something new! I'm not a native English speaker, so forgive my bad grammar, or help me improve this documentation!
Syracuse University physicists first to observe rare particles produced at the Large Hadron Collider at CERN Shortly after experiments on the Large Hadron Collider (LHC) at the CERN laboratory near Geneva, Switzerland, began yielding scientific data last fall, a group of scientists led by a Syracuse University physicist became the first to observe the decays of a rare particle that was present right after the Big Bang. By studying this particle, scientists hope to solve the mystery of why the universe evolved with more matter than antimatter. Led by Sheldon Stone, a physicist in SU’s College of Arts and Sciences, the scientists observed the decay of a special type of B meson, which is created when protons traveling at nearly the speed of light smash into each other. The work is part of two studies published in the March 28 issue of Physics Letters B. Stone leads SU’s high-energy physics group, which is part of a larger group of scientists (the LHCb collaboration) that runs an experiment at CERN. The National Science Foundation (NSF) funds Stone’s research group. “It is impressive to see such a forefront physics result produced so soon after data-taking commenced at the LHC,” says Moishe Pripstein, program director for the NSF’s Elementary Particle Physics program. “These results are a tribute both to the ingenuity of the international collaboration of scientists and the discovery potential of the LHC.” Scientists are eager to study these special B mesons because of their potential for yielding information about the relationship between matter and antimatter moments after the Big Bang, as well as yet-to-be-described forces that resulted in the rise of matter over antimatter. 
“We know when the universe formed from the Big Bang, it had just as much matter as antimatter,” Stone says. “But we live in a world predominantly made of matter, therefore, there had to be differences in the decaying of both matter and antimatter in order to end up with a surplus of matter.” All matter is composed of atoms, which are composed of protons (positive charge), electrons (negative charge) and neutrons (neutral). The protons and neutrons are composed, in turn, of even smaller particles called quarks. Antimatter is composed of antiprotons, positrons (the opposite of electrons), antineutrons and thus anti-quarks. While antimatter generally refers to sub-atomic particles, it can also include larger elements, such as hydrogen or helium. It is generally believed that the same rules of physics should apply to both matter and antimatter and that both should occur in equal amounts in the universe. That they don't play by the same rules or occur in equal amounts is among the greatest unsolved problems in physics today. B mesons are a rare and special subgroup of mesons composed of a quark and anti-quark. While B mesons were common after the Big Bang, they are not believed to occur in nature today and can only be created and observed under experimental conditions in the LHC or other high-energy colliders. Because these particles don't play by the same rules of physics as most other matter, scientists believe B mesons may have played an important role in the rise of matter over antimatter. The particles may also provide clues about the nature of the forces that led to this lack of symmetry in the universe. “We want to figure out the nature of the forces that influence the decay of these [B meson] particles,” Stone says. “These forces exist, but we just don’t know what they are. 
It could help explain why antimatter decays differently than matter.” In 2009, SU’s experimental high-energy physics group received more than $3.5 million from the NSF through the American Recovery and Reinvestment Act (ARRA) for its research as part of the LHCb collaboration at CERN. The LHCb, one of four large particle detectors located in the LHC ring, is dedicated to searching for new types of fundamental forces in nature.
The planet is pretty much ready to go 100 percent renewable by 2050. The country’s energy mix is under scrutiny. A report commissioned by Energy Secretary Rick Perry acknowledges that low natural gas prices—not renewables—are behind the recent closure of coal energy plants, and that the grid has managed to withstand the increasing presence of renewable energy. According to an unrelated study published this week in the journal Joule, the world is poised to give up fossil fuels altogether. The research lays out renewable energy roadmaps—the mix of resources a given country would need to transition away from fossil fuels to renewable energy—for 139 countries collectively responsible for more than 99 percent of the global carbon emissions. According to the resulting analysis, the planet is pretty much ready to go 100 percent renewable by 2050. Fossil fuels like coal, natural gas, and oil are not renewable resources. It took an extremely long time for the Earth to produce them, and they’re going to run out. And now that we know them to be significant contributors to human-caused climate change, trying to replace them is basically a no-brainer. Still, many regard renewable energy as the flighty, less dependable sibling of our go-to fossils. But according to the United States Energy Information Administration (EIA), renewable energy sources accounted for roughly 15 percent of total electricity generation and 10 percent of total U.S. energy consumption in 2016. Some of that investment in renewable energy is being led by places that we tend to associate with petroleum, like Texas, where wind energy provided more than 12 percent of that state’s electricity in 2016. Even the United States military has vowed to get 25 percent of its energy from renewable sources. And this is more practical than environmental: A hybrid electric tank uses less gas, and doesn’t need to refuel as often. Also: solar panels don’t explode the same way a gas tank does. 
But could the world really give up on fossil fuels entirely? Study author Mark Jacobson of Stanford University and his colleagues used available data to assess how much wind, geothermal, and solar energy each of the 139 countries they studied has at its disposal, and how much of that it would take to achieve 80 percent renewable energy usage by 2030 and 100 percent by 2050. "I was surprised by how many countries we found had sufficient resources to power themselves with 100 percent wind, water, and solar power," says Jacobson. The countries could all function using the renewable energy potential contained within their own borders, and most could do it while relying mainly on technologies that already exist. For small nation-states, like Singapore, the task of going totally renewable would be hard—but doable. Most countries could manage by mixing energy generation into existing landscapes—putting solar panels on rooftops, for example, or placing wind turbines on ranch land—while also creating dedicated renewable energy power plants like solar farms. And according to the researchers, this process would actually decrease the amount of land dedicated to energy production overall. "The entire renewable energy footprint […] is on order of 1.15 to 1.2 percent of the world's land," says Jacobson. "But keep in mind that 20 percent of the world's land is used for agriculture. In the United States, if you just look at oil and gas, there are 1.7 million active oil and gas wells and 2.3 million inactive wells. Collectively they take up somewhere between one to two percent of the U.S. land area. And that's not counting the refineries, the pipelines, or coal and nuclear infrastructure." And then there's the fact that we wouldn't have the oil spills and chemical leaks associated with transporting and refining fossil fuels. 
Renewable energies involve a relatively fixed amount of land use; wind and solar energy don't run out, so a solar farm erected today will still be pumping out electricity in a few decades. And even as those panels wear out, new ones can be erected on the same site. Coal seams run out and oil wells run dry, so we're constantly pressing new locations into service. Tens of thousands of new oil wells are drilled annually. "We would reduce, we think, the footprint on the land," says Jacobson. The study builds on earlier research by Jacobson that analyzed the technological feasibility—and the socio-economic benefits—of switching to renewable energy. That research suggested that the gradual shift to 100 percent renewable energy would lower the social cost of energy, especially deaths associated with fossil fuel pollution. "With oil and gas, you have to keep drilling and mining, and pollution keeps going on forever," says Jacobson. "Worldwide, we have more than 4 million air pollution deaths from it. Things have to change—they're not sustainable as they are." He calculated that renewable energy could prevent 4.6 million premature deaths a year by 2050, simultaneously adding 24.3 million jobs to the economy. It would also save more than $50 trillion a year in climate- and pollution-related costs. The first major step is (literally) electrifying: if all energy sectors (including transportation, heating/cooling, industry, and agriculture) start running on electricity instead of gas and oil, a nation's overall energy usage goes down. "When you're driving a car, only 17 to 20 percent of the energy in the gasoline goes to move the car. The rest is waste heat," says Jacobson. "Whereas in an electric car, 80 to 86 percent of the electricity goes to move a car. You need one-fourth to one-fifth of the energy to drive an electric car compared to a gasoline car." That's one reason why both France and Britain are pushing to ban all non-electric cars by 2040. 
Germany is working toward a ban on internal combustion engines by 2020. "By electrifying everything, just doing that, the power demand will go down because of the efficiency of electricity," says Jacobson. Averaged across sectors, there's a 23 percent reduction in energy demand just by switching to electricity. And when that electricity comes directly from renewable sources like solar and wind instead of coal, the savings keep getting better. According to Jacobson, 12.6 percent of global electric energy use goes toward mining, refining, and transporting fossil fuels (and uranium for nuclear power). Electrification plus a switch to renewables leads to a 36 percent reduction in demand—with no significant change in quality of life. "We think a transition is possible and it's beneficial in multiple ways, and there's little downside to a transition," says Jacobson. "Like anything, you don't want to change—and it's hard to change if something is working right now. But right now things are working with humongous side effects."
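Jacobson's "one-fourth to one-fifth" figure can be checked directly from the efficiencies he quotes. For a fixed amount of work W needed to move the car, the input energy is W divided by the drivetrain efficiency, so the ratio of electric to gasoline energy demand is simply the ratio of the two efficiencies (using only the numbers quoted above):

```latex
\frac{E_{\mathrm{electric}}}{E_{\mathrm{gasoline}}}
  = \frac{W/\eta_{\mathrm{electric}}}{W/\eta_{\mathrm{gasoline}}}
  = \frac{\eta_{\mathrm{gasoline}}}{\eta_{\mathrm{electric}}}
  \approx \frac{0.17\ \text{to}\ 0.20}{0.80\ \text{to}\ 0.86}
  \approx 0.20\ \text{to}\ 0.25
```

That is roughly one-fifth to one-fourth of the input energy, consistent with the article's claim.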
7. A high-speed centipede in S' is 10.0 cm long measured at rest in S'... (see attachment). © BrainMass Inc. brainmass.com July 20, 2018, 6:56 am. See attached for the full solution. (a) By the length contraction formula, L = L0 √(1 − v²/c²), the contracted length satisfies L/L0 = 8/9. The solution works through the relativistic effects in the given problem.
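If the 8/9 in the fragment above is read as the ratio L/L0 of contracted length to rest length (an assumption, since the surrounding text is truncated), the implied speed of the centipede's frame relative to the observer follows from the same formula:

```latex
\frac{L}{L_0} = \sqrt{1-\frac{v^2}{c^2}} = \frac{8}{9}
\quad\Longrightarrow\quad
v = c\sqrt{1-\left(\frac{8}{9}\right)^{2}} = \frac{\sqrt{17}}{9}\,c \approx 0.46\,c
```

and the observed length of the 10.0 cm centipede would be L = (8/9)(10.0 cm) ≈ 8.9 cm.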
The marsupials, or Marsupialia, are an order of pouched mammals, which include the kangaroos, koalas, and possums, all inhabitants of Australia. The marsupial embryo stays in the uterus (womb) for only a short time. When the young are born they are little more than embryos. They crawl from the vagina into a pouch called the marsupium on the mother's abdomen. The mother may lick a pathway through her fur to smooth their way. Once they are inside the pouch, the babies seek out a nipple, which they grasp in their mouths and hang onto for weeks as they complete their development. Thus, the pouch serves the same function as the incubator in which we rear premature babies. Marsupials have several other features which set them apart from the placental mammals. Their brains have a number of reptilian characteristics, and their skeletons have two bones called marsupial bones attached to the pelvis. Apparently, marsupials split off early from the main stem of mammalian evolution.
Shark cousin uses heat from ocean vents to incubate its eggs B.C. specimen used to identify deep-sea fish that laid eggs at Galapagos deep sea hydrothermal vents Birds might sit on their eggs to keep them warm until they hatch, but deep-sea fish called skates have found a less boring and time-consuming way to incubate their eggs — they lay them near hot hydrothermal vents on the sea floor. Skates are relatives of sharks that have flattened bodies like stingrays and typically live more than a kilometre below the surface of the ocean. They lay eggs that can take three or four years to hatch in the cold waters of the deep ocean, although the eggs could theoretically hatch more quickly in warmer environments. Scientists exploring the volcanically active sea floor off the Galapagos Islands with a robotic sub or ROV (remotely operated vehicle) stumbled across a "nursery" in 2015 where Pacific white skates had laid 157 yellow egg cases, each about the size of a smartphone and shaped like a pillow with horns at all four corners. When they analyzed the data, they discovered that over 89 per cent of the eggs had been laid in places where the water was warmer than average, the researchers reported in a paper published today in the journal Scientific Reports. About 68 per cent of the egg cases were laid within 20 metres of a black smoker chimney, a hydrothermal vent in the ocean floor that spews hot water along with clouds of dark-coloured particles. The researchers, led by Pelayo Salinas-de-Leon, senior marine ecologist at the Charles Darwin Foundation in the Galapagos Islands, suggest that the skates are choosing this place to lay their eggs in order to use the warmer temperatures to make them develop and hatch faster. That's because they're cold-blooded animals, and the rate at which all bodily processes happen depends on the surrounding temperature. 
Previous research has found that in the lab, the incubation period for some sharks can be reduced from two years to one just by increasing the temperature half a degree, said David Ebert, one of the co-authors of the new study. Ebert, program director of the Pacific Shark Research Center at Moss Landing Marine Laboratories in California, added that Pacific white skates are already the size of a dinner plate when they hatch and can grow to be up to two metres in diameter. "The most vulnerable time in this skate's life history is going to be in the egg case," said Ebert. The developing embryo can be eaten by predators like worms and snails, and the faster it can hatch, the shorter this period of vulnerability. Salinas-de-Leon said he and his research team were "very excited to document this behaviour for the first time in the marine environment." The team hadn't originally been planning to study skate egg cases. They were on a three-week expedition organized by the Ocean Exploration Trust, a non-profit group dedicated to exploring the oceans with the help of its 64-metre Nautilus research vessel and its Argus and Hercules ROVs, which can dive to depths of 6,000 metres. Their goal was to map and explore the biodiversity around the Galapagos Islands — the area where the world's first hydrothermal vents were discovered in 1977 — in order to figure out how much the Galapagos Marine Park needed to be expanded and protected from human activity like mining and fishing. Just by chance, Salinas-de-Leon recalled, the Hercules ROV landed right next to a black smoker chimney at the start of a 24-hour dive. Its cameras beamed back images of egg cases that the team initially thought were shark egg cases. The team decided to collect two samples with the Hercules's manipulator arm. DNA from those samples was later used to identify the species. They matched a Pacific white skate caught off Vancouver Island in B.C. 
during a Fisheries and Oceans Canada expedition and catalogued at the Royal B.C. Museum. When the team reviewed the footage from the dive, they noticed that the eggs were concentrated near the black smokers and wondered if those might be areas of warmer water. Brennan Phillips was a PhD student on the expedition who piloted the ROV, and is now an assistant professor of ocean engineering at the University of Rhode Island. He checked the temperatures automatically recorded by the ROV above each of the egg cases. When he saw the results, he said, "Well, I was really excited." While most of the readings were less than 0.1 C higher than the normal water temperature of 2.76 C, they were taken more than a metre above the egg cases, where it was cooler. Salinas-de-Leon estimates that the egg cases themselves may have been closer to 1 degree above the surrounding temperature. Chris Mull is a postdoctoral researcher at Simon Fraser University in Burnaby, B.C., who studies sharks and their relatives, including skates, but was not involved in the study. He says many shark species that give birth to live young will move to warmer waters when they're pregnant, presumably to speed up gestation. But this is the first time he's seen an egg-laying species seek out warmer waters to incubate their eggs. "You could almost think of it as a form of parental care," he said, adding that skates weren't previously known for caring for their eggs or their young in any way. Salinas-de-Leon and his colleagues ended their report with some good news. In March 2016, after the researchers showed the high level of shark biodiversity in the area, the Ecuadorian government created a 40,000 square kilometre marine sanctuary around Darwin and Wolf Islands in the Galapagos that includes the Pacific white skate nursery. The study was funded by the Helmsley Charitable Trust and the Save our Seas Foundation.
Study: Lakes Huron, Michigan water clarity tops Superior ST. PAUL, Minn. (AP) — A study has found that the Great Lakes of Huron and Michigan have surpassed Lake Superior in water clarity. Scientists analyzed satellite images from 1998 to 2012 and found that the depth light could penetrate the water increased by about 20 percent, Minnesota Public Radio reported. “What surprised us was the magnitude of the change,” said Robert Shuchman, a study co-author and co-director of the Michigan Tech Research Institute. “We had no idea the data was going to tell us that Huron and Michigan have surpassed the water clarity in Lake Superior. That was the startling piece.” Scientists say less phosphorus runoff, climate change and an increase in invasive zebra and quagga mussels have contributed to the change. “Lake Michigan now reminds me of the Caribbean,” Shuchman said, with crystal clear, aqua blue water and white sand beaches along its eastern shore. Michigan Technological University Senior Research Scientist Gary Fahnenstiel co-authored the study. He said the mussels filter the water by eating plankton, which absorb light. The decrease in plankton could cause major changes to the ecology in the lakes, Shuchman said. Plankton is the base of the food chain, so getting rid of it could cause the rest of the food chain to starve. The clearer water has also led to an increase of an alga called Cladophora. Harmful bacteria grow in the algae and can produce botulism toxins that kill fish and birds. Fahnenstiel said he hopes the intense clarity of the lakes will lead people to recognize their beauty and strive to take better care of the resource.
Sea ice algae are important primary producers, contributing to the base of the food chain in the Arctic Ocean. Their productivity and composition depend on the light, nutrient and salinity conditions present in the ice and underlying water column, all of which are likely to change in response to climate warming. Diatom-ARCTIC will characterize sea ice habitats in the Arctic and evaluate the biogeochemical and ecological contributions of the most prolific algal community – diatoms – within them. These insights will be applied to understand how the Arctic marine system will respond to ongoing changes that include thinning of sea ice, declines in nutrient inventories and freshening of Arctic Ocean surface waters. We will answer questions such as:
- How do sea ice conditions vary over different spatial and temporal scales in the Arctic?
- How do sea ice diatoms respond to variability in growth conditions?
Professor Alexandre Anesio, lead investigator of Diatom-ARCTIC: “Diatom-ARCTIC is a truly innovative study that combines extensive field and laboratory investigations to comprehensively investigate the potential response of sea ice algae to climate change from the species to pan-Arctic scales, while also bringing together interdisciplinary experts from the UK, Germany, and around the world.”
Professor Alexandre Anesio, co-lead investigator, University of Bristol: Alex Anesio is a Professor of Biogeochemistry at the University of Bristol, and the co-lead investigator of the Diatom-ARCTIC project. His research combines molecular and biogeochemical approaches to determine microbial functionality and activity in the cryosphere. In Diatom-ARCTIC, a major focus of his research is determining the response of sea ice algae to changes in light, nutrient and salinity conditions associated with the thinning of sea ice. 
- View full profile
Dr Marcel Nicolaus, co-lead investigator, AWI
I am a research scientist at the Alfred Wegener Institute in Bremerhaven, Germany. My main research interest is the role of sea ice and its snow cover as key elements of the climate and ecosystems. In Diatom-ARCTIC, I am the co-lead investigator, responsible for the ROV-based observations of the bio-physical sea ice and habitat conditions. I work on linking our field observations into general parameterizations and numerical models.
UK and Germany combine forces to fund crucial Arctic science
For the first time, the UK and Germany have joined forces to investigate the impact of climate change on the Arctic Ocean. The UK’s Natural Environment Research Council (NERC) and Germany’s Federal Ministry of Education and Research (BMBF) have jointly invested almost £8 million in 12 new projects to carry… Read more (03 July 2018)
In this program we are going to fetch data from a database table from our Java program. To accomplish this we first make a class named ServletFetchingData, which must extend the abstract HttpServlet class; the name of the class should be chosen so that another person can understand what the program does. The logic of the program is written inside the doGet() method, which takes two arguments, an HttpServletRequest and an HttpServletResponse, and can throw ServletException and IOException. Inside this method, call the getWriter() method of the HttpServletResponse object, which returns a PrintWriter for sending output to the client. We can retrieve data from the database only if there is connectivity between our database and the Java program. To establish that connection we first call the static forName() method of the class Class, which takes one argument naming the database driver we are going to use and loads that driver. Next, use the static getConnection() method of the DriverManager class. This method typically takes three arguments (the database URL, user name and password) and returns a Connection object. SQL statements are executed and results are returned within the context of a connection. Once the connection has been established, use the createStatement() method of the Connection object, which returns a Statement object. This object is used for executing a static SQL statement and obtaining the results it produces. As we need to retrieve the data from the table, we write a query to select all the records from it. This query is passed to the executeQuery() method of the Statement object, which returns a ResultSet object. The data can then be read using the getString() method of the ResultSet object.
The code of the program is given below:
XML File for this program:
The output of the program is given below:
Table in the database:
mysql> select * from emp_sal;
+----------+--------+
| EmpName  | salary |
+----------+--------+
| zulfiqar | 15000  |
| vinod    | 12000  |
+----------+--------+
2 rows in set (0.00 sec)
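The original code listing did not survive extraction. A minimal sketch of the servlet described above might look like the following; the driver class name, JDBC URL, user name and password are placeholder assumptions, and the query targets the emp_sal table shown in the output:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ServletFetchingData extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        try {
            // Load the JDBC driver (driver class name is an assumption).
            Class.forName("com.mysql.jdbc.Driver");
            // URL, user and password below are placeholders.
            Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password");
            Statement st = con.createStatement();
            // Select every record from the emp_sal table.
            ResultSet rs = st.executeQuery("SELECT * FROM emp_sal");
            while (rs.next()) {
                out.println(rs.getString("EmpName") + " " + rs.getString("salary"));
            }
            con.close();
        } catch (Exception e) {
            out.println(e);
        }
    }
}
```

This sketch needs a servlet container and a reachable MySQL database to run, so it is illustrative rather than a drop-in listing.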
- Match the descriptions of physical processes to these differential equations.
- Explore the possibilities for reaction rates versus concentrations with this non-linear differential equation.
- Get further into power series using the fascinating Bessel's equation.
- Look at the advanced way of viewing sin and cos through their power series.
- See how differential equations might be used to make a realistic model of a system containing predators and their prey.
- Solve these differential equations to see how a minus sign can change the answer.
- See how the motion of the simple pendulum is not-so-simple after all.
- Can you find the differential equations giving rise to these famous solutions?
- Dip your toe into the world of quantum mechanics by looking at the Schrodinger equation for hydrogen atoms.
- An article demonstrating mathematically how various physical modelling assumptions affect the solution to the seemingly simple problem of the projectile.
- Things are roughened up and friction is now added to the approximate simple pendulum.
- Follow in the steps of Newton and find the path that the earth follows around the sun.
- How many eggs should a bird lay to maximise the number of chicks that will hatch? An introduction to optimisation.
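The predator-prey model referred to above is typically the Lotka-Volterra system; as a sketch, with x the prey population, y the predators, and α, β, γ, δ positive rate constants:

```latex
\begin{aligned}
\frac{dx}{dt} &= \alpha x - \beta x y, \\
\frac{dy}{dt} &= \delta x y - \gamma y.
\end{aligned}
```

Prey grow exponentially in the absence of predators, predators decay in the absence of prey, and the xy terms couple the two populations.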
MLA Citation: Bloomfield, Louis A. "Question 1546: How can light travel through vacuum?" How Everything Works. 18 Jul 2018. 18 Jul 2018 <http://howeverythingworks.org/print1.php?QNum=1546>.
The fact that light waves can travel in vacuum, and don't need any material to carry them, was disturbing to the physicists who first studied light in detail. They expected to find a fluid-like aether, a substance that was the carrier of electromagnetic waves. Instead, they found that those waves travel through truly empty space. One thing led to another, and soon Einstein proposed that the speed of light was profoundly special and that space and time were interrelated by way of that speed of light.
Classification and Mapping of Land Cover Types and Attributes in Al-Ahsaa Oasis, Eastern Region, Saudi Arabia Using Landsat-7 Data
Received Date: Jan 28, 2018 / Accepted Date: Feb 08, 2018 / Published Date: Feb 12, 2018
Information about land use/cover is important and much needed for many aspects of sustainable development and environmental management. Remote sensing datasets have become one of the most important and convenient tools for providing such information. The present study aimed to map land cover types for a sub-area in Al-Ahsaa Oasis, Saudi Arabia, using a subset of a Landsat-ETM+ image. Different image preprocessing techniques were applied, in addition to a well-known and widely used classification method (the Maximum Likelihood classifier). Accuracy assessment was carried out, with 89% agreement, and accepted according to the applied method. Several land cover classes were found in the study area, including sand dunes, water bodies, Sabakha, bare soil, urban, and agricultural lands. The study also revealed that the dominant land cover class is sand dunes, covering approximately 70% of the area. The study strongly indicated that the area has long been affected by sand movement. Finally, the study suggested that further research with more advanced methods, rather than traditional ones, is needed in the future to support the findings of this study with a high degree of accuracy.
Keywords: Remote sensing; Classification; Al-Ahsaa; Saudi Arabia; Land cover
Knowledge of land use and land cover is important for many planning and management activities concerned with the surface of the earth. Understanding the distribution of land cover is crucial to a better understanding of the earth's fundamental characteristics and processes, including the productivity of the land, the diversity of plant and animal species, and the biogeochemical and hydrological cycles.
The availability and accessibility of accurate and timely land cover information play an important role in global land development and in many scientific studies and socioeconomic assessments, because they are essential inputs for environmental and ecological models, the primary reference for ecosystem control and management, and required information for understanding coupled human and natural systems. In the study area, little information and few studies about land use/cover types have been found. The only study conducted in the study area, by Aldakheel et al., pointed out that the use of multi-temporal Landsat TM imagery to detect land use/cover change showed a significant result. They also reported that vegetation, soil salinization and urban area are the dominant land cover types in the study area. However, accurate and up-to-date information about land cover types and attributes is much needed, and the available information requires further investigation. The use of Landsat imagery to map land use and land cover has been an accepted practice since the launch of Landsat-1 in 1972. Land cover mapping is one of the main areas of remote sensing data application [9,10]. To effectively obtain such information from remotely sensed data, suitable digital image classification methods are required. A number of classification techniques have been reported for gathering, monitoring and mapping land cover types using remote sensing data [1,6,11-15]; for example, Osman suggested nonparametric or knowledge-based methods for image processing and analysis. Sub-pixel classification methods [13,16,17] have been used to label mixed land cover classes, especially in arid and semi-arid environments. However, these methods would not be suitable within the limited resources available, and because they require limited and spectrally distinctive components.
In addition, the remotely sensed signal of a pixel should relate linearly to the fraction of each endmember present. Conventional methods have also been widely accepted and used for mapping and assessment of land use and land cover types from satellite images [1,12,18]. In the present study, classification and mapping of land cover types from a Landsat-7 (ETM+) image is the main objective; identification of land cover attributes is a second focus. Level 1 of the USGS classification system is applied using the standard supervised (Maximum Likelihood) classification method, aided by different image preprocessing techniques. More details about the study area, materials and research methods are described in the materials and methods section of the paper. In the results section, the research results are presented. Finally, the study discusses the main points and findings in the discussion section and draws general conclusions in the conclusion section.
Materials and Methods
The study took place in Al-Ahsaa Oasis, eastern region, Saudi Arabia. It covers approximately 2268.72 km2 in area, with the geographical coordinates (49°24’-49°48’ E and 25°24’-25°36’ N) (Figure 1). The study area is mainly covered by active sand dunes. The topography, as shown in Figures 2a and 2b, is very gentle, with little relief and a few surrounding ridges. The elevation ranges from 345 to 510 meters above sea level. The study area has an arid to semi-arid climate, with average annual rainfall of less than 46 mm and a mean temperature of approximately 28°C in summer. The rain falls almost entirely in the period between March and August. For generating the land cover map, several essential image preprocessing and analysis techniques were used. All the image processing and analyses were carried out using the Integrated Land and Water Information System (ILWIS) open source software.
ILWIS is software with Geographical Information System (GIS) and image processing capabilities. For several reasons, raw remotely sensed data generally contain geometric and radiometric errors. To classify, identify and extract spectral and spatial classes representing different thematic features of these data, these errors have to be removed or reduced. In this study, the geometric corrections had already been done by the data provider, while the necessary radiometric corrections were accomplished as previously described by Irish and Mather [18,20]. There is a relationship between land cover and the measured reflection values in the image data, which depends on local characteristics. In order to extract information from the image data, this relationship must be found; the process of finding it is called classification. Digital image classification is customarily done by applying either supervised or unsupervised classification methods [1,12]. For satellite image applications, supervised classification is generally considered the more important and widely used. In this study, the supervised classification method was applied to classify a sub-scene of the Landsat-7 ETM+ image. The classification procedure was as follows:
• Using three uncorrelated bands (7, 5, and 1) obtained from the optimum index factor (O.I.F), a false colour composite image was created.
• Four image transformation methods were used: a) Principal Components Analysis (PCA) was used as an enhancement to reduce data dimensionality and redundancy (Liu and Mason, 2009) prior to visual interpretation and classification of the original data; b) image subtraction (differencing) was used for spectral enhancement and removal of background illumination bias (Mather, 1987); c) image division (ratio) was used to enhance spectral features; and d) the Normalized Difference Vegetation Index (NDVI) was used to detect the vegetation spectral response.
• Using the data derived from steps 1 and 2, two sets of signature files were defined and collected, aided by groups of “ground truth points”.
• For signature evaluation, the created signature files were plotted in colour feature space (see Figure 3) to confirm that the selected land cover classes are spectrally distinguishable and that each class corresponds to only one spectral cluster, i.e., no obvious overlap exists between different features.
• Using the signature files generated in step three, the supervised classification (maximum likelihood algorithm) was applied in a semi-automatic way, and the obtained result was evaluated and tested for accuracy.
Image transformation results
The six original bands 1-5 and 7 of the Landsat-7 ETM+ image are highly correlated with one another. (Band 6 is a thermal band, which makes TM data potentially useful in a range of thermal mapping applications; it also has a less distinct appearance than the other bands because its ground resolution cell is 120 m. Therefore, it was excluded from band combination, preprocessing and analysis.) To compact the redundant data into fewer layers, PCA was used to produce a new set of images that are uncorrelated with one another and are ordered in terms of the amount of variance they explain from the original set of bands. The eigenvector matrix of the six reflective spectral bands of a subset of the Landsat ETM+ image is presented in Table 1. The six PC components derived from the original image are shown in Figure 4. The following observations are made:
Table 1: The PCA eigenvalues (variance) and eigenvectors of the covariance matrix of the Landsat-7 ETM+ sub-scene. Variance per band: 5007.30, 242.67, 31.08, 41.46, 10.38, 1.06. Variance percentage per band: 94.35, 4.57, 0.59, 0.27, 0.20, 0.02.
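The maximum likelihood step above assigns each pixel to the class whose training statistics give it the highest Gaussian likelihood. A minimal single-band sketch follows; the class means and standard deviations are made-up training values, and the study's actual classifier works on multiple bands with full covariance matrices:

```java
public class MaxLikelihood {
    // Log-likelihood of value x under a 1-D Gaussian N(mu, sigma^2),
    // dropping the constant term common to all classes.
    static double logLikelihood(double x, double mu, double sigma) {
        double d = (x - mu) / sigma;
        return -Math.log(sigma) - 0.5 * d * d;
    }

    // Return the index of the class with the highest likelihood for pixel x.
    static int classify(double x, double[] mu, double[] sigma) {
        int best = 0;
        for (int c = 1; c < mu.length; c++) {
            if (logLikelihood(x, mu[c], sigma[c]) > logLikelihood(x, mu[best], sigma[best])) {
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Hypothetical per-class training statistics: sand, vegetation, water.
        double[] mu = {180.0, 90.0, 30.0};
        double[] sigma = {15.0, 10.0, 8.0};
        System.out.println(classify(175.0, mu, sigma)); // bright pixel -> 0 (sand)
    }
}
```

Per-pixel classification of a whole scene is just this decision applied to every pixel vector, with the means and covariances estimated from the training signatures.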
• The eigenvalues representing the variances of the PCs shown in Table 1 indicate that a very large portion of the information (data variance) is concentrated in PC1 and PC2, with 98.92% of the total variation in the original data set, whereas the other PCs (PC3, PC4, PC5 and PC6) together account for only about 1.08% of the total variance in the original scene. Table 1 also shows that all the bands had positive eigenvector weights in PC1. According to Olmo et al., Singh et al. and ILWIS [22-24], these eigenvectors are interpreted as: a) an albedo image (in which the soil and sand background is represented), and b) mostly explaining the high difference of the input bands.
• From Table 1 and Figure 4, note that PC1 concentrates approximately 94.35 percent of the variance in the original data.
Figure 4: PC (transformed) images derived from six reflective spectral bands of a subset of the Landsat-7 ETM+ image, in which the information redistribution and compression properties of the transformation are illustrated. A colour composite was formed by displaying the images with high variance, i.e., PC1 as red, PC2 as green and PC3 as blue.
• All six bands make a positive contribution to PC1, which has a large eigenvalue (5007.30 variance) and accounts for more than 94% of the information from all six bands. From the eigenvectors in row 1, note the large positive loadings (0.655) from band 5 and (0.454) from band 7, caused by the high reflectance of sand dunes and urban areas.
• The PC images with small eigenvalues (variance) (e.g., PC5 and PC6) contain almost no information; they are mostly errors or noise.
• Referring to Figure 4, PC1 concentrates information common to all six bands. According to the visual interpretation, this common information is mostly sand dunes.
• In Figure 4, both PC2 and PC3 depict the largest amount of variance that was masked by the dominant information of PC1.
For instance, some urban areas, natural vegetation and sand sheets are clearly defined in PC2 and PC3 in bright and grey tones.
• Row 2 of Table 1 shows the eigenvector values of PC2, which is dominated by the contribution of the blue band (channel 1) with a large positive loading (0.753) and a large negative loading (-0.453) from the mid-infrared band (channel 7). The large loadings of PC2 mostly represent the information excluded from PC1. PC3 and PC4 are dominated by large positive loadings (0.437) and (0.682) from the mid-infrared bands (channels 5 and 7). The large positive loading of PC3, as shown in Figure 4, is caused by the higher reflectance of urban areas and vegetation in the mid-infrared band (channel 7), whereas the large positive loading of PC4 is due to noise.
• As the PCA operation makes the bands independent (orthogonal) of one another, the bands with the highest amount of information (highest variance), i.e., PC1, PC2 and PC3, were used to create a colour composite image, as shown in Figure 4. From the created image, definition and collection of training areas was easily achieved, with little overlap, as plotted in Figure 3. From Table 1, the large positive eigenvector values in band 1 can be explained by the following:
• The study area is dominated by sand cover, bare soil, and Sabakha features. These features are highly reflective in band 1, which is designed for discrimination of soil and vegetation and for cultural feature identification.
• Band 1 is highly correlated with bands 2 (0.96), 3 (0.90) and 4 (0.84) and less correlated with bands 5 and 7 (the largest standard deviations between these two bands are 46.62 and 33.09, with the smallest correlation). This suggests that bands 1 to 3 carry similar information, concentrated in band 1. Figures 4 and 5 show the image transformation results.
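The variance figures in Table 1 come from an eigen-decomposition of the band covariance matrix. As a toy sketch of the idea, the following computes the covariance matrix of two synthetic "bands" (invented data, not the study's) and the share of total variance captured by the first principal component via power iteration; because band 2 here is an exact multiple of band 1, PC1 captures essentially all of the variance:

```java
public class PcaSketch {
    // Sample covariance between two equally long series.
    static double cov(double[] a, double[] b) {
        double ma = 0, mb = 0;
        for (int i = 0; i < a.length; i++) { ma += a[i]; mb += b[i]; }
        ma /= a.length; mb /= b.length;
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - ma) * (b[i] - mb);
        return s / (a.length - 1);
    }

    // Largest eigenvalue of a symmetric 2x2 matrix via power iteration.
    static double leadingEigenvalue(double[][] m) {
        double[] v = {1, 1};
        for (int it = 0; it < 100; it++) {
            double x = m[0][0] * v[0] + m[0][1] * v[1];
            double y = m[1][0] * v[0] + m[1][1] * v[1];
            double n = Math.sqrt(x * x + y * y);
            v[0] = x / n; v[1] = y / n;
        }
        // Rayleigh quotient of the converged vector gives the eigenvalue.
        double x = m[0][0] * v[0] + m[0][1] * v[1];
        double y = m[1][0] * v[0] + m[1][1] * v[1];
        return v[0] * x + v[1] * y;
    }

    public static void main(String[] args) {
        double[] band1 = {10, 20, 30, 40};
        double[] band2 = {20, 40, 60, 80}; // exactly 2 * band1
        double[][] c = {
            {cov(band1, band1), cov(band1, band2)},
            {cov(band2, band1), cov(band2, band2)}
        };
        double total = c[0][0] + c[1][1]; // total variance = trace of covariance matrix
        // Percent of total variance explained by PC1 (here: 100).
        System.out.println(100.0 * leadingEigenvalue(c) / total);
    }
}
```

Real PCA on six bands works the same way, only with a 6x6 covariance matrix and all six eigenpairs.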
The results highlighted the cover feature classes in the image by enhancing spectral feature separability and suppressing topographic shadows. From Figures 4 and 5, it can be seen that the areas covered by sand and barren lands or bare soil were easily distinguished and sampled. The concentrations of iron oxides and hydroxides in minerals made the spectral reflectance of sand (represented by the pink colour in Figure 5 and by red in Figure 6) more apparent in the resulting images than in the original one. The urban (built-up) areas are more apparent in Figure 6 than in Figure 5, indicated by blue colour; therefore, they were easily sampled and classified. Agricultural areas are indicated by green in Figure 5 and turquoise in Figure 6, due to the higher moisture content of this cover type. The Sabakha feature is highlighted in brown, as shown in Figure 5. However, image transformation (PCs and image ratios) techniques are most useful for highlighting and distinguishing specific land cover classes spectrally rather than spatially; therefore, their usage was restricted to defining and collecting the training samples for classification purposes. Figure 7 shows the healthy vegetation cover in the study area via the difference and summation of the near-infrared (NIR) and red spectrally calibrated bands (the NDVI index). This index was applied to make vegetation cover more distinguishable from the other ground objects for better classification results. The derived NDVI values range from -0.30 (minimum, representing the area covered by water) to 0.60 (maximum, representing the area covered by vegetation).
Image classification results
Figure 8 shows the result obtained from the classified subset of the Landsat-ETM+ image. Six major land cover classes were found in the study area, namely: vegetation (agriculture), Sabakha, sand, bare soil, water body, and urban.
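The NDVI values quoted above (about -0.30 for water up to 0.60 for vegetation) follow directly from the index's definition as a normalized difference of near-infrared and red reflectance. A minimal sketch, with illustrative reflectance values rather than the study's:

```java
public class Ndvi {
    // NDVI = (NIR - Red) / (NIR + Red): positive for vegetation
    // (high NIR reflectance), negative for water (red exceeds NIR).
    static double ndvi(double nir, double red) {
        double denom = nir + red;
        if (denom == 0) return 0.0; // guard against division by zero
        return (nir - red) / denom;
    }

    public static void main(String[] args) {
        System.out.println(ndvi(0.50, 0.10)); // vegetation-like pixel: strongly positive
        System.out.println(ndvi(0.02, 0.05)); // water-like pixel: negative
    }
}
```

Applied per pixel to the calibrated red and NIR bands, this produces the vegetation map shown in Figure 7.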
Information about the area and percentage of the thematic classes is summarized in Table 2. Referring to Table 2, the sand dune class is dominant in the study area, covering about 70% of it. The absence of vegetation cover at the margins of the study area is conspicuous; there is, however, some vegetation cover near water bodies in the form of bushes. Figure 8 also shows that agricultural areas cover only 5% of the area, mainly date trees with a few vegetables around them. Urban areas cover approximately 8% of the study area. According to the urban shape and pattern, it is clear that urban growth in the study area extends in the north-south and east directions. This means that urban expansion is restricted by different factors (e.g., sand dunes).
Class | N pixels | Area (meter) | Area (km2) | Area (%)
Table 2: Major land cover classes found in the area (in ha, km2 and percentage), created by the classification of the Landsat-7 ETM+ subset image (channels 1, 5, 7).
Table 3 shows the statistical report of the cross function that was used to evaluate the accuracy of the classification result, using the second set of signature files. The overall accuracy is 79%, with an average accuracy of 89% and an average reliability of 83%, which demonstrates the good performance of the classification procedure. Generally speaking, the statistical information in Table 3 indicates that the error in accuracy and reliability is less than 17 percent. Based on the accuracy and reliability statistics, the classification results are accepted as a basis for better planning and management of the existing land resources in the study area.
Bare soil | Sand | Sabakha | Urban | Vegetation (Agriculture) | Water | Unclassified | Accuracy
Table 3: Classification accuracy assessment results of the Landsat-7 ETM+ subset image, Date: October 2017, band combination (1, 5, 7). Average Accuracy=89.08%; Average Reliability=83.03%; Overall Accuracy=79.05%.
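The accuracy figures quoted above are standard confusion-matrix statistics: overall accuracy is the diagonal sum over the total, average accuracy the mean per-class producer's accuracy (recall), and average reliability the mean per-class user's accuracy (precision). A sketch on a made-up two-class matrix, not the study's six-class one:

```java
public class AccuracyStats {
    // m[i][j] = number of reference pixels of true class i labelled as class j.

    // Overall accuracy: correctly labelled pixels over all pixels.
    static double overall(int[][] m) {
        int diag = 0, total = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m.length; j++) {
                total += m[i][j];
                if (i == j) diag += m[i][j];
            }
        return (double) diag / total;
    }

    // Average accuracy: mean per-class recall (producer's accuracy).
    static double averageAccuracy(int[][] m) {
        double sum = 0;
        for (int i = 0; i < m.length; i++) {
            int row = 0;
            for (int j = 0; j < m.length; j++) row += m[i][j];
            sum += (double) m[i][i] / row;
        }
        return sum / m.length;
    }

    // Average reliability: mean per-class precision (user's accuracy).
    static double averageReliability(int[][] m) {
        double sum = 0;
        for (int j = 0; j < m.length; j++) {
            int col = 0;
            for (int i = 0; i < m.length; i++) col += m[i][j];
            sum += (double) m[j][j] / col;
        }
        return sum / m.length;
    }

    public static void main(String[] args) {
        int[][] m = {{40, 10}, {5, 45}}; // hypothetical confusion matrix
        System.out.println(overall(m));            // (40+45)/100 = 0.85
        System.out.println(averageAccuracy(m));    // (40/50 + 45/50)/2 = 0.85
        System.out.println(averageReliability(m)); // (40/45 + 45/55)/2
    }
}
```

The same three formulas, applied to the six-class cross table, yield the 79.05%, 89.08% and 83.03% figures reported in Table 3.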
A study of the Landsat-ETM+ image of the study area reveals that a large variety of sand dune shapes is present, with a total sand-covered area estimated at 1591 square kilometers. At least one-third of the study area has been affected by sand movement. The problem of sand movement has been controlled for several years by planting different types of trees to stabilize the sand and halt its movement toward the urban built-up area. Different shapes and sizes of sand dunes have been found in the study area. Holm pointed out that the main sources of these sand dunes are the Rub’ al Khali, Nafud and Dahna deserts. He also reported that the primary sources of sand for these deserts are crystalline rocks exposed in the uplands of the peninsula. Observations from fieldwork suggest that most of the sand dunes occur in areas of low relief and low plains, as shown in the east and west of the study area. In the eastern part of the study area, the dunes are about 150 meters high. The second most interesting cover type found in the study area is the Sabakha feature (indicated by blue colour in the land cover map). A study by Holm pointed out that the name comes from the Arabic; a sabakha is a saline flat area, found inland from the coast at elevations up to 150 meters near Hofuf (the focus of this study). Most of this land cover type has long been concentrated in the eastern part of the study area (see Figure 8). Holm also reported that there are two types of sabakha formation along the Arabian coast: 1) arenaceous, filled with sand, and 2) argillaceous, filled with clay. For more details about the formation of this land cover type, the study carried out by Holm is suggested. Also from Table 2, it can be seen that agricultural areas cover only around 131 square kilometers (approximately 5%) of the whole study site.
This may be interpreted in several ways. The first reason is that the study area has long been affected by different kinds of drought (e.g., hydrological droughts); the second is that the study area has been experiencing steady population growth since 2000. From 2000 until today, the built-up areas have increased to reach approximately 197 km2 in 2017. In addition, the extension of the built-up area has recently been restricted by sand dunes to specific directions (i.e., toward the agricultural lands). All these factors have decreased the area covered by cropland in the study area. Different land use/cover classes were found by Aldakheel and Al-Hussaini. They also revealed that channel 3 of the Landsat TM image may be best used to discriminate conversion of rural land to urban among the land cover classes in a change detection method. However, both their findings and those of this study need more investigation and in-depth research using more ground truth and different methods of remotely sensed data analysis (e.g., object-based classification, decision trees and support vector machines) rather than traditional methods (supervised or unsupervised), in order to improve the findings, generalize them to the whole region, and generate a more accurate and reliable land use/cover map. The aim of this study was to generate an up-to-date land cover map of the Al-Hofuf study site based on a well-known and widely applied standard supervised classification method (the maximum likelihood classifier algorithm), using Landsat-7 ETM+ subset image data.
From the obtained results, the study concludes that using image transformations prior to image classification decreased the topographic effects (shadows) on the satellite image, made it more consistent for classification, and made it more appropriate for the definition and collection of training areas, especially for the urban and Sabakha classes. It is also concluded that the correlation matrix (O.I.F) was very useful for obtaining multivariate statistical information from a data set for 3-band combinations. Based on the applied methods and overall accuracy results, the generated land cover map may be considered for land resources management and development. Furthermore, the study concludes that Landsat-ETM+ image data give optimal and up-to-date information for land use/cover mapping, and are very useful for carrying out land use/cover studies over wide arid and semi-arid areas. Finally, the results also point out that the study area has long been affected by sand movement; therefore, more studies should take place in the area in the future to provide more information about this phenomenon (sand encroachment).
The author would like to thank the anonymous reviewer and Dr. Ganawa from the University of Khartoum, Department of GIS, Sudan, for his useful suggestions and comments on the manuscript. All the work of the paper has been carried out by the author.
Conflicts of Interest
The authors declare no conflict of interest.
- Lillesand TM, Kiefer RW (1989) Remote Sensing and Image Interpretation. John Wiley & Sons Ltd, London, UK, p: 721.
- Giri CP (2012) Remote Sensing of Land Use and Land Cover: Principles and Applications. Series in Remote Sensing Applications. Taylor & Francis, UK.
- Bontemps S, Herold M, Kooistra L, Van Groenestijn A, Hartely A, et al. (2012) Revisiting land cover observation to address the needs of the climate modeling community. Biogeosciences 9: 2145-2157.
- Yang J, Gong O, Fu R, Zhang M, Chen J, et al. (2013) The role of satellite remote sensing in climate change studies. Nat Clim Chang 3: 875-883. - Mora B, Tesndbazar NE, Herold M, Arino O (2014) Global land cover mapping: Current status and future trend. In: Land cover change detection by integrating object-based data blending model of Landsat and MODIS. Remote Sensing of Environment 184: 374-386. - Chen J, Chen J, Liao A, Cao X, Chen L, et al. (2015) Global land cover mapping at 30 m resolution: A POK-based operational approach. ISPRS J Photogramm Remote Sens 103: 7-27. - Jin S, Yang L, Danielson P, Homer C, Fry J, et al. (2013) A comprehensive change detection method for updating the National Land cover Database to circa 2011. Remote Sens Environ 132: 159-175. - Aldakheel Y, Al-Hussaini A (2005) The use of multi-temporal Landsat TM imagery to detect land cover/use changes in Al-Ahssa, Saudi Arabia. Scientific Journal of King Faisal University (Basic and Applied Sciences) 6: 1426. - King RB (2002) Land cover mapping principles: a return to interpretation fundamentals. International Journal of Remote Sensing data, Remote Sensing of the Environment 86: 530-541. - Foody GM (2002) Status of land covers classification accuracy assessment. Remote Sensing of the Environment 80: 185-201. - Pilesjö P (1992) GIS and Remote Sensing for Soil Erosion Studies in Semi-arid Environment: Estimation of Soil Erosion Parameters at Different Scales. Doctor’s thesis No. CXIV, Department of Physical Geography, University of Lund, Sweden, p: 203. - Osman BT (1996) GIS-Hydrological Modelling in Arid Lands: A geographical synthesis of surface waters for the African Red Sea region in the Sudan. Doctor’s thesis, Department of Physical Geography, University of Lund, Sweden, p: 202. 
- Salih AAM, Ganawa E, Elmahl AA (2017) Spectral mixture analysis (SMA) and change vector analysis (CVA) methods for monitoring and mapping land degradation/desertification in arid and semi-arid area (Sudan), using Landsat imagery. The Egyptian Journal of Remote Sensing and Space Sciences 20: 21-29. - Sobrino JA, Munoz JCJ, Paolini L (2004) Land surface temperature retrieval from LANDSAT TM5. Remote Sensing of Environment 90: 434-440. - Erener A, Düzgün S, Yalciner AC (2011) Evaluating land use/cover change with temporal satellite data and information system. Procedia Technology 1: 385-389. - Song C (2005) Spectral mixture analysis for sub-pixel vegetation fraction in the urban environment: How to incorporate endmember variability. Remote Sensing of Environment 95: 248-263. - Dawelbait M, Morari F (2012) Monitoring desertification in a Savannah region in Sudan using Landsat images and spectral mixture analysis. Journal of Arid Environment 80: 45-55. - Irish R (2002) Landsat 7 Science Data Users Handbook. NASA Goddard Spaceflight Centre, USA. - US Geological Survey (USGS). - Mather PM (2009) Computer Processing of Remotely-Sensed Images: An Introduction. John Wiley & Sons Ltd, West Sussex, England. - Liu JG, Mason PJ (2009) Essential Image Processing and GIS for Remote Sensing. John Wiley & Sons Ltd, West Sussex, UK. - ILWIS (2001) User’s Guide, Aerospace Survey and Earth Sciences (ITC). Enschede, The Netherlands. - Olmo MC, Hernandez FA (2005) Remote Sensing Image Analysis: Including the Spatial Domain, Netherlands, pp: 93-111. - Singh A, Harrison A (1985) Standardized Principal Components. International Journal of Remote Sensing 6: 883-896. - Holm DA (1960) Desert Geomorphology in the Arabian Peninsula: Distinctive land forms provide new clues to the Pleistocene and Recent history of a desert region. Science 132: 1369-1379.
Citation: Salih A (2018) Classification and Mapping of Land Cover Types and Attributes in Al-Ahsaa Oasis, Eastern Region, Saudi Arabia Using Landsat-7 Data. J Remote Sensing & GIS 7: 228. Doi: 10.4172/2469-4134.1000228
Copyright: © 2018 Salih A. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Species Detail - Seraphim (Lobophora halterata)
Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
insect - moth
30 April (recorded in 2011)
3 July (recorded in 2008)
National Biodiversity Data Centre, Ireland, Seraphim (Lobophora halterata), accessed 19 July 2018, <https://maps.biodiversityireland.ie/Species/78810>
The programming world is vast, and you have to do a great deal of research if you want to master any programming language in its entirety. In many academic institutions, the programming language BC was introduced into the curriculum in 2003. Matlab, by contrast, is a proprietary language used for mathematical programming. This list covers the most important computer programming languages a person trying to enter IT ought to know. The core A+ interpreter, which includes neither a GUI nor an IPC, has been ported to Microsoft Windows. A+ is a programming language for real programmers, and for those programmers who are dedicated to creating software and website applications. Frankly, genuinely robust programming languages are hard to find. A+ is said to be a descendant of the A programming language, so if you know about the A programming language, you will have some idea of what A+ is all about. To access Centralized Telephone Programming, press Feature 0 0, then the left intercom button twice, then the right intercom button once. Yes, you can program or reprogram your mind to …
Hydrogen bomb, or H-bomb: weapon deriving a large portion of its energy from the nuclear fusion of hydrogen isotopes. In an atomic bomb, uranium or plutonium is split into lighter elements that together weigh less than the original atoms, the remainder of the mass appearing as energy. Unlike this fission bomb, the hydrogen bomb functions by the fusion, or joining together, of lighter elements into heavier elements. The end product again weighs less than its components, the difference once more appearing as energy. Because extremely high temperatures are required in order to initiate fusion reactions, the hydrogen bomb is also known as a thermonuclear bomb. The first thermonuclear bomb was exploded in 1952 at Enewetak by the United States, the second in 1953 by Russia (then the USSR). Great Britain, France, and China have also exploded thermonuclear bombs, and these five nations comprise the so-called nuclear club: nations that have the capability to produce nuclear weapons and admit to maintaining an inventory of them. The three smaller Soviet successor states that inherited nuclear arsenals (Ukraine, Kazakhstan, and Belarus) relinquished all nuclear warheads, which have been removed to Russia. Several other nations either have tested thermonuclear devices or claim to have the capability to produce them, but officially state that they do not maintain a stockpile of such weapons; among these are India, Israel, and Pakistan. South Africa's apartheid regime built six nuclear bombs but dismantled them later. The presumed structure of a thermonuclear bomb is as follows: at its center is an atomic bomb; surrounding it is a layer of lithium deuteride (a compound of lithium and deuterium, the isotope of hydrogen with mass number 2); around it is a tamper, a thick outer layer, frequently of fissionable material, that holds the contents together in order to obtain a larger explosion.
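The mass-difference-to-energy conversion described above can be checked with a back-of-envelope calculation for the deuterium-tritium reaction. This sketch uses standard isotope masses and the usual mass-energy conversion factor; the specific numbers are not taken from this article:

```python
# Energy released in D + T -> He-4 + n, from the mass defect (E = mc^2).
# Isotope masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
M_DEUTERIUM = 2.014102   # u
M_TRITIUM   = 3.016049   # u
M_HELIUM4   = 4.002602   # u
M_NEUTRON   = 1.008665   # u
U_TO_MEV    = 931.494    # MeV released per u of mass defect

def dt_fusion_energy_mev():
    """Mass of reactants minus mass of products, converted to MeV."""
    mass_defect = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
    return mass_defect * U_TO_MEV

print(f"D-T fusion releases about {dt_fusion_energy_mev():.1f} MeV")  # ~17.6 MeV
```

The products really do weigh less than the reactants, and the difference (about 0.019 u per reaction) is the energy the text describes.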
Neutrons from the atomic explosion cause the lithium to fission into helium, tritium (the isotope of hydrogen with mass number 3), and energy. The atomic explosion also supplies the temperatures needed for the subsequent fusion of deuterium with tritium, and of tritium with tritium (50,000,000 degrees Celsius and 400,000,000 degrees Celsius, respectively). Enough neutrons are produced in the fusion reactions to produce further fission in the core and to initiate fission in the tamper. Since the fusion reaction produces mostly neutrons and very little that is radioactive, the concept of a "clean" bomb has resulted: one having a small atomic trigger, a less fissionable tamper, and therefore less radioactive fallout. Carrying this progression further results in the neutron bomb, which has a minimum trigger and a nonfissionable tamper; it produces blast effects and a hail of lethal neutrons but almost no radioactive fallout and little long-term contamination. This theoretically would cause minimal physical damage to buildings and equipment but kill most living things. The neutron bomb was developed in 1958 by the United States and successfully tested; a number of countries are believed to have included such weapons in their nuclear arsenals, and the United States built several hundred neutron bombs in the 1980s but did not deploy them. The theorized cobalt bomb is, on the contrary, a radioactively "dirty" bomb having a cobalt tamper. Instead of generating additional explosive force from fission of the uranium, the cobalt is transmuted into cobalt-60, which has a half-life of 5.26 years and produces energetic (and thus penetrating) gamma rays. The half-life of Co-60 is just long enough so that airborne particles will settle and coat the earth's surface before significant decay has occurred, thus making it impractical to hide in shelters. This prompted physicist Leo Szilard to call it a "doomsday device," since it was capable of wiping out life on earth.
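The 5.26-year half-life quoted above translates into the standard exponential decay law. A small illustrative sketch (the decay formula is textbook physics, not something stated in this article):

```python
# Fraction of cobalt-60 remaining after t years, given its 5.26-year half-life.
CO60_HALF_LIFE_YEARS = 5.26

def co60_fraction_remaining(t_years):
    """N/N0 = 2^(-t / t_half): each half-life halves the remaining amount."""
    return 2.0 ** (-t_years / CO60_HALF_LIFE_YEARS)

# After one half-life exactly half remains; after ~50 years (about nine
# half-lives) well under 0.2% remains -- long enough, as the text notes,
# to outlast any practical stay in a shelter.
print(co60_fraction_remaining(5.26))  # 0.5
print(co60_fraction_remaining(50.0))
```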
Like other types of nuclear explosion, the explosion of a hydrogen bomb creates an extremely hot zone near its center. In this zone, because of the high temperature, nearly all of the matter present is vaporized to form a gas at extremely high pressure. A sudden overpressure, i.e., a pressure far in excess of atmospheric pressure, propagates away from the center of the explosion as a shock wave, decreasing in strength as it travels. It is this wave, containing most of the energy released, that is responsible for the major part of the destructive mechanical effects of a nuclear explosion. The details of shock wave propagation and its effects vary depending on whether the burst is in the air, underwater, or underground. See disarmament, nuclear and nuclear weapons; see also nuclear energy.
- See R. Rhodes, Dark Sun: The Making of the Hydrogen Bomb (1995).
All of the following methods are specific to Internet Explorer 4.0's Dynamic HTML object model. None are supported by Netscape or any previous versions of Internet Explorer.

The click method can be used to 'click' a referenced object through scripting, forcing an onClick event for the particular element. For example, click the top of the following two links and it 'clicks' the second link:

The contains method can be used to determine whether the referenced element totally encloses (contains) another element. For example, given:

<P ID="para1">Some <STRONG ID="str1">strong, bold</STRONG> text</P>

alert (para1.contains(str1)) would return true.

The getAttribute method can be used to retrieve the value of a specific attribute for the referenced element - for example, the value of the BGCOLOR attribute of a given element. Its optional second argument is a boolean value (i.e. true or false), specifying whether the search to find the attribute is case-sensitive or not. 'True' means that the attribute case must match that given in the attribute value for the getAttribute method to work - the default value is 'false'. Depending on the value of the attribute, the getAttribute method returns either a string, a number, or a variant.

The insertAdjacentHTML method can be used to insert a new HTML element into the document, without removing a previous one. It places the string specified in the second argument at the position specified in the first argument. For example:

document.all.tags("P").item(1).insertAdjacentHTML("BeforeBegin", "<P>Here's a new paragraph")

inserts <P>Here's a new paragraph before the second paragraph in the document. The possible values for the positioning are 'BeforeBegin', 'AfterBegin', 'BeforeEnd' and 'AfterEnd'.

The insertAdjacentText method is essentially identical to the insertAdjacentHTML method, except that it inserts literal text, regardless of the string's actual content. It takes the same argument set - i.e. (string, position) - where position can be one of the four values mentioned above.

The removeAttribute method can be used to remove an attribute and its associated value from the referenced element. This is subtly different to dynamically setting the attribute property value to nothing: using removeAttribute forces removal of the attribute, as if it had never been set in the first place. The removeAttribute method returns a boolean (i.e. true or false) value depending on whether the attribute was successfully removed or not. Its optional second argument is a boolean value which specifies whether to use a case-sensitive search to locate the attribute to remove. For example:

bKilldataSrc=dataTable.removeAttribute "DATASRC", "false"

would set bKilldataSrc to true or false, depending on whether the DATASRC attribute was removed from the element referenced by dataTable; here the search is case-insensitive. The default value for the case-sensitivity argument, if none is given, is 'true'.

The scrollIntoView method can be used to force the current viewing window to scroll to a referenced element object. It accepts a boolean argument (true or false) which determines whether the window should be scrolled so that the referenced element object is at the top (true) or bottom (false) of the window. For example, the button below will scroll the links given in the click example, so that they're at the bottom of the viewing window.

Like the other methods, setAttribute can be used to set the value of a specific attribute for a referenced element. For example:

MyTable.setAttribute "DATASRC", "#Comp1", true

would set the DATASRC attribute to #Comp1 for the element referenced by MyTable. The first and second arguments for the method specify the attribute and its value to be set, with the third argument being 'true' or 'false', specifying whether case-sensitive setting of the attribute is used or not. If this is set to 'true' (the default) and the attribute name you specify for the referenced element has a different case than any existing setting of that attribute, then a new attribute will be created, with the value specified in the value argument.

© 1995-1998, Stephen Le Hunte

Reader note: the opening caveat ("All of the following methods are specific to Internet Explorer 4.0's Dynamic HTML object model. None are supported by Netscape") is no longer needed, as of Netscape 6.5, Mozilla 1.7, and Firefox 1.0. It just took a bit for Netscape to recover from the AOL/Sun nightmare. There still may be subtle differences in how layers are displayed, but there are other sites you can link to for that.
WELCOME TO WEBMATHS! Hi, my name is Jeff Trevaskis - alias Mr. T! I live in a small town in Northern Victoria, Australia. I love teaching kids.

Can you list all the "small" fractions, that is, those using no number higher than 5, in order of size from 0 to 1? Who was the mathematician who named this series of ordered fractions after himself in 1816?

Imagine a rope pulled tight around the circumference of the earth! We now lengthen the rope by 1 metre and position it uniformly above the earth. What is the gap between the earth and the rope? Will a mouse fit … Continue reading

I have found the ability of children (and adults) to visualise their own country is often tenuous. What are your experiences with this? There are many aspects to this, for example: areas of states (see Maths300 lesson 50 - "Country … Continue reading

I was a famous mathematician who died in the year 1871. I was "x" years old in the year x². In what year was I born? Who am I? Describe one of my notable achievements.

Can you find anything wrong with the following proof that 2 = 1? If not, then in our next Maths class I will give you a $1 coin, and in return you will give me a $2 coin! (I hope to get … Continue reading

What if you had to teach the classes you are taking now or something you learned years ago? How would you use technology to do it? What devices, software, games, networks, or applications would you use to help students learn … Continue reading

Last year in a Year 8 Maths class I used origami for the first time during a geometry unit. I was very surprised by the enthusiasm shown by the students! They were much more attentive and focussed than usual and … Continue reading

Adrian loves to play poker. But when no one is around he plays it as a solitaire game. He deals himself 25 cards and then tries to arrange them into the best poker hands possible, going both across and up … Continue reading

This year there will be a Friday 13th in both February and March.
1. Can this occur with any two other months?
2. In what year will the exact same situation happen again?
[Thanks to my daughter Rachel for this question!]
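One way to explore the Friday-13th question above is by brute force (a sketch; the real fun is reasoning out *why* the pattern holds - in a common year February has exactly four weeks, so the 13ths of February and March always share a weekday):

```python
# Find years in which both 13 February and 13 March fall on a Friday.
import datetime

def double_friday_13th_years(start, end):
    """Return the years in [start, end] with Friday 13ths in both Feb and Mar."""
    years = []
    for year in range(start, end + 1):
        feb13 = datetime.date(year, 2, 13)
        mar13 = datetime.date(year, 3, 13)
        if feb13.weekday() == 4 and mar13.weekday() == 4:  # 4 == Friday
            years.append(year)
    return years

print(double_friday_13th_years(2009, 2030))  # [2009, 2015, 2026]
```

Note that every year returned is a common (non-leap) year: in a leap year February has 29 days, so the two 13ths land on different weekdays.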
Imagine an artificial leaf that mimics photosynthesis, which lets plants harness energy from the sun. But this leaf would have the ability to power your homes and cars with clean energy using only sunlight and water. This is not some far-off idea of the future. It's reality, and the subject of a jury-prize-winning film in the GE Focus Forward Film Competition. Jared Scott and Kelly Nyks' short film, "The Artificial Leaf," showcases chemist Dan Nocera, the inventor of the artificial leaf, a device that he says can power the world. "The truth is stranger than fiction," Kelly Nyks, a partner at PF Pictures, told ABC News. "What I think is so exciting is that Dan has taken this science and applied it in a way that makes bringing it to scale to solve the energy crisis for the planet real and possible." Nocera's leaf is simply a silicon wafer coated with catalysts that use sunlight to split water into its hydrogen and oxygen components. "Essentially, it mimics photosynthesis," Nocera told ABC News. The gases that bubble up from the water can be turned into a fuel to produce electricity in the form of fuel cells. The device may sound like science fiction fantasy, but Nocera said he hopes one day it will provide an alternative to the centralized energy system - the grid. Worldwide, more than 1.6 billion people live without access to electricity and 2.6 billion people live without access to clean sources of fuel for cooking. "This is the model: We're going to have a very distributed energy system," Nocera told ABC News. With the leaf, "using just sunlight and water, you can be off the grid. If you're poor, you don't have a grid, so this gives them a way to have energy in the day and at night." With just the artificial leaf, 1.5 bottles of drinking water and sunlight, you could have enough electricity to power a small home, but the cost is still a problem, though Nocera said he believes that will come down with time and research. The artificial leaf is cheaper than solar panels but still expensive.
Hydrogen from a solar panel and electrolysis unit can currently be made for about $7 per kilogram; the artificial leaf would come in at $6.50. Nocera is looking for ways to drive down the costs and make these devices more widely available. He recently replaced the platinum catalyst that produces hydrogen gas with a less-expensive nickel-molybdenum-zinc compound. He's also looking for ways to reduce the amount of silicon needed. In 2009, Nocera's artificial leaf was selected as a recipient of funding by the U.S. Department of Energy's Advanced Research Projects Agency (ARPA-E), which supports energy technologies that could create a more secure and affordable American future. Nyks and Scott said they hope "The Artificial Leaf" will bring awareness to the public that sustainable energy solutions do exist. "We make films for social action," Scott, also a partner at PF Pictures, told ABC News. "We see films as a tool for social change. And what I think Dan sketches out is that we start with energy. And if we solve the energy crisis, we'll solve the climate crisis, and then we'll solve the water crisis, and then we'll solve the food crisis. But it starts with energy." The directors were one of 30 filmmaking teams asked to make a movie that could highlight an innovation that could change the world as part of Focus Forward, a series of three-minute films created by award-winning documentary makers including Alex Gibney, Lucy Walker, Albert Maysles and Morgan Spurlock. Anyone with an Internet connection has access to the videos online. The winning entries are featured at focusforwardfilms.com. So far, total media impressions for GE Focus Forward have exceeded 1.5 billion. In addition, the films are screening at all the major film festivals around the world and have played on every continent, including Antarctica. Nyks and Scott said they hope to take the success of the short and turn it into a feature-length documentary.
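The "1.5 bottles of drinking water" claim can be sanity-checked with basic chemistry. This is a rough sketch under assumed values (a 1 L bottle, complete splitting of the water, and hydrogen's lower heating value of about 33.3 kWh/kg - none of these figures come from the article itself):

```python
# Chemical energy stored in the hydrogen obtained by splitting 1.5 litres
# of water: 2 H2O -> 2 H2 + O2, i.e. one mole of H2 per mole of H2O.
WATER_LITRES    = 1.5
WATER_G_PER_MOL = 18.015   # molar mass of H2O
H2_G_PER_MOL    = 2.016    # molar mass of H2
H2_KWH_PER_KG   = 33.3     # assumed lower heating value of hydrogen

moles_water = WATER_LITRES * 1000.0 / WATER_G_PER_MOL  # ~83 mol
h2_kg = moles_water * H2_G_PER_MOL / 1000.0            # ~0.17 kg of H2
energy_kwh = h2_kg * H2_KWH_PER_KG                     # ~5.6 kWh

print(f"{h2_kg * 1000:.0f} g of H2, roughly {energy_kwh:.1f} kWh of chemical energy")
```

A few kilowatt-hours per day is indeed in the range of a modest off-grid household load, which is consistent with the article's claim.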
Authors: Raji Heyrovska

Exactly fifteen years ago today, the author arrived at the unique result that the ground-state Bohr radius of the hydrogen atom is divided into two parts pertaining to the electron and proton, the ratio of which was, amazingly, a constant. This constant turned out to be the Golden ratio, a mathematical constant known from ancient times to appear in many spontaneous creations of Nature, big and small. Further work showed that the interatomic distances in alkali metals and halogens are divided exactly into their cationic and anionic radii by the Golden ratio, the sums of which accounted precisely for the interionic distances in alkali halides. This cascaded over the years into the additivity rule of atomic and/or ionic radii in the structures of small as well as large molecules. This is summarized in this short paper.

Comments: 3 pages. [v1] 2018-02-26 12:23:58
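The geometric operation the abstract describes - dividing a distance into two parts whose ratio is the Golden ratio - can be illustrated numerically. This is a generic sketch of golden-section division, not the author's actual radii data:

```python
# Dividing a length d at the Golden-ratio point: the larger part is d/phi
# and the smaller part is d/phi^2. They sum back to d because phi satisfies
# phi^2 = phi + 1, which implies 1/phi + 1/phi^2 = 1.
import math

PHI = (1 + math.sqrt(5)) / 2  # the Golden ratio, ~1.618

def golden_split(d):
    """Split a length d into its golden-section parts (larger, smaller)."""
    return d / PHI, d / PHI ** 2

larger, smaller = golden_split(1.0)
print(larger, smaller, larger / smaller)  # the ratio of the parts is again PHI
```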
The team used satellite data going back to 1982 to reconstruct past Arctic sea ice conditions, concluding there has been a nearly complete loss of the oldest, thickest ice and that 58 percent of the remaining perennial ice is thin and only 2-to-3 years old, said the lead study author, Research Professor James Maslanik of CU-Boulder's Colorado Center for Astrodynamics Research. In the mid-1980s, only 35 percent of the sea ice was that young and that thin according to the study, the first to quantify the magnitude of the Arctic sea ice retreat using data on the age of the ice and its thickness, he said. "This thinner, younger ice makes the Arctic much more susceptible to rapid melt," Maslanik said. "Our concern is that if the Arctic continues to get kicked hard enough toward one physical state, it becomes increasingly difficult to reestablish the sea ice conditions of 20 or 30 years ago." A September 2007 study by CU-Boulder's National Snow and Ice Data Center indicated last year's average sea ice extent minimum was the lowest on record, shattering the previous September 2005 record by 23 percent. The minimum extent was lower than the previous record by about 1 million square miles -- an area about the size of Alaska and Texas combined. The new study by Maslanik and his colleagues appears in the Jan. 10 issue of Geophysical Research Letters. Co-authors include CCAR's Charles Fowler, Sheldon Drobot and William Emery, as well as Julienne Stroeve from CU-Boulder's Cooperative Institute for Research in Environmental Sciences and Jay Zwally and Donghui Yi from NASA's Goddard Space Flight Center in Greenbelt, Md. The portion of ice more than five years old within the multi-year Arctic icepack decreased from 31 percent in 1988 to 10 percent in 2007, according to the study. Ice 7 years or older, which made up 21 percent of the multi-year Arctic ice cover in 1988, made up only 5 percent in 2007, the research team reported. 
The researchers used passive microwave, visible infrared radar and laser altimeter satellite data from the National Oceanic and Atmospheric Administration, NASA and the U.S. Department of Defense, as well as ocean buoys to measure and track sections of sea ice. The team developed "signatures" of individual ice sections roughly 15 miles square using their thickness, roughness, snow depth and ridge characteristics, tracking them over the seasons and years as they moved around the Arctic via winds and currents, Emery said. "We followed the ice in sequential images and track it back to where it had been previously, which allowed us to infer the relative ages of the ice sections." The replacement of older, thicker Arctic ice by younger, thinner ice, combined with the effects of warming, unusual atmospheric circulation patterns and increased melting from solar radiation absorbed by open waters in 2007 all have contributed to the phenomenon, said Drobot. "These conditions are setting the Arctic up for additional, significant melting because of the positive feedback loop that plays back on itself." "Taken together, these changes suggest that the Arctic Ocean is approaching a point where a return to pre-1990s ice conditions becomes increasingly difficult and where large, abrupt changes in summer ice cover as in 2007 may become the norm," the research team wrote in Geophysical Research Letters. James Maslanik | EurekAlert!
Channel formation by the bacterial toxin aerolysin follows oligomerization of the protein to produce heptamers that are capable of inserting into lipid bilayers. How insertion occurs is not understood, not only for aerolysin but also for other proteins that can penetrate membranes. We have studied aerolysin channel formation by measuring dye leakage from large unilamellar egg phosphatidylcholine vesicles containing varying amounts of other lipids. The rate of leakage was enhanced in a dose-dependent manner by the presence of phosphatidylethanolamine, diacylglycerol, cholesterol, or hexadecane, all of which are known to favor a lamellar-to-inverted hexagonal (L-H) phase transition. Phosphatidylethanolamine molecular species with low L-H transition temperatures had the largest effects on aerolysin activity. In contrast, the presence in the egg phosphatidylcholine liposomes of lipids that are known to stabilize the lamellar phase, such as sphingomyelin and saturated phosphatidylcholines, reduced the rate of channel formation, as did the presence of lysophosphatidylcholine, which favors positive membrane curvature. When two different lipids that favor the hexagonal phase were present with egg PC in the liposomes, their stimulatory effects were additive. Phosphatidylethanolamine and lysophosphatidylcholine canceled each other's effect on channel formation.
Scientists Unravel the Everyday Problem of Untied Shoelaces Untied shoelaces -- it's one of life's most annoying quirks. A team of mechanical engineers at the University of California Berkeley has had enough, so they investigated the reason why shoelaces keep on coming untied. According to a report from Phys Org, the group found that the feet's combination of stomping on the ground and whipping around acts as an "invisible hand," loosening and eventually untying the knot. "When you talk about knotted structures, if you can start to understand the shoelace, then you can apply it to other things, like DNA or microstructures, that fail under dynamic forces," Christopher Daily-Diamond, study co-author and a graduate student at Berkeley, explained. "This is the first step toward understanding why certain knots are better than others, which no one has really done." There are two ways people generally tie their shoelaces: the "granny knot" and the "square knot," which tends to last longer. Both eventually fail though, and to find out why, the researchers conducted several experiments testing the knots. First, one of the study authors, Christine Gregg, hopped on a treadmill as her colleagues filmed her feet. With the slow-motion video, the team observed that the foot strikes the ground at seven times the force of gravity. Reacting to this increased force, the knot stretches and relaxes with the foot's movement. This causes the knot to loosen, and when the swinging leg also applies inertial force on the ends of the shoelaces, it's only a matter of time before the knot becomes untied. The shoelaces can unravel in as little as two strides. Walking -- the combination of stomping and swinging -- is necessary for the knots to come undone. A report from Science Magazine stressed that simply stomping will not get the knot to fail, nor will just swinging the feet.
"We were able to see that these two combined effects lead to shoe knots failing," UC Berkeley engineer Oliver O'Reilly told Science Magazine. "You need both together." The experiment also underlined the superiority of the square knot, which only failed half the time, compared to the granny knot, which failed all the time. Scientists still need further studies to figure out why one knot is stronger than the other. The significance of knots goes beyond solving the everyday problem of shoelaces. Understanding these mechanics has other practical applications like mountaineering and climbing, sailing and even surgical sutures. More importantly, knots exist on a microscopic level as well, such as in DNA. The study appears in the journal Proceedings of the Royal Society A.
- To retrieve the 3D water vapour density structure, a discretization of the troposphere over the study area has to be performed, where it is spatially divided into a finite number of boxes or cells (usually called voxels (Flores et al.)).
- This is only the second time water vapour has been discovered on a moon in the solar system.
- The ability of clothing ensembles to transport water vapour is an important determinant of physiological comfort.
- If water vapour is a more effective greenhouse gas than carbon dioxide, then why isn't global warming a consequence of more water vapour?
- Many materials, such as cellulose, EVOH or PVOH, allow water vapour to flow through, but some plastics allow hundreds of times more vapour to flow through than others.
- "Now we will need to review our understanding of the chemical processes in this dense region and, in particular, the importance of cosmic rays to maintain some amount of water vapour," Caselli noted.
- Freezer burn is caused by water vapour escaping from the product's frozen surface and migrating through the packaging.
- It seems that previous models have greatly underestimated the quantities of water vapour at heights of 20-50 km.
- The Brownell REGEN8 removes the water vapour entering the gearbox to a safe level.
- Permeability leaders Versaperm Ltd have introduced a new multi-chamber instrument to measure the water vapour permeability of biscuit (and other food) packaging to a few ppm or better.
- When the air is compressed, the amount of water vapour increases in volume, depending on the compression pressure.
- London, January 29 (ANI): Scientists have suggested that a puzzling drop in the amount of water vapour in the Earth's atmosphere may be responsible for a slowdown in average global temperatures.
<urn:uuid:b5093a6c-9c77-4cab-9d61-40d3e315edc7>
3.1875
390
Structured Data
Science & Tech.
30.421426
95,640,458
Hubble was used to precisely measure, for the first time ever, the sideways motions of a small sample of stars located far from the galaxy's center. Their unusual lateral motion is circumstantial evidence that the stars may be the remnants of a shredded galaxy that was gravitationally ripped apart by the Milky Way billions of years ago. These stars support the idea that the Milky Way grew, in part, through the accretion of smaller galaxies. "Hubble's unique capabilities are allowing astronomers to uncover clues to the galaxy's remote past. The more distant regions of the galaxy have evolved more slowly than the inner sections. Objects in the outer regions still bear the signatures of events that happened long ago," said Roeland van der Marel of the Space Telescope Science Institute (STScI) in Baltimore, Md. They also offer a new opportunity for measuring the "hidden" mass of our galaxy, which is in the form of dark matter (an invisible form of matter that does not emit or reflect radiation). In a universe full of 100 billion galaxies, our Milky Way "home" offers the closest and therefore best site for detailed study of the history and architecture of a galaxy. Deason and her team plucked the outer halo stars out of seven years' worth of archival Hubble telescope observations of our neighboring Andromeda galaxy. In those observations, Hubble peered through the Milky Way's halo to study the Andromeda stars, which are more than 20 times farther away. The Milky Way's halo stars were in the foreground and considered as clutter for the study of Andromeda. But to Deason's study they were pure gold. The observations offered a unique opportunity to look at the motion of Milky Way halo stars. Finding the stars was meticulous work. Each Hubble image contained more than 100,000 stars. "We had to somehow find those few stars that actually belonged to the Milky Way halo," van der Marel said. "It was like finding needles in a haystack." 
The astronomers identified the stars based on their colors, brightnesses, and sideways motions. The halo stars appear to move faster than the Andromeda stars because they are so much closer. Team member Sangmo Tony Sohn of STScI identified the halo stars and measured both the amount and direction of their slight sideways motion. The stars move on the sky only about one milliarcsecond a year, which would be like watching a golf ball on the Moon moving one foot per month. Nonetheless, this was measured with 5 percent precision, made possible in visible-light observations because of Hubble's razor-sharp view and instrument consistency. "Measurements of this accuracy are enabled by a combination of Hubble's sharp view, the many years' worth of observations, and the telescope's stability. Hubble is located in the space environment, and it's free of gravity, wind, atmosphere, and seismic perturbations," van der Marel said. Stars in the inner halo have highly radial orbits. When the team compared the tangential motion of the outer halo stars with their radial motion, they were very surprised to find that the two were equal. Computer simulations of galaxy formation normally show an increasing tendency towards radial motion if one moves further out in the halo. These observations imply the opposite trend. The existence of a shell structure in the Milky Way halo is one plausible explanation of the researchers' findings. Such a shell can form by accretion of a satellite galaxy. This is consistent with a picture in which the Milky Way has undergone continuing evolution over its lifetime due to the accretion of satellite galaxies. The team compared their results with data of halo stars recorded in the Sloan Digital Sky Survey. Those observations uncovered a higher density of stars at about the same distance as the 13 outer halo stars in their Hubble study. A similar excess of halo stars exists across the Triangulum and Andromeda constellations. 
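The milliarcsecond-per-year sideways motions quoted above translate into physical speeds once a distance is assumed. A small sketch using the standard conversion v_t = 4.74 μ d (μ in arcsec/yr, d in parsecs); the 80 kpc halo-star distance below is an illustrative assumption, not a figure from the study:

```python
def tangential_velocity_kms(mu_mas_per_yr, distance_pc):
    """Tangential velocity in km/s from a proper motion in
    milliarcseconds/year and a distance in parsecs, via
    v_t = 4.74 * mu["/yr] * d[pc]."""
    return 4.74 * (mu_mas_per_yr / 1000.0) * distance_pc

# One milliarcsecond per year at an assumed distance of 80 kpc:
v = tangential_velocity_kms(1.0, 80_000)
```

The point of the exercise: even a motion as tiny as one milliarcsecond a year corresponds to hundreds of km/s at halo distances, which is why such measurements constrain the galaxy's mass.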
Beyond that radius, the number of stars plummets. Deason immediately thought the two results were more than just coincidence. "What may be happening is that the stars are moving quite slowly because they are at the apocenter, the farthest point in their orbit about the hub of our Milky Way," Deason explained. "The slowdown creates a pileup of stars as they loop around in their path and travel back towards the galaxy. So their in and out or radial motion decreases compared with their sideways or tangential motion." Shells of stars have been seen in the halos of some galaxies, and astronomers predicted that the Milky Way may contain them, too. But until now there was limited evidence for their existence. The halo stars in our galaxy are hard to see because they are dim and spread across the sky. Encouraged by this study, the team hopes to search for more distant halo stars in the Hubble archive. "These unexpected results fuel our interest in looking for more stars to confirm that this is really happening," Deason said. "At the moment we have quite a small sample. So we really can make it a lot more robust with getting more fields with Hubble." The Andromeda observations only cover a very small "keyhole view" of the sky. The team's goal is to put together a clearer picture of the Milky Way's formation history. By knowing the orbits and motions of many halo stars it will also be possible to calculate an accurate mass for the galaxy. "Until now, what we have been missing is the stars' tangential motion, which is a key component. The tangential motion will allow us to better measure the total mass distribution of the galaxy, which is dominated by dark matter. By studying the mass distribution, we can see whether it follows the same distribution as predicted in theories of structure formation," Deason said. The Hubble study will appear in an upcoming issue of the Astrophysical Journal. The science team consists of A. Deason and P. 
Guhathakurta of UCO/Lick Observatory, University of California, Santa Cruz, Calif., and R.P. van der Marel, S.T. Sohn, and T.M. Brown of the Space Telescope Science Institute, Baltimore, Md. For illustrations and more information about this study, visit: Donna Weaver | Newswise
<urn:uuid:6e1bf66d-220d-45c8-b550-4d05a98a2aca>
3.71875
1,855
Content Listing
Science & Tech.
44.6054
95,640,464
US researchers have broken the record for the amount of data that can be sent by photons. Using the "wiggling" and "twisting" of a pair of hyper-entangled photons, researchers at the University of Illinois have surpassed a fundamental limit on the channel capacity for dense coding with linear optics. "Dense coding is arguably the protocol that launched the field of quantum communication," said Paul Kwiat, a John Bardeen professor of physics and electrical and computer engineering at Illinois. "Today, however, more than a decade after its initial experimental realisation, channel capacity has remained fundamentally limited as conceived for photons using conventional linear elements." The boffins explained that a single photon in classical coding will convey only one of two messages, or one bit of information. In dense coding, a single photon can convey one of four messages, or two bits of information. "Dense coding is possible because the properties of photons can be linked to each other through a peculiar process called quantum entanglement," said Professor Kwiat. "This bizarre coupling can link two photons even if they are located on opposite sides of the galaxy." Using linear elements, however, the standard protocol is fundamentally limited to convey only one of three messages, or 1.58 bits. The new experiment surpasses this threshold by employing pairs of photons entangled in more ways than one, i.e. hyper-entangled. As a result, additional information can be sent and correctly decoded to achieve the full power of dense coding. Professor Kwiat, graduate student Julio Barreiro and postdoctoral researcher Tzu-Chieh Wei describe the experiment in a paper published in Nature Physics.
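The bit counts quoted in the article follow directly from the number of distinguishable messages a single photon can carry — a short illustrative sketch:

```python
import math

def channel_capacity_bits(n_messages):
    """Bits conveyed per photon when one of n equally likely,
    distinguishable messages is sent."""
    return math.log2(n_messages)

classical = channel_capacity_bits(2)  # 1 bit: ordinary single-photon coding
linear    = channel_capacity_bits(3)  # ~1.58 bits: linear-optics limit
dense     = channel_capacity_bits(4)  # 2 bits: full dense coding
```

The 1.58-bit figure is just log2(3): with linear optical elements only three of the four entangled messages can be reliably told apart, which is the threshold the hyper-entanglement experiment surpassed.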
<urn:uuid:a8af0059-e843-4415-bb67-e61a5ce8c884>
3.09375
413
News Article
Science & Tech.
32.413873
95,640,468
Application of Algebras The geometric interpretation of complex numbers as points of the plane appeared for the first time in the 18th century. There then arose the natural idea of generalizing complex numbers in such a way that they could be interpreted as points of three-dimensional space. One of the earliest attempts of this kind was due to Caspar Wessel. It appeared in his previously mentioned Attempt to represent direction. Having thought of the operation of multiplication of complex numbers in geometric terms, Wessel associated to a point in space with rectangular coordinates x, y, z the expression x + yε + zη, where ε and η are two different imaginary units, and used these numbers to interpret rotations about the Oy- and Oz-axes. Wessel used his "algebra" to solve problems involving spherical polygons. Keywords: Jordan Algebra, Spinor Representation, Residue Class, Steiner Triple System, Dual Number
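Wessel's geometric reading of complex multiplication — the idea he then tried to generalize to space — can be illustrated in the plane. This is a modern restatement in code, not Wessel's own notation: multiplying by e^{iθ} rotates a point about the origin by θ.

```python
import cmath
import math

def rotate(point, theta):
    """Rotate a point of the plane (written as a complex number
    x + yi) about the origin by theta radians."""
    return point * cmath.exp(1j * theta)

p = complex(1.0, 0.0)
q = rotate(p, math.pi / 2)  # a quarter turn carries 1 to i
```

Wessel's three-dimensional expression x + yε + zη attempted the same trick with two imaginary units, one for each axis of rotation; the consistent resolution of that idea had to wait for Hamilton's quaternions.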
<urn:uuid:6246eef3-62e2-4e97-9926-07f6d6741272>
3.375
206
Truncated
Science & Tech.
34.576769
95,640,514
This webinar is a kick-off to PERN's Cyberseminar People and Pixels Revisited: 20 years of progress and new tools for population-environment research. Twenty years ago the National Research Council published the ground-breaking People and Pixels: Linking Remote Sensing and Social Science (NRC, 1998). The volume focused on emerging research findings that linked population dynamics and human activities to changes in land use and land cover, revealing the many ways that human activities affect landscapes from Latin America to Southeast Asia. Separate chapters also addressed health- and famine-related applications of remote sensing. Since that time, new research opportunities have opened because of the increasing array of social science data from both traditional (e.g., censuses, surveys) and new sources (e.g., mobile phone and social media data), the growing variety of satellite and aerial data sources (e.g., high resolution, VIIRS nightlights, radar, UAVs), and access to computational cyberinfrastructure for the analysis of massive spatiotemporal datasets. The cyberseminar aims to identify and review the primary research breakthroughs and future directions opened by this digital revolution. The "people and pixels" move in geography shed light on the concerns of sustainability, human livelihoods, land use planning, resource use, and conservation, and led to practical innovations in agricultural planning, hazard impact analysis, and drought monitoring. What will the next 20 years bring?
<urn:uuid:08a29bc2-9fc5-4154-a3ab-1a768016da7b>
2.578125
301
News (Org.)
Science & Tech.
26.2105
95,640,529
We all know that if you put your hand over an open flame it's very painful. What you may not know is that, for some people, just lying under a blanket is painful as well. They have neuropathic pain--annoying, chronic pain that comes from a diseased nerve cell rather than a specific stimulus. Feeling phantom pain in a missing limb is another, more famous, example. Experts say up to two percent of the U.S. population suffers from neuropathic pain. But this pain generally responds poorly to analgesics and other standard treatments and gets worse over time, causing permanent disability in some people. Now there may be new hope for these pain sufferers. Scientists at the University of Virginia Health System have identified a new type of pain-sensing neuron in rats that is unusually dense in a subtype of calcium channels called T-type channels. It is possible that these "T-rich cells" could be targets for future therapies to treat neuropathic pain as well as acute onset pain, which can happen after invasive surgery or inflammation. Bob Beard | EurekAlert!
<urn:uuid:02150d88-5056-44fb-bbe1-913e4dc68654>
2.921875
871
Content Listing
Science & Tech.
42.91602
95,640,530
Sometimes it takes time to uncover nature's secrets. Take the case of callimicos, also called Goeldi's monkeys, a reclusive and diminutive South American primate. Discovered a century ago by Swiss naturalist Emil August Goeldi, the animals were once considered to be a possible "missing link" between small and large New World monkeys. An endangered callimico perches on the trunk of a tree in a Bolivian rain forest. Credit: Edilio Nacimento Becerra But new findings from the first long-term studies of the monkeys in the wild seem to indicate that this is not the case, although the animals have a unique set of anatomical, reproductive and behavioral characteristics. Leila Porter, a biological anthropologist at the University of Washington, has spent nearly four years observing callimicos (Callimico goeldii) in the Amazon basin of Northern Bolivia. Her pioneering fieldwork has collected the first detailed data on the ecology and behavior of the animals, an endangered species, in the wild. Among other things, her observations show callimicos eat fungi during the dry season, making them the only tropical primate species to subsist on this food source for part of the year. They also have a different reproductive strategy from other small New World monkeys. Callimicos (Latin for beautiful little monkeys) have the capacity to give birth to a single offspring twice annually while their closest primate relatives – marmosets, tamarins and lion tamarins – give birth to twins once a year. Porter's findings have just been published in the journal Evolutionary Anthropology in a paper she authored with Paul Garber, a biological anthropologist from the University of Illinois at Urbana-Champaign.
Joel Schwarz | University of Washington
<urn:uuid:7b3bac92-0237-4d89-b4bf-520b6e4d3991>
3.84375
1,023
Content Listing
Science & Tech.
32.619386
95,640,543
Exquisite wing fossils reveal the world's first butterflies appeared 200 million years ago, long BEFORE there were flowers on Earth to pollinate - Moths and butterflies were thought to have evolved alongside flowers - However, the world's first flowers sprouted around 140 million years ago - This suggests moths and butterflies emerged before flowers appeared - Scientists say they must have developed coiled mouthparts for a purpose other than feeding on nectar Newly-found fossils suggest moths and butterflies have been on Earth for at least 200 million years - at least 70 million years longer than previously thought. Scales from the wings of at least seven species of 'Lepidoptera' - the group that includes moths and butterflies - were found in a sample of ancient rock in Germany. As well as pushing back the date for the emergence of Lepidoptera, the discovery proves that butterflies did not evolve alongside flowers as previously thought. Scientists hope the discovery could help them understand the early evolution of moths and butterflies, and aid researchers with their conservation. Scroll down for video Newly-found fossils suggest moths and butterflies have been on Earth for at least 200 million years - at least 70 million years longer than previously thought. Shown here is a primitive moth that has a proboscis that can suck up fluid. The scale bar is 1 centimeter (0.4 inches). A team of researchers, led by Dr Timo van Eldijk from the University of Utrecht, looked at the wings of 70 different butterfly and moth fossils. The tiny fossils weighed no more than 0.35 ounces (10 grams) and were about the same size as a speck of dust. The ancient remnants were found in the rocks of northern Germany and researchers analysed and compared the fossilised wing scales to those of existing species. The research team used acid to dissolve the rock and leave only the remains of the wings.
They were surprised to find the fossils were closely related to moths and butterflies that belong to a group still alive today. These creatures have long tongues, known as a proboscis, that they use for sucking up nectar. But the first flowers are believed to have sprouted on Earth 140 million years ago - long after the date of these fossils. Dr Russell Garwood of the University of Manchester, who is not connected with the study, told the BBC: 'This new evidence suggests that perhaps the coiled mouthparts had another role, before flowering plants evolved.' Researchers analysed and compared the fossilised wing scales to those of existing species (pictured). They found that the fossils were closely related to animals of the modern-day Glossata sub-group, which has a long proboscis. The patterns and ridges of wing scales (pictured) were used to identify the fossils found. The tiny fossils are believed to be from around the Triassic-Jurassic boundary, around 200 million years ago. One theory is that these small flying insects fed on gymnosperms. Gymnosperms are flowerless, seed-producing plants such as conifers which dominated the Jurassic landscape and produce tiny drops of high-energy liquid. Another is that the early Jurassic and late Triassic era was a very dry and arid time and the proboscis of the butterflies was an 'efficient technique to replenish lost moisture and survive desiccation stress,' the researchers said in the study. The ancient fossils were discovered in a small place called Schandelah in the vicinity of Braunschweig in northern Germany by a team of researchers from the University of Utrecht. The wing scales of the fossils provide evidence that the early butterflies had a long proboscis, which they likely used to feed on the gymnosperms of the time as well as to drink water.
But this flower is the mother (and father) of every flowering plant living today and was gazed upon by the dinosaurs 140 million years ago. No fossils this old have ever been found, so scientists recreated it by analysing every plant family on Earth over six years. We now know the flowers in our gardens come from this one bloom, with three separate whorls of layered petals and both male and female reproductive organs. Its petals lie open because its main pollinator was probably a beetle, with bees only just evolving at that time and yet to require tubular flowers like snapdragons. It looks like a magnolia and would not be out of place in any front garden. But this flower is the mother (and father) of every flowering plant living today and was gazed upon by the dinosaurs 140 million years ago Flowering plants appeared on our planet relatively recently, brightening up a drab landscape previously dominated by ferns, horsetails and mosses. They now represent 90 per cent of all land plants, and scientists claim this is the most accurate picture of their common ancestor produced yet. Dr Emily Bailes, who worked on the study, said: 'This is the best representation so far of the flower which is parent to every modern flower we see today, and would have existed while the dinosaurs were still on the planet, which is really exciting. 'It has three concentric circles of petal-like organs, unlike the majority of plants today. We don't know for sure what colour this flower would have been, but I think it is quite pretty.' The origin of early flowering plants, called angiosperms, remains one of the biggest puzzles in biology, almost 140 years after Charles Darwin called their rapid rise in the Cretaceous period 'an abominable mystery'. The picture published in journal Nature Communications looks unlike anything we have today or any of the ideas previously proposed. 
It has three whorls, or concentric circles of petals, rather like the magnolia it resembles, which makes it unusual among modern-day plants. Only around 20 per cent now match this, with plants typically having fewer layers, such as the two whorls seen in lilies.
<urn:uuid:136ca01a-1f97-44b5-b6b8-bd9332638871>
3.640625
1,472
Truncated
Science & Tech.
36.283785
95,640,551
Division. Summation of Power Series Divide the answer to Ex. 2.2 (p × q × r) by each of its 3 factors: there will be no remainders. In each case check the decimal point of your answer by inspection of the product of the other two factors, i.e. test (pqr)/r by inspection of pq. Find the reciprocals, to 4S (4 significant figures), of 1·249, 0·08334, 750·2. Check the figures of your answers by reciprocal tables, and then multiply each number by its 4S reciprocal. Keywords: Power Series, Numerical Mathematics, Decimal Point, Division Problem, Single Error
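The reciprocal exercise can be checked numerically. A sketch of rounding to 4 significant figures and multiplying back; the `round_sig` helper is our own illustration, not a routine from the book:

```python
from math import floor, log10

def round_sig(x, sig=4):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

# Reciprocals to 4S, as in the exercise, then check each by
# multiplying the number by its rounded reciprocal (result ~ 1):
for n in (1.249, 0.08334, 750.2):
    r = round_sig(1.0 / n)
    check = n * r  # should be close to 1
```

Multiplying each number by its 4S reciprocal returns a value within about one part in ten thousand of unity, which is exactly the check the exercise asks for.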
<urn:uuid:76ae86f9-991e-4c6b-8b63-6469565b336a>
2.8125
153
Truncated
Science & Tech.
75.499652
95,640,552
Washington: 'Comet Lovejoy' will light up the sky from January 7th through 24th as it enters its "best and brightest phase." The comet had been predicted to glow at 4th magnitude, bright enough that skywatchers with clear, dark skies might be able to glimpse it by eye, without optical aid, and the early-evening sky during this time will be dark and moonless, allowing the best views. On January 7th, Comet Lovejoy passes closest to Earth at a distance of 44 million miles (70 million km), nearly half the distance from Earth to the Sun. But its distance will change only a little for many nights after that, so you'll have plenty of opportunities to track it down. This was the fifth comet discovery by Australian amateur astronomer Terry Lovejoy, who found it in images taken with his backyard 8-inch telescope. It's a very long-period comet, meaning that it has passed through the inner solar system before, roughly 11,500 years ago. Slight gravitational perturbations by the planets will alter the orbit a bit, so that the comet will next return in about 8,000 years. Astronomers have given it the official designation C/2014 Q2. The current Comet Lovejoy was not producing enough dust to create a bright tail, and in fact this interloper wasn't expected to become so prominent at all. But by late 2014 amateur astronomers had noticed that the comet was brightening steadily, and faster than predicted.
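The quoted ~8,000-year return period pins down the size of the orbit via Kepler's third law (a³ = P² for a body orbiting the Sun, with a in astronomical units and P in years) — a quick illustrative check, not a figure from the article:

```python
def semimajor_axis_au(period_years):
    """Kepler's third law for a Sun-orbiting body: a^3 = P^2,
    with a in astronomical units and P in years."""
    return period_years ** (2.0 / 3.0)

# An ~8,000-year period implies a semi-major axis of about 400 AU,
# carrying the comet far beyond the planets between apparitions:
a = semimajor_axis_au(8000)
```

For comparison, the 70-million-km close approach quoted above is under 0.5 AU, so the comet spends almost all of its orbit in the distant outer solar system.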
<urn:uuid:95c82a59-2e5b-401a-929c-8104c987604b>
3.5625
311
News Article
Science & Tech.
52.678429
95,640,566
Advanced Applications of Theory of Surfaces
Special topics on the geometric properties of surfaces are treated in this chapter. They are developed from the fundamental theory of surfaces explained in the previous chapter. Knowledge of these topics is useful for treating free-form surfaces in advanced problems. First we discuss the umbilics and lines of curvature. On a free-form surface there are points and regions which have special characters inherent to its shape, and whose locations do not depend on the coordinate system adopted. The lines of curvature make orthogonal nets on a surface, and the pattern they form exhibits inherent features of the surface. The umbilics are singular points, curves, or regions as seen from the lines of curvature. On a free-form surface, umbilics appear more frequently than one might expect. There are other curves on the surface which depend not only on its inherent features but also on its orientation with respect to observers, its environment, or its surface physical properties. These curves are useful for describing or evaluating objects against engineering and aesthetic criteria.
Keywords: Principal Curvature, Characteristic Curve, Intersection Curve, Advanced Application, Contour Curve
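The principal curvatures that organize the lines of curvature (their directions are the eigendirections, and umbilics are the points where the two curvatures coincide) can be computed directly from the first and second fundamental forms. A minimal sympy sketch; the elliptic paraboloid patch below is an assumed example, not one taken from the chapter.

```python
# Principal curvatures of a parametric surface r(u, v) from its first and
# second fundamental forms, via the shape operator S = I^{-1} II.
import sympy as sp

u, v, a, b = sp.symbols('u v a b', real=True)
r = sp.Matrix([u, v, (a*u**2 + b*v**2) / 2])   # assumed example patch

ru, rv = r.diff(u), r.diff(v)
nvec = ru.cross(rv)
n = nvec / sp.sqrt(nvec.dot(nvec))             # unit surface normal

# Coefficients E, F, G (first form) and L, M, N (second form).
E, F, G = ru.dot(ru), ru.dot(rv), rv.dot(rv)
L, M, N = r.diff(u, 2).dot(n), r.diff(u).diff(v).dot(n), r.diff(v, 2).dot(n)

# Evaluate both form matrices at the origin of the patch.
I1 = sp.Matrix([[E, F], [F, G]]).subs({u: 0, v: 0})
I2 = sp.Matrix([[L, M], [M, N]]).subs({u: 0, v: 0})

# Eigenvalues of the shape operator are the principal curvatures;
# the origin is an umbilic exactly when the two coincide (a = b).
S0 = sp.simplify(I1.inv() * I2)
print(S0.eigenvals())   # principal curvatures a and b at the origin
```

The same pipeline evaluated over a grid of (u, v) values is the usual starting point for tracing lines of curvature numerically.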
<urn:uuid:a08f337a-818e-4a60-8bd7-0c91e7327488>
3
570
Truncated
Science & Tech.
49.334245
95,640,570
Differential geometry is a mathematical discipline that uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra to study problems in geometry. The theory of plane and space curves and of surfaces in three-dimensional Euclidean space formed the basis for the development of differential geometry during the 18th and 19th centuries. Since the late 19th century, differential geometry has grown into a field concerned more generally with the geometric structures on differentiable manifolds. Differential geometry is closely related to differential topology and to the geometric aspects of the theory of differential equations. The differential geometry of surfaces captures many of the key ideas and techniques characteristic of this field. Differential geometry arose and developed in connection with the mathematical analysis of curves and surfaces. That analysis had been developed to answer some of the nagging, unanswered questions that appeared in calculus, such as the reasons for relationships between complex shapes and curves, series and analytic functions. These unanswered questions pointed to greater, hidden relationships. The general idea of natural equations for obtaining curves from local curvature appears to have been first considered by Leonhard Euler in 1736, and many examples with fairly simple behavior were studied in the 1800s. When curves, surfaces enclosed by curves, and points on curves were found to be quantitatively, and generally, related by mathematical forms, the formal study of the nature of curves and surfaces became a field of study in its own right, with Monge's paper in 1795, and especially with Gauss's publication of his article 'Disquisitiones Generales Circa Superficies Curvas' in Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores in 1827.
Initially applied to the Euclidean space, further explorations led to non-Euclidean space, and metric and topological spaces. Riemannian geometry studies Riemannian manifolds, smooth manifolds with a Riemannian metric. This is a concept of distance expressed by means of a smooth positive definite symmetric bilinear form defined on the tangent space at each point. Riemannian geometry generalizes Euclidean geometry to spaces that are not necessarily flat, although they still resemble the Euclidean space at each point infinitesimally, i.e. in the first order of approximation. Various concepts based on length, such as the arc length of curves, area of plane regions, and volume of solids all possess natural analogues in Riemannian geometry. The notion of a directional derivative of a function from multivariable calculus is extended in Riemannian geometry to the notion of a covariant derivative of a tensor. Many concepts and techniques of analysis and differential equations have been generalized to the setting of Riemannian manifolds. A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry. This notion can also be defined locally, i.e. for small neighborhoods of points. Any two regular curves are locally isometric. However, the Theorema Egregium of Carl Friedrich Gauss showed that for surfaces, the existence of a local isometry imposes strong compatibility conditions on their metrics: the Gaussian curvatures at the corresponding points must be the same. In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant. These are the closest analogues to the "ordinary" plane and space considered in Euclidean and non-Euclidean geometry. 
Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite. A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity. Finsler geometry has the Finsler manifold as the main object of study. This is a differential manifold with a Finsler metric, i.e. a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0,∞) such that: (1) F(x, my) = mF(x, y) for all (x, y) in TM and all m ≥ 0; (2) F is infinitely differentiable on TM ∖ {0}; and (3) the vertical Hessian of F² is positive definite. Symplectic geometry is the study of symplectic manifolds. An almost symplectic manifold is a differentiable manifold equipped with a smoothly varying non-degenerate skew-symmetric bilinear form on each tangent space, i.e., a nondegenerate 2-form ω, called the symplectic form. A symplectic manifold is an almost symplectic manifold for which the symplectic form ω is closed: dω = 0. A diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces, so symplectic manifolds necessarily have even dimension. In dimension 2, a symplectic manifold is just a surface endowed with an area form, and a symplectomorphism is an area-preserving diffeomorphism. The phase space of a mechanical system is a symplectic manifold, and symplectic manifolds made an implicit appearance already in the work of Joseph Louis Lagrange on analytical mechanics and later in Carl Gustav Jacobi's and William Rowan Hamilton's formulations of classical mechanics. By contrast with Riemannian geometry, where the curvature provides a local invariant of Riemannian manifolds, Darboux's theorem states that all symplectic manifolds are locally isomorphic. The only invariants of a symplectic manifold are global in nature, and topological aspects play a prominent role in symplectic geometry.
The first result in symplectic topology is probably the Poincaré-Birkhoff theorem, conjectured by Henri Poincaré and then proved by G.D. Birkhoff in 1912. It claims that if an area preserving map of an annulus twists each boundary component in opposite directions, then the map has at least two fixed points. Contact geometry deals with certain manifolds of odd dimension. It is close to symplectic geometry and, like the latter, it originated in questions of classical mechanics. A contact structure on a (2n + 1)-dimensional manifold M is given by a smooth hyperplane field H in the tangent bundle that is as far as possible from being associated with the level sets of a differentiable function on M (the technical term is "completely nonintegrable tangent hyperplane distribution"). Near each point p, a hyperplane distribution is determined by a nowhere vanishing 1-form α, which is unique up to multiplication by a nowhere vanishing function: H = ker α. A local 1-form α on M is a contact form if the restriction of its exterior derivative dα to H is a non-degenerate two-form and thus induces a symplectic structure on Hp at each point. If the distribution H can be defined by a global one-form α, then this form is contact if and only if the top-dimensional form α ∧ (dα)^n is a volume form on M, i.e. does not vanish anywhere. A contact analogue of the Darboux theorem holds: all contact structures on an odd-dimensional manifold are locally isomorphic and can be brought to a certain local normal form by a suitable choice of the coordinate system. Complex differential geometry is the study of complex manifolds. An almost complex manifold is a real manifold M endowed with a tensor of type (1, 1), i.e. a vector bundle endomorphism J : TM → TM with J² = −1, called an almost complex structure. It follows from this definition that an almost complex manifold is even-dimensional. An almost complex manifold is called complex if N_J = 0, where N_J is a tensor of type (2, 1) related to J, called the Nijenhuis tensor (or sometimes the torsion).
An almost complex manifold is complex if and only if it admits a holomorphic coordinate atlas. An almost Hermitian structure is given by an almost complex structure J, along with a Riemannian metric g, satisfying the compatibility condition g(JX, JY) = g(X, Y). An almost Hermitian structure defines naturally a differential two-form ω(X, Y) := g(JX, Y). The following two conditions are equivalent: (1) N_J = 0 and dω = 0; (2) ∇J = 0, where ∇ is the Levi-Civita connection of g. In this case, (J, g) is called a Kähler structure, and a Kähler manifold is a manifold endowed with a Kähler structure. In particular, a Kähler manifold is both a complex and a symplectic manifold. A large class of Kähler manifolds (the class of Hodge manifolds) is given by all the smooth complex projective varieties. Differential topology is the study of global geometric invariants without a metric or symplectic form. Differential topology starts from the natural operations such as the Lie derivative of natural vector bundles and the de Rham differential of forms. Besides Lie algebroids, Courant algebroids also start playing a more important role. A Lie group is a group in the category of smooth manifolds. Besides its algebraic properties, it also enjoys differential geometric properties. The most obvious construction is that of a Lie algebra, which is the tangent space at the unit endowed with the Lie bracket between left-invariant vector fields. Besides the structure theory there is also the wide field of representation theory. The apparatus of vector bundles, principal bundles, and connections on bundles plays an extraordinarily important role in modern differential geometry. A smooth manifold always carries a natural vector bundle, the tangent bundle. Loosely speaking, this structure by itself is sufficient only for developing analysis on the manifold, while doing geometry requires, in addition, some way to relate the tangent spaces at different points, i.e. a notion of parallel transport. An important example is provided by affine connections.
For a surface in R3, tangent planes at different points can be identified using a natural path-wise parallelism induced by the ambient Euclidean space, which has a well-known standard definition of metric and parallelism. In Riemannian geometry, the Levi-Civita connection serves a similar purpose. (The Levi-Civita connection defines path-wise parallelism in terms of a given arbitrary Riemannian metric on a manifold.) More generally, differential geometers consider spaces with a vector bundle and an arbitrary affine connection which is not defined in terms of a metric. In physics, the manifold may be the space-time continuum and the bundles and connections are related to various physical fields. From the beginning and through the middle of the 18th century, differential geometry was studied from the extrinsic point of view: curves and surfaces were considered as lying in a Euclidean space of higher dimension (for example a surface in an ambient space of three dimensions). The simplest results are those in the differential geometry of curves and differential geometry of surfaces. Starting with the work of Riemann, the intrinsic point of view was developed, in which one cannot speak of moving "outside" the geometric object because it is considered to be given in a free-standing way. The fundamental result here is Gauss's theorema egregium, to the effect that Gaussian curvature is an intrinsic invariant. The intrinsic point of view is more flexible. For example, it is useful in relativity where space-time cannot naturally be taken as extrinsic (what would be "outside" of it?). However, there is a price to pay in technical complexity: the intrinsic definitions of curvature and connections become much less visually intuitive. These two points of view can be reconciled, i.e. the extrinsic geometry can be considered as a structure additional to the intrinsic one. (See the Nash embedding theorem.) 
In the formalism of geometric calculus both extrinsic and intrinsic geometry of a manifold can be characterized by a single bivector-valued one-form called the shape operator. Below are some examples of how differential geometry is applied to other fields of science and mathematics.
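Gauss's theorema egregium, mentioned above, can be checked concretely: the Gaussian curvature of the round sphere comes out of the first fundamental form alone, with no reference to the embedding. A minimal sympy sketch, assuming an orthogonal parametrization (F = 0), for which the intrinsic formula reads K = −(1/(2√(EG))) [∂_u(G_u/√(EG)) + ∂_v(E_v/√(EG))].

```python
# Intrinsic Gaussian curvature of the sphere of radius R, computed from
# the metric (first fundamental form) only, illustrating the theorema
# egregium. Spherical coordinates (u, v) with 0 < u < pi are assumed.
import sympy as sp

u, v, R = sp.symbols('u v R', positive=True)

# Metric coefficients of the round sphere: ds^2 = R^2 du^2 + R^2 sin^2(u) dv^2
E, F, G = R**2, 0, R**2 * sp.sin(u)**2
sqrtEG = R**2 * sp.sin(u)          # sqrt(E*G), positive on 0 < u < pi

# Intrinsic curvature formula for an orthogonal parametrization (F = 0).
K = -(sp.diff(sp.diff(G, u) / sqrtEG, u)
      + sp.diff(sp.diff(E, v) / sqrtEG, v)) / (2 * sqrtEG)
print(sp.simplify(K))              # 1/R^2, depending only on the metric
```

The constant result 1/R² matches the extrinsic computation (product of the principal curvatures), exactly as the theorem guarantees.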
<urn:uuid:ad1f6458-fa7f-4ad7-84b9-c2a88c4ef7ab>
3.46875
2,534
Knowledge Article
Science & Tech.
27.054612
95,640,585
A unique breakthrough by researchers at Linköping University in Sweden creates new potential in medicine and biochemistry and at the same time provides a new piece of the puzzle in theories about the origins of life. Normally, inorganic materials like silica are unwelcome in biological systems, since they disrupt the form and function of proteins. “We wanted to reverse the thinking and try to design proteins that take on their function only after encountering an inorganic surface,” says Bengt-Harald Jonsson, professor of molecular biotechnology. He directs the research team that is now presenting its findings in Angewandte Chemie. The team designed a peptide (a short protein) with a specific distribution of positive charges. The peptide was mixed into a solution of spherical silica particles, about 9 nanometers (billionths of a meter) across. When the peptide was free in the solution it had no structure whatsoever, but when it connected with the negatively charged silica ball it assumed the form of a helix. The result was a complex of a silica particle and a functional protein. When the researchers added amino acids to their peptide, the complex took on the properties of a catalyst, a function similar to that of enzymes in living cells. The method has several possible fields of application, such as the recognition of organic molecules. “We know that RNA (which plays a decisive role in the transfer of information in cells) can bind with clay particles whose surfaces have negative charges. The probability of peptides with amino acids having formed well-defined structures with the clay at an early stage of development is considerably greater, since they are more diversified than RNA is,” says Bengt-Harald Jonsson.
Åke Hjelm | alfa
<urn:uuid:b9dc5fb5-76f5-45b0-b8c3-eb5b50faed83>
3.140625
946
Content Listing
Science & Tech.
35.15339
95,640,603
- Open Access Shallow pressure sources associated with the 2007 and 2014 phreatic eruptions of Mt. Ontake, Japan © The Author(s) 2016 Received: 9 March 2016 Accepted: 20 July 2016 Published: 29 July 2016 We modeled pressure sources under Mount Ontake volcano, Japan, on the basis of global navigation satellite system (GNSS) observations of ground deformation during the time period including the 2007 and 2014 phreatic eruptions. The total change in volume in two sources below sea level in the period including the 2007 eruption was estimated from GNSS network observations to be 6 × 10^6 m^3. Additionally, data from a GNSS campaign survey yielded an estimated volume change of 0.28 × 10^6 m^3 in a shallower source just beneath the volcanic vents. The 2007 eruption may have been activated by magmatic activity at depth. During the 2014 eruption, the volume change at depth was very small. However, tiltmeter data indicated inflation from a shallow source that began 7 min before the eruption, representing a volume change estimated to be 0.38 × 10^6 m^3. We infer that the potential for subsurface hydrothermal activity may have remained high after the 2007 eruption. Mount Ontake, a 3067-m volcano in central Honshu Island, Japan, has had two phreatic eruptions in recent years that were accompanied by ground deformation. Even though a phreatic eruption does not produce lava, it can cause numerous casualties among hikers during the popular mountaineering seasons. For this reason, it is important to study the shallow pressure sources beneath active volcanoes that can produce phreatic eruptions. A very small eruption in late March 2007 resulted in no casualties; however, an eruption on September 27, 2014, killed 63 people. The 2014 eruption was an isolated phreatic eruption that created a small pyroclastic flow, but no lava was erupted. The eruption plume reached an estimated 10,000 m above sea level (Meteorological Research Institute 2016).
The volume of the eruptive products was estimated to be 0.3–0.5 × 10^6 m^3 dense rock equivalent (DRE) from a field survey conducted after the 2014 eruption (Maeno et al. 2016), but no obvious ground deformation was detected on the day of the eruption. It is difficult to detect ground deformation prior to phreatic eruptions because they can occur without obvious magma migration. However, ground deformation caused by shallow pressure sources associated with phreatic or possible phreatic eruptions has been observed in some cases (e.g., Takagi 2013; Yoshida et al. 2012). In this paper, we report evidence on shallow pressure sources associated with the 2007 and 2014 eruptions of Mt. Ontake.
Ground deformation before and after the 2007 eruption
The geodetic coordinates of GEONET stations are analyzed by GSI using precise ephemeris data, and they are generally provided in a form called the F3 final solution (Nakagawa et al. 2009). The JMANET system, which consists of single-frequency-receiver stations, except for JMA510, is independent of the nationwide GEONET system. The raw data of both JMANET and GEONET stations were analyzed so that coordinates based on JMANET were connected to GEONET. We recalculated the coordinates of JMA510 referred to GSI0614 and GSI0988 by double-frequency analysis, and likewise recalculated the coordinates of JMA511 and JMA512 referred to JMA510 by single-frequency analysis, using the GNSS analysis software Bernese Ver. 5.0 (Dach et al. 2007). The relative coordinates of the JMANET stations were connected to the F3 final solution of GEONET. Source parameters were calculated by an inversion analysis using the formulas of Mogi (1958) and Okada (1992) and assuming an analytical region consisting of an elastic half-space. However, given the rugged topography, there are large differences of elevation among the GNSS stations.
Therefore, we set the boundary of the elastic half-space at the height of each GNSS station for a more accurate approximation of the pressure source. On volcanoes, as in other precipitous areas, surface displacement due to pressure changes of underground sources can be affected by topography (Meteorological Research Institute 2008), and ground deformation cannot always be explained by an approximate analytic solution. To check the effect of topography, we calculated the displacement of the three JMANET stations, given our estimated source parameters, using the finite element method (FEM) and a digital elevation model (DEM). The resulting displacement was only as much as 9 % greater than the displacement yielded by the approximate analytic solution. Therefore, in the analysis area of this study, the effect of topography on the source estimation is negligible compared with observation errors. Ground deformation associated with the 2014 eruption GNSS network data show that the ground deformation before the 2014 eruption (Fig. 2) was too small for the pressure source to be modeled by the usual method (Miyaoka and Takagi 2016). However, a JMA tiltmeter in a borehole at site JMA510 detected ground tilt changes before and after the 2014 phreatic eruption, which started on September 27, 2014, at 11:52 Japan local time. The pendulum tiltmeter had operated at the bottom of a 100-m-deep borehole, and 1-Hz sampling data had been transmitted to the headquarters of JMA since 2010 (Volcanology Division, JMA 2014). It is difficult to estimate possible long-term precursory tilt changes before the 2014 Mt. Ontake eruption. There may have been gradual northwest-upward tilt in the months before the eruption. However, the tiltmeter record may also include noise due to groundwater fluctuations and snowmelt. A broadband seismometer operated by Nagano Prefecture at location MIT (Fig. 8), 3 km from the summit, recorded ground motions (Maeda et al. 2015). 
We converted this seismic record to ground tilt by numerical integration (Aoyama and Oshima 2015). The result showed uplift oriented west–southwest (NS component, −0.11 × 10^−6 radians; EW component, −0.94 × 10^−6 radians), which is the direction toward the summit. This tilt change is shown in Fig. 7c. The orientation of the uplift does not coincide exactly with the alignment of the volcanic vents in the Jigokudani valley, but it does coincide roughly with it (Fig. 8). We carried out GNSS campaign surveys of the summit area in September 2011 and in October 2015, after the eruption. Because the results included aftereffects of the 2011 off the Pacific coast of Tohoku Earthquake, it was difficult to estimate how much of the deformation was volcanic, but volcanic regional deformation cannot be ruled out in the summit area. We infer the following record of pressure sources accompanying ground deformation associated with the 2007 and 2014 eruptions of Mt. Ontake. GNSS observations suggest that an open-crack fault pressure source deeper than 5 km below sea level and a spherical shallow pressure source at sea level inflated gradually starting 3 months before the 2007 eruption. The volume changes were 5.5 × 10^6 and 0.32 × 10^6 m^3, respectively. The volume change of the deep source was large, so it was likely a dike-type magma chamber. Murase et al. (2016) also suggested the tensile crack model from precise leveling measurements. At the same time, the shallow source was located at the hypocenter of the very long-period seismic event associated with the 2007 eruption (Nakamichi et al. 2009). An additional, shallower source, 1700 m above sea level, inflated with a volume increase of 0.28 × 10^6 m^3 between August 2005 and September 2007. There were no tiltmeter data for this period. Miyaoka and Takagi (2016) showed that the 2014 eruption was preceded by slight inflation of a deep pressure source at unspecified depth, beginning less than 1 month before the eruption.
They did not discuss a shallower source. However, tiltmeter data show that the summit area began tilting upward 7 min before the eruption and then reversed direction when the eruption began. We interpret this change as having been caused by a shallow pressure source. From these observations, we can infer that magma filled a chamber below sea level and caused subsurface hydrothermal activity just before the 2007 eruption. A shallower source, 1700 m above sea level, also inflated and caused a small phreatic eruption. Inflation of this shallower source had ceased by the time of the following GNSS campaign survey in 2007. Subsurface hydrothermal activity probably remained high after 2007. The GNSS observations indicate that deep magma migration was not associated directly with the 2014 eruption, but that existing magma under the volcanic edifice reactivated the shallower hydrothermal source that was responsible for the 2007 eruption and subsequently led to regional ground deformation. This shallow source caused the phreatic eruption of September 27, 2014. These inferences are consistent with the groundwater pressure observations (Koizumi et al. 2016). Assuming that this source is located at the 2007 shallower source, the volume change of the Mogi source is estimated to be 0.38 × 10^6 m^3 from the tilt change just before the eruption. The 2007 GNSS campaign survey data show that the volume change of the shallower source was 0.28 × 10^6 m^3. Maeno et al. (2016) estimated that the 2014 eruption produced 0.3–0.5 × 10^6 m^3 DRE of eruptive products. The volume change just before the 2014 eruption estimated from the tiltmeter data, 0.38 × 10^6 m^3, is consistent with the estimated amount of eruptive products. Ground deformation data from Mt. Ontake around the 2007 and 2014 eruptions reveal details of the pressure sources beneath the volcano.
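The Mogi (1958) point-source model used in the inversions above has a simple closed form for surface displacement. The sketch below uses the paper's 0.38 × 10^6 m^3 volume change, but the 1 km source depth below the station and the Poisson's ratio of 0.25 are assumed for illustration only; they are not values from this study.

```python
# Surface displacement of a Mogi (1958) point pressure source of volume
# change dV (m^3) at depth d (m) in an elastic half-space with Poisson's
# ratio nu. Minimal sketch; dV is from the paper, d and nu are assumed.
import math

def mogi_uz(dV, d, r, nu=0.25):
    """Vertical displacement (m) at radial distance r (m) from the axis."""
    return (1 - nu) * dV / math.pi * d / (d**2 + r**2) ** 1.5

def mogi_ur(dV, d, r, nu=0.25):
    """Horizontal (radial) displacement (m)."""
    return (1 - nu) * dV / math.pi * r / (d**2 + r**2) ** 1.5

dV, d = 0.38e6, 1000.0   # inferred volume change; assumed 1 km depth
print(f"uplift above source: {mogi_uz(dV, d, 0.0) * 100:.1f} cm")
```

The rapid fall-off of uz with r (as 1/R^3 in the slant distance R) is why such a shallow source produces a strong signal on the nearby summit tiltmeter while remaining nearly invisible to the regional GNSS network.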
GNSS network observations suggest that volume changes before and after the 2007 eruption totaled 6 × 106 m3, of which 5.5 × 106 m3 was in an open-crack fault and 0.32 × 106 m3 was in a shallower sphere below sea level. GNSS campaign survey data suggest a volume change of 0.28 × 106 m3 in a shallow source, 1700 m above sea level, just beneath the volcanic vents. In the 2014 eruption, volume change at depth was very small; however, tiltmeter data suggest that a shallow source inflated 7 min before the eruption with a volume change of 0.38 × 106 m3. AT analyzed the GNSS data, estimated pressure models, and drafted this manuscript. SO carried out a portion of the quantitative analysis and discussed the volcanic activity and estimated models. Both authors read and approved the final manuscript. We thank GSI for furnishing the GEONET GNSS data. We thank the government of Nagano Prefecture for the use of the broadband seismometer data. We are grateful to Makoto Miyashita, Hideki Kojima, Yasushi Ikeda, Tadayoshi Ueno, and Keita Torisu for their help with the GNSS surveys. Most of the figures were prepared using Generic Mapping Tools (Wessel and Smith 1998). Calculations for locating pressure sources were done using MaGCAP-V software, developed by the Meteorological Research Institute (Fukui et al. 2013). We thank two anonymous reviewers and Dr. Koshun Yamaoka, the editor, for their useful comments and suggestions. This work was supported by KAKENHI Grant-in-Aid for Special Purposes, Grant Number 26900002 of the Japan Society for the Promotion of Science. Both authors declare that they have no competing interests. 
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

- Aoyama H, Oshima H (2015) Precursory tilt changes of small phreatic eruptions of Meakan-dake volcano, Hokkaido, Japan, in November 2008. Earth Planets Space 67:119. doi:10.1186/s40623-015-0289-9
- Dach R, Hugentobler U, Fridez P, Meindl M (2007) Bernese GPS software version 5.0. Astronomical Institute, University of Bern, Bern
- Fukui K, Ando S, Fujiwara F, Kitagawa S, Kokubo K, Onizawa S, Sakai T, Shimbori T, Takagi A, Yamamoto T, Yamasato H, Yamazaki A (2013) MaGCAP-V: a Windows-based software to analyze ground deformation and geomagnetic change in volcanic areas. IAVCEI 2013 Abstract, 4W 2C-P8
- Kimura K, Tsuyuki T, Suganuma I, Hasegawa H, Misu H, Fujita K (2015) Rainfall correction of volumetric strainmeter data by tank models. Q J Seismol 78:93–158 (in Japanese, with English abstract)
- Koizumi N, Sato T, Kitagawa Y, Ochi T (2016) Groundwater pressure changes and crustal deformation before and after the 2007 and 2014 eruptions of Mt. Ontake. Earth Planets Space 68:48. doi:10.1186/s40623-016-0420-6
- Maeda Y, Kato A, Terakawa T, Yamanaka Y, Horikawa S, Matsuhiro K, Okuda T (2015) Source mechanism of a VLP event immediately before the 2014 eruption of Mt. Ontake, Japan. Earth Planets Space 67:187. doi:10.1186/s40623-015-0358-0
- Maeno F, Nakada S, Oikawa T, Yoshimoto M, Komori J, Ishizuka Y, Takeshita Y, Shimano T, Kaneko T, Nagai M (2016) Reconstruction of a phreatic eruption on 27 September 2014 at Ontake volcano, Central Japan, based on proximal pyroclastic density current and fallout deposits. Earth Planets Space 68:82. doi:10.1186/s40623-016-0449-6
- Meteorological Research Institute (2008) Studies on evaluation method of volcanic activity. Technical Reports of the Meteorological Research Institute, vol 53, pp 23–34. doi:10.11483/mritechrepo.53 (in Japanese, with English captions)
- Meteorological Research Institute (2016) The eruption cloud echo from Mt. Ontake on September 27, 2014 observed by weather radar network. Report of Coordinating Committee for Prediction of Volcanic Eruption, vol 119, pp 76–81 (in Japanese, with English captions)
- Miyaoka K, Takagi A (2016) Detection of crustal deformation prior to the 2014 Mt. Ontake eruption by the stacking method. Earth Planets Space 68:60. doi:10.1186/s40623-016-0439-8
- Mogi K (1958) Relations between the eruptions of various volcanoes and the deformations of the ground surface around them. Bull Earthq Res Inst Univ Tokyo 36:99–134
- Murase M, Kimata F, Yamanaka Y, Horikawa S, Matsuhiro K, Matsushima T, Mori H, Ohkura T, Yoshikawa S, Miyajima R, Inoue H, Mishima T, Sonoda T, Uchida K, Yamamoto K, Nakamichi H (2016) Preparatory process preceding the 2014 eruption of Mount Ontake volcano, Japan: insights from precise leveling measurements. Earth Planets Space 68:9. doi:10.1186/s40623-016-0386-4
- Nakagawa H, Toyofuku T, Kotani K, Miyahara B, Iwashita C, Kawamoto S, Hatanaka Y, Munekane H, Ishimoto M, Yutsudo T, Ishikura N, Sugawara Y (2009) Development and validation of GEONET new analysis strategy (version 4). J Geospatial Inf Auth Japan 118:1–8 (in Japanese)
- Nakamichi H, Kumagai H, Nakano M, Okubo M, Kimata F, Ito Y, Obara K (2009) Source mechanism of very-long-period event at Mt. Ontake, central Japan: response of a hydrothermal system to magma intrusion beneath the summit. J Volcanol Geotherm Res 187:167–177
- Okada Y (1992) Internal deformation due to shear and tensile faults in a half-space. Bull Seism Soc Am 82:1018–1040
- Takagi A (2013) Ground deformation prior to the 2011 Shinmoedake eruption. Technical Reports of the Meteorological Research Institute, vol 69, pp 146–151. http://www.mri-jma.go.jp/Publish/Technical/DATA/VOL_69/5_2-2.pdf (in Japanese, with English captions)
- Tamura Y, Sato T, Ooe M, Ishiguro M (1991) A procedure for tidal analysis with a Bayesian information criterion. Geophys J Int 104:507–516
- Volcanology Division, JMA (2008) Volcanic activity of Ontakesan from March 2007 to June 2007. Report of Coordinating Committee for Prediction of Volcanic Eruption, vol 97, pp 14–29. http://www.data.jma.go.jp/svd/vois/data/tokyo/STOCK/kaisetsu/CCPVE/Report/097/kaiho_097_06.pdf (in Japanese, with English captions)
- Volcanology Division, JMA (2014) Installation of new volcano monitoring systems for 47 volcanoes in Japan. Q J Seismol 77:241–310. http://www.jma.go.jp/jma/kishou/books/kenshin/vol77p241.pdf (in Japanese, with English abstract)
- Wessel P, Smith WHF (1998) New, improved version of Generic Mapping Tools released. EOS Trans AGU 79:579. doi:10.1029/98EO00426
- Yoshida Y, Funakoshi M, Nishida M, Ohmi K, Takagi A, Ando S (2012) Crustal deformation observed by GPS around Azuma Volcano. Q J Seismol 76:1–8 (in Japanese, with English abstract and captions)
The movement and landfall of Tropical Cyclone Bingiza was captured over the weekend of Feb. 12-13 in a series of infrared satellite imagery from the Atmospheric Infrared Sounder (AIRS) instrument that flies aboard NASA's Aqua satellite. Aqua and Terra provided companion visible images to the infrared images of Bingiza's track across northern Madagascar. This series of infrared satellite imagery from the AIRS instrument on NASA's Aqua satellite shows the progression of Tropical Cyclone Bingiza over the weekend of Feb. 12-13. On February 12 at 21:35 UTC, Bingiza's center was still at sea, and an eye was visible. On Feb. 13 at 0947 UTC, AIRS noticed the western edge of Bingiza over northeastern Madagascar and the storm appears to be expanding. On Feb. 13 at 22:17 UTC, Bingiza's center was on the northeastern coastline and it was making landfall. Credit: NASA/JPL, Ed Olsen

Today, Feb. 14 at 0900 UTC (4 a.m. EST), Cyclone Bingiza had maximum sustained winds of 85 knots (98 mph / 157 kmh) over land. It was located about 250 nautical miles (287 miles/463 km) northeast of Antananarivo, Madagascar, near 16.0 South and 49.3 East. It was moving westward near 8 knots (9 mph/15 kmh). Currently there are warnings posted for Malagasy. Heavy rainfall is expected to be the main hazard for northern Madagascar. This morning's (Feb. 14) infrared AIRS satellite image from 10:23 UTC (5:23 a.m. EST) shows northern Madagascar covered by the storm. It also showed that Bingiza remained well-organized with tightly-curved convective thunderstorm banding wrapping into a well-defined low-level circulation center. It continues to draw energy from the warm waters of the Southern Indian Ocean. Although the storm was still at hurricane strength at that time, no eye was visible in the infrared image. The strongest thunderstorms and coldest (-63F/-52C), highest cloud tops were over north central Madagascar and over the Mozambique Channel.
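The wind speeds above are quoted in three units. The conversions are fixed factors (1 knot = 1.852 km/h exactly, and about 1.15078 mph), which reproduces the article's figures:

```java
// Unit conversions for the wind speeds quoted in the article.
// 1 knot = 1.852 km/h by definition; 1 knot ~ 1.15078 mph.
public class WindSpeed {
    static double knotsToKmh(double kt) { return kt * 1.852; }
    static double knotsToMph(double kt) { return kt * 1.15078; }

    public static void main(String[] args) {
        // Check the article's figures: 85 kt -> 98 mph / 157 km/h
        System.out.printf("85 kt = %.0f mph = %.0f km/h%n",
                knotsToMph(85), knotsToKmh(85));
        // and the forward speed: 8 kt -> 9 mph / 15 km/h
        System.out.printf("8 kt  = %.0f mph = %.0f km/h%n",
                knotsToMph(8), knotsToKmh(8));
    }
}
```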
The imagery also showed that the western edge of Bingiza was already over the Mozambique Channel. AIRS images are created at NASA's Jet Propulsion Laboratory in Pasadena, Calif. The forecasters at the Joint Typhoon Warning Center expect Bingiza to continue tracking west-southwestward over land over the next 36 hours while rapidly weakening. The storm is expected to track over northern Madagascar, and by Feb. 16 it will move into the Mozambique Channel, where it is expected to regenerate in the warm waters (30 degrees Celsius) and low wind shear. Once in the Channel, forecasters expect that it will be steered southwestward to southward. Forecasts currently differ on the end of Bingiza's life: some models predict a second landfall in southern Madagascar, while others keep the storm at sea. Rob Gutro | EurekAlert!
Methane is a substantial driver of global climate change, contributing 30 percent of current net climate warming. Concern over methane is mounting, due to leaks associated with rapidly expanding unconventional oil and gas extraction, and the potential for large-scale release of methane from the Arctic as ice cover continues to melt and decayed material releases methane to the atmosphere. At the same time, methane is a growing source of energy, and aggressive methane mitigation is key to avoiding dangerous levels of global warming. Methane capture in zeolite SBN. Blue represents adsorption sites, which are optimal for methane (CH4) uptake. Each site is connected to three other sites (yellow arrow) at optimal interaction distance. The research team, made up of Amitesh Maiti, Roger Aines and Josh Stolaroff of LLNL and Professor Berend Smit, researchers Jihan Kim and Li-Chiang Lin at UC Berkeley and Lawrence Berkeley National Lab, performed systematic computer simulation studies on the effectiveness of methane capture using two different materials - liquid solvents and nanoporous zeolites (porous materials commonly used as commercial adsorbents). While the liquid solvents were not effective for methane capture, a handful of zeolites had sufficient methane sorption to be technologically promising. The research appears in the April 16 edition of the journal, Nature Communications. Unlike carbon dioxide, the largest emitted greenhouse gas, which can be captured both physically and chemically in a variety of solvents and porous solids, methane is completely non-polar and interacts very weakly with most materials. "Methane capture poses a challenge that can only be addressed through extensive material screening and ingenious molecular-level designs," Maiti said. Methane is far more potent as a greenhouse gas than CO2. 
Researchers have found that the release of as little as 1 percent of methane from the Arctic alone could have a warming effect approaching that being produced by all of the CO2 that has been pumped into the atmosphere by human activity since the start of the Industrial Revolution. Methane is emitted at a wide range of concentrations from a variety of sources, including natural gas systems, livestock, landfills, coal mining, manure management, wastewater treatment, rice cultivation and a few combustion processes. The team's research focused on two different applications -- concentrating a medium-purity methane stream to a high-purity range (greater than 90 percent), as involved in purifying a low-quality natural gas; and concentrating a dilute stream (about 1 percent or lower) to the medium-purity range (greater than 5 percent), above methane's flammability limit in air. Through an extensive study, the team found that none of the common solvents (including ionic liquids) appears to possess enough affinity toward methane to be of practical use. However, a systematic screening of around 100,000 zeolite structures uncovered a few nanoporous candidates that appear technologically promising. Zeolites are unique structures that can be used for many different types of gas separations and storage applications because of their diverse topology from various networks of the framework atoms. In the team's simulations, one specific zeolite, dubbed SBN, captured enough medium source methane to turn it to high purity methane, which in turn could be used to generate efficient electricity. "We used free-energy profiling and geometric analysis in these candidate zeolites to understand how the distribution and connectivity of pore structures and binding sites can lead to enhanced sorption of methane while being competitive with CO2 sorption at the same time," Maiti said. 
Other zeolites, named ZON and FER, were able to concentrate dilute methane streams into moderate concentrations that could be used to treat coal-mine ventilation air. The work at LLNL was funded by the Advanced Research Projects Agency-Energy (ARPA-E). More information: New materials for methane capture from dilute and medium-concentration sources. Anne Stark | EurekAlert!
This tutorial shows how to deal with empty or blank cells in an Excel file using Apache POI. A cell iterator silently skips blank cells, so instead we get the maximum cell index for each row and iterate by index, which lets us handle every empty cell explicitly. The Apache POI project creates and maintains Java APIs for manipulating various file formats based upon the Office Open XML standards (OOXML) and Microsoft’s OLE 2 Compound Document format (OLE2). You can read and write MS Excel files using Java, and you can also read and write MS Word and MS PowerPoint files. Apache POI is the Java Excel solution (for Excel 97-2008). For more information please go through https://poi.apache.org/ It helps if you already know how to create a Maven project in Eclipse; if not, I will show you here how to create one. Continue reading “Deal with empty or blank cell in excel file using apache poi”
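A minimal sketch of the index-based loop described above, assuming Apache POI 4.x on the classpath; the file name data.xlsx is hypothetical. The point is to avoid cellIterator(), which skips blank cells, and instead request each column index with a MissingCellPolicy so blanks are returned rather than skipped:

```java
// Index-based row scan with Apache POI (assumed to be on the classpath).
// cellIterator() skips blank cells entirely; looping 0..getLastCellNum()-1
// with CREATE_NULL_AS_BLANK visits every column instead.
import java.io.FileInputStream;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.CellType;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ReadWithBlanks {
    public static void main(String[] args) throws Exception {
        DataFormatter fmt = new DataFormatter();
        // WorkbookFactory picks the right implementation (.xls or .xlsx)
        try (Workbook wb = WorkbookFactory.create(new FileInputStream("data.xlsx"))) {
            Sheet sheet = wb.getSheetAt(0);
            for (Row row : sheet) {
                // getLastCellNum() is the last cell index PLUS ONE (-1 for an empty row)
                int last = Math.max(row.getLastCellNum(), 0);
                for (int i = 0; i < last; i++) {
                    // Never null: missing cells come back as BLANK cells
                    Cell cell = row.getCell(i, Row.MissingCellPolicy.CREATE_NULL_AS_BLANK);
                    String text = cell.getCellType() == CellType.BLANK
                            ? "" : fmt.formatCellValue(cell);
                    System.out.print(text + "\t");
                }
                System.out.println();
            }
        }
    }
}
```

Because this example needs the Apache POI jars and an actual workbook on disk, it is a compile-against-POI sketch rather than a self-contained runnable demo.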
|MadSci Network: Earth Sciences|

I think you are talking about "hot spots": "small" conduits, a few hundred kilometers in diameter, that connect the core-mantle boundary with the surface. This idea was initially proposed by J.T. Wilson, of Toronto University, and the concept was developed later by W.J. Morgan, of Princeton, in 1970. Wilson's idea was the only mechanism able to explain the volcanic island chains in regions where you don't have a lithospheric plate boundary (e.g., the Hawaiian Islands). The hot spots are the surface expression of these volcanic conduits, called "plumes". There are about 20 hot spots in the world, most of them in Africa, and one in the region of Yellowstone National Park. In these regions, one expects a high heat flow (with volcanism, geysers, etc.), and lava eruption if it is on a continent; if the hot spot is located in an oceanic area, the motion of the lithospheric plate can create chains of islands, just like Hawaii. In these island chains one can observe that at one end the volcanism is extinct and the island is older than at the other extreme, where the volcanism is active and the island is young. An interesting reference about this theme can be found in chapter 16 ("Causes of Plate Tectonics") of the book "The Way the Earth Works: an introduction to the new global geology and its revolutionary development", by Peter J. Wyllie, published by John Wiley & Sons, Inc., 1976.

Best regards

Eder C. Molina
email@example.com
Dept. of Geophysics
Institute of Astronomy and Geophysics
University of Sao Paulo - BRAZIL

Try the links in the MadSci Library for more information on Earth Sciences.
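The age progression along a chain like Hawaii can be sanity-checked with simple arithmetic: an island's distance from the active vent divided by the plate speed predicts its age. The ~9 cm/yr Pacific plate speed and the island distances below are rounded textbook figures, not values from the answer above.

```java
// Back-of-envelope check of the hot-spot model: a stationary plume under a
// moving plate means distance from the active vent ~ plate speed x age.
// Plate speed and distances are rounded, illustrative textbook values.
public class HotspotAges {
    static final double PLATE_SPEED_KM_PER_MYR = 90.0; // ~9 cm/yr Pacific plate

    // Predicted age (million years) of an island at distanceKm from the hot spot.
    static double ageMyr(double distanceKm) {
        return distanceKm / PLATE_SPEED_KM_PER_MYR;
    }

    public static void main(String[] args) {
        // Kauai lies roughly 500 km from the active vents on the Big Island;
        // its oldest lavas are about 5 Myr old, consistent with the model.
        System.out.printf("Kauai  (~500 km):  ~%.1f Myr%n", ageMyr(500));
        // Midway, ~2400 km along the chain, dates to roughly 28 Myr.
        System.out.printf("Midway (~2400 km): ~%.0f Myr%n", ageMyr(2400));
    }
}
```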
In a paper published by the international journal Conservation Biology, Dr Simon Black and Dr Jim Groombridge have proposed that an understanding and application of their adapted framework, now known as the Conservation Excellence Model, will enable: greater clarity in goal setting; more-effective identification of job roles within programs; better links between technical approaches and measures of biological success; and more-effective use of resources. The model could also improve how a conservation program’s effectiveness is evaluated and may be used to compare different programs – for example, during reviews of project performance by sponsoring organisations. Dr Black, a conservation biologist with experience in organisational and management development, said: ‘The conservation sector is traditionally overstretched and remains relatively underdeveloped in terms of management thinking. At the same time, there is an increasing expectation that conservation programs should pay their way. Consequently, conservation practitioners and charities need to demonstrate that genuine achievement in conserving species or habitats is occurring and that their efforts are seen as value for money. We feel that it is important to lead and manage a program in a way that enables people working on the ground to make a difference to the species or habitats with which they are involved. ‘The model we are offering will enable managers to think about which results should be analysed and on which work activities they should focus their efforts in order to achieve the best long-term outcomes.’ Dr Black is also Director of the DICE EARTH Centre, an innovative training and consultancy hub that provides support to the global conservation community. This includes: conservation, management and leadership training; organisation design and program assessment; and program improvement methodologies. 
Gary Hughes | alfa
NASA's robotic spacecraft Cassini, which has been orbiting Saturn for 13 years, is set for a final dive toward the planet, burning up in its atmosphere in a "grand finale" after it flies past Titan, Saturn's largest moon, on September 15. Cassini's imaging cameras will take their last look around the Saturn system, sending back pictures of the moons Titan and Enceladus, the hexagon-shaped jet stream around the planet's north pole and features in the rings. With its antenna pointed at Earth, the spacecraft will send back its final images and other data collected along the way. Soon after, Cassini will burn up and disintegrate like a meteor, NASA said in a statement. Cassini, a collaboration between NASA, ESA and the Italian space agency, Agenzia Spaziale Italiana, was launched on October 15, 1997. The spacecraft entered orbit around Saturn on June 30, 2004, carrying the European Huygens probe. After its four-year prime mission, Cassini's tour was extended twice. Its key discoveries have included the global ocean, with indications of hydrothermal activity, within Enceladus (a potential habitat for life and a prime target for scientists) and the liquid methane seas on Titan. As Cassini plunges past Saturn in its final run, the spacecraft will collect some incredibly rich and valuable information that was too risky to obtain earlier in the mission. The spacecraft will make detailed maps of Saturn's gravity and magnetic fields, revealing how the planet is arranged internally, and possibly helping to solve the irksome mystery of just how fast Saturn is rotating. The final dives will vastly improve our knowledge of how much material is in the rings, bringing us closer to understanding their origins. Cassini's particle detectors will sample icy ring particles being funneled into the atmosphere by Saturn's magnetic field. Its cameras will take amazing, ultra-close images of Saturn's rings and clouds.
In 2017, Cassini completed 13 years in orbit around Saturn, following a seven-year journey from Earth. The spacecraft is running low on the rocket fuel used for adjusting its course. If left unchecked, this situation would eventually prevent mission operators from controlling the course of the spacecraft, NASA said. NASA chose to safely dispose of the spacecraft in the atmosphere of Saturn, in order to avoid the unlikely possibility of Cassini someday colliding with one of Saturn’s moons, the statement said.
According to New Scientist magazine, which features Dr Tyrrell's research this week, this work demonstrates the most far-reaching disruption of long-term planetary processes yet suggested for human activity. Dr Tyrrell's team used a mathematical model to study what would happen to marine chemistry in a world with ever-increasing supplies of the greenhouse gas, carbon dioxide. The world's oceans are absorbing CO2 from the atmosphere but in doing so they are becoming more acidic. This in turn is dissolving the calcium carbonate in the shells produced by surface-dwelling marine organisms, adding even more carbon to the oceans. The outcome is elevated carbon dioxide for far longer than previously assumed. Computer modelling in 2004 by a then oceanography undergraduate student at the University, Stephanie Castle, first interested Dr Tyrrell and colleague Professor John Shepherd in the problem. They subsequently developed a theoretical analysis to validate the plausibility of the phenomenon. The work, which is part-funded by the Natural Environment Research Council, confirms earlier ideas of David Archer of the University of Chicago, who first estimated the impact rising CO2 levels would have on the timing of the next ice age. Dr Tyrrell said: 'Our research shows why atmospheric CO2 will not return to pre-industrial levels after we stop burning fossil fuels. It shows that if we use up all known fossil fuels it doesn't matter at what rate we burn them. The result would be the same if we burned them at present rates or at more moderate rates; we would still get the same eventual ice-age-prevention result.' Ice ages occur around every 100,000 years as the pattern of Earth's orbit alters over time. Changes in the way the sun strikes the Earth allow for the growth of ice caps, plunging the Earth into an ice age. But it is not only variations in received sunlight that determine the descent into an ice age; levels of atmospheric CO2 are also important.
Humanity has to date burnt about 300 Gt C of fossil fuels. This work suggests that even if only 1000 Gt C (gigatonnes of carbon) are eventually burnt (out of total reserves of about 4000 Gt C) then it is likely that the next ice age will be skipped. Burning all recoverable fossil fuels could lead to avoidance of the next five ice ages. Dr Tyrrell is a Reader in the University of Southampton's School of Ocean and Earth Science. This research was first published in Tellus B, vol 59 p664. Sarah Watts | alfa
Why does carbon dioxide cause global warming?

Why do carbon dioxide emissions heat up the planet? The temperature of the Earth depends on a balance between incoming energy from the Sun and the energy that bounces back into space. Carbon dioxide absorbs heat that would otherwise be lost to space. Some of this energy is re-emitted back to Earth, causing additional heating of the planet.

What are the major sources of carbon dioxide? Most man-made carbon emissions come from burning fossil fuels for energy. In the UK, the biggest emitters are the transport and domestic sectors, of which aviation is the fastest growing. Because of their varying chemical constituents, different fossil fuels produce different amounts of carbon dioxide: coal produces the most, then oil, then gas.

Which country produces the most carbon? The US emits the most: 5,800 million tonnes every year. Next is China, over 3,000; Russia, over 2,000; Japan, 1,200; and India, 1,000 million tonnes. Other major emitters are Germany, 800m; Canada, 600m; the UK, 500m; and Italy, 47m.

Since 1958, a continuous measurement of the carbon dioxide content of the atmosphere has been made at Mauna Loa Observatory in Hawaii. These observations were initiated by Charles Keeling, who died in 2005, and have been maintained by his son Ralph ever since. Sunshine is a manifestation of solar radiation, and when it is absorbed by the surface of the Earth, the surface heats up and emits a different kind of radiation, known as infrared radiation. Carbon dioxide is a special chemical in that it is transparent to solar radiation and yet absorbs infrared radiation. Thus, the presence of carbon dioxide in our atmosphere allows sunshine to penetrate to the surface but inhibits the emission of infrared radiation to space. The consequence of the absorption of infrared radiation by carbon dioxide in the atmosphere is that Earth is much warmer than its distance from the Sun alone would suggest.
In fact, Earth's average surface temperature is 59 degrees Fahrenheit, when it would be about 0 degrees Fahrenheit if carbon dioxide and other such greenhouse gases (like water vapor and methane) did not exist in our atmosphere. When Keeling began his measurements in 1958, the atmosphere contained 315 carbon dioxide molecules for every million molecules of gaseous atmosphere. April 2014 was the first month in those 56 years in which the monthly average carbon dioxide fraction topped 400 molecules per million (it was 401.33). The continual increase in this carbon dioxide fraction is considered to be the main contributor to the global temperature increase known as global warming. Such values are a first in human history and likely represent the highest carbon dioxide fraction in our atmosphere in at least the last 800,000 years. It is high time that we had a sober, data-driven discussion about the hazards presented by this dangerous trend. Analytical, skeptical science has to be central to this discussion.
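The "~59 F with greenhouse gases, ~0 F without" claim can be checked with a back-of-the-envelope radiative balance: without infrared-absorbing gases, the sunlight Earth absorbs must equal the infrared it emits as a black body. The solar irradiance and albedo values below are standard textbook figures, not numbers from the article.

```python
# Back-of-the-envelope check of Earth's temperature without greenhouse
# absorption. Assumed standard values (not from the article):
S = 1361.0        # solar constant, W/m^2
albedo = 0.3      # fraction of sunlight reflected straight back to space
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

# Radiative balance: absorbed solar = emitted infrared
#   S * (1 - albedo) / 4 = sigma * T^4
T_no_greenhouse_K = (S * (1 - albedo) / (4 * sigma)) ** 0.25
T_no_greenhouse_F = (T_no_greenhouse_K - 273.15) * 9 / 5 + 32

print(round(T_no_greenhouse_K, 1))  # roughly 255 K
print(round(T_no_greenhouse_F, 1))  # within a couple of degrees of 0 F
```

The result lands near 255 K, about minus 1 F, consistent with the article's round figure of 0 F and roughly 33 C colder than the observed 59 F average.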
About this item
Limnology is the study of inland waters: ponds, rivers, lakes, wetlands, streams, etc. Freshwater biology is a sub-division of limnology: the study of freshwater ecosystems, especially their scientific and biological aspects. It examines in detail the relationship of aquatic plants and animals with their ecosystem, along with species distribution. This book is a compilation of chapters that discuss the most vital aspects of the field of freshwater biology; selected concepts that redefine the field are presented in it. For all those who are interested in freshwater biology, this textbook can prove to be an essential guide.
Number of Pages: 222
Publisher: Ingram Pub Services
Street Date: May 23, 2018
Item Number (DPCI): 248-72-8149
An Explosion from Across the Universe — with guest Dr. Rodolfo Barniol Duran A podcast is out! Gamma ray bursts are some of the most powerful explosions we know of, ones that we can literally see from across the Universe. But they are extremely short — lasting only a few milliseconds to a few minutes. What could cause such an explosion? Today, Dr. Rodolfo Barniol Duran, a theoretical astrophysicist at Purdue, talks to us about what we know about these mysterious explosions. You can listen to the podcast here. Science and technology are everywhere in our lives. This podcast takes a look not only at the science itself, but its role in society, how it affects our lives, and how it influences how we define ourselves as humans. Episodes also throw in a mix of culture, history, ethics, philosophy, religion, and the future! Hosted by Elizabeth Fernandez, an astronomer and science communicator. Let’s spark some dialog! Thanks for listening!
29 January 2008 Researchers at Rensselaer Polytechnic Institute and Rice University have created the darkest material ever made by man: a thin coating comprised of low-density arrays of loosely vertically-aligned carbon nanotubes. The material absorbs more than 99.9 percent of light and could one day be used to boost the effectiveness and efficiency of solar energy conversion, infrared sensors, and other devices. The researchers who developed the material have applied for a Guinness World Record for their efforts. “It is a fascinating technology, and this discovery will allow us to increase the absorption efficiency of light as well as the overall radiation-to-electricity efficiency of solar energy conservation,” said Shawn-Yu Lin, professor of physics at Rensselaer and a member of the university’s Future Chips Constellation, who led the research project. “The key to this discovery was finding how to create a long, extremely porous vertically-aligned carbon nanotube array with certain surface randomness, therefore minimizing reflection and maximizing absorption simultaneously.” The research results were published in the journal Nano Letters. All materials, from paper to water, air, or plastic, reflect some amount of light. Scientists have long envisioned an ideal black material that absorbs all the colors of light while reflecting no light. So far they have been unsuccessful in engineering a material with a total reflectance of zero. The total reflectance of conventional black paint, for example, is between 5 and 10 percent. The darkest manmade material, prior to the discovery by Lin’s group, boasted a total reflectance of 0.16 percent to 0.18 percent. Lin’s team created a coating of low-density, vertically aligned carbon nanotube arrays that are engineered to have an extremely low index of refraction and the appropriate surface randomness, further reducing reflectivity.
The end result was a material with a total reflectance of 0.045 percent — more than three times darker than the previous record, which used a film deposition of nickel-phosphorus alloy. “The loosely-packed forest of carbon nanotubes, which is full of nanoscale gaps and holes to collect and trap light, is what gives this material its unique properties,” Lin said. “Such a nanotube array not only reflects light weakly, but also absorbs light strongly. These combined features make it an ideal candidate for one day realizing a super black object.” “The low-density aligned nanotube sample makes an ideal candidate for creating such a super dark material because it allows one to engineer the optical properties by controlling the dimensions and periodicities of the nanotubes,” said Pulickel Ajayan, the Anderson Professor of Engineering at Rice University in Houston, who worked on the project when he was a member of the Rensselaer faculty. The research team tested the array over a broad range of visible wavelengths of light, and showed that the nanotube array’s total reflectance remains constant. “It’s also interesting to note that the reflectance of our nanotube array is two orders of magnitude lower than that of the glassy carbon, which is remarkable because both samples are made up of the same element — carbon,” said Lin. This discovery could lead to applications in areas such as solar energy conversion, thermophotovoltaic electricity generation, infrared detection, and astronomical observation. Other researchers contributing to this project and listed authors of the paper include Rensselaer physics graduate student Zu-Po Yang; Rice postdoctoral research associate Lijie Ci; and Rensselaer senior research scientist James Bur. The project was funded by the U.S. Department of Energy’s Office of Basic Energy Sciences and the Focus Center New York for Interconnects.
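The reflectance figures quoted above can be sanity-checked against the comparative claims in the text. This sketch uses only numbers the article itself states (percent total reflectance); the glassy-carbon value is not given, so that comparison is omitted.

```python
# Sanity check on the reflectance figures quoted in the article.
# All values are percent total reflectance.
nanotube_array = 0.045    # new record set by Lin's group
previous_record = 0.16    # nickel-phosphorus film, lower bound quoted
black_paint_low = 5.0     # conventional black paint, lower bound quoted

# "more than three times darker than the previous record"
print(previous_record / nanotube_array)   # ratio > 3

# ordinary black paint reflects over 100x more light than the array
print(black_paint_low / nanotube_array)
```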
Lin’s research was conducted as part of the Future Chips Constellation at Rensselaer, which focuses on innovations in materials and devices, in solid state and smart lighting, and applications such as sensing, communications, and biotechnology. A new concept in academia, Rensselaer constellations are led by outstanding faculty in fields of strategic importance. Each constellation is focused on a specific research area and comprises a multidisciplinary mix of senior and junior faculty, as well as postdoctoral researchers and graduate students. The new darkest manmade material, with its 0.045% reflectance (right), is noticeably darker than the 1.4% NIST reflectance standard (left). This photo was taken under flash illumination.