Python's Selenium package makes it easy to control a browser, and it can do a lot: I recently used it to log in to a website automatically and reply to forum posts at a set interval.

pip install selenium

You have to download a webdriver and put it somewhere on your computer. For Chrome, it's "chromedriver.exe". For Firefox, you need to download "geckodriver.exe", which plays the same role as "chromedriver.exe"; otherwise you will encounter the error below:

selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.

You can refer to this link to download "geckodriver.exe". The Python script itself is really simple.

Import the Selenium webdriver:

from selenium import webdriver

Connect to the Chrome browser:

# your path to chromedriver.exe
chrome_path = r"C:\Users\xionghuilin\Desktop\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)

For the Firefox case:

driver = webdriver.Firefox()

Go to a URL address:

import time

def goturl(driver, url):
    try:
        driver.get(url)
    except Exception:
        return False
    return True

while True:
    if goturl(driver, "http://your url intended to go"):
        break
    # waiting for the browser to respond
    time.sleep(1)

Input the username/password and log in. To get an element's name, ID or class name, right-click on the page and choose "Inspect Element" (in Chrome or Firefox).

mm = "用户名"
# in Python 2, a UTF-8 byte string must be decoded to unicode
mm = mm.decode("utf-8")
user = driver.find_element_by_name("element name of the username")
user.clear()
user.send_keys(mm)
password = driver.find_element_by_id("element ID of password")
password.send_keys("password")
login = driver.find_element_by_class_name("the element on the browser")
login.click()
# wait for the browser to respond
time.sleep(1)
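The retry loop above can be generalized into a reusable helper. The sketch below takes the driver as a parameter so the logic can be exercised without a real browser; the FlakyDriver class is a hypothetical stand-in for a Selenium driver, used here only to show how the helper behaves.

```python
import time

def goto_url(driver, url, retries=5, delay=1.0):
    """Try driver.get(url) up to `retries` times, sleeping between attempts.
    Returns True on success, False if every attempt raised."""
    for _ in range(retries):
        try:
            driver.get(url)
            return True
        except Exception:
            time.sleep(delay)
    return False

class FlakyDriver:
    """Hypothetical stand-in for a Selenium driver: fails twice, then succeeds."""
    def __init__(self):
        self.calls = 0
    def get(self, url):
        self.calls += 1
        if self.calls < 3:
            raise RuntimeError("connection refused")
```

With a real Selenium driver you would simply pass `driver` in place of the stub: `goto_url(driver, "http://your url intended to go")`.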
Differentiate between the exoscopic and endoscopic patterns of development in an embryo. (© BrainMass Inc., brainmass.com, July 15, 2018) There are two basic patterns of embryo development: exoscopic and endoscopic. "Exo" implies "out of" and "endo" implies "into"; "scopic" refers to the archegonium. The shoot apical meristem of exoscopic embryos grows ... Embryo development can be exoscopic or endoscopic, and each type of embryo development is restricted to specific plant groups. Exoscopic embryo development typically occurs in Bryophyta, Psilophyta and Sphenophyta, whereas endoscopic embryo development is restricted to Cycadophyta, Coniferophyta and Anthophyta.
After investigating the upper atmosphere of the Red Planet for a full Martian year, NASA's MAVEN mission has determined that the escaping water does not always go gently into space. Sophisticated measurements made by a suite of instruments on the Mars Atmosphere and Volatile Evolution, or MAVEN, spacecraft revealed the ups and downs of hydrogen escape - and therefore water loss. The escape rate peaked when Mars was at its closest point to the sun and dropped off when the planet was farthest from the sun. The rate of loss varied dramatically overall, with 10 times more hydrogen escaping at the maximum. This image shows atomic hydrogen scattering sunlight in the upper atmosphere of Mars, as seen by the Imaging Ultraviolet Spectrograph on NASA's Mars Atmosphere and Volatile Evolution mission. About 400,000 observations, taken over the course of four days shortly after the spacecraft entered orbit around Mars, were used to create the image. Hydrogen is produced by the breakdown of water, which was once abundant on Mars' surface. Because hydrogen has low atomic mass and is weakly bound by gravity, it extends far from the planet (the darkened circle) and can readily escape. Credit: NASA/University of Colorado "MAVEN is giving us unprecedented detail about hydrogen escape from the upper atmosphere of Mars, and this is crucial for helping us figure out the total amount of water lost over billions of years," said Ali Rahmati, a MAVEN team member at the University of California at Berkeley who analyzed data from two of the spacecraft's instruments. Hydrogen in Mars' upper atmosphere comes from water vapor in the lower atmosphere. An atmospheric water molecule can be broken apart by sunlight, releasing the two hydrogen atoms from the oxygen atom that they had been bound to. Several processes at work in Mars' upper atmosphere may then act on the hydrogen, leading to its escape. This loss had long been assumed to be more-or-less constant, like a slow leak in a tire. 
But previous observations made using NASA's Hubble Space Telescope and ESA's Mars Express orbiter found unexpected fluctuations. Only a handful of these measurements have been made so far, and most were essentially snapshots, taken months or years apart. MAVEN has been tracking the hydrogen escape without interruption over the course of a Martian year, which lasts nearly two Earth years. "Now that we know such large changes occur, we think of hydrogen escape from Mars less as a slow and steady leak and more as an episodic flow - rising and falling with season and perhaps punctuated by strong bursts," said Michael Chaffin, a scientist at the University of Colorado at Boulder who is on the Imaging Ultraviolet Spectrograph (IUVS) team. Chaffin is presenting some IUVS results on Oct. 19 at the joint meeting of the Division for Planetary Sciences and the European Planetary Science Congress in Pasadena, California. In the most detailed observations of hydrogen loss to date, four of MAVEN's instruments detected the factor-of-10 change in the rate of escape. Changes in the density of hydrogen in the upper atmosphere were inferred from the flux of hydrogen ions - electrically charged hydrogen atoms - measured by the Solar Wind Ion Analyzer and by the Suprathermal and Thermal Ion Composition instrument. IUVS observed a drop in the amount of sunlight scattered by hydrogen in the upper atmosphere. MAVEN's magnetometer found a decrease in the occurrence of electromagnetic waves excited by hydrogen ions, indicating a decrease in the amount of hydrogen present. By investigating hydrogen escape in multiple ways, the MAVEN team will be able to work out which factors drive the escape. Scientists already know that Mars' elliptical orbit causes the intensity of the sunlight reaching Mars to vary by 40 percent during a Martian year. 
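The roughly 40 percent swing in sunlight can be sanity-checked from the inverse-square law. The perihelion and aphelion distances below (1.381 AU and 1.666 AU) are assumed textbook values, not figures from the article; with them the peak-to-trough flux ratio comes out a bit above the article's quoted variation, which is the right order of magnitude.

```python
# Solar flux falls off as 1/r^2, so Mars' elliptical orbit modulates
# the sunlight that heats the upper atmosphere and drives escape.
PERIHELION_AU = 1.381  # assumed value, not from the article
APHELION_AU = 1.666    # assumed value, not from the article

def flux_ratio(r_near, r_far):
    """Ratio of solar flux at the closer distance to that at the farther one."""
    return (r_far / r_near) ** 2

ratio = flux_ratio(PERIHELION_AU, APHELION_AU)
# ratio comes out near 1.46, i.e. ~40-50% more sunlight at perihelion
```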
There also is a seasonal effect that controls how much water vapor is present in the lower atmosphere, as well as variations in how much water makes it into the upper atmosphere. The 11-year cycle of the sun's activity is another likely factor. "In addition, when Mars is closest to the sun, the atmosphere becomes turbulent, resulting in global dust storms and other activity. This could allow the water in the lower atmosphere to rise to very high altitudes, providing an intermittent source of hydrogen that can then escape," said John Clarke, a Boston University scientist on the IUVS team. Clarke will present IUVS measurements of hydrogen and deuterium - a form of hydrogen that contains a neutron and is heavier - on Oct. 19 at the planetary conference. By making observations for a second Mars year and during different parts of the solar cycle, the scientists will be better able to distinguish among these effects. MAVEN is continuing these observations in its extended mission, which has been approved until at least September 2018. "MAVEN's findings reveal what is happening in Mars' atmosphere now, but over time this type of loss contributed to the global change from a wetter environment to the dry planet we see today," said Rahmati. MAVEN's principal investigator is based at the University of Colorado's Laboratory for Atmospheric and Space Physics, Boulder. The university provided two science instruments and leads science operations, as well as education and public outreach, for the mission. NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the MAVEN project and provided two science instruments for the mission. Lockheed Martin built the spacecraft and is responsible for mission operations. The University of California at Berkeley's Space Sciences Laboratory also provided four science instruments for the mission. 
NASA's Jet Propulsion Laboratory in Pasadena, California, provides navigation and Deep Space Network support, as well as the Electra telecommunications relay hardware and operations.

Rob Gutro | EurekAlert!
3D model: (JSmol)
UN number: 1385 (anhydrous)
Molar mass: 78.0452 g/mol (anhydrous); 240.18 g/mol (nonahydrate)
Appearance: colorless, hygroscopic solid
Density: 1.856 g/cm3 (anhydrous); 1.58 g/cm3 (pentahydrate); 1.43 g/cm3 (nonahydrate)
Melting point: 1,176 °C (2,149 °F; 1,449 K) (anhydrous); 100 °C (pentahydrate); 50 °C (nonahydrate)
Solubility in water: 12.4 g/100 mL (0 °C); 18.6 g/100 mL (20 °C); 39 g/100 mL (50 °C)
Solubility: insoluble in ether; slightly soluble in alcohol
Crystal structure: antifluorite (cubic), cF12; space group Fm3m, No. 225
Coordination geometry: tetrahedral (Na+); cubic (S2−)
Safety data sheet: ICSC 1047; dangerous for the environment (N)
R-phrases (outdated): R31, R34, R50
S-phrases (outdated): (S1/2), S26, S45, S61
Autoignition temperature: > 480 °C (896 °F; 753 K)
Except where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa).

Sodium sulfide is the chemical compound with the formula Na2S, or more commonly its hydrate Na2S·9H2O. Both are colorless, water-soluble salts that give strongly alkaline solutions. When exposed to moist air, Na2S and its hydrates emit hydrogen sulfide, which smells like rotten eggs. Some commercial samples are specified as Na2S·xH2O, where a weight percentage of Na2S is specified. Commonly available grades have around 60% Na2S by weight, which means that x is around 3. Such technical grades of sodium sulfide have a yellow appearance owing to the presence of polysulfides. These grades of sodium sulfide are marketed as 'sodium sulfide flakes'. Although the solid is yellow, solutions of it are colorless.

Sodium sulfide is produced industrially by carbothermic reduction of sodium sulfate:
- Na2SO4 + 2 C → Na2S + 2 CO2
It can also be prepared directly from the elements:
- 2 Na + S → Na2S

Reactions with inorganic reagents
The sulfide ion in sulfide salts such as sodium sulfide can incorporate a proton by protonation:
- S2− + H+ → SH−
Because of this capture of a proton (H+), sodium sulfide has basic character. Sodium sulfide is strongly basic, able to absorb two protons. Its conjugate acid is sodium hydrosulfide (SH−).
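The claim that a grade with about 60% Na2S by weight corresponds to x around 3 can be checked from the molar masses quoted above. A minimal sketch (the helper name is ours, not from any source):

```python
M_NA2S = 78.045  # g/mol, molar mass of anhydrous Na2S (from the data above)
M_H2O = 18.015   # g/mol, molar mass of water

def waters_of_hydration(wt_frac_na2s):
    """Solve M_Na2S / (M_Na2S + x * M_H2O) = wt_frac for x."""
    return M_NA2S * (1.0 / wt_frac_na2s - 1.0) / M_H2O

x = waters_of_hydration(0.60)
# x works out to roughly 2.9, matching the text's "x is around 3"
```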
An aqueous solution contains a significant portion of sulfide ions that are singly protonated. Sodium sulfide is unstable in the presence of water due to the gradual loss of hydrogen sulfide into the atmosphere. In air, sodium sulfide oxidizes, in the presence of carbon dioxide, to sodium carbonate and sulfur dioxide:
- 2 Na2S + 3 O2 + 2 CO2 → 2 Na2CO3 + 2 SO2
Oxidation with hydrogen peroxide gives sodium sulfate:
- Na2S + 4 H2O2 → 4 H2O + Na2SO4
Upon treatment with sulfur, polysulfides are formed:
- 2 Na2S + S8 → 2 Na2S5

It is used in water treatment as an oxygen scavenger and as a metals precipitant; in chemical photography for toning black-and-white photographs; in the textile industry as a bleaching agent, for desulfurising and as a dechlorinating agent; and in the leather trade for the sulfitisation of tanning extracts. It is used in chemical manufacturing as a sulfonation and sulfomethylation agent, and in the production of rubber chemicals, sulfur dyes and other chemical compounds. Other applications include ore flotation, oil recovery, dye making, and detergents. It is also used during leather processing as an unhairing agent in the liming operation.

Reagent in organic chemistry
Alkylation of sodium sulfide gives thioethers:
- Na2S + 2 RX → R2S + 2 NaX
Even aryl halides participate in this reaction. Sodium sulfide can be used as a nucleophile in Sandmeyer-type reactions. Sodium sulfide reduces 1,3-dinitrobenzene derivatives to the 3-nitroanilines. An aqueous solution of sodium sulfide can be refluxed with nitro-bearing azo dyes dissolved in dioxane and ethanol to selectively reduce the nitro groups to amines, while other reducible groups, e.g. the azo group, remain intact. Sulfide has also been employed in photocatalytic applications.
Alien hunters claim that a tiny lizard on Mars was photographed by NASA's Curiosity rover. Scientists at NASA's Jet Propulsion Laboratory (JPL) predict that a major dust storm will hit Mars in the next few weeks and might limit the solar energy available to the Martian rovers Curiosity and Opportunity. NASA's new blue-collar Mars rover is capable of digging and collecting soil from Mars, where fuel and propellant components may be found. While waiting for the first manned mission to the Red Planet, scientists have been exploring Mars with the help of these spacecraft. Here is NASA's pride of Martian robotic explorers roving Earth's neighboring planet. NASA's Mars Curiosity rover released a colored image of the Martian surface showing some Earth-like formations. Just like Earth, Mars is also a thing of beauty, especially when it comes to rock formations. Recent images from NASA's Mars Curiosity rover reveal stunning layered rock formations in the "Murray Buttes" region of the red planet. However, are human explorations polluting Martian waters? ULA's Atlas V rocket was chosen by NASA to launch its highly anticipated Mars 2020 rover into space. A new stunning 360-degree video of Mars was beamed back to Earth by NASA's Mars Curiosity rover. The Mars Curiosity rover can now identify targets and fire lasers without commands from human controllers on Earth. NASA reveals the design of the new Mars 2020 rover, which is designed to search for evidence of life on the red planet. Scientists have found clues that suggest there could still be flowing water on Mars today. The Mars Curiosity rover entered "safe mode" on July 2. Engineers at NASA are trying to determine what caused it but say the rover is now "stable". NASA's Curiosity rover discovered sand dunes that are comparable to those found underwater on Earth. A new study suggests that the discovery of manganese oxide on Mars could prove that there was Earth-like oxygen in the Martian atmosphere during ancient times.
What is uranium-238 used for in dating rocks? The energies involved in nuclear decay are so large, and the nucleus is so small, that physical conditions in the Earth (i.e. temperature and pressure) cannot affect the rate of decay. The rate of decay, or rate of change of the number N of particles, is proportional to the number present at any time, i.e. dN/dt = −λN. The half-life is the amount of time it takes for one half of the initial amount of the parent, radioactive isotope, to decay to the daughter isotope. Many accept radiometric dating methods as proof that the earth is millions of years old, in contrast to the biblical timeline. Mike Riddle exposes the unbiblical assumptions used in these calculations. The primary dating method scientists use for determining the age of the earth is radioisotope dating. Plants absorb carbon dioxide from the atmosphere and animals eat plants. Even though it decays into nitrogen, new carbon-14 is always being formed when cosmic rays hit atoms high in the atmosphere. Thus, if we start out with 1 gram of the parent isotope, after the passage of 1 half-life there will be 0.5 gram of the parent isotope left. After the passage of two half-lives only 0.25 gram will remain, and after 3 half-lives only 0.125 gram will remain, etc. Radioisotope dating (also referred to as radiometric dating) is the process of estimating the age of rocks from the decay of their radioactive elements. There are certain kinds of atoms in nature that are unstable and spontaneously change (decay) into other kinds of atoms.
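The half-life arithmetic above (1 g, then 0.5 g, 0.25 g, 0.125 g) follows directly from the decay law, and can be sketched in a few lines (the function name is ours, for illustration):

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of the parent isotope remaining after `elapsed` time,
    given its half-life (same units): N/N0 = (1/2)**(t / T)."""
    return 0.5 ** (elapsed / half_life)

# Starting from 1 gram of parent isotope:
# after 1 half-life -> 0.5 g, after 2 -> 0.25 g, after 3 -> 0.125 g
```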
Frost weathering is a collective term for several mechanical weathering processes induced by stresses created by the freezing of water into ice. The term serves as an umbrella term for a variety of processes such as frost shattering, frost wedging and cryofracturing. The process may act on a wide range of spatial and temporal scales, from minutes to years and from dislodging mineral grains to fracturing boulders. It is most pronounced in high-altitude and high-latitude areas and is especially associated with alpine, periglacial, subpolar maritime and polar climates, but may occur anywhere at sub-freezing temperatures (between -3 and -8 °C) if water is present. Certain frost-susceptible soils expand or heave upon freezing as a result of water migrating via capillary action to grow ice lenses near the freezing front. This same phenomenon occurs within pore spaces of rocks. The ice accumulations grow larger as they attract liquid water from the surrounding pores. The ice crystal growth weakens the rocks which, in time, break up. It is caused by the expansion of ice when water freezes, putting considerable stress on the walls of containment. This is actually a very common process in all humid, temperate areas where there is exposed rock, especially porous rocks like sandstone. Sand can often be found just under the faces of exposed sandstone where individual grains have been popped off, one by one. This process is often termed frost spalling. In fact, this is often the most important weathering process for exposed rock in many areas. Similar processes can act on asphalt pavements, contributing to various forms of cracking and other distresses, which, when combined with traffic and the intrusion of water, accelerate rutting, the formation of potholes, and other forms of pavement roughness. The traditional explanation for frost weathering was volumetric expansion of freezing water. When water freezes to ice, its volume increases by nine percent. 
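The nine-percent figure quoted above follows from the density change on freezing. A quick check, assuming round-number densities of liquid water and ice (not given in the article):

```python
RHO_WATER = 1.000  # g/cm3, liquid water near 0 °C (assumed approximate value)
RHO_ICE = 0.917    # g/cm3, ice Ih (assumed approximate value)

# A fixed mass of water occupies m/rho of volume, so on freezing the
# volume grows by the ratio of the densities minus one.
expansion = RHO_WATER / RHO_ICE - 1.0
# expansion comes out near 0.09, i.e. the roughly nine percent quoted above
```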
Under specific circumstances, this expansion is able to displace or fracture rock. At a temperature of -22 °C, ice growth is known to be able to generate pressures of up to 207 MPa, more than enough to fracture any rock. For frost weathering to occur by volumetric expansion, the rock must have almost no air that can be compressed to compensate for the expansion of ice, which means it has to be water-saturated and frozen quickly from all sides so that the water does not migrate away and the pressure is exerted on the rock. These conditions are considered unusual, restricting volumetric expansion to a process of importance within a few centimeters of a rock's surface and on larger existing water-filled joints, in a process called ice wedging. Not all volumetric expansion is caused by the pressure of the freezing water; it can be caused by stresses in water that remains unfrozen. When ice growth induces stresses in the pore water that break the rock, the result is called hydrofracture. Hydrofracturing is favoured by large interconnected pores or large hydraulic gradients in the rock. If there are small pores, very quick freezing of water in parts of the rock may expel water, and if the water is expelled faster than it can migrate, pressure may rise, fracturing the rock. Since research in physical weathering began around 1900, volumetric expansion was, until the 1980s, held to be the predominant process behind frost weathering. This view was challenged in 1985 and 1986 publications by Walder and Hallet. Nowadays researchers such as Matsuoka and Murton consider the "conditions necessary for frost weathering by volumetric expansion" unusual. The bulk of recent literature demonstrates that ice segregation is capable of providing quantitative models for common phenomena, while the traditional, simplistic volumetric-expansion account does not.
- Hales, T. C.; Roering, Joshua (2007). "Climatic controls on frost cracking and implications for the evolution of bedrock landscapes". Journal of Geophysical Research: Earth Surface. 112 (F2). Bibcode:2007JGRF..112.2033H. doi:10.1029/2006JF000616.
- Taber, Stephen (1930). "The mechanics of frost heaving". Journal of Geology. 38: 303–317. Bibcode:1930JG.....38..303T. doi:10.1086/623720.
- Goudie, A.S.; Viles, H. (2008). "5: Weathering Processes and Forms". In Burt, T.P.; Chorley, R.J.; Brunsden, D.; Cox, N.J.; Goudie, A.S. Quaternary and Recent Processes and Forms. Landforms or the Development of Geomorphology. 4. Geological Society. pp. 129–164. ISBN 9781862392496.
- Eaton, Robert A.; Joubert, Robert H. (December 1989). Wright, Edmund A., ed. Pothole Primer: A Public Administrator's Guide to Understanding and Managing the Pothole Problem. Special Report 81-21. U.S. Army Cold Regions Research and Engineering Laboratory.
- Minnesota's Cold Weather Road Research Facility (2007). "Investigation of Low Temperature Cracking in Asphalt Pavements — Phase II (MnROAD Study)".
- Matsuoka, N.; Murton, J. (2008). "Frost weathering: recent advances and future directions". Permafrost and Periglacial Processes. 19: 195–210. doi:10.1002/ppp.620.
- T︠S︡ytovich, Nikolaĭ Aleksandrovich (1975). The Mechanics of Frozen Ground. Scripta Book Co. pp. 78–79. ISBN 978-0-07-065410-5.
- Walder, Joseph S.; Hallet, Bernard (February 1986). "The Physical Basis of Frost Weathering: Toward a More Fundamental and Unified Perspective". Arctic and Alpine Research. 18 (1): 27–32. JSTOR 1551211.
- Sanders, Johnny W.; Cuffey, Kurt M.; Moore, Jeffrey R.; MacGregor, Kelly R.; Kavanaugh, Jeffrey L. (2012). "Periglacial weathering and headwall erosion in cirque glacier bergschrunds". Geology. doi:10.1130/G33330.1.
- Bell, Robin E. (27 April 2008). "The role of subglacial water in ice-sheet mass balance". Nature Geoscience. 1: 297–304. Bibcode:2008NatGe...1..297B. doi:10.1038/ngeo186.
- Murton, Julian B.; Peterson, Rorik; Ozouf, Jean-Claude (17 November 2006). "Bedrock Fracture by Ice Segregation in Cold Regions". Science. 314 (5802): 1127–1129. Bibcode:2006Sci...314.1127M. doi:10.1126/science.1132127. PMID 17110573.
- Dash, J. G.; Rempel, A. W.; Wettlaufer, J. S. (2006). "The physics of premelted ice and its geophysical consequences". Reviews of Modern Physics. 78: 695. Bibcode:2006RvMP...78..695D. doi:10.1103/RevModPhys.78.695.
- Rempel, A.W.; Wettlaufer, J.S.; Worster, M.G. (2001). "Interfacial Premelting and the Thermomolecular Force: Thermodynamic Buoyancy". Physical Review Letters. 87 (8): 088501. Bibcode:2001PhRvL..87h8501R. doi:10.1103/PhysRevLett.87.088501. PMID 11497990.
- Rempel, A. W. (2008). "A theory for ice-till interactions and sediment entrainment beneath glaciers". Journal of Geophysical Research. 113: F01013. Bibcode:2008JGRF..11301013R. doi:10.1029/2007JF000870.
- Peterson, R. A.; Krantz, W. B. (2008). "Differential frost heave model for patterned ground formation: Corroboration with observations along a North American arctic transect". Journal of Geophysical Research. 113: G03S04. Bibcode:2008JGRG..11303S04P. doi:10.1029/2007JG000559.
EXAMPLE 11.1. Writing Equations.

Solution. From Table 11.2, we see that hematite is Fe2O3. From Chapter 7, we know that hydrochloric acid is HCl. We are told in the example that the products are FeCl3 and H2O, so we now have enough information to write an unbalanced equation.

Fe2O3 + HCl → FeCl3 + H2O (not balanced)

To balance the equation, we see that there are two Fe atoms on the left and only one on the right, so we add a coefficient of 2 to FeCl3.

Fe2O3 + HCl → 2 FeCl3 + H2O (not balanced)

Now there are six Cl atoms on the right, so a coefficient of 6 is needed for HCl.

Fe2O3 + 6 HCl → 2 FeCl3 + H2O (not balanced)

Finally, there are three O atoms and six H atoms on the left, so a 3 in front of H2O will complete the task.

Fe2O3 + 6 HCl → 2 FeCl3 + 3 H2O (balanced)

What mass in grams of HF is needed to dissolve 12 g of quartz? Quartz dissolves in hydrofluoric acid to give silicon tetrafluoride and water. Write and balance an equation for this process. You are told by a geoscientist that hematite (Table 11.2) can be dissolved in hydrochloric acid to form FeCl3 and water. Write a balanced equation for the process.

We haven't yet discussed energy in detail, but there is enough information in the paragraph to permit us to do the calculation using dimensional analysis.
The paragraph notes that 6300 kJ of energy is needed to make a single disposable bottle, that 8300 kJ is needed to make a single returnable bottle, and that a returnable is used an average of 12.5 times. That means that one returnable is the equivalent of 12.5 disposables. We begin by calculating the number of disposables equivalent to 10,000 returnables:

10,000 returnables x (12.5 disposables / 1 returnable) = 125,000 disposables

Now we can calculate the energy cost of the disposables, using the factor 6300 kJ = 1 disposable:

125,000 disposables x (6300 kJ / 1 disposable) = 7.88 x 10^8 kJ

Then we can calculate the energy cost of the returnables in a similar way:

10,000 returnables x (8300 kJ / 1 returnable) = 8.3 x 10^7 kJ

Notice that we can do this calculation without knowing just what a joule is!

When one mole (16.0 g) of methane is burned in air, 890 kJ of energy is produced. What mass of methane is equivalent to the energy difference between the returnables and the disposables in Example 11.2? Based on the data in the previous paragraph, calculate the quantity of energy needed to produce 10,000 returnable bottles, and compare it to the energy needed to produce the equivalent number of disposable bottles.
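Both worked examples above can be sanity-checked with a few lines of code: first by counting atoms on each side of the balanced equation, then by redoing the dimensional analysis. A minimal sketch (the `side` helper and the atom dictionaries are our own bookkeeping, not from the text):

```python
from collections import Counter

def side(*species):
    """Sum atom counts over (coefficient, {element: count}) pairs."""
    total = Counter()
    for coeff, atoms in species:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

FE2O3 = {"Fe": 2, "O": 3}
HCL = {"H": 1, "Cl": 1}
FECL3 = {"Fe": 1, "Cl": 3}
H2O = {"H": 2, "O": 1}

# Fe2O3 + 6 HCl -> 2 FeCl3 + 3 H2O: both sides carry the same atoms
lhs = side((1, FE2O3), (6, HCL))
rhs = side((2, FECL3), (3, H2O))

# Energy comparison from Example 11.2
disposables = 10_000 * 12.5        # 125,000 disposable bottles
e_disposable = disposables * 6300  # kJ for the disposables
e_returnable = 10_000 * 8300       # kJ for the returnables
```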
ROTATION OF GRANULAR MATERIAL IN LABORATORY TESTS AND ITS NUMERICAL SIMULATION USING TIJ–COSSERAT CONTINUUM THEORY

It is important to take particle rotation into consideration when discussing deformation behaviour in granular materials. A Cosserat continuum theory is suitable for problems that involve the rotation of particles in granular materials, because the deformation of a ground composed of granular materials is described by both displacements and rotations. In this study, laboratory tests were carried out to investigate the rotation behaviour of granular materials. Then, an elasto-plastic model for sand based on the tij-sand model was formulated within Cosserat continuum theory. Furthermore, the model was implemented into a finite element code for the numerical simulation of boundary value problems related to the tests. From a series of laboratory tests and simulations, the results are compared and discussed in detail.

Keywords: Shear Band; Granular Material; Earth Pressure; Particle Rotation; Active Earth Pressure

- 1. T. Nakai (1989), An isotropic hardening elasto-plastic model for sand considering the stress path dependency in three-dimensional stresses, Soils and Foundations, (1), pp. 119–137.
- 2. K. Sawada, H. Kato, A. Yashima and F. Zhang (2002), Analytical study of grains rotation using tij sand model based on Cosserat continuum theory, Proceedings of 1st International Workshop on New Frontiers in Computational Geotechnics, Banff, Alberta, Canada, pp. 175–182.
The National Oceanic and Atmospheric Administration (NOAA) issued its annual Arctic Report Card today, and no time might seem more crucial than now as the world grapples with the natural, physical and socio-political aspects of climate change. NOAA released the report card to the media and the public via a call-in webinar on Thursday, Dec. 1. Karen Frey, assistant professor of geography in the Graduate School of Geography at Clark University, contributed to the 2011 Arctic Report Card's collection of scientific essays, along with an international team of 121 scientists from 14 countries. On Thursday, she was on a panel of three distinguished researchers who presented the live webinar and conducted a Q&A session with reporters from the Associated Press, Reuters, ClimateWire, and others. The 2011 Arctic Report Card, along with an accompanying video, is available on NOAA's website. Monica Medina, principal deputy undersecretary for Oceans and Atmosphere at NOAA, made opening remarks. Other key panelists were Howie Epstein of the University of Virginia, who reported on vegetation, and Don Perovich of the U.S. Army Engineer Research and Development Center (ERDC) Cold Regions Research and Engineering Laboratory in Hanover, N.H., who talked about sea ice. Frey focused on how dramatic declines in Arctic sea ice are resulting in increased primary productivity of phytoplankton, and the important consequences for ecosystems and the food chain. Prof. Frey is research adviser to six Ph.D., M.A., and B.A. students, working on projects in Siberia, Alaska, West Antarctica, the Himalayas, and the Chukchi/Beaufort Seas. The key points summarized in the report: "Persistent warming has caused dramatic changes in the Arctic Ocean and the ecosystem it supports. Ocean changes include reduced sea ice and freshening of the upper ocean, and impacts such as increased biological productivity at the base of the food chain and loss of habitat for walrus and polar bears."
Professor Frey has been involved in several expeditions to study climate change in the Arctic. For the past two years, she has been part of NASA's multi-year ICESCAPE (Impacts of Climate change on the Eco-Systems and Chemistry of the Arctic Pacific Environment) project, conducting research from aboard the U.S. Coast Guard Cutter Healy in waters off Alaska's northern shores. She is a principal investigator and led select Clark undergraduates in "The Polaris Project: Rising Stars in the Arctic" (National Science Foundation, International Polar Year), a field course in eastern Siberia to study the hydrological and biogeochemical impacts of climate warming and permafrost thaw. Prof. Frey also maintains a Polar Science Research Laboratory blog. Issued annually, the Arctic Report Card is a timely source of clear, reliable and concise environmental information on the state of the Arctic, relative to historical time series records. Some of the essays are based upon updates to articles in the Bulletin of the American Meteorological Society State of the Climate in 2009. The Arctic Report Card is collaboratively supported by the international Arctic Council. The Conservation of Arctic Flora and Fauna (CAFF) Circumpolar Biodiversity Monitoring Program (CBMP) provides collaborative support through the delivery and editing of the biological elements of the Report Card. The audience for the Arctic Report Card is wide, including scientists, students, teachers, decision makers and the general public interested in the Arctic environment and science. The web-based format facilitates timely future updates of the content. Support for the Report Card is provided by the NOAA Climate Program Office through the Arctic Research Program. Jackie Richter-Menge is the chief editor of the Arctic Report Card.
Authors: Gerges Francis Tawdrous

Solar data analysis tells us that Mercury has two velocities:

1st velocity = 47.4 km/sec, which is registered in NASA's Planetary Fact Sheet
2nd velocity = 47.8 km/sec, which is the velocity necessary to cover Mercury's orbital circumference (57.9 mkm × 2π = 364 mkm) in Mercury's orbital period of 88 days

The difference between the two velocities is about 1%. Why are there two velocities for the same planet, and how can that be possible? This question is similar to the old question concerning Newton's third law: why is there, for each action, a reaction equal in value and opposite in direction? In brief, why are the forces created in double?

Comments: 8 Pages. [v1] 2018-07-10 13:30:35

Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary.
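The second figure can be reproduced from the numbers in the abstract alone (a sketch assuming a circular orbit; the 57.9 million km radius and 88-day period are taken from the text):

```python
import math

# Mean orbital speed implied by a circular orbit of radius 57.9 million km
# traversed once per 88-day period (values from the abstract above).
radius_km = 57.9e6
period_s = 88 * 86400                        # 88 days in seconds

circumference_km = 2 * math.pi * radius_km   # roughly 364 million km
speed = circumference_km / period_s          # roughly 47.8 km/s

# Relative gap from the 47.4 km/s value in NASA's Planetary Fact Sheet
gap = (speed - 47.4) / 47.4
print(f"{speed:.1f} km/s, gap = {gap:.1%}")
```

The gap comes out just under 1%, consistent with the abstract's claim.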
In classical mechanics, Maupertuis's principle (named after Pierre Louis Maupertuis) is that the path followed by a physical system is the one of least length (with a suitable interpretation of path and length). It is a special case of the more generally stated principle of least action. Using the calculus of variations, it results in an integral equation formulation of the equations of motion for the system.

Maupertuis's principle states that the true path of a system described by generalized coordinates \(\mathbf{q}\) between two specified states \(\mathbf{q}_1\) and \(\mathbf{q}_2\) is an extremum (i.e., a stationary point, a minimum, maximum or saddle point) of the abbreviated action functional

\[ \mathcal{S}_0[\mathbf{q}(t)] = \int \mathbf{p} \cdot d\mathbf{q}, \]

where \(\mathbf{p} = (p_1, p_2, \ldots, p_N)\) are the conjugate momenta of the generalized coordinates, defined by the equation

\[ p_j = \frac{\partial L}{\partial \dot{q}_j}, \]

where \(L(\mathbf{q}, \dot{\mathbf{q}}, t)\) is the Lagrangian function for the system. In other words, any first-order perturbation of the path results in (at most) second-order changes in \(\mathcal{S}_0\). Note that the abbreviated action \(\mathcal{S}_0\) is a functional (i.e. a function from a vector space into its underlying scalar field), which in this case takes as its input a function (i.e. the paths between the two specified states).

For many systems, the kinetic energy is quadratic in the generalized velocities,

\[ T = \frac{1}{2} \dot{\mathbf{q}}^{\mathsf{T}} \mathbf{M} \dot{\mathbf{q}}, \]

although the mass tensor \(\mathbf{M}\) may be a complicated function of the generalized coordinates \(\mathbf{q}\). For such systems, a simple relation relates the kinetic energy, the generalized momenta and the generalized velocities,

\[ 2T = \dot{\mathbf{q}}^{\mathsf{T}} \mathbf{M} \dot{\mathbf{q}} = \mathbf{p} \cdot \dot{\mathbf{q}}, \]

provided that the potential energy \(V(\mathbf{q})\) does not involve the generalized velocities. By defining a normalized distance or metric in the space of generalized coordinates,

\[ ds^2 = d\mathbf{q}^{\mathsf{T}} \mathbf{M} \, d\mathbf{q}, \]

one may immediately recognize the mass tensor as a metric tensor. The kinetic energy may be written in a massless form,

\[ T = \frac{1}{2} \left( \frac{ds}{dt} \right)^2 . \]

Hence, the abbreviated action can be written

\[ \mathcal{S}_0 = \int \mathbf{p} \cdot d\mathbf{q} = \int \sqrt{2T}\, ds = \int \sqrt{2\left(E_{\mathrm{tot}} - V\right)}\, ds, \]

since the kinetic energy \(T = E_{\mathrm{tot}} - V\) equals the (constant) total energy \(E_{\mathrm{tot}}\) minus the potential energy \(V(\mathbf{q})\). This form of the abbreviated action is associated with Jacobi's principle.
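As a numerical aside (my own sketch, not part of the article): when the potential is constant, the abbreviated action is proportional to path length, so among paths between fixed endpoints the straight line gives the smallest value. A discretized check:

```python
import math

# Discretized abbreviated action S0 = sqrt(2*m*(E - V)) * (path length)
# for a constant potential V; m = 1 and E = 0.5 are arbitrary choices here.
def abbreviated_action(points, m=1.0, E=0.5, V=0.0):
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    return math.sqrt(2.0 * m * (E - V)) * length

n = 1000
ts = [i / n for i in range(n + 1)]
straight = [(t, 0.0) for t in ts]                         # (0,0) -> (1,0)
detour = [(t, 0.2 * math.sin(math.pi * t)) for t in ts]   # bowed path

s_line = abbreviated_action(straight)
s_detour = abbreviated_action(detour)
print(s_line < s_detour)  # True: the straight path is the extremum (minimum)
```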
In particular, if the potential energy is a constant, then Jacobi's principle reduces to minimizing the path length in the space of the generalized coordinates, which is equivalent to Hertz's principle of least curvature.

Comparison with Hamilton's principle

Maupertuis's principle and Hamilton's principle differ in three ways:

- Their definition of the action: Hamilton's principle uses the action \(\mathcal{S} = \int L\, dt\), the time integral of the Lagrangian, whereas Maupertuis's principle uses the abbreviated action \(\mathcal{S}_0 = \int \mathbf{p} \cdot d\mathbf{q}\).
- The solution that they determine: Hamilton's principle determines the trajectory as a function of time, whereas Maupertuis's principle determines only the shape of the trajectory in the generalized coordinates. For example, Maupertuis's principle determines the shape of the ellipse on which a particle moves under the influence of an inverse-square central force such as gravity, but does not describe per se how the particle moves along that trajectory. (However, this time parameterization may be determined from the trajectory itself in subsequent calculations using the conservation of energy.) By contrast, Hamilton's principle directly specifies the motion along the ellipse as a function of time.
- The constraints on the variation: Maupertuis's principle requires that the two endpoint states \(\mathbf{q}_1\) and \(\mathbf{q}_2\) be given and that energy be conserved along every trajectory. By contrast, Hamilton's principle does not require the conservation of energy, but does require that the endpoint times \(t_1\) and \(t_2\) be specified as well as the endpoint states \(\mathbf{q}_1\) and \(\mathbf{q}_2\).

History

Maupertuis was the first to publish a principle of least action, where he defined the action as \(\int v\, ds\), which was to be minimized over all paths connecting two specified points. However, Maupertuis applied the principle only to light, not matter (see the 1744 Maupertuis reference below). He arrived at the principle by considering Snell's law for the refraction of light, which Fermat had explained by Fermat's principle, that light follows the path of shortest time, not distance. This troubled Maupertuis, since he felt that time and distance should be on an equal footing: "why should light prefer the path of shortest time over that of distance?"
Accordingly, Maupertuis asserts with no further justification the principle of least action as equivalent but more fundamental than Fermat's principle, and uses it to derive Snell's law. Maupertuis specifically states that light does not follow the same laws as material objects. A few months later, well before Maupertuis's work appeared in print, Leonhard Euler independently defined action in its modern abbreviated form and applied it to the motion of a particle, but not to light (see the 1744 Euler reference below). Euler also recognized that the principle only held when the speed was a function only of position, i.e., when the total energy was conserved. (The mass factor in the action and the requirement for energy conservation were not relevant to Maupertuis, who was concerned only with light.) Euler used this principle to derive the equations of motion of a particle in uniform motion, in a uniform and non-uniform force field, and in a central force field. Euler's approach is entirely consistent with the modern understanding of Maupertuis's principle described above, except that he insisted that the action should always be a minimum, rather than a stationary point. Two years later, Maupertuis cites Euler's 1744 work as a "beautiful application of my principle to the motion of the planets" and goes on to apply the principle of least action to the lever problem in mechanical equilibrium and to perfectly elastic and perfectly inelastic collisions (see the 1746 publication below). Thus, Maupertuis takes credit for conceiving the principle of least action as a general principle applicable to all physical systems (not merely to light), whereas the historical evidence suggests that Euler was the one to make this intuitive leap. Notably, Maupertuis's definitions of the action and protocols for minimizing it in this paper are inconsistent with the modern approach described above. 
Thus, Maupertuis's published work does not contain a single example in which he used Maupertuis's principle (as presently understood). In 1751, Maupertuis's priority for the principle of least action was challenged in print (Nova Acta Eruditorum of Leipzig) by an old acquaintance, Johann Samuel Koenig, who quoted a 1707 letter purportedly from Leibniz that described results similar to those derived by Euler in 1744. However, Maupertuis and others demanded that Koenig produce the original of the letter to authenticate its having been written by Leibniz. Koenig only had a copy and no clue as to the whereabouts of the original. Consequently, the Berlin Academy under Euler's direction declared the letter to be a forgery and that its President, Maupertuis, could continue to claim priority for having invented the principle. Koenig continued to fight for Leibniz's priority and soon luminaries such as Voltaire and the King of Prussia, Frederick II were engaged in the quarrel. However, no progress was made until the turn of the twentieth century, when other independent copies of Leibniz's letter were discovered. - Analytical mechanics - Hamilton's principle - Gauss's principle of least constraint (also describes Hertz's principle of least curvature) - Hamilton–Jacobi equation - Pierre Louis Maupertuis, Accord de différentes loix de la nature qui avoient jusqu'ici paru incompatibles (original 1744 French text); Accord between different laws of Nature that seemed incompatible (English translation) - Leonhard Euler, Methodus inveniendi/Additamentum II (original 1744 Latin text); Methodus inveniendi/Appendix 2 (English translation) - Pierre Louis Maupertuis, Les loix du mouvement et du repos déduites d'un principe metaphysique (original 1746 French text); Derivation of the laws of motion and equilibrium from a metaphysical principle (English translation) - Leonhard Euler, Exposé concernant l'examen de la lettre de M. 
de Leibnitz (original 1752 French text); Investigation of the letter of Leibniz (English translation) - König J. S. "De universali principio aequilibrii et motus", Nova Acta Eruditorum, 1751, 125–135, 162–176. - J. J. O'Connor and E. F. Robertson, "The Berlin Academy and forgery", (2003), at The MacTutor History of Mathematics archive. - C. I. Gerhardt, (1898) "Über die vier Briefe von Leibniz, die Samuel König in dem Appel au public, Leide MDCCLIII, veröffentlicht hat", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften, I, 419–427. - W. Kabitz, (1913) "Über eine in Gotha aufgefundene Abschrift des von S. König in seinem Streite mit Maupertuis und der Akademie veröffentlichten, seinerzeit für unecht erklärten Leibnizbriefes", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften, II, 632–638. - H. Goldstein, (1980) Classical Mechanics, 2nd ed., Addison Wesley, pp. 362–371. ISBN 0-201-02918-9 - L. D. Landau and E. M. Lifshitz, (1976) Mechanics, 3rd. ed., Pergamon Press, pp. 140–143. ISBN 0-08-021022-8 (hardcover) and ISBN 0-08-029141-4 (softcover) - G. C. J. Jacobi, Vorlesungen über Dynamik, gehalten an der Universität Königsberg im Wintersemester 1842–1843. A. Clebsch (ed.) (1866); Reimer; Berlin. 290 pages, available online Œuvres complètes volume 8 at Gallica-Math from the Gallica Bibliothèque nationale de France. - H. Hertz, (1896) Principles of Mechanics, in Miscellaneous Papers, vol. III, Macmillan. - V.V. Rumyantsev (2001) , "Hertz's principle of least curvature", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Biopolymers are polymers produced by living organisms: monomeric units linked together into large structures such as DNA, RNA and other polynucleotides. Multilayer films are stacks of thin layers of different materials. They find application in optics, optical coatings, optical filters, mirrors and related devices, and are well suited to spatial filtering in optical imaging.

A major distinction between biopolymers and other polymers is found in their structures. All polymers are made of repetitive units called monomers. Biopolymers typically have a well-defined structure, though this is not a universal characteristic (lignocellulose, for example): the exact chemical composition and the sequence in which the units are arranged is called the primary structure, in the case of proteins. Many biopolymers fold into characteristic compact shapes that determine their biological functions and depend in a sophisticated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. Biopolymers also play a crucial role in the medicinal and pharmaceutical sciences and in textiles. In contrast, most synthetic polymers have much simpler and more random (or stochastic) structures.

Thin-film layers are common in the natural world. Their effects produce the colours seen in soap bubbles and oil slicks, as well as the structural coloration of some animals. In many cases, iridescent colours that were once thought to result from flat thin-film layers, such as in opals, peacocks, and the Blue Morpho butterfly, turn out to result from more complicated periodic photonic crystal structures. In manufacturing, thin-film layers can be produced by depositing one or more thin layers of material onto a substrate (usually glass). This is most often done using a physical vapour deposition process, such as evaporation or sputter deposition, or a chemical process such as chemical vapour deposition.
Planet X: In 2016, further work showed this unknown distant planet is likely on an inclined, eccentric orbit that comes no closer than about 200 AU and goes no further than about 1600 AU from the Sun. The orbit is predicted to be anti-aligned with the clustered extreme trans-Neptunian objects. Because Pluto is no longer considered a planet by the International Astronomical Union, this new hypothetical object has become known as Planet Nine.
As quarks have a baryon number of +1/3, and antiquarks of −1/3, the pentaquark would have a total baryon number of 1, and thus would be a baryon. Further, because it has five quarks instead of the usual three found in regular baryons (a.k.a. 'triquarks'), it would be classified as an exotic baryon. The name pentaquark was coined by Claude Gignoux et al. and Harry J. Lipkin in 1987; however, the possibility of five-quark particles was identified as early as 1964 when Murray Gell-Mann first postulated the existence of quarks. Although predicted for decades, pentaquarks proved surprisingly difficult to discover and some physicists were beginning to suspect that an unknown law of nature prevented their production. The first claim of pentaquark discovery was recorded at LEPS in Japan in 2003, and several experiments in the mid-2000s also reported discoveries of other pentaquark states. Others were not able to replicate the LEPS results, however, and the other pentaquark discoveries were not accepted because of poor data and statistical analysis. On 13 July 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom lambda baryons (Λ0b). Outside particle physics laboratories, pentaquarks could also be produced naturally by supernovae as part of the process of forming a neutron star. The scientific study of pentaquarks might offer insights into how these stars form, as well as allowing more thorough study of particle interactions and the strong force. A quark is a type of elementary particle that has mass, electric charge, and colour charge, as well as an additional property called flavour, which describes what type of quark it is (up, down, strange, charm, top, or bottom). Due to an effect known as colour confinement, quarks are never seen on their own. Instead, they form composite particles known as hadrons so that their colour charges cancel out.
Hadrons made of one quark and one antiquark are known as mesons, while those made of three quarks are known as baryons. These 'regular' hadrons are well documented and characterized; however, there is nothing in theory to prevent quarks from forming 'exotic' hadrons such as tetraquarks with two quarks and two antiquarks, or pentaquarks with four quarks and one antiquark. A wide variety of pentaquarks are possible, with different quark combinations producing different particles. To identify which quarks compose a given pentaquark, physicists use the notation qqqqq̄, where q and q̄ respectively refer to any of the six flavours of quarks and antiquarks. The symbols u, d, s, c, b, and t stand for the up, down, strange, charm, bottom, and top quarks respectively, with the symbols ū, d̄, s̄, c̄, b̄, and t̄ corresponding to the respective antiquarks. For instance a pentaquark made of two up quarks, one down quark, one charm quark, and one charm antiquark would be denoted uudcc̄. The quarks are bound together by the strong force, which acts in such a way as to cancel the colour charges within the particle. In a meson, this means a quark is partnered with an antiquark with an opposite colour charge – blue and antiblue, for example – while in a baryon, the three quarks have between them all three colour charges – red, blue, and green.[nb 1] In a pentaquark, the colours also need to cancel out, and the only feasible combination is to have one quark with one colour (e.g. red), one quark with a second colour (e.g. green), two quarks with the third colour (e.g. blue), and one antiquark to counteract the surplus colour (e.g. antiblue). The binding mechanism for pentaquarks is not yet clear.
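The bookkeeping behind this notation can be sketched in a few lines (my own illustration, using the standard quark charges: +2/3 e for u, c, t and −1/3 e for d, s, b). Applied to uudcc̄ it gives baryon number 1 and electric charge +1:

```python
from fractions import Fraction

# Electric charge of each quark flavour, in units of the elementary charge e.
# Every quark carries baryon number +1/3; every antiquark carries -1/3.
CHARGE = {'u': Fraction(2, 3), 'd': Fraction(-1, 3),
          's': Fraction(-1, 3), 'c': Fraction(2, 3),
          'b': Fraction(-1, 3), 't': Fraction(2, 3)}

def quantum_numbers(quarks, antiquarks):
    """Return (baryon number, electric charge) for a quark combination."""
    B = Fraction(len(quarks), 3) - Fraction(len(antiquarks), 3)
    Q = sum(CHARGE[q] for q in quarks) - sum(CHARGE[q] for q in antiquarks)
    return B, Q

# uudc plus a charm antiquark: the valence content of a charmonium-pentaquark
B, Q = quantum_numbers(['u', 'u', 'd', 'c'], ['c'])
print(B, Q)  # -> 1 1
```

The baryon number of 1 confirms that any qqqqq̄ combination is a baryon, as stated above.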
They may consist of five quarks tightly bound together, but it is also possible that they are more loosely bound and consist of a three-quark baryon and a two-quark meson interacting relatively weakly with each other via pion exchange (the same force that binds atomic nuclei) in a "meson-baryon molecule". The requirement to include an antiquark means that many classes of pentaquark are hard to identify experimentally – if the flavour of the antiquark matches the flavour of any other quark in the quintuplet, it will cancel out and the particle will resemble its three-quark hadron cousin. For this reason, early pentaquark searches looked for particles where the antiquark did not cancel. In the mid-2000s, several experiments claimed to reveal pentaquark states. In particular, a resonance with a mass of 1540 MeV/c2 (4.6 σ) was reported by LEPS in 2003, the Θ+. This coincided with a pentaquark state with a mass of 1530 MeV/c2 predicted in 1997. The proposed state was composed of two up quarks, two down quarks, and one strange antiquark (uudds̄). Following this announcement, nine other independent experiments reported seeing narrow peaks from the Θ+, with masses between 1522 MeV/c2 and 1555 MeV/c2, all above 4 σ. While concerns existed about the validity of these states, the Particle Data Group gave the Θ+ a 3-star rating (out of 4) in the 2004 Review of Particle Physics. Two other pentaquark states were reported, albeit with low statistical significance – the ddssū state, with a mass of 1860 MeV/c2, and the uuddc̄ state, with a mass of 3099 MeV/c2. Both were later found to be statistical effects rather than true resonances. Ten experiments then looked for the Θ+, but came out empty-handed. Two in particular (one at BELLE, and the other at CLAS) had nearly the same conditions as other experiments which claimed to have detected the Θ+ (DIANA and SAPHIR respectively).
The 2006 Review of Particle Physics concluded: [T]here has not been a high-statistics confirmation of any of the original experiments that claimed to see the Θ+; there have been two high-statistics repeats from Jefferson Lab that have clearly shown the original positive claims in those two cases to be wrong; there have been a number of other high-statistics experiments, none of which have found any evidence for the Θ+; and all attempts to confirm the two other claimed pentaquark states have led to negative results. The conclusion that pentaquarks in general, and the Θ+, in particular, do not exist, appears compelling. The 2008 Review of Particle Physics went even further: There are two or three recent experiments that find weak evidence for signals near the nominal masses, but there is simply no point in tabulating them in view of the overwhelming evidence that the claimed pentaquarks do not exist... The whole story – the discoveries themselves, the tidal wave of papers by theorists and phenomenologists that followed, and the eventual "undiscovery" – is a curious episode in the history of science. Despite these null results, LEPS results as of 2009 continued to show the existence of a narrow state with a mass of 1524±4 MeV/c2, with a statistical significance of 5.1 σ. Experiments continue to study this controversy.

2015 LHCb results

In July 2015, the LHCb collaboration at CERN identified pentaquarks in the Λ0b→J/ψK−p channel, which represents the decay of the bottom lambda baryon (Λ0b) into a J/ψ meson (J/ψ), a kaon (K−) and a proton (p). The results showed that sometimes, instead of decaying via intermediate lambda states, the Λ0b decayed via intermediate pentaquark states. The two states, named P+c(4380) and P+c(4450), had individual statistical significances of 9 σ and 12 σ, respectively, and a combined significance of 15 σ – enough to claim a formal discovery. The analysis ruled out the possibility that the effect was caused by conventional particles.
The two pentaquark states were both observed decaying strongly to J/ψp, hence must have a valence quark content of two up quarks, a down quark, a charm quark, and an anti-charm quark (uudcc̄), making them charmonium-pentaquarks. The search for pentaquarks was not an objective of the LHCb experiment (which is primarily designed to investigate matter-antimatter asymmetry) and the apparent discovery of pentaquarks was described as an "accident" and "something we've stumbled across" by the Physics Coordinator for the experiment.

Studies of pentaquarks in other experiments

The production of pentaquarks from electroweak decays of Λ0b baryons has an extremely small cross-section and yields very limited information about the internal structure of pentaquarks. For this reason, there are several ongoing and proposed initiatives to study pentaquark production in other channels. It is expected that pentaquarks will be studied in electron-proton collisions in the Hall B E2-16-007 and Hall C E12-12-001A experiments at JLAB. The major challenge in these studies is the heavy mass of the pentaquark, which will be produced at the tail of the photon-proton spectrum in JLAB kinematics. For this reason, the currently unknown branching fractions of the pentaquark must be sufficiently large to allow pentaquark detection in JLAB kinematics. The proposed Electron Ion Collider, which has higher energies, is much better suited to this problem. An interesting channel for studying pentaquarks in proton-nuclear collisions was suggested by Schmidt and Siddikov (2016; see the references below). This process has a large cross-section due to the lack of electroweak intermediaries and gives access to the pentaquark wave function. In fixed-target experiments, pentaquarks will be produced with small rapidities in the laboratory frame and will be easily detected. Besides, if there are neutral pentaquarks, as suggested in several models based on flavour symmetry, these might also be produced in this mechanism.
This process might be studied at future high-luminosity experiments like After@LHC and NICA. The discovery of pentaquarks will allow physicists to study the strong force in greater detail and aid understanding of quantum chromodynamics. In addition, current theories suggest that some very large stars produce pentaquarks as they collapse. The study of pentaquarks might help shed light on the physics of neutron stars. - The colour charges do not correspond to physical visible colours. They are arbitrary labels used to help scientists describe and visualise the charges of quarks. - Gignoux, C.; Silvestre-Brac, B.; Richard, J. M. (1987-07-16). "Possibility of stable multiquark baryons". Physics Letters B. 193 (2): 323–326. Bibcode:1987PhLB..193..323G. doi:10.1016/0370-2693(87)91244-5. - H. J. Lipkin (1987). "New possibilities for exotic hadrons — anticharmed strange baryons". Physics Letters B. 195 (3): 484–488. Bibcode:1987PhLB..195..484L. doi:10.1016/0370-2693(87)90055-4. - "Observation of particles composed of five quarks, pentaquark-charmonium states, seen in Λ0 b→J/ψpK− decays". CERN/LHCb. 14 July 2015. Retrieved 2015-07-14. - H. Muir (2 July 2003). "Pentaquark discovery confounds sceptics". New Scientist. Retrieved 2010-01-08. - K. Hicks (23 July 2003). "Physicists find evidence for an exotic baryon". Ohio University. Retrieved 2010-01-08. - See p. 1124 in C. Amsler et al. (Particle Data Group) (2008). "Review of particle physics" (PDF). Physics Letters B. 667 (1-5): 1. Bibcode:2008PhLB..667....1A. doi:10.1016/j.physletb.2008.07.018. R. Aaij et al. (LHCb collaboration) (2015). "Observation of J/ψp resonances consistent with pentaquark states in Λ0 b→J/ψK−p decays". Physical Review Letters. 115 (7): 072001. arXiv: . Bibcode:2015PhRvL.115g2001A. doi:10.1103/PhysRevLett.115.072001. - I. Sample (14 July 2015). "Large Hadron Collider scientists discover new particles: pentaquarks". The Guardian. Retrieved 2015-07-14. - J. Pochodzalla (2005). "Duets of strange quarks". 
Posted by : Nur Prasetiyo (Zutonx Blog) Monday, September 6, 2010

The three fundamental physical quantities of mechanics are length, mass, and time, which in the SI system have the units meters (m), kilograms (kg), and seconds (s), respectively. Prefixes indicating various powers of ten are used with these three basic units.

The density of a substance is defined as its mass per unit volume. Different substances have different densities mainly because of differences in their atomic masses and atomic arrangements. The number of particles in one mole of any element or compound is called Avogadro's number, N_A ≈ 6.022 × 10^23.

The method of dimensional analysis is very powerful in solving physics problems, because dimensions can be treated as algebraic quantities. By making estimates and order-of-magnitude calculations, you can approximate the answer to a problem when there is not enough information available to completely specify an exact solution. When you compute a result from several measured numbers, each of which has a certain accuracy, you should give the result with the correct number of significant figures.
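The density definition and the mole relation above can be checked with a few lines of Python. The aluminium figures used below are illustrative textbook values assumed for the example, not taken from the original post.

```python
# Density is mass per unit volume; Avogadro's number converts moles to particles.
# Aluminium values are assumed for illustration: density ~2700 kg/m^3,
# molar mass ~26.98 g/mol.

AVOGADRO = 6.022e23      # particles per mole
AL_DENSITY = 2700.0      # kg/m^3
AL_MOLAR_MASS = 26.98    # g/mol

def density(mass_kg, volume_m3):
    """Density = mass / volume, in kg/m^3."""
    return mass_kg / volume_m3

def atoms_in(mass_g, molar_mass_g_per_mol):
    """Number of particles = (mass / molar mass) * N_A."""
    return (mass_g / molar_mass_g_per_mol) * AVOGADRO

# A 10 cm aluminium cube:
volume = 0.1 ** 3                  # m^3
mass = AL_DENSITY * volume         # 2.7 kg
print(density(mass, volume))       # 2700.0
print(atoms_in(mass * 1000, AL_MOLAR_MASS))  # ~6.0e25 atoms
```

Note how the dimensions work out algebraically: kg divided by m^3 gives kg/m^3, exactly as dimensional analysis requires.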
Track topics on Twitter Track topics that are important to you An electrochemical cell comprising a novel dual-component graphite and Earth-crust abundant metal anode, a hydrogen producing cathode and an aqueous sodium chloride electrolyte has been constructed and used for carbon dioxide mineralisation. Under an atmosphere of 5% carbon dioxide in nitrogen, the cell exhibited both capacitive and oxidative electrochemistry at the anode. The graphite acted as a supercapacitive reagent concentrator, pumping carbon dioxide into aqueous solution as hydrogen carbonate. Simultaneous oxidation of the anodic metal generated cations which reacted with the hydrogen carbonate to give mineralised carbon dioxide. Whilst conventional electrochemical carbon dioxide reduction requires hydrogen, this cell generates hydrogen at the cathode. Carbon capture can be achieved in a highly sustainable manner using scrap metal within the anode, seawater as the electrolyte, an industrially-relevant gas stream and a solar panel as an effectively zero-carbon energy source. This article was published in the following journal. This work describes the electrochemical degradation of Reactive Black 5 (RB5) by two methods: electrochemical and photo-assisted electrochemical degradation with and without a Fenton reagent. Two anod... Cholinium chloride at a concentration of 5 mol kg-1 in water is proposed as low cost and environmentally friendly aqueous electrolyte enabling to extend the operating range of carbon/carbon supercapac... Polyaniline (PANI) as a pseudocapacitive material has very high theoretical capacitance of 2000 F g-1. However, its practical capacitance has been limited by low electrochemical surface area and unfav... Carbon aerogel/xerogel can be easily tuned to have hierarchical pores ranging from micropores to macropores. Nitrogen doping is considered to enhance the wettability and conductivity of the carbon ele... 
Carbon aerogels of an inter-connected three-dimensional (3D) structure are a potential carbon material for supercapacitors. We report a new oxidation modification method to prepare a series of modifie... We aim to test our method for measuring chemosensitivity (the ventilatory response to a change in carbon dioxide), which uses sinusoidal carbon dioxide stimuli. Hypotheses: - Ca... Carbon dioxide insufflation during colonoscopy significantly reduces discomfort (pain, bloating and flatulence) after the procedure. So far, it has not been studied in inflammatory bowel d... The purpose of this study is to determine if blowing carbon dioxide into the surgical field during open-heart surgery to displace retained chest cavity air from the atmosphere will decreas... Endovascular repair of infrarenal abdominal aortic aneurysms (AAA) requires a contrast agent to identify the vascular anatomy and placement of the stent graft. Iodine contrast has traditi... Investigators evaluate the effect of patient position (Trendelenburg and reverse Trendelenburg) on arterial, end-tidal and transcutaneous carbon dioxide partial pressure in patients underg... An enzyme with high affinity for carbon dioxide. It catalyzes irreversibly the formation of oxaloacetate from phosphoenolpyruvate and carbon dioxide. This fixation of carbon dioxide in several bacteria and some plants is the first step in the biosynthesis of glucose. EC 4.1.1.31. A family of zinc-containing enzymes that catalyze the reversible hydration of carbon dioxide. They play an important role in the transport of CARBON DIOXIDE from the tissues to the LUNG. EC 4.2.1.1. Catalyzes the decarboxylation of an alpha keto acid to an aldehyde and carbon dioxide. Thiamine pyrophosphate is an essential cofactor. In lower organisms, which ferment glucose to ethanol and carbon dioxide, the enzyme irreversibly decarboxylates pyruvate to acetaldehyde. EC 4.1.1.1. 
A copper protein that catalyzes the formation of 2 moles of 3-phosphoglycerate from ribulose 1,5-biphosphate in the presence of carbon dioxide. It utilizes oxygen instead of carbon dioxide to form 2-phosphoglycollate and 3-phosphoglycerate. EC 4.1.1.39. An enzyme of the lyase class that catalyzes the conversion of ATP and oxaloacetate to ADP, phosphoenolpyruvate, and carbon dioxide. The enzyme is found in some bacteria, yeast, and Trypanosoma, and is important for the photosynthetic assimilation of carbon dioxide in some plants. EC 4.1.1.49. Within medicine, nutrition (the study of food and the effect of its components on the body) has many different roles. Appropriate nutrition can help prevent certain diseases, or treat others. In critically ill patients, artificial feeding by tubes need t...
A study by an international team of scientists coordinatedby Italy's MUSE - Science Museum updates knowledge on the faunal richness of the Eastern Arc Mountains of Tanzania and Kenya; presents the discovery of 27 new vertebrate species (of which 23 amphibians and reptiles); identifies the drivers of the area's exception biological importance and advocates for its candidature to the UNESCO's List of World Heritage Sites. A study documenting the latest research findings on the faunal richness of the tropical moist forests of the Eastern Arc Mountains of Kenya and Tanzania was published on-line today (26th of September – 5am BST) in Diversity and Distributions. The study summarises the last decade of biodiversity research in the Eastern Arc Mountains, including the discovery of 27 vertebrate species that are new to science; and 14 other species not previously known to exist in the area. The results further re-enforce the importance of the Eastern Arc Mountains as one of the top sites on earth for biological diversity and endemism. The study was conducted by an international team coordinated by researchers of the Tropical Biodiversity Section at MUSE-Science Museum in Italy. The team includes several research and conservation agencies in Tanzania and across the world which were supported by the Critical Ecosystem Partnership Fund, a global partnership dedicated to providing funding and technical assistance to NGOs and private sector involved in the conservation of globally important biodiversity hotspots. The biodiversity research that was supported by CEPF targeted the most remote and least-surveyed forests in the Eastern Arc Mountains. The Eastern Arc Mountains are geologically ancient. The persistence of forest on these mountains, for several million years, has driven an extraordinary differentiation of living forms. The Eastern Arc Mountains comprise 13 blocks extending in an arc from southern Kenya to south-central Tanzania. 
"Our study shows how little we still know about the Earth's biodiversity hotspots, and how important targeted biodiversity inventories are in revealing the biological wealth of our planet" said Dr Francesco Rovero, Head of the Tropical Biodiversity section at MUSE-Science Museum, and senior author of the publication. "We can now rank the 13 Eastern Arc Mountain blocks by biological importance and we can better understand the forces that have caused such extraordinary patterns of biological richness. These findings provide the Governments of Tanzania and Kenya, and other agencies involved in the protection of these forests, with management recommendations, among which is to revive the Eastern Arc Mountain's candidature to UNESCO's List of World Heritage Sites", continued Dr Rovero. "The Eastern Arc Mountains were already known for the unusually high density of endemic species, however we lacked comprehensive data from at least six of the 13 mountain blocks," said Professor Neil Burgess, a leading expert on Africa's biodiversity from the Center for Macroecology, Evolution and Climate at the University of Copenhagen and UNEP-World Conservation Monitoring Centre. "The new findings affirm the importance of conserving as large an extent of forest as possible, particularly where a forest extends across different altitudes. Besides forest extent, forest elevational range and rainfall were found to be equally important drivers of richness of vertebrate species", continued Professor Burgess. "The candidature of this area to UNESCO's List of World Heritage Sites would ensure greater international visibility and support for the long-term protection of these exceptional but highly threatened fragments of rainforest. We are urging the Government of Tanzania to embrace this new research as a basis for reviving Tanzania's application to UNESCO," said Charles Meshack, Executive Director of the Tanzania Forest Conservation Group. 
"Twenty-three of the 27 new species that we reported in the study are amphibians and reptiles," said Michele Menegon, researcher with the Tropical Biodiversity Section at MUSE. "These results make the Eastern Arc the most important site in Africa for these two classes of vertebrate. Some of these species are up to 100 million years old and are evidence of the great age, forest stability and unique evolutionary history of these mountains." Five low- and high-resolution images, a map and legends can be downloaded here (27.5 MB): http://www.muse.it/it/ufficio-stampa/Cartelle-stampa/Documents/foto_biodiversita_eastern_arc_agosto2014.zip Video interview with Francesco Rovero and Michele Menegon (MUSE) for download here (30.7 MB): https://www.dropbox.com/sh/z0ruhdb1fwj6mxv/AAB27xdosNeeiYER3F3c9_YGa Francesco Rovero | Eurek Alert!
Why is gravity known as the weakest force?

According to me, it is known as the weakest force because any other force can overcome it. E.g., when we pick up a bag, we overcome the gravitational force by using our strength. Also, gravity is inversely proportional to the square of the distance between the objects, so it decreases considerably with increasing distance.
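A standard way to make this concrete (a common textbook comparison, not from the original thread) is to compare the electrostatic and gravitational forces between two protons. Both forces fall off as 1/r², so the distance cancels in the ratio, which shows gravity to be weaker by roughly 36 orders of magnitude:

```python
# Rounded CODATA-style constants; this is an illustrative estimate.
K_E = 8.988e9         # Coulomb constant, N·m²/C²
G = 6.674e-11         # gravitational constant, N·m²/kg²
E_CHARGE = 1.602e-19  # elementary charge, C
M_PROTON = 1.673e-27  # proton mass, kg

def electric_to_gravity_ratio():
    """F_electric / F_gravity for two protons; the r² factor cancels."""
    return (K_E * E_CHARGE ** 2) / (G * M_PROTON ** 2)

print(f"{electric_to_gravity_ratio():.2e}")  # ~1.2e+36
```

So even though gravity dominates at astronomical scales (where bulk matter is electrically neutral), between individual particles it is utterly negligible compared with the electric force.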
- Open Access Is the Ryukyu subduction zone in Japan coupled or decoupled? —The necessity of seafloor crustal deformation observation © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences; TERRAPUB. 2009 Received: 19 July 2008 Accepted: 3 June 2009 Published: 10 November 2009 The 2004 Sumatra-Andaman earthquake of Mw 9.3 occurred in a region where a giant earthquake seemed unlikely from the point of view of tectonics. This clearly implies that our current understanding of strain accumulation processes of large earthquakes at subduction zones needs to be reexamined. The Ryukyu subduction zone is one such zone, since no large earthquake has been anticipated there for reasons similar to those pertaining to the Sumatra-Andaman arc. Based on our analysis of historical earthquakes, plate motion, back-arc spreading, and GPS observation along the Ryukyu trench, we highly recommend monitoring seafloor crustal deformation along this trench to clarify whether a large earthquake (Mw>8) could potentially occur there in the future.
Over four billion years ago, the Earth was bombarded with giant asteroids, researchers have confirmed. Scientists from academic and government institutions, including NASA's Solar System Exploration Research Virtual Institute (SSERVI), published the new findings in the July 31, 2014 issue of Nature, in a paper entitled "Widespread Mixing and Burial of Earth's Hadean Crust by Asteroid Impacts". The scientists have built a model describing the role of asteroids in the formation of the Earth’s uppermost crust during the “Hadean” geologic eon. Yvonne Pendleton, SSERVI Director at Ames, told NASA that "this new model helps explain how repeated asteroid impacts may have buried Earth's earliest and oldest rocks." The new research shows how asteroidal collisions massively altered the geology of the Hadean-eon Earth, and that these events likely played a major role in the subsequent evolution of life on Earth as well.
Four large earthquakes were recorded on Wednesday and Thursday this week, including one major earthquake in southern Japan which killed two people, as well as another in India, where the royal couple Kate and William were visiting. Scientists have raised the alarm that the unusual number of quakes occurring at the moment could be a precursor to “the big one” – indicating that a cataclysmic mega-quake might be on its way in the next few days. Scientists say that the high number of significant quakes across south Asia and the Pacific this year likely means that we are about to experience an earthquake the size of the Nepal quake of 2015, in which 8,000 people died. Roger Bilham, a seismologist at the University of Colorado, said: “The current conditions might trigger at least four earthquakes greater than 8.0 in magnitude. “And if they delay, the strain accumulated during the centuries provokes more catastrophic mega earthquakes.” Yesterday’s quake was followed by a 5.9-magnitude earthquake which struck off the coast of the southern Philippines. The earthquake happened at 2.20am (Singapore time) off Mindanao island. Local authorities said there was no tsunami risk and that they had not received reports of casualties or damage. Buildings were destroyed by a powerful 6.4-magnitude quake which shook southern Japan today. Japan’s Meteorological Agency said the epicentre was in the town of Mashiki in Kumamoto prefecture. Officials said the region’s nuclear facilities were not affected. A 6.0-magnitude earthquake also hit today off the coast of the Pacific island of Vanuatu, according to the United States Geological Survey (USGS). It was 53 miles from the town of Port Orly and the fourth one this week in the immediate area, after a 6.4-strength tremor hit a week earlier. Vanuatu is on the “Pacific Ring of Fire,” one of the most seismic parts of the globe and known for its earthquakes and volcanoes. 
Seismologists say the Himalayan region is overdue for a tremor stronger than Nepal’s 7.9-magnitude quake last year. Today’s quakes take the total to nine across Asia in a period of just over three and a half months – nearly three every month. Just four days ago, on April 10, six people died in Pakistan when a 6.6-magnitude quake hit Kabul, with aftershocks felt in India. Two days before, on April 8, there was a magnitude 4.2 earthquake in Nepal. Nepal had suffered a larger 5.5-magnitude one on February 22. A month before, on January 20, there was a 6.1-magnitude earthquake in China, and 16 days earlier 11 people died when a 6.7-magnitude earthquake hit Manipur in India. India’s disaster management experts from the Ministry of Home Affairs (MHA) said in January an 8.2-magnitude quake was due in the already ruptured Himalayan region. The 2011 Sikkim earthquake created more ruptures in the Himalayas, on top of those caused by previous quakes, and scientists have feared the area is continually weakening with each new quake. India’s National Institute of Disaster Management (NIDM) says stress in the mountains of the north-east and the collision of the Himalayan plate and the Indo-Burmese plate put the whole region on red alert. Tectonic plates west of the Nepal earthquake are still locked and scientists fear this is another trigger waiting to go off. A scientific study published in Nature Geoscience said the Nepal quake: “Failed to rupture the locked portions of the Himalayan thrust beneath and west of the Kathmandu basin because of some persistent barrier of mechanical and structural origin.” Stresses locked in this area could be released, potentially causing a massive quake. BK Rastogi, director general of the Ahmedabad-based Institute of Seismological Research, said: “An earthquake of the same magnitude is overdue. That may happen either today or 50 years from now in the region of the Kashmir, Himachal, Punjab and Uttarakhand Himalayas. 
“Seismic gaps have been identified in these regions. “The accumulation of stress is going on everywhere. But where it will reach the elastic limit, we don’t know nor also when. But what we do know is that it is happening everywhere.”
Lateral Waves in a One-Dimensionally Anisotropic Half-Space

Anisotropy in conductivity is found in various stratified media including alternating layers of dense rock with low conductivity and less dense rock with higher conductivity shown schematically in Fig. 9.1.1. In such media, the conductivity transverse to the bedding surfaces is always smaller than that along these surfaces (Parkhomenko 1967). For horizontal surfaces, this means that σ2z is smaller than σ2x = σ2y, e.g., σ2z ~ 0.002 S/m and σ2x = σ2y ~ 0.004 S/m. Anisotropy in conductivity is also found in only slightly stratified hard clay (for which the conductivity ratio of 2 is appropriate) and in rocks with different conductivities along the principal crystallographic axes. It may be added that a slightly inclined stratification must also exhibit different conductivities to the components of current parallel to the bedding surfaces and those perpendicular to them. In this case, σ2x and σ2y may differ somewhat from each other and more substantially from σ2z.

Keywords: Oceanic Crust, Bedding Surface, Lateral Wave, Dense Rock, Final Formula
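Given the conductivities quoted above (σ2x = σ2y ≈ 0.004 S/m along the bedding, σ2z ≈ 0.002 S/m across it), the degree of anisotropy is often summarized by the coefficient λ = √(σ_parallel / σ_transverse). The snippet below is an illustrative sketch using that standard geoelectric definition; it is not code from the chapter itself.

```python
import math

def anisotropy_coefficient(sigma_parallel, sigma_transverse):
    """Coefficient of anisotropy: lambda = sqrt(sigma_parallel / sigma_transverse).

    It is >= 1 whenever conduction along the bedding exceeds conduction across it.
    """
    return math.sqrt(sigma_parallel / sigma_transverse)

# Values from the text: 0.004 S/m along the bedding, 0.002 S/m transverse.
print(anisotropy_coefficient(0.004, 0.002))  # ~1.414
```

For the hard clay mentioned in the text, with a conductivity ratio of 2, the same formula gives λ = √2 as well.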
How to compile and build the SQLite library on Linux (Ubuntu)

The SQLitePass library uses specific SQLite functions to retrieve schema information on an SQL statement: the column metadata functions, which are only compiled in when SQLITE_ENABLE_COLUMN_METADATA is defined. Unfortunately, they are not always available in the precompiled library offered on the SQLite webpage or in the sqlite package of your Linux distribution. In order to get these functions in our sqlite3.so library, we need to compile the SQLite source code with the [SQLITE_ENABLE_COLUMN_METADATA] compiler directive.

This tutorial shows one simple way to achieve the library compilation and installation on Linux (Ubuntu). Feel free to post your comments if you know of a better "HowTo".

If you want to compile your own SQLite library, follow this step-by-step tutorial:

Go to the SQLite webpage at http://www.sqlite.org/download.html and download the latest sqlite_amalgamation.x.x.xx.tar.gz file.
Unzip the file with your favorite archive manager into a new folder (/home/myname/sqlite3 for instance), then go to the unzipped directory.
Open, edit and save the sqlite3.c file to add the line #define SQLITE_ENABLE_COLUMN_METADATA
Follow the INSTALL file instructions: open a console window and enter cd /home/luc/Documents/Developpement/sqlite-184.108.40.206 (in our example) to go to the directory where sqlite3.c is located, then run the build commands. You need to run 'make install' as root or with sudo. Finally, run sudo make clean.

You are now ready to use the SQLitePassDbo components!
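The build commands themselves are missing from the page above. For the 2008-era amalgamation tarball, which shipped with a configure script, the sequence would look roughly like this — the paths and version number are placeholders, and passing the define via CFLAGS is an alternative to editing sqlite3.c by hand:

```sh
# Assumed layout: the amalgamation was unpacked into ~/sqlite3 (adjust to yours).
cd ~/sqlite3/sqlite-x.x.xx

# Either edit sqlite3.c as described above, or pass the directive via CFLAGS:
./configure CFLAGS="-DSQLITE_ENABLE_COLUMN_METADATA"
make
sudo make install   # installs the sqlite3 library system-wide
sudo make clean
```

If the tarball you downloaded has no configure script, compiling sqlite3.c directly with gcc and the same -D flag achieves the same effect.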
NASA, National Aeronautics and Space Administration As the “shoreline” of the Earth’s atmosphere, the mesosphere/lower thermosphere (MLT) region is home to many interesting and important phenomena, the most visible of which are the auroras. Geomagnetic storms, in addition to causing very intense auroral activity, also deposit large amounts of energy into the earth’s ionosphere. Recent analysis of data from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument aboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite suggests that 5.3μm emission from vibrationally excited NO is the main method of energy dissipation from energy deposited by geomagnetic storms. Additionally, NO+ has been shown to be the major contributor to geomagnetic storm induced 4.3μm nighttime emission. In order to better physically understand these two large sources of geomagnetic storm energy dissipation, a sounding rocket mission, ROCKet-borne Storm Energetics of Auroral Dosing in the E-region (ROCK-STEADE) is being proposed. The ROCK-STEADE instrument suite consists of several photometers, an interferometer, an IR spectrometer, and two time-of-flight mass spectrometers (TOFMS). The TOFMS will measure the ion and neutral compositions in the atmosphere as the sounding rocket travels through the MLT. Due to the use of microchannel plate (MCP) detectors in TOFMS, one of the major challenges to making measurements in the MLT is the high ambient pressure. Other challenges and sources of error and background include stray UV photons, scattering of gas molecules from the interior surfaces of the instrument, dissociation of molecules in the bow shock caused by the supersonic rocket flight, and reactive recombination at the surfaces of the instrument. 
Methods of dealing with these challenges include: • Recent advances in MCP technology allowing MCP operation into the mtorr range • Cooling the front surface of the TOFMS using liquid He to eliminate the bow shock (thus making possible the direct sampling of the ambient atmosphere) • Cryogenically cooling the interior of the instrument to eliminate scattering of gas from instrument walls and therefore also reducing the contribution of reactive recombination • Rigorous error analysis to account for the background contribution of stray UV Available at: http://works.bepress.com/addison-everett/2/
A team of European scientists claims it has generated the most "precise" and "direct" measurement of antimatter to date. The Alpha Collaboration, which is based at the European Organization for Nuclear Research (CERN) and involves academic and other institutions from across the world, has spent three decades examining antimatter. In particular, the Alpha Collaboration has been trying to find the essential differences between matter and antimatter. Now, CERN has published an article in the academic journal Nature detailing this ongoing research. It said the research "opens a completely new era of high-precision tests between matter and antimatter". To develop a better insight into antimatter, the team has focused much of its research on comparing hydrogen and antihydrogen atoms. "Its spectrum is characterised by well-known spectral lines at certain wavelengths, corresponding to the emission of photons of a certain frequency or colour when electrons jump between different orbits," explained the research group. "Measurements of the hydrogen spectrum agree with theoretical predictions at the level of a few parts in a quadrillion (10^15) - a stunning achievement that antimatter researchers have long sought to match for antihydrogen." The researchers explained that by comparing the measurements of these atoms, it is possible to test a theory called charge-parity-time (CPT) invariance. This can "shed light on why the universe is made up almost entirely of matter, even though equal amounts of antimatter should have been created in the Big Bang," they claimed. However, it has been far more challenging to produce and trap antihydrogen atoms; to do so, the ALPHA team uses antiprotons from CERN's Antiproton Decelerator (AD). 
In the latest part of this test, the researchers wanted to accelerate their antihydrogen spectroscopy activities by "using not just one but several detuned laser frequencies". They combined these with 1S-2S transition frequency in hydrogen. The researchers said they were then able to "measure the spectral shape, or spread in colours, of the 1S-2S antihydrogen transition and get a more precise measurement of its frequency". Professor Jeffrey Hangst of the Institut for Fysik og Astronomi in Denmark and a spokesperson for the Alpha Collaboration, said: "The precision achieved in the latest study is the ultimate accomplishment for us. We have been trying to achieve this precision for 30 years and have finally done it. "This is real laser spectroscopy with antimatter, and the matter community will take notice. We are realising the whole promise of CERN's anti-proton decelerator (AD) facility; it's a paradigm change."
Phase Separation in Fluids in the Absence of Gravity Effects

Gravity effects during the phase separation of binary fluids in the critical region have been suppressed by two means: annulling the gravity in a space experiment and/or using a strictly density-matched mixture on earth. Such an isodensity system can be produced by partially deuterating one component in a binary mixture of cyclohexane and methanol. It has been demonstrated that this does not affect the phase transition. Periodic-like patterns can thus be observed. They grow from a microscopic scale up to the final equilibrium stage, determined by the competition between wetting forces, finite volume effects and the remaining gravity influence. These structures are measured as a 2-D section of the 3-D pattern of interfaces between the phase domains. However only interfaces orientated perpendicular to the plane of observation can be detected. This combines with the interconnectivity of the domains to make the visible interface periodicity (Lm) the same as that of domains. After a numerical analysis, the structure factor Ŝ of these interfaces can be obtained, and its scaling properties can be checked. Light-scattering experiments are also reported, so that the scaling properties of the corresponding 3-D structure factor S can be compared to those of Ŝ; especially the equivalence between Lm, as measured from the direct observation or from light-scattering, can be tested, and the difference and similarities between S and Ŝ can be analyzed. Finally emphasis is placed on the possibility of studying quantitatively the phase separation of fluids through this direct observation, and so, even at a microscopic level.

Keywords: Phase Separation, Light Scattering, Ternary Mixture, Gravity Effect, Space Experiment
“Phase Transition and Critical Phenomena” ed. by C. Domb, J.M. Lebowitz (Academic, 1983) Vol.8.Google Scholar - 5.J.S. Langer, M. Bar-On and H.D. Miller, Phys. Rev. All, 1417 (1975)Google Scholar - 8.// K. Binder, C. Billotet and P. Mirold, Z. Phys. B30, 183 (1978).Google Scholar - 12.D. Beysens, P. Guenoun and F. Perrot, Submitted to Phys. Rev.A (1987).Google Scholar - 15.C. Houessou, P. Guenoun, R. Gastaud, F. Perrot and D. Beysens, Phys. Rev. A32, 1818 (1985).Google Scholar - 18a.Y.C. Chou and W.I. Goldburg, Phys. Rev. A20, 2105 (1978);Google Scholar - 20.R. Hosemann, and J.N. Bagchi, Direct Analysis of Diffraction by Matter (North Holland, Amsterdam 1962).Google Scholar - 21.P. Guenoun, R. Gastaud, F. Perrot and D. Beysens, Phys. Rev. A (1987, to appear).Google Scholar - 23.P. Guenoun, Thesis (1987, Paris, unpublished).Google Scholar
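The abstract mentions obtaining a structure factor Ŝ of the interface pattern by numerical analysis, with the periodicity Lm given by the position of its peak. The sketch below illustrates that idea in a minimal, stdlib-only form (reduced to one dimension for brevity; the pattern and all numbers are invented for illustration and are not the paper's data or code):

```python
import cmath

def structure_factor(profile):
    """Discrete structure factor S(q) of a 1-D concentration profile.

    Subtract the mean, Fourier transform, and take |c_q|^2 -- the 1-D
    analogue of the analysis of 2-D sections described in the abstract.
    Returns (q, S) pairs for q = k/N in cycles per sample, k = 1..N//2.
    """
    n = len(profile)
    mean = sum(profile) / n
    field = [x - mean for x in profile]
    out = []
    for k in range(1, n // 2 + 1):
        c = sum(field[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n))
        out.append((k / n, abs(c) ** 2 / n))
    return out

# A domain pattern of period 8 samples: S(q) peaks at q_m = 1/8,
# so the measured periodicity is L_m = 1/q_m = 8 samples.
pattern = [1.0 if (j // 4) % 2 == 0 else 0.0 for j in range(64)]
q_m = max(structure_factor(pattern), key=lambda t: t[1])[0]
print(q_m)  # 0.125
```

The same recipe extends to 2-D images with a 2-D FFT followed by averaging over rings of constant |q|.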
<urn:uuid:6c9c59bb-6a00-47c2-b830-3dfee99b4af8>
3.078125
741
Truncated
Science & Tech.
63.766533
95,641,002
The different types of faults - including reverse, normal, thrust and strike-slip - are defined by how the two blocks of crust move relative to each other.
CK-12 (Middle School)
MS-ESS2-2 - Geoscience Processes & Earth’s Surface, MS-ESS2-3 - Plate Motions
DEPTH OF KNOWLEDGE (DOK) LEVELS: 3
<urn:uuid:c9aff874-5aa3-49c1-9928-7e506d360c94>
3.609375
95
Product Page
Science & Tech.
25.125
95,641,005
GNSS and Hydrography - 01/01/2008 A precise navigation system that has been fully operational only since 1995, yet has had such an impact on hydrography… it looks as if the future of these systems will make interesting reading. GNSS stands for Global Navigation Satellite System and has been dominated by GPS. All hydrographic surveyors will know how this has improved their productivity in the last decade. Today GNSS includes GPS and, to a lesser extent, Glonass. In the future it may well include modernised GPS signals, further Glonass, the European-funded Galileo system, and other satellite-based augmentation services. Not only does this scenario mean ‘more of the same’ but it would also provide the GNSS system with greater accuracy and robustness. A scan of GNSS systems is dominated by GPS, which has been giving solid service since going operational last decade. The intentional degradation of the GPS position, called Selective Availability, was removed in the year 2000, improving standalone accuracy to about 10m. Civil users are currently limited to one GPS signal (C/A on L1); they are also given only codeless or semi-codeless access to the P(Y) code on the L2 frequency. The GPS Satellite Vehicles (SVs) have progressed through various models and all Block II/IIA SVs have been launched. The Block IIR-M are in production, with six operational; Block IIF SVs are under development. In terms of Glonass, there are today a limited number of operational SVs: seven, in fact. It has been difficult to predict the launch pattern of Glonass, the last launch having been in December 2002. Galileo is the European-funded satellite system presently in the planning stages. It is designed to be interoperable with GPS and therefore global, but with built-in regional and local enhancement capabilities. Since February 2003 there has been some debate between the various member countries concerning management and funding issues. 
DGPS Navigation Beacons have been established by more than forty countries as an aid to marine navigation. The most reliable of these are those that comply with the IALA (International Association of Marine Aids to Navigation and Lighthouse Authorities) standards. Navigation beacons are, during 2003, still being installed world-wide and are used by many hydrographic surveying users, as metre level accuracy is being achieved. In the USA and Europe ‘free-to-air’ Satellite-Based Augmentation Services (SBAS) such as the US-based WAAS and the European equivalent, EGNOS, are today being tested by civilian GPS receivers. SBAS transmit DGPS corrections only at this stage. They deliver code phase corrections and a level of integrity monitoring, the focus being on use for aviation. The footprint of these geostationary satellites typically covers continental land-masses, but users on inland waterways and in coastal regions can also receive these signals. Because the US Department of Transportation has clearly stated plans for GPS modernisation, this stands as one of the most definitive statements of all GNSS strategies. As of December 2002 the Department has stated a wish for:
- Stable, consistent GPS policy and service
- Expanding use of GPS in transportation safety
- Second Civil signal (L2C), beginning with launches in 2004 of Block IIR-M SVs
- Third Civil signal (L5), beginning with launches in 2005 of Block IIF SVs
What does this mean for hydrographic users? The second civil frequency means a stronger signal, which will assist in cases of canopy problems (e.g. riverbank trees) and use that is less susceptible to unintentional interference. The third civil frequency should mean a considerably faster and more robust carrier phase integer ambiguity search, leading to improvements in real-time kinematic positioning (RTK - centimetre accuracy) such as longer range, rapid initialisation and possibly improved vertical accuracy.
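One concrete reason a second civil frequency improves accuracy is that the first-order ionospheric delay scales as 1/f², so pseudoranges measured on two frequencies can be combined to cancel it. A minimal sketch of this standard dual-frequency ("ionosphere-free") combination — the carrier frequencies are the published GPS values, but the range and delay numbers below are invented purely for illustration:

```python
F_L1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F_L2 = 1227.60e6  # GPS L2 carrier frequency, Hz (carrying L2C after modernisation)

def iono_free(p1, p2, f1=F_L1, f2=F_L2):
    """Ionosphere-free pseudorange from measurements on two frequencies.

    Equivalent to (f1^2*p1 - f2^2*p2) / (f1^2 - f2^2): any error term
    proportional to 1/f^2 cancels exactly.
    """
    g = (f1 / f2) ** 2
    return (g * p1 - p2) / (g - 1)

# Synthetic example: true range plus a 1/f^2 ionospheric delay (assumed 5 m on L1).
true_range = 20_200_000.0                   # metres
iono_l1 = 5.0                               # metres of delay on L1 (invented)
iono_l2 = iono_l1 * (F_L1 / F_L2) ** 2      # same electron content seen on L2
p1 = true_range + iono_l1
p2 = true_range + iono_l2
corrected = iono_free(p1, p2)               # recovers the true range
```

The price of the combination is amplified measurement noise, which is one reason a third frequency (L5) and carrier-phase techniques matter for centimetre-level RTK work.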
GPS modernisation has already been mentioned in this article. The L2C signal will begin with SV launches in 2004 and projected full capability in 2012. The third civilian frequency, L5, will begin with launches in 2005 with projected full capability in 2015. Glonass is harder to forecast. For example, the impact of the recent Columbia shuttle incident may mean that the Russian space agency will focus on support for the International Space Station rather than on Glonass launches. Future budgetary constraints on the part of the Russian Government are also hard to predict. However the published Glonass programme lists system upgrades with new Glonass-M SVs - their first launch in 2003. These SVs will introduce a second frequency for the civil community. The programme also introduces Glonass-K SVs at a later stage, these having reduced mass and the ability to launch six to eight at a time from the Proton-M rocket. With the Galileo project currently funded for further planning, its timetable has it operating in 2008, although commentators have recently said this projected date is slipping out towards 2012. The total design, in-orbit validation and full deployment of Galileo is costing 3.4 billion euros. Satellite Based Augmentation Services (SBAS) are about to go operational. The USA WAAS is due to go into full operation capability in 2003. Civilian users are presently utilising the European EGNOS in test mode. This may well go into full operational status in April 2004. In the future, EGNOS may accommodate corrections from other systems, such as Galileo. The Japanese SBAS equivalent, MSAS, is planning to launch its first SV in 2003 after a launch failure in 1999 delayed such plans. There are also plans by Canada, China and India to operate SBAS services over their countries.
The Impact on Hydrography
The impact of all these plans for GNSS is good news for hydrography. GPS receiver manufacturers are studying the interoperability of these systems in new receiver designs.
The future of the GNSS will result in a more robust solution for users. There will be more SVs in orbit, originating from more countries. The new signals will reduce vulnerability from unintentional interference as frequencies are added and spread over the spectrum. Since the signals will be stronger, GNSS will operate in more marginal areas where vegetation obstructs the antenna during waterway surveys. Standalone positional accuracy should improve from the 10m to the 2m level, assuming the use of a suitable receiver capable of utilising the new signals. Users will have greater access to reliable centimetre accuracy positioning, due initially to the second civilian frequency (L2C) and then to the third civilian frequency (L5). By processing this extra data, the GNSS receiver of the future should be able to operate from a base station in RTK mode over much extended ranges and initialise virtually immediately, as does DGPS presently. Other benefits, such as greater vertical accuracy, should result and this can be used for tide, draft and heave measurements. The impact of national SBAS systems on hydrography may be significant in the future. Why is SBAS not in major use today in hydrography? Firstly, marine authorities were quick to see value in the installation of land-based navigation beacons (MSK) to counter the effects of Selective Availability, and many coastal waterways are well covered by this service. Secondly, SBAS coverage is optimised for continental landmasses and offshore coverage is therefore not ideal. However, future systems may better cover coastal and offshore regions and therefore be used to a greater extent in hydrographic surveying. GPS receiver manufacturers are aware of these future satellite-based navigation systems. The improvements are global in the true sense of the word. 
Hydrographers will be users of GNSS well into the future and will be keen to use receivers which operate in a seamless and global manner, with yet higher quality assurances.
References / Further Reading
- EGNOS: www.esa.int/export/esaSA/GGG63950NDC_navigation_0.html
- Galileo: http://europa.eu.int/comm/dgs/energy_transport/galileo
- GPS status et al.: www.navcen.uscg.
Last updated: 19/07/2018
<urn:uuid:1d65499d-6fd0-4929-b30d-b83f95a79e0d>
3.171875
1,725
Knowledge Article
Science & Tech.
37.705392
95,641,018
Jülich researchers develop ultrahigh-resolution 3-D microscopy technique for electric fields Using a single molecule as a sensor, scientists in Jülich have successfully imaged electric potential fields with unrivalled precision. The ultrahigh-resolution images provide information on the distribution of charges in the electron shells of single molecules and even atoms. The 3D technique is also contact-free. The first results achieved using "scanning quantum dot microscopy" have been published in the current issue of Physical Review Letters. The related publication was chosen as the Editor's suggestion and selected as a Viewpoint in the science portal Physics. The technique is relevant for diverse scientific fields including investigations into biomolecules and semiconductor materials. Left: The scanning quantum dot micrograph of a PTCDA molecule reveals the negative partial charges at the ends of the molecule as well as the positive partial charges in the center. Center: Simulated electric potential above a PTCDA molecule with molecular structure. Right: Schematic of charge distribution in the PTCDA molecule. Copyright: Forschungszentrum Juelich "Our method is the first to image electric fields near the surface of a sample quantitatively with atomic precision on the sub-nanometre scale," says Dr. Ruslan Temirov from Forschungszentrum Jülich. Such electric fields surround all nanostructures like an aura. Their properties provide information, for instance, on the distribution of charges in atoms or molecules. For their measurements, the Jülich researchers used an atomic force microscope. This functions a bit like a record player: a tip moves across the sample and pieces together a complete image of the surface. To image electric fields up until now, scientists have used the entire front part of the scanning tip as a Kelvin probe. 
But the large size difference between the tip and the sample causes resolution difficulties - if we were to imagine that a single atom was the same size as a head of a pin, then the tip of the microscope would be as large as the Empire State Building. Single molecule as a sensor In order to improve resolution and sensitivity, the scientists in Jülich attached a single molecule as a quantum dot to the tip of the microscope. Quantum dots are tiny structures, measuring no more than a few nanometres across, which due to quantum confinement can only assume certain, discrete states comparable to the energy level of a single atom. The molecule at the tip of the microscope functions like a beam balance, which tilts to one side or the other. A shift in one direction or the other corresponds to the presence or absence of an additional electron, which either jumps from the tip to the molecule or does not. The "molecular" balance does not compare weights but rather two electric fields that act on the mobile electron of the molecular sensor: the first is the field of a nanostructure being measured, and the second is a field surrounding the tip of the microscope, which carries a voltage. "The voltage at the tip is varied until equilibrium is achieved. If we know what voltage has been applied, we can determine the field of the sample at the position of the molecule," explains Dr. Christian Wagner, a member of Temirov's Young Investigators group at Jülich's Peter Grünberg Institute (PGI-3). "Because the whole molecular balance is so small, comprising only 38 atoms, we can create a very sharp image of the electric field of the sample. It's a bit like a camera with very small pixels." A patent is pending for the method, which is particularly suitable for measuring rough surfaces, for example those of semiconductor structures for electronic devices or folded biomolecules. 
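The "molecular balance" described above can be caricatured in a few lines of code. This is a toy model of the compensation principle only, not the Jülich group's actual analysis: the energy offset `e0`, lever arm `alpha`, and sample potential values are all invented, and the real experiment measures charge-state flips rather than evaluating a formula. The point is the logic: bisect for the tip voltage at which the dot's charge state flips, and the known lever arm then converts that voltage into the local sample potential.

```python
def charging_level(v_tip, phi_sample, e0=0.30, alpha=0.05):
    """Energy (eV) of the dot's charging level in a toy model.

    The tip voltage couples through a lever arm `alpha`; the local sample
    potential `phi_sample` shifts the level directly. All numbers invented.
    """
    return e0 - alpha * v_tip - phi_sample

def compensation_voltage(phi_sample, lo=-100.0, hi=100.0, tol=1e-9):
    """Bisect for the tip voltage where the charging level crosses zero,
    i.e. where the extra electron jumps onto the dot ("the balance tips")."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if charging_level(mid, phi_sample) > 0:
            lo = mid   # level still above zero: need a larger tip voltage
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Recover an assumed sample potential of 0.02 eV by comparing against a
# reference measurement taken where phi = 0 (same lever arm alpha = 0.05):
v0 = compensation_voltage(0.0)
v = compensation_voltage(0.02)
phi_recovered = 0.05 * (v0 - v)   # alpha * (shift in compensation voltage)
```

Scanning this compensation measurement point by point over the surface yields the potential map, which is why the technique images electric fields rather than topography.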
"In contrast to many other forms of scanning probe microscopy, scanning quantum dot microscopy can even work at a distance of several nanometres. In the nanoworld, this is quite a considerable distance," says Christian Wagner. Until now, the technique developed in Jülich has only been applied in high vacuum and at low temperatures: essential prerequisites to carefully attach the single molecule to the tip of the microscope. "In principle, variations that would work at room temperature are conceivable," believes the physicist. Other forms of quantum dots could be used as a sensor in place of the molecule, such as those that can be realized with semiconductor materials: one example would be quantum dots made of nanocrystals like those already being used in fundamental research. Tobias Schloesser | EurekAlert! What happens when we heat the atomic lattice of a magnet all of a sudden? 18.07.2018 | Forschungsverbund Berlin Subaru Telescope helps pinpoint origin of ultra-high energy neutrino 16.07.2018 | National Institutes of Natural Sciences For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. 
<urn:uuid:32c4d992-d641-4ee5-a1ff-143db49eb308>
2.84375
1,563
Content Listing
Science & Tech.
35.652293
95,641,019
Fermium: the essentials
Fermium atoms have 100 electrons and the shell structure is 2.8.18.32.30.8.2. The ground state electronic configuration of neutral fermium is [Rn].5f12.7s2 and the term symbol of fermium is 3H6. Fermium's chemical properties are largely unknown. Fermium is a radioactive rare earth metal. The longest-lived isotope is 257Fm, with a half-life of 80 days. It is of no commercial importance.
Fermium: physical properties
- Density of solid: (no data) kg m-3
- Molar volume: (no data) cm3
- Thermal conductivity: 10 (estimate) W m-1 K-1
Fermium: heat properties
- Melting point: about 1800 K [1527 °C (2781 °F)]
- Boiling point: about 1800 K [1527 °C (2781 °F)]
- Enthalpy of fusion: 20.5 kJ mol-1
Fermium: atom sizes
- Atomic radius (empirical): (no data) pm
- Molecular single-bond covalent radius: 167 pm (coordination number 3)
- van der Waals radius: (no data) pm
- Pauling electronegativity: 1.3 (Pauling units)
- Allred-Rochow electronegativity: 1.2 (Pauling units)
- Mulliken-Jaffe electronegativity: (no data)
Fermium: orbital properties
- First ionisation energy: 627 (inferred) kJ mol-1
- Second ionisation energy: 1200 kJ mol-1
- Third ionisation energy: 2240 kJ mol-1
Fermium: abundances
- Universe: (no data) ppb by weight
- Crustal rocks: (no data) ppb by weight
- Human: (no data) ppb by weight
Fermium: crystal structure
Fermium: biological data
- Human abundance by weight: (no data) ppb by weight
Fermium has no biological role.
Reactions of fermium as the element with air, water, halogens, acids, and bases, where known.
Fermium: binary compounds
Binary compounds with halogens (known as halides), oxygen (known as oxides), hydrogen (known as hydrides), and other compounds of fermium, where known.
Fermium: compound properties
Bond strengths; lattice energies of fermium halides, hydrides, oxides (where known); and reduction potentials, where known.
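The shell structure and electronic configuration quoted above can be cross-checked against each other: the shell occupancies must sum to fermium's atomic number, Z = 100, and [Rn].5f12.7s2 must add 14 electrons to radon's 86. A quick sanity check, purely illustrative:

```python
# Shell occupancies K through Q for fermium, as listed above.
shells = [2, 8, 18, 32, 30, 8, 2]
assert sum(shells) == 100  # must equal Z for a neutral atom

# The configuration [Rn].5f12.7s2 adds 12 + 2 electrons to radon's 86.
z_radon = 86
config_beyond_rn = {"5f": 12, "7s": 2}
assert z_radon + sum(config_beyond_rn.values()) == 100
```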
Fermium: history
Fermium was discovered in 1952 by workers at Argonne, Los Alamos, and the University of California at Berkeley, USA. Origin of name: named after "Enrico Fermi". Isolation: coming soon!
<urn:uuid:23b9ffad-4617-4f18-b39b-836f89fb4751>
3.1875
632
Knowledge Article
Science & Tech.
54.495639
95,641,037
Orbiting the sun from far beyond planet Neptune, the recently discovered planetoid Haumea is a fast-spinning elongated object, completing one rotation every four hours. Now, following fresh observations, astronomers have discovered this dwarf planet also has a ring system in orbit around it. It's easy to picture a black hole as a kind of all-powerful cosmic drain, a sinkhole of super-strong gravity that snags and swallows passing nebulae or stars. While it is true we can’t observe matter once it crosses a black hole’s event horizon, scientists are zeroing in on what happens in the margins, where molecular clouds release vast amounts of energy as they circle the plughole. Ancient black holes hidden away in deep space have left behind nuclear clues about the first-ever stars, according to Professor Raffaella Schneider of the Department of Physics of Sapienza University of Rome, Italy, who leads a team of stellar archaeologists. To colonise the solar system we need to figure out how to build settlements on alien surfaces, and, according to Professor Matthias Sperl, a materials scientist at the German Aerospace Center (DLR), our best bet rests on 3D-printed bricks made from moon dust. Using off-the-shelf technology and innovative economics, lightweight helium balloons have started carrying remote-controlled laboratories to the edge of space and back, offering the business case for new types of science missions. European researchers have used telescopes around the world to spot a cluster of seven planets orbiting a Jupiter-sized ultra-cool star 40 light-years from Earth, increasing the chances of discovering evidence of life on distant worlds. Complex and painful disease has been historically overlooked, researchers say. Robin Garrity says that registration, identification and geofencing will increase security. Chemical switches on DNA could explain how the environment may influence the traits we pass on, according to Prof. Thomas Carell.
<urn:uuid:de6d8848-4322-407c-ba2d-1683e8b443c8>
3.421875
403
Content Listing
Science & Tech.
28.857999
95,641,043
- Open Access
Biodiversity of Talitridae family (Crustacea, Amphipoda) in some Tunisian coastal lagoons
© Jelassi et al.; licensee Springer. 2015
Received: 7 May 2014 Accepted: 25 December 2014 Published: 16 January 2015
Although wetlands are remarkable habitats, with diverse fauna and flora, few studies have been devoted to amphipod distribution in this type of environment. To study the amphipod community both qualitatively and quantitatively, surveys were conducted during the spring season in ten coastal lagoons ranging from the subhumid to the arid bioclimatic stage. At each station, eight quadrats of 50 × 50 cm were randomly placed. Amphipods were preserved in 70% ethanol. In the laboratory, the specimens collected were identified and counted. In parallel, the organic matter, particle size, and heavy metals of the soil taken from each station were analyzed. A total of 1,340 specimens of amphipods were collected, and eight species belonging to the family Talitridae were identified. Species richness ranged from one species, collected in the supralittoral zone of El Bcherliya (Ghar El Melh lagoon), to eight species in the supralittoral zone of Bizerte lagoon. In this last station, the relative abundance of amphipods was significantly higher (36.04%, N = 483). In addition, the Simpson, Shannon-Weaver, and equitability diversity indices show that species diversity was highest at this same station, while the community was most balanced at opposite El Boughaz (Ghar El Melh lagoon) (J′ = 0.996). The spatial distribution of the different amphipod species depends on edaphic (heavy metals, granulometry, organic matter) and climatic (temperature, humidity) factors. Coastal areas, the natural interface between water and land, are valuable ecosystems since they host high biodiversity levels (Defeo and McLachlan 2005; McLachlan and Brown 2006).
Many human activities, such as fishing, land reclamation, engineering, shipping, and recreational activities, affect coastal ecosystems (McLachlan and Brown 2006; Schlacher et al. 2007). The impact of pollution in coastal areas is generally strong, and the contaminants of main concern include persistent organic pollutants, oil, radionuclides, fertilizers, trace metals, and pathogens (Islam and Tanaka 2004). Trace metals are toxic and nondegradable elements that have adverse effects on living organisms (e.g., immunodeficiency, negative effects on metabolic processes and cell membrane permeability) (Ikem and Egiebor 2005). Many of these elements occur naturally, but their input has been enhanced through the millennia by various human activities (mining, metallurgical and tanning industries, chemical plants, paper mills) (Islam and Tanaka 2004). In the Mediterranean, there is a high diversity of wetlands (lagoons, lakes, sebkhas, oueds, hill reservoirs and dams) that are of great importance in conservation biology, and they are considered among the most biologically diverse and productive ecosystems (Medail and Quezel 1999). They offer a wide variety of natural habitats for plants and for aquatic, semiterrestrial, and terrestrial animals. The interactions of the biological (plants, animals, microorganisms, etc.) and physicochemical components (granulometry, temperature, humidity, etc.) of wetlands enable them to perform many ecological functions such as shoreline stabilization and water purification. In Tunisia, semi-closed shallow lagoons are among the areas most sensitive to environmental stresses (Benrejeb-Jenhani and Romdhane 2002). In general, lagoon sediments are considered reservoirs of many chemical pollutants, especially heavy metals, which represent the most prominent marine pollution agents (Phillips and Rainbow 1994) affecting local communities and human health (Förstner and Wittmann 1981; Boucheseiche et al. 2002).
Hence, quality management of marine coastal environments becomes a priority for many countries. Peracarid crustaceans have received special attention, because many of them are important components of soft-sediment faunas (Dauvin et al. 1994) and are considered good indicators of water and sediment quality (Corbera and Cardell 1995; Alfonso et al. 1998). Further, detritivorous peracarids play an important role in the degradation of organic matter, in both aquatic and terrestrial habitats. Talitrid amphipods are key species in the energy flow of sandy shore ecosystems (Griffiths et al. 1983). Feeding on terrestrial and marine material, these species integrate the different sources of contamination and constitute an important source of food for many species of invertebrates, fishes, and birds (Griffiths et al. 1983; Wildish 1988; Bergerard 1989; Koch 1989). This group constitutes one of the dominant macrofaunal groups on sandy beaches (Dahl 1946; McLachlan and Jaramillo 1995). Their ecological relevance has justified worldwide studies, for instance with respect to their behavioral plasticity (Scapini and Fasinella 1990; Scapini et al. 1993; Scapini et al. 1995), locomotor activity rhythms (Nasri-Ammar and Morgan 2005, 2006; Ayari and Nasri-Ammar 2012a, b; Jelassi and Nasri-Ammar 2013; Jelassi et al. 2013a, b), the factors influencing their spatial distribution and oriented movements on sandy beaches (Scapini and Quochi 1992; Borgioli et al. 1999; Scapini et al. 1999; Ayari and Nasri-Ammar 2011; Jelassi et al. 2012; Jelassi et al. 2013c), their behavioral strategies (Fallaci et al. 1999), and the genetic structure of different populations (De Matthaeis et al. 1995; Bulnheim and Schwenzer 1999). With regard to biodiversity, along the European coasts talitrid populations have been compared genetically to assess inter- and intraspecific variations (De Matthaeis et al. 1995).
In terms of applied research, a number of articles have been published on trace metal (Cu, Zn, Fe, Cd, Pb, Mn, and Ni) concentrations and bioaccumulation in talitrids and on their role in biomonitoring (Fialkowski et al. 2000; Rainbow et al. 1989; Weeks 1992). Studying amphipod diversity in three different lagoon complexes in northern Tunisia, Jelassi et al. (2013c) showed that the abundance of the different amphipod species at the three complexes could be best explained by the soil contents of several heavy metals. In Tunisia, amphipod communities inhabiting the supralittoral zone of wetlands other than sandy beaches (Jelassi and Nasri-Ammar 2013; Jelassi et al. 2012; Jelassi et al. 2013a) have not received much attention. Thus, the aims of this study were to produce a list of the amphipod species collected in the supralittoral zones of different lagoons, to estimate species richness, abundance, and density, and to identify the environmental factors that may control their spatial distribution.
Sampling method and soil analysis
Quantitative samples of amphipods were collected manually in spring (April 2010) on ten consecutive days in the early morning hours. In the supralittoral zone of each site, eight quadrats of 50 × 50 cm were randomly placed and 20 min were devoted to collecting the amphipods inside each quadrat. The content of each quadrat (7 cm depth) was placed in an individual box. The humidity and temperature of the soil were measured in situ at each site. In the laboratory, specimens were preserved in 70% ethanol. Afterwards, they were identified and counted. The identification of these species was carried out under a Leica MS 5 binocular microscope (Leica Microsystems, Wetzlar, Germany), using the key of Ruffo (1993) and Ruffo et al. (2014) (for the new name of Orchestia cavimana). At each station, a soil sample was taken from a depth of 0 to 10 cm.
Grain size distribution of these composite samples was analyzed using different sieves in descending order (from 2 mm to 25 μm). A subsample was brought to the ICP-MS laboratory at the University of Kiel and sieved to obtain the <250-μm grain size fraction, which was then dried and milled (Bat and Raffaelli 1999). Heavy metals were extracted from a 250-mg sample of powder with 10 mL 7 N nitric acid on a hot plate at 80°C (2.5 h). The solution was made up to 20 mL and centrifuged at 3,500 rpm for 15 min, and the supernatant was transferred to a 20-mL sample vial. The metals vanadium (V), chromium (Cr), manganese (Mn), cobalt (Co), nickel (Ni), copper (Cu), zinc (Zn), arsenic (As), cadmium (Cd), tin (Sn), thallium (Tl), lead (Pb), lithium (Li), rubidium (Rb), and strontium (Sr) were analyzed by inductively coupled plasma-mass spectrometry (ICP-MS). Average analytical reproducibility was estimated from replicate analyses of some samples and was found to be better than 2% RSD (1 sigma relative standard deviation) for all elements. The accuracy of the analytical results was monitored by analyzing certified reference materials (CRM): GSMS-2 (marine sediment; Chinese Academy of Geological Sciences, PR China) and PACS-1 (coastal sediment; NRCC Canada) as unknowns along with the samples. Organic matter content was determined by weighing before and after ashing at 450°C for 3 h at the University of Salzburg. Soil sodium content was determined by the atomic absorption technique, based on the methodology of the IC2MP (Institute of Chemistry of Poitiers: materials and natural resources). To obtain a usable sample, we put 5 g of soil in 50 ml of distilled water. After 1 h, the solution was filtered through a 50-μm mesh. To obtain reliable measurements, it was necessary to carry out successive dilutions.
In order to characterize the amphipod communities, the following ecological parameters were calculated: species richness (S), expressed as the number of species at each station; occurrence frequency, F = (n/N) × 100, where n is the number of samples in which the species appears and N is the total number of samples; and relative species abundance, Ar = (ni/N) × 100, where ni is the number of individuals of each species and N is the total number of amphipods at each station. The mean density of the amphipod community at each station and the mean density of each species at each station were expressed as the number of individuals per m². Species diversity and evenness were calculated with the Simpson index, Is = 1/Σ pi², with pi = ni/N (N = total number of amphipods; ni = number of individuals of each species) (Simpson 1949), the Shannon-Weaver index, H′ = −Σ pi log2 pi (Frontier 1983), and Pielou's evenness index, J′ = H′/log2 S (Pielou 1966). The degree of similarity among sampling stations was evaluated using similarity cluster dendrograms. For this analysis, the data matrix consisting of the total abundances of species at each site was converted into a symmetric matrix using the Bray-Curtis similarity index. The similarity matrix was agglomeratively clustered using complete linkage based on presence/absence of species. The analysis above was performed with the PRIMER software package (Clarke and Warwick 1994). Differences in abundance, species richness, and densities of amphipod species among lagoons were tested using an ANOVA test. Principal component analysis (PCA) of the amphipod distribution was performed using the free version of XLSTAT.
Heavy metals, sodium, organic matter, and granulometry of different stations
The highest concentration of the majority of heavy metals, namely vanadium, nickel, zinc, arsenic, cadmium, thallium, and lead, was observed in the supralittoral zone of Tunis North lagoon.
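The indices defined above are straightforward to compute from per-species abundance counts. A minimal sketch (the counts are hypothetical; this is not the authors' PRIMER/XLSTAT workflow):

```python
import math

def diversity_indices(counts):
    """Simpson (Is), Shannon-Weaver (H', log base 2) and Pielou (J')
    indices from a list of per-species abundances, as defined in the text."""
    n = sum(counts)
    p = [c / n for c in counts if c > 0]
    simpson = 1.0 / sum(pi ** 2 for pi in p)             # Is = 1 / sum(pi^2)
    shannon = -sum(pi * math.log2(pi) for pi in p)       # H' = -sum(pi log2 pi)
    pielou = shannon / math.log2(len(p)) if len(p) > 1 else 0.0  # J' = H'/log2 S
    return simpson, shannon, pielou

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two stations' abundance vectors."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den if den else 0.0

def density_per_m2(total_count, n_quadrats=8, quadrat_side_m=0.5):
    """Density from the eight 50 x 50 cm quadrats (2 m^2 sampled per station)."""
    return total_count / (n_quadrats * quadrat_side_m ** 2)

# Hypothetical station with four equally abundant species: for a perfectly
# even community of S species, Is = S, H' = log2 S, and J' = 1.
simpson, shannon, pielou = diversity_indices([10, 10, 10, 10])
```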
For chromium and manganese, the maximum contents were recorded in the supralittoral zone of Bizerte lagoon, with 26.393 and 281.748 ppm, respectively. The maximum copper content (39.098 ppm) was observed in the supralittoral zone of El Bcherliya. The supralittoral zone of Korba lagoon revealed the highest concentrations of cobalt (8.311 ppm) and rubidium (15.814 ppm), whereas the supralittoral zone of El Bibane lagoon was characterized by the highest contents of lithium (29.087 ppm), strontium (2,101.549 ppm), and tin (7.340 ppm). Furthermore, the lowest concentrations of all heavy metals studied were observed in the supralittoral zone of Sidi Ali Mekki. None of the heavy metals analyzed exceeded the maximum tolerated value except lead, which exceeded 100 ppm in the supralittoral zone of Tunis North lagoon (Henin 1983). In these different stations, soil sodium content ranged between 1.26 mg/g and 7.2 mg/g of soil in the supralittoral zone of Bizerte lagoon and the old harbor of Ghar El Melh, respectively. The organic matter percentage varied between 0.6% in the supralittoral zone of Sidi Ali Mekki and 9.46% in the supralittoral zone of Bizerte lagoon. Concerning the granulometry, the supralittoral zones of Bizerte lagoon, El Bcherliya, old harbor, Tunis North, and Korba were characterized by loamy sand substrates. The supralittoral zones of Tunis South and El Bibane lagoons were characterized by sandy substrates. Sandy loam, sandy clay loam, and sandy silt loam substrates characterized the supralittoral zones of opposite El Boughaz, Tazarka, and Sidi Ali Mekki lagoons, respectively.
Species richness and occurrence frequency
(Table: species composition, mean density, and diversity indices of the amphipod community in different lagoons; total number of amphipods (N).)
The study of the occurrence frequency of different amphipod species on lagoon shores showed that O. gammarellus qualified as a common species (F = 80%), and O.
mediterranea (F = 70%) and O. stephenseni (F = 60%) were classified as constant species. P. platensis (F = 50%), O. montagui (F = 40%), and D. deshayesii (F = 30%) were accessory species. Finally, T. saltator (F = 20%) and C. cavimana (F = 10%) qualified as rare species.
Relative abundance and density
A total of 1,340 amphipod individuals was collected in the different lagoons. Among these wetlands, the supralittoral zone of Bizerte lagoon revealed the highest abundance (36.04%) (Table 1), and the difference between lagoons was statistically significant (ANOVA: F = 4.371; DF = 7; p = 0.001). The study of the relative abundance of the different species showed that O. mediterranea was the most abundant species in the supralittoral zone of Bizerte lagoon (25.7%), whereas in El Bcherliya, the old harbor of Ghar El Melh, and the North and South lagoons of Tunis, O. gammarellus dominated the community. In the supralittoral zone of El Bibane lagoon, O. montagui was the most important species (28.3%). ANOVA revealed that the differences in species composition observed between lagoons were highly significant (F = 4.371; DF = 7; p = 0.001). Furthermore, O. mediterranea presented the highest density in the supralittoral zone of Bizerte lagoon (62 ind. m−2), whereas O. gammarellus showed the highest density in the supralittoral zones of El Bcherliya (0.5 ind. m−2), the old harbor of Ghar El Melh (19 ind. m−2), and the North (34 ind. m−2) and South (34.5 ind. m−2) lagoons of Tunis.
Diversity and community similarity
According to the Simpson index, the highest diversity was observed in the supralittoral zone of Bizerte lagoon (Is = 6.059) (Table 1). The Shannon-Weaver index, which gives more weight to rare species than the previous index, confirmed this result; it ranged between 1.287 in the supralittoral zone of Sidi Ali Mekki lagoon and 2.771 in Bizerte lagoon.
Concerning evenness, it was highest at opposite El Boughaz, where the species were equitably distributed.
Amphipod distribution according to environmental factors
The first three factorial axes (F1, F2, and F3) extracted 54.39%, 19.57%, and 9.32% of the variance, respectively, for a cumulative percentage of 83.28% (Figure 4). On the first axis (F1), which extracts almost half of the total inertia, Bizerte lagoon and the North and South lagoons of Tunis were correlated positively with the majority of heavy metals, namely vanadium, chromium, manganese, cobalt, nickel, copper, zinc, arsenic, rubidium, cadmium, thallium, and lead. These stations were characterized negatively by soil temperature. On the second axis, which extracts only 19.57% of the total variance, El Bcherliya, El Bibane, and Korba lagoons were related to soil humidity, particle size, and the contents of strontium, lithium, and tin. On the third axis, organic matter was projected negatively and sodium concentration positively; these two parameters did not characterize these lagoon stations.
The study of the biodiversity of amphipod communities highlighted differences in species richness, relative abundance, and diversity between and within lagoons. Overall, species richness was higher in the supralittoral zone of Bizerte lagoon than in the other stations. In this station, eight amphipod species were identified. This would be related to the presence of abundant vegetation in spring, the Cymodocea nodosa leaf litter, and a high percentage of organic matter. In this same station, the highest diversity, abundance, and density were observed. Moreover, it was O. mediterranea that presented the highest density in this supralittoral zone.
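As a quick arithmetic check of the variance figures quoted above, the cumulative percentage can be accumulated axis by axis. The per-axis values are taken from the text; the function itself is only an illustrative sketch, not part of the study's XLSTAT workflow.

```python
def cumulative_percent(per_axis):
    """Running total of per-axis explained-variance percentages, rounded to 2 decimals."""
    total, out = 0.0, []
    for p in per_axis:
        total += p
        out.append(round(total, 2))
    return out

# Per-axis percentages reported for axes F1, F2, and F3
print(cumulative_percent([54.39, 19.57, 9.32]))  # [54.39, 73.96, 83.28]
```

The third entry reproduces the 83.28% cumulative figure reported for the first three axes.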
In other types of wetlands, such as oueds, species richness varied between one species (in oueds Laakarit, Khniss, and El Fared) and six species (in Tinja, Korsi station), whereas in sebkhas only two species were collected (in sebkhas Gargour and Moknine) (Jelassi 2014; Jelassi et al. 2013c). In the supralittoral zones of dams and hill lakes, no species has been found (Jelassi 2014). Using the same collection method, Ayari (2012) showed the presence of five species, namely T. saltator, D. deshayesii, O. gammarellus, O. montagui, and O. mediterranea, in the supralittoral zone of Bizerte Corniche beach and only one species, T. saltator, in the supralittoral zone of Gabès beach. This author attributed the lower species richness at the latter station (Gabès) to the higher tidal amplitude reported on this beach, and the higher species richness observed at Bizerte Corniche to the presence of Posidonia oceanica banks associated with other algae that provide shelter and food for different amphipod species. On Zouaraa beach, only two species were identified, T. saltator and Talorchestia brito, living in sympatry with a total density that was generally low and never exceeded 860 ind. m−2 (Charfi-Cheikhrouha et al. 2000). At the Ouderef, Gabes, and Zarrat beaches (Gulf of Gabes, Tunisia), the amphipod T. saltator was the most abundant species, present on all three beaches studied, with the highest densities on Ouderef beach (Pérez-Domingo et al. 2008). O. montagui and D. deshayesii in the Bay of Bou Ismail reached more than 45,000 ind. m−2 (Louis 1980), and O. mediterranea in the estuary of Bou Regreg, Morocco, attained 7,000 ind. m−2 (Elkaïm et al. 1985). On the Isle of Man, densities of T. saltator were estimated at 80 to 400 ind. m−2 (Williams 1995). Marsden (1991) and Cardoso and Veloso (1996) showed that fluctuations in population density are frequent in talitrids and indicate periods of intense reproduction.
They observed, in fact, similar patterns of variation for Talorchestia quoyana and Pseudorchestoidea brasiliensis, respectively, with the highest densities in summer and late winter. The distribution of amphipod species in the different lagoons was investigated according to environmental factors using PCA. Our statistical analyses indicated that the variation in the spatial distribution of amphipod species depended on climatic factors (temperature, humidity) as well as edaphic factors influenced by soil quality. In the different lagoons, T. saltator, found only in the supralittoral zones of Bizerte and El Bibane lagoons, was correlated with temperature, humidity, particle size, and some heavy metals. In sebkhas and oueds, this species was correlated only with edaphic factors (unpublished data). Several studies have focused on the role of environmental factors and shown that some factors are more influential than others (Borgioli et al. 1999; Scapini and Fasinella 1990; Scapini and Quochi 1992; Scapini et al. 1995; Scapini et al. 1999). Jelassi et al. (2012) showed that Talitridae abundance was tightly controlled by air temperature. The highest abundance observed at this site could also be explained by the important recruitment in spring (Jelassi 2014). Studying amphipod diversity at three Tunisian lagoon complexes, Jelassi et al. (2013c) showed that, according to constrained correspondence analysis, the abundance of O. mediterranea and O. gammarellus at the three lagoon complexes in northern Tunisia could best be explained by the soil contents of several heavy metals, namely zinc, thallium, and cadmium, and by the proportion of the coarse sand fraction. T. saltator abundance, by contrast, corresponded negatively to these station characteristics. Air and soil temperature were the best predictors of O. stephenseni abundance, which corresponded negatively with the proportion of the fine sand fraction and the organic matter content of the soil. O. montagui and C.
cavimana abundances corresponded positively with air humidity and the soil lithium and rubidium contents but negatively with the soil tin content and the proportion of the silt and clay fraction. D. deshayesii and P. platensis did not exhibit any clear correspondence with station characteristics. Bouslama et al. (2009) showed that temperature was the most important factor influencing zonation; indeed, an increase in temperature induced the migration of the T. saltator population from the top to the bottom of the beach. This result is similar to that of Fallaci et al. (2003), who reported that the mean zonation of T. saltator was exclusively influenced by temperature during the activity period. Marques et al. (2003) highlighted a positive correlation between temperature and the density or biomass of the Talitrus saltator population. These authors showed that the positive correlation may be interpreted as a cause-and-effect relation, with temperature favoring recruitment and, consequently, the increase in density and biomass. On the other hand, Colombini et al. (2002) noted the importance of sediment parameters in the choice of a specific distribution area, especially for young individuals. A total of 1,340 talitrid individuals was collected in the different coastal lagoons. Abundance, density, and diversity were most pronounced in the supralittoral zone of Bizerte lagoon, characterized by the presence of eight amphipod species: O. montagui, O. gammarellus, O. mediterranea, O. stephenseni, C. cavimana, P. platensis, D. deshayesii, and T. saltator. Moreover, according to the principal component analysis, the spatial distribution of species in the different lagoons of Tunisia depends on edaphic (heavy metals, granulometry, organic matter) and climatic (temperature, humidity) factors.
The study was supported by the Research Unit of Bio-ecology and Evolutionary Systematics (UR11ES11), Faculty of Science of Tunis, University of Tunis El Manar, the University of Salzburg, and the University of Kiel.
- Alfonso MI, Bandera ME, Lopez-Gonzalez PJ, Garcia-Gomez JC (1998) The Cumacean community associated with a seaweed as a bioindicator of environmental conditions in the Algeciras Bay (Strait of Gibraltar). Cah Biol Mar 39:197–205
- Ayari A (2012) Eco-éthologie de deux espèces sympatriques Talitrus saltator et Deshayesorchestia deshayesi (Crustacés, Amphipodes) au niveau de deux plages tunisiennes. Thèse de doctorat, Faculté des Sciences de Tunis, Université de Tunis El Manar, Tunisie, pp 210
- Ayari A, Nasri-Ammar K (2011) Distribution and biology of amphipods in two geomorphologically different sandy beaches of Tunisia. Crustaceana 84(5–6):591–599
- Ayari A, Nasri-Ammar K (2012a) Seasonal variation of the endogenous rhythm in two sympatric amphipods: Talitrus saltator and Deshayesorchestia deshayesii from Bizerte beach (North of Tunisia). Biol Rhythm Res 43(5):515–526
- Ayari A, Nasri-Ammar K (2012b) Locomotor rhythm phenology of Talitrus saltator from two geomorphologically different beaches of Tunisia: Bizerte (North of Tunisia) and Gabes gulf (South of Tunisia). Biol Rhythm Res 43(2):113–123
- Bat L, Raffaelli D (1999) Effects of gut sediment contents on heavy metal levels in the amphipod Corophium volutator (Pallas). Turk J Zool 23:67–71
- Benrejeb-Jenhani A, Romdhane MS (2002) Impact des perturbations anthropiques sur l'évolution du phytoplancton de la lagune de Boughrara (Tunisie). Bull Inst Natn Scien Tech Mer de Salammbô 29:65–75
- Bergerard J (1989) Ecologie des laisses de marée. Ann Biol 28:39–54
- Borgioli C, Martelli L, Porri F, D'Elia A, Marchetti GM, Scapini F (1999) Orientation in Talitrus saltator (Montagu): trends in intrapopulation variability related to environmental and intrinsic factors. J Exp Mar Biol Ecol 238:29–47
- Boucheseiche C, Cremille E, Pelte T, Pojer K (2002) Bassin Rhône-Méditerranée-Corse. Guide technique n°7, Pollution toxique et écotoxicologie: notions de base. Agence de l'Eau Rhône-Méditerranée-Corse, Lyon
- Bouslama MF, El Gtari M, Charfi-Cheikhrouha F (2009) Impact of environmental factors on zonation, abundance, and other biological parameters of two Tunisian populations of Talitrus saltator (Amphipoda, Talitridae). Crustaceana 82(2):141–157
- Bulnheim HP, Schwenzer DE (1999) Allozyme variation and genetic divergence in populations of Talitrus saltator (Crustacea: Amphipoda) around the Atlantic coast, the Azores and the Canary Islands. Cah Biol Mar 40:185–194
- Cardoso RS, Veloso VG (1996) Population biology and secondary production of the sandhopper Pseudorchestoidea brasiliensis (Amphipoda: Talitridae) at Prainha Beach, Brazil. Mar Ecol Progr Ser 142:111–119
- Charfi-Cheikhrouha F, El Gtari M, Bouslama MF (2000) Distribution and reproduction of two sandhoppers, Talitrus saltator and Talorchestia brito, from Zouaraa beach-dune system (Tunisia). Pol Arch Hydrobiol 43:621–629
- Clarke KR, Warwick RM (1994) Change in marine communities: an approach to statistical analysis and interpretation. Nat Environ Res Council, Plymouth Marine Biological Laboratory, Plymouth, UK
- Colombini I, Aloia A, Bouslama MF, El Gtari M, Fallaci M, Ronconi L, Scapini F, Chelazzi L (2002) Small-scale spatial and seasonal differences in the distribution of beach arthropods on the northern Tunisian coasts. Are species evenly distributed along the shore? Mar Biol 140:1001–1012
- Corbera J, Cardell MJ (1995) Cumaceans as indicators of eutrophication on soft bottoms. Sci Mar 59:63–69
- Dahl E (1946) The Amphipoda of the Sound. I: Terrestrial Amphipoda. Acta Univ Lundensis 42:1–53
- Dauvin JC, Bellan G, Bellan-Santini D, Castric A, Comolet-Tirman J, Francour P, Gentil F, Girard A, Gofas S, Mahé C, Noël P, De Reviers B (1994) Typologie des ZNIEFF Mer. Liste des paramètres et des biocénoses des côtes françaises métropolitaines. Patrimoines Naturels 12:1–64
- De Matthaeis E, Cobolli M, Mattoccia M, Scapini F (1995) Geographic variation in Talitrus saltator (Crustacea, Amphipoda): biochemical evidence. Boll Zool 62:77–84
- Defeo O, McLachlan A (2005) Patterns, processes and regulatory mechanisms in sandy beach macrofauna: a multi-scale analysis. Mar Ecol Prog Ser 295:1–20
- Elkaïm B, Irlinger JP, Pichard S (1985) Dynamique de la population d'Orchestia mediterranea L. (Crustacé, Amphipode) dans l'estuaire du Bou Regreg (Maroc). Can J Zool 63:2800–2809
- Fallaci M, Aloia A, Audoglio M, Colombini I, Scapini F, Chelazzi L (1999) Differences in behavioural strategies between two sympatric talitrids (Amphipoda) inhabiting an exposed sandy beach of the French Atlantic coast. Estuar Coast Shelf Sci 48:469–482
- Fallaci M, Colombini I, Lagar M, Scapini F, Chelazzi L (2003) Distribution patterns of different age classes and sexes in a Tyrrhenian population of Talitrus saltator (Montagu). Mar Biol 142:101–110
- Fialkowski W, Rainbow PS, Fialkowska E, Smith BD (2000) Biomonitoring of trace metals along the Baltic Coast of Poland using the sandhopper Talitrus saltator (Montagu) (Crustacea: Amphipoda). Ophelia 52:183–192
- Förstner U, Wittmann GTW (1981) Metal pollution in the aquatic environment, 2nd edn. Springer, Berlin
- Frontier S (1983) Stratégies d'échantillonnage en écologie. Masson, Paris, p 494
- Griffiths CL, Stenton-Dozey JME, Koop K (1983) Kelp wrack and the flow of energy through a sandy beach ecosystem. In: McLachlan A, Erasmus T (eds) Sandy beaches as ecosystems. Junk Publications, The Hague, pp 547–556
- Henin S (1983) Les éléments traces dans les sols. Science du Sol 2:67–71
- Ikem A, Egiebor NO (2005) Assessment of trace elements in canned fishes (mackerel, tuna, salmon, sardines and herrings) marketed in Georgia and Alabama (United States of America). J Food Comp Anal 18:771–787
- Islam MS, Tanaka M (2004) Impacts of pollution on coastal and marine ecosystems including coastal and marine fisheries and approach for management: a review and synthesis. Mar Poll Bull 48:624–649
- Jelassi R (2014) Eco-éthologie des peuplements d'Amphipodes au niveau des zones humides de la Tunisie. Thèse de doctorat, Université de Tunis, Tunisie, p 328
- Jelassi R, Khemaissia H, Nasri-Ammar K (2012) Intra-annual variation of the spatiotemporal distribution and abundance of Talitridae and Oniscidea (Crustacea, Peracarida) at Bizerte Lagoon (northern Tunisia). Afr J Ecol 50:381–392
- Jelassi R, Nasri-Ammar K (2013) Seasonal variation of locomotor activity rhythm of Orchestia montagui in the supralittoral zone of Bizerte lagoon (North of Tunisia). Biol Rhythm Res 44(5):718–729
- Jelassi R, Ayari A, Nasri-Ammar K (2013a) Seasonal variation of locomotor activity rhythm of Orchestia gammarellus in the supralittoral zone of Ghar El Melh lagoon (North-East of Tunisia). Biol Rhythm Res 44(6):956–967
- Jelassi R, Akkari-Ayari A, Bohli-Abderrazak D, Nasri-Ammar K (2013b) Endogenous locomotor activity rhythm of two sympatric species of talitrids (Crustacea, Amphipoda) from the supralittoral zone of Bizerte lagoon (Northern Tunisia). Biol Rhythm Res 44(2):265–275
- Jelassi R, Zimmer M, Khemaissia H, Garbe-Schönberg D, Nasri-Ammar K (2013c) Amphipod diversity at three Tunisian lagoon complexes in relation to environmental conditions. J Nat Hist 47(45–46):2849–2868
- Koch H (1989) The effect of tidal inundation on the activity and behavior of the supralittoral talitrid amphipod Traskorchestia traskiana. Crustaceana 57:295–303
- Louis M (1980) Etude d'un peuplement mixte d'Orchestia montagui Audouin et d'Orchestia deshayesii Audouin dans la baie de Bou Ismail. Bull Ecol 11:97–111
- Marques JC, Gonçalves SC, Pardal MA, Chelazzi L, Colombini I, Fallaci M, Bouslama MF, El Gtari M, Charfi-Cheikhrouha F, Scapini F (2003) Comparison of Talitrus saltator (Amphipoda, Talitridae) biology, dynamics and secondary production in Atlantic (Portugal) and Mediterranean (Italy and Tunisia) populations. Estuar Coast Shelf Sci 58:127–148
- Marsden ID (1991) Kelp-sandhopper interactions on a sand beach in New Zealand. II. Population dynamics of Talorchestia quoyana (Milne-Edwards). J Exp Mar Biol Ecol 152:75–90
- McLachlan A, Brown AC (2006) The ecology of sandy shores, 2nd edn. Elsevier, Amsterdam, p 392
- McLachlan A, Jaramillo E (1995) Zonation on sandy beaches. Oceanogr Mar Biol Ann Rev 33:305–335
- Medail F, Quezel P (1999) Biodiversity hotspots in the Mediterranean basin: setting global conservation priorities. Conserv Biol 13:1510–1513
- Nasri-Ammar K, Morgan E (2005) Variation saisonnière du rythme de l'activité locomotrice de Talitrus saltator issu de la plage de Korba (Cap Bon, Tunisie). Bull Soc Zool Fr 130(1):19–29
- Nasri-Ammar K, Morgan E (2006) Seasonality of the endogenous activity rhythm in Talitrus saltator (Montagu) from a sandy beach in north-eastern Tunisia. Biol Rhythm Res 37:479–488
- Pérez-Domingo S, Castellanos C, Junoy J (2008) The sandy beach macrofauna of Gulf of Gabès (Tunisia). Mar Ecol 29:51–59
- Phillips DJH, Rainbow PS (1994) Biomonitoring of trace aquatic contaminants. Environmental Management Series. Chapman and Hall, London
- Pielou EC (1966) The measurement of diversity in different types of biological collections. J Theor Biol 13:131–144
- Rainbow PS, Moore PG, Watson D (1989) Talitrid amphipods (Crustacea) as biomonitors for copper and zinc. Estuar Coast Shelf Sci 28:567–582
- Ruffo S (1993) The Amphipoda of the Mediterranean. Part IV: Localities and map - Addenda to parts 1-3 - Key to families - Ecology - Faunistics and zoogeography. Mem Inst Oceanogr de Monaco 13:959
- Ruffo S, Tarocco M, Latella L (2014) Cryptorchestia garbinii n. sp. (Amphipoda: Talitridae) from Lake Garda (Northern Italy), previously referred to as Orchestia cavimana Heller, 1865, and notes on the distribution of the two species. Ital J Zool 81:91–98
- Scapini F, Fasinella D (1990) Genetic determination and plasticity in the sun orientation of natural populations of Talitrus saltator. Mar Biol 107:141–145
- Scapini F, Quochi G (1992) Orientation in sandhoppers from Italian populations: have they magnetic orientation ability? Boll Zool 59:437–442
- Scapini F, Lagar MC, Mezzetti MC (1993) The use of slope and visual information in sandhoppers: innateness and plasticity. Mar Biol 115:545–553
- Scapini F, Buiatti M, De Matthaeis E, Mattoccia M (1995) Orientation behaviour and heterozygosity of sandhopper populations in relation to stability of beach environments. J Evol Biol 8:43–52
- Scapini F, Porri F, Borgioli C, Martelli L (1999) Solar orientation of adult and laboratory-born juvenile sandhoppers: inter- and intra-population variation. J Exp Mar Biol Ecol 238:107–126
- Schlacher TA, Dugan J, Schoeman DS, Lastra M, Jones A, Scapini F, McLachlan A, Defeo O (2007) Sandy beaches at the brink. Divers Distrib 13:556–560
- Simpson EH (1949) Measurement of diversity. Nature 163:688
- Weeks JM (1992) The use of the terrestrial amphipod Arcitalitrus dorrieni (Crustacea: Amphipoda: Talitridae) as a potential biomonitor of ambient zinc and copper availabilities in leaf-litter. Chemosphere 24:1505–1522
- Wildish DJ (1988) Ecology and natural history of aquatic Talitroidea. Can J Zool 66:2340–2359
- Williams JA (1995) Burrow-zone distribution of the supralittoral amphipod Talitrus saltator on Derbyhaven beach, Isle of Man – a possible mechanism for regulating desiccation stress? J Crust Biol 15:466–475
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Gravitational Wave Detection by Interferometry
by Matthew Pitkin, Stuart Reid, Sheila Rowan, Jim Hough
Publisher: arXiv, 2011. Number of pages: 80.
The main theme of this review is a discussion of the mechanical and optical principles used in the various long baseline systems in operation around the world - LIGO (USA), Virgo (Italy/France), TAMA300 and LCGT (Japan), and GEO600 (Germany/UK) - and in LISA, a proposed space-borne interferometer.
by T. L. Wilson - arXiv
An overview of the techniques of radio astronomy. It contains a short history, details of calibration procedures, coherent/heterodyne and incoherent/bolometer receiver systems, observing methods for single apertures and interferometers, etc.
by Andrew J. Butrica - NASA History Division
A comprehensive history of this surprisingly significant scientific discipline. Quite rigorous and systematic in its methodology, To See the Unseen explores the development of the radar astronomy specialty in the larger community of scientists.
by Wallace H. Tucker - NASA History Office
Some of the topics covered in this book include creative violence, stellar explosions, cosmic rays, superbubbles, stellar coronas, collapsed stars, neutron stars, degenerate dwarf stars, black holes, X-ray images of galaxies, galactic nuclei, etc.
by Geoffrey A. Blake - California Institute of Technology
This course discusses the fundamental aspects of atomic and molecular spectra that enable one to infer physical conditions in astronomical, planetary and terrestrial environments from the analysis of their electromagnetic radiation.
Wednesday, 26 March 2008
Have you ever hated to be right? The Wilkins Ice Shelf covered an area of 16,000 km2 (the size of Northern Ireland). Having been stable for most of the last century, it began retreating in the 1990s. A major breakout occurred in 1998, when 1,000 km2 of ice was lost in a few months. Satellite images processed at the US National Snow and Ice Data Center revealed that the retreat began on February 28, when a large (41 by 2.5 km) iceberg calved away from the ice shelf's south-western front. In a series of images, the edge of the shelf proceeded to crumble and disintegrate in a pattern that has become characteristic of climate-caused ice shelf retreats throughout the northern Peninsula, leaving a sky-blue patch spreading across the ocean surface composed of hundreds of large blocks of exposed old glacier ice (see pictures). By 8 March, the ice shelf had lost just over 570 km2, and the patch of disintegrated Antarctic ice had spread over 1,400 km2. As of mid-March, only a narrow strip of shelf ice was protecting several thousand square kilometres of potential further break-up. The recent break-out leaves a thin strip of ice between Charcot and Latady islands on the Antarctic Peninsula. Climate warming has increased the volume of summer meltwater on glaciers, which has weakened ice shelves. Sea ice, which protects ice shelves from ocean swell, has also been reduced as a result of warming temperatures. The collapse of the 3,250 km2 Larsen B Ice Shelf took place in 2002. During the past 40 years, the average summer temperature in this region of the north-east Peninsula has been 2.2°C. The western Antarctic Peninsula has shown the biggest increase in temperatures (primarily in winter) observed anywhere on Earth over the past half-century. The Antarctic Peninsula is an area of rapid climate change and has warmed faster than anywhere else in the Southern Hemisphere over the past half century.
Climate records from the west coast of the Antarctic Peninsula show that temperatures in this region have risen by nearly 3°C during the last 50 years – several times the global average and only matched in Alaska.
Ice sheet – the huge mass of ice, up to 4 km thick, that covers Antarctica's bedrock. It flows from the centre of the continent towards the coast, where it feeds ice shelves.
Ice shelf – the floating extension of the grounded ice sheet. It is composed of freshwater ice that originally fell as snow, either in situ or inland, and was brought to the ice shelf by glaciers. As ice shelves are already floating, any disintegration (like Larsen B) will have no impact on sea level. Sea level will rise only if the ice held back by the ice shelf flows more quickly into the sea.
Regular satellite images of the Wilkins Ice Shelf were obtained using NASA's MODIS instruments and the International Polar Year 'Polar View' project, which uses the European Space Agency Envisat satellite. Polar View operates to provide timely images of the Antarctic sea ice and shelves to assist science and operations in the Southern Ocean. Further information and images are available at www.polarview.aq
This discovery follows the recent UNEP report that the world's glaciers are continuing to melt away. Data from 30 reference glaciers in nine mountain ranges show that between the years 2004-2005 and 2005-2006 the average rate of melting and thinning more than doubled.
The Cambridge-based British Antarctic Survey (BAS) is a world leader in research into global environmental issues. With an annual budget of around £40 million, five Antarctic research stations, two Royal Research Ships and five aircraft, BAS undertakes an interdisciplinary research programme and plays an active and influential role in Antarctic affairs. BAS has joint research projects with over 40 UK universities and more than 120 national and international collaborations.
It is a component of the Natural Environment Research Council. JIM ELLIOT, BRITISH ANTARCTIC SURVEY.
Posted by Rev. Peter Doodes at 10:16
Mercury is emitted to the air from Hg-enriched and low Hg-containing (natural background) substrates. Emitted Hg can be geogenic, or can be derived from the re-emission of Hg that was previously deposited to the soil from the atmosphere. Atmospheric Hg can be derived from natural and/or anthropogenic sources and can be deposited by wet or dry processes. It is important to understand the relative magnitude of emission, deposition, and re-emission of Hg associated with terrestrial ecosystems with natural background soil Hg concentrations because these landscapes cover large terrestrial surface areas. This information is also important for developing biogeochemical mass balances, assessing the impacts of atmospheric Hg sources, and predicting the effectiveness of regulatory controls at local, regional, and global scales. The major focus of this paper is to discuss air–substrate Hg exchange for low Hg-containing soils (<0.1 μg Hg g−1) from two areas in Nevada and one in Oklahoma, USA. Data collected with field and laboratory gas exchange systems are presented. Results indicate that in order to adequately characterize substrate–air Hg exchange, diel and seasonal data must be collected under a variety of environmental conditions. Field and laboratory data showed that dry deposition of gaseous Hg to substrates with low Hg concentrations is an important process. Environmental parameters important in influencing emissions include soil water content, incident light, temperature, atmospheric oxidants, and air Hg concentrations. There are synergistic and antagonistic effects between these parameters, complicating prediction of flux.
Mercury exchange between the atmosphere and low mercury containing substrates. Applied Geochemistry.
Citation information: Gustin M.S., Engle M.A., Ericksen J., Lyman S., Stamenkovic J., Xin M., 2006. Elemental Hg exchange between the atmosphere and low Hg containing substrates. Applied Geochemistry 21, 1913-1923.
SOHO has spotted over 2100 comets, most of which are from what's known as the Kreutz family, which graze the solar atmosphere where they usually evaporate completely. But on December 2, 2011, the discovery of a new Kreutz-family comet was announced. This comet was found the old-fashioned way: from the ground. Australian astronomer Terry Lovejoy spotted the comet, making this the first time a Kreutz comet has been found through a ground-based telescope since the 1970s. The comet has been designated C/2011 W3 (Lovejoy). Discovering a comet before it moves into view of space-based telescopes gives scientists the opportunity to prepare the telescopes for the best possible observations. Indeed, since comet Lovejoy was visible from the ground, scientists have high hopes that this might be an exceptionally bright comet, making it all the easier to view and study. (Some Kreutz comets – such as Ikeya-Seki in 1965 – are so bright they can be seen with the naked eye in the daytime, though this is extremely rare.) The comet moved into view of the Solar Terrestrial Relations Observatory (STEREO) on Monday, December 12. It should be visible in SOHO by Wednesday, Dec 14. Next up is Hinode, which will make observations at about 6 p.m. ET on Dec 15, as the comet moves towards its closest approach to the sun. Hinode's solar optical telescope will take the highest resolution images of this close approach. As the comet passes through the sun's atmosphere, the corona, an increase in particle collisions may produce X-rays, so Hinode may also capture X-ray images of the comet. The comet will likely pass within some 87,000 miles of the sun, and disappear behind the northwest limb of the sun shortly after it is seen by Hinode. Susan Hendrix | EurekAlert!
« A REGIONAL GNSS/GPS NETWORK FOR MONITORING THE CARPATHIAN-DANUBIAN-PONTIC SPACE DEFORMATIONS AND THE IMPACT OF LOCAL EARTHQUAKES » The modern Romanian GNSS/GPS (Global Navigation Satellite System/Global Positioning System) network started in 2001, when the first permanent station was installed on the Lacauti peak in the mountainous zone of the Carpathian Bending Zone, west of the Vrancea epicentral area. Nowadays the network has 24 operational stations, with 3 more under construction in 2017. Its main objectives include monitoring the surface expressions of crustal changes occurring in and around the Romanian Carpathians and the neighboring tectonic units, as a direct expression of tectonic processes viewed at a larger scale (for example, on the northeastern flank of the Africa-Europe interaction), and observing crustal motions in order to better understand the surface-to-depth interconnections of intermediate-depth earthquakes with their shallow expressions in the Vrancea zone. The GNSS/GPS network can also provide improved, reliable, high-accuracy environmental measurements for global weather forecasts, climate monitoring, earthquake precursors (ionospheric studies), coseismic studies, GNSS positioning and navigation, and other research for complementary purposes.
A groundbreaking scientific study is underway testing the use of fire as a way to help endangered coastal habitats adapt to rising sea levels. Thanks to the Gulf of Mexico Foundation based here in Corpus Christi, scientists in Mississippi are looking at controlled burns as a way to increase bio-diversity in coastal ecosystems, and help wetland marshes migrate inland as shorelines recede. And our wetlands along the gulf are disappearing for a variety of reasons, including development, land subsidence, and rising sea levels. In places like Louisiana the losses are truly alarming. And that's a major concern because the gulf is one of the nation's most productive bodies of water and wetlands are a key component of the ecosystem. The foundation obtained a $245,000 grant so that scientists can study whether controlled burns can help marsh vegetation move inland, as sea levels rise and shorelines recede. The project is the brainchild of Dr. Loretta Battaglia of Southern Illinois University and Dr. Julia Cherry of the University of Alabama. They were doing coastal plant studies when they noticed that after hurricanes, wetland vegetation moved inland into the areas damaged by the storms. Dr. Battaglia says, "Where the woody plant communities were disturbed, either through the wind or the storm surge that killed a lot of plants that weren't tolerant of the high salinity levels, we began to see some migration up slope of the marsh species, and this led us to think about what are the barriers for up slope migration, because marsh species are being impacted by sea level rise." And it turns out the main barrier is the low brush that usually separates wetlands from the coastal prairies and forests. And that led to the idea of mimicking nature, using controlled burns to remove the undergrowth. Data was collected from control plots before the burns, and it will be compared with data collected from the plots over the next 2 years. But so far, the preliminary results look very encouraging.
This is just one of more than 80 projects that have been funded by the Gulf of Mexico Foundation, which was created here 25 years ago by a group of concerned businessmen and scientists. For more information on the foundation, its projects, and donation opportunities, just go to http://www.gulfmex.org/
Ask Ethan: How Large Is The Entire, Unobservable Universe? “We know the size of the Observable Universe since we know the age of the Universe (at least since the phase change) and we know that light radiates. […] My question is, I guess, why doesn’t the math involved in making the CMB and other predictions, in effect, tell us the size of the Universe? We know how hot it was and how cool it is now. Does scale not affect these calculations?” Our Universe today, to the best of our knowledge, has endured for 13.8 billion years since the Big Bang. But we can see farther than 13.8 billion light years, all because the Universe is expanding. Based on the matter and energy present within it, we can determine that the observable Universe is 46.1 billion light years in radius from our perspective, a phenomenal accomplishment of modern science. But what about the unobservable part? What about the parts of the Universe that go beyond where we can see? Can we say anything sensible about how large that is? We can, but only if we make certain assumptions. Come find out what we know (and think) past the limits of what we can see on this week’s Ask Ethan! What Was It Like When The Big Bang First Began? “Once inflation comes to an end, and all the energy that was inherent to space itself gets converted into particles, antiparticles, photons, etc., all the Universe can do is expand and cool. Everything smashes into one another, sometimes creating new particle/antiparticle pairs, sometimes annihilating pairs back into photons or other particles, but always dropping in energy as the Universe expands. The Universe never reaches infinitely high temperatures or densities, but still attains energies that are perhaps a trillion times greater than anything the LHC can ever produce. The tiny seed overdensities and underdensities will eventually grow into the cosmic web of stars and galaxies that exist today. 13.8 billion years ago, the Universe as-we-know-it had its beginning.
The rest is our cosmic history.” The Big Bang is normally treated as the very beginning of the Universe, but in reality there’s a phase that came before the hot Big Bang to set it up. During cosmic inflation, the Universe was filled with an extremely large amount of energy inherent to space itself, causing the Universe to inflate, stretch flat, and achieve almost exactly the same properties everywhere. The Universe we have today, however, is full of matter and radiation, and originated in a hot Big Bang 13.8 billion years ago. How did we go from this inflating state to our hot, dense, uniform and expanding-and-cooling Universe? This tells you the best scientific story of how we got there, along with an in-depth description of what it was like at those first moments where our Universe gave us something to look at. The 7 Most Powerful Fireworks Shows In The Universe “Forget mere chemical reactions; in space, matter-energy conversion creates unprecedentedly powerful explosive events. Here are the 7 most powerful natural displays of cosmic fireworks. 7.) Type Ia supernova: when two white dwarf stars collide, they initiate a runaway fusion reaction, destroying both stellar remnants.” Throughout the Universe, there are many beautiful displays of cosmic fireworks. Stars are born; galaxies collide; gas gets heated and expelled; stars and stellar remnants explode and die. We typically think of supernova events as the culmination of the brightest, most energetic things that can happen in the cosmos. But supernovae only fill up the bottom rungs on the list of the most powerful, natural fireworks shows that the Universe provides us with. Which ones are the most energetic? Find out on this incredible start to your pre-4th-of-July Monday! Ask Ethan: Could The Universe Be Torn Apart In A Big Rip? “Is The Big Rip—where expansion exceeds all the other forces—still considered a possible future for our Universe? What are the arguments for or against? 
And if so, how would it unfold, what would happen?” In addition to normal matter, dark matter, neutrinos, and radiation, the Universe is made up of dark energy: a new form of energy intrinsic to space itself. Although the data indicates that dark energy is consistent with being a cosmological constant, whose energy density won’t change with time, it’s possible that this energy will increase or decrease in strength. If it decreases, it could decay entirely or even reverse sign, resulting in a Big Crunch. But if it increases, we could have a spectacularly catastrophic fate: the Big Rip. In the Big Rip, bound objects will literally be ripped apart on galactic, stellar, planetary, and eventually even atomic scales. Even space itself will rip apart in the end. The Big Rip isn’t ruled out, but if it’s going to occur, our current constraints push it out to 80 billion years in the future. Find out what it would look like and how we’ll know! What Was It Like When The Universe Was Inflating? “In theory, what lies beyond the observable Universe will forever remain unobservable to us, but there are very likely large regions of space that are still inflating even today. Once your Universe begins inflating, it’s very difficult to get it to stop everywhere. For every location where it comes to an end, there’s a new, equal-or-larger-sized location getting created as the inflating regions continue to grow. Even though most regions will see inflation end after just a tiny fraction of a second, there’s enough new space getting created that inflation should be eternal to the future.” You’ve no doubt heard that the overwhelming scientific consensus is that the observable Universe began with the hot Big Bang. What’s far less common, but just as overwhelmingly accepted and well-understood, is that a period of cosmological inflation occurred prior to the Big Bang in order to set it up.
While most of us can visualize the expanding Universe fairly well, it’s much more difficult to get a good handle on what the Universe looked like during the epoch of cosmic inflation. Yet if you want to know where our Universe came from, and how it was born with the properties our hot Big Bang started off with, that’s exactly the challenge you have to meet. Here’s an in-depth but scientifically accurate description of what the Universe was like when inflation occurred, and how it gives us the Universe we inhabit today! We Just Found The Missing Matter In The Universe, And Still Need Dark Matter “For over 40 years, scientists have argued over dark matter’s existence. Big questions arose from the motions inside galaxies, clusters of galaxies, and along the cosmic web. From their gravity, we can infer the total mass in the Universe. Yet multiple sources indicate that only 15% of that mass can be baryonic: made of normal matter.” Is dark matter truly necessary? Many argued that, until we found the entirety of the normal matter in the Universe, we couldn’t be sure. The motions of galaxies, clusters of galaxies, and the formation of large-scale structure and the cosmic web all indicate a certain amount of mass in the Universe, and many sources such as the CMB and big bang nucleosynthesis indicate that the “normal” matter can only be about 15% of the total, implying dark matter. But finding all the normal matter has proven elusive, with the theorized WHIM (warm-hot intergalactic medium) not showing up in sufficient abundance. In particular, the hot part just wasn’t there. Until now. Observations made with XMM-Newton have at last revealed it, and it’s there in just the right, predicted amounts. And therefore, dark matter is still absolutely necessary. The Counterintuitive Reason Why Dark Energy Makes The Universe Accelerate “In a nutshell, a new form of energy can affect the Universe’s expansion rate in a new way. It all depends on how the energy density changes over time.
While matter and radiation get less dense as the Universe expands, space is still space, and still has the same energy density everywhere. The only thing that’s changed is our automatic assumption that we made: that energy ought to be zero. Well, the accelerating Universe tells us it isn’t zero. The big challenge facing astrophysicists now is to figure out why it has the value that it does. On that front, dark energy is still the biggest mystery in the Universe.” There are lots of explanations out there for why the Universe’s expansion is accelerating. Some people point towards the negative pressure of a cosmological constant and talk about how this causes space to fly apart. Others call it a “fifth force” and imply that it’s a new fundamental relation that functions as some sort of anti-gravity. Neither of those explanations are correct, though, and they both complicate a much simpler (and more correct!) truth: that the Universe’s expansion rate is simply determined by all the different types of matter and energy within it. Dark energy is just another type of energy, but it’s different in a very particular way from the normal matter, dark matter, neutrinos, and radiation that we know. Dark energy makes the Universe accelerate because of how it evolves and changes differently from everything else we know of over time. Come find out how! Meet The Universe’s First-Ever Supermassive Binary Black Holes “In 1891, the object OJ 287, 3.5 billion light years distant and a blazar itself, optically bursted. Every 11-12 years since, it’s produced another burst, recently discovered to have two, narrowly-separated peaks. Its central, supermassive black hole is 18 billion solar masses, one of the largest known in the Universe. This periodic double-burst arises from a 100-150 million solar mass black hole punching through the primary’s accretion disk.” The big problem with black holes is that, well, they’re so dark. 
They don’t emit any detectable light of their own, so we have to rely on indirect, secondary signals to infer their existence. That usually arises in the form of radio and X-ray radiation from matter that gets accelerated by the black hole’s extreme gravity, as well as from the magnetic fields that an accretion disk around the black hole can create. The radiation can form jets, and when a jet points at our eyes, we see a blazar. Well, the system OJ 287 has a periodic blazar that flares in a double-burst every 11-12 years, indicative of a large, supermassive black hole orbiting an even more massive behemoth, punching through the accretion disk twice with every orbit. Come meet OJ 287, first found to burst way back in 1891, and still one of only two supermassive black hole binaries known in the Universe! The Surprising Reason Why Neutron Stars Don’t All Collapse To Form Black Holes “The measurements of the enormous pressure inside the proton, as well as the distribution of that pressure, show us what’s responsible for preventing the collapse of neutron stars. It’s the internal pressure inside each proton and neutron, arising from the strong force, that holds up neutron stars when white dwarfs have long given out. Determining exactly where that mass threshold is just got a great boost. Rather than solely relying on astrophysical observations, the experimental side of nuclear physics may provide the guidepost we need to theoretically understand where the limits of neutron stars actually lie.” If you take a large, massive collection of matter and compress it down into a small space, it’s going to attempt to form a black hole. The only thing that can stop it is some sort of internal pressure that pushes back. For stars, that’s thermal, radiation pressure. For white dwarfs, that’s the quantum degeneracy pressure from the electrons. And for neutron stars, there’s quantum degeneracy pressure between the neutrons (or quarks) themselves. 
Only, if that last case were the only factor at play, neutron stars wouldn’t be able to get more massive than white dwarfs, and there’s strong evidence that they can reach almost twice the Chandrasekhar mass limit of 1.4 solar masses. Instead, there must be a big contribution from the internal pressure inside each individual nucleon to resist collapse. For the first time, we’ve measured that pressure distribution inside the proton, paving the way to understanding why massive neutron stars don’t all form black holes. New Stars Turn Galaxies Pink, Even Though There Are No ‘Pink Stars’ “New star-forming regions produce lots of ultraviolet light, which ionizes atoms by kicking electrons off of their nuclei. These electrons then find other nuclei, creating neutral atoms again, eventually cascading down through its energy levels. Hydrogen is the most common element in the Universe, and the strongest visible light-emitting transition is at 656.3 nanometers. The combination of this red emission line — known as the Balmer alpha (or Hα) line — with white starlight adds up to pink.” When you look through a telescope’s eyepiece at a distant galaxy, it will always appear white to you. That’s because, on average, starlight is white, and your eyes are more sensitive to white light than any color in particular. But with the advent of a CCD camera, collecting individual photons one-at-a-time, you can more accurately gauge an astronomical object’s natural color. Even though new stars are predominantly blue in color, star-forming regions and galaxies appear pink. The problem compounds itself when you realize there isn’t any such thing as a pink star! And yet, there’s a straightforward physical explanation for what we see. It’s a combination of ultraviolet radiation, white starlight, and the physics of hydrogen atoms that turns galaxies pink. Find out how, with some incredible visuals, today!
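The 46.1-billion-light-year radius quoted in the first post above can be checked with a short numerical integration of the Friedmann equation. The sketch below is illustrative only: the density parameters and Hubble constant are assumed round, Planck-like values, not numbers taken from the posts.

```cpp
#include <cassert>
#include <cmath>

// Comoving distance to the particle horizon, in billions of light years:
//   D = (c/H0) * Integral[ da / (a^2 * E(a)), {a, 0, 1} ]
//   E(a) = sqrt(Omega_r/a^4 + Omega_m/a^3 + Omega_Lambda)
// The parameters below are assumed Planck-like values.
double observableRadiusGly() {
    const double Om = 0.31;               // matter density parameter
    const double Or = 9.2e-5;             // radiation; keeps the integrand finite as a -> 0
    const double OL = 1.0 - Om - Or;      // dark energy, assuming a flat Universe
    const double hubbleLengthGly = 14.45; // c/H0 for H0 ~ 67.7 km/s/Mpc
    const int n = 2000000;
    const double da = 1.0 / n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        const double a = (i + 0.5) * da;  // midpoint rule
        const double E = std::sqrt(Or / (a * a * a * a) + Om / (a * a * a) + OL);
        sum += da / (a * a * E);
    }
    return hubbleLengthGly * sum;
}
```

With these assumed parameters the integral comes out near the quoted figure (roughly 46 billion light years); the exact value shifts slightly with the chosen cosmological parameters.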
Changes in the sun's energy output may have led to marked natural climate change in Europe over the last 1000 years, according to researchers at Cardiff University. Scientists studied seafloor sediments to determine how the temperature of the North Atlantic and its localised atmospheric circulation had altered. Warm surface waters flowing across the North Atlantic, an extension of the Gulf Stream, and warm westerly winds are responsible for the relatively mild climate of Europe, especially in winter. Slight changes in the transport of heat associated with these systems can lead to regional climate variability, and the study findings matched historic accounts of climate change, including the notoriously severe winters of the 16th and 18th centuries which pre-date global industrialisation. The study found that changes in the Sun's activity can have a considerable impact on the ocean-atmospheric dynamics in the North Atlantic, with potential effects on regional climate. Predictions suggest a prolonged period of low sun activity over the next few decades, but any associated natural temperature changes will be much smaller than those created by human carbon dioxide emissions, say researchers. The study, led by Cardiff University scientists, in collaboration with colleagues at the University of Bern, is published today in the journal Nature Geoscience. Dr Paola Moffa-Sanchez, lead author from Cardiff University School of Earth and Ocean Sciences, explained: "We used seafloor sediments taken from south of Iceland to study changes in the warm surface ocean current. This was done by analysing the chemical composition of fossilised microorganisms that had once lived in the surface of the ocean. These measurements were then used to reconstruct the seawater temperature and the salinity of this key ocean current over the past 1000 years."
The results of these analyses revealed large and abrupt temperature and salinity changes in the north-flowing warm current on time-scales of several decades to centuries. Cold ocean conditions were found to match periods of low solar energy output, corresponding to intervals of low sunspot activity observed on the surface of the sun. Using a physics-based climate model, the authors were able to test the response of the ocean to changes in the solar output and found similar results to the data. "By using the climate model it was also possible to explore how the changes in solar output affected the surface circulation of the Atlantic Ocean," said Prof Ian Hall, a co-author of the study. "The circulation of the surface of the Atlantic Ocean is typically tightly linked to changes in the wind patterns. Analysis of the atmosphere component in the climate model revealed that during periods of solar minima there was a high-pressure system located west of the British Isles. This feature is often referred to as atmospheric blocking, and it is called this because it blocks the warm westerly winds diverting them and allowing cold Arctic air to flow south bringing harsh winters to Europe, such as those recently experienced in 2010 and 2013." Meteorological studies have previously found similar effects of solar variability on the strength and duration of atmospheric winter blockings over the last 50 years, and although the exact nature of this relationship is not yet clear, it is thought to be due to complex processes happening in the upper layers of the atmosphere known as the stratosphere. Dr Paola Moffa-Sanchez added: "In this study we show that this relationship is also at play on longer time-scales and the large ocean changes, recorded in the microfossils, may have helped sustain this atmospheric pattern. 
Indeed we propose that this combined ocean-atmospheric response to solar output minima may help explain the notoriously severe winters experienced across Europe between the 16th and 18th centuries, so vividly depicted in many paintings, including those of the famous London Frost Fairs on the River Thames, but also leading to extensive crop failures and famine as corroborated in the record of wheat prices during these periods." The study concludes that although the temperature changes expected from future solar activity are much smaller than the warming from human carbon dioxide emissions, regional climate variability associated with the effects of solar output on the ocean and atmosphere should be taken into account when making future climate projections. Notes for Editors: Funding for this research has come from the Natural Environment Research Council, UK, the National Science Foundation, Switzerland, the European Commission and NCAR's Computational and Information Systems Laboratory (CISL). This research forms part of the Climate Change Consortium of Wales (C3W; http://c3wales.org/). To arrange media interviews with Professor Ian Hall or Dr Paola Moffa-Sanchez, please contact Heath Jeffries, Media Manager, Cardiff University, on 07908 824029 or 02920 870917; email email@example.com Cardiff University is recognised in independent government assessments as one of Britain's leading teaching and research universities and is a member of the Russell Group of the UK's most research intensive universities. Among its academic staff are two Nobel Laureates, including the winner of the 2007 Nobel Prize for Medicine, University Chancellor Professor Sir Martin Evans. Founded by Royal Charter in 1883, today the University combines impressive modern facilities and a dynamic approach to teaching and research. 
The University's breadth of expertise encompasses: the College of Arts, Humanities and Social Sciences; the College of Biomedical and Life Sciences; and the College of Physical Sciences, along with a longstanding commitment to lifelong learning. Cardiff's four flagship Research Institutes are offering radical new approaches to cancer stem cells, catalysis, neurosciences and mental health and sustainable places. Heath Jeffries | EurekAlert!
Write the pseudocode for a recursive function TERNARY TO BINARY that will convert a ternary tree into a binary search tree. © BrainMass Inc. brainmass.com I recommend reading the attached file; it's the same as below but with nicer formatting :-) Your problem is to write a pseudocode algorithm which will take in a ternary search tree and convert it into a binary search tree. Recall that a ternary search tree is just a binary search tree where each node has an extra child. I will assume for this problem that in your binary search tree, a matching node becomes the right child; you can easily modify the algorithm if your teacher prefers it to be the left child. Let's start with just the most basic pseudocode: set binary_node_1 = ternary_node_1 The expert writes pseudocode for a recursive function that converts a ternary tree into a binary search tree.
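To make the sketch above concrete, here is a hypothetical C++ version of the recursion. The node layouts are assumptions (the original pseudocode only names the nodes), and it follows the solution's convention that matching keys become right children: the middle child holds duplicates of the node's key.

```cpp
#include <cassert>
#include <vector>

// Assumed node layouts; the original pseudocode names only the nodes.
struct TNode { int key; TNode *left, *mid, *right; };  // mid holds equal keys
struct BNode { int key; BNode *left, *right; };

// TERNARY TO BINARY: copy the node, recurse on the three subtrees, then
// splice the equal-key (mid) chain in front of the greater-key subtree
// on the right, so that duplicates become right children.
BNode* ternaryToBinary(TNode* t) {
    if (!t) return nullptr;
    BNode* b = new BNode{t->key, ternaryToBinary(t->left), nullptr};
    BNode* eq = ternaryToBinary(t->mid);    // keys equal to t->key
    BNode* gt = ternaryToBinary(t->right);  // keys greater than t->key
    if (!eq) { b->right = gt; return b; }
    b->right = eq;
    BNode* tail = eq;
    while (tail->right) tail = tail->right; // rightmost duplicate
    tail->right = gt;                       // greater keys hang below it
    return b;
}

// In-order traversal, used to check the BST property on small examples.
void inorder(const BNode* b, std::vector<int>& out) {
    if (!b) return;
    inorder(b->left, out);
    out.push_back(b->key);
    inorder(b->right, out);
}
```

On a three-child example with a duplicate key, an in-order walk of the result comes out sorted with the duplicate sitting as a right child, which is the property the solution's convention requires.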
Resolution and contrast in an electron microscope image are dependent in part upon several factors in the selection or preparation of the specimen over which the electron microscopist has some control. For example, there are various ways of observing the specimen: as a dispersed powder; as a thin section; and indirectly by the use of surface replicas. One cannot expect the same limits of resolution for each of these methods, and even within each method the resolving power and contrast vary with specimen thickness, film thickness, stability of the specimen, and stability of the supporting film or replica. In order to establish some basis for selecting one method over the other, a brief discussion of some of the problems peculiar to the preparation techniques used in electron microscopy will be given. The techniques described are intended only as a guide. The individual must decide what method is best for his particular specimen and may often have to modify existing methods. Because of the need to evacuate the lens column and specimen chamber to pressures in the range of 10⁻⁴ mm Hg, the specimen must be in a dry state. In this connection it must be noted that even the presence of thin films of organic materials of low volatility can have a harmful effect by contributing to the formation of a carbonaceous film at the surface of the specimen due to breakdown of the structure in the electron beam; such films can seriously reduce contrast in the image and impair resolution.
Posted by Etienne Membrives This article is now at https://etienne.membrives.fr/pebibyte/articles/simple-face-recognition-using-opencv/. Posted on January 21, 2011, in Code and tagged C++, face recognition, OpenCV, source code. 28 Comments. thanks for the rear example how can i run this on vs2010 I am sorry, but I do not know how to run this code on Visual Studio. I don’t use any special GCC functions here, only the C++ standard library and OpenCV. So it should be a simple matter of installing OpenCV on your machine, and adapting the linking flags. thanks for your reply. are you using command prompt to run the code if not vs2010? Yes, I am using the command-line prompt, but under GNU/Linux. You may want to look at this page on how to install and use OpenCV with Visual Studio. yes ive done the opencv stepup properly. it keeps asking me “did you forget to add include stdafx.h to your source”. can you email me the project folder. so i can try running it. are you using the same database in the cognetics article? hbk1_hbk1 at hotmail dot com I am not using Visual Studio (I actually never used it), so my directory, which is, by the way, composed uniquely of facerecog.cpp, will not help you. Regarding stdafx.h in the OpenCV example source code, it refers to some Microsoft specific precompiled headers. The tutorial says Click on “Debug -> Build Solution” or “Release -> Build Solution” to compile your project. If there was a compile error then you did not set the header file includes correctly. If there was a linker error then you did not set the library path or lib files correctly. Double-check the steps to create your project, especially parts about include files. If that doesn’t help, I think you can remove the stdafx.h include, but you may have to change _tmain and _TCHAR into something more common. thx for your help. ill try doing the above and let you know how it goes.
hi i got rid of the errors by reinstalling opencv and building the libs all over again. the codes fine now. just wanted to know where to store the images. im using the same images as in the article . my train.txt and test.txt file are in the same directory as the source code. should i leave the images in one folder in there? also is the result for this code the same as the article? The train.txt and test.txt files should be in the same directory as the binary. If you follow the Cognotics example, images are in several directories numbered s1 to s40. These directories also live in the same directory as the binary and train.txt/test.txt. Obviously, you can modify test.txt and train.txt to point toward images elsewhere. hi, how can i modify this code so when it is given a image of one of the people in the database it will return that persons information. im sorry for being a bother to you. im new to this and im working on a uni project with similar functionality. thanks for your help I won’t write code for you, but I can give you some indications regarding the overall design: you have a database storing the information you want to display. What you want, is to add a field to your database to store the face of the person projected into the PCA space. You also need to store the projection matrix (the eigenvectors). At the recognition phase, you have to project the input face into the PCA space using the eigenvectors, then compute the distance between the projected vector and each stored vector and take the smallest distance. If you’re stuck with the implementation, you are better off taking some reference book and reading it (The Art of Computer Programming by Donald Knuth is an example), as it doesn’t seem to involve any difficulty outside of the PCA code (that you have on this blog). thank you soo much for the tips. and no i will not ask you to write any code for me. 
but i will take any advice on how to implement the above 🙂 i was actually thinking along the same lines . so let me just confirm the above. a database of the user information with a field that contains the projected image and eigenvectors.of the user. regarding the recognition phase is the user with the lowest distance between the stored vectors the correct user? so i only have to compute the vectors to get the correct result? Except that eigenvectors are common to all users, and only eigenvalues need to be stored for each one of them (“set of eigenvalues for a user” and “projected user face” are strictly equivalent). See any linear algebra course (Stanford, MIT and Khan Academy are three good free resources) to have more in-depth understanding of what’s going on here. And yes, regarding the recognition phase, you only have to compute the eigenvalues of the face to be recognized and take the user with the smallest distance. You may want to add a threshold to this distance, though, if you have to address the case where you try to recognize a person not in the database. Can you please help me for run this code in VS 2008? I’ve never use c++ in VS, but I’ve use c# in VS. Sorry, I can’t. hi~I check you code in windows,i have problems for all of the vectors this seems to not work, all vectors only contain ‘nan’ as components.and eigen values all 0;I need you help, thx~ The problem with this code is that if you give an unknown face it will recognize it at some one else face. It should tell us that it is an unknown face. how can it Possible??/ It is actually quite simple in theory, but needs a lot of tuning in practice. Basically, you need to define a minimum distance and declare a match only if the distance is less than this threshold (in findNearestNeighbor()). I have a doubt, if say i need to add one more face to the training face list. then do i need to re-create the facedata.xml? 
Is there any work around for the same, like can i include one more sample training face instead of creating the whole xml file again? I have not looked at this code for quite some time, but yes, you don’t need to re-create everything. You just need to update projectedTrainFaceMat (add one more projected face) and update nTrainFaces accordingly. Thanks for this solution. I read all the articles/code from the willowgarage links and cognotics article, but it was a huge pain trying to actually get the code up and running. Then I found this. Thanks a lot. I built this code on linux and trying to run. I have created a txt file called ‘train.txt’. This file contains 2 jpg files in each line. when i run this app with ‘train’ as command line argument, the application is crashing in function cvLoadImage. Can you please suggest what the train.txt file should contain and what is the image size we need to use? Should the images contain more faces? If there are sample ‘train.txt’ with jpg image files, please share. thanks in advance Hi This is a followup post to my previous post, i have analyzed the code and put debug statements to debug the issue. Find below the details of my analysis. Please suggest where i’m going wrong.
1. I have created a file “train.txt” and kept it in the current folder
2. if i cat train.txt, it shows
3. face2.jpg is a jpeg file with 2 faces and face3.jpg is with 3 faces. These are color jpg files
4. The function loadFaceImgArray(“train.txt”) will open the train.txt file; it will go in the following while loop to get the image file names from the “train.txt” file: // count the number of faces while( fgets(imgFilename, 512, imgListFile) ) ++nFaces;
5. when it comes out of this loop, i will have nFaces = 2 and imgFilename=face3.jpg
6. Allocate memory for faceImgArr = 8 bytes
7. Call cvCreateMat
8. Go in for loop till nFaces, and load image, here it goes in 2 loops and loads face3.jpg
9. Close the image file and return
10. Call doPCA() function
11.
In doPCA() function, it crashes in statement faceImgSize.width = faceImgArr->width; 12. It is not able to access faceImgArr->width location and hence giving segmentation fault. Can you please suggest me where i’m going wrong? thanks in advance. I think the issue is with the format of your train.txt file. As you can see on line 108, it expects each line to be composed of a decimal (actually, an integer), then a space, then a string. The actual format is: So a train.txt file with two faces, one for person #2 and one for person #3 should be: Hope that helps Thanks a lot for the inputs. Even though i could solve the crash issue, sorry that i’m still not clear about the train.txt file. Say i have test1.jpg which contains only one face, then should i write in the train.txt file as 1 test1.jpg and if the test1.jpg has 2 faces, then it should be 2 test1.jpg? Secondly is train.txt same as test.txt? else what should be test.txt? How to analyse the output of this program? It shows nearest= and truth= with some big number on the screen. How to interpret this? Hi, i would like to know which version of OpenCV i need to use for building this code? I’m currently using OpenCV 2.4. Please let me know if this works with OpenCV 2.4? I can’t understand why nEigens is nTrainFaces-1. Can it be equal? I tried nEigens = nTrainFaces; but in this situation the last eigen image is returned null. Why? Pingback: DetectionBasedTracker : OpenCV implementation. « bytesandlogics
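The train.txt format discussed in the thread (an integer person ID, a space, then an image path on each line) can be parsed more defensively than a bare fscanf(), which is one way to avoid the cvLoadImage crash on malformed lines. A hypothetical self-contained sketch — the struct and function names are illustrative, not from the blog's facerecog.cpp:

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// One line of train.txt / test.txt: "<person-id> <image-path>".
struct FaceEntry { int personId; std::string imagePath; };

// Read entries line by line, silently skipping blank or malformed lines
// instead of passing garbage on to the image loader.
std::vector<FaceEntry> loadFaceList(std::istream& in) {
    std::vector<FaceEntry> entries;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        FaceEntry e;
        if (ls >> e.personId >> e.imagePath)  // both fields must parse
            entries.push_back(e);
    }
    return entries;
}
```

With a file containing "2 face2.jpg" and "3 face3.jpg", this yields two entries; a line that lacks the leading integer is dropped rather than crashing downstream.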
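Several comments above ask how to reject an unknown face. The author's suggestion — declare a match only when the distance in PCA space is below a threshold — can be sketched as plain C++. This echoes the role of findNearestNeighbor() in the article's code, but the signature, the squared-Euclidean metric, and all names here are illustrative, not the blog's actual implementation:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Each enrolled user is a vector of PCA projection coefficients.
// An input (probe) face, projected with the same eigenvectors, is matched
// to the closest stored vector; if even the best distance exceeds the
// threshold, the face is reported as unknown (-1).
int findNearestNeighbor(const std::vector<std::vector<double>>& trainCoeffs,
                        const std::vector<double>& probeCoeffs,
                        double rejectThreshold) {
    int best = -1;  // -1 means "unknown face"
    double bestDist = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < trainCoeffs.size(); ++i) {
        double d = 0.0;
        for (std::size_t k = 0; k < probeCoeffs.size(); ++k) {
            double diff = trainCoeffs[i][k] - probeCoeffs[k];
            d += diff * diff;  // squared Euclidean distance in PCA space
        }
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return (bestDist <= rejectThreshold) ? best : -1;
}
```

As noted in the thread, the threshold value needs tuning in practice; too tight and enrolled users are rejected, too loose and strangers match someone.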
The world might need to radically combat global warming by deliberately injecting a small volcanic eruption's worth of planet-cooling sulfur into the stratosphere. That's the talked-about proposal being set forth in a controversial essay by Nobel Prize-winning chemist Paul J. Crutzen, which appears in the August issue of the journal Climatic Change (DOI: 10.1007/s10584-006-9101-y). The journal's editor, climatologist Stephen H. Schneider of Stanford University, says scientists and politicians "must study the potential" of geoengineering-based strategies such as the global-scale cooling experiment that Crutzen, of Max Planck Institute for Chemistry, in Mainz, Germany, outlines in his essay. However, Schneider stresses that considering such drastic measures to mitigate global warming should in no way reduce a sense of urgency for safer solutions, including more energy-frugal lifestyles and helping the developing world leapfrog over the carbon dioxide-emitting adventure of the Industrial Revolution. In his essay, Crutzen explains that global warming from the buildup of carbon dioxide and other greenhouse gases is partially countered by a cooling effect due to backscattering of sunlight by aerosols that form from sulfate particles. In this way, even the sulfurous pollution that has caused acid rain, now on a decline, has had a cooling effect. The 1991 eruption of Mount Pinatubo in the Philippines provides a dramatic natural example of the power of aerosol cooling. Six months after the eruption, Crutzen notes, about 6 billion kg of sulfur (from the volcano's initial injection of 10 billion kg) in the form of aerosol-forming sulfate remained in the stratosphere. The result of this event was a 0.5 °C cooling at Earth's surface in the year following the eruption. It would take about 5.3 billion kg of sulfur introduced into the stratosphere per year to compensate for a doubling of atmospheric carbon dioxide levels, Crutzen says.
Increasing the amount of sulfate aerosols "can be achieved by burning S2 or [by] H2S carried into the stratosphere on balloons and by artillery guns to produce SO2," he suggests in his essay. Perhaps a chemist could develop a sulfur-containing gas that is stable lower in the atmosphere, where it would be easier to place, but which then would undergo a reaction in the stratosphere to produce SO2 or another sulfur-containing gas that can produce aerosol-forming H2SO4, Crutzen asks as a prod to his chemist colleagues. Crutzen describes his climate engineering proposal as a last-resort hedge on what he fears will become a too-little-too-late response to global warming. "I am only in favor of doing the manipulation if there are no severe side effects and the climate is running away," Crutzen tells C&EN. For now, he says, "I recommend research." In recognition of the controversial nature of Crutzen's ideas, Climatic Change commissioned a half-dozen articles to help readers develop a fuller perspective of the pros, cons, and uncertainties, Schneider says. Climate modeler Gavin Schmidt of NASA's Goddard Institute for Space Studies in New York City views Crutzen's essay as a strong but prudent call to other scientists to engage in "what if" scenarios, including the worst cases. "I am not as pessimistic as Crutzen," Schmidt says. There's a chance the world will take steps that will make geoengineering experiments unnecessary, he says, a result that Crutzen rates as "a pious wish."
Near Realtime Maps of Possible Earthquake-Triggered Landslides USGS scientists have been developing a system to quickly identify areas where landslides may have been triggered by a significant earthquake. Costs and consequences of natural hazards can be enormous; each year more people and infrastructure are at risk. We develop and apply hazards science to help protect U.S. safety, security, and economic well-being. These scientific observations, analyses, and research are crucial for the Nation to become more resilient to natural hazards. This information is preliminary or provisional and is subject to revision. It is being provided to meet the need for timely science to assess ongoing hazards. The information has not received final approval by the U.S. Geological Survey (USGS) and is provided on the condition that neither the USGS nor the U.S. Government shall be held liable for any damages... A summary of recent and past landslides and debris flows caused by rainfall in Southern California. A summary of recent and past landslides and debris flows caused by rainfall in Northern and Central California. Once the smoke clears from a wildfire, the danger is not over! Other hazards, such as flash floods and debris flows, now become the focus. Areas recently burned by wildfires are particularly susceptible to flash floods and debris flows during rainstorms. Estimates of the probability and volume of debris flows that may be produced by a storm in a recently burned area, using a model with characteristics related to basin shape, burn severity, soil properties, and rainfall. Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can produce dangerous flash floods and debris flows.... Our Mission Statement The mission of the Samples Repository (Buczkowski, 2018) is to 1.
serve as the USGS repository for geological, biological, and geochemical samples collected through field research sponsored by the WHCMSC, 2. provide long-term storage of these samples collected by WHCMSC scientists and affiliated researchers under controlled conditions to ensure... The USGS Woods Hole Coastal and Marine Science Center's Samples Repository is co-located on the Woods Hole Oceanographic Institution's Quissett Campus at 384 Woods Hole Road in Woods Hole, Massachusetts. The K.O. Emery Geotechnical Wing serves as the primary storage location for all geological, biological, and geochemical samples collected through USGS research, or in the permanent... Since 2002, the Woods Hole Coastal and Marine Science Center’s Samples Repository (WHCMSC) has been supporting research by providing secure storage for geological, biological, and geochemical samples; maintaining organization and an active inventory of these sample collections; as well as by providing access to these collections for study and reuse. Over the years, local storage... Sampling data collected in Cape Cod Bay, Buzzards Bay, and Vineyard Sound; south of Martha's Vineyard; and south and east of Nantucket, Massachusetts, in 2011, U.S. Geological Survey Field Activity 2011-015-FA These survey data are used to explore the nature of the sea floor and, in conjunction with high-resolution geophysical data, to make interpretive maps of sedimentary environments and validate acoustic remote sensing data. Integrated terrain models covering 16,357 square kilometers of the Massachusetts coastal zone and offshore waters were built to provide a continuous elevation and bathymetry terrain model for ocean planning purposes. A Triangulated Irregular Network was created from public-domain bathymetric and LiDAR data using the ArcGIS terrain-model framework. Conceptual salt marsh units for wetland synthesis: Edwin B. Forsythe National Wildlife Refuge, New Jersey The salt marsh complex of the Edwin B. 
Forsythe National Wildlife Refuge (EBFNWR), which spans over Great Bay, Little Egg Harbor, and Barnegat Bay (New Jersey, USA), was delineated to smaller, conceptual marsh units by geoprocessing of surface elevation data. Flow accumulation based on the relative elevation of each location is used to determine the ridge lines that separate each marsh unit.... Geophysical data collected along the Atlantic continental slope and rise 2014, U.S. Geological Survey Field Activity 2014-011-FA, cruise MGL1407 In summer 2014, the U.S. Geological Survey conducted a 21-day geophysical program in deep water along the Atlantic continental margin by using R/V Marcus G. Langseth (Field Activity Number 2014-011-FA). The purpose of the seismic program was to collect multichannel seismic reflection and refraction data to determine sediment thickness Swath bathymetry collected offshore of Fire Island and western Long Island, New York in 2014, U.S. Geological Survey Field Activity 2014-072-FA Hurricane Sandy, the largest storm of historical record in the Atlantic basin, severely impacted southern Long Island, New York in October 2012. In 2014, the U.S. Geological Survey (USGS), in cooperation with the U.S. Army Corps of Engineers (USACE), conducted a high-resolution multibeam echosounder survey with Alpine Ocean Seismic Survey, Inc., offshore of Fire Island and western Long Island... This dataset displays the spatial variation mean tidal range (i.e. Mean Range of Tides, MN) in the Edwin B. Forsythe National Wildlife Refuge, which spans over Great Bay, Little Egg Harbor, and Barnegat Bay in New Jersey, USA. MN was based on the calculated difference in height between mean high water (MHW) and mean low water (MLW) using the VDatum (v3.5) software (... Exposure potential of salt marsh units in Edwin B. Forsythe National Wildlife Refuge to environmental health stressors This dataset displays the exposure potential to environmental health stressors in the Edwin B. 
Forsythe National Wildlife Refuge (EBFNWR), which spans over Great Bay, Little Egg Harbor, and Barnegat Bay in New Jersey, USA. Exposure potential is calculated with the Sediment-bound Contaminant Resiliency and Response (SCoRR) ranking system (Reilly and others, 2015) Water quality in the Barnegat Bay estuary along the New Jersey coast is the focus of a multidisciplinary research project begun in 2011 by the U.S. Geological Survey (USGS) in cooperation with the New Jersey Department of Environmental Protection. A continuous elevation surface (terrain model) integrating all available elevation data in the area was produced for water circulation modeling... Point cloud from low-altitude aerial imagery from unmanned aerial system (UAS) flights over Coast Guard Beach, Nauset Spit, Nauset Inlet, and Nauset Marsh, Cape Cod National Seashore, Eastham, Massachusetts on 1 March 2016 (LAZ file) This point cloud was derived from low-altitude aerial images collected from an unmanned aerial system (UAS) flown in the Cape Cod National Seashore on 1 March, 2016. The objective of the project was to evaluate the quality and cost of mapping from UAS images. The point cloud contains 434,096,824 unclassifed and unedited geolocated points. Atlantic coast piping plover (Charadrius melodus) nest sites are typically found on low-lying beach and dune systems, which respond rapidly to coastal processes like sediment overwash, inlet formation, and island migration that are sensitive to climate-related changes in storminess and the rate of sea-level rise. Data were obtained to understand piping plover habitat distribution. Conceptual salt marsh units for wetland synthesis: Edwin B. 
Forsythe National Wildlife Refuge, New Jersey Recent research shows that sediment budgets of microtidal marsh complexes on the Atlantic and Pacific coasts of the United States consistently scale with areal unvegetated/vegetated marsh ratio (UVVR) despite differences in sea-level rise, tidal range, elevation, vegetation, and stressors. This highlights UVVR as a broadly applicable indicator of microtidal marsh stability. As part of this data synthesis effort, hydrodynamic and sediment transport modeling of Barnegat Bay Little Egg Harbor (BBLEH) has been used to create the following wetland data layers in Edwin B. Forsythe National Wildlife Refuge (EBFNWR), New Jersey: 1) Hydrodynamic residence time, 2) salinity change and 3) salinity exposure change in wetlands, and 4) sediment supply to wetlands. The seismic-landslide probability map covers the counties of Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Santa Cruz, Solano, and Sonoma. The slope failures are triggered by a hypothetical earthquake with a moment magnitude of 7.0 occurring on April 18, 2018, at 4:18 p.m. on the Hayward Fault in the east bay part of California’s San Francisco Bay region. CSMP is a cooperative program to create a comprehensive coastal and marine geologic and habitat base map series for all of California's State waters. Data collected during this project reveal the seafloor offshore of the California coast in unprecedented detail and provide an ecosystem context for the effective management of this precious marine resource. This portal is a “go to” source for maps related to ocean and coastal mapping. Information is organized by geography or region, by theme, and by the year data was published. We conduct post-fire debris-flow hazard assessments for select fires in the Western U.S.
We use geospatial data related to basin morphometry, burn severity, soil properties, and rainfall characteristics to estimate the probability and volume of debris flows that may occur in response to a design storm. This map and the original delineate areas where large numbers of landslides have occurred and areas which are susceptible to landsliding in the conterminous United States. The purpose of the Inventory Project is to provide a framework and tools for displaying and analyzing landslide inventory data collected in a spatially aware digital format from individual states. The Planetary Geologic Mapping Program serves the international science community through the production of high-quality and refereed geologic maps of planetary bodies. This program is in coordination between NASA science programs and the USGS Astrogeology Science Center. Regional patterns of Mesozoic-Cenozoic magmatism in western Alaska revealed by new U-Pb and 40Ar/39Ar ages: Chapter D in Studies by the U.S. Geological Survey in Alaska, vol. 15 In support of regional geologic framework studies, we obtained 50 new argon-40/argon-39 (40Ar/39Ar) ages and 33 new uranium-lead (U-Pb) ages from igneous rocks of southwestern Alaska. Most of the samples are from the Sleetmute and Taylor Mountains quadrangles; smaller collections or individual samples are from the Bethel, Candle, Dillingham,...Bradley, Dwight C.; Miller, Marti L.; Friedman, Richard M.; Layer, Paul W.; Bleick, Heather A.; Jones, James V.; Box, Steven E.; Karl, Susan M.; Shew, Nora B.; White, Timothy S.; Till, Alison B.; Dumoulin, Julie A.; Bundtzen, Thomas K.; O'Sullivan, Paul B.; Ullrich, Thomas D. Evidence for coseismic subsidence events in a southern California coastal saltmarsh Paleoenvironmental records from a southern California coastal saltmarsh reveal evidence for repeated late Holocene coseismic subsidence events. Field analysis of sediment gouge cores established discrete lithostratigraphic units extend across the wetland. 
Detailed sediment analyses reveal abrupt changes in lithology, percent total organic matter,...Leeper, Robert; Rhodes, Brady P.; Kirby, Matthew E.; Scharer, Katherine M.; Carlin, Joseph A.; Hemphill-Haley, Eileen; Avnaim-Katav, Simona; MacDonald, Glen M.; Starratt, Scott W.; Aranda, Angela Electrical resistivity investigation of fluvial geomorphology to evaluate potential seepage conduits to agricultural lands along the San Joaquin River, Merced County, California, 2012–13 Increased flows in the San Joaquin River, part of the San Joaquin River Restoration Program, are designed to help restore fish populations. However, increased seepage losses could result from these higher restoration flows, which could exacerbate existing drainage problems in neighboring agricultural lands and potentially damage crops. Channel...Groover, Krishangi D.; Burgess, Matthew K.; Howle, James F.; Phillips, Steven P. Development of a coupled wave-flow-vegetation interaction model Emergent and submerged vegetation can significantly affect coastal hydrodynamics. However, most deterministic numerical models do not take into account their influence on currents, waves, and turbulence. In this paper, we describe the implementation of a wave-flow-vegetation module into a Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST)...Beudin, Alexis; Kalra, Tarandeep S.; Ganju, Neil K.; Warner, John C. Barrier island breach evolution: Alongshore transport and bay-ocean pressure gradient interactions Physical processes controlling repeated openings and closures of a barrier island breach between a bay and the open ocean are studied using aerial photographs and atmospheric and hydrodynamic observations. The breach site is located on Pea Island along the Outer Banks, separating Pamlico Sound from the Atlantic Ocean. Wind direction was a major...Safak, Ilgar; Warner, John C.; List, Jeffrey Biogeomorphic classification and images of shorebird nesting sites on the U.S. 
Atlantic coast Atlantic coast piping plover (Charadrius melodus) nest sites are typically found on low-lying beach and dune systems, which respond rapidly to coastal processes like sediment overwash, inlet formation, and island migration that are sensitive to climate-related changes in storminess and the rate of sea-level rise. Data were obtained to understand...Sturdivant, Emily; Thieler, E. Robert; Zeigler, Sara; Winslow, Luke; Hines, Megan K.; Read, Jordan S.; Walker, Jordan I. High-resolution geophysical data collected along the Delmarva Peninsula, 2014, USGS Field Activity 2014-002-FA The Delmarva Peninsula is a 220-kilometer-long headland, spit, and barrier island complex that was significantly affected by Hurricane Sandy. A U.S. Geological Survey cruise was conducted in the summer of 2014 to map the inner continental shelf of the Delmarva Peninsula using geophysical and sampling techniques to define the geologic framework...Pendleton, Elizabeth; Ackerman, Seth D.; Baldwin, Wayne E.; Danforth, William W.; Foster, David S.; Thieler, E. Robert; Brothers, Laura L. Oceanographic and water-quality measurements collected south of Martha’s Vineyard, MA, 2014–2015 This web page provides access to oceanographic and water-quality observations made at seven sites near the Martha’s Vineyard Coastal Observatory (MVCO) as part of National Science Foundation “Bottom Stress and the Generation of Vertical Vorticity Over the Inner Shelf” project. The objective was to measure bottom stress at several locations with...Montgomery, Ellyn T.; Sherwood, Christopher R.; Martini, Marinna A.; Trowbridge, Jannelle; Scully, M.; Brosnahan, Sandra M. 
Low-altitude aerial imagery and related field observations associated with unmanned aerial systems (UAS) flights over Coast Guard Beach, Nauset Spit, Nauset Inlet, and Nauset Marsh, Cape Cod National Seashore, Eastham, Massachusetts on 1 March 2016
Low-altitude (approximately 120 meters above ground level) digital images were obtained from cameras mounted in a fixed-wing unmanned aerial vehicle (UAV) flown from the lawn adjacent to the Coast Guard Beach parking lot on 1 March 2016. The UAV was a Skywalker X8 operated by Raptor Maps, Inc., contractors to the U.S. Geological Survey (USGS)...
Sherwood, Christopher R.

Geomorphological control on variably saturated hillslope hydrology and slope instability
In steep topography, the processes governing variably saturated subsurface hydrologic response and the interparticle stresses leading to shallow landslide initiation are physically linked. However, these processes are usually analyzed separately. Here, we take a combined approach, simultaneously analyzing the influence of topography on both...
Formetta, Giuseppe; Simoni, Silvia; Godt, Jonathan W.; Lu, Ning; Rigon, Riccardo

Coastal bathymetry data collected in June 2014 from Fire Island, New York—The wilderness breach and shoreface
Scientists from the U.S. Geological Survey St. Petersburg Coastal and Marine Science Center in St. Petersburg, Florida, collected bathymetric data along the upper shoreface and within the wilderness breach at Fire Island, New York, in June 2014. The U.S. Geological Survey is involved in a post-Hurricane Sandy effort to map and monitor the...
Nelson, Timothy R.; Miselis, Jennifer L.; Hapke, Cheryl J.; Wilson, Kathleen E.; Henderson, Rachel E.; Brenner, Owen T.; Reynolds, Billy J.; Hansen, Mark E.

Get your science used—Six guidelines to improve your products
Introduction: Natural scientists, like many other experts, face challenges when communicating to people outside their fields of expertise. This is especially true when they try to communicate with those whose background, knowledge, and experience are far distant from that field of expertise. At a recent workshop, experts in risk communication offered...
Perry, Suzanne C.; Blanpied, Michael L.; Burkett, Erin R.; Campbell, Nnenia M.; Carlson, Anders; Cox, Dale A.; Driedger, Carolyn L.; Eisenman, David P.; Fox-Glassman, Katherine T.; Hoffman, Sherry; Hoffman, Susanna M.; Jaiswal, Kishor S.; Jones, Lucile M.; Luco, Nicolas; Marx, Sabine M.; McGowan, Sean M.; Mileti, Dennis S.; Moschetti, Morgan P.; Ozman, David; Pastor, Elizabeth; Petersen, Mark D.; Porter, Keith A.; Ramsey, David W.; Ritchie, Liesel A.; Fitzpatrick, Jessica K.; Rukstales, Kenneth S.; Sellnow, Timothy L.; Vaughon, Wendy L.; Wald, David J.; Wald, Lisa A.; Wein, Anne; Zarcadoolas, Christina

Lava still oozes from the northern edge of the ‘a‘ā flow near the lighthouse at Cape Kumukahi (upper right). Smoke from burning vegetation marks the location of lava oozeouts. View is toward the northeast.

Braided section of the lava channel located "downstream" between about 3.5 to 6 km (2.2 to 3.7 mi) from fissure 8 (upper right). The width of the two channels in the middle center is about 325 m (1,065 ft). View is toward the southwest.

View of the partially filled Kapoho Crater (center) and the open lava channel where it makes a 90-degree turn around the crater. The open channel no longer directly enters the ocean. Lava flows freely through the channel only to the southern edge of Kapoho Crater (left side of image). Clearly, lava moves into and through the molten core of the thick ‘a‘ā flow across a...

This animated GIF shows a sequence of radar amplitude images that were acquired by the Agenzia Spaziale Italiana CosmoSkyMed satellite system. The images illustrate changes to the...

For several years, a special ultraviolet camera has been located near Keanakākoʻi Crater at Kīlauea's summit. The camera was capable of detecting SO2 gas coming from Halema‘uma‘u crater. This morning, the camera was removed because there is very little SO2 to measure these days at the summit. In addition, cracking near Keanakākoʻi Crater was making access difficult...

The WorldView-3 satellite acquired this view of Kīlauea's summit on July 3. Despite a few clouds, the area of heaviest fractures in the caldera is clear. Views into the expanding Halema‘uma‘u crater reveal a pit floored by rubble. HVO, on the northwest caldera rim, is labeled.

Potential coastal change impacts due to Alberto.

USGS partnership with Lower Elwha Klallam Tribe featured in new fact sheet on Elwha River dam removals.

A USGS-led special issue of Marine Geology received a most-cited certificate from the journal in May 2018.

USGS research geologist Sam Johnson of the Pacific Coastal and Marine Science Center (PCMSC) made an invited visit to the Korea Institute of Geology and Mineral Industries (KIGAM) in Daejon, South Korea, on April 24–26.

With ash eruptions occurring from Kilauea's summit this week, there is a threat of an even larger steam-driven violent explosion. Such an eruption could happen suddenly and send volcanic ash 20,000 feet into the air, threatening communities for miles.

Representatives of the news media are invited to join a telephone briefing for the latest updates on Kīlauea's volcanic activity and its impacts.

On Thursday, April 26, research geologist Curt Storlazzi of the USGS Pacific Coastal and Marine Science Center gave a public lecture on "The Role of U.S. Coral Reefs in Coastal Protection—Rigorously valuing flood reduction benefits to inform coastal zone management decisions."

A deluge of media coverage followed publication of a USGS-led study showing that sea-level rise and wave-driven flooding could make many low-lying atoll islands uninhabitable by the mid-21st century by contaminating freshwater aquifers and damaging infrastructure. The...

Estuaries and wetlands provide a critical defense against storms and sea-level rise while providing economically valuable services. How well they protect coastal communities and host diverse ecosystems is largely a function of their shape (morphology), which is controlled by factors such as sediment movement and biological feedbacks.

Have you ever wondered what scientists do at a volcano observatory when a volcano is not erupting? There is plenty to accomplish—probably more than you can imagine.

May is Volcano Preparedness Month in Washington, providing residents an opportunity to become more familiar with volcanic risk in their communities and learn about steps they can take to reduce potential impacts.
New evidence published in Science by Smithsonian geologists dates the closure of an ancient seaway at 13 to 15 million years ago and challenges accepted theories about the rise of the Isthmus of Panama and its impact on world climate and animal migrations. A team analyzed zircon grains from rocks representing an ancient sea and riverbeds in northwestern South America. The team was led by Camilo Montes, former director of the Panama Geology Project at the Smithsonian Tropical Research Institute. He is now at the Universidad de los Andes. The team's new date for closure of the Central American Seaway, from 13 to 15 million years ago, conflicts with the widely accepted 3 million year date for the severing of all connections between the Atlantic and the Pacific, the result of work done by the Panama Paleontology Project, directed by emeritus scientists Jeremy B.C. Jackson and Anthony Coates, also at the Smithsonian Tropical Research Institute. If a land connection was complete by this earlier date, the rise of the Isthmus of Panama from the sea by tectonic and volcanic action predates the movement of animals between continents known as the Great American Biotic Interchange. The rise of the Isthmus is implicated in major shifts in ocean currents, including the creation of the Gulf Stream that led to warmer temperatures in northern Europe and the formation of a great ice sheet across North America. "Beds younger than about 13 to 15 million years contain abundant zircon grains with a typically Panamanian age," said Montes. "Older beds do not. We think these zircons were deposited by rivers flowing from the Isthmus of Panama when it docked to South America, nearly 10 million years earlier than the date of 3 million years that is usually given for the connection." The new model sends scientists like the University of Colorado at Boulder's Peter Molnar off to look for other explanations for climate change. 
Molnar wrote in the journal Paleoceanography, "...let me state that the closing of the Central America Seaway seems to be no more than a bit player in global climate change. Quite likely it is a red herring." "What is left now is to rethink what else could have caused such dramatic global processes nearly 3 million years ago," said Carlos Jaramillo, Smithsonian Tropical Research Institute scientist and member of the research team. The Smithsonian Tropical Research Institute, headquartered in Panama City, Panama, is a unit of the Smithsonian Institution. The institute furthers the understanding of tropical nature and its importance to human welfare, trains students to conduct research in the tropics and promotes conservation by increasing public awareness of the beauty and importance of tropical ecosystems. Website: http://www. C. Montes, A. Cardona, C. Jaramillo, A. Pardo, J.C. Silva, V. Valencia, C. Ayala, L.C. Pérez-Angel, L.A. Rodriguez-Parra, V. Ramirez, H. Niño. 2015. Middle Miocene closure of the Central American Seaway. Science. April 10. Beth King | EurekAlert!
Welcome to the Esri Java Geometry Library wiki!

Geometries can have attributes Z, M, ID. All geometries support Affine Transformations in 2D space. Included OGC Wrappers provide these types.

List of Operations

In geometry-api-java, the geometry is planar with the exception of GeometryEngine.geodesicDistanceOnWGS84. The X/Y values are considered on an infinite plane, and all operations are executed based on that assumption. Boolean operations on Polygons, Polylines, Points and MultiPoints.
- Simplify - validates and fixes the geometry to be correct for storage in geodatabase
- IsSimple - validates the geometry to be correct for storage in geodatabase
- Simplify with OGC restrictions - validates and fixes the geometry to be correct according to OGC rules
- IsSimple with OGC restrictions - checks if the geometry is correct according to OGC rules
- Boundary - creates a geometry that is the boundary of a given geometry
- Buffer - creates buffer polygon around the given geometry
- Clip - clips geometries with a 2-dimensional envelope
- Convex Hull - creates the convex hull of a given geometry
- Densify - densifies geometries by plotting points between existing vertices
- Distance - calculates the distance between two geometries
- Generalize - simplifies geometries using the Douglas-Peucker algorithm
- Offset - creates geometries that are offset from the input geometries by a given distance
- Proximity - finds the closest point on a geometry to a given point
- Quadtree structure - can be used for spatial indexing
- Geodesic Distance (see geodesicDistanceOnWGS84 in GeometryEngine) - calculates the shortest distance between two points on the WGS84 spheroid
- Certain operations can be accelerated to perform faster when a geometry instance is used over and over.
This is achieved using the Operator's accelerateGeometry method. Depending on the acceleration degree, the acceleration builds and attaches to the geometry a quad tree and/or a rasterized representation (a hit map). Presently, some relational operators and some topological operators benefit from the acceleration.
- Relational, Topological, Validation, and some other operators use an XY tolerance (aka cluster tolerance) in processing. The tolerance sets the minimum distance between coordinates below which the coordinates are considered equal. The tolerance value is derived from the SpatialReference instance and is around 1 mm; this value should match the default tolerance value in a geodatabase. When a SpatialReference instance is not provided (null), the operators use a small value derived from the bounding box of the geometries participating in the operation. OperatorSimplify, when used together with a SpatialReference instance, enforces the tolerance value, ensuring the validity of the generated geometry for storage in a geodatabase.
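The Generalize operation listed above is described as using the Douglas-Peucker algorithm. As a stand-alone illustration of that algorithm only (this is not the library's own implementation, which additionally handles tolerances, attributes, and the full geometry model), a minimal sketch might look like this:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal Douglas-Peucker line simplification. Points are {x, y} pairs.
public class Generalize {
    // Perpendicular distance from point p to the infinite line through a and b
    // (falls back to point distance when a == b).
    static double perpDist(double[] p, double[] a, double[] b) {
        double dx = b[0] - a[0], dy = b[1] - a[1];
        double len = Math.hypot(dx, dy);
        if (len == 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
        return Math.abs(dy * (p[0] - a[0]) - dx * (p[1] - a[1])) / len;
    }

    // Recursively keep only points farther than tol from the current chord.
    static List<double[]> simplify(List<double[]> pts, double tol) {
        if (pts.size() < 3) return new ArrayList<>(pts);
        int idx = -1;
        double maxD = 0;
        double[] first = pts.get(0), last = pts.get(pts.size() - 1);
        for (int i = 1; i < pts.size() - 1; i++) {
            double d = perpDist(pts.get(i), first, last);
            if (d > maxD) { maxD = d; idx = i; }
        }
        List<double[]> out = new ArrayList<>();
        if (maxD > tol) {
            // Split at the farthest point and simplify both halves.
            List<double[]> left = simplify(pts.subList(0, idx + 1), tol);
            List<double[]> right = simplify(pts.subList(idx, pts.size()), tol);
            out.addAll(left.subList(0, left.size() - 1)); // avoid duplicating the split point
            out.addAll(right);
        } else {
            // Everything between the endpoints is within tolerance; drop it.
            out.add(first);
            out.add(last);
        }
        return out;
    }

    public static void main(String[] args) {
        List<double[]> pts = new ArrayList<>();
        pts.add(new double[]{0, 0});
        pts.add(new double[]{1, 0.05});
        pts.add(new double[]{2, -0.02});
        pts.add(new double[]{3, 5});
        pts.add(new double[]{4, 0});
        // Near-collinear wiggles are dropped; the spike at x = 3 is kept.
        System.out.println(simplify(pts, 0.1).size()); // prints 4
    }
}
```

Note the role of the tolerance: with a tolerance larger than every deviation, only the two endpoints survive, which mirrors how the library's XY tolerance controls how aggressively detail is removed.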
In this program, we are going to write dates into the cells of an Excel sheet using Java. You can create any number of rows and cells, and you can store different data types in the cells. The package we need to import is org.apache.poi.hssf.usermodel. The HSSFCellStyle class is used to create a cell-style object; by setting a data format on this style object we control how the cell's value is displayed. For dates we call setDataFormat(HSSFDataFormat.getBuiltinFormat("m/d/yy h:mm")): createCellStyle() creates the HSSFCellStyle object, setDataFormat() sets the format of the cell style, and HSSFDataFormat.getBuiltinFormat() builds the format value that is passed into setDataFormat(). To add a date to a cell, create a new Date object and pass it directly into the setCellValue() method, then apply the date style to the cell. (The original code listing and its output are not preserved in this copy.)
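The steps described above can be sketched as follows. This is a hedged reconstruction against the classic Apache POI HSSF API, not the article's original listing; the class name, sheet name, and output file name are illustrative:

```java
import java.io.FileOutputStream;
import java.util.Date;

import org.apache.poi.hssf.usermodel.HSSFCell;
import org.apache.poi.hssf.usermodel.HSSFCellStyle;
import org.apache.poi.hssf.usermodel.HSSFDataFormat;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;

public class WriteDateCell {
    public static void main(String[] args) throws Exception {
        HSSFWorkbook workbook = new HSSFWorkbook();
        HSSFSheet sheet = workbook.createSheet("dates"); // sheet name is illustrative

        // Without a data format, a Date is stored as a bare number;
        // the cell style tells Excel to display it as a date/time.
        HSSFCellStyle dateStyle = workbook.createCellStyle();
        dateStyle.setDataFormat(HSSFDataFormat.getBuiltinFormat("m/d/yy h:mm"));

        HSSFRow row = sheet.createRow(0);
        HSSFCell cell = row.createCell(0);
        cell.setCellValue(new Date()); // pass the Date object directly
        cell.setCellStyle(dateStyle);  // apply the date format to the cell

        try (FileOutputStream out = new FileOutputStream("dates.xls")) { // file name is illustrative
            workbook.write(out);
        }
    }
}
```

Running this should produce an .xls file whose first cell shows the current date and time in the "m/d/yy h:mm" built-in format rather than a raw serial number.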
Five species of oysters are cultivated for food in the Puget Sound region: the Kumamoto (Crassostrea sikamea), the Pacific (Crassostrea gigas), the Eastern (Crassostrea virginica), the European (Ostrea edulis) and—the only native species—the Olympia (Ostrea lurida). We know that ocean acidification decreases the amount of calcium carbonate building blocks available in the water for these and other shell-building animals to use as they grow. As an animal's body pulls calcium carbonate from the water, it may be laid down in different formations, most commonly either calcite or aragonite, to form a shell. Pacific oysters, for example, begin building their shells 14–18 hours after the egg is fertilized, laying down a shell made of aragonite. The larva continues life in the form of plankton for the next two to three weeks, feeding on microalgae, until it develops a foot and attaches to a hard surface. At this point, the young creature switches from building a shell of aragonite to building one of calcite. Ocean acidification (OA) has been shown to cause delay of shell formation, weaker shells, and therefore increased mortality in larval oysters, as well as decreased growth and shell-building in adult oysters. We also know the changing chemistry of the ocean is a complex process with varying effects from species to species. For example, unlike many oyster species, our native Olympia oysters show no negative effects when exposed to different levels of OA. But Olympias are unique in another way: most oysters are broadcast spawners, while Olympias brood their young for 10–12 days, during that critical period of initial shell formation. This allows Olympia oyster larvae to have slower rates of growth and shell-building during this phase, possibly easing the energetic burden of trying to build a shell in conditions of OA. Young oysters are known as spat.
If you look closely at an empty oyster shell on a Puget Sound beach, you may be able to see spat—which can look like anything from a small black dot to a fingernail-sized small oyster—clinging to it. Although several species of oyster can live on muddy bottoms, they thrive when able to attach to a hard surface (another reason to leave shells on the beach!). Like a snail shell becoming a home for a hermit crab, old oyster shells become a nursery for the next generation.
The process is safer, simpler and less expensive than previous methods to convert the greenhouse gas associated with climate change to a useful product, said Krishnan Rajeshwar, interim associate vice president for research at UT Arlington and one of the authors of a paper recently published in the journal Chemical Communications. Researchers began by coating the walls of copper oxide, CuO, nanorods with crystallites made from another form of copper oxide, Cu2O. In the lab, they submerged those rods in a water-based solution rich in CO2. Irradiating the combination with simulated sunlight created a photoelectrochemical reduction of the CO2 and that produced methanol. In contrast, current methods require the use of a co-catalyst and must be conducted at high operating pressures and temperatures. Many also use toxic elements, such as cadmium, or rare elements, such as tellurium, Rajeshwar said. “As long as we are using fossil fuels, we’ll have the question of what to do with the carbon dioxide,” said Rajeshwar, a distinguished professor of chemistry and biochemistry and co-founder of the Center for Renewable Energy, Science & Technology, CREST, at UT Arlington. “An attractive option would be to convert greenhouse gases to liquid fuel. That’s the value-added option.” Co-authors on the recently published paper, “Efficient solar photoelectrosynthesis of methanol from carbon dioxide using hybrid CuO-Cu2O semiconductor nanorod arrays,” are Ghazaleh Ghadimkhani, Norma Tacconi, Wilaiwan Chanmanee and Csaba Janaky, all of the UT Arlington College of Science’s Department of Chemistry and Biochemistry and CREST. Janaky also has a permanent appointment at the University of Szeged in Hungary. Rajeshwar said he hopes that others will build on the research involving copper oxide nanotubes, CO2 and sunlight. 
“Addressing tomorrow’s energy needs and finding ways to stem the harmful effect of greenhouse gases are areas where UT Arlington scientists can connect their work to real-world problems,” said Carolyn Cason, vice president for research at the University. “We hope solutions in the lab are only the beginning.” In addition to the journal, the new work also was featured in a recent edition of Chemical and Engineering News. That piece noted that the experiments generated methanol with 95 percent electrochemical efficiency and avoided the excess energy input, also known as overpotential, of other methods. Tacconi, a recently retired research associate professor at UT Arlington, said the two types of copper oxide were selected because both are photoactive and they have complementary solar light absorption. “And what could be better in Texas than to use the sunlight for methanol generation from carbon dioxide?” Other than fuel, methanol is used in a wide variety of chemical processes, including the manufacturing of plastics, adhesives and solvents as well as wastewater treatment. In the United States, there are 18 methanol production plants with a cumulative annual capacity of more than 2.6 billion gallons, according to the paper. The carbon dioxide-to-fuel research is part of the innovation going on at The University of Texas at Arlington, a comprehensive research institution of more than 33,800 students and more than 2,200 faculty members in the heart of North Texas. Visit www.uta.edu to learn more. Traci Peterson | EurekAlert!
The Amazon rainforest puts on its biggest growth spurt during the dry season, according to new research. The finding surprised the researchers. "Most of the vegetation around the world follows a general pattern in which plants get green and lush during the rainy season, and then during the dry season, leaves fall because there's not enough water in the soil to support plant growth," said lead researcher Alfredo R. Huete of The University of Arizona in Tucson. "What we found for a large section of the Amazon is the opposite. As soon as the rains stop and you start to enter a dry period, the Amazon comes alive. New leaves spring out, there's a flush of green growth and the greening continues as the dry season progresses." The paper by Huete and eight colleagues in the United States and Brazil is scheduled for publication on 22 March in Geophysical Research Letters. This finding holds true only for the undisturbed portion of the rainforest. Areas where the primary forest has been converted to other uses or disturbed "brown down" in the dry season, said Huete, a professor of soil, water and environmental science. Huete suggests the deep roots of trees in the undisturbed forest can reach water even in the dry season, allowing the trees to flourish during the sunnier, drier part of the year. In contrast, plants in areas that have been logged or converted to other uses cannot reach deep water in the dry season and therefore either go dormant or die. The researchers say that figuring out the metabolism of the Amazon, the largest old-growth rainforest on the planet, is crucial for understanding how rainforests and other tropical environments function and how deforestation affects biodiversity and sustainable land use in the tropics. It will also help scientists better understand the global carbon cycle, which includes the natural sequestration and release of carbon dioxide, a major greenhouse gas.
The finding that converted forests grow differently from undisturbed forests has implications for understanding the effects of fires in the tropics, including the fires that sometimes rage in tropical areas during El Niño years, which bring drought to many tropical areas, including the Amazon. The research team analyzed five years of satellite images from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument mounted on NASA's Terra satellite and cross-checked them against information from sites on the ground. To determine when the Amazon rainforest is growing, Huete's lab used a new measure, called the Enhanced Vegetation Index (EVI), for detecting greenness in MODIS images of very highly vegetated rainforests. Growing plants generate more chlorophyll and therefore look greener. "We can look at this increase in greenness as a measure of Amazon health, because in the disturbed areas we don't see the greenness increase during the dry season," Huete said. "A lot of people are interested in the rainforest because of the humongous amount of carbon it stores. A very slight change in the forest's activity will make a tremendous change in the global carbon cycle." "With the satellite, we can say the whole Amazon basin is doing something," Huete said. The team's next step, Huete said, is to see if other tropical rainforests behave the same way and how the rainforests behave in El Niño years. He added, "We also want to look harder at the transition zones at the edge of the rainforest to see whether different kinds of disturbance cause different growth patterns." The research was funded by NASA and is part of the Brazilian-led Large Scale Biosphere-Atmosphere Experiment in Amazônia (LBA).
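For orientation, the Enhanced Vegetation Index used here is a standard MODIS product; its usual definition (with the published MODIS coefficients, stated from general knowledge rather than from this article) is

```latex
% Standard MODIS Enhanced Vegetation Index; rho denotes surface reflectance.
\mathrm{EVI} \;=\; G\,
\frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}
     {\rho_{\mathrm{NIR}} + C_{1}\,\rho_{\mathrm{red}} - C_{2}\,\rho_{\mathrm{blue}} + L},
\qquad G = 2.5,\; C_{1} = 6,\; C_{2} = 7.5,\; L = 1
```

Higher EVI values indicate a greener, more photosynthetically active canopy; the blue-band term corrects for aerosol influences on the red band, and L adjusts for the canopy background, which is what makes EVI more sensitive than NDVI over the dense rainforest canopies discussed here.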
Harvey Leifert | American Geophysical Union
© Steven Siegel Marine Photobank This year’s International Day of Biological Diversity (May 22) focuses on islands. Bradnee Chambers, Executive Secretary of the Convention on the Conservation of Migratory Species of Wild Animals discusses the impact of the growing problem of marine debris on islands’ wildlife and the economic and environmental consequences. - Some of the Earth’s most delicate tropical paradises are being disfigured by the by-products of the modern age – marine debris: plastic bottles, carrier bags and discarded fishing gear. Just a tiny fraction of this originates from the islands themselves – most is generated on land and enters the sea through the sewers and drains; the rest comes from passenger liners, freighters and fishing vessels, whose crews often use the oceans as a giant waste disposal unit. While much of the garbage sinks, some of it joins the giant gyres where the currents carry it across the globe. Small Island Developing States (SIDS), recognised as a distinct group of nations by the UN Conference on Environment and Development in 1992, lack the space to dedicate to landfill sites and do not have the resources to deal with the huge problem of marine debris that is being washed up on their doorstep – as the tides and currents wash the accumulated marine garbage onto their beaches. Domestically, they can take steps to ensure that they do not add to the problem – American Samoa for instance has banned plastic bags – but the “polluter pays” principle would require that those responsible for producing the waste should be made responsible for disposing of it properly. A litter-strewn beach is an eye-sore and with tourism playing a major role in the economies of many island states, marine debris can have substantial adverse financial implications threatening local businesses and employment prospects. 
Palau has banned commercial fisheries in its huge territorial waters, forsaking the lucrative licensing revenue, and will develop ecotourism based on snorkelling and scuba diving as a sustainable alternative. Alive, Palau's sharks can bring in $1.9 million each over their lifetime. Dead, a shark is worth a few hundred dollars, most of it attributable to the fins used to make soup considered a delicacy in parts of East Asia. In February, Indonesia became the world's largest sanctuary for manta rays and banned the fishing and export of the species throughout the 2.2 million square miles surrounding the archipelago. The numbers are about the same; as a tourist attraction, a manta ray is worth in excess of 1 million dollars; as meat or medicine, no more than 500 dollars. Whale-watching creates jobs while bird-watching boosts binocular and camera sales, and both help hotel occupancy rates. And the total number of international travellers broke the one billion mark for the first time in 2012, making tourism one of the main foreign exchange earners globally, particularly for many developing countries, including SIDS. But marine debris casts its ominous shadow and threatens to break the virtuous circle which would otherwise guarantee sustainable livelihoods and incentives to protect wildlife. Sea birds inadvertently feed their young with plastic which then blocks the chicks' intestines, preventing them from eating properly and leading to a slow and painful death. The staple prey of some marine turtles is jellyfish, but the turtles often mistake plastic bags for their favourite food with the same dire results. For larger species such as whales, dolphins and seals, discarded fishing gear – ghost nets – are a problem as the animals become entangled in them. This can impede the animals' movement and ability to hunt as well as cause serious injury or even death through drowning.
Remote island habitats support a rich and diverse fauna, often including unique endemic species, and provide vital stop-over sites for migrants and breeding sites for marine birds. But long-established bird colonies have fallen victim to another danger exacerbated by humans – that posed by invasive alien species. The problem of rodent infestations is well documented. Mice and rats have escaped from ships, wreaking havoc on the local bird populations, which had previously nested on the ground with impunity as there were no predators. Eradication programmes have successfully rid 400 islands of their alien rodents. Less well known is the phenomenon of “rafting”, whereby the invaders also use marine debris as a vector – plastic bottles harbour a potentially devastating assortment of worms, insect larvae, barnacles and bacteria, and warmer waters arising from climate change increase the resilience of these unwanted stowaways, making them an even more potent danger. One of the fascinations of dealing with the animals covered by the Convention on Migratory Species is how they link different countries and even continents. Many of the species are endangered, and their conservation, as well as the threats that they face, requires internationally coordinated measures. This applies to marine debris, a singularly unwelcome “migratory species” whose continued presence CMS will be doing its utmost to eliminate. Last updated on 21 May 2014
Measuring Crustal Stress: Borehole Methods

Stresses within the Earth’s crust are “measured” indirectly by coring, slotting and loading a piece of rock, with subsequent analysis of the re-equilibrium deformations. This requires assumptions about the constitutive behaviour of the rock, i.e. the relationship between measured strain and inferred stress (Eq. (4.2) for anisotropic rock, Eq. (4.8) for isotropic rock). In addition, Eq. (4.8) includes the effect of temperature on mechanical stresses. If the location of a stress measurement is chosen close to natural discontinuities (fractures, faults) or excavation (borehole, tunnel) boundaries, near-field stresses are determined (Sect. 4.4). The virgin or in-situ stress field can only be observed at distances of two to three times the size of the excavation, discontinuity or any other stress concentrator (inclusion).

Keywords: Hydraulic Fracture; Borehole Wall; Crustal Stress; Maximum Horizontal Stress; Breakdown Pressure
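Eq. (4.8) itself is not reproduced in this preview, but the standard isotropic thermoelastic (Duhamel-Neumann) relation it refers to can be sketched as follows. The moduli, thermal-expansion coefficient and strain values below are illustrative assumptions, not numbers taken from the text.

```python
import numpy as np

def stress_from_strain(strain, E, nu, alpha=0.0, dT=0.0):
    """Isotropic linear thermoelasticity:
    sigma = lam*tr(eps)*I + 2*mu*eps - (3*lam + 2*mu)*alpha*dT*I,
    relating relief strains (e.g. from overcoring) to inferred stress."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # Lame's first parameter
    mu = E / (2 * (1 + nu))                   # shear modulus
    I = np.eye(3)
    return (lam * np.trace(strain) * I + 2 * mu * strain
            - (3 * lam + 2 * mu) * alpha * dT * I)

# Illustrative: purely volumetric strain of 1e-4 in a rock with E = 50 GPa, nu = 0.25
eps = 1e-4 * np.eye(3)
sigma = stress_from_strain(eps, E=50e9, nu=0.25)
```

For these illustrative numbers the normal stresses come out to (3λ + 2µ)·10⁻⁴ = 10 MPa, and a nonzero ΔT reduces them via the thermal term, which is the temperature effect on mechanical stresses that the chapter mentions.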
Chapter 2, “Variables and Data Types,” introduced the two types of arrays available for use in your PHP programs: indexed and associative. As you may recall, indexed arrays manipulate elements according to position, while associative arrays manipulate elements in terms of a key/value association. Both offer a powerful and flexible way to handle large amounts of data.

Keywords: Array Element; Array Size; Language Construct; Input Array; Language Edition
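The distinction can be shown compactly. Since the chapter’s own PHP listings are not included in this excerpt, the snippet below uses Python as a stand-in (a list mirrors a PHP indexed array, a dict mirrors an associative array); the PHP fragments in the comments are only analogies.

```python
# Indexed "array": elements manipulated by position
# (PHP analogue: $fruits = array("apple", "banana", "cherry");)
fruits = ["apple", "banana", "cherry"]
print(fruits[1])            # access by position
fruits.append("date")       # grow the array at the end

# Associative "array": elements manipulated by key/value association
# (PHP analogue: $prices = array("apple" => 0.50, "banana" => 0.25);)
prices = {"apple": 0.50, "banana": 0.25}
print(prices["banana"])     # access by key
prices["cherry"] = 3.00     # add a new key/value pair
```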
Despite the advanced stage of diamond thin-film technology, with applications ranging from superconductivity to biosensing, the realization of a stable and atomically thick two-dimensional diamond material, named here as diamondene, is still forthcoming. Adding to the outstanding properties of its bulk and thin-film counterparts, diamondene is predicted to be a ferromagnetic semiconductor with spin polarized bands. Here, we provide spectroscopic evidence for the formation of diamondene by performing Raman spectroscopy of double-layer graphene under high pressure. The results are explained in terms of a breakdown in the Kohn anomaly associated with the finite size of the remaining graphene sites surrounded by the diamondene matrix. Ab initio calculations and molecular dynamics simulations are employed to clarify the mechanism of diamondene formation, which requires two or more layers of graphene subjected to high pressures in the presence of specific chemical groups such as hydroxyl groups or hydrogens. Diamond is the hardest and least compressible material1,2,3 as well as the best bulk heat conductor4. In addition, it is chemically inert5, highly refractive at optical wavelengths, and transparent to ultraviolet6. Unlike graphite, another bulk carbon allotrope that can easily exfoliate due to its layered hexagonal structure7, 8, diamond does not present a stable two-dimensional (2D) counterpart to date, and this is mostly due to its tetrahedral structure. Nevertheless, many of the outstanding physical properties of graphene (the 2D version of graphite) rely on its dimensionality7, 9 the same as with other 2D materials10, such as phosphorene11, silicene12, 13, 2D transition metal dichalcogenides14, and 2D transition metal carbides or nitrides15. Given the technological advances in the diamond thin-film production and applications5, 16,17,18,19,20,21 the systematic realization of an atomically thin 2D diamond structure is highly desirable. 
A first step was recently given by Barboza et al.,22 who proposed and provided experimental evidence for the existence of a 2D diamond crystal formed when two or more layers of graphene are subjected to high pressures in the presence of chemical groups. With the assumption that the chemical groups are hydroxyl radicals, the compound was named diamondol, and was characterized as a 2D ferromagnetic semiconductor with spin polarized bands22. These unique properties, which arise from the periodic array of dangling bonds at the bottom layer, make diamondol a promising candidate for spintronics. Thus far, the existence of this 2D rehybridized carbon material has been demonstrated by electric force microscopy experiments, which have monitored the charge injection into mono- and bi-layer graphene with increasing tip-force interaction and different water contents on the graphene surface22. Here we provide spectroscopic evidence of the formation of such a 2D-diamond structure, which we shall denote as diamondene, by performing Raman spectroscopy of double-layer graphene under high pressure conditions using water as the pressure transmission medium (PTM). The results are explained in terms of a breakdown in the Kohn anomaly associated with the finite size of remaining sp 2 sites inside the rehybridized 2D matrix. Ab initio calculations and molecular dynamics (MD) simulations are employed to clarify the formation mechanism in the present experimental conditions tested. Additional experiments performed in single-layer graphene using water as PTM, and also in double-layer graphene using mineral oil as PTM indicate that the pressure-induced formation of diamondene is drastically favored by the stacking of two or more layers of graphene surrounded by specific chemical groups such as hydroxyl groups and hydrogens. Figure 1a shows the schematic of the experimental setup. The sample was placed into a diamond anvil cell (DAC) capable of operating up to ≈15 GPa. 
The details about the experimental conditions are provided in the Methods section. Figure 1b shows the evolution of the first-order Raman-allowed bond-stretching G band with increasing pressure (up to ≃14 GPa) using water as PTM. Due to the superposition of the D (~1350 cm−1) and 2D (~2700 cm−1) bands with the first- and second-order bond-stretching peaks of diamond, respectively, the G band was the only clearly observable Raman feature from graphene in the high-pressure experiments. The sample used in this experiment was a double-layer chemical vapor deposition (CVD)-grown graphene transferred to a Teflon substrate (G/G/T)23. It should be noted that what we call double-layer graphene is, in fact, a structure formed by the deposition of a single layer of graphene on top of another single layer of graphene. This is different from the traditional bilayer graphene with AB stacking. Two spectra are shown in Fig. 1b for each pressure level, one obtained with an excitation laser energy E L = 2.33 eV (green symbols), and the other with E L = 2.54 eV (blue symbols). The solid lines are Voigt fit to the experimental data. A quick visual inspection of Fig. 1b reveals that the G band becomes steeper and broader as the pressure increases. These two events can be seen in detail in Fig. 2a, b, which show the plots of the G band frequency (ω G) and line width (Γ G), respectively, as a function of the pressure (P), both of which were extracted from the spectra in Fig. 1b. As shown in panel 2(a), ω G undergoes a (rough) linear blueshift with increasing pressure (filled symbols), and the change is reversible upon pressure release (empty symbols). The main cause for this dispersive behavior is a pressure-induced hydrostatic strain that generates G-phonon hardening24, 25. Another possible cause is the occurrence of charge transfer between the PTM and the G/G sample (the so-called pressure-induced doping), although significant doping from PTM is questionable26, 27. 
The G band broadening with increasing pressure observed in Fig. 2b (full symbols) is also reversible upon pressure releasing (empty symbols). Previous high-pressure Raman experiments conducted on graphite indicate that this material undergoes a phase transition for pressure values between 10–20 GPa, turning into a diamond-like material in which sp 2 and sp 3 hybridizations coexist28,29,30. This sp 2/sp 3 mixed phase has been confirmed through other experimental techniques, such as inelastic X-ray scattering31, optical transmittance32, and electrical resistivity33. Several theoretical models have been proposed to explain these experimental findings28, 31, 34,35,36,37, and the general consensus is that this diamond-like phase originates from the formation of sp 3 bonds, favored by the enhanced interlayer interaction induced by high pressure. High-pressure Raman experiments were also conducted in mono- and few-layer graphene samples26, 27, 38,39,40 and a phase transformation has been observed in graphene nanoplates at 15 GPa38. This phase transformation is associated with an abrupt broadening of the G band, explained in terms of interlayer coupling that gives rise to sp 3 bonds in these few-layer graphene nanoplates38. In this work, the main contribution to the G band broadening upon compression is probably the extra strain and stress gradients caused by substrate deformation and quasi-hydrostaticity of the medium. The hydrostaticity of the water medium was inferred in our experiments by analyzing the ruby’s fluorescence peaks, as shown in Supplementary Fig. 1 of Supplementary Note 1. The presence of sp 3 sites in graphitic systems results in G band frequency dispersion with excitation laser energy (the frequency gets higher with increasing excitation laser energy)41. Accordingly, the data shown in Fig. 
2a confirm that the G band frequency obtained with E L = 2.33 eV and E L = 2.54 eV (defined as ω G(2.33 eV) and ω G(2.54 eV), respectively) splits for P ≥ 7.5 GPa, with ω G(2.54 eV) getting systematically higher than ω G(2.33 eV). The splitting can be better visualized in Fig. 2c, which shows the plot of the difference Δω G = ω G(2.54 eV) − ω G(2.33 eV) as a function of P. The dashed-red line is a step-function fit to the experimental data, and the gray areas delimit the 95% confidence intervals. A complete statistical analysis of all data presented in this work is discussed in Supplementary Note 2. Apart from the step-function fitting (fitting parameters shown in Supplementary Table 1), we also performed a hypothesis test on the difference in means (detailed description presented in Supplementary Tables 2 and 3), with normality tested by the Shapiro-Wilk method (details in Supplementary Table 4) and visual inspection of Q-Q plots (shown in Supplementary Fig. 2). For pressures below 7.5 GPa, no considerable difference between the G band frequencies obtained with the two laser sources is observed, as obtained by the step-function fitting (dashed-red line). For P ≥ 7.5 GPa, the step-function fitting gives Δω G ~ 3.9 cm−1. It is important to notice that the splitting is irreversible upon pressure release (empty symbols), even for values below 7.5 GPa. The dependence of ω G and Γ G on P was measured in another high-pressure Raman experiment (second run) carried out under the same experimental conditions (water as PTM), but using a distinct G/G/T sample. Figure 1c shows the G band data obtained from this second run. The fitting parameters extracted from the spectra shown in Fig. 1c are presented in Figs. 2d–f. The general trend is similar to the one observed in the first run (data shown in Figs. 1b and 2a–c), although some differences can be noted. First, the ω G vs. P plot (Fig.
2d) exhibits two plateaus, probably related to the loss of hydrostaticity of the water medium, which becomes quasi-hydrostatic in the interval 2–10 GPa (the hydrostaticity of the water medium is discussed in Supplementary Note 1). Second, we found Δω G ~ 2.7 cm−1 in the interval 5–10 GPa, and Δω G ~ 6.7 cm−1 above 10 GPa (Fig. 2f) (see details about the step-function fitting process in Supplementary Note 2). Moreover, we have found that the blueshift Δω G in this second run was reversible upon pressure release, as can be seen by following the empty symbols in Fig. 2d, f. At last, the G band broadening with pressure was considerably steeper in the second run, which can be easily checked by direct comparison between the data shown in Fig. 2b, e. The blueshift of the G band with increasing excitation energy suggests the occurrence of a system in which the sp 2 and sp 3 phases coexist. This system is idealized in Fig. 3a, which illustrates an sp 3 matrix (blue region) inserted in a graphitic system (gray region). Such a system involves a wide assortment of nanometer-sized sp 2 domains of different characteristic lateral lengths, with distinct electronic and vibrational properties due to quantum confinement. In this scenario, the confinement of E 2g phonons within sp 2 domains that are smaller than the phonon coherence length contributes to the G band broadening (the phonon coherence length in graphene is on the order of tens of nanometers)42, 43. As illustrated in the inset at the left side of Fig. 3a, the gapless energy dispersion of π electrons in pristine graphene is linear and symmetric around the corner of the first Brillouin zone (K point). The coupling of π electrons or holes with zone-center (Γ point) transversal and longitudinal optical phonons (TO and LO, respectively) gives rise to a strong screening effect that generates a kink (frequency softening) in the degenerate TO and LO phonon branches at the Γ point.
This sudden softening is called the Kohn anomaly44, and is illustrated in Fig. 3b (black line). Since the G band originates from the doubly degenerate, zone-center E 2g phonon mode (TO/LO at the Γ point), its frequency is extremely sensitive to changes in the oscillation strength of electron-phonon interactions near the Fermi level44. The quantum confinement of π electrons inside small sp 2 sites opens up a band gap of magnitude E g at the K point. The smaller/larger the sp 2 domain gets, the wider/narrower the associated band gap becomes (larger/smaller E g), as illustrated in the top/bottom insets at the right side of Fig. 3a. The presence of this band gap weakens the Kohn anomaly effect, which attenuates the softening in ω G. At this point, we arrive at the conclusion that the smaller (larger) the sp 2 domain gets, the higher (lower) ω G becomes. An eventual match of the excitation laser energy E L with the band gap energy E g enhances the Raman scattered signal due to the achievement of a resonance condition in the optical absorption. As discussed in the previous paragraph, sp 2 sites with smaller (larger) sizes present larger (smaller) E g. In this case, the Raman signal originating from smaller (larger) sp 2 sites is resonantly selected by higher (lower) excitation laser energies. All these facts together lead to the conclusion that smaller (larger) sp 2 sites, with wider (narrower) π-electron energy band gaps, are resonantly selected by higher (lower) values of excitation laser energy, generating G band scattering with higher (lower) frequencies. Therefore, the blueshift observed in the G band frequency for higher values of E L supports the proposition that a mixed sp 2/sp 3 system is formed when the double-layer graphene is subjected to high pressures.
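The step-function fit used for the Δω G vs. P analysis can be sketched as a least-squares change-point search. The data below are synthetic stand-ins (a step of ~3.9 cm−1 at 7.5 GPa, mimicking the first run), not the measured points from the paper.

```python
import numpy as np

def fit_step(P, dwG):
    """Least-squares step-function fit: try every split point, model each side
    by its mean, and keep the split with the smallest total squared error."""
    best = (np.inf, None, None, None)
    for i in range(1, len(P)):                 # candidate change points
        lo, hi = dwG[:i], dwG[i:]
        sse = ((lo - lo.mean()) ** 2).sum() + ((hi - hi.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, P[i], lo.mean(), hi.mean())
    _, Pc, level_below, level_above = best
    return Pc, level_below, level_above

# Synthetic Delta-omega_G data: ~0 below 7.5 GPa, ~3.9 cm^-1 above
P = np.arange(0.0, 15.0)               # pressure in GPa
dwG = np.where(P < 7.5, 0.0, 3.9)      # cm^-1
Pc, lo, hi = fit_step(P, dwG)          # recovers the step at the first P >= 7.5
```

On real, noisy data one would additionally bootstrap the fit to obtain the confidence intervals the paper reports; the exhaustive split search is the simplest choice here because a hard step has no useful gradient for a generic curve fitter.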
The principle underlying the diamondene formation is that the presence of chemical radicals, such as hydroxyl groups or hydrogens, may substantially decrease the pressure required to promote covalent bonds between carbon atoms in distinct layers of a double-layer graphene. For an ideal coverage of such groups, the result is a stable structure in which all carbon atoms of the upper layer are found in sp 3 hybridization due to the formation of C–OH or C–H bonds surrounded by three C–C interlayer covalent bonds. The presence of a substrate prevents the interaction of the lower carbon atoms with additional chemical groups. However, the carbon atoms at the bottom layer may chemically bond to the underlying substrate. In order to prevent this possibility, we used Teflon substrates in our experiment, since Teflon is a known chemically inert material. The final structure can be shown to be stable even in the absence of external pressure. Reversible structures may also occur if the coverage is incomplete22. To further investigate the mechanism of diamondene formation in the present context, we have performed first-principles density functional theory (DFT) calculations as well as MD simulations based on model potentials (technical details are described in the Methods section). Similar formalisms have been employed recently in studies concerning the diamondization of functionalized few-layer graphene45,46,47. In both approaches, we began with the bilayer graphene in the presence of chemical groups (–H or –OH). In the DFT approach, we focused on quantitative aspects: the determination of the pressure threshold required to transform the system (either with –OH or –H groups) into the sp 3 network, and the structural characterization of the final compound. On the other hand, the MD simulations aimed at qualitatively describing the formation and stability of the system (–H case) subjected to pressure at room temperature.
In the model assumed in the first principles description, the pressure was imposed by geometric constraints in specific atoms during relaxation. The initial geometry was chosen with the lower C atoms and upper O atoms (–OH case) or H atoms (–H case) placed in the z = 0 and z = z 0 planes, respectively. During the relaxation, the vertical displacements of the lower C atoms were constrained to take place only in the positive z direction, while the oxygen atoms of the –OH groups (or hydrogen atoms of the –H groups) were allowed to vertically move only in the negative z direction. The displacements were not constrained in the xy plane. When the convergence criterion was reached, the constrained forces were used to estimate the applied pressure. Figure 4a shows initial (left) and converged (right) geometries for the case in which the distance between graphene layers was initially set to d = 2.8 Å. Blue, red, and gray spheres illustrate H, O,and C atoms, respectively. Upon relaxation, the distance d decreases to 2.2 Å, still too long to characterize a covalent interaction between layers. Indeed, the lower layer did not present any corrugation that could indicate a deviation from the planar sp2 network. The constrained forces in this final geometry correspond to an applied pressure of 4.7 GPa. On the other hand, Fig. 4b shows a second calculation in which the initial distance d was set to 2.7 Å. After relaxation, the diamondene is formed, as depicted in the right side of the figure. The C–C interlayer bond lengths become 1.66 Å, and the constrained forces are negligible. The calculations were repeated for distances d = 2.6, 2.5, 2.4 and 2.3 Å, all of which lead to diamondene formation. Altogether, these calculations allowed us to estimate a critical pressure around 4.7 GPa. The rehybridization process reported in the last paragraph also applies to –H chemical groups, as confirmed by MD simulations using LAMMPS package48. 
The simulations were performed for a model comprising a total of 2288 carbon atoms representing a bilayer graphene in which the upper layer interacts with 572 hydrogen atoms. External pressure was applied through two pistons modeled as purely repulsive force-field walls. The first piston was fixed and acted only on the carbon atoms of the lower layer. On the other hand, the top piston (initially localised 1.63 Å above the upper carbon atoms) acted on the whole system, being dynamically driven to reach specific levels of pressure. Figure 5a shows a pressure vs. time (t) plot that summarizes the results obtained from the MD simulations. The procedure was divided into five stages, indicated in Fig. 5a and described as follows. Stage (i) corresponds to thermal equilibration, in which the system runs for 100 ps at 300 K, with pressure fluctuating around zero. The loading process takes place during stage (ii), throughout the linear approach of the top piston for 100 ps. The load achieved in the previous stage is kept constant for 2 ns (by fixing the final top piston position) in stage (iii), keeping the system in pressure equilibration. The unloading is carried out in stage (iv), when the piston is released and linearly goes back to its initial position during 100 ps. The final structure equilibration is achieved in stage (v), which takes an additional 100 ps after total piston release. Figure 5b is a zoomed version of 5(a), stressing pressure levels close to diamondene transition (between 4 and 5 GPa). The red curve in Fig. 5 shows the evolution of the system when compressed up to an instantaneous (non-equilibrium) peak pressure of 4.92 GPa, indicated by the bullet in panel (b). As the system evolves in the pressure equilibration stage, the pressure slightly decays; after 0.95 ns, a transition to diamondene starts at 4.57 GPa (indicated by the circle in Fig. 5b), when a sharp pressure drop (to about 4.0 GPa) takes place. 
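The five-stage loading protocol can be summarized as a piston-position schedule. The function below is only a schematic of the staging described above (times in ps), not actual LAMMPS input; z0 and z_load are placeholder coordinates for the released and fully loaded piston positions.

```python
def piston_z(t, z0, z_load, t_eq=100.0, t_ramp=100.0, t_hold=2000.0):
    """Top-piston position vs. time t (ps) for the five stages:
    (i) equilibrate, (ii) linear loading, (iii) constant load,
    (iv) linear unloading, (v) final equilibration."""
    if t < t_eq:                                   # (i) thermal equilibration
        return z0
    t -= t_eq
    if t < t_ramp:                                 # (ii) linear approach
        return z0 + (z_load - z0) * t / t_ramp
    t -= t_ramp
    if t < t_hold:                                 # (iii) pressure equilibration
        return z_load
    t -= t_hold
    if t < t_ramp:                                 # (iv) linear release
        return z_load + (z0 - z_load) * t / t_ramp
    return z0                                      # (v) final equilibration
```

In the actual simulations the hold position is fixed to whatever was reached at the end of loading, and the pressure (not the position) is what is monitored for the diamondene transition.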
The characteristic geometries just before and after the transition takes place are illustrated in Fig. 5b (top and bottom cartoons, respectively). The pressure value remains constant until the end of this stage, and no substantial changes can be observed in the diamondene structure, which remains stable even during the unloading stage and after the final equilibration period. A second run, depicted by the blue curves of Fig. 5a, b, was performed for a slightly higher peak pressure (5.06 GPa). This second run confirmed the phenomenology observed in the first one (red curves), with the diamondene formation taking place in a shorter time window, as expected. The residual pressure observed in both cases (first and second runs) after pressure release (stage (v)) is an artifact introduced by the fact that the simulation box is not rescaled along the periodic directions after the structural transition takes place. Additional runs were conducted in a similar fashion for peak pressures smaller than 4.92 GPa (black curves in Fig. 5). In this case, diamondene formation was not observed in the overall simulation time, which corresponded to 2 ns in the pressure equilibration stage. The stabilization indicates that transitions are not to be expected for time periods greater than 2 ns, since the bilayer evolves without significant structural changes. The general picture that emerges from these theoretical results is that under high pressures, as the distances from water molecules and from the adjacent layer decrease, the carbon atoms of the top layer acquire an sp 3 component in their hybridizations. This process increases their reactivity, making them act as dangling bond centers. Simultaneously, the highly polarized bonds in the nearby water molecules weaken upon approximation to these centers. 
Water molecules in contact with the top graphene layer are in crystal form (water freezes under ≈1.0 GPa at room temperature), and depending on which atom (H or O) is closer, the final result may be a mixture of C–H and C–OH bonds. Furthermore, the fact that water molecules are relatively small prevents steric-hindrance effects, allowing the formation of these bonds in multiple sites. The resulting structure, the diamondene, may be characterized as a 2D compound belonging to the hexagonal crystal family with lattice parameter a = 2.55 Å. In this regard, it is worth comparing it with hexagonal diamond, a bulk material also known as lonsdaleite, which is the focus of intense debate in the literature49. Lonsdaleite has a wurtzite crystal structure with interlayer bonds in the eclipsed conformation. As such, an ultra-thin compound derived from it may be viewed as the result of the compression of a bilayer graphene in the AA stacking, rather than in the AB stacking as in the diamondene case. Our DFT calculations indicate that a lonsdaleite-like diamondene is energetically less favourable by 50 meV per primitive cell when compared with the diamondene conformation described in the present work. Nevertheless, kinetic aspects may play an important role in the diamondization process as in the bulk case50, and we cannot rule out the existence of a mixture of ultra-thin lonsdaleite and diamondene in our samples. It must be pointed out, however, that the conclusions of the present work are restricted to bilayer graphene under pressure in the presence of reactive groups, and may be extended to the two top layers of few-layer graphene22. The sp 2 to sp 3 transformation of the entire graphite structure is a completely different issue: it would involve either analysis in other pressure ranges and/or the addition of catalysts on both sides of the few-layer graphene, as discussed in ref. 50. It is, therefore, not considered or discussed in the present work.
Additionally, we would like to stress that further experimental investigation (e.g., X-ray and/or electron diffraction techniques) is necessary to unequivocally determine the crystal structure of diamondene. For example, X-ray diffraction of bilayer graphene under high pressure could be performed in third-generation synchrotron light sources, eventually demonstrating the diamondene structure.

The Raman cross-check

As discussed above and experimentally explored in ref. 22, to achieve the diamondene formation within the pressure range employed in the current work, the use of water as the PTM is absolutely necessary, since it provides the chemical groups that covalently bond to the carbon atoms in the top layer, stabilizing the sp 3 structure. Additionally, the diamondization of single-layer graphene in water is expected to occur at much higher pressure (P ≥ 20 GPa) than the maximum achieved in the present work. These two limitations open the possibility to test the diamondene hypothesis by simply carrying out high-pressure Raman experiments in two different systems: a single-layer graphene transferred to a Teflon substrate (G/T) using water as the PTM, and a double-layer graphene transferred to a Teflon substrate (G/G/T) using mineral oil (Nujol) as the PTM. We performed these two experiments applying the same conditions as before (for acquiring the data shown in Figs. 1 and 2), and the results are shown in Fig. 6. No statistically significant shift of the G band frequency with the excitation laser energy was observed (see discussion in Supplementary Note 1), either for the single layer in water (Fig. 6a, b), or for the double layer in mineral oil (Fig. 6d, e).
These observations provide additional evidence to support the hypothesis of diamondene formation and reinforce our theoretical predictions and previous experimental results22, thus indicating that the formation of diamondene is strongly favored in doubly stacked graphene compressed in the presence of chemical radicals. It is worth noticing that, even in this case, Raman spectra obtained from the double-layer graphene outside the anvil cell after pressure release (down to atmospheric pressure) indicate that the diamondene structure did not survive at ambient conditions. We have provided spectroscopic evidence for the existence of diamondene by performing high-pressure Raman spectroscopy experiments in double-layer graphene using water as the PTM. The current technology of high-pressure and high-temperature cell apparatus involving larger volumes can make it possible to scale this novel material to bulk quantities51. Potential applications include spintronics for quantum computation52, microelectromechanical systems (MEMS)17, superconductivity18, electrodes for electrochemical technologies19, substrates for DNA engineering20, and biosensors5, 21, among others. Since the Raman analysis presented here provides indirect evidence for the diamondene formation, an important extension of this work would be the direct measurement of the 2D hexagonal diamond structure by X-ray or electron diffraction techniques performed under high-pressure conditions. Raman spectra were acquired using an alpha 300 system from WITec (Ulm, Germany) equipped with a highly linear (0.02%) piezo-driven stage and an objective lens from Nikon (20×, NA = 0.4). Two laser lines were used: (i) a Nd:YAG polarized laser (λ = 532 nm), and (ii) an argon laser (λ = 488 nm). The incident laser was focused with a diffraction-limited spot size (0.61λ/NA), and the Raman signal was detected by a high-sensitivity, back-illuminated CCD located behind a 600 g mm−1 grating.
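The diffraction-limited spot size quoted above follows directly from d = 0.61 λ / NA; for the two lasers and the NA = 0.4 objective this works out to roughly 0.8 µm:

```python
NA = 0.4  # numerical aperture of the 20x objective

for wavelength_nm in (532, 488):
    d_nm = 0.61 * wavelength_nm / NA   # diffraction-limited spot diameter
    print(f"{wavelength_nm} nm laser -> spot diameter ~ {d_nm:.0f} nm")
# -> ~811 nm for the 532 nm line, ~744 nm for the 488 nm line
```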
The spectrometer used was an ultra-high-throughput WITec UHTS 300 with up to 70% throughput, designed specifically for Raman microscopy. The measurements were performed with powers of approximately 10 and 3 mW for the 532 and 488 nm lasers, respectively. These values were chosen in order to optimize the throughput signal, which was lowered due to absorption and reflection by the DAC, without causing damage due to sample heating.

Sample loading into the high-pressure cell

The sample was initially cut into a strip of dimensions ~(0.5 × 2) cm. The DAC used in this experiment was a pneumatically pressurized type. The strip with the graphene was positioned on top of the gasket in such a way that the sample completely covered the gasket hole. After that, the DAC was closed, so that the G/G/Teflon/gasket stack was sandwiched between the two diamonds. The pressure was then raised up to ~4 bar, at which point the diamond began to deform the gasket. Because the sample was sandwiched between the diamond and the gasket, it was cut and fell into the gasket hole. Afterwards, the pressure was released back to the atmospheric level, the DAC was opened, and the PTM and ruby were added to the gasket hole. The first-principles calculations are based on DFT53, 54 as implemented in the SIESTA code55, 56. The Kohn-Sham orbitals were expanded in a double-ζ basis set composed of numerical pseudo-atomic orbitals of finite range, enhanced with polarization orbitals. A common atomic confinement energy shift of 0.01 Ry was used to define the basis-function cutoff radii, while the fineness of the real-space grid was determined by a mesh cutoff of 450 Ry. For the exchange-correlation potential, we used the generalized gradient approximation57, and the pseudopotentials were modeled within the norm-conserving Troullier-Martins58 scheme in the Kleinman-Bylander factorized form59. All geometries were optimized until the maximum force component on any atom was less than 10 meV Å−1.
Periodic boundary conditions were imposed, with a lattice vector in the z direction large enough (22.4 Å) to prevent interactions between periodic images. As for the MD simulations, we employed the LAMMPS package48 with the interactions between atoms modeled through AIREBO potential60. All trajectories were generated in the canonical ensemble by means of the Nosé-Hoover thermostat61, 62, responsible for keeping the average temperature in 300 K. We employed a simulation box with dimensions 55.9, 54.6 and 40 Å in the x, y and z directions, respectively, with periodic boundary conditions imposed in the xy plane. Two pistons, modeled as purely repulsive force-field walls, were used to apply external pressure to the system. We used a time step of 0.25 fs. The data that support the findings of this study are available from the corresponding author upon request. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. The authors acknowledge Ado Jorio, Marcelo O. Aguiar, Marta P. Vidal, and Rogério Magalhães Paniago for fruitful discussions. This work was supported by CNPq, FAPEMIG, and Rede de Instrumentação em Nano-Espectroscopia Óptica. L.G.P.M. acknowledges financial support from CNPq and the grant from Program “Fórmula Santander”. A.B.O acknowledges CNPq (Grants 303820/2013-6 and 459852/2014-0). M.J.S.M and A.B.O acknowledge PROPP-UFOP (Auxlio Financeiro a Pesquisador, Grant Custeio–2016). We acknowledge computational support from LCC–Cenapad–UFMG and Cesup–UFRGS.
- Open Access Vertical velocity from the Korean GPS Network (2000–2003) and its role in the South Korean neo-tectonics © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences. 2007 - Received: 27 March 2006 - Accepted: 10 January 2007 - Published: 8 June 2007 In the absence of adequate leveling observations in South Korea, the vertical deformation has been investigated using the Korean Global Positioning System (GPS) Network data (2000-2003). Although the vertical components of GPS velocities have rarely been used in crustal deformation studies because of their high noise level, the processing strategy employed here enhances the data quality and eliminates the seasonal effect. The obtained vertical velocity field shows that the maximum vertical velocity in the ITRF 97 reference frame is 3.3 mm/year (subsidence), which reflects a relatively low level of seismic activity in South Korea. Two deformation patterns were recognized: subsidence in the Okchun Basin, and uplift in its adjacent areas. This subsidence is due to the collision of the Kyonggi Massif and Okchun Basin (part of the South China block) against the Yongnam Massif and Taebaeksan Basin (part of the North China block).
Scientists from the Max Planck Institute for Biology of Ageing in Cologne discover serine as the hitherto unknown amino acid for protein modification, changing a 50-year-old paradigm. Scientific achievements enlarge our knowledge about how things work and eventually enable us to understand details and even to predict the unknown. Chemical structure of the amino acid serine, which is part of almost all proteins and the main target for ADP ribosylation. In the background a membrane with stained proteins. ©Max Planck Institute for Biology of Ageing But sometimes the assumptions we make based on what we have already seen limit our perception and bias our approach to the new. This is exactly what happened with ADP ribosylation (ADPr), a particular protein modification that appears in every cell and is essential for almost all biological processes. When a protein gets ADP ribosylated, it gets labeled with additional information. For example, an ADPr signal can be placed on a protein for the recruitment of vital factors to repair damaged DNA, the genetic information in cells. Like an alarm in case of an emergency, it marks the place where help is needed. For half a century it was believed that ADPr modifies particular sites on proteins: the amino acids glutamate, aspartate, arginine and lysine. But the functional characterization of these identified sites showed very slow progress. “We know the reason for this now: most of the sites were mis-localised” says Orsolya Leidecker, a scientist in the group of Dr. Ivan Matić from the Max Planck Institute for Biology of Ageing. Now the scientists have finally identified the amino acid serine as the major site of ADPr by using a new technique. Due to its chemical structure, serine was never really considered as a target, which makes this finding all the more exciting. “It’s a little like the discovery of the structure of the DNA”, explains group leader Dr. Ivan Matić. 
“People had known for decades that there must be genetic information stored somewhere but didn’t know where or how. The field of ADPr modification was similarly slow to develop for lack of precise knowledge about which amino acid ADPr attaches to. Now we finally know exactly where this information sits.” Additionally, the Matić group and their collaborators in Oxford have developed a simple method for validating serine ADP-ribosylated sites in cells, which enables any scientist to examine ADPr on their protein of interest. “Anybody in any lab can perform the experiment and investigate if the modification of their protein is on serine” says Leidecker, who contributed to the main part of the work. Actually, the identification of the correct position of ADPr is only the beginning. Researchers can now investigate the impact of ADPr on proteins, understand their functionality and develop strategies to use this modification as a target for drug development. Targeting processes regulated by ADPr is already a very promising strategy in treatment of cancer and acute cardiovascular conditions. The research was performed in collaboration with CECAD. Dr. Annegret Burkert | Max-Planck-Institut für Biologie des Alterns
Fatty acids found on the surface of water droplets react with sunlight to form organic molecules, a new study reports, essentially uncovering a previously unknown form of photolysis. The results could affect models that account for aerosol particles, including models related to climate. Conventional wisdom holds that carboxylic acids and saturated fatty acids, which are abundant throughout the environment, only react with hydroxyl radicals and are not affected by sunlight. Light on the air-sea interface. This material relates to a paper that appeared in the 12 August 2016 issue of Science, published by AAAS. The paper, by S. Rossignol at Université Lyon in Villeurbanne, France, and colleagues, was titled, "Atmospheric photochemistry at a fatty acid-coated air-water interface." Credit: Christian George, CNRS-IRCELYON However, these previous conclusions are based on observations of the molecules in a gas phase, or dissolved in solution. Here, Stéphanie Rossignol and colleagues studied nonanoic acid (NA) at the liquid-gas interface, where the molecules interact with surface water. When the researchers exposed NA along the surface of a liquid to UV light, they observed the formation of organic compounds. They conducted a series of experiments to adjust for possible contamination, concluding that NA is indeed responsible for the observed photochemistry resulting in these compounds. Based on the type of photochemistry observed, the authors say that similar reactions may be common to all carboxylic acid molecules. Considering how common fatty acids are in the environment, such photochemical processing on aerosols or other aqueous sites could have a significant impact on local ozone and particle formation, the authors say.
In a related Perspective, Veronica Vaida notes that these previously unappreciated secondary organic products "will affect secondary organic aerosol mass, composition, and optical properties, in turn defining the particle's overall effect on climate, air quality, and health." Science Press Package | EurekAlert!
RHODOPHYTA: GIGARTINALES: Cystocloniaceae (RED ALGAE) Description: Thallus rising from a discoid holdfast forming a thin blade up to 12 cm long, irregularly dichotomously branched but very variable. Often branching from the margins. Habitat: Epilithic and epiphytic; very rare in the littoral but apparently not uncommon in the sublittoral to depths of 30 m. Distribution: Widespread in the British Isles including the Shetlands and Channel Islands, but very rare on the eastern shores of England. Europe: Mediterranean, Azores, Portugal, Spain, France and the Baltic. Further afield: Canary Islands. Similar Species: Other flat membranous species may be easily confused. Distribution Map from NBN: Interactive map: National Biodiversity Network mapping facility, data for UK. WoRMS: Species record: World Register of Marine Species. Morton, O. & Picton, B.E. (2016). Rhodophyllis divaricata (Stackhouse) Papenfuss. [In] Encyclopedia of Marine Life of Britain and Ireland. http://www.habitas.org.uk/marinelife/species.asp?item=ZM6930 Accessed on 2018-07-18. Copyright © National Museums of Northern Ireland, 2002-2015
posted by Anonymous 25.0 mL of 0.100 M acetic acid (Ka = 1.8 x 10^-5) is titrated with 0.100 M NaOH. Calculate the pH after the addition of 27.00 mL of 0.100 M NaOH. CH3COOH + H2O <-> H3O^+ + CH3COO^- 25 mL x 0.100 mmol/mL = 2.5 mmol CH3COOH 27 mL x 0.100 mmol/mL = 2.7 mmol NaOH There is more base than acid.... so what do I do? So the base neutralizes all of the acetic acid and one is left with 2.5 mmol of the salt (sodium acetate) plus an excess of (2.7 - 2.5) = 0.2 mmol of NaOH. The volume is 25 + 27 mL = ??
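Past the equivalence point, the excess strong base fixes the pH and the acetate's own basicity contributes negligibly by comparison. A quick check of the numbers above (my own calculation, not part of the original answer):

```python
import math

# Titration of 25.0 mL of 0.100 M acetic acid with 0.100 M NaOH,
# evaluated after adding 27.00 mL of base (past the equivalence point).
mmol_acid = 25.0 * 0.100    # 2.5 mmol CH3COOH
mmol_base = 27.00 * 0.100   # 2.7 mmol NaOH
total_ml = 25.0 + 27.00     # 52.0 mL total volume

# Concentration of excess hydroxide after complete neutralization
excess_oh = (mmol_base - mmol_acid) / total_ml  # mol/L

poh = -math.log10(excess_oh)
ph = 14.0 - poh
print(round(ph, 2))  # ≈ 11.59
```

The 2.5 mmol of acetate in 52 mL would add only about 5 x 10^-6 M OH^- on its own (sqrt(Kb·C) with Kb = Kw/Ka), three orders of magnitude below the 3.8 x 10^-3 M excess, so ignoring it is safe here.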
The third annual International Women and Girls in Science Day will be celebrated worldwide on Feb. 11 in order to recognize the achievements of women in science and to encourage girls to study science. “February 11 is a day that we celebrate the achievements of women, known and unknown, remembered and forgotten, who initially paved the way for those who come later in every walk of life, and to give an opportunity for children: girls and boys, to choose role models in science,” said Princess Dr. Nisreen El-Hashemite, executive director of the Royal Academy of Science International Trust (RASIT) and founder of the International Women and Girls in Science Day. Participants from all over the world will gather at the United Nations to discuss issues relating to this year’s theme, “Equality and Parity in Sciences for Peace and Development.” The 2018 Forum “will highlight the need for integrated policies for equality and parity in science for achieving implementation of the 2030 Agenda through a ‘One UN’ lens,” El-Hashemite said. The goal of the forum is to present practical solutions for inclusion of women and girls in science in terms of “policies, institutions, legal instruments and other mechanisms.” According to El-Hashemite, there is an international concern in the academic sphere about a gender gap. “Science is essential to the development and prosperity of humanity, and a science devoid of the vibrancy that would result from the inclusion of a wider pool of abilities, viewpoints and work methods will produce a tepid outcome,” El-Hashemite said. “Women’s talents, perspectives, work methods and skills could be recognized worldwide on such a day for wide impact.
Promotion of education for women in science and for their entry into scientific careers will serve to build inclusive institutional climates within all countries and allow policies and procedures to be crafted for gender equality, leadership training, and mentoring.” The most important word to keep in mind during the 2018 international day, according to El-Hashemite, is implementation. Member States of the United Nations will sign a political declaration for Equality and Parity in Science on Feb. 9. “We aim to involve women in science in policy making bodies and promoting them to greater roles in government politics and legislation and to strengthen partnerships between governments and women in science experts,” El-Hashemite said. Diversity is important in all fields, but especially important in meteorology, according to Dr. Jamese Sims, an algorithm engineer for NOAA and member of the American Meteorologist Society’s Board on Women and Minorities. “We want to make sure that when we’re building our forecast and outreach that we’re meeting the needs of society as a whole,” Sims said. Dr. Sepi Yalda, a professor of meteorology and the director of the Millersville University’s Center for Disaster Research and Education, agrees that diversity in the meteorology field offers much needed varying perspectives. Much more work in weather and climate is related to emergency management and disaster preparedness, according to Yalda. With varying perspectives, experts are thinking of better ways to prepare and communicate messages to all members of society. “If we can attract women in these fields, it can play a critical role in helping with our community, being better prepared and in the overall goal of a sustainable society and environment,” Yalda said.
March 2018 - Staff from the Alaska Regional Office and the Golden Gate National Recreation Area Natural Resources Division are continuing to explore and map sea caves and related features along the park’s coast. With surveys of the Marin Headlands coastline completed—but just some of San Francisco’s shorelines surveyed—they have already found and mapped over 100 caves and cave-like features. The next set of surveys are scheduled for this fall. Sea Cave Monitoring Continues Along Golden Gate’s Shores One of the interesting things that the team has discovered is that the relatively vertical walls of many arches clearly show how marine organisms are separated into bands based on their tolerance to the air and sun. For instance, the photo of the arch at Bird Rock shown here reveals limpets (small white circles below the whitewash) and barnacles (smaller white dots), which can be found in this “splash zone” as they are the most tolerant. Below that are mussels and gooseneck barnacles (the lumpy green-brown layer), with mixed red algae below that. This work has been funded by the NPS Geologic Resources Division through the technical assistance program, and boat support to access the caves is provided by NPS lifeguards and the California Department of Fish and Wildlife. Contact Darren Fong with questions.
Authors: S. Shepard, G. Carver Affiliation: Louisiana Tech University, United States Pages: 703 - 706 Keywords: solar energy, photovoltaic systems, power distribution Two of the most promising means of reducing the cost per kilowatt of photovoltaic (PV) systems are: 1) to reduce the cost – which includes recent progress made in CIGS; and/or 2) to increase the power – which includes progress in CPV (concentrated PV) systems. The problem in CPV is that we are also concentrating the out-of-band ultraviolet and infrared spectral components that only serve to heat the PV, causing its efficiency to plummet. Alternatively one can use optical filtering between the collection optics and the PV. Optical waveguides (including fiber) offer many useful filtering qualities and thermal distribution advantages over other optical filters. Owing to the fact that the guidance mechanism in a photonic crystal fiber (PCF) is a transverse resonance (set up by a pattern of air holes) rather than total internal reflection, the electric field can be predominantly confined to air instead of glass. This results in many new properties – one being minimization of Rayleigh scattering in the visible, which opens up new possibilities for solar energy applications. We demonstrate that the output power of a CIGS PV can be tripled by increasing the concentration level by a factor of 16 through the use of a PCF concentrating filter. Nanotech Conference Proceedings are now published in the TechConnect Briefs
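One number worth extracting from the abstract: if the optics deliver 16 suns but output power only triples, the product of the filter chain's optical throughput and the cell's relative efficiency under concentration is bounded by 3/16. A back-of-envelope check (my arithmetic, not the authors'):

```python
# Back-of-envelope, not from the paper: relate the reported 3x power gain
# to the 16x geometric concentration.
concentration = 16.0   # suns delivered by the collection optics
power_gain = 3.0       # measured output power relative to 1 sun

# P_out = concentration * throughput * relative_efficiency * P_1sun
# => throughput * relative_efficiency = power_gain / concentration
effective_throughput = power_gain / concentration
print(effective_throughput)  # 0.1875
```

Under 20% sounds low, but the PCF filter deliberately rejects the out-of-band ultraviolet and infrared, so part of that loss is exactly the heating power the design is meant to discard.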
Ozone is a bit of a shape-shifting chemical. High in the stratosphere, ozone acts as an ultraviolet-blocking shield around Earth (which is why the ozone hole is such a problem). At ground level, it’s a pollutant that can cause serious respiratory problems. And if it finds its way into the troposphere — the lowest level of the atmosphere — ozone serves as a potent greenhouse gas that warms the planet. A supercell thunderstorm over Great Bend, Kan. Credit: Lane Pearman/Flickr It ends up in the troposphere through a variety of processes including human pollution. It also finds its way there by trickling down from the stratosphere. In the past, scientists have attributed the trickle between the atmosphere’s different layers to large-scale patterns, such as shifts in the jet stream or air moving from the tropics toward the poles. But for the first time, research has definitively shown that it’s not just these large-scale movements that lure ozone down from the stratosphere, it’s also smaller-scale events like thunderstorms. “The convective-scale events like thunderstorms are smaller. They’re not explained well in global climate models but we know they’re important,” scientist Laura Pan, the lead author of the new research published in Geophysical Research Letters, said. Pan’s findings could be important to climate modelers looking to get a better handle on just how greenhouse gases end up in the troposphere and where they go once they get there. Some research has projected that severe storms — or at least the conditions favorable for their formation — could increase by 40 percent over the U.S. by 2100 during the height of severe storm season if our carbon dioxide emissions continue unabated. The new research could be a warning about a potentially unexplored feedback loop that could further warm the planet, with more storms bringing more warming ozone to the lower levels of the atmosphere.
Pan, who works on atmospheric chemistry at the National Center for Atmospheric Research in Boulder, Colo., found that as thunderheads rise to heights up to 50,000 feet above the Earth’s surface, they cause ripples in the boundary between the troposphere — the lowest layer of the atmosphere — and the stratosphere — the next layer above it. Those ripples can actually tear a gap in the boundary layer on the front of the storm, allowing ozone-rich stratospheric air to pour down to the troposphere. Understanding this new process has implications for our understanding of the current climate as well as future ramifications. “If you have a weather pattern change, say your storms get more intense and bigger storms happen more often, our models need to reflect the chemical changes (such as ozone) as well,” Pan said. Those changes could in turn lead to feedbacks, generating larger storms that drive more ozone into the atmosphere. Current weather models have an easier time capturing thunderstorm dynamics than climate models, which have a fuzzier view of these small-scale processes. Michael Prather, an atmospheric chemist at University of California-Irvine who has modeled this process, said the new study is a “nice piece of work that clearly shows the process” of how thunderstorms can facilitate the movement of ozone between the stratosphere and the troposphere. The reason Pan’s work has such a clear view of the process is that she got up close and personal with thunderstorms. Pan flew in and around storms aboard the National Science Foundation’s Gulfstream V research aircraft outfitted with special equipment to monitor ozone and other chemicals in the atmosphere in a number of field studies. She considers airborne studies to be the key to many of the new findings, particularly their outer workings. “People who study storms have focused mostly on the inside of the storms. There is not much information on the flow pattern around the cloud,” Pan said.
Measurements taken during the flight show that ozone concentrations on the front of the storm were more than double that of the surrounding air, dropping to 5 miles above the Earth’s surface and spreading more than 60 miles ahead of the storm. Pan said further research needs to be done to ensure the information is useable in climate models. She suggested moving forward by both monitoring other storms to understand the process and working closely with weather modelers to quantify exactly how much ozone is leaking down from the stratosphere and where it goes afterward.
Notices on extreme temperatures for Europe No active warnings Alert level dark green: Active weather notice Alert level yellow: Severe weather watch Alert level orange: Severe weather warning (moderate) Alert level red: Severe weather warning (heavy) Alert level violet: Severe weather warning (extreme) The map gives an overview of all the notices for extreme temperatures for Europe. Areas or locations shown in dark green are those where severe frost or severe heat has to be reckoned with. Detailed and reliable forecasts of such extreme temperature events are especially important for public health and agriculture, as well as for event organisers across Europe. Professional and experienced meteorologists at the Severe Weather Centre continually adjust the severe weather forecasts manually and keep them up to date 24 hours a day.
CORPUS CHRISTI, Texas — Aerial images from a drone are being evaluated as a method to survey seagrasses scarred by boat propellers. Texas Parks & Wildlife Department has partnered with Texas A&M University-Corpus Christi to determine if using unmanned aircraft systems, sometimes called drones or UAS, is as effective as using planes. Dr. Michael Starek, Assistant Professor of Engineering, has been analyzing the images and data collected from flights in December of a small UAS about 350 feet above Redfish Bay’s seagrasses. TPWD has conducted aerial surveys since 2007 using piloted aircraft flying at an altitude of about 2,000 feet, said Faye Grubbs, Upper Laguna Madre Ecosystem Leader with TPWD. This project will compare the output from each method, and analyze costs of processing and ease of mobilization. “We are comparing the accuracies of the different imagery sets – manned vs. unmanned – and how well we are able to map scar features observed in the imagery,” she said. UAS-collected imagery has the potential to change environmental monitoring at many scales, not just in coastal regions, said Starek. The first step is to show drone-captured data is comparable to that collected from manned planes. University researchers have been doing just that for several years, flying along the coast and comparing drone-captured images to data from on-the-ground and in-the-water surveying by traditional means. One of the biggest challenges with aerial imaging of the sea floor is weather and water clarity, Starek said. “This experiment showed that with proper flight planning for weather conditions, mapping of prop scars with a small UAS can be a viable alternative to more costly, piloted airborne surveys,” Starek said. “Results from the flights show impressive spatial fidelity in the UAS-collected imagery.
Pixel resolutions down to one inch will allow mapping of seagrass impacted by prop scarring at very fine spatial detail previously unattainable.” Although the results show the capabilities, Starek said UAS technology still has to evolve both in platform endurance and in regulations to allow these systems to fly autonomously over much larger areas, such as an entire bay system. This latter component will evolve as the technology and confidence in its use matures. “In the not too distant future, I can foresee the day when a fleet of small UAS equipped with cameras can routinely map an entire bay system at a fraction of the cost for traditional piloted airborne surveys,” he said. “The potential for UAS technology is immense.” Seagrasses serve as a refuge and nursery ground for fish, shrimp and crabs. They provide oxygen to the water column and serve as an area for growth of drift algae, a food source for shrimp, fish and crabs. A law prohibiting the uprooting of seagrasses coastwide was passed by the Texas Legislature during the 83rd legislative session and has been in effect since September 2013. Motorboats cause propeller scarring when they drift into shallow waters and tear a trough in the bay bottom. TPWD continues to collect aerial imagery in four areas along the Texas coast to evaluate the effects of the regulation. Based on the outcome of this project, Grubbs said the department may use drones for not only monitoring changes in propeller scarring, but possibly for mapping other habitats as well. About Texas A&M University-Corpus Christi: Offering more than 80 of the most popular degree programs in the state, Texas A&M-Corpus Christi has proudly provided a solid academic reputation, renowned faculty, and highly-rated degree programs since 1947. The Island University has earned its spot as a premier doctoral-granting institution, supporting a UAS test site, two institutes, and 13 research centers and labs. Discover your island at http://www.tamucc.edu/
This paper provides new insights on the regeneration step of an ion exchange process for the treatment of surface and ground water characterized by high sulphate concentration. Repeated regeneration of the ion exchange resin with a sodium chloride solution (brine) did not alter the resin's performance with respect to the fresh one. Besides, neither the sodium chloride concentration of the brine, which was varied between 1 and 3 M, nor the presence of sulphates at concentrations up to 20 g/L in the brine notably affected the regeneration efficiency. The brine was effectively treated by adding calcium or barium chloride, in order to remove the sulphates and re-establish the original chloride concentration. Calcium chloride allowed up to 70% sulphate precipitation, whereas an almost 100% precipitation efficiency was obtained when barium chloride was used. The precipitation step was described by a model based on the mass action law, coupled with the Bromley model for the description of the non-ideal behaviour of the electrolytic solution. This model was shown to give correct, or at least conservative, estimates of the equilibrium sulphate concentration when either calcium or barium chloride was used as the precipitating agent.

Ion exchange process in the presence of high sulphate concentration: resin regeneration and spent brine reuse. R. Baciocchi, A. Chiavola; Water Science and Technology: Water Supply, 1 July 2006; 6 (3): 35–41. doi: https://doi.org/10.2166/ws.2006.791
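As a rough illustration of the mass-action description (not the paper's actual model, which adds Bromley activity corrections), an ideal-solution sketch of the precipitation equilibrium fits in a few lines. The Ksp values are standard 25 °C literature figures, and the 10% metal dose excess is an assumption for the example:

```python
# Ideal-solution (activity coefficients = 1) estimate of residual sulphate
# after dosing a spent brine with CaCl2 or BaCl2. The Bromley corrections
# used in the paper are deliberately omitted here.

KSP = {"Ca": 4.93e-5, "Ba": 1.08e-10}  # solubility products of MSO4 at 25 C

def residual_sulphate(metal, sulphate_0, metal_dose):
    """Equilibrium sulphate (mol/L) left after MSO4 precipitation.

    Solves (metal_dose - x)(sulphate_0 - x) = Ksp for the amount x
    precipitated, then returns sulphate_0 - x.
    """
    ksp = KSP[metal]
    # Quadratic in x: x^2 - (d + s)x + (d*s - ksp) = 0; smaller root is physical
    b = -(metal_dose + sulphate_0)
    c = metal_dose * sulphate_0 - ksp
    x = (-b - (b * b - 4 * c) ** 0.5) / 2
    return max(sulphate_0 - x, 0.0)

# Abstract's worst case: 20 g/L sulphate ~ 0.208 mol/L, 10% excess of M2+
s0, dose = 0.208, 0.229
for metal in ("Ca", "Ba"):
    left = residual_sulphate(metal, s0, dose)
    print(metal, f"{100 * (1 - left / s0):.1f}% removed")
```

Run this way, the sketch reproduces the near-total barium efficiency but overpredicts calcium removal relative to the ~70% reported, which hints at why the paper couples the mass-action law with an activity model for the concentrated brine.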
Blowfly larvae (Diptera: Calliphoridae) fulfil an important ecological function in the decomposition of animal remains. They are also used extensively in forensic entomology, predominantly to establish a minimum time since death, or a minimum post-mortem interval, using larval length as a 'biological clock'. This study examined the larval growth rate of a forensically important fly species, Calliphora vicina Robineau-Desvoidy (Diptera: Calliphoridae), at temperatures between 4 °C and 30 °C under controlled laboratory conditions. The laboratory flies had been trapped initially in London, U.K. The minimum developmental temperature was estimated to be 1 °C, and 4700 accumulated degree hours (ADH) were required for development from egg hatch to the point of pupariation. Lines fitted to the laboratory larval growth data were found to adequately explain the growth of larvae in the field. The nature of variation in growth rates from geographically isolated populations is discussed.
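The accumulated-degree-hours bookkeeping behind this kind of estimate is simple to sketch. The 1 °C base temperature and 4700 ADH to pupariation are the abstract's estimates; the helper functions and example temperatures below are illustrative:

```python
# Sketch of the accumulated-degree-hours (ADH) model for C. vicina,
# using the abstract's figures: base temperature ~1 C, ~4700 ADH from
# egg hatch to pupariation.

T_BASE = 1.0               # minimum developmental temperature (deg C)
ADH_PUPARIATION = 4700.0   # degree hours from hatch to pupariation

def accumulated_degree_hours(temps_c, hours_per_reading=1.0):
    """Sum thermal input above the base temperature over a series of readings."""
    return sum(max(t - T_BASE, 0.0) * hours_per_reading for t in temps_c)

def hours_to_pupariation(constant_temp_c):
    """Hours from egg hatch to pupariation at a constant rearing temperature."""
    rate = constant_temp_c - T_BASE
    if rate <= 0:
        return float("inf")  # below the developmental threshold: no growth
    return ADH_PUPARIATION / rate

print(hours_to_pupariation(21.0) / 24)  # ~9.8 days at a constant 21 C
```

In casework the calculation runs the other way: hourly scene temperatures are summed with `accumulated_degree_hours` until the ADH implied by the larva's length is reached, giving a minimum post-mortem interval.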
noun: A material that absorbs neutrons in a reactor.

Example sentences using the English word:
- And, last of all, the first scan showed that, this time, the placenta is placed towards my back, effectively removing a potential frontal shock absorber from the equation.
- Richards decided a hinge would help by allowing the ankle to move: the most effective shock absorber is the ankle.
- In this vessel, which is in communication with another vessel called the absorber, containing cold water or very weak ammonia liquor, evaporation takes place, owing to the readiness with which cold water or weak liquor absorbs the ammonia, water at 59° Fahr. absorbing 727 times its volume of ammonia vapor.
- They got that way by sitting on vast amounts of money and essentially act as an inflation absorber, which is good for everyone.
- It is this half advanced field of the charged particles of the absorber which is added to the half retarded field of the source.
25 relations: Absolute magnitude, Apparent magnitude, Boss General Catalogue, Bright Star Catalogue, Constellation, Durchmusterung, Extrasolar Planets Encyclopaedia, Flamsteed designation, Giant star, HD 167042, Henry Draper Catalogue, Hipparcos, Light-year, List of exoplanets, Lynx (constellation), Minimum mass, Okayama Planet Search Program, Smithsonian Astrophysical Observatory Star Catalog, Star, Sun, 14 Andromedae, 14 Andromedae b, 6 Lyncis b, 81 Ceti, 81 Ceti b.

- Absolute magnitude: a measure of the luminosity of a celestial object, on a logarithmic astronomical magnitude scale.
- Apparent magnitude: a number that is a measure of a celestial object's brightness as seen by an observer on Earth.
- Boss General Catalogue (GC, sometimes General Catalogue): an astronomical catalogue containing 33,342 stars.
- Bright Star Catalogue (also known as the Yale Catalogue of Bright Stars or Yale Bright Star Catalogue): a star catalogue that lists all stars of stellar magnitude 6.5 or brighter, which is roughly every star visible to the naked eye from Earth.
- Constellation: a group of stars considered to form imaginary outlines or meaningful patterns on the celestial sphere, typically representing animals, mythological people or gods, mythological creatures, or manufactured devices.
- Durchmusterung (Bonner Durchmusterung, BD): the comprehensive astrometric star catalogue of the whole sky, compiled by the Bonn Observatory (Germany) from 1859 to 1903.
- Extrasolar Planets Encyclopaedia: an astronomy website, founded in Paris, France at the Meudon Observatory by Jean Schneider in February 1995, which maintains a database of all the currently known and candidate extrasolar planets, with individual pages for each planet and a full interactive catalog spreadsheet.
- Flamsteed designation: a combination of a number and constellation name that uniquely identifies most naked-eye stars in the modern constellations visible from southern England.
- Giant star: a star with substantially larger radius and luminosity than a main-sequence (or dwarf) star of the same surface temperature.
- HD 167042: a 6th-magnitude K-type subgiant star located approximately 164 light-years away in the Draco constellation.
- Henry Draper Catalogue (HD): an astronomical star catalogue published between 1918 and 1924, giving spectroscopic classifications for 225,300 stars; it was later expanded by the Henry Draper Extension (HDE), published between 1925 and 1936, which gave classifications for 46,850 more stars, and by the Henry Draper Extension Charts (HDEC), published from 1937 to 1949 in the form of charts, which gave classifications for 86,933 more stars.
- Hipparcos: a scientific satellite of the European Space Agency (ESA), launched in 1989 and operated until 1993.
- Light-year: a unit of length used to express astronomical distances, measuring about 9.5 trillion kilometres or 5.9 trillion miles.
- List of exoplanets: a list of extrasolar planets.
- Lynx (constellation): a constellation named after the animal, usually observed in the northern sky.
- Minimum mass: in astronomy, the lower-bound calculated mass of observed objects such as planets, stars and binary systems, nebulae, and black holes.
- Okayama Planet Search Program (OPSP): a program started in 2001 with the goal of spectroscopically searching for planetary systems around stars.
- Smithsonian Astrophysical Observatory Star Catalog: an astrometric star catalogue.
- Star: a type of astronomical object consisting of a luminous spheroid of plasma held together by its own gravity.
- Sun: the star at the center of the Solar System.
- 14 Andromedae (abbreviated 14 And, also named Veritate): an orange giant star of spectral type K0III situated approximately 258 light-years away in the constellation of Andromeda.
- 14 Andromedae b (abbreviated 14 And b, also named Spe): an extrasolar planet approximately 249 light-years away in the constellation of Andromeda.
- 6 Lyncis b (abbreviated 6 Lyn b): an extrasolar planet orbiting the K-type subgiant star 6 Lyncis, which is approximately 182 light-years away in the Lynx constellation.
- 81 Ceti (abbreviated 81 Cet): the Flamsteed designation of a G-type giant star approximately 300 light-years away in the constellation of Cetus.
- 81 Ceti b (abbreviated 81 Cet b): an extrasolar planet approximately 300 light-years away in the constellation of Cetus.
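The absolute- and apparent-magnitude entries above are linked by the standard distance-modulus relation, M = m − 5·log10(d / 10 pc). A small sketch, treating HD 167042's "6th magnitude at ~164 light-years" loosely as m = 6.0 (an assumption for the example):

```python
import math

LY_PER_PARSEC = 3.26156  # light-years in one parsec

def absolute_magnitude(apparent_mag, distance_ly):
    """Distance-modulus formula: M = m - 5 * log10(d_parsecs / 10)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc / 10)

# HD 167042, loosely: roughly 6th magnitude at ~164 light-years
print(round(absolute_magnitude(6.0, 164), 2))  # ~2.49
```

At exactly 10 parsecs the two magnitudes coincide, which is the definition of absolute magnitude.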
Google Guava is an open-source set of common libraries for Java, mainly developed by Google engineers (open source being a decentralized software-development model that encourages open collaboration). It helps reduce coding errors and provides utility methods for collections, caching, primitives support, concurrency, common annotations, string processing, I/O, and validations. The most recent release is Guava 25.0, released 2018-04-25.

Why Guava?
- By replacing existing library classes with those from Guava, you can reduce the amount of code you need to maintain.
- It is reliable, fast, and efficient.
- It provides many utility classes like Iterables, Lists, Sets, Maps, Multisets, Multimaps, and Tables which are regularly required in application development.
- Many Guava utilities reject and fail fast on nulls, rather than accepting them blindly, as null can be ambiguous.
- It simplifies implementing Object methods, like hashCode() and toString().
- Guava provides the Preconditions class with a series of common precondition checks.
- The Guava library is highly optimized.
- It simplifies propagating and examining exceptions and errors with the help of the Throwables utility.
- Guava's powerful API helps in dealing with ranges on Comparable types, both continuous and discrete.
- It provides tools for more sophisticated hashes than what's provided by Object.hashCode(), including Bloom filters.
- It provides optimized, thoroughly tested math utilities not provided by the JDK.
- Guava provides a few extremely useful string utilities for splitting, joining, padding, and more.
- It provides powerful collection utilities for common operations not provided in java.util.Collections.
and many more.

Example: The primitive types of Java are the basic types: byte, short, int, long, float, double, char, boolean. These types cannot be used as objects or as type parameters to generic types, which means that many general-purpose utilities cannot be applied to them. 
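To see the problem concretely, here is a stdlib-only sketch of the manual boxing that primitive-support libraries such as Guava wrap up for you (the helper method is invented for illustration, not Guava's API):

```java
// Why utilities for primitives are needed: int is not an Object, so an int[]
// cannot be handed to generic collection APIs directly. This sketch shows the
// manual boxing step that a primitives utility library hides.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PrimitiveBoxingDemo {
    // Copies an int[] into a boxed List<Integer> so collection APIs apply.
    static List<Integer> asList(int[] values) {
        List<Integer> boxed = new ArrayList<>(values.length);
        for (int v : values) boxed.add(v);  // each int is boxed to Integer
        return boxed;
    }

    public static void main(String[] args) {
        int[] raw = {3, 1, 2};
        List<Integer> boxed = asList(raw);
        // Generic utilities now work on the boxed view:
        System.out.println(Collections.max(boxed));  // 3
    }
}
```

Guava's primitive utilities go further (views without copying, byte-array conversions, unsigned arithmetic), but this is the gap they exist to fill.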
Guava provides a number of these general-purpose utilities, ways of interfacing between primitive arrays and collection APIs, conversion from types to byte array representations, and support for unsigned behaviors on certain types. Let us have an overview of the utilities and classes that Guava provides over the existing library classes.

- Optional Class: An Optional object is used to represent null with an absent value. In many of the cases where programmers use null, they mean to indicate some sort of absence: perhaps there might have been a value, but there is none, or the value could not be found. Optional<T> is a way of replacing a nullable T reference with a non-null value. An Optional may either contain a non-null T reference (in which case we say the reference is "present"), or it may contain nothing (in which case we say the reference is "absent"). It is never said to "contain null."
- Preconditions Class: Guava provides a number of precondition-checking utilities. Preconditions provide static methods to check that a method or a constructor is invoked with proper parameters. Each method has three variants:
  - No extra arguments.
  - An extra Object argument.
  - An extra String argument, with an arbitrary number of additional Object arguments.
  After static imports, the Guava methods are clear and unambiguous.
- Ordering Class: Ordering is Guava's "fluent" Comparator class, which can be used to build complex comparators and apply them to collections of objects. For additional power, the Ordering class provides chaining methods to tweak and enhance existing comparators.
- Objects Class: The Objects class provides helper functions applicable to all objects, such as equals, hashCode, toString, and compare/compareTo.
- Throwables: The Throwables class provides utility methods related to the Throwable interface. Sometimes, when you catch an exception, you want to throw it back up to the next try/catch block. 
This is frequently the case for RuntimeException or Error instances, which do not require try/catch blocks, but can be caught by try/catch blocks when you don't mean them to. Guava provides several utilities to simplify propagating exceptions.
- Collection Utilities: Guava introduces many advanced collections. These are among the most popular and mature parts of Guava. Some useful collections provided by Guava are: Multiset, Multimap, BiMap, Table, ClassToInstanceMap, RangeSet, and RangeMap.
- Graphs: Guava's common.graph is a library for modeling graph-structured data, that is, entities and the relationships between them. Some examples:
  - Webpages and hyperlinks.
  - Airports and the routes between them.
  - People and their family trees.
- String Utilities: Guava introduces many advanced string utilities such as Joiner, Splitter, CharMatcher, Charsets, and CaseFormat.
- Primitive Utilities: As the primitive types of Java cannot be passed in generics or in collections as input, Guava provides a number of wrapper utility classes to handle primitive types as objects.
- Math Utilities: Guava provides mathematics-related utility classes to handle int, long and BigInteger. These utilities are exhaustively tested for unusual overflow conditions. They have been benchmarked and optimized, and they are designed to encourage readable, correct programming habits.
- Caches: Caches are tremendously useful in a wide variety of use cases. For example, you should consider using caches when a value is expensive to compute or retrieve, and you will need its value on a certain input more than once. A Cache is similar to a ConcurrentMap, but not quite the same. Generally, the Guava caching utilities are applicable whenever:
  - You are willing to spend some memory to improve speed.
  - You expect that keys will sometimes get queried more than once.
  - Your cache will not need to store more data than what would fit in RAM.
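The eviction idea behind a size-bounded cache can be illustrated with a tiny stdlib-only LRU map. This is not Guava's Cache (which adds concurrency, loading, and expiration policies); it is just a sketch of the "cap the size, evict the least recently used entry" behavior, built on the JDK's LinkedHashMap:

```java
// Minimal LRU sketch: trade memory for speed, evicting the least recently
// used entry once a maximum size is reached. LinkedHashMap in access-order
// mode does the recency bookkeeping for us.
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maximumSize;

    public LruCache(int maximumSize) {
        super(16, 0.75f, true);  // access-order: get() refreshes recency
        this.maximumSize = maximumSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maximumSize;  // evict eldest once over the cap
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");       // touch "a", so "b" is now the eldest entry
        cache.put("c", 3);    // exceeds the cap of 2: evicts "b"
        System.out.println(cache.keySet());  // [a, c]
    }
}
```

A real Guava cache would be built with its CacheBuilder fluent API instead; the point here is only what a maximum-size bound does.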
To summarize the cool features of Guava, refer to the table given below. We will be discussing these classes and utilities in more detail in future articles.

Reference: Google Guava
With a brush, some batteries, a small motor and a few wires, it's easy to create a robot that spins, bumps and buzzes around on any smooth surface.

-a small brush, like a vegetable brush or a cleaning brush
-two AA batteries
-battery holder for 2 AA batteries (optional)
-a small toy motor with lead wires and a pencil eraser or small rubber stopper (or a vibrating motor)
-battery clip (optional)
-zip ties (optional)
-electrical tape or duct tape

Make your bristlebot!
- Attach the motor to one end of the top of the brush. If it's not a vibrating motor, stick an eraser or rubber stopper onto the spinning post to make it vibrate. Use a zip tie or duct tape to secure it. Make sure the spinning parts can rotate freely.
- Attach the battery holder to the top of the brush near the motor.
- Insert the batteries in the battery holder.
- Twist the wires around the motor terminals and secure with tape. (These may be the wires on the battery clip, if you have one.)
- To start the motor, attach the wires directly to the battery terminals, or to the battery clip and snap it onto the batteries.
- Place your robot on a smooth surface to see what happens.

Enrichment: Try different brush shapes, sizes and angles to see how they move. Does your robot spin in the same direction as the motor, or the opposite direction?

The Science Behind the Fun: In this experiment, you complete a battery-powered electrical circuit to spin a vibrating motor. The vibrations traveling through the bristles of the brush move your robot around on the floor.

Electrons (negatively charged particles) can flow through substances called conductors. Graphite, used to make pencil lead, among other things, is a conductor and can be used to make a simple circuit on paper. A circuit is just a path for electrical current. You have to do this experiment with a graphite pencil, rather than the kind you use at school, but you can pick them up at most art supply stores. 
You'll also need a few small LED bulbs, 2 wires with alligator clips on either end, and a 9 volt battery. Adult supervision recommended.
1. Make a thick, black rectangle using a graphite pencil. We used a #9 graphite crayon.
2. Hook the two wires up to the battery terminals.
3. Clip the wire attached to the positive battery terminal to one wire of an LED bulb. (Don't test it on the battery, or you may blow it out.)
4. Touch the unattached LED wire to the other (left) side of the graphite bar.
5. Touch the alligator clip attached to the negative battery terminal to the right side of the graphite bar you drew.
6. If it doesn't light, switch the positive alligator clip to the other wire of the LED bulb and try it again.
7. Move the negative clip closer to the bulb. It should get brighter as you decrease the distance.

Repost from Dec. 19th, 2010 (Photos from Kitchen Science Lab for Kids, Quarry Books 2014)

Have you ever gotten a shock from a doorknob after shuffling across a carpet? The term "static electricity" refers to the build-up of a positive or negative electrical charge on the surface of an object. In this case, the charged object is your body. You feel an electric shock as the charge you've collected from the carpet jumps from your hand to the metal doorknob. Tiny particles called electrons have negative charges and can jump from object to object. When you rub a balloon on your hair, or a comb through it, many of these electrons are stripped from your hair and move to the balloon or comb, giving it a negative charge (and often leaving your hair all positively charged and standing up as the strands try to avoid each other). The negatively charged balloon or comb then makes a great tool for making electrons jump around! You can easily make a contraption called an electroscope using:
-some thin aluminum foil or mylar (the shiny stuff balloons and candy wrappers are made from)
-a balloon or comb. 
1. Cut the cardboard to fit over the mouth of the jar, poke the nail through the cardboard, tape on two long, thin strips of foil or mylar (see photo) and place the whole thing in the jar so the foil strips hang down, touching each other.
2. Charge your balloon or comb by rubbing it on your hair or clothing to give it a negative charge. Bring the charged object close to the nail head. You don't even have to touch it!

What happened? Some negatively-charged electrons jump from the comb to the nail and into the strips of foil. The negative charge on the comb will push electrons (which are also negatively charged) down to the foil/mylar and give both strips a negative charge. The two strips try to move away from one another as the like charges repel each other. What happens when you make the strips out of different materials like paper? Are there other charged objects you can use to make your foil strips "dance"?

You can also bend a thin stream of water from the faucet by holding your charged comb next to it. The water is uncharged and is pulled toward the negative charge of the comb. Try making small pieces of tissue paper float or dance by holding a charged comb or balloon next to them! We filled an empty soda bottle with tiny pieces of foil and made them jump around with a charged comb held close to the bottle.

Spring is egg season. You may prefer dyed eggs, hard-boiled eggs, deviled eggs, or even dinosaur eggs. No matter what kind of eggs you like best, you'll love these eggsperiments that let you play with the amazing architecture of eggs, dissolve their shells and even dye them with the pigments found in your refrigerator. Just click on experiments for directions and the science behind the fun!

From surface tension to evaporation, science comes into play every time you blow a bubble. 
Water molecules like to stick to each other, and scientists call this sticky, elastic tendency "surface tension." Soap molecules have a hydrophobic (water-hating) end and a hydrophilic (water-loving) end, and can lower the surface tension of water. When you blow a bubble, you create a thin film of water molecules sandwiched between two layers of soap molecules, with their water-loving ends pointing toward the water, and their water-hating ends pointing out into the air.

As you might guess, the air pressure inside the elastic soapy sandwich layers of a bubble is slightly higher than the air pressure outside the bubble. Bubbles strive to be round, since the forces of surface tension rearrange their molecular structure to make them have the least amount of surface area possible, and of all three-dimensional shapes, a sphere has the lowest surface area. Other forces, like your moving breath or a breeze, can affect the shape of bubbles as well. The thickness of the water/soap film is always changing slightly as the water layer evaporates, and light is hitting the soap layers from many angles, causing light waves to bounce around and interfere with each other, giving the bubble a multitude of colors.

Try making these giant bubbles at home this summer! They're a blast! (It works best on a day when it's not too windy, and bubbles love humid days!)

To make your own giant bubble wand, you'll need:
-Around 54 inches of cotton kitchen twine
-two sticks 1-3 feet long
-a metal washer

1. Tie string to the end of one stick.
2. Put a washer on the string and tie it to the end of the other stick so the washer is hanging in-between on around 36 inches of string. (See photo.) Tie the remaining 18 inches of string to the end of the first stick. See photo!

For the bubbles:
-6 cups distilled or purified water
-1/2 cup cornstarch
-1 Tbs. baking powder
-1 Tbs. glycerine (Optional. Available at most pharmacies.)
-1/2 cup blue Dawn. 
The type of detergent can literally make or break your giant bubbles. Dawn Ultra (not concentrated) or Dawn Pro are highly recommended. We used Dawn Ultra, which is available at Target.
1. Mix water and cornstarch. Add remaining ingredients and mix well without whipping up tiny bubbles. Use immediately, or stir again and use after an hour or so.
2. With the two sticks parallel and together, dip the bubble wand into the mixture, immersing all the string completely.
3. Pull the string up out of the bubble mix and pull the sticks apart slowly so that you form a string triangle with a bubble in the middle.
4. Move the wands or blow bubbles with your breath. You can "close" the bubbles by moving the sticks together to close the gap between strings.

What else could you try?
-Make another wand with longer or shorter string. How does it affect your bubbles?
-Try different recipes to see if you can improve the bubbles. Do other dish soaps work as well?
-Can you add scent to the bubbles, like vanilla or peppermint, or will it interfere with the surface tension?
-Can you figure out how to make a bubble inside another bubble?

Every fossil has a story to tell. Whether it's the spectacular specimen of a dinosaur curled up on its eggs or a tiny Crinoid ring, mineralized remains offer us a snapshot of the past, telling us not only what creatures lived where, but about how they lived and the world they inhabited. Growing up surrounded by the flat-topped, windswept Flint Hills of Kansas, it was hard to imagine that I was living in the bottom of an ancient seabed, but there was evidence of the Permian period all around. Now, when my kids and I return to my hometown, a fossil-hunting trip is always part of our routine, and we hunt for shells and coral where roads cut through crumbling limestone and chert (flint). Looking up at layer after layer of rock and shells, I can almost feel the weight of the water that once covered the land. 
An episode of RadioLab we heard on the drive north from Kansas to Minnesota explained that coral keeps time and that, by comparing modern coral to ancient coral fossils, scientists discovered that millions of years ago, years were about 40 days shorter than they are now. Can you guess why? Give the podcast a listen here. My mind was blown!

A visit to the Flint Hills Discovery Center in Manhattan, KS gave us more insight into the amazing geology, ecology and anthropology of the Flint Hills and the Konza Prairie that blankets them. Most people don't know that the great tallgrass prairies of the United States wouldn't exist if not for humans, who have been burning them for thousands of years.

What do you know about where you live? What's it like now? What do you think it was like long, long ago? Are there fossils nearby? Here are some fossil-hunting resources I found online, in case you want to go exploring:

I got together with some friends this weekend to do a quick iPhone recording of a chemistry song (on my Kitchen Pantry Scientist YouTube channel soon) and these awesome kids were nice enough to take a break from playing to sing the Science Song with me. They had me laughing so hard that I could hardly get the words out! Can you make up a song about science?

My book, "Kitchen Science Lab for Kids," is finally out, and over Labor Day weekend, I traveled to Dragon Con in Atlanta to talk about it and do science with the kids at the convention. At the convention, I got to meet lots of fantastic scientists, science writers, science entertainers and science enthusiasts. One of them was the amazing Paul Zaloom, of "Beakman's World." I checked out his "Beakman Live" show and learned some awesome new experiments. I tried one of them out this morning. Check it out, and then try it out! All you need is a playing card, a glass and some water. The science explanation is in the video. Be sure to catch some episodes of Beakman's World online! 
It's been a busy summer, but we're working on some sweet new experiments to share with you soon! Last week, the kids and I got an advance copy of my new book "Kitchen Science Lab for Kids," which will be available September 15th, and we love how it turned out! If you pre-order a copy from Amazon, Barnes&Noble, IndieBound, or Indigo before August 15th, I'll send you a personalized, signed bookplate for each copy you order. Just email your receipt number and the address where you'd like the bookplate(s) sent. My email address is email@example.com. (Be sure to include the name(s) you'd like the book signed for!)

At-home science provides an environment for freedom, creativity and invention that's not always possible in a school setting. In your own kitchen, it's simple, inexpensive, and fun to whip up a number of amazing science experiments using everyday ingredients. Science can be as easy as baking. Hands-On Family: Kitchen Science Lab for Kids offers 52 fun science activities for families to do together. The experiments can be used as individual projects, for parties, or as educational activities for groups. Kitchen Science Lab for Kids will tempt families to cook up some physics, chemistry and biology in their own kitchens and back yards. Many of the experiments are safe enough for toddlers and exciting enough for older kids, so families can discover the joy of science together.

Can kids in middle school come up with world-changing inventions? Absolutely. Most 5-8th graders don't have free access to labs full of chemicals and equipment, which is probably a good thing, but they're armed with more curiosity and creativity than most adults. When given the opportunity and encouragement to let their imaginations run wild, kids come up with the most amazing ideas. The Discovery Education 3M Young Scientist Challenge helps address the gap between idea and reality, and offers kids amazing incentives to come up with big ideas. 
The competition encourages kids in middle school to make two-minute videos about their ideas for using science, technology, math and engineering (STEM) to solve real-life problems. The videos are judged based on - Creativity (ingenuity and innovative thinking) (30%); - Scientific knowledge (30%); - Persuasiveness and effective communication (20%); and - Overall presentation (20%). 3M‘s Innovation Page gives overviews of how their scientists are impacting our daily lives, and some of their scientists will mentor the contest’s ten finalists, helping them envision how to take their creations from dream to reality. Ten finalists will travel to the 3M Innovation Center for the final competition. Want to enter? Here’s the link: http://www.youngscientistchallenge.com/enter. It seemed like the best way to learn about how kids come up with ideas was to ask my own two middle schoolers if they’d like to enter the contest, so I asked them to think about problems that they could help solve with STEM. They were less than excited until I showed them a few of the videos from the Young Scientist Challenge website. Like me, they were blown away by what Peyton Robertson and Deepika Kurup created to win the 2012 and 2013 Young Scientist Challenge and decided, without any prodding from me, that they wanted to come up with their own ideas. My son, who is a voracious reader of all things science, and is somewhat obsessed with meteorology, immediately knew what particular area he wanted to focus on. It took a few days, but now he’s got a great idea and is working to make a model to test. My oldest daughter was another story. She likes science, but spends much more time thinking about acting, basketball, photography, her friends, and our German Wirehaired Pointer. She quickly got frustrated and worried that she didn’t know enough about science to come up with a good idea. 
To encourage her, I asked her to think about how she could solve a health problem in animals, prevent basketball injuries, make a camera app, or solve an environmental problem. She decided to try to think of something people throw away and use it for something really great. While researching ocean trash, she came up with another idea, addressing a water pollution problem and is excited to test out her idea. They need to get going, since the entry deadline is April 22nd, but I know they can do it, and love the ideas they’ve come up with! If you’re on Twitter, you can follow the contest @DE3MYSC and join us for #STEMchat on Twitter April 8 from 9 – 10 PM Eastern as we talk about How to Raise America’s Top Young Scientist (this is the title earned by the winner of the DE 3M YSChallenge.) Although I don’t usually write sponsored posts, I made an exception for this contest, since I think it’s a fantastic way to get kids excited about STEM. This post is sponsored by the Discovery Education 3M Young Scientist Challenge.
1. In a density experiment, a student experimentally measures the following four values for the density of liquid ethanol: 0.772 g/ml, 0.774 g/ml, 0.785 g/ml, and 0.775 g/ml. The theoretical density of ethanol is 0.789 g/ml at 20ºC.
a. Compute the average density of ethanol.
b. Compute the % error for the average density (to 2 sig figs).
c. Compute the standard deviation for this data (to 2 sig figs).
d. Is this data accurate? Precise? Explain.
2. Write the following measurements in scientific notation. Assume that terminal zeros of a whole number are not significant. Remember that WebAssign uses the calculator notation of e instead of 10.
a. 850 g
b. 0.00025 L
c. 2350000 m
3. Using appropriate rules for rounding, round each of the following numbers to three significant figures:
a. 34.7823 m
b. 0.003117 L
c. 3356.8 s
d. 1.2936 kg
Multiplication and Division: Perform the following calculations of measured numbers. Give the answers with the correct number of significant figures. Round your final answer only. Assume all numbers are measurements.
a. 57 x 0.55
b. 61.7 / 11
d. 95.0 / 5.00
Addition and Subtraction: Perform the following calculations and give the answer with the correct number of significant figures.
a. 20.6 cm + 0.179 cm
b. 106.21 mL + 0.773 mL + 53 mL
c. 153.751 g - 15.57 g
4. Identify each of the following numbers as measured or exact; give the number of significant figures in each. You may also type infinite or vague when appropriate.
a. 47.0 g
b. 10 pencils
c. 0.0005 cm
d. 550000 km
e. 3 eggs
5. Identify each of the following as an exact or a measured number.
(a) thickness of a book
(b) number of tea bags needed to make a pot of tea
(c) number of pencils in a pack
(d) number of eggs in a 3-egg omelet
6. Type of Measurement / Metric Unit / Abbreviation
The expert examines measurement and scientific notation. The standard deviation for the data is computed.
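The arithmetic in problem 1 can be checked with a short script. A sketch only: it assumes the sample standard deviation (n − 1 in the denominator) is the one intended, which some courses replace with the population form.

```python
import statistics

densities = [0.772, 0.774, 0.785, 0.775]  # measured values, g/mL
theoretical = 0.789                       # g/mL at 20 ºC

avg = sum(densities) / len(densities)               # ≈ 0.7765 g/mL
pct_error = abs(avg - theoretical) / theoretical * 100
stdev = statistics.stdev(densities)                 # sample std dev (n - 1)

print(f"average  = {avg:.4f} g/mL")
print(f"% error  = {pct_error:.2g} %")    # 2 sig figs
print(f"std dev  = {stdev:.2g} g/mL")     # 2 sig figs
```

The small standard deviation relative to the ~1.6% error suggests the data are precise but not especially accurate.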
Science and technology have developed over many years. Science is a field of observation, experimentation, hypothesis, and curiosity, and it is as much about failure as about success. Many inventions have failed, and some of those failures accidentally led to new discoveries.
1. The Microwave Oven
Percy Spencer was an American engineer who, while working for Raytheon, walked in front of a magnetron, a vacuum tube used to generate microwaves, and noticed that the chocolate bar in his pocket had melted. In 1945, after a few more experiments (one involving an exploding egg), Spencer successfully invented the first microwave oven. The first models were a lot like the early computers: bulky and impractical. In 1967, compact microwaves began filling American homes.
2. The Big Bang
The prevailing theory of how the universe was formed began with noise, like common radio static. In 1964, while working with the Holmdel antenna in New Jersey, the two astronomers Arno Penzias and Robert Wilson discovered a background noise that left them perplexed. After ruling out possible interference from urban areas, nuclear tests, or pigeons living in the antenna, Wilson and Penzias came across an explanation in Robert Dicke's theory that radiation left over from a universe-forming big bang would now act as background cosmic radiation. In fact, only 37 miles from the Holmdel antenna, at Princeton University, Dicke and his team had been searching for this background radiation. When he heard the news of Wilson and Penzias' discovery, he famously told his research partners, "well boys, we've been scooped." Penzias and Wilson would go on to receive the Nobel Prize.
For more details visit https://techiefield.com/accidental-di...
MOLA Digital Altimetry (NASA image)
Martian Oceans: Evidence for a Northern Ocean on Mars
Overview
- Where could the water have come from? (Origin of water on Mars)
- Could the Martian climate have been favorable for a liquid ocean? (Climate conditions and obliquity simulations)
Lunine et al. discuss:
Lunine et al. conclude that:
- Collisional history of water-laden asteroids with Mars, expressed as the cumulative fraction of "C-type" asteroids accreted vs. time (Ma)
- Probability of cometary collisions with Mars as a function of their initial semi-major axes
Abe, Y. and Abe-Ouchi, A. discuss:
Parker et al. discuss:
Head et al. point out that:
- Flood depth of 500 m
- Flood depth of 1490 m (contact 1)
- Flood depth of 1680 m (mean depth of contact 1, level of contact 2 shown underneath)
References
Lunine, J., et al., The origin of water on Mars, Icarus, 165(1):1-8, 2003.
Abe, Y. and Abe-Ouchi, A. (2003) 34th Annual Lunar and Planetary Science Conference, Abstract #1617.
Head, J., et al., Possible ancient oceans on Mars: Evidence from Mars Orbiter Laser Altimeter data, Science, 286:2134-2137, 1999.
Parker, T.J., et al. (2001) 32nd Annual Lunar and Planetary Science Conference, Abstract #2051.
Parker, T.J., et al. (2002) 33rd Annual Lunar and Planetary Science Conference, Abstract #2027.
Parker, T.J. and Banerdt, W.B. (1999) International Conference on Mars 5, Abstract #6114.
What happens when an insect touches a spider’s web? Most web-spinning spiders line their silken threads with droplets of glue, which snag blundering insects. But one group—the cribellate spiders—does something different. Their threads are surrounded by clouds of even more silk—extremely fine filaments, each a hundred times thinner than regular spider silk. These nanofibers give the silk a fuzzy, woolly texture, and since they have no glue, they’re completely dry. And yet they’re clearly sticky. Insects that stumble into the webs of cribellate spiders don’t stumble out again. Raya Bott and colleagues at Aachen University in Germany have now shown that cribellate silk adheres to insects in a previously unknown and unsettlingly macabre way. When an insect touches the strands, waxy chemicals in its outer surface get sucked into the woolly nanofibers and reinforce them, turning the tangled mass of delicate threads into a solid, sturdy rope. The victim literally becomes a part of the web, inadvertently strengthening the instrument of its own capture. Cribellate spiders are among the most ancient of spiders, and they use their silk in a variety of startling ways. The ogre-faced or net-casting spiders hold their webs in their front legs and drop them onto passing insects. The uloborid spiders have lost the venom that most spiders use, and instead crush their prey to death by wrapping them in excessive amounts of silk; one species was documented using 140 meters to envelop a single insect. And some uloborids—the triangle spiders—spin triangular bungee webs. They hold one corner in tension; when an insect lands, the spider lets go and the entire web collapses onto the target. Despite these different tactics, all of these species rely on the sticky nature of their dry silk. Scientists used to think that the nanofibers in cribellate silk make such close contact with surfaces that they stick using the forces that hold molecules together over very small distances. 
But that couldn’t be the whole story, because cribellate silk adheres far more strongly to insects than it does to artificial surfaces. Bott found a clue to cribellate silk’s powers by breaking out a powerful microscope. She noticed that whenever the silk had touched an insect, she couldn’t make out the individual nanofibers any more. It was as if they had fused together. She even filmed the process, showing that a wave of fusion begins at the point of contact, and then travels up the silk. Looking more closely, she saw that the fibers were still there. They had just become embedded in some kind of fluid—think spaghetti strands drenched in a thick marinara sauce. And when she analyzed the chemicals in the fluid, she realized that it was a match for the waxes found in insect shells. It seemed like the silk absorbs these waxes right off the insects, just as cotton balls will soak up water. In the process, the silk reinforces itself. Bott confirmed her idea by showing that cribellate silk sticks eight times more strongly to normal insect shells than to those that were chemically treated to remove their waxes. Similarly, it sticks more strongly to artificial surfaces that have been coated in those same waxes. “Insects typically use the wax to reduce evaporation, but the spider misuses that protective layer,” says Anna-Christin Joel, who led the study. And all of this happens automatically. “They make a strong case,” says Todd Blackledge from the University of Akron, who studies the evolution of spider silk. “It makes sense that silk would have features that make it function best on natural insect surfaces rather than synthetic surfaces commonly tested in the laboratory.” Insects, in turn, evolved countermeasures. They couldn’t get rid of their waxes entirely or they would lose too much water, but they could make the wax so viscous that it wouldn’t soak into the silk, or simply cover it with a protective shield. 
Joel suspects that these adaptations drove spiders to evolve new ways of trapping their prey, which might explain why some of them started adding glue to their silk. But ironically, the gluey threads stick less well to insects with unprotected waxy shells. Joel thinks that insects can defend against either the dry cribellate silk or the wet glue-coated kind, but not both. And conversely, both kinds of silk only work on some kinds of prey, which is why the ancient cribellate spiders weren’t totally displaced by their glue-using descendants. Thanks to their self-reinforcing silk, they’ve stuck around.
- Full paper - Open Access
Active monitoring at an active volcano: amplitude-distance dependence of ACROSS at Sakurajima Volcano, Japan
© Yamaoka et al.; licensee Springer. 2014
Received: 3 October 2013 Accepted: 22 April 2014 Published: 7 May 2014
The first test of volcanic activity monitoring with a system of continuously operable seismic sources, named ACROSS, was started at Sakurajima Volcano, Japan. Two vibrators were deployed on the northwestern flank of the volcano, at a distance of 3.6 km from the main crater. We successfully completed the test of continuous operation from 12 June to 18 September 2012, with a single frequency at 10.01 Hz and frequency modulation from 10 to 15 Hz. The signal was detected even at a station 28 km from the source, establishing the amplitude decay relation as a function of distance in the region in and around Sakurajima Volcano. We compare the observed amplitude decay with the prediction that was made before the deployment as a feasibility study. In the prediction, we used existing datasets from an explosion experiment in Sakurajima and the distance-dependent amplitude decay model that was established for the ACROSS source in the Tokai region. The predicted amplitude in Sakurajima is systematically smaller than that actually observed, but the dependence on distance is consistent with the observation. On the basis of the comparison of noise levels in Sakurajima Volcano, only 1 day of data stacking is necessary to reduce the noise to a level comparable to the signal level at the stations on the island. Many observation methods are used in monitoring volcanic activity to estimate the migration of magma associated with volcanic eruptions. Crustal deformation is widely used in volcano monitoring to obtain information on the pressure source beneath volcanoes.
Leveling surveys in and around volcanoes have long been used to detect the location and the pressure change of magma chambers since the pioneering work of Omori (1916) and a model calculation by Mogi (1957). Data of leveling surveys are also used for detecting the intrusion of dykes (e.g., Hashimoto and Tada1990). Quantitative modeling of dykes has been widely used using the comprehensive formulation by Okada (1985). Since the innovative success of Global Positioning System (GPS), geodetic networks have been established in many volcanoes, and their crustal deformations are monitored in near-real time. The crustal deformations are usually interpreted as inflation or deflation of magma reservoirs or dyke intrusions. Seismic activities have also long been used to monitor volcanic activities. Activation or deactivation of earthquakes, changes in hypocenter distribution, and types of earthquakes are widely used as empirical tools to warn volcanic eruptions. McNutt (1996) proposed a generic swarm model of volcanic earthquakes to evaluate the temporal sequence of volcanic activity. Intrusions of dykes are usually inferred from hypocenter migrations (e.g., Sakai et al.2001 for the Miyakejima Volcano). Long-period (LP) events or volcanic tremors are used to infer magma or hydrothermal activity in volcanoes. Very long period (VLP) events, especially, are used to model the vibration of magma plumbing systems (e.g., Kumagai2006). Electromagnetic observations are generally used for monitoring thermal activity of volcanoes. Heating or cooling of rocks in volcanoes can be detected with the increase or decrease of magnetic fields near volcanoes. Emplacement of magma or changes in hydrothermal activity affect the resistivity structure beneath volcanoes, which can be monitored by electromagnetic surveys. For example, pre-eruptive activity of the caldera formation at Miyakejima Volcano in 2000 was monitored with magnetic and electric field variations (Sasai et al.2002). 
Temporal changes in the propagation properties of seismic wave are a relatively new tool for monitoring volcanic activities. S-wave splitting has been regarded as a stress measure (Crampin1994) and used for monitoring volcanic activity (e.g., Gerst and Savage2004). Seismic interferometry using passive sources, such as ambient noise, has recently been applied to detect the temporal variation of seismic propagation in volcanic regions to monitor their activity. Grêt and Roel (2005) monitored temporal changes of the seismic structure in Mt. Erebus with coda wave interferometry from December 1999 to February 2000, suggesting that the change of cross correlation between two different seismic events might indicate changes in the uppermost lava lake. Seismic interferometry has been applied to other volcanoes, such as Merapi (Sens-Schoenfelder and Wegler2006) and Reunion Volcanoes (Brenguier et al.2008), to monitor the temporal changes in volcanic activity. Active sources are also used to detect temporal variations of seismic propagation properties in volcanoes, though few trials have been made. Active sources have an advantage over natural sources in that source parameters such as time and location are well constrained. Nishimura et al. (2005) examined a temporal change in seismic velocity by repeating six explosion sources in a 6-year period near Iwate Volcano throughout its active period. Tsutsui et al. (2011) conducted repeating reflection seismic surveys in Sakurajima Volcano to investigate the temporal changes in reflection image. Traditional artificial sources were used for monitoring seismic propagation properties using repeating operations with regular time intervals. Explosive or impact sources, which have been frequently used, can destroy the ambient rocks and could change the propagation property around the source. 
To overcome such shortcomings, a controlled source for long-term continuous monitoring, named ACROSS, which stands for Accurately Controlled Routinely Operated Signal System, was developed (Kumazawa and Takei1994). In the ACROSS, the seismic signal is generated by the centrifugal force of a rotating eccentric mass. The rotation is highly stabilized to maximize the stacking performance in order to increase the signal-to-noise ratio. The transfer functions between the source and receivers are calculated with a convolution of received signals with the source signal. ACROSS was first deployed in 1996 at two test sites, Awaji Island and Tono region in Japan, to evaluate its performance. The rotation is accurately controlled by an AC servo motor with a feedback inverter, and the rotational angle is synchronized to the pulse sequences given by a GPS clock. As most of the seismic stations are operated with reference to a GPS clock, we can establish a remote synchronization between the ACROSS source and seismic stations (Yamaoka et al.2001). Fifteen months of monitoring at Awaji site started in January 2000, and it succeeded in detecting a temporal change associated with strong ground motion by nearby earthquakes (Ikuta and Yamaoka2004), which shows a sudden delay and gradual recovery of seismic velocity, presumably due to ground water movement. Deployment of ACROSS in Sakurajima Sakurajima Volcano, Japan, is one of the most active volcanoes in the world and is located in the southern part of Kyushu Island, Japan. Sakurajima Volcano is a small island and is located at the rim of the Aira Caldera that produced a gigantic silicic eruption about 25,000 years ago (Aramaki1969). Sakurajima Volcano experienced flank and summit eruptions in historic times (Kobayashi and Tameike2002). Violent eruptions with effusion of andesitic lava took place in 1476, 1779, 1914, and 1946. 
The volume of lava in the 1914 eruption was about 1.5 km3 (Ishihara et al. 1981), and the flow covered the channel between the Sakurajima and Kyushu Islands. The results of instrumental observations for recent eruptive activities are summarized by the Japan Meteorological Agency (2013). The current eruptive activity started in 1955 and remained at a high level from 1974 to 1992. After this period, Sakurajima was less active until 2006. The eruption activity started anew in June 2006 from the Showa Crater, located about 500 m east of the Minamidake Crater, which has been active and has exploded tens of thousands of times since 1974. The number of explosions at the Showa Crater amounted to more than a thousand in 2010. Studies on crustal movement revealed two main magma sources in and around Sakurajima Volcano (Ishihara 1990): a large magma source beneath the Aira Caldera located at a depth of about 10 km below sea level, and a small magma source located about 4 km beneath the summit of Sakurajima Volcano. Hidayati et al. (2007) estimated the location of magma sources by analyzing volcano-tectonic earthquakes. They showed that a magma source exists at a depth of 5 km and that small magma pockets are distributed vertically from depths of 2 to 4 km below the summit.
Deployment of ACROSS
A PC-based controller that can make precise operations on the motor is the essential part of ACROSS (Kunitomo and Kumazawa 2004). It consists of a GPS clock (XL-DC, TrueTime Inc., Santa Rosa, CA, USA), pulse generators (ACROSS-SG2, Digital Signal Technology Inc., Asaka, Japan), and control software (ROSETTA2012) on a Windows PC. The pulse generator provides a series of pulses to the power control gear, which drives the AC servo motor so that the rotation angle is proportional to the number of pulses given to it. The timing of pulses produced by the pulse generator is precisely synchronized to the GPS clock.
Therefore, this pulse generator can drive the vibrator so that it produces a sinusoidal force with frequency modulation (FM) by expanding and shrinking the intervals of pulses. In ACROSS, the FM operation is a fundamental technique to produce plural sinusoids simultaneously with one vibrator. The control software has multiple functions. It monitors the states of the mass rotation, such as the rotational velocity, mass position, and motor torque. The notable function of the software is to switch the rotation direction automatically at regular time intervals. The switching interval is usually chosen to be either 1 or 2 h. We can synthesize a linear vibration in any direction with a combination of clockwise and anticlockwise rotations. This means that we can monitor the temporal variation of seismic propagation properties for multiple excitation directions. As the vibrator rotates around the vertical axis, radial and transverse vibrations with reference to the station direction are synthesized to obtain the transfer function related to P and S waves, respectively, which is a great advantage of the system. The source site has Internet connection for remote monitoring and remote control of ACROSS. This dramatically reduces the frequency of maintenance visits. In the summer time, electric power outages and vibrator stoppages occur frequently due to unstable weather conditions, including heavy rain and typhoons with lightning. The vibrators can be restarted via the Internet connection using a VNC protocol, which enables us to stop or restart the vibrator remotely, without visiting the site. The rotation frequency and phase, the motor torque, and oil temperature are monitored and recorded by the PC to infer the cause of any troubles, shortening the time before repairs. 
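The synthesis of a linear excitation from the two rotation senses can be verified with a short numerical check. This is an illustrative sketch, not the paper's processing code; the phase values below are arbitrary, and amplitudes are normalized to one.

```python
import numpy as np

omega = 2 * np.pi * 10.01       # rad/s, rotation rate of the eccentric mass
t = np.linspace(0.0, 1.0, 2001) # 1 s of samples
phi1, phi2 = 0.8, 0.2           # hypothetical phases of the two rotations

# Force vectors of anticlockwise (+omega) and clockwise (-omega) rotations
f_ccw = np.stack([np.cos(omega * t + phi1), np.sin(omega * t + phi1)])
f_cw = np.stack([np.cos(-omega * t + phi2), np.sin(-omega * t + phi2)])
total = f_ccw + f_cw

# Trigonometric identities give a purely linear oscillation along the
# direction at angle (phi1 + phi2) / 2; the perpendicular component vanishes.
theta = (phi1 + phi2) / 2
perp = np.array([-np.sin(theta), np.cos(theta)])
print(np.max(np.abs(perp @ total)))  # ~0: motion is linear
```

Choosing the phase shift between the clockwise and anticlockwise records thus selects the synthesized excitation direction, which is how the radial and transverse excitations described above are obtained.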
A web camera with a microphone is also deployed at the vibrators' location so that the lubricant circulators, the control gear, and PC-based controllers are visible, making it easy to perform a visual inspection of the system. First signal test The first test operation began on 12 June 2012 and continued until 17 September 2012, for 97 days. During this test period, one vibrator was operated with a single frequency at 10.01 Hz, and the other was operated with FM from 10 to 15 Hz and a modulation period of 50 s. We chose 10.01 Hz for the single frequency so as not to overlap with the frequency components produced by the other vibrator in FM operation. The FM operation produces a series of sinusoids from 10.00 to 15.00 Hz with a 0.02-Hz interval. The rotation direction was switched every 2 h to synthesize two independent linear excitation forces. The received signals of two sequential operations, one of which is clockwise and the other is anticlockwise rotation of the source, are linearly combined with an appropriate phase shift. This operation synthesizes the signals at the seismic stations by radial and transverse linear excitations. The rate of operation in the first test period was 88%. The main cause of the suspension was power instability or failure due to lightning or storm around the site. In most cases, the system was restarted remotely. The unit of spectral amplitude is adjusted to meters per second (m/s), which represents the amplitude of a single sinusoidal wave. Note that the amplitude unit in this spectrum plot is not (m/s)/Hz1/2 which is generally used in conventional spectrum plots. We adopt the unit m/s, because the signal emitted from ACROSS is composed of a finite number of sinusoids with constant amplitudes. In this unit, the amplitude of ACROSS signals in the frequency domain stays constant, independent of the stacking length. 
If we adopt (m/s)/Hz1/2, the spectral peaks of the ACROSS signal increase with the length of the stacking period, while the noise level stays constant. We prefer m/s for the amplitude unit to make the ACROSS signal invariable while noise levels decrease proportionally to the square root of the stacking time period. The signal from the source that is operated in a single frequency (10.01 Hz) is clearly seen even at the stations off Sakurajima Island. The station KORH (Figure 3a), which is located 19.5 km from the source site, shows a clear spectral peak at 10.01 Hz, with a signal-to-noise ratio (SNR) of about 100 in all the components, with a stacking length of 88 days. The signal of the source in FM operation is also seen in the spectrum from 10 to 15 Hz. The station KURN (Figure 3b), which is located 7.5 km from the source site on the other side of the summit, shows clear spectral peaks. The SNRs are larger than those at KORH, even though the stacking period is 71 days. The signal at HAR (Figure 3c), which is located 1.1 km from the source site, has a very good SNR not only because of the distance, but also because of the low-noise environment in the deep borehole. The SNR for a single sinusoid at 10.01 Hz is about 1,000 for most of the components, and the SNR for the FM signal is about 100. Other spectral peaks, which are typically seen at KORH, are caused by the data telemetry system at the station. Small noise that is synchronized to the GPS clock is generated in the data telemetry system, resulting in spectral peaks with multiples of exactly 1.0 Hz. Comparison to Toyohashi site The same models of ACROSS vibrators were already in operation at the Toyohashi site in the Tokai region, Japan, when ACROSS was deployed in Sakurajima. We have been operating the ACROSS at Toyohashi site to monitor the temporal change of seismic propagation property associated with the subduction process of the Philippine Sea plate. 
In addition, we deployed an ACROSS at Sakurajima Volcano for monitoring its activities. Before the deployment, we investigated the detectability of the signal from the ACROSS source that was to be deployed in Sakurajima using the observation data in the Tokai region with ACROSS at Toyohashi. In this section, we show the results of the investigation, comparing them with the actual observational result. To assess the feasibility of signal detection at Sakurajima, we used the amplitude decay relation as a function of distance obtained in the monitoring experiment of ACROSS in the Tokai region and an explosion experiment in Sakurajima. The process shown here is useful to assess the feasibility of using ACROSS at other locations.
Explosion experiment at Sakurajima
An explosion experiment for which we analyzed the amplitude decay as a function of distance was carried out in 2008 in and around Sakurajima Island (Iguchi et al. 2009). We compared the amplitude decays between the explosion source and the ACROSS source in the Tokai region. From the experiment dataset, we used the record of shot 2 (S02 in Figure 6) for the estimation of amplitude decay with distance. Shot 2 was located close to the ACROSS source site and is suitable for comparison. Although there were several other shots that were closer to the ACROSS site than shot 2, we do not use them because their dynamite weight was only 20 kg, while the dynamite for shot 2 weighed as much as 200 kg. We fit the observed amplitudes with Equation 1,

A(x) = (a/x) exp(-bx), (1)

where x is the distance from the source and a is the intercept term that is regarded as the source intensity. The term 1/x indicates the amplitude decay by geometrical spreading for the body wave, and exp(-bx) indicates attenuation within the medium. This equation is the same as that used in the location of hypocenters of volcanic earthquakes by Battaglia and Aki (2003). As there are no marked later phases that correspond to surface waves (Figure 7), we used the geometrical spreading factor for body waves.
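As a side note, the two parameters of Equation 1 can be recovered from amplitude-distance data by a log-linear fit, since ln(A·x) = ln a − bx. The sketch below uses synthetic, hypothetical amplitudes (not the experiment's records) purely to illustrate the procedure:

```python
import numpy as np

# Hypothetical station distances (km) and a known model, for illustration only
x = np.array([1.1, 3.6, 7.5, 19.5, 28.0])   # source-receiver distances, km
a_true, b_true = 1.0e-6, 0.30               # source intensity, attenuation (1/km)
amp = a_true / x * np.exp(-b_true * x)      # Equation 1: A(x) = (a/x) exp(-bx)

# Linearize: ln(A * x) = ln(a) - b * x, then fit a straight line
slope, intercept = np.polyfit(x, np.log(amp * x), 1)
b_fit, a_fit = -slope, np.exp(intercept)
print(b_fit, a_fit)   # recovers b = 0.30 and a = 1e-6
```

With real records, scatter about the fitted line gives the uncertainty in b, and hence in the inferred Q.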
The b is connected with the quality factor Q through Equation 2,

b = πf/(Qβ), (2)

where f is the frequency and β denotes the wave velocity. In this calculation, we used the horizontal distances between the source and the receivers. Let us examine how the decay changes when the distance along the ray for a more realistic velocity structure is used. To examine the difference, we use a one-dimensional velocity structure that follows a simple linear function. We assume a velocity gradient of 0.5 km s-1 km-1 and a surface velocity of 3.5 km s-1, which provides a maximum depth of 5.2 km for the ray traveling 25 km from the source. The corresponding b values for the ray-based distance are 0.16, 0.19, 0.11, and 0.13, respectively. These values provide Qp of 80, 100, 120, and 150, respectively. The differences between the two estimations of Q are about 50%, which is the uncertainty owing to the model difference of the velocity structure. It is interesting to compare these Qp values with those obtained by other methods or at other volcanoes. Iguchi (1994) estimated the Qp value of Sakurajima Volcano using the spatial decay of amplitude of volcanic earthquakes and obtained a Qp value of about 20. He used A-type earthquakes, in which short-period components of around 10 Hz predominate, with a straight-line approximation of the ray path between the epicenters and the seismic stations on the island. As he assumed body waves for the geometrical spreading factor, his Qp value can be directly compared with our result, which is about 50. His result is smaller than ours, which may result from the hypocenter locations of A-type earthquakes being just beneath the summit crater at depths of 1 to 4 km. The low Qp value may indicate the highly attenuating nature of the region beneath the summit crater. Hirata and Uchiyama (1981) estimated the attenuation in the Aira Caldera region, which neighbors Sakurajima Volcano, from the spatial amplitude decay ratio, showing a Qp of about 80.
This supports our inference that the region of low Q value estimated by Iguchi (1994) is localized near the vent. Q values have been estimated for other volcanoes. Sudo (1991) estimated the attenuation beneath Aso Volcano, Japan, concluding that the Qp is about 100. Bianco et al. (1999) estimated the Qp at Mt. Vesuvius, Italy, to be about 35 with the frequency decay method. Patane et al. (1994) estimated the attenuation using the records of seismic stations around Mt. Etna, showing that the Qp varies between 50 and 110 among the stations in the frequency range of 2 to 15 Hz. Giampiccolo et al. (2007) estimated the attenuation at Mt. Etna by fitting the power law Q = Q0 f^n with the spectral ratio method. They found that Qp for the upper 5 km of the crust was about 16 f^0.8 with some ambiguity, that is, about 100 at 10 Hz. Martinez-Arevalo et al. (2005) built a 3-D attenuation tomography of the shallow part (-2 to 2 km depth) at Mt. Etna and showed large heterogeneity in Qp, ranging between 10 and 250, in the frequency range of 2 to 30 Hz. They also found a region of low Qp, between 10 and 30, at the place of the presumed dike intrusion in 2001. The Qp we obtained for Sakurajima Volcano is comparable to the Qp values at other volcanoes.
Amplitude decay in Tokai
We predict the amplitude decay relation as a function of distance for the ACROSS source at Sakurajima from that in Tokai. We assume that the relations for Sakurajima and the Tokai region share the same source intensity a in Equation 1, but that each has its own b that is characteristic of the region. The red curve in Figure 10 indicates the predicted amplitude decay with distance for the ACROSS source in Sakurajima Volcano. In this curve, we use b = 0.30, which is obtained for the stations on Sakurajima Island. Both lines run very close to each other below 1 km, and the curve for Sakurajima decays more rapidly, becoming one tenth of the curve for the Tokai region at 10 km.
We compare the amplitude decay curve that is predicted for Sakurajima with the actual data we obtained in the first operation test. As the decay curve in Figure 10 is obtained from the FM operation from 10 to 20 Hz at the Toyohashi site, and the data in the first operation test in Sakurajima are obtained by the FM operation from 10 to 15 Hz and a single frequency of 10.01 Hz, we need to convert their amplitudes to be able to compare them with each other. Therefore, we convert the amplitude decay curve that is predicted for FM operation of 10 to 20 Hz into the decay curve for FM operation of 10 to 15 Hz. We also convert the amplitude of the single frequency of 10.01 Hz to the mean spectral amplitude of FM operation of 10 to 15 Hz. In the conversion, we make two assumptions. One is that the energy of the ACROSS signal is the sum of the square of the peaks in the amplitude spectrum. The other is that the attenuation is the same for the frequency range in this study. Based on these assumptions, we make the conversion as described below. Next, we convert the amplitude of the spectral peak of single frequency operation at 10.01 Hz into the corresponding mean amplitude of FM operation of 10 to 15 Hz. We calculate the energy ratio between single frequency operation at 10.01 Hz and FM operation of 10 to 15 Hz. The energy ratio is calculated by integrating the square of the force by FM operation (Equation 3) and single frequency over the same period T. The calculated energy ratio of the FM operation over the single frequency is 2.627. Assuming the energy is equally distributed into 251 peaks between 10 and 15 Hz, the calculated mean amplitude is 0.102 times that of the single frequency of 10.01 Hz. The converted amplitudes are plotted in Figure 11. In the figure, we plot the results of two types of amplitude averaging. One is the averaging over the receiver components (left panel), and the other is the averaging over the source excitations (right panel). 
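As a cross-check, the two conversion factors quoted above (energy ratio 2.627 and mean-amplitude factor 0.102) can be reproduced numerically. This sketch assumes, per the description of the vibrator, that the centrifugal force scales with the square of the rotation frequency (so energy scales as f^4) and that the linear FM sweep spends equal time at each frequency:

```python
import math

f0 = 10.01            # Hz, single-frequency operation
f1, f2 = 10.0, 15.0   # Hz, FM sweep band
n_peaks = 251         # spectral lines from 10 to 15 Hz at 0.02-Hz spacing

# Time-average of f^4 over a uniform sweep from f1 to f2
mean_f4 = (f2**5 - f1**5) / (5.0 * (f2 - f1))

# Energy ratio of FM operation to single-frequency operation
energy_ratio = mean_f4 / f0**4                 # ≈ 2.627

# Energy spread equally over 251 peaks gives the mean peak amplitude
amp_ratio = math.sqrt(energy_ratio / n_peaks)  # ≈ 0.102
print(round(energy_ratio, 3), round(amp_ratio, 3))
```

Both values match the figures in the text, which supports the f^4 scaling interpretation of the energy integral.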
As explained in the ‘First signal test’ section, the transfer function for each station has six components, the combination of two components for the source excitations and three components for the receiver. The panel on the left shows the mean amplitude of three components at receivers for transverse and radial excitations of the source. The mean amplitudes for the single sinusoid operation at 10.01 Hz are labeled 10.01 t and 10.01 r for transverse and radial excitations, respectively. Those of the FM operations are labeled FM t and FM r. There is no systematic difference in the mean amplitude between transverse and radial excitations. The panel on the right shows the amplitudes of transverse and radial components of the horizontal motions at receivers averaged over the two excitation components of the source. There is no systematic difference between the receiver components. In Figure 11, two decay curves are drawn. The red curves are the same as that in Figure 10, which uses the b value for the attenuation on Sakurajima Island. The black curves show the decay corresponding to the b value for the attenuation on and around Sakurajima Island. The decay trend of the observed amplitudes is similar to what we predicted with the above method, but most of the amplitudes are above the prediction curve. The likely reason for the discrepancy is an underestimate of the source intensity a. A possible reason for the underestimate is the difference in the deployment condition between Sakurajima and Toyohashi, i.e., the stiffness of the ground to which the vibrator is fixed. The ACROSS vibrators at the Sakurajima site are deployed in a pyroclastic deposit, whereas those at the Toyohashi site are in a clay layer. The conversion efficiency from force to energy that is transmitted to the far field may depend on the stiffness of the ground where the vibrator is deployed.
The simple analogy of strain energy in a compressed or stretched spring suggests that more strain energy is stored in a medium with less stiffness. Therefore, it is natural to infer that the vibrator transfers more far-field energy when the source is deployed in ground with less stiffness. In other words, the ground coupling is better at the Sakurajima site than at the Toyohashi site, which results in a larger source intensity a for Sakurajima even if the same vibrator is used. Ground coupling is an issue of the dynamic interaction between the vibrators and the surrounding ground, which remains to be solved in future work. The mass of the vibrators and the elastic nature of the ground may affect the coupling efficiency. Another possible reason for the underestimate of source intensity a could be the use of epicenter distances rather than the distances along ray paths. We try to evaluate the effect of ray path-based distance on the source intensity with a simple one-dimensional velocity structure model for the Tokai area, as in the section ‘Explosion experiment at Sakurajima’. We assume a velocity gradient of 0.13 km s^-1 km^-1 and a surface velocity of 4.5 km s^-1, which gives a maximum depth of 7.4 km for a ray traveling 48 km in epicenter distance. The source intensity that is estimated by using the distance along ray paths is just 10% more than that by epicenter distances. Therefore, the use of epicenter distance is not the main cause of the underestimate. A seismometer deployed at the surface may record a signal with larger amplitude. The two stations between 2.0 and 3.0 km in Figure 11 are deployed on the surface (Tameguri et al. 2011), showing a larger amplitude compared with the other stations, which are deployed in boreholes. Once the amplitude decay curve is obtained, the SNR is predicted using the level of ground noise at the seismic stations. Resolution of temporal variation of the ACROSS signal depends on the SNR after the data stacking (Ikuta et al. 2002).
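The 7.4-km maximum depth quoted above can be roughly reproduced with the textbook result for a linear velocity gradient, in which rays are circular arcs; the closed-form turning-depth formula below is our assumption, since the text does not state how the depth was computed:

```python
import math

V0 = 4.5   # surface velocity, km/s (from the text)
K = 0.13   # velocity gradient, (km/s) per km (from the text)
X = 48.0   # epicenter distance, km

# For a linear gradient v(z) = V0 + K*z, ray paths are circular arcs and the
# turning (maximum) depth of the ray emerging at distance X is:
z_max = (V0 / K) * (math.sqrt(1.0 + (K * X / (2.0 * V0)) ** 2) - 1.0)
print(round(z_max, 1))  # ~7.5 km, close to the 7.4 km quoted in the text
```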
We have estimated the noise level in and around Sakurajima Volcano using the same seismic data of the explosion experiment as used in the previous section. We picked 10-s-long records before the onset of the P wave in the data at each station to estimate the noise level. We calculated the RMS of the amplitude spectrum of the noise in the frequency band between 10 and 20 Hz. This value is taken as the noise level in the corresponding frequency range. We carried out this calculation on the data from each station for all 15 shot sources. We adopted the third lowest of the 15 values as the representative noise level at each station. This operation avoids the traffic noise that occasionally disturbs the signal, as well as an artificially small noise level that might be due to a missed selection of amplitude gain. The spectral plot for station KURN, for example, is created with the stacked data of 71 days. The noise level after stacking 71 days of data is 783 times smaller than that of the 10-s data, consistent with the expected 1/√N reduction for the roughly 6 × 10^5 independent 10-s windows contained in 71 days. Therefore, the mean noise on Sakurajima Island is reduced to 2.5 × 10^-12 m/s after stacking, which is comparable to the noise level at KURN between 10 and 20 Hz. The noise level at HAR, which is also on Sakurajima, is almost the same. This means that the noise estimation in this study is valid. We compare the noise level with the amplitude decay relation shown in Figure 11. The noise level for the 10-s data is 2.0 × 10^-9 m/s, which is comparable to the predicted amplitude decay curves at 1.0-km distance. The noise level after stacking 1 day of data is reduced to 2.1 × 10^-11 m/s, which is comparable to the level of the predicted curves at 10 km, and is several times smaller than the actual observation.

Seismic sources, named ACROSS, are deployed at Sakurajima Volcano for the first time in a volcanic area in March 2012. Two sources are deployed at the northwestern flank of Sakurajima Volcano at a distance of 3.6 km from the main crater.
Signals received by the seismic stations are investigated for the test operation from 12 June to 18 September 2012, in which the sources are operated with a single frequency at 10.01 Hz and with frequency modulation of 10 to 15 Hz. The signal of the ACROSS source is detectable even at the station off Sakurajima Island. The amplitudes and their decay relation with source distance are compared with the amplitude decay model established in the Tokai region for the ACROSS source. The amplitude decay relation with distance in Sakurajima using the ACROSS sources is predicted from the amplitude data of an explosion experiment, assuming the same source intensity as that of the ACROSS source in the Tokai region. The predicted amplitude is systematically smaller than that actually observed, probably because of the difference in ground stiffness at the source sites, but the dependence on distance is consistent with the observation. The noise level in Sakurajima that is estimated using the data of the explosion experiment is consistent with the noise in the stacked data of the ACROSS signal.

We are very grateful to the Sakurajima Volcano seismic exploration group for providing observation data for our analysis. We used the continuous data from seismic stations that are operated by the National Research Institute for Earth Science and Disaster Prevention, Kyoto University, Kagoshima University, and Nagoya University. The study was supported by JSPS KAKENHI Grant-in-Aid for Scientific Research (B) 23340130.

- Aramaki S: Geology and pyroclastic flow deposits of the Kokubu area, Kagoshima prefecture. J Geolog Soc Japan 1969, 75: 425–442. doi:10.5575/geosoc.75.425
- Battaglia J, Aki K: Location of seismic events and eruptive fissures on the Piton de la Fournaise volcano using seismic amplitudes. J Geophys Res 2003. doi:10.1029/2002JB002193
- Bianco F, Castellano E, Del Pezzo E, Ibanez JM: Attenuation of short-period seismic waves at Mt Vesuvius, Italy.
Geophys J Int 1999, 138: 67–76. doi:10.1046/j.1365-246x.1999.00868.x
- Brenguier F, Shapiro NM, Campillo M, Ferrazzini V, Duputel Z, Coutant O, Nercessian A: Towards forecasting volcanic eruptions using seismic noise. Nat Geosci 2008, 1: 126–130. doi:10.1038/ngeo104
- Crampin S: The fracture criticality of crustal rocks. Geophys J Int 1994, 118: 428–438. doi:10.1111/j.1365-246X.1994.tb03974.x
- Gerst A, Savage MK: Seismic anisotropy beneath Ruapehu volcano: a possible eruption forecasting tool. Science 2004, 306: 1543–1547. doi:10.1126/science.113445
- Giampiccolo E, D'Amico S, Patane D, Gresta S: Attenuation and source parameters of shallow microearthquakes at Mt. Etna volcano, Italy. Bull Seismol Soc Am 2007, 97: 184–197. doi:10.1785/0120050252
- Grêt A, Snieder R: Monitoring rapid temporal change in a volcano with coda wave interferometry. Geophys Res Lett 2005. doi:10.1029/2004GL021143
- Hashimoto M, Tada T: Crustal deformations associated with the 1986 fissure eruption of Izu-Oshima volcano, Japan, and their tectonic significance. Phys Earth Planet Int 1990, 60: 324–338. doi:10.1016/0031-9201(90)90272-Y
- Hidayati S, Ishihara K, Iguchi M: Volcano-tectonic earthquakes during the stage of magma accumulation at the Aira caldera, southern Kyushu, Japan. Bull Volcanol Soc Japan 2007, 52: 1289–1309.
- Hirata T, Uchiyama T: Damping area in the Aira caldera of south Kyushu. Bull Seismol Soc Japan 1981, 34: 435–437.
- Iguchi M: A vertical expansion source model for the mechanisms of earthquakes originated in the magma conduit of an andesitic volcano: Sakurajima, Japan.
Bull Volcanol Soc Japan 1994, 39: 49–67.
- Iguchi M, Tameguri T, Yamamoto K, Osima H, Maekawa T, Mori H, Suzuki A, Tsutui T, Imai M, Tsushima K, Yagi N, Ueki S, Nakayama T, Yamamoto Y, Takagi R, Ii S, Koga S, Nishimura T, Anggono T, Yamamoto M, Oikawa J, Osada N, Ichihara M, Tsuji H, Aoki Y, Morita Y, Watanabe A, Nogami K, Yamawaki T, Watanabe T, et al.: The 2008 project of artificial explosion experiment at Sakurajima volcano. Ann Disaster Prev Res Inst, Kyoto Univ 2009, 52: 293–307.
- Ikuta R, Yamaoka K, Miyakawa K, Kunitomo T, Kumazawa M: Continuous monitoring of propagation velocity of seismic wave using ACROSS. Geophys Res Lett 2002. doi:10.1029/2001GL013974
- Ikuta R, Yamaoka K: Temporal variation in the shear wave anisotropy detected using the Accurately Controlled Routinely Operated Signal System (ACROSS). J Geophys Res 2004. doi:10.1029/2003JB002901
- Ishihara K, Takayama T, Tanaka Y, Hirabayashi J: Lava flows at Sakurajima Volcano (I) – volume of the historical lava flows. Ann Disaster Prev Res Inst, Kyoto Univ 1981, 24: 1–10.
- Ishihara K: Pressure sources and induced ground deformation associated with explosive eruptions at an andesitic volcano: Sakurajima volcano, Japan. In Magma transport and storage. Edited by: Ryan M. Chichester: John Wiley and Sons; 1990.
- Japan Meteorological Agency: National catalogue of the active volcanoes in Japan. 4th edition. Tokyo: Japan Meteorological Agency; 2013.
- Kobayashi T, Tameike T: History of eruptions and volcanic damage from Sakurajima volcano, southern Kyushu, Japan. Quaternary Res 2002, 41: 269–278. doi:10.4116/jaqua.41.269
- Kumagai H: Temporal evolution of a magmatic dike system inferred from the complex frequencies of very long period seismic signals. J Geophys Res 2006.
doi:10.1029/2005JB003881
- Kumazawa M, Takei Y: Active method of monitoring underground structures by means of accurately controlled rotary seismic source (ACROSS) 1. Purpose and principle. Abstracts of fall meeting of the Seismological Society of Japan 1994, 158.
- Kunitomo T, Kumazawa M: Active monitoring of the Earth's structure by the seismic ACROSS - transmitting and receiving technologies of the seismic ACROSS. Proceedings of the 1st International Workshop on Active Monitoring in the Solid Earth Geophysics, Mizunami 2004.
- Martinez-Arevalo C, Patane D, Rietbrock A, Ibanez JM: The intrusive process leading to the Mt. Etna 2001 flank eruption: constraints from 3-D attenuation tomography. Geophys Res Lett 2005, 32: L21309.
- McNutt SR: Seismic monitoring and eruption forecasting of volcanoes: a review of the state-of-the-art and case histories. In Monitoring and mitigation of volcano hazards. Edited by: Scarpa R, Tilling R. New York: Springer; 1996:99–146.
- Miyamachi H, Tomari C, Yakiwara H, Iguchi M, Tameguri T, Yamamoto K, Ohkura T, Ando T, Onishi K, Shimizu H, Yamashita Y, Nakamichi H, Yamawaki T, Oikawa J, Ueki S, Tsutsui T, Mori H, Nishida M, Hiramatsu H, Koeda T, Masuda Y, Katou K, Hatakeyama K, Kobayashi T: Shallow velocity structure beneath the Aira caldera and Sakurajima volcano as inferred from refraction analysis of the seismic experiment in 2008. Bull Volcanol Soc Japan 2013, 58: 227–237.
- Mogi K: On the relation between the eruptions of Sakurajima Volcano and the crustal movements in its neighborhood. Bull Volcanol Soc Japan 1957, 1: 9–18.
- Nishimura T, Tanaka S, Yamawaki T, Yamamoto H, Sano T, Sato M, Nakahara H, Uchida N, Hori S, Sato H: Temporal changes in seismic velocity of the crust around Iwate volcano, Japan, as inferred from analyses of repeated active seismic experiment data from 1998 to 2003.
Earth Planets Space 2005, 57: 491–505.
- Okada Y: Surface deformation due to shear and tensile faults in a half-space. Bull Seismol Soc Am 1985, 75: 1135–1154.
- Omori F: The Sakura-Jima eruptions and earthquakes II. Bull Imperial Earthquake Investigation Committee 1916, 8: 35–179.
- Patane D, Ferrucci F, Cresta S: Spectral features of microearthquakes in volcanic areas: attenuation in the crust and amplitude response of site at Mt. Etna, Italy. Bull Seismol Soc Am 1994, 84: 1842–1860.
- Sakai S, Yamada T, Ide S, Mochizuki M, Shiobara H, Urabe T, Hirata N, Shinohara M, Kanazawa T, Nishizawa A, Fujie G, Mikada H: Magma migration from the point of view of seismic activity in the volcanism of Miyake-jima island in 2000. J Geography 2001, 110: 145–155. doi:10.5026/jgeography.110.2_145
- Sasai Y, Uyeshima M, Zlotnicki J, Utada H, Kagiyama T, Hashimoto T, Takahashi Y: Magnetic and electric field observation during the 2000 activity of Miyake-jima volcano, Central Japan. Earth Planet Sci Lett 2002, 203: 769–777. doi:10.1016/S0012-821X(02)00857-9
- Sens-Schoenfelder C, Wegler U: Passive image interferometry and seasonal variations of seismic velocities at Merapi Volcano, Indonesia. Geophys Res Lett 2006. doi:10.1029/2006GL027797
- Sudo Y: An attenuating structure beneath the Aso Caldera determined from the propagation of seismic waves. Bull Volcanol 1991, 53: 99–111.
- Tameguri T, Iguchi M, Sonoda T, Ichikawa N: Hypocenter distributions of volcanic earthquakes at Sakurajima Volcano (2011–2012). In Annual report of research on Sakurajima volcano with comprehensive observation. Edited by: Iguchi M.
Uji, Japan: Disaster Prevention Research Institute of Kyoto University; 2011:2–6.
- Tsutsui T, Tameguri T, Iguchi M, Oikawa J, Oshima H, Maekawa T, Aoyama H, Ueki S, Hirahara S, Nogami K, Ohminato T, Ichihara M, Tsuji H, Horikawa S, Okuda T, Shimizu H, Matsushima T, Ohkura T, Yoshikawa S, Sonoda T, Miyamachi H, Yakiwara H, Hirano S, Saito K, Suemine K, Goto S, Ikegami T, Kato K, Matsusue S, Kohno T, et al.: The repeated seismic survey 2010 in Sakurajima Volcano, south Kyushu, Japan, the second round. Ann Disaster Prev Res Inst, Kyoto Univ 2011, 54: 195–208.
- Yamaoka K, Kunitomo T, Miyakawa K, Kobayashi K, Kumazawa M: A trial for monitoring temporal variation of seismic velocity using an ACROSS system. Island Arc 2001, 10: 336–347. doi:10.1046/j.1440-1738.2001.00332.x

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
The code I'm working on manipulates routing tables on three different platforms: Linux, Windows and Solaris. Each of them has a different behavior for different scenarios. Here I attempt to document those differences.

First some definitions:

- Interface: A device which can directly reach a subnet via ARP or other protocols. An example is
- Direct route: A route which indicates which Interface to use to reach a directly connected subnet.
- Gateway route: A route which indicates a gateway to use to reach a subnet which is connected via a router.
- Default route: A special case of a Gateway route in which the destination subnet is all possible addresses.
- Host route: A special case of either a Direct route or a Gateway route in which the destination is a single machine.
- Multicast route: A special case of a Direct route in which the destination subnet is the multicast address space, 224.0.0.0/4, or a subset thereof.

Note that example route entries in this table are based on the format emitted from Linux's

| | Linux 2.6 | Windows 2003 | Solaris 10 |
|---|---|---|---|
| The interface chosen to access a gateway via a route is determined by traversing the route table, not hardcoded into the route entry. | | | |
| A Gateway route which is also a Host route can be added where the destination is an address that exists in a subnet of another Direct route. (For instance, the route) | | | |
| A Direct route which is also a Host route can be added where the destination is an address that exists in a subnet of another Direct route. (For instance, the route) | | | |
| Direct routes can be deleted. | yes | yes3 | yes |
| Route priority can be programmatically controlled. | yes | yes | no4 |
| When an interface is administratively taken down, do the associated Direct route entries disappear? | yes | yes | yes |
| When an interface is administratively taken down, and associated Direct route entries disappear, do they return when the interface is brought up again? | yes | yes | yes |
| When an interface is administratively taken down, do the associated Gateway route entries disappear? | yes | yes | yes |
| When an interface is administratively taken down, and associated Gateway route entries disappear, do they return when the interface is brought up again? | no | yes | no |
| When an interface is administratively down, is it an error to add a route that references that interface? | yes | yes | yes |
| When an interface is unplugged, do the associated route entries disappear? | no | yes | no |
| When an interface is unplugged, and associated route entries disappear, do they return when the interface is plugged in again? | N/A | yes | N/A |
| When an interface is unplugged, is it an error to add a route that references that interface? | no | yes5 | no |
| When an interface is unplugged, and associated route entries do not disappear, will an alternate route be chosen because the interface is unplugged? | no | N/A | no |
| If two interfaces are connected to the same subnet, will ARP respond on either interface for an address on one of the interfaces? | yes | no | no |
| Can routes be modified for all attributes including priority? An answer of no means they must be destroyed and recreated to modify attributes. | no | yes | no |
| Does the operating system create Multicast routes by default? | no | yes | no |
| Can multicast routes be deleted? | yes | yes3 | yes |
| If Multicast routes do not exist, do multicast packets exit the machine? To where? | yes6 | no | no |
| Can two routes be created with the same destination and priority, but a different interface? | no | yes | yes7 |
| Can a Gateway route be specified with an interface, where that interface does not have a Direct route for the gateway in the Gateway route? | no8 | yes9 | no8 |
| When you remove a Direct route which is required by a Gateway route to reach the gateway, does the Gateway route disappear? | no | no | no |
| If you specify a Gateway route with a gateway that is not reachable via a Direct route, is this allowed? | no | yes10 | no |

1 This configuration is possible if the route entry is set for this behavior. The route entry can also be configured for a specific interface.
2 When a ping is performed on the destination address, it is sent to the gateway, not via the direct route; this is what should be expected by following the rules in reading a route table. However, the gateway in my test was a Linux 2.6 machine, and it rejected the ping with an ICMP of "unreachable." This means such a configuration is possible, but useless.
3 One cannot directly delete a default route or other "protected" routes, but there is a way to fool Windows into deleting it. I found this fascinating discussion
4 The metric attribute cannot be set for a route in Solaris.
5 The error that's returned is ERROR_INVALID_PARAMETER. That doesn't differentiate this condition from other problems.
6 It appears to choose the first available interface.
7 This question is partly irrelevant in Solaris; the priority or metric cannot be set for a route. However, you can create two routes with the same destination but different interfaces.
8 It works even if the Direct route is on another interface, but it must exist.
9 You can set the route, but it doesn't do anything.
10 This strange behavior is apparently allowed; the source address that's used is the address on the interface that is preferred for the Default gateway.
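To make the route categories above concrete, here is roughly how one would add a Gateway route that is also a Host route on each of the three platforms. The addresses are invented for illustration, exact flags vary with OS version, and all of these require administrative privileges:

```shell
# Add a host route to 203.0.113.5 via gateway 192.168.1.1 (illustrative addresses)

# Linux 2.6 (iproute2); the /32 prefix makes it a Host route
ip route add 203.0.113.5/32 via 192.168.1.1

# Windows 2003; an all-ones mask makes it a Host route
route ADD 203.0.113.5 MASK 255.255.255.255 192.168.1.1

# Solaris 10
route add -host 203.0.113.5 192.168.1.1
```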
There are a few ways to create sets, depending on which version of Python you use. In Python 2.6 and earlier, the built-in set function is the only way; it remains available in later Pythons. To make a set object, pass a sequence or other iterable object to the built-in set function:

x = set('abcde')
y = set('bdxyz')
print( x )                      # Pythons <= 2.6 display format

Sets support the common mathematical set operations with expression operators. We can't perform the following operations on plain sequences like strings, lists, and tuples; we must create sets from them by passing them to set in order to apply these tools:

x = set('abcde')
y = set('bdxyz')
print( x - y )                  # Difference
print( x | y )                  # Union
print( x & y )                  # Intersection
print( x ^ y )                  # Symmetric difference (XOR)
print( x > y, x < y )           # Superset, subset

Set membership test:

x = set('abcde')
print( 'e' in x )                               # Membership (sets)
print( 'e' in 'Camelot', 22 in [11, 22, 33] )   # But works on other types too
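As a complement to the operator forms above, sets also provide method versions of these operations; unlike the operators, the methods accept any iterable argument. A short sketch:

```python
# Method-call equivalents of the set operators shown above.
x = set('abcde')

print(x.union('bdxyz'))            # same result as x | set('bdxyz')
print(x.intersection(['b', 'd']))  # same result as x & {'b', 'd'}

# In-place updates
x.add('f')        # add one item
x.discard('z')    # remove if present; no error when absent
print(sorted(x))  # ['a', 'b', 'c', 'd', 'e', 'f']
```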
To our knowledge this alliteration was used for the first time within the photonics field in an editorial article in Physics Today, written by the well-respected MIT physicist Dan Kleppner in 1989 (Physics Today 42 (11) 9, 1989). It resonated immediately with the part of the atomic physics community that tries to improve the precision of measurements of fundamental physical laws and particularly of the constants of nature. The term "constant" for these people is little more than a working title: its constant nature has to be proven by ever-increasing precision of measurement, by refining the method year after year. This working method, in particular when applied to the most fundamental atom conceivable, hydrogen, generated many fundamental physical insights in the 20th century, without which our world would be nowhere close to where it is now. We are convinced that the method will continue to be the driving force behind our understanding of the quantum nature of the world. A Passion for Precision is therefore much more than just the attitude of an engineering trade; it is the very nature of scientific discovery in the quantum world. TOPTICA started with such a scientific mindset. The company has its origins in the days of laser cooling of atomic species, when the dream of spectroscopy became a reality: suddenly it became possible to prepare atoms so precisely with laser light that even a single atom could be probed for its physical secrets for a (nearly) infinite amount of time, or atoms could be bunched together so that elements from across the periodic table can be placed into virtual optical lattices to experience and demonstrate the quantum nature of matter.
These experiments are not only very precise, allowing signals to be extracted from noise with sensitivities that have routinely improved by orders of magnitude within a few decades; they are also very difficult to execute and highly frustrating, since they are set up as so-called null experiments: only a deviation from the expected is searched for, which would reveal new physics. Without passion this would be impossible. We were all the happier to see that Prof. Theodor Hänsch, who was instrumental in starting TOPTICA and has supported the company ever since, adopted the phrase as the title of his Nobel lecture in 2005, in which he gave a fascinating description of the technology path leading to the self-referenced optical frequency combs that he co-invented, deriving from the very method of the field, A Passion for Precision: digging down in precision year by year.
Southern Analysis of Transgenic Tobacco Plants

Southern analysis (1) is routinely carried out to determine whether a plant regenerated from tissue culture has been transformed with foreign DNA. The technique of Southern analysis begins with the extraction of genomic DNA from the plant, digestion of the DNA with diagnostic restriction enzymes, and fractionation of the restricted DNA by agarose gel electrophoresis. Following the transfer of the fractionated DNA to a nylon membrane by capillary blotting (Southern blotting), a radioactively labeled fragment of the foreign DNA is used for the detection of homologous sequences within the plant genomic DNA. This technique allows not only the detection of foreign DNA, but also an estimation of the number of copies and the arrangement of the foreign gene(s) within the plant genome.

Keywords: Southern Analysis, Isoamyl Alcohol, Geiger Counter, Ethidium Bromide Solution, Saran Wrap

- 3. Croy, E. J., Ikemura, T., Shirsat, A., and Croy, R. R. D. (1993) Plant nucleic acids, in Plant Molecular Biology Lab Fax (Croy, R. R. D., ed.), Bios, Oxford, pp. 21–47.
- 6. Scott, R. (1988) DNA restriction and hybridization, in Plant Genetic Transformation and Gene Expression (Draper, J., Scott, R., Armatage, P., and Walden, R., eds.), Blackwell, Oxford, pp. 237–261.
Climatology is the study of climate. The term climate can be defined as weather conditions averaged over a period of time. Climate models are used in a variety of ways, from studying the dynamics of the weather and climate systems to making projections of future climate conditions. Basic knowledge of the climate can also be applied to shorter-term weather forecasting, using analog techniques based on recurring patterns such as El Niño. Scientists use climate indices based on several climate patterns in an attempt to characterize and understand the various climate mechanisms that culminate in our daily weather. Climate indices are used to represent the essential elements of climate and are generally devised with the twin objectives of simplicity and completeness. Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surfaces and ice. All models balance incoming energy, arriving as short-wave electromagnetic radiation to the Earth, against outgoing energy, leaving as long-wave electromagnetic radiation from the Earth.

Title Image Credit: Wikimedia Commons
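The energy balance described above can be made concrete with the classic zero-dimensional model: equating absorbed short-wave solar radiation with emitted long-wave (blackbody) radiation yields Earth's effective temperature. The solar constant and albedo below are standard textbook values, not taken from this text:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0             # solar constant, W m^-2 (standard textbook value)
ALBEDO = 0.3            # planetary albedo (standard textbook value)

# Balance incoming short-wave against outgoing long-wave radiation:
#   S0 * (1 - albedo) / 4  =  SIGMA * T^4
absorbed = S0 * (1 - ALBEDO) / 4
t_eff = (absorbed / SIGMA) ** 0.25
print(round(t_eff, 1))  # ~254.6 K: Earth's effective radiating temperature
```

The ~33 K gap between this effective temperature and the observed surface mean is what more detailed models attribute to the greenhouse effect.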
"These signals are the first direct evidence that thunderstorms make antimatter particle beams," said Michael Briggs, a university researcher whose team, located at UAHuntsville, includes scientists from NASA Marshall Space Flight Center, the University of Alabama in Huntsville, the Max-Planck Institute in Garching, Germany, and from around the world. He presented the findings during a news briefing at the American Astronomical Society meeting in Seattle. Scientists think the antimatter particles are formed in a terrestrial gamma-ray flash (TGF), a brief burst produced inside thunderstorms that has a relationship to lightning that is not fully understood. As many as 500 TGFs may occur daily worldwide, but most go undetected. The spacecraft, known as Fermi, is designed to observe gamma-ray sources in space, emitters of the highest-energy form of light. Fermi's GBM constantly monitors the entire celestial sky, with sensors observing in all directions, including some toward the Earth, thereby providing valuable insight into this strange phenomenon. When the antimatter produced in a terrestrial thunderstorm collides with normal matter, such as the spacecraft itself, the matter and antimatter particles are immediately annihilated and transformed into gamma-rays observed by the GBM sensors. The detection of gamma-rays at one particular energy -- 511,000 electron volts -- is the smoking gun, indicating that the source of the observed gamma-rays in these events is the annihilation of an electron with its antimatter counterpart, a positron, produced in the TGF. Since the spacecraft's launch in 2008, the GBM team has identified 130 TGFs, which are usually accompanied by thunderstorms located directly below the spacecraft at the time of detection. However, in four cases, the storms were far from Fermi.
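The 511,000-electron-volt figure is not arbitrary: it is the rest-mass energy of the electron (E = mc^2), which is why photons at this energy uniquely fingerprint electron-positron annihilation. A quick check with CODATA constants (not from the article):

```python
M_E = 9.1093837015e-31  # electron mass, kg (CODATA value)
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electron volt

# Rest-mass energy of the electron: the energy each annihilation photon carries
e_kev = M_E * C**2 / EV / 1000.0
print(round(e_kev))  # 511 (keV)
```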
Lightning-generated radio signals, detected by a global monitoring network, indicated the only lightning at the time of these events was hundreds or more miles away. During one TGF, which occurred on December 14, 2009, Fermi was located over Egypt. However, the active storm was in Zambia, some 2,800 miles to the south. The distant storm was below Fermi’s horizon, so any gamma-rays it produced could not have been detected directly. Although Fermi could not see the storm from its position in orbit, it was still connected to it through sharing of a common magnetic field line of the Earth, which could be followed by the high-speed electrons and positrons produced by the TGF. These particles travelled up along the Earth’s magnetic field lines and struck the spacecraft. The beam continued past Fermi along the magnetic field, to a location known as a mirror point, where its motion was reversed, and then 23 milliseconds later, hit the spacecraft again. Each time, positrons in the beam collided with electrons in the spacecraft, annihilating each other, and emitting gamma-rays detected by Fermi’s GBM. NASA's Fermi Gamma-ray Space Telescope is an astrophysics and particle physics partnership. The spacecraft is managed by NASA's Goddard Space Flight Center in Greenbelt, Md. The GBM instrument is a collaboration between scientists at NASA's Marshall Space Flight Center, the University of Alabama in Huntsville, and the Max-Planck Institute in Garching, Germany. The Fermi mission was developed in collaboration with the U.S. Department of Energy, with important contributions from academic institutions and partners in France, Germany, Italy, Japan, Sweden and the United States. 
Ray Garner | Newswise Science News
Print and go reading comprehension practice. Use it to teach reading skills and science at the same time. Your students will better understand chemistry after learning its history. This one-page reading passage teaches about Robert Boyle's place in the history of chemistry. This is not a complete biography of Robert Boyle, but focuses on his experiment on gas volume and pressure as a part of the progression of chemistry in history. * The reading passage is titled The Sceptical Chymist after Boyle's famous book. ☂ 1 Reading Passages on the History of Chemistry: Robert Boyle ☂ 1 Question Page with Multiple Choice and Short Answer Questions ☂ Answer Key This resource is a part of a bundle! History of Chemistry Other History of Chemistry Topics ☂ Ancient Times ☂ Ancient Greece ☂ Robert Boyle ☂ John Dalton ☂ Electricity and Water ☂ Avogadro's Law ☂ The Periodic Table ☂ The Gold Foil Experiment ☂ Atomic Numbers ☂ Strong Nuclear Force You Might Also Like ☂ Layers of the Atmosphere Reading Passages and Support Materials ☂ Reading Comprehension Passages and Questions: Layers of the Earth ☂ Reading Comprehension Passages and Questions: Tyrannosaurus rex ☂ Reading Comprehension Passages and Questions: Blackbeard, the Pirate Remember to follow my store to be the first to know about all my new products! I list all my new products at 50% off for the first 24 hours!
Make a two-digit string from a single-digit integer

How can I have a two-digit integer in a string, even if the integer is less than 10?

[NSString stringWithFormat:@"%d", 1] //should be @"01"

3 Solutions

I believe that the stringWithFormat specifiers are the standard IEEE printf specifiers. Have you tried

[NSString stringWithFormat:@"%02d", 1];

Use the format string %02d. This specifies to format an integer with a minimum field-width of 2 characters and to pad the formatted values with 0 to meet that width. See man fprintf for all the gory details of format specifiers. If you are formatting numbers for presentation to the user, though, you should really be using NSNumberFormatter. Different locales have wildly different expectations about how numbers should be formatted.

[NSString stringWithFormat:@"%00.02d", intValue]

This helps me convert 1 to 01.
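As the accepted answer notes, stringWithFormat uses the standard printf-family specifiers, so the padding behavior of %02d can be sketched outside Objective-C as well. Python's %-formatting follows the same specifier conventions (this is a cross-language illustration, not Objective-C code):

```python
# printf-style zero padding: '0' flag plus a minimum field width of 2.
print("%02d" % 1)    # -> 01
print("%02d" % 12)   # -> 12  (already two digits, so no padding is added)

# The same rule expressed with Python's format-spec mini-language:
print(f"{1:02d}")    # -> 01
```

The width is a minimum, not a truncation: values wider than the field are printed in full.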
Course Descriptions - Geology

GEO143 Physical Geology (3-3-4)
This is the first part of a two-course sequence introducing students to the nature, processes and formation of Earth's materials and the major features of the earth's crust and topography. This course will consider the mineralogy of rocks, different rock types and structures. Detailed consideration will be given to the internal processes that shape the earth's surface, including plate tectonics, igneous activity, weathering, erosion and deposition, and earthquakes. PR: Two years of high school science and mathematics. F

GEO145 Surface Geology (3-3-4)
This is the second part of a two-semester sequence introducing students to the features of the earth's crust and topography. This course will consider the various geologic agents and processes that produce, shape and modify the surface environment. Detailed consideration will be given to the rise and decay of mountains, moving water, glaciers, deserts, shorelines and oceans, as well as comparative planetary geology with other bodies in the Solar System. PR: Two years of high school science and mathematics. NOTE: Students using Geology as a lab science sequence are advised to take GEO 143 before GEO 145. Either course may be taken alone as a single lab science elective. S

Last Updated: 07/20/18 08:03pm ET
Introduction to CSS

This document explains CSS page composition with example code; a training course in PDF under 27 pages, designed for beginners.

Table of contents
- Size setting: size property
- Margin boxes
- Left/Right/First page setting
- Running header and page number
- Running header setting: string-set property and string() function
- Page number: counter(page)
- Numbering chapters and sections
- Cross reference
- Creation of table of contents
- Cross reference and table of contents
- Control of page breaks
- Page break: page-break-before, page-break-after
- Page break prohibition
- Rounded borders
- Shadowed boxes
- Setup of vertical writing mode: writing-mode: tb-rl

File Size: 398.89 Kb
Submitted On:

Take advantage of this course called Introduction to CSS to improve your Web development skills and better understand CSS. This course is adapted to your level, as are all CSS PDF courses, to better enrich your knowledge. All you need to do is download the training document, open it and start learning CSS for free.

AJAX and jQuery
Download a PDF tutorial about AJAX and jQuery, covering the basics you should know to build an interactive web site without requiring a page reload.

Introduction to HTML
This tutorial is an introduction to some basics of HTML (HyperText Markup Language); a small training document under 24 pages for beginners. This tutorial provides some basics of the Ajax process; a training document in PDF under 38 pages for beginners.

HTML5 and responsive Web design
With this tutorial you will learn the secrets of HTML5 and responsive websites capable of interfacing with mobile devices such as tablets or smartphones; a free PDF course by Benjamin LaGrone.

HTML5 and CSS3
This tutorial contains a brief overview of HTML5 and CSS3; a free training document in PDF under 45 pages by Jason Clark.

Cascading style sheets (CSS) free pdf tutorial
Download free Cascading style sheets (CSS) course material and training (PDF file, 34 pages) designed for beginners.
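The paged-media features named in the table of contents above (the size property, margin boxes, string-set, counter(page) and the page-break-* properties) fit together roughly as follows. This is an illustrative sketch of standard CSS Paged Media syntax, not an excerpt from the PDF itself:

```css
/* Page geometry: A4 paper with margins (size property, margin boxes) */
@page {
    size: A4;
    margin: 25mm 20mm;

    /* Running header and page number placed in margin boxes */
    @top-center { content: string(chapter-title); }
    @bottom-center { content: counter(page); }
}

/* Capture each h1's text for use in the running header (string-set property) */
h1 { string-set: chapter-title content(); }

/* Control of page breaks */
h1 { page-break-before: always; }   /* each chapter starts on a new page */
table { page-break-inside: avoid; } /* page break prohibition inside tables */
```

Margin boxes such as @top-center are supported mainly by print formatters and print stylesheets rather than by all screen browsers.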
Oliver Morton has written a great book about geoengineering, and that is no small achievement. Say that word aloud—“geoengineering”—and it sounds wonky and remote. But there is a good chance that “geoengineering” will soon be on the lips of every politician on this planet. The world’s rich economies have steered humankind into a difficult position. In the two centuries since the Industrial Revolution, we have wrapped the Earth in a layer of technology, powered by the burning of fossil fuels, whose exhaust products are pooling in the gas bubble that surrounds our blue marble. The atmosphere is becoming a greenhouse. The surface of our planet is now warming, with unpredictable consequences. Earth’s biota may experience a sixth mass extinction. Cities may flood. Crops may be ruined. Hundreds of millions of people may become refugees, or starve outright. Our descendants may one day say that ours was a time of affluence, subsidized by their suffering. Humans must now choose between two unattractive options. We can continue pumping CO2 into the atmosphere. We can cross our fingers that we adapt to a warming climate, and that earth’s natural systems adapt too. Or we can transition to a cleaner global energy system, at a speed that is unprecedented, across all of history. The geoengineers offer us a third way, and their pitch is seductive, even flattering. Human ingenuity landed us in this pickle, they say. Maybe human ingenuity will set us free. Maybe we can deploy a technical fix, something that will buy us enough time to phase out fossil fuels, without crashing civilization in the process. Maybe we can spray a mist of particles into the stratosphere, to block sunlight that would otherwise beam down to Earth’s surface. Maybe we can use drone ships to pump chemical glitter into clouds that hover over the oceans, so they reflect sunlight back into black space. Maybe we can nourish massive plankton blooms in the Southern Ocean, to suck CO2 straight out of the sky. 
At universities across the world, well-funded, tenured scientists are working hard on these ideas. For the past 6 years, Oliver Morton, a longtime editor at The Economist, has been following their work, testing their ideas, thinking about the practical implications of geoengineering—and the philosophical implications, too. Earlier this week we talked about his new book, The Planet Remade, at length. What follows is a transcript of our conversation, condensed and edited for clarity. Ross Andersen: In your book, you use the word geoengineering in a more expansive way than most people. Why is that? Oliver Morton: Well, there's a way of talking about geoengineering that's too large for me. There's a way that you sometimes hear people who are naively in favor of geoengineering say “well, we're already geoengineering the earth system—we should just do it better.” There are various problems I have with that thinking. There are bits of the earth system that humans are just making a mess of. I would put the carbon balance of the atmosphere in that category. But that’s not deliberate. When I sat down to write this book, I wondered if there were parts of the earth system that humans are deliberately changing, with forethought. The one I develop most in the book is the example of the nitrogen cycle, which was once driven by soil bacteria, but has been absolutely taken over by human fertilizer factories. That wasn’t something that just happened. It was something that senior chemists wanted to happen. It was something that institutions like the Rockefeller Foundation, and the U.S. government, and the Central Committee of China's Communist Party wanted to happen. This was a willed thing. And although it's not a perfect analogy to the geoengineering people might use to mitigate climate change, it does reset your thinking. 
It frames geoengineering within this broader question about what kinds of choices people should make in the anthropocene, when the earth is under human dominion, to some extent. Andersen: One of the pleasures of your book is its emphasis on the intellectual history of geoengineering. How long have humans been thinking about doing something like this? Morton: Humans have thought that they have an influence on climate for a very long time, and especially during the Enlightenment. Thomas Jefferson believed that America’s climate was much nastier than Europe’s, and there was a widespread view that this was because Europe's climate had been ameliorated by the human presence in a way that America's climate had not. Jefferson and others thought that bringing European farming to America would improve its climate, and they put this idea forward in various unconvincing ways. Then, in the later 19th century, you start seeing these grand engineering schemes whereby people talk about directly taking on the climate and other aspects of the earth system, without just having to lead good European lives. And people also imagine this going on elsewhere. When Percival Lowell looks at Mars, he sees canals, and he thinks of them as a form of what we might call terraforming, or what we might call geoengineering. He sees it as the Martians taking active control over their environment. Andersen: Here at The Atlantic we recently published an interview with Bill Gates on the subject of climate change and what can be done about it. And on Twitter you expressed surprise that geoengineering wasn't discussed. Is that part of a larger pattern? Are you surprised there isn't more frank discussion of geoengineering in the public sphere right now? Morton: I'm not surprised about it in the public sphere so much, because I understand why people try to stop discussion of geoengineering. 
The idea that there might be a way to soften the blow of human-created climate change with technology is dangerous, because it weakens the resolve that will be necessary to actually deal with emissions. That's a very serious issue, and one of the things I'm very keen to get across in this book is that I do not in any way see geoengineering as an alternative to a program of emissions reduction. I think that that would be a very foolish approach to the problem. I'm slightly surprised that it doesn't come up in a conversation with Bill Gates in The Atlantic, because both you and he think more widely about these things—and indeed Bill Gates has funded some work on geoengineering. It just seems to me that if you really don’t think the climate should get more than 2 degrees warmer, then you kind of have to be talking about geoengineering in one way or another. You either have to be talking about sucking a lot of CO2 out of the atmosphere once emissions have peaked, or you have to talk about reducing, slightly, the amount of sunlight that's coming into the system. And, yeah, it does frustrate me that that point isn't made more often—and I thought that your conversation with Gates might have been a good chance to do that. Andersen: In your book, you argue that it would be impossible to transition away from fossil fuels quickly, because our current global-energy infrastructure simply can't be replaced within a single generation. Can you give me a sense of the scale of that infrastructure? What would need replacing? Morton: Well, you have to remember that over 80 percent of the world's energy comes from fossil fuels—and the world uses a lot of energy, and will be using even more energy soon. There are, after all, a large number of people who still don't have access to modern energy services. In the beginning of the 20th century, no one lived the sort of life that well-off people in developing countries live or aspire to. 
Now about 1.6 to 2 billion people live that kind of life. And that's great, but there are 5 billion people who aren't leading that kind of life. They are going to use a lot of energy. At the end of this century there will be 9 or 10 billion people on the face of the planet. You would kind of hope that in a century's time, they would all have the access to energy that you and I enjoy. That would mean going from 2 billion people with access to 10 billion, a much larger increase than we saw during the 20th century. Of course, there's a huge amount that we can do with better energy technology over the course of the 21st century. But as the world develops, I think there's still going to be an awful lot of fossil fuels burned. I think it's a fundamental mistake to think that with just a bit more political will, you can suddenly go to a zero-carbon world. Andersen: Should people take solace from the speed at which France transitioned to nuclear? Or the speed at which Germany has converted to wind and solar? Morton: France and Sweden both transitioned very well onto nuclear. But France had a culture that was well suited to the quick distribution of nuclear power—and I don't think that sort of culture is as widespread as one would wish. For instance, when one talks about whether South Africa should have more nuclear power, it occurs to me that nuclear power requires strong, independent regulation. And if I look at the political situation in South Africa, it seems very unlikely to provide strong independent regulation for a nuclear-power program. The other thing is that even though France has great nuclear electricity potential, it still uses fossil fuels for other things. It still runs its cars on fossil fuels, it still runs various industries on fossil fuels. If you want to cut 30 or 40 percent of your fossil-fuels budget, you can transition to nuclear. 
But if you want to cut it all the way back to zero that gets really hard—and if you actually want, in the long term, a balanced composition of the atmosphere, you really do have to get the CO2 emissions from fossil fuels more or less to zero. I think that the scale of that endeavor still escapes people. It doesn't escape some very deep greens who think that it's absolutely impossible and that instead we’ll see an industrial crash, or the end of civilization. But, that's something that I would rather avert. Andersen: Am I right to think that currently the most plausible geoengineering scheme is to seed the stratosphere with sulfate aerosols to reflect away some portion of sunlight? Morton: That's certainly the approach that's been most widely discussed. And there are various reasons for this. One is historical—this is, after all, something we have seen before. After a large volcanic eruption, the layer of sulfate aerosols in the stratosphere gets thicker, and we see, in the historic record, that the Earth cools down in response. Another reason you see this technique discussed so often is that it fits into global-climate modeling rather well. Computer scientists that do global-climate modeling are fairly well set up to deal with thinking about that sort of thing. There are other technologies that might have regional or even global applications that work differently, and which are in a similar place, with respect to the sophistication of the research. The one that's most notable is something that's called marine cloud brightening, where tiny particles are added to existing clouds over the oceans, in order to brighten them and reflect away more sunlight. The exact mechanism there is not as clear as the stratospheric mechanism, but the possibility of actually trying it out on a limited scale is much more attractive, because you could create a little particle generator and put it under some clouds and see whether they were, indeed, brightened. 
Whereas you can't isolate a tiny bit of the stratosphere in the same way. So in terms of what might be experimented on first, it wouldn't be surprising if it was marine cloud brightening. Andersen: If you were to tinker with the stratosphere that would obviously affect people across the entire planet. Do you think geoengineering of that sort could be moral outside of some kind of global democratic decision process? Morton: I think it might be defensible. But my argument is precisely that we would need to find a way of politically dealing with this. The real challenge of geoengineering is developing the institutions that might use this technology in a just and responsible way. I give some examples of how that might happen in the book, but I see it as a huge, difficult, open question. Andersen: When people worry about unilateral, undemocratic acts of geoengineering, they tend to worry about some consortium of rich countries jamming this through. But of course, it could go the other way. I sometimes wonder what stops Bangladesh from seeding the stratosphere with these aerosols on its own, because the cost of doing so is trivial. Morton: We should always remember that these are notional technologies. Actually implementing them might be quite difficult, but, yes, it seems as though putting a million tons of stuff into the stratosphere on an annual basis is not something that's going to break the bank of even quite a small country. It is, however, going to upset a lot of other people. And unless your country is used to upsetting a lot of other people, and able to withstand what happens when you upset a lot of people, then that might not be where you wish to go. Also, for Bangladesh in particular, the amount of geoengineering that you would have to do to have a near-term effect on sea level would be a great deal. It's very hard to actually have near-term sea-level effects with geoengineering. 
Because there's a lot of other warming already in the pipe that you would have to kind of wish away. Andersen: In your book, you spend some time with David Keith, a professor of applied physics at Harvard. I watched Keith give a talk in San Francisco earlier this year, and at dinner afterward I asked him which objection to geoengineering kept him up at night. He told me it was “addiction,” this idea that geoengineering could work so well that people would turn to it again and again without actually doing anything about carbon emissions themselves. And that would be a big problem, because carbon emissions have all of these other harmful effects like ocean acidification and so forth. Is “addiction” your biggest worry as well? Morton: What I really worry about with geoengineering is that conflict over its use will lead to a greater conflict that leads to a nuclear war. And that’s because I worry about nuclear wars a lot more than I worry about geoengineering, frankly, because we don't even know if anyone's going to try geoengineering, but we know the wherewithal to have a nuclear war is out there in the world already. David Keith's scenario, though it is undeniably troubling, is also, to some extent, a problem of success. It’s not clear to me whether David's worry is that it works very well and humanity ends up in this dependency that puts us in this ever-more-artificial world, in which all these things have been wiped out by ocean acidification. Or is it that his worry is that it works really well until it absolutely stops working and then there's a collapse. Those are slightly different arguments, and I wouldn't care to guess which he was talking about. But I will say that the other worry I do have is that people in the 23rd or 24th century will look back and say that it's kind of a pity that people in the 21st century didn't very slightly cool the planet to give themselves a bit more time, so that the horrible thing that happened in 2055 didn't happen. 
Which is to say that when one worries about what geoengineering might do, one should also worry about the opportunity cost that might arise if a well-regulated, just, defensible form of geoengineering were not on the table. Andersen: If we went forward with one of these global geoengineering schemes, would that be the end of wilderness on this planet, and does wilderness exist now? Morton: Ah, the wilderness question. A particularly popular question, I find, with Americans. As someone who's British and lives on an island with basically no wilderness, this is less of a pressing question to me. I think wilderness is, to some extent, a state of mind, and to some extent a necessarily paradoxical state of mind, because it's about a place that exists by your denial of it; the more you as a civilized person are in it, the less wild it is, because you are there. It's a strange, easily deconstructed way of thinking about the world, which I don't find particularly powerful. I like the wild, and I have a great sympathy for the people in the rewilding business who talk about self-willed nature. In general, I try to think about nature in terms of processes not in terms of stocks. I think that endless fount of the unexpected and the self-willed that nature provides is very valuable. It does that in all sorts of different places. You can see the little plants growing in the cracks of abandoned paving stones and they're very much like what you see in a natural limestone pavement up in the Yorkshire moors, or in the mountains of Portugal, where I was recently walking. The processes of nature and the processes of the wild, and that sense of inhuman autonomy—that’s where I get that wilderness kick. And I think those will entirely continue. There's no way you're going to stop those sorts of processes from going on. As for whether they meet some sort of categorization as “wilderness” or not, that’s not really my fight. 
Andersen: People often get caught up in thinking about geoengineering strictly in the context of human-caused climate change. What other sorts of applications could geoengineering have in the deeper future? Might humans try geoengineering to ward off the extreme temperature swings we see across geologic time? Morton: To me, the whole thing about the geology of the anthropocene is that geological time has now been massively compressed—and we are now responsible for those intense temperature swings. But more than that, thinking about human agency in terms of more than a few centuries just doesn’t make a great deal of sense to me. Humans are so different in what they can do and in their ways of relating to each other than they were even 10,000 years ago. And 10,000 years ago is nothing in geological time. That does remind me, though, of a larger problem of human agency. You and I have been talking a lot about what “we” humans might do. If there's a lesson that I’ve taken from thinking about geoengineering, it's that there isn't a “we” that can talk about this yet. Making a group of people that can identify as a group and have sensible opinions about this—that's a lot of work. That's something that needs to be achieved. There's a laziness in which writers say “we” and they mean “you,” the reader, and “me” and all us right-thinking people, and not “them,” who are dubious and shifty and other. And that's a way that a lot of pro- and anti- geoengineering talk goes. They ask what “we” should do. But who is we? At the moment, there are no institutions that you could possibly trust this to. I want to start having this discussion so that those institutions can be developed in this century, and then when that development is done, then maybe we can turn to the deepest of deep times. Scientist, activists, and filmmakers envision the future of life on Earth. We want to hear what you think. Submit a letter to the editor or write to firstname.lastname@example.org.
<urn:uuid:6e5d5ff7-8381-44e3-98b3-e39703ed50fb>
2.515625
4,148
Audio Transcript
Science & Tech.
53.325523
95,641,586
How do they Calculate Distances to the Stars?

Distance is usually measured with a ruler, measuring tape or a wheel. All these techniques rely on us being able to physically measure the distance between two points, but for stars and planets, this isn't very practical. Instead of measuring, astronomers have to calculate these values, and they do this using a variety of methods.

For Close Objects

For objects up to around a few hundred light years, we can use parallax shift. The technique relies on some simple geometry, and the smaller the parallax shift, the further away the object is. The principle of parallax can easily be demonstrated by holding your finger up at arm's length. Close one eye, then the other, and notice how your finger appears to move in relation to the background. This occurs because each eye sees a slightly different view, separated as they are by a few inches. If you measure the distance between your eyes and the distance your finger appears to move, then you can calculate the length of your arm.

Distance can be calculated from parallax using this formula:

Equation 10 - Distance Calculation using Parallax

d = 1 / p

Where the distance d is measured in parsecs and the parallax angle p (sometimes written theta) is measured in arcseconds. A parsec is the distance at which 1 AU subtends 1 arcsecond, so an object located at 1 pc would, by definition, have a parallax of 1 arcsecond. Beyond a few hundred light years, the parallax shift is so small it cannot be recorded, which makes this technique ineffective.

For Far Objects

Beyond 100 light years, but within our own galaxy, we can use a technique called distance modulus. Using the distance modulus it is possible to establish a relationship between the absolute magnitude of a star, its apparent magnitude, and its distance. The distance modulus can be obtained by combining the definition of absolute magnitude with an expression for the inverse square law and Pogson's relation.
Equation 25 - Distance Modulus

m - M = 5 log10(d) - 5

Where m is the apparent magnitude, M is the absolute magnitude and d is the distance in parsecs. Distance modulus calculations are covered in more detail when we look at apparent magnitude and absolute magnitude.

For Very Far Objects (Another Galaxy)

For objects outside our galaxy, we can use the unique properties of a Cepheid variable star. These stars vary in brightness over time, with a period that is tightly related to their intrinsic brightness, so we can measure a Cepheid's period and apparent brightness and compute how far away it is using the distance modulus. Every galaxy has a number of Cepheid variables, so it's quite easy to map fairly accurate distances to all the galaxies we can see.

Last updated on: Wednesday 24th January 2018
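Both techniques above are simple to evaluate numerically. The sketch below implements the parallax relation and the distance modulus rearranged for distance; the input values are made-up examples, not measurements from the article:

```python
def distance_from_parallax(p_arcsec):
    """Equation 10: distance in parsecs from a parallax angle in arcseconds."""
    return 1.0 / p_arcsec

def distance_from_modulus(m, M):
    """Equation 25 rearranged: m - M = 5*log10(d) - 5  =>  d = 10**((m - M + 5) / 5)."""
    return 10 ** ((m - M + 5) / 5)

# A star with a parallax of 0.1 arcseconds lies at 10 parsecs.
print(distance_from_parallax(0.1))   # -> 10.0

# A star whose apparent magnitude exceeds its absolute magnitude by 5 lies at 100 pc.
print(distance_from_modulus(10, 5))  # -> 100.0
```

Note the consistency check between the two: a distance modulus of zero (m = M) corresponds to 10 parsecs, which is exactly how absolute magnitude is defined.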
Found most commonly in these habitats: 4 times found in oak woodland, 3 times found in pine/oak woodland, 4 times found in fir-pine-oak forest, 3 times found in shrub steppe, 2 times found in conifer forest, 2 times found in pine/fir forest, 2 times found in oak woods, 2 times found in ponderosa pine woodland, 2 times found in chaparral/oak woods, 2 times found in conifer woodland, ...

Found most commonly in these microhabitats: 6 times ex sifted leaf litter, 6 times under stone, 3 times in dead oak branch, 2 times strays, 2 times nest under stone, 2 times nest in dead wood on ground, 2 times nest in dead oak branch, 2 times in oak knot, 2 times in dead wood on ground, 1 times in dead branch Tsuga canadensis, 1 times hilltopping alates, ...

Collected most commonly using these methods: 31 times search, 5 times hand collecting, 1 times Winkler, 1 times Davis sifter, 1 times Hand.

Elevations: collected from 2 - 2920 meters, 1358 meters average

AntWeb content is licensed under a Creative Commons Attribution License. We encourage use of AntWeb images. In print, each image must include attribution to its photographer and "from www.AntWeb.org" in the figure caption. For websites, images must be clearly identified as coming from www.AntWeb.org, with a backward link to the respective source page. See How to Cite AntWeb. Antweb is funded from private donations and from grants from the National Science Foundation, DEB-0344731, EF-0431330 and DEB-0842395.
Tipping the scale in wildlife habitats / Human development causes disruption that favors some species

2001-09-03 04:00:00 PDT Los Angeles -- These are difficult, even perilous times if you're a timber wolf, grizzly bear, mule deer, greater prairie chicken or spotted owl. The kind of digs you like - virgin wildlands unmarred by human endeavor - are in short supply. Habitat you once relied on for foraging, shelter or mating is shorn of trees, growing a crop of soybeans or supporting tract homes. But if you're a coyote, black bear, white-tailed deer, striped skunk, ruffed grouse or barred owl? Welcome to the Golden Age. Everything you need is here: clear-cuts, second-growth woodlands, industrial-scale agriculture and miles and miles of suburbs. While it's true that scores of wild species are flirting with extinction, scores more are doing just fine, thank you - and for precisely the same reasons. The fate of both groups hinges on "disturbed" habitats, an ecologist's term for wildlands that have been logged, farmed or partially developed. For many animals, disturbance equals death. For others, it is an opportunity to excel.
Exploit disturbed habitat

"Disturbed ecosystems are the wave of the future," said Frank Almeda, senior curator of botany at the California Academy of Sciences. "The whole issue of invasion by species able to exploit disturbance is assuming center stage in the conservation community." Nature is flexible, observe scientists, but it is simplistic to say all of nature is flexible. Parts of it aren't - the wild salmon, for example, which needs cold, clear, unpolluted water to survive, or the grizzly bear, which requires 100 to 500 square miles of range. But certain parts are malleable. White-tailed deer, originally creatures of the woodland meadow, have adapted to farmland and suburbs with blithe ease, consuming field corn or Grandma's prize roses with equal relish. Black bears actually prefer clear-cuts to virgin forests, since primeval woodlands support few of the berries, roots, insects and rodents that are ursine staples.

Coyotes, foxes spread

With the elimination of the timber wolf from most of North America, coyotes have expanded their numbers and range dramatically. Originally a predator of the West, they now inhabit every state east of the Mississippi. Even urban areas fail to daunt them - their presence has been confirmed in New York City. "Ecologists describe these species as 'weedy' or 'tramps,' " Almeda said, "and they share certain characteristics. They're often fecund - able to multiply prolifically. They reach sexual maturation quickly. They typically don't have many predators and competitors. They're usually generalists in their diets and habitat requirements." In a mature, pristine ecosystem, wildlife species tend to be in essential balance, though there may be cyclical fluxes in the populations of one species or another. But when humans intrude, the scales can tip dramatically in favor of a species that embodies one or more of the criteria Almeda describes. That's the case with red foxes in the Great Plains.
These wily and beautiful canids have become the dominant predator over wide portions of the Plains, often outcompeting coyotes - which are in their own right a robustly weedy species. Red fox dens sometimes reach densities of five per square mile, said Ron Stromstad, director of operations for the Western regional office of Ducks Unlimited, and a former director of the North Dakota Department of Game and Fish. "Those are extremely high densities, considering each den accounts for a pair of foxes and their litter of kits," Stromstad said. "The impact they have on many native wildlife species can be tremendous, especially nesting birds. In one study of North Dakota foxes, the remains of almost 60 hen mallards were found at a single den site." Red foxes extended their range and numbers in the Great Plains after major predators, particularly wolves, were wiped out by ranchers, Stromstad said. "Now, every culvert and abandoned farmstead in the Plains is a loafing area not just for foxes, but also skunks and raccoons," he said. Woodland areas are no more immune to disturbance-related wildlife shifts than the prairie. One example is how logging has threatened the spotted owl, but is proving a boon for its kissing cousin, the barred owl. Spotted owls prefer to eat the wood rats and flying squirrels that inhabit old-growth forests. But the barred owl thrives on the abundant and more varied rodents that teem in the brush that sprouts in logged-over terrain.

Golf courses boon to geese

Even full-scale suburbanization benefits certain species. In the past two decades, populations of Canada geese have zoomed upward across the country as the big, aggressive fowl have discovered that golf courses and urban parks provide ample nesting habitat and acres of lovely grass to graze. Disturbance may not significantly alter the actual biomass - the amount of living tissue - that an area supports, but it inevitably decreases biodiversity.
"One of the things we're seeing is the homogenization of fauna the world over as certain species adapt to disturbed conditions across many continents," said Steven Beissinger, professor of conservation biology and chairman of the Department of Environmental Science Policy and Management at the University of California at Berkeley. This is especially true of birds, Beissinger said. European starlings and English sparrows are perhaps the best-known examples, having proven supremely adaptable to North American cities and croplands. They are now among the continent's commonest birds, often supplanting less adaptable native species. What to do? At this point, say conservation biologists, the goal should be to minimize further habitat disruption and rehabilitate disturbed areas as much as possible. "For the past several years, we've had programs in place that have encouraged farmers (on the Great Plains) to keep portions of their properties in wetlands and uncultivated uplands," Stromstad said. "The benefit to wildlife has been tremendous."
Does Open Source Development Produce Secure Applications? A Focus on Mozilla and Firefox Projects Dr. Yonglei Tao, firstname.lastname@example.org Open Source Software (OSS) has long been of interest to developers and system administrators since it is very affordable. OSS development has recently received greater attention in the user community due to security concerns in established commercial web browsers and the availability of a newly maturing Open Source alternative. Some members of the OSS community argue that open source code is more secure because the code is available to the public. This is the famous "many eyes" argument that vulnerabilities are easier to find and fix. Countering this, some argue that OSS is less secure because hackers also have access to the source code and can exploit vulnerabilities much quicker than in a closed source product. More generally, there are ongoing debates regarding the development model of OSS projects and whether they can produce applications that are secure and robust. The goal of this presentation is to review the security of an Open Source Development project and the robustness of its output. The first part of this presentation provides general information about OSS such as the definition, benefits, issues and motivation. The second part reviews secure application development principles and practices, and develops a basic secure application development criteria for use in evaluating OSS projects. The last part applies the security criteria to a high profile Open Source project, the Mozilla Firefox web browser. Le, Quan; Ross, Bryan; and Whitcomb, Steve, "Does Open Source Development Produce Secure Applications? A Focus on Mozilla and Firefox Projects" (2004). Technical Library. 106.
Real-time tracking of eddies and currents could help fishermen avoid protected species

Increased computing power has given fisheries researchers new tools to identify "hotspots of risk," where ocean fronts and eddies bring together masses of fish, fishermen and predators, raising the risk of entangling non-target fish and protected species such as marine mammals, sea turtles and sharks. Using a novel, high-resolution "Lagrangian Coherent Structures" mapping technique, scientists are able to model dynamic features in ocean surface currents. The capacity for improved, near real-time mapping of ocean fronts and eddies may now help alert fishermen and fisheries managers to the increased risk so they can try to avoid those protected species and better target the species they are after, the scientists wrote in an article in Proceedings of the National Academy of Sciences. "Understanding how pelagic fish and marine predators interact with the environment can help fishers and managers avoid bycatch of non-target and protected species, while maintaining the catch of commercial species," said Kylie L. Scales of the University of the Sunshine Coast (USC) in Australia, and lead author of the new research. "Our findings give managers information at the broad scales they need, which could help inform development of dynamic ocean management." Lagrangian Coherent Structures are known from the field of fluid dynamics and represent areas of mixing, where different water masses meet and tend to concentrate marine life and in turn fishermen. The new approach uses high-resolution ocean modeling to help detect and predict the areas as they form and move through the ocean, and highlight the elevated risk they may present.

Mako shark at Cape Point, South Africa.
Credit: Steve Woods

Increased computing power in recent years gives researchers the tools to recreate and better understand complex oceanographic processes such as the formation of such zones, which when mapped off the California Coast look like an interlocking network of whirls and swirls of varying water masses. "Understanding where bycatch risk is greatest can help fishermen avoid it, and can help us manage fisheries sustainably," said Elliott Hazen, a research ecologist at NOAA Fisheries' Southwest Fisheries Science Center and coauthor of the new research. "By identifying which oceanographic features are most likely to result in bycatch, we can improve existing dynamic ocean management tools such as EcoCast to provide novel solutions to address the challenges of fisheries management." EcoCast is a mapping tool NOAA Fisheries scientists recently developed to help fishermen identify productive fishing areas, while avoiding areas with high risk of catching other, unintended species. In the new study, scientists used the models of ocean dynamics to assess the probability of capturing commercial fish species such as swordfish, opah and tuna against the risk of capturing other species fishermen want to avoid, such as some species of sharks, sea turtles and whales. They found that drift gillnets set in conjunction with such zones were more likely to catch the swordfish they target, but also greatly raise the risk of catching certain non-target species. Fishermen likely target the zones because they tend to attract swordfish, but the scientists suggested that shifting fisheries closures in certain areas at certain times, or fishing at particular depth ranges, might help preserve the swordfish catch while minimizing the risk of catching other, unintended species. NOAA Fisheries is increasingly considering options for such "dynamic ocean management," where managers adjust fishing rules in real time based on ocean data.
"Given the complexity of the bycatch problem, a suite of complementary solutions will be necessary to support a sustainable seafood supply sufficient to meet future demand," wrote the scientists from USC, NOAA Fisheries, the National Center for Atmospheric Research, Old Dominion University, and San Diego State University. "Our results highlight the conservation and management value of understanding the mechanisms through which the physical environment structures marine species distributions."
10,000-year record shows dramatic uplift at Andean volcano - December 21, 2015

The temperature 3,000 kilometers below the surface of Earth is much more varied than previously thought, scientists have found. The discovery of the regional variations in the lower mantle where it meets the core, which are up to three times greater than expected, will help scientists explain the structure of Earth and how it formed.
Microkernel of the GNU system

GNU Mach is the microkernel of the GNU system. A microkernel provides only a limited functionality, just enough abstraction on top of the hardware to run the rest of the operating system in user space. The GNU Hurd servers and the GNU C library implement the POSIX compatible base of the GNU system on top of the microkernel architecture provided by Mach. Currently, GNU Mach runs on IA32 machines. GNU Mach should, and probably will, be ported to other hardware architectures in the future. Mach was ported to many operating systems in the past.

released on 18 December 2016

License: GPLv2orlater (verified by Janet Casey, 8 June 2004)

Leaders and contributors

Resources and communication
- VCS Repository Webview: https://git.savannah.gnu.org/cgit/hurd/gnumach.git
- Debian (Ref) (R): https://tracker.debian.org/pkg/mach
- Python (Ref) (R): https://pypi.org/project/mach
- Ruby (Ref) (R): https://rubygems.org/gems/mach

Required to build: mig

This entry (in part or in whole) was last reviewed on 10 May 2018. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the page "GNU Free Documentation License". The copyright and license notices on this page only apply to the text on this page. Any software or copyright-licenses or other similar notices described in this text has its own copyright notice and license, which can usually be found in the distribution or license text itself.
"Detail: "Key Changes In Evolution" Timeline" - image by Gail Guth. Copyright © 2006 Prentice Hall. This image is not available for reuse and is protected by copyright.

Title: Detail: "Key Changes In Evolution" Timeline
Contact: Gail Guth
Description: Timeline illustrating key changes in evolution (detail). Created for the textbook "Biological Anthropology: The Natural History of Humankind", by Craig Stanford, John S. Allen, and Susan C. Anton; published by Prentice Hall. Textbook art developed and managed by Precision Graphics, Inc.
Keywords: timeline, evolution, dinosaur, Tyrannosaurus rex, Archaeopteryx, Aegyptopithecus, Paleocene, primates, Jurassic, Cretaceous, Eocene, Oligocene, Miocene, watercolor, colored pencil, digital
Media: Color
Science Subject(s): Amphibians, Anthropology, Birds, Dinosaurs, Invertebrates, Mammals, Marine Life, Paleontology, Reptiles, Vertebrates, Human, Editorial
The case for a giant plume of water vapor wafting from Jupiter's potentially life-supporting moon Europa just got a lot stronger. NASA's Hubble Space Telescope has spotted tantalizing signs of such a plume multiple times over the past half decade, but those measurements were near the limits of the powerful instrument's sensitivity. Now, researchers report in a new study that NASA's Galileo Jupiter probe, which orbited the planet from 1995 to 2003, also detected a likely Europa plume, during a close flyby of the icy moon in 1997. The newly analyzed Galileo data provides "compelling independent evidence that there seems to be a plume on Europa," said study lead author Xianzhe Jia, an associate professor in the Department of Climate and Space Sciences and Engineering at the University of Michigan. This is exciting news for astrobiologists: If the plume is indeed real, it could offer a way for a spacecraft to sample Europa's buried ocean of liquid water without even touching down on the moon. And NASA is working on a mission that could do just that.
Rudnicki, Mark D; Elderfield, Henry; Spiro, Baruch (2001): Sulphate concentrations and sulphur isotopic composition in pore fluids from ODP Leg 168 sites. PANGAEA, https://doi.org/10.1594/PANGAEA.708287, Supplement to: Rudnicki, MD et al. (2001): Fractionation of sulfur isotopes during bacterial sulfate reduction in deep ocean sediments at elevated temperatures. Geochimica et Cosmochimica Acta, 65(5), 777-789, https://doi.org/10.1016/S0016-7037(00)00579-2

A numerical model of sulfate reduction and isotopic fractionation has been applied to pore fluid SO4**2- and d34S data from four sites drilled during Ocean Drilling Program (ODP) Leg 168 in the Cascadia Basin at 48°N, where basement temperatures reach up to 62°C. There is a source of sulfate both at the top and the bottom of the sediment column due to the presence of basement fluid flow, which promotes bacterial sulfate reduction below the sulfate minimum zone at elevated temperatures. Pore fluid d34S data show the highest values (135 per mil) yet found in the marine environment. The bacterial sulfur isotopic fractionation factor, a, is severely underestimated if the pore fluids of anoxic marine sediments are assumed to be closed systems and Rayleigh fractionation plots yield erroneous values for a by as much as 15 per mil in diffusive and advective pore fluid regimes. Model results are consistent with a = 1.077+/-0.007 with no temperature effect over the range 1.8 to 62°C and no effect of sulfate reduction rate over the range 2 to 10 pmol/ccm/day. The reason for this large isotopic fractionation is unknown, but one difference with previous studies is the very low sulfate reduction rates recorded, about two orders of magnitude lower than literature values that are in the range of µmol/ccm/day to tens of nmol/ccm/day.
In general, the greatest 34S depletions are associated with the lowest sulfate reduction rates and vice versa, and it is possible that such extreme fractionation is a characteristic of open systems with low sulfate reduction rates. Median Latitude: 47.838860 * Median Longitude: -128.291400 * South-bound Latitude: 47.762600 * West-bound Longitude: -128.792000 * North-bound Latitude: 47.917300 * East-bound Longitude: -127.753000 Date/Time Start: 1996-06-23T09:15:00 * Date/Time End: 1996-08-05T12:33:00 168-1023A * Latitude: 47.917300 * Longitude: -128.792000 * Date/Time Start: 1996-06-23T09:15:00 * Date/Time End: 1996-06-25T02:15:00 * Elevation: -2593.3 m * Penetration: 194.5 m * Recovery: 193.05 m * Location: Juan de Fuca Ridge, North Pacific Ocean * Campaign: Leg168 * Basis: Joides Resolution * Device: Drilling/drill rig (DRILL) * Comment: 22 cores; 194.5 m cored; 0 m drilled; 99.3 % recovery 168-1025A * Latitude: 47.887500 * Longitude: -128.648000 * Date/Time Start: 1996-06-26T12:15:00 * Date/Time End: 1996-06-26T15:00:00 * Elevation: -2606.2 m * Penetration: 6.3 m * Recovery: 6.34 m * Location: Juan de Fuca Ridge, North Pacific Ocean * Campaign: Leg168 * Basis: Joides Resolution * Device: Drilling/drill rig (DRILL) * Comment: 1 core; 6.3 m cored; 0 m drilled; 100.6 % recovery 168-1025B * Latitude: 47.883300 * Longitude: -128.650000 * Date/Time Start: 1996-06-26T19:30:00 * Date/Time End: 1996-06-27T13:45:00 * Elevation: -2313.6 m * Penetration: 99.5 m * Recovery: 90.98 m * Location: Juan de Fuca Ridge, North Pacific Ocean * Campaign: Leg168 * Basis: Joides Resolution * Device: Drilling/drill rig (DRILL) * Comment: 11 core; 99.5 m cored; 0 m drilled; 91.4 % recovery
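The closed-system Rayleigh behaviour that the abstract warns against naively assuming can be sketched numerically. This is a back-of-envelope illustration, not part of the dataset: it assumes an initial seawater sulfate d34S of about +21 per mil and treats the study's a = 1.077 as the sulfate/sulfide isotope ratio, so the residual sulfate pool grows isotopically heavier as it is consumed:

```python
import math

ALPHA = 1.077   # fractionation factor from the study, taken here as R_sulfate / R_sulfide
DELTA0 = 21.0   # assumed initial seawater sulfate d34S in per mil (illustrative, not from the dataset)

def residual_d34s(f, alpha=ALPHA, delta0=DELTA0):
    """d34S (per mil) of the residual sulfate pool when a fraction f of it remains.

    Closed-system Rayleigh distillation: R/R0 = f**(1/alpha - 1); with alpha > 1
    the exponent is negative, so the leftover sulfate gets heavier as f -> 0.
    """
    return (delta0 + 1000.0) * f ** (1.0 / alpha - 1.0) - 1000.0

def fraction_remaining(delta, alpha=ALPHA, delta0=DELTA0):
    """Invert the Rayleigh relation: fraction of sulfate left at a given residual d34S."""
    return math.exp(math.log((delta + 1000.0) / (delta0 + 1000.0)) / (1.0 / alpha - 1.0))

# Under these closed-system assumptions, reaching the observed 135 per mil would
# require consuming roughly three quarters of the initial sulfate.
f_at_135 = fraction_remaining(135.0)
print(f"fraction remaining at 135 per mil: {f_at_135:.2f}")
```

The abstract's point is that pore fluids in diffusive and advective regimes are not closed systems, so a curve like this misestimates a by as much as 15 per mil; the sketch only shows the naive baseline being argued against.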
Regular quadrangular pyramid

The height of the regular quadrangular pyramid is 6 cm and the base edge is 4 cm long. What is the angle between the planes ABV and BCV?

Next similar examples:
- A meter pole perpendicular to the ground casts a shadow 40 cm long; a house casts a shadow 6 meters long. What is the height of the house?
- We divide line segments 67 cm and 3.1 dm long into equal parts whose lengths in centimeters are expressed as integers. How many ways can we divide them?
- A clock: A clock was set right at 6:00 AM. If it gains 3 1/2 minutes per hour, what time will it show at 6:00 PM on the same day? Show your solution.
- A pipe: The radius of a cylindrical pipe is 2 ft. If the pipe is 17 ft long, what is its volume?
- In the city, 3/9 of the women are married to 3/6 of the men. What proportion of the townspeople is free (not married)? Express as a decimal number.
- Summer camp: Out of the 180 students at a summer camp, 72 signed up for canoeing. There were 23 students who signed up for trekking, and 13 of those students also signed up for canoeing. Use a two-way table to organize the information and answer the following question.
- Pizza master: The master says that he can split a pizza into 16 parts with five equal straight cuts. Is it possible?
- Gasoline tank 2: A gasoline tank is 1/6 full. When 25 liters of gasoline were added, it became 3/4 full. How many more liters are needed to fill it? Show your solution.
- How many 19-element subsets can be made from a 26-element set?
- Solve 2: Solve the integer equation a + b + c = 30, where a, b and c can each be an odd natural number from the set (1, 3, 5, 7, 9, 11, 13, 15).
- Two doctors: Doctor A will determine the correct diagnosis with a probability of 86% and doctor B with a probability of 87%. Calculate the probability of a correct diagnosis if the patient is diagnosed by both doctors.
- Count of roots: How many solutions does the equation x · y = 7757 with two unknowns have on the set of natural numbers?
- Two fifth-grade teams competed in math competitions: the Mathematical Olympiad and Pytagoriade. Of the 33 students, 22 competed in at least one of the contests. There were twice as many students who competed only in Pytagoriade as those who just competed...
- Rings groups: 27 pupils attend some group; the dance group has 14 pupils, the sports group 21 pupils and the drama group 16 pupils. Dance and sports are attended by 9 pupils, dance and drama by 6 pupils, and sports and drama by 11 pupils. How many pupils attend all three groups?
- Of the 35 students in the class, 7 were on holiday in Germany and just as many in Italy. 5 students visited Austria. 21 students visited none of these countries, and one student visited all three. In Italy and Austria were 2 students, and in Austria and Germany...
<urn:uuid:7c8b7844-da33-4fb9-bf36-972197025ca0>
2.875
793
Tutorial
Science & Tech.
75.724745
95,641,840