By Matt Williams Shortly after Einstein published his Theory of General Relativity in 1915, physicists began to speculate about the existence of black holes. These regions of space-time, from which nothing (not even light) can escape, form naturally at the end of the most massive stars’ life cycles. While black holes are generally thought of as voracious eaters, some physicists have wondered whether they could also support planetary systems of their own. To address this question, Dr. Sean Raymond – an American physicist currently at the University of Bordeaux – created a hypothetical planetary system with a black hole at its center. Based on a series of gravitational calculations, he determined that a black hole could keep nine individual Suns in a stable orbit around it, which together could support 550 planets within a habitable zone. He named this hypothetical system “The Black Hole Ultimate Solar System”, which consists of a non-spinning black hole that is 1 million times as massive as the Sun. That is roughly one-quarter the mass of Sagittarius A*, the supermassive black hole (SMBH) that resides at the center of the Milky Way Galaxy (which contains about 4.31 million Solar masses). As Raymond indicates, one of the immediate advantages of having this black hole at the center of a system is that it can support a large number of Suns. For his system, Raymond chose nine, though he indicates that many more could be sustained thanks to the sheer gravitational influence of the central black hole. As he wrote on his website: “Given how massive the black hole is, one ring could hold up to 75 Suns! But that would move the habitable zone outward pretty far and I don’t want the system to get too spread out. 
So I’ll use 9 Suns in the ring, which moves everything out by a factor of 3. Let’s put the ring at 0.5 AU, well outside the innermost stable circular orbit (at about 0.02 AU) but well inside the habitable zone (from about 2.7 to 5.4 AU).” Another major advantage of having a black hole at the center of a system is that it shrinks what is known as the “Hill radius” (aka. the Hill sphere, or Roche sphere). This is essentially the region around a planet where its own gravity, rather than that of the star it orbits, is dominant, and within which it can therefore hold on to satellites. According to Raymond, a planet’s Hill radius would be 100 times smaller around a million-Sun black hole than around the Sun. This means that a given region of space could stably fit 100 times more planets if they orbited a black hole instead of the Sun. As he explained: “Planets can be super close to each other because the black hole’s gravity is so strong! If planets are little toy Hot Wheels cars, most planetary systems are laid out like normal highways (side note: I love Hot Wheels). Each car stays in its own lane, but the cars are much much smaller than the distance between them. Around a black hole, planetary systems can be shrunk way down to Hot Wheels-sized tracks. The Hot Wheels cars — our planets — don’t change at all, but they can remain stable while being much closer together. They don’t touch (that would not be stable), they are just closer together.” This is what allows so many planets to be packed within the system’s habitable zone. Based on the Earth’s Hill radius, Raymond estimates that about six Earth-mass planets could fit into stable orbits within the Sun’s habitable zone, since Earth-mass planets spaced roughly 0.1 AU apart can maintain stable orbits. Given that the Sun’s habitable zone corresponds roughly to the span between the orbits of Venus and Mars – which lie about 0.3 AU closer to and 0.5 AU farther from the Sun than Earth, respectively – this leaves about 0.8 AU of room to work with. 
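Raymond's factor-of-100 claim follows directly from the standard Hill-radius formula, r_Hill = a·(m/3M)^(1/3). A minimal sketch (the Earth-mass value in solar units is approximate, and purely illustrative):

```python
# Hill radius: the region where a planet's gravity dominates over its
# central body's. r_Hill = a * (m / (3*M))**(1/3).

def hill_radius_au(a_au, m_planet, m_central):
    """Hill radius in AU for a planet of mass m_planet (solar masses)
    orbiting a central body of mass m_central (solar masses) at a_au AU."""
    return a_au * (m_planet / (3.0 * m_central)) ** (1.0 / 3.0)

M_EARTH = 3.0e-6  # Earth's mass in solar masses (approximate)

r_sun = hill_radius_au(1.0, M_EARTH, 1.0)    # Earth around the Sun: ~0.01 AU
r_bh = hill_radius_au(1.0, M_EARTH, 1.0e6)   # same orbit, million-Sun black hole

print(f"around the Sun:        {r_sun:.4f} AU")
print(f"around the black hole: {r_bh:.6f} AU")
print(f"shrink factor:         {r_sun / r_bh:.0f}")  # 100, since (10^6)^(1/3) = 100
```

Because the central mass enters as a cube root, boosting it by a factor of one million shrinks every Hill radius by exactly a factor of 100, which is what lets the planets pack 100 times closer.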
However, around a black hole of 1 million Solar masses, neighboring planets could be just 1/1000th (0.001) of an AU apart and still maintain stable orbits. Doing the math, this means that roughly 550 Earths could fit in the same region orbiting the black hole and its nine Suns. There is one minor drawback to this scenario, which is that the black hole would have to remain at its current mass. If it were to grow any larger, the Hill radii of its 550 planets would shrink further and further. Once the Hill radius shrank to the size of the Earth-mass planets themselves, the black hole would begin to tear them apart. But at 1 million Solar masses, the black hole can comfortably support a massive system of planets. “With our million-Sun black hole the Earth’s Hill radius (on its current orbit) would already be down to the limit, just a bit more than twice Earth’s actual radius,” he says. Lastly, Raymond considers the implications of living in such a system. For one, a year on any planet within the system’s habitable zone would be much shorter, owing to the much shorter orbital periods. Basically, a year would last roughly 1.6 days for planets at the inner edge of the habitable zone and 4.6 days for planets at the outer edge. In addition, from the surface of any planet in the system, the sky would be a lot more crowded! With so many planets in close orbits, they would pass very near one another. That essentially means that from the surface of any individual Earth, people would be able to see nearby Earths as clearly as we see the Moon on some days. As Raymond illustrated: “At closest approach (conjunction) the distance between planets is about twice the Earth-Moon distance. These planets are all Earth-sized, about 4 times larger than the Moon. 
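The quoted year lengths follow from Kepler's third law, which in solar units reads T[yr] = sqrt(a[AU]³ / M[M☉]). A quick check, using the habitable-zone edges (2.7 and 5.4 AU), the ring radius (0.5 AU), and the black hole mass given above:

```python
import math

def period_days(a_au, m_central_solar):
    """Orbital period in days from Kepler's third law:
    T[yr] = sqrt(a[AU]**3 / M[solar masses])."""
    return math.sqrt(a_au ** 3 / m_central_solar) * 365.25

M_BH = 1.0e6  # black hole mass in solar masses

print(period_days(2.7, M_BH))       # inner habitable-zone edge: ~1.6 days
print(period_days(5.4, M_BH))       # outer habitable-zone edge: ~4.6 days
print(period_days(0.5, M_BH) * 24)  # the ring of nine Suns: ~3.1 hours
```

These reproduce the article's figures: roughly 1.6- and 4.6-day years, and a three-hour orbit for the ring of Suns.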
This means that at conjunction each planet’s closest neighbor appears about twice the size of the full Moon in the sky. And there are two nearest neighbors, the inner and outer one. Plus, the next-nearest neighbors are twice as far away so they are still as big as the full Moon during conjunction. And four more planets that would be at least half the full Moon in size during conjunction.” He also indicates that conjunctions would occur almost once per orbit, meaning that every few days there would be no shortage of giant objects passing across the sky. And of course, there would be the Suns themselves. Recall the scene in Star Wars where a young Luke Skywalker watches two suns set over the desert? Well, it would be a little like that, only far cooler! According to Raymond’s calculations, the nine Suns would complete an orbit around the black hole every three hours. Every twenty minutes, one of these Suns would pass behind the black hole, taking just 49 seconds to do so. During each pass, gravitational lensing would occur, with the black hole focusing the Sun’s light toward the planet and distorting the apparent shape of the Sun. To illustrate what this would look like, he provides an animation (shown above) created by @GregroxMun – a planet modeller who develops space graphics for Kerbal and other programs – using Space Engine. While such a system may never occur in nature, it is interesting to know that it would be physically possible. And who knows? Perhaps a sufficiently advanced species, with the ability to tow stars and planets from one system and place them in orbit around a black hole, could fashion this Ultimate Solar System. Something for SETI researchers to be on the lookout for, perhaps? This hypothetical exercise was the second installment in a two-part series by Raymond, titled “Black holes and planets”. 
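The "twice the full Moon" estimate in Raymond's conjunction quote is simple small-angle arithmetic: apparent size scales as diameter over distance. A quick check with approximate diameters and distances:

```python
# Small-angle check of the conjunction claim. All values approximate.
D_MOON_KM = 3474.0       # Moon diameter
D_EARTH_KM = 12742.0     # Earth diameter (~3.7x the Moon's)
EARTH_MOON_KM = 384400.0  # mean Earth-Moon distance

moon_angle = D_MOON_KM / EARTH_MOON_KM            # full Moon seen from Earth
neighbor_angle = D_EARTH_KM / (2 * EARTH_MOON_KM)  # Earth-sized neighbor at 2x the distance

print(neighbor_angle / moon_angle)  # ~1.8: roughly twice the full Moon
```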
In the first installment, “The Black Hole Solar System“, Raymond considered what it would be like if our system orbited around a black hole-Sun binary. As he indicated, the consequences for Earth and the other Solar planets would be interesting, to say the least!
Seafloor massive sulfide (SMS) mining will likely occur at hydrothermal systems in the near future. Alongside their mineral wealth, SMS deposits also have considerable biological value. Active SMS deposits host endemic hydrothermal vent communities, whilst inactive deposits support communities of deep-water corals and other suspension feeders. Mining activities are expected to remove all large organisms and suitable habitat in the immediate area, putting vent-endemic organisms at particular risk of habitat loss and localised extinction. As part of environmental management strategies designed to mitigate the effects of mining, areas of seabed need to be protected to preserve biodiversity that is lost at the mine site and to preserve communities that support connectivity among populations of vent animals in the surrounding region. These "set-aside" areas need to be biologically similar to the mine site and be suitably connected, mostly by transport of larvae, to neighbouring sites to ensure exchange of genetic material among remaining populations. Establishing suitable set-asides can be a formidable task for environmental managers; however, the application of genetic approaches can aid set-aside identification, suitability assessment and monitoring. There are many genetic tools available, including analysis of mitochondrial DNA (mtDNA) sequences (e.g. COI or other suitable mtDNA genes) and appropriate nuclear DNA markers (e.g. microsatellites, single nucleotide polymorphisms), environmental DNA (eDNA) techniques and microbial metagenomics. When used in concert with traditional biological survey techniques, these tools can help to identify species, assess the genetic connectivity among populations and assess the diversity of communities. How these techniques can be applied to set-aside decision making is discussed and recommendations are made for the genetic characteristics of set-aside sites. 
A checklist for environmental regulators forms a guide to aid decision making on the suitability of set-aside design and assessment using genetic tools. This non-technical primer document represents the views of participants in the VentBase 2014 workshop.
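As an illustrative sketch of one of the genetic tools mentioned (not part of the primer itself), nucleotide diversity can be estimated from aligned mtDNA COI sequences as one simple input to a diversity assessment; the haplotypes below are invented:

```python
# Nucleotide diversity (pi): the average proportion of differing sites
# over all pairs of aligned sequences. Toy COI haplotypes for one
# hypothetical vent population; real analyses use hundreds of sites.
from itertools import combinations

def pairwise_differences(seq_a, seq_b):
    """Count differing sites between two aligned, equal-length sequences."""
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

def nucleotide_diversity(sequences):
    """Average per-site difference over all pairs of sequences."""
    pairs = list(combinations(sequences, 2))
    if not pairs:
        return 0.0
    length = len(sequences[0])
    total = sum(pairwise_differences(a, b) for a, b in pairs)
    return total / (len(pairs) * length)

haplotypes = [
    "ATGCGTACGT",
    "ATGCGTACGA",
    "ATGCGAACGT",
]
print(nucleotide_diversity(haplotypes))
```

Comparing such diversity estimates (and haplotype sharing) between a candidate set-aside and the mine site is one way to judge whether the two are biologically similar.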
Chemistry review worksheet answers the best worksheets i on density problems chemistry worksheet with answers lovely mass volu. Separating mixtures worksheet worksheets for all downl elements on elements compounds and mixtures worksheet answers. Elements compounds and mixtures worksheet answer key new bes on elements compounds and mixtures worksheet part kidz activities. Chemistry worksheets medium size of worksheet matter works on composition of matter elements compounds and mixtures works. Worksheets lovely naming compounds worksheet hi res wallpap on worksheet mixed ionic covalent compound naming grass. Chemistry worksheet classification of matter and changes on classifying matter worksheet answers fresh works. Chemical equations and reactions worksheet balancing answers on and molecular formulas worksheets practice questions for chemistry. Kindergarten worksheet physical properties of matter worksheets lab on matter and its changes worksheet worksheets pdf nucl.
Title: Migrations and Dispersal of Marine Organisms. Proceedings of the 37th European Marine Biology Symposium held in Reykjavík, Iceland, 5-9 August 2002. 'Developments in Hydrobiology'. Edited by K. Gunnarsson, Gudmundur V. Helgason, A. Ingólfsson. 31 January 2004 - hardcover - 276 pages. This book represents the Proceedings of the 37th European Marine Biology Symposium, held in Reykjavík, Iceland, 5-9 August 2002. The main themes of the symposium were Migrations and Dispersal of Marine Organisms. These themes are highly relevant today. There is widespread man-aided dispersal (e.g. by ballast water) of marine plants and animals, which may have substantial effects on the regions receiving new species. The new introductions may result in reduced diversity of plants and animals and may affect natural resources in the countries receiving toxic algae and other foreign elements. Studies of changes in distribution and dispersal of marine animals and plants are also highly relevant with reference to the changing climate. The study of dispersal has recently gained new impetus with the discovery of the remarkable communities found on isolated hydrothermal vents and cold-water seeps in the world's oceans. Preface. List of Participants. Dispersal of marine organisms. Keynote presentations. Community assembly and historical biogeography in the North Atlantic Ocean: The potential role of human-mediated dispersal vectors; J.T. Carlton. Dispersal at hydrothermal vents: a summary of recent progress; P.A. Tyler, C.M. Young. Other presentations. The spread of Chinese mitten crab (Eriocheir sinensis) in Continental Europe: Analysis of a historical data set; L.-M. Herborg, et al. Characterising invasion processes with genetic data: an Atlantic clade of Clavelina lepadiformis (Ascidiacea) introduced into Mediterranean harbours; X. Turon, et al. Shallow-water hydrothermal vents in the Mediterranean Sea: stepping stones for Lessepsian migration? A.M. De Biasi, S. 
Aliani. Local population persistence as a pre-condition for large-scale dispersal of Idotea metallica (Crustacea, Isopoda) on drifting habitat patches; L. Gutow. Rafting of benthic macrofauna: important factors determining the temporal succession of the assemblage on detached macroalgae; M. Thiel. Hitch-hiking on floating marine debris: macrobenthic species in the Western Mediterranean Sea; S. Aliani, A. Molcard. Diurnal horizontal and vertical dispersal of kelp fauna; N.M. Jørgensen, H. Christie. Short-term dispersal of kelp fauna to cleared (kelp harvested) areas; E. Waage-Nielsen, et al. Regulation of species richness by advection and richness-dependent processes in a coastal fish community; K. Lekve, et al. Secondary settlement of cockles Cerastoderma edule as a function of current velocity and substratum: a flume study with benthic juveniles; X. de Montaudouin, et al. Anchovy egg and larvae distribution in relation to biological and physical oceanography in the Strait of Sicily; A. Cuttitta, et al. Spatial distribution of Engraulis encrasicolus eggs and larvae in relation to ...; Juveniles stick to adults: recruitment of the tube-dwelling polychaete Lanice conchilega (Pallas, 1766); R. Callaway. Settlement of bivalve spat on artificial collectors in Eyjafjordur, North Iceland; E.G. Garcia, et al. Barnacle larval supply to sheltered rocky shores: a limiting factor? S.R. Jenkins, S.J. Hawkins. Migrations of marine organisms. Keynote presentations. Go with the flow: tidal migration in marine animals; R.N. Gibson. A review of the adaptive significance and ecosystem consequences of zooplankton diel vertical migrations; G.C. Hays. Other Presentations. Temporal and spatial variability of mobile fauna on a submarine cliff and boulder scree complex: a community in flux; J.J. Bell, J.R. Turner. Diatom migration and sediment armouring - an example from the Tagus Estuary, Portugal; T.J. Tolhurst, et al. 
Life-cycle strategies and seasonal migrations of oceanic copepods in the Irminger Sea; A. Gislason. Seasonality of harpacticoids (Crustacea, Copepoda) in a tidal pool in subarctic south-western Iceland; M.B. Steinarsdóttir, et al. Open session. Spatio-temporal distribution of recruits (0 groups) of Merluccius merluccius and Phycis blennoides (Pisces, Gadiformes) in the Strait of Sicily (Central Mediterranean); F. Fiorentino, et al. Growth aspects of Flustra foliacea (Bryozoa, Cheilostomata) in laboratory culture; J. Kahle, et al. Distribution pattern of rays (Pisces, Rajidae) in the Strait of Sicily in relation to fishing pressure; G. Garofalo, et al. Distribution of tintinnid species from 42°N to 43°S through the Indian Ocean; M. Modigh, et al.
Experimental and Theoretical Study of the Atmospheric Degradation of Aldehydes. Part of the NATO Science Series book series (NAIV, volume 16). Aldehydes are ubiquitous key components in the chemistry of the troposphere. They are common primary pollutants from biogenic emissions and in residues of incomplete combustion (Ciccioli et al., 1993). Relevant natural sources are vegetation, forest fires and microbiological processes (Kotzias et al., 1997). Aldehydes are also nearly mandatory intermediates in the photo-oxidation processes of most organic compounds in the troposphere (Kerr and Sheppard, 1981; Carlier et al., 1986). Formaldehyde (HCHO) and acetaldehyde (CH3CHO) are among the most abundant carbonyls in the atmosphere. Ambient levels are on the order of a few tens of pptv in clean background conditions (Zhou et al., 1996; Ayers et al., 1997) but may reach tens of ppbv in polluted urban areas as a consequence of the elevated anthropogenic emissions of aldehydes and their precursors from automobile traffic, industrial and domestic heating, and industrial activity (Carlier et al., 1986; Yokouchi et al., 1990). The atmospheric loss processes include photolysis, daytime reaction with OH radicals and with Cl and Br atoms in the marine boundary layer, and reaction with NO3 radicals during the night-time. The photolytic cleavage of aldehydes constitutes an important source of free radicals, particularly in moderately and strongly polluted areas (Carlier et al., 1986; Yokouchi et al., 1990). Aldehydes are toxic compounds themselves, and some of their photo-oxidation products, the peroxyacylnitrates, are phytotoxic and strong eye-irritant compounds (Carlier et al., 1986; Carter et al., 1981). Further, peroxyacylnitrates, such as peroxyacetyl nitrate (PAN), are long-lived species, which can act as a NO2 reservoir in the troposphere. 
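To make the loss processes concrete, a rough atmospheric lifetime against OH attack follows from tau = 1 / (k_OH · [OH]). The rate coefficient and OH concentration below are typical order-of-magnitude literature values, used purely for illustration:

```python
# Rough lifetime of acetaldehyde against daytime OH attack.
# Both inputs are approximate, illustrative values.
k_oh = 1.5e-11   # cm^3 molecule^-1 s^-1, OH + CH3CHO (approximate)
oh_conc = 1.0e6  # molecule cm^-3, typical daytime global-mean [OH]

tau_seconds = 1.0 / (k_oh * oh_conc)
print(tau_seconds / 3600.0)  # lifetime in hours: under a day
```

A lifetime of well under a day is consistent with aldehydes acting as short-lived, locally produced intermediates rather than long-range-transported species.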
Keywords: Rate Coefficient; Kinetic Isotope Effect; Aliphatic Aldehyde; Marine Boundary Layer; Minimum Energy Path. These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
Alvarez-Idaboy, J. R., N. Mora-Diez, R. J. Boyd and A. Vivier-Bunge; On the importance of prereactive complexes in molecule-radical reactions: Hydrogen abstraction from aldehydes by OH, J. Am. Chem. Soc. (2001) 2018–2024.
Atkinson, R.; Kinetics and mechanisms of the gas-phase reactions of the NO3 radical with organic compounds, J. Phys. Chem. Ref. Data 20 (1991) 459–507.
Atkinson, R.; Gas-phase tropospheric chemistry of organic compounds, J. Phys. Chem. Ref. Data.
Atkinson, R., D. L. Baulch, R. A. Cox, R. F. Hampson Jr., J. A. Kerr, M. J. Rossi and J. Troe; Evaluated kinetic, photochemical and heterogeneous data for atmospheric chemistry. 5. IUPAC Subcommittee on Gas Kinetic Data Evaluation for Atmospheric Chemistry, J. Phys. Chem. Ref. Data (1997) 521–1011.
Ayers, G. P., R. W. Gillet, H. Granek, C. de Serves and R. A. Cox; Formaldehyde production in clean marine air, Geophys. Res. Lett. (1997) 401–404.
Beukes, J. A., B. D’Anna, V. Bakken and C. J. Nielsen; Experimental and theoretical study of the F, Cl and Br reactions with formaldehyde and acetaldehyde, Phys. Chem. Chem. Phys. (2000) 4049–4060.
Carlier, P., H. Hannachi and G. Mouvier; The chemistry of carbonyl compounds in the atmosphere – a review, Atmos. Environ. (1986) 2079–2099.
Carter, W. P. L., A. M. Winer and J. N. Pitts; Effect of peroxyacetyl nitrate on the initiation of photochemical smog, Environ. Sci. Technol. (1981) 831–834.
CATOME “Carbonyls in Tropospheric Oxidation Mechanisms”, CEC Environment and Climate program contract ENV4-CT97-0416, coordinated by C. Dye (2000).
Ciccioli, P., E. Brancaleoni, M. Frattoni, A. Cecinato and A. Brachetti; Ubiquitous occurrence of semivolatile carbonyl compounds in tropospheric samples and their possible sources, Atmos. Environ. (1993) 1891–1901.
D’Anna, B. and C. J. Nielsen; Kinetic study of the vapour-phase reaction between aliphatic aldehydes and the nitrate radical, J. Chem. Soc. Faraday Trans. (1997) 3479–3483.
D’Anna, B., S. Langer, E. Ljungström, C. J. Nielsen and M. Ullerstam; Rate coefficients and Arrhenius parameters for the reaction of the NO3 radical with acetaldehyde and acetaldehyde-1d, Phys. Chem. Chem. Phys. (2001a) 1631–1637.
D’Anna, B., Ø. Andresen, Z. Gefen and C. J. Nielsen; Kinetic study of OH and NO3 radical reactions with 14 aliphatic aldehydes, Phys. Chem. Chem. Phys. (2001b) 3057–3063.
D’Anna, B., V. Bakken, J. A. Beukes, J. T. Jodkowski and C. J. Nielsen; Experimental and theoretical study of gas phase NO3 and OH radical reactions with formaldehyde, acetaldehyde and their isotopomers, Phys. Chem. Chem. Phys.
Kerr, J. A. and D. W. Sheppard; Kinetics of the reactions of hydroxyl radicals with aldehydes studied under atmospheric conditions, Environ. Sci. Technol. (1981) 960–963.
Kotzias, D., C. Konidari and C. Spartà; Volatile carbonyl compounds of biogenic origin – emission and concentration in the atmosphere, in Biogenic Volatile Organic Compounds in the Atmosphere – Summary of present knowledge (Eds. G. Helas, S. Slanina and R. Steinbrecher), SPB Academic Publishers, Amsterdam, 1997, 67–78.
Morris, E. D. Jr. and H. Niki; Mass spectrometric study of the reaction of hydroxyl radical with formaldehyde, J. Chem. Phys. (1971) 1991–1992.
Niki, H., P. D. Maker, L. P. Breitenbach and C. M. Savage; FTIR studies of the kinetics and mechanism for the reaction of chlorine atom with formaldehyde, Chem. Phys. Lett. (1978) 596–599.
Niki, H., P. D. Maker, C. M. Savage and L. P. Breitenbach; A Fourier transform infrared study of the kinetics and mechanism for the reaction of hydroxyl radical with formaldehyde, J. Phys. Chem. (1984) 5342–5344.
Niki, H., P. D. Maker, C. M. Savage and L. P. Breitenbach; FTIR study of the kinetics and mechanism for chlorine-atom-initiated reactions of acetaldehyde, J. Phys. Chem. (1985) 588–591.
Papagni, C., J. Arey and R. Atkinson; Rate constants for the gas-phase reactions of a series of C3–C6 aldehydes with OH and NO3 radicals, Int. J. Chem. Kin. (2000) 79–84.
RADICAL “Evaluation of Radical Sources in Atmospheric Chemistry through Chamber and Laboratory Studies”, CEC Environment and Climate program contract ENV4-CT97-0419, coordinated by G. Moortgat (2000).
Soto, M. R. and M. Page; Features of the potential energy surface for reactions of hydroxyl with formaldehyde, J. Phys. Chem. (1990) 3242–3246.
Taylor, P. H., M. S. Rahman, M. Arif, B. Dellinger and P. Marshall; Kinetics and mechanistic studies of the reaction of hydroxyl radicals with acetaldehyde over an extended temperature range, 26th International Symposium on Combustion (1996) 497–504.
Ullerstam, M., S. Langer and E. Ljungström; Gas phase rate coefficients and activation energies for the reaction of butanal and 2-methyl-propanal with nitrate radicals, Int. J. Chem. Kin. (2000) 294–303.
Wallington, T. J., L. M. Skewes, W. O. Siegel, C. H. Wu and S. M. Japar; Gas phase reaction of chlorine atoms with a series of oxygenated organic species at 295 K, Int. J. Chem. Kin. (1988) 867–875.
Yokouchi, Y., H. Mukai, K. Nakajima and Y. Ambe; Semivolatile aldehydes as predominant organic gases in remote areas, Atmos. Environ. (1990) 439–442.
Zhou, X. L., Y.-N. Lee, L. Newman, X. H. Chen and K. Mopper; Tropospheric formaldehyde concentration at the Mauna Loa observatory during the Mauna Loa observatory photochemistry experiment 2, J. Geophys. Res. (1996) 14711–14719.
© Springer Science+Business Media Dordrecht 2002
Synthetic pieces of biological molecule form framework and glue for making nanoparticle clusters and arrays In a new twist on the use of DNA in nanoscale construction, scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory and collaborators put synthetic strands of the biological material to work in two ways: They used ropelike configurations of the DNA double helix to form a rigid geometrical framework, and added dangling pieces of single-stranded DNA to glue nanoparticles in place. Scientists built octahedrons using ropelike structures made of bundles of DNA double-helix molecules to form the frames (a). Single strands of DNA attached at the vertices (numbered in red) can be used to attach nanoparticles coated with complementary strands. This approach can yield a variety of structures, including ones with the same type of particle at each vertex (b), arrangements with particles placed only on certain vertices (c), and structures with different particles placed strategically on different vertices (d). Credit: Brookhaven National Laboratory The method, described in the journal Nature Nanotechnology, produced predictable clusters and arrays of nanoparticles--an important step toward the design of materials with tailored structures and functions for applications in energy, optics, and medicine. "These arrays of nanoparticles with predictable geometric configurations are somewhat analogous to molecules made of atoms," said Brookhaven physicist Oleg Gang, who led the project at the Lab's Center for Functional Nanomaterials (CFN, http://www. Using the new method, the scientists say they can potentially orchestrate the arrangements of different types of nanoparticles to take advantage of collective or synergistic effects. Examples could include materials that regulate energy flow, rotate light, or deliver biomolecules. 
"We may be able to design materials that mimic nature's machinery to harvest solar energy, or manipulate light for telecommunications applications, or design novel catalysts for speeding up a variety of chemical reactions," Gang said. The scientists demonstrated the technique to engineer nanoparticle architectures using an octahedral scaffold with particles positioned in precise locations on the scaffold according to the specificity of DNA coding. The designs included two different arrangements of the same set of particles, where each configuration had different optical characteristics. They also used the geometrical clusters as building blocks for larger arrays, including linear chains and two-dimensional planar sheets. "Our work demonstrates the versatility of this approach and opens up numerous exciting opportunities for high-yield precision assembly of tailored 3D building blocks in which multiple nanoparticles of different structures and functions can be integrated," said CFN scientist Ye Tian, one of the lead authors on the paper. Details of assembly This nanoscale construction approach takes advantage of two key characteristics of the DNA molecule: the twisted-ladder double helix shape, and the natural tendency of strands with complementary bases (the A, T, G, and C letters of the genetic code) to pair up in a precise way. First, the scientists created bundles of six double-helix molecules, then put four of these bundles together to make a stable, somewhat rigid building material--similar to the way individual fibrous strands are woven together to make a very strong rope. The scientists then used these ropelike girders to form the frame of three-dimensional octahedrons, "stapling" the linear DNA chains together with hundreds of short complementary DNA strands. "We refer to these as DNA origami octahedrons," Gang said. 
To make it possible to "glue" nanoparticles to the 3D frames, the scientists engineered each of the original six-helix bundles to have one helix with an extra single-stranded piece of DNA sticking out from both ends. When assembled into the 3D octahedrons, each vertex of the frame had a few of these "sticky end" tethers available for binding with objects coated with complementary DNA strands. "When nanoparticles coated with single-strand tethers are mixed with the DNA origami octahedrons, the 'free' pieces of DNA find one another so the bases can pair up according to the rules of the DNA complementarity code. Thus the specifically DNA-encoded particles can find their correspondingly designed place on the octahedron vertices," Gang said. The scientists can change what binds to each vertex by changing the DNA sequences encoded on the tethers. In one experiment, they encoded the same sequence on all the octahedron's tethers and attached strands with a complementary sequence to gold nanoparticles. The result: one gold nanoparticle attached to each of the octahedron's six vertices. In additional experiments, the scientists changed the sequence of some vertices and used complementary strands on different kinds of particles, illustrating that they could direct the assembly and arrangement of the particles in a very precise way. In one case they made two different arrangements of the same three pairs of particles of different sizes, producing products with different optical properties. They were even able to use DNA tethers on selected vertices to link octahedrons end to end, forming chains, and in 2D arrays, forming sheets. Visualization of arrays Confirming the particle arrangements and structures was a major challenge because the nanoparticles and the DNA molecules making up the frames have very different densities. Certain microscopy techniques would reveal only the particles, while others would distort the 3D structures. 
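The base-pairing rule that drives this "sticky end" assembly can be sketched in a few lines. This is only a schematic of the Watson-Crick complementarity logic the article describes, with invented sequences, not the team's actual design tooling:

```python
# A tether on a nanoparticle hybridizes with a vertex tether only if the
# two sequences are complementary (A-T, G-C) read antiparallel.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Reverse complement of a 5'->3' DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def binds(vertex_tether, particle_tether):
    """True if the particle tether can hybridize with the vertex tether."""
    return particle_tether == reverse_complement(vertex_tether)

vertex = "GATTACA"  # invented vertex tether sequence
print(binds(vertex, reverse_complement(vertex)))  # True: encoded particle finds its vertex
print(binds(vertex, "GATTACA"))                   # False: identical strands do not pair
```

Encoding distinct sequences on different vertices, as in the experiments described above, amounts to giving each vertex its own address in this matching scheme.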
To see both the particles and origami frames, the scientists used cryo-electron microscopy (cryo-EM), led by Brookhaven Lab and Stony Brook University biologist Huilin Li, an expert in this technique, and Tong Wang, the paper's other lead co-author, who works in Brookhaven's Biosciences department with Li. They had to subtract information from the images to "see" the different density components separately, then combine the information using single particle 3D reconstruction and tomography to produce the final images. "Cryo-EM preserves samples in their near-native states and provides close to nanometer resolution," Wang said. "We show that cryo-EM can be successfully applied to probe the 3D structure of DNA-nanoparticle clusters." These images confirm that this approach to direct the placement of nanoparticles on DNA-encoded vertices of molecular frames could be a successful strategy for fabricating novel nanomaterials. This research was supported by the DOE Office of Science. Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov. One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. 
Brookhaven is operated and managed for DOE's Office of Science by Brookhaven Science Associates, a limited-liability company founded by the Research Foundation for the State University of New York on behalf of Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit applied science and technology organization. Scientific paper: "Prescribed nanoparticle cluster architectures and low-dimensional arrays built using octahedral DNA origami frames" LINK will be active after embargo: http://dx. Karen McNulty Walsh | EurekAlert! World’s Largest Study on Allergic Rhinitis Reveals new Risk Genes 17.07.2018 | Helmholtz Zentrum München - Deutsches Forschungszentrum für Gesundheit und Umwelt Plant mothers talk to their embryos via the hormone auxin 17.07.2018 | Institute of Science and Technology Austria For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... 
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches to coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 17.07.2018 | Information Technology 17.07.2018 | Materials Sciences 17.07.2018 | Power and Electrical Engineering
A new set of measurements has allowed a Florida State University geochemist to confirm what other scientists have only suspected about what lies deep below the Earth's surface. Professor Munir Humayun has found that there is a higher iron content in the Earth's mantle beneath Hawaii compared to other regions of the mantle. Hotspot islands, such as Hawaii, arise from hot plumes of solid rock from deep within the mantle or the core-mantle boundary that ascend at rates of a few centimeters per year. While seismologists had long thought that the Earth's deep mantle - the rocky layer between 1,000 and 3,000 kilometers deep - beneath the Hawaiian islands has a higher concentration of iron, no one had ever precisely measured it until now, according to Humayun. Iron is one of the four main components of the mantle. "This is a major intellectual advance for science," Humayun said. "The fact that scientists can stand on the Earth's surface and tell you what's going on 3,000 kilometers below is a real breakthrough." New research calculates capacity of North American forests to sequester carbon 16.07.2018 | University of California - Santa Cruz Scientists discover Earth's youngest banded iron formation in western China 12.07.2018 | University of Alberta
Electrons in graphene superlattices are different and behave as neutrinos that acquired a notable mass. This results in a new, relativistic behaviour so that electrons can now skew at large angles to applied fields. The effect is huge. This new membrane lasts twice as long when compared to conventional membranes, is highly resistant to breakage, and has anti-bacterial and anti-biofouling properties. Another groundbreaking characteristic - it allows for an unprecedented flow rate of at least ten times faster than current water filtration membranes. In a first-of-its-kind demonstration, a team of researchers has developed a powerful technique to focus laser light through even the murkiest of surroundings without the need for a guide star. This innovation, a specialized version of an adaptive optics microscope, can resolve a point less than one thousandth of a millimeter across. A team of Berkeley Lab researchers believes it has uncovered the secret behind the unusual optoelectronic properties of single atomic layers of transition metal dichalcogenide (TMDC) materials, the two-dimensional semiconductors that hold great promise for nanoelectronic and photonic applications. Funding provided by the UK Research Partnership Investment Fund, the Technology Strategy Board and Masdar, an Abu Dhabi-based clean technology and renewable energy company University of Manchester and Masdar Institute to establish graphene commercial application programs. Scientists have discovered a novel cause of glaucoma in an animal model, and related to their findings, are now developing an eye drop aimed at curing the disease. They believe their findings will be important to human glaucoma. The surface of graphene, a one atom thick sheet of carbon, can be randomly decorated with oxygen to create graphene oxide; a form of graphene that could have a significant impact on the chemical, pharmaceutical and electronic industries. 
Applied as paint, it could provide an ultra-strong, non-corrosive coating for a wide range of industrial applications.
Cosmologists from Durham University, publishing their results in the prestigious international academic journal, Science, suggest that the formation of the first stars depends crucially on the nature of ‘dark matter’, the strange material that makes up most of the mass in the universe. The discovery takes scientists a step further to determining the nature of dark matter, which remains a mystery since it was first discovered more than 70 years ago. It also suggests that some of the very first stars that ever formed can still be found in the Milky Way galaxy today. Early structure formation in the Universe involves interaction between elusive particles known as ‘dark matter’. Even though little is known about their nature, evidence for the presence of dark matter is overwhelming, from observations of galaxies, to clusters of galaxies, to the Universe as a whole. After the Big Bang, the universe was mostly ‘smooth’, with just small ripples in the matter density. These ripples grew larger due to the gravitational forces acting on the dark matter particles contained in them. Eventually, gas was pulled into the forming structures, leading to the formation of the very first stars, about 100 million years after the Big Bang. For their research, the team from Durham University’s Institute for Computational Cosmology carried out sophisticated computer simulations of the formation of these early stars with accepted scientific models of so-called ‘cold’ as well as ‘warm’ dark matter. The computer model found that for slow moving ‘cold dark matter’ particles, the first stars formed in isolation, with just a single, larger mass star forming per developing spherical dark matter concentration. In contrast, for faster-moving ‘warm dark matter’, a large number of stars of differing sizes formed at the same time in a big burst of star formation. The bursts occurred in long and thin filaments. 
One of the researchers, Dr Liang Gao, who receives funding from the UK's Science and Technology Facilities Council, said: "These filaments would have been around 9000 light years long, which is about a quarter of the size of the Milky Way galaxy today. The very luminous star burst would have lit up the dark universe in spectacular fashion." Stars forming in the cold dark matter scenario are massive. The larger a star is, the shorter its life span, so these larger mass stars would not have survived until today. However, the warm dark matter model predicts the formation of low mass stars as well as larger ones, and the scientists say the low mass stars would survive until today. The research paves the way for observational studies which could bring scientists closer to finding out more about the nature of dark matter. Co-researcher, Dr Tom Theuns, said: "A key question that astronomers often ask is 'Where are the descendants of the first stars today?' The answer is that, if the dark matter is warm, some of these primordial stars should be lurking around our galaxy." The Durham University scientists also give new insights into the way that black holes could be formed. Most galaxies harbour in their centres monster black holes, some with masses more than a billion times the mass of the sun. The team hypothesises that collisions between stars in the dense filaments in the warm dark matter scenario lead to the formation of the seeds for such black holes. Dr Theuns added: "Our results raise the exciting prospect of learning about the nature of dark matter from studying the oldest stars. Another tell-tale sign could be the gigantic black holes that live in centres of galaxies like the Milky Way.
They could have formed during the collapse of the first filaments in a universe dominated by warm dark matter.” Innovative genetic tests for children with developmental disorders and epilepsy 11.07.2018 | Christian-Albrechts-Universität zu Kiel Oxygen loss in the coastal Baltic Sea is “unprecedentedly severe” 05.07.2018 | European Geosciences Union A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
Farmed salmon show full reproductive potential to invade wild gene pools and should be sterilised - according to new research from the University of East Anglia (UEA). Findings published today reveal that, while farmed salmon are genetically different to their wild counterparts, they are just as fertile. This is important information because millions of farmed salmon escape into the wild – posing threats to wild gene pools. Lead Researcher Prof Matt Gage from UEA’s school of Biological Sciences said: “Around 95 per cent of all salmon in existence are farmed, and domestication has made them very different to wild populations, each of which is locally adapted to its own river system. “Farmed salmon grow very fast, are aggressive, and not as clever as wild salmon when it comes to dealing with predators. These domestic traits are good for producing fish for the table, but not for the stability of wild populations. “The problem is that farmed salmon can escape each year in their millions, getting into wild spawning populations, where they can then reproduce and erode wild gene pools, introducing these negative traits. “We know that recently-escaped farmed salmon are inferior to wild fish in reproduction, but we do not have detailed information on sperm and egg performance, which could have been affected by domestication. Our work shows that farm fish are as potent at the gamete level as wild fish, and if farm escapes can revive their spawning behaviour by a period in the wild, clearly pose a significant threat of hybridisation with wild populations.” Researchers used a range of in vitro fertilization tests in conditions that mimicked spawning in the natural environment, including tests of sperm competitiveness and egg compatibility. All tests on sperm and egg form and function showed that farmed salmon are as fertile as wild salmon – identifying a clear threat of farmed salmon reproducing with wild fish. 
“Some Norwegian rivers have recorded big numbers of farmed fish present – as much as 50 per cent. Both anglers and conservationists are worried by farmed fish escapees which could disrupt locally adapted traits like timing of return, adult body size, and disease resistance. “Salmon farming is a huge business in the UK, Norway and beyond, and while it does reduce the pressure on wild fish stocks, it can also create its own environmental pressures through genetic disruption. “A viable solution is to induce ‘triploidy’ by pressure-treating salmon eggs just after fertilisation - the fish grows as normal but carries three chromosome sets instead of two; this is already standard practice in rainbow trout farming. The resulting adult develops testes and ovaries but both are much reduced and most triploids are sterile. These triploid fish can’t reproduce if they escape, but the aquaculture industry has not embraced this technology yet because of fears that triploids don’t perform as well in farms as normal diploid fish, eroding profits.” This research was funded by the Natural Environment Research Council (NERC) and the Royal Society. ‘Assessing risks of invasion through gamete performance: farm Atlantic salmon sperm and eggs show equivalence in function, fertility, compatibility and competitiveness to wild Atlantic salmon’ is published by Evolutionary Applications on March 10, 2014. Lisa Horton | EurekAlert! Scientists uncover the role of a protein in production & survival of myelin-forming cells 19.07.2018 | Advanced Science Research Center, GC/CUNY NYSCF researchers develop novel bioengineering technique for personalized bone grafts 18.07.2018 | New York Stem Cell Foundation
Spectral remote sensing has been practiced on a large scale since the launch of Landsat 1 in 1972. The limited information contained in this spectrally undersampled data set has led to the development of sophisticated statistical-inferential methods for data analysis. The results are usually limited by the availability of ground truth information. Recent technological developments have made it feasible to create narrow-band, contiguous, spectral image data sets that make possible the identification of surface cover materials based on the complete reflectance spectrum for each picture element. This capability will revolutionize the use of remote sensing data and require new deterministic image processing techniques to extract the full information content from the data. Sensors, based on the concept of imaging spectrometry and the new technology of area array infrared detectors, have been constructed and are candidates for shuttle and space platform flights. Alexander F. H. Goetz, "High Spectral Resolution Remote Sensing Of The Land", Proc. SPIE 0475, Remote Sensing: Critical Review of Technology, (16 October 1984); doi: 10.1117/12.966241; https://doi.org/10.1117/12.966241
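Identifying surface cover materials from a complete reflectance spectrum, as the abstract describes, is commonly done by comparing each pixel's spectrum against a library of reference spectra. Below is a minimal sketch using the spectral-angle measure, a standard similarity metric in imaging spectrometry; the five-band library values are made up for illustration:

```python
import math

def spectral_angle(spectrum, reference):
    """Angle (radians) between two spectra treated as vectors; a small angle
    means similar spectral shape regardless of overall brightness."""
    dot = sum(a * b for a, b in zip(spectrum, reference))
    norm = math.sqrt(sum(a * a for a in spectrum)) * math.sqrt(sum(b * b for b in reference))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def classify(pixel, library):
    """Assign the pixel to the library material with the smallest angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Toy five-band reflectance library (values invented for illustration):
library = {
    "vegetation": [0.05, 0.08, 0.04, 0.50, 0.45],
    "dry_soil":   [0.10, 0.15, 0.20, 0.25, 0.30],
}
pixel = [0.06, 0.09, 0.05, 0.48, 0.44]  # a vegetation-like spectrum
print(classify(pixel, library))         # vegetation
```

Because the angle ignores overall magnitude, the same material is recognized under brighter or darker illumination, which is one reason shape-based matching suits full-spectrum data better than the band-ratio statistics used with undersampled sensors.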
Process of contact and adhesion whereby dispersed molecules or particles are held together by weak physical interactions, ultimately leading to phase separation by the formation of precipitates of larger than colloidal size. Note 1: In contrast to aggregation, agglomeration is a reversible process. Flocculation, in the field of chemistry, is a process wherein colloids come out of suspension in the form of floc or flake, either spontaneously or due to the addition of a clarifying agent. The action differs from precipitation in that, prior to flocculation, colloids are merely suspended in a liquid and not actually dissolved in a solution. In the flocculated system, there is no formation of a cake, since all the flocs are in the suspension. Coagulation and flocculation are important processes in water treatment: coagulation destabilizes particles through chemical reaction between the coagulant and colloids, while flocculation transports the destabilized particles so that they collide and grow into floc. According to the IUPAC definition, flocculation is “a process of contact and adhesion whereby the particles of a dispersion form larger-size clusters”. Basically, coagulation is the addition of a coagulant to destabilize stabilized charged particles, while flocculation is a mixing technique that promotes agglomeration and assists in the settling of particles. During flocculation, gentle mixing accelerates the rate of particle collision, and the destabilized particles are further aggregated and enmeshed into larger precipitates. Flocculation is affected by several parameters, including mixing speeds, mixing intensity, and mixing time.
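In water-treatment practice, the "mixing intensity" mentioned above is usually summarized by the mean velocity gradient G = sqrt(P / (μV)) and the dimensionless product G·t (the Camp number). The formula is standard; the basin numbers below are purely illustrative:

```python
import math

def velocity_gradient(power_w, viscosity_pa_s, volume_m3):
    """Mean velocity gradient G = sqrt(P / (mu * V)), in 1/s."""
    return math.sqrt(power_w / (viscosity_pa_s * volume_m3))

# Illustrative basin: 500 W of mixing power, water at ~20 C (mu ~ 1e-3 Pa*s),
# a 100 m^3 tank, and 20 minutes of gentle stirring.
G = velocity_gradient(power_w=500.0, viscosity_pa_s=1.0e-3, volume_m3=100.0)
t = 20 * 60                # mixing time in seconds
camp_number = G * t        # dimensionless G*t used to size flocculators

print(round(G, 1))         # 70.7 (1/s): gentle enough to grow floc without shearing it
print(round(camp_number))  # 84853
```

Raising the mixer power increases G and speeds up particle collisions, but too high a G shears floc apart, which is why flocculation basins use slow paddles rather than rapid-mix impellers.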
Authors: George Rajna The historic first detection of gravitational waves from colliding black holes far outside our galaxy opened a new window to understanding the universe. Using data from the first-ever gravitational waves detected last year, along with a theoretical analysis, physicists have shown that gravitational waves may oscillate between two different forms called "g"- and "f"-type gravitational waves. Astronomy experiments could soon test an idea developed by Albert Einstein almost exactly a century ago, scientists say. It's estimated that 27% of all the matter in the universe is invisible, while everything from PB&J sandwiches to quasars accounts for just 4.9%. But a new theory of gravity proposed by theoretical physicist Erik Verlinde of the University of Amsterdam found a way to dispense with the pesky stuff. The proposal by the trio, though phrased as a solution to the arrow of time problem, is not likely to be addressed as such by the physics community; it's more likely to be considered as yet another theory that works mathematically, yet still can't answer the basic question of what time is. The Weak Interaction transforms an electric charge in the diffraction pattern from one side to the other side, causing an electric dipole momentum change, which violates the CP and Time reversal symmetry. The Neutrino Oscillation of the Weak Interaction shows that it is a General electric dipole change and it is possible for any other temperature dependent entropy and information changing diffraction pattern of atoms, molecules and even complicated biological living structures.
Japanese officials issued new warnings on Monday as a deadly heatwave blankets the country, producing record high temperatures in Tokyo just two years before the city hosts the 2020 Summer Olympics. Officials said last week that the heatwave had killed at least 15 people and forced the hospitalization of over 12,000 others in the first two weeks of July. Underground internet cables criss-crossing coastal regions will be inundated by rising seas within the next 15 years, according to a new study. Thousands of miles of fibre optic cables are under threat in US cities like New York, Seattle and Miami, and could soon be out of action unless steps are taken to protect them. Street crews in the US Northeast raced through the night into January 5 to clear snow-clogged streets after a powerful blizzard and restore power to homes ahead of a brutal cold spell that has killed more than a dozen people in the United States. Dead zones are areas of the sea where the lack of oxygen makes it difficult for fish to survive and the one in the Arabian Sea is "is the most intense in the world," says Lachkar, a senior scientist at NYU Abu Dhabi in the capital of the United Arab Emirates. More electricity demand for fridges, fans and other appliances will add to man-made climate change unless power generators shift from fossil fuels to cleaner energies, according to the report by the non-profit Sustainable Energy for All group. The surface area of Colombia's six glaciers has shrunk from 45 square kilometers in 2010 to 37 square kilometers in 2017, a decline of 18 per cent, the Institute of Hydrology, Meteorology and Environmental Studies said. Producers and traders told EIA researchers posing as buyers that the majority of Chinese companies manufacturing foam in high demand as an insulator in the booming construction sector continue to use CFC-11 because of its better quality and lower price. 
Rising temperatures are boosting the wine trade in Europe's northern climes - production has quadrupled in Belgium since 2006, according to government figures. The amount of land given over to grapes there has risen even faster. Pope Francis is urging governments to make good on their commitments to curb climate change, warning that continued unsustainable development and rampant consumption threaten to turn the Earth into a vast pile of "rubble, deserts and refuse." The International Energy Agency (IEA) reported in 2017 that a critical technology — capturing carbon dioxide emissions from generators and either burying or otherwise disposing of them — isn’t expanding fast enough. “More than 600 million people live in low-elevation coastal areas, less than 10 metres above sea level. In a warming climate, global sea level will rise due to melting of land-based glaciers and ice sheets, and from the thermal expansion of ocean waters,” said Svetlana Jevrejeva, from the NOC.
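The thermal-expansion contribution Jevrejeva mentions can be estimated to first order as Δh ≈ α·ΔT·H for an ocean layer of depth H warmed by ΔT. The expansion coefficient and the scenario values below are rough illustrative choices, not figures from the NOC study:

```python
def thermosteric_rise_m(alpha_per_k, delta_t_k, layer_depth_m):
    """First-order thermosteric sea-level rise: dh ~ alpha * dT * H."""
    return alpha_per_k * delta_t_k * layer_depth_m

# Rough scenario: the upper 700 m of ocean warms by 1 K; a typical thermal
# expansion coefficient for seawater is ~2e-4 per K (it varies with
# temperature, salinity and pressure).
rise = thermosteric_rise_m(alpha_per_k=2.0e-4, delta_t_k=1.0, layer_depth_m=700.0)
print(f"{rise * 100:.0f} cm")  # 14 cm from expansion alone, before any ice melt
```

Even this back-of-envelope figure shows why expansion alone, without any melting, already matters for the low-lying coastal populations quoted above.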
A breakthrough in controlling defects could lead to new generation of electronic devices Reporting in Nature Materials this week, researchers from the Physics Department of Sapienza University of Rome and the London Centre for Nanotechnology have discovered a technique to ‘draw' superconducting shapes using an X-ray beam. This ability to create and control tiny superconducting structures has implications for a completely new generation of electronic devices. Superconductivity is a special state where a material conducts electricity with no resistance, meaning absolutely zero energy is wasted. The research group has shown that they can manipulate regions of high temperature superconductivity, in a particular material which combines oxygen, copper and a heavier, "rare earth" element, lanthanum. Illuminating with X-rays causes a small scale re-arrangement of the oxygen atoms in their material, resulting in high temperature superconductivity, of the type originally discovered for such materials 25 years ago by IBM scientists. The X-ray beam is then used like a pen to draw shapes in two dimensions. As well as being able to write superconductors with dimensions much smaller than the width of a human hair, they are able to erase those structures by applying heat treatments. They now have the tools to write and erase with high precision, in very few simple steps and without the chemicals ordinarily used in device fabrication. This ability to re-arrange the underlying structure of a material in turn has wider applications to similar compounds containing metal atoms and oxygen, ranging from fuel cells to catalysts. Prof. Aeppli, Director of the London Centre for Nanotechnology and the UCL investigator on the project, commented that "Our validation of a one-step, chemical-free technique to generate superconductors opens up exciting new possibilities for electronic devices, particularly in re-writing superconducting logic circuits.
Of profound importance is the key to solving the notorious ‘travelling salesman problem', which underlies many of the world's great computational challenges. We want to create computers on demand to solve this problem, with applications from genetics to logistics. A discovery like this means a paradigm shift in computing technology is one step closer." Prof. Bianconi, the leader of the team from Sapienza, added "It is amazing that in a few simple steps, we can now add superconducting ‘intelligence' directly to a material consisting mainly of the common elements copper and oxygen." The X-ray experiments were performed at the Elettra (Trieste) synchrotron radiation facility. The work is published in Nature Materials of 21 August 2011 (doi:10.1038/nmat3088) and follows on from the previous discovery of fractal-like structures in superconductors (doi:10.1038/nature09260). Figure: The experiments show that X-ray beams could be used in the future to write superconducting circuits, such as those depicted in the illustration. Here, solid lines indicate electrical connections while semicircles denote superconducting junctions, whose states are indicated by red arrows. Professor Gabriel Aeppli is available for interview on: +44 207 679 0055 or email: firstname.lastname@example.org. Professor Antonio Bianconi is available for interview on: +39 338 843 8281 or email: email@example.com About the London Centre for Nanotechnology: The London Centre for Nanotechnology is an interdisciplinary joint enterprise between University College London and Imperial College London. In bringing together world-class infrastructure and leading nanotechnology research activities, the Centre has the critical mass to compete with the best facilities world-wide. Research programmes are aligned to three key areas, namely Planet Care, Healthcare and Information Technology, and exploit core competencies in the biomedical, physical and engineering sciences. 
About Sapienza University of Rome: The "Studium Urbis" was founded in 1303. In 1872 it became the national university of the capital of Italy. Sapienza University is by far the largest university in Europe, with 140,000 students and more than 100 buildings. Enrico Fermi, Ettore Majorana, Ugo Fano and others have helped to establish the Physics Department as a leading centre of research and academic excellence. Website: www.superstripes.com
3 lectures: Nuclear Physics, Particle Physics 1, Particle Physics 2.

Nuclear and Particle Physics. Nuclear physics topics: composition of the nucleus, features of nuclei, nuclear models, nuclear energy, fission, fusion, summary.

About units: energies are quoted in electron-volts (eV); masses in energy units, MeV/c^2.

Historical background (1900–1920): cathode rays and their oppositely charged counterparts; the discovery of the nucleus. Example neutron-emission de-excitation: 13C* -> 12C + n.

The puzzle of the beta spectrum: both alpha and gamma emission give discrete spectra, because E_alpha,gamma = E_i - E_f. But the beta spectrum is continuous. Is energy conservation violated? Bohr: "At the present stage of atomic theory, however, we may say that we have no argument, either empirical or theoretical, for upholding the energy principle in the case of β-ray disintegrations." [F. A. Scott, Phys. Rev. 48, 391 (1935)]. [Figure: positron track going upward through a lead plate.]

Problem: Estimate the lowest possible energy of a neutron contained in a typical nucleus of radius 1.33×10^-15 m.
E = p^2/2m = (cp)^2/(2mc^2)
From the uncertainty principle, p ≈ h/(2π Δx), so
cp ≈ hc/(2π Δx) = (6.63×10^-34 J·s × 3×10^8 m/s)/(2π × 1.33×10^-15 m) = 2.38×10^-11 J = 148.6 MeV
E = (cp)^2/(2mc^2) = (148.6 MeV)^2/(2 × 940 MeV) = 11.7 MeV

Radioactive decay: N(t) = N_0 e^(-λt), so at t = 1/λ, N is 1/e (0.368) of the original amount.

Fermi-gas model of the nucleus: in each potential well the lowest energy states are occupied. Because of the Coulomb repulsion the proton well is shallower than that of the neutrons, but the nuclear energy is minimized when the maximum occupied energy level is about the same for protons and neutrons. Therefore, as Z increases we expect nuclei to contain progressively more neutrons than protons; uranium has A = 238, Z = 92.

Fission: each fission releases about 200 MeV of energy.

Fusion (the solar pp chain):
1H + 1H -> 2H + e+ + ν
e+ + e- -> γ + γ
2H + 1H -> 3He + γ
3He + 3He -> 4He + 1H + 1H
Only about 1 pp collision in 10^22 leads to fusion.

Sir Fred Hoyle predicted a state of 12C at 7.65 MeV above the 12C ground state.
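The confinement estimate above can be reproduced numerically. The sketch below is my own addition (it uses the standard combination ℏc ≈ 197.3 MeV·fm, which is not quoted in the notes but is equivalent to the hc/2π factor used there):

```python
# Minimum kinetic energy of a neutron confined to a nucleus of radius ~1.33 fm,
# estimated from the uncertainty principle: p ~ hbar / delta_x.
HBAR_C = 197.327   # MeV*fm
M_N_C2 = 939.565   # neutron rest energy in MeV (the notes round this to 940)

def confined_neutron_energy(radius_fm: float) -> float:
    cp = HBAR_C / radius_fm          # MeV, from p = hbar / delta_x
    return cp**2 / (2 * M_N_C2)      # non-relativistic E = p^2 / (2m)

cp = HBAR_C / 1.33
print(f"cp = {cp:.1f} MeV")                             # ~148.4 MeV
print(f"E  = {confined_neutron_energy(1.33):.1f} MeV")  # ~11.7 MeV
```

The result matches the worked value of roughly 11.7 MeV, confirming that nucleons in a nucleus have kinetic energies of order 10 MeV.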
This dried specimen of Teredo navalis was extracted from the wood and the calcareous tunnel that originally surrounded it, and was curled into a circle artificially. The two valves of the shell are the white structures at the anterior end; they are used to dig the tunnel in the wood.

Class: Bivalvia (or Pelecypoda)

The shipworms are marine bivalve molluscs in the family Teredinidae: a group of saltwater clams with long, soft, naked bodies. They are notorious for boring into (and commonly eventually destroying) wood that is immersed in sea water, including such structures as wooden piers, docks and ships; they drill passages by means of a pair of very small shells borne at one end, with which they rasp their way through. Sometimes called "termites of the sea", they are also known as "Teredo worms" or simply Teredo, from the Greek τερηδών (teredōn), via Latin. Eventually biologists adopted the common name Teredo as the name for the best-known genus. On April 17, 2017, it was reported that live giant tube shipworms, Kuphus polythalamia, had been discovered in the Philippines. Removed from its burrow, the fully grown teredo ranges from several centimetres to about a metre in length, depending on the species. The body is cylindrical, slender, naked and superficially vermiform, meaning "worm-shaped". In spite of their slender, worm-like forms, shipworms nonetheless possess the characteristic morphology of bivalves. The ctenidia lie mainly within the branchial siphon, through which the animal pumps the water that passes over the gills. The two siphons are very long and protrude from the posterior end of the animal. Where they leave the end of the main part of the body, the siphons pass between a pair of calcareous plates called pallets. If the animal is alarmed, it withdraws the siphons, and the pallets protectively block the opening of the tunnel. The pallets are not to be confused with the two valves of the main shell, which are at the anterior end of the animal. 
Because the valves are the organs the animal uses to bore its tunnel, they are generally located at the tunnel's end. They are borne on the slightly thickened, muscular anterior end of the cylindrical body; they are roughly triangular in shape and markedly concave on their interior surfaces. The outer surfaces are convex and in most species are deeply sculpted into sharp grinding surfaces with which the animals bore their way through the wood or similar medium in which they live and feed. The valves of shipworms are separated and the aperture of the mantle lies between them. The small "foot" (corresponding to the foot of a clam) can protrude through the aperture. The range of various species has changed over time with human activity. Many waters in developed countries that had been plagued by shipworms were cleared of them by pollution during the industrial revolution and into the modern era; as environmental regulation led to cleaner waters, shipworms returned and became a problem again. Climate change has also shifted the ranges of species; some once found only in warmer, saltier waters such as the Caribbean have established habitats in the Mediterranean. When shipworms bore into submerged wood, bacteria (Teredinibacter turnerae strain ATCC 39867 / T7901) in a special organ called the gland of Deshayes digest the cellulose exposed in the fine particles created by the excavation. The excavated burrow is usually lined with a calcareous tube. The valves of the shell of shipworms are small separate parts located at the anterior end of the worm, used for excavating the burrow. Ruth Turner of Harvard University was the leading 20th-century expert on the Teredinidae; she published a detailed monograph on the family, the 1966 volume "A Survey and Illustrated Catalogue of the Teredinidae", published by the Museum of Comparative Zoology. 
More recently, the endosymbionts found in the gills have been studied for the bioconversion of cellulose in fuel-energy research. Shipworm species comprise several genera, of which Teredo is the most commonly mentioned. The best-known species is Teredo navalis. Historically, Teredo concentrations in the Caribbean Sea have been substantially higher than in most other salt water bodies. The longest marine bivalve, Kuphus polythalamia, was found in a lagoon near Mindanao island in the southeastern Philippines; it belongs to the same group as mussels and clams. The existence of these huge mollusks had been known for centuries and studied by scientists on the basis of the shells they leave behind, which are the size of baseball bats. The bivalve is a rare creature that spends its life inside an elephant-tusk-like hard shell made of calcium carbonate. It has a protective cap over its head, which it reabsorbs in order to burrow into the mud for food. The tube is not merely the home of the black, slimy animal; it also plays a role in its nourishment in a non-traditional way. The animal can reach a length of 1.5 meters (5 ft) and a diameter of 6 cm (2.3 in). It can reabsorb the shell when it needs to grow and burrow deeper into the mud. K. polythalamia sifts mud and sediment with its gills. Most shipworms are much smaller and feed on rotting wood; K. polythalamia, by contrast, does not eat wood but relies on beneficial symbiotic bacteria living in its gills. The bacteria use hydrogen sulfide as an energy source to produce the organic carbon compounds that feed the shipworm. The process is analogous to photosynthesis, in which green plants convert carbon dioxide from the air into simple carbon compounds. Scientists found that K. polythalamia cooperates with different bacteria than other shipworms, which could explain why it evolved from consuming rotting wood to living on hydrogen sulfide in the mud. 
The internal organs of the shipworm have shrunk from lack of use over the course of its evolution. Scientists plan to study the microbes found in the gills of K. polythalamia in search of possible new antimicrobial substances. Genera within the family Teredinidae include: - Bactronophorus Tapparone-Canefri, 1877 - Bankia Gray, 1842 - Dicyathifer Iredale, 1932 - Kuphus Guettard, 1770 - Lyrodus Binney, 1870 - Nausitoria Wright, 1884 - Neoteredo Bartsch, 1920 - Nototeredo Bartsch, 1923 - Psiloteredo Bartsch, 1922 - Spathoteredo Moll, 1928 - Teredo Linnaeus, 1758 - Teredora Bartsch, 1921 - Teredothyra Bartsch, 1921 - Uperotus Guettard, 1770 - Zachsia Bulatoff & Rjabtschikoff, 1933 Shipworms greatly damage wooden hulls and marine piling, and have been the subject of much study to find methods to avoid their attacks. Copper sheathing was used on wooden ships in the latter part of the 18th century and afterwards, as a method of preventing damage by "teredo worms". The first historically documented use of copper sheathing came in experiments conducted by the British Royal Navy with HMS Alarm, which was coppered in 1761 and thoroughly inspected after a two-year cruise. In a letter from the Navy Board to the Admiralty dated 31 August 1763 it was written "that so long as copper plates can be kept upon the bottom, the planks will be thereby entirely secured from the effects of the worm." In the Netherlands the shipworm caused a crisis in the 18th century by attacking the timber that faced the sea dikes. Afterwards the dikes had to be faced with stone. Teredo has recently caused several minor collapses along the Hudson River waterfront in Hoboken, New Jersey, due to damage to underwater pilings. In the early 19th century, the behaviour and anatomy of the shipworm inspired the French engineer Marc Brunel. 
Based on his observations of how the shipworm's valves simultaneously enable it to tunnel through wood and protect it from being crushed by the swelling timber, Brunel designed an ingenious modular iron tunnelling framework, the very first tunnelling shield, which enabled workers to tunnel successfully through the highly unstable river bed beneath the Thames. The Thames Tunnel was the first successful large tunnel ever built under a navigable river. Henry David Thoreau's poem "Though All the Fates" pays homage to "New England's worm" which, in the poem, infests the hull of "[t]he vessel, though her masts be firm". In time, no matter what the ship carries or where she sails, the shipworm "her hulk shall bore,/[a]nd sink her in the Indian seas". In the Norse Saga of Erik the Red, Bjarni Herjólfsson, said to be the first European to discover the Americas, had his ship drift into the Irish Sea, where it was eaten up by shipworms. He allowed half the crew to escape in a smaller boat covered in seal tar, while he stayed behind to drown with his men. In Palawan and Aklan in the Philippines, the shipworm is called tamilok and is eaten as a delicacy there. It is prepared as kinilaw, that is, raw (cleaned) but marinated with vinegar or lime juice, chopped chili peppers and onions, a process very similar to ceviche. The taste of the flesh has been compared to a wide variety of foods, from milk to oysters. Similarly, the delicacy is harvested, sold, and eaten by local people in the mangrove forests of West Papua, Indonesia, and the central coastal peninsular regions of Thailand near Koh Phra Thong. - "This Is a Giant Shipworm. You May Wish It Had Stayed In Its Tube". The New York Times. 18 April 2017. - Live example seen on 19 April 2017 on the BBC's website. - "Historic shipwrecks could be preserved in the Antarctic". sciencenordic.com. Retrieved 2017-02-28. - Gilman, Sarah (December 5, 2016). "How a Ship-Sinking Clam Conquered the Ocean". Smithsonian. 
- Distel, D. L.; Morrill, W.; MacLaren-Toussaint, N.; Franks, D.; Waterbury, J. (2002). "Teredinibacter turnerae gen. nov., sp. nov., a dinitrogen-fixing, cellulolytic, endosymbiotic gamma-proteobacterium isolated from the gills of wood-boring molluscs (Bivalvia: Teredinidae)". International Journal of Systematic and Evolutionary Microbiology. 52 (6): 2261–2269. doi:10.1099/ijs.0.02184-0. ISSN 1466-5026. - Ponder, Winston F.; Lindberg, David R., eds. (2008). Phylogeny and Evolution of the Mollusca. University of California Press. ISBN 978-0-520-25092-5. - Yang, JC; Madupu, R; Durkin, AS; Ekborg, NA; Pedamallu, CS; Hostetler, JB; Radune, D; Toms, BS; Henrissat, B; Coutinho, PM; Schwarz, S; Field, L; Trindade-Silva, AE; Soares, CA; Elshahawi, S; Hanora, A; Schmidt, EW; Haygood, MG; Posfai, J; Benner, J; Madinger, C; Nove, J; Anton, B; Chaudhary, K; Foster, J; Holman, A; Kumar, S; Lessard, PA; Luyten, YA; Slatko, B; Wood, N; Wu, B; Teplitski, M; Mougous, JD; Ward, N; Eisen, JA; Badger, JH; Distel, DL (Jul 1, 2009). "The complete genome of Teredinibacter turnerae T7901: an intracellular endosymbiont of marine wood-boring bivalves (shipworms)". PLoS ONE. 4 (7): e6085. doi:10.1371/journal.pone.0006085. PMC . PMID 19568419. - WoRMS (2015). "Teredinidae Rafinesque, 1815". World Register of Marine Species. Retrieved 2015-02-14. - "Pier-eating monsters: Termites of the sea causing piers to collapse". Hudson Reporter. Retrieved 2009-09-29. - "Thames Tunnel Construction". Brunel Museum. Archived from the original on 2008-06-14. Retrieved 2008-08-31. - Henry D. Thoreau, "Though All the Fates". - "The Saga of Erik the Red - Icelandic Saga Database". Icelandic Saga Database. Retrieved 2017-07-04. - Jodelen O. Ortiz (May 2, 2007). "Tamilok A Palawan: Delicacy". Archived from the original on April 17, 2009. Retrieved 2009-04-30. - Borges, L. M. S., et al. (2014). Diversity, environmental requirements, and biogeography of bivalve wood-borers (Teredinidae) in European coastal waters. 
Frontiers in Zoology 11:13. - Powell A. W. B., New Zealand Mollusca, William Collins Publishers Ltd, Auckland, New Zealand 1979 ISBN 0-00-216906-1 - "Teredinidae". Integrated Taxonomic Information System. - Texts on Wikisource: - "Ship-worm". New International Encyclopedia. 1905. - Baumhauer, Eduard Hendrik von (September 1878). "The Teredo and its Depredations II". Popular Science Monthly. 13. - Baumhauer, Eduard Hendrik von (August 1878). "The Teredo and its Depredations I". Popular Science Monthly. 13. - "The Borers of the Sea". Popular Science Monthly. 3. May 1873.
A new study shows that iron-bearing rocks that formed at the ocean floor 3.2 billion years ago carry unmistakable evidence of oxygen. The only logical source for that oxygen is the earliest known example of photosynthesis by living organisms, say University of Wisconsin-Madison geoscientists. "Rock from 3.4 billion years ago showed that the ocean contained basically no free oxygen," says Clark Johnson, professor of geoscience at UW-Madison and a member of the NASA Astrobiology Institute. Aaron Satkoski, a scientist in the UW-Madison Geoscience Department, holds a sample sawn from a 3.23-billion-year-old rock core sample found in South Africa. The bands show different types of sediment falling to the ocean floor and solidifying into rock. The sample provides the earliest known evidence for oxygenic photosynthesis. Credit: David Tenenbaum/University of Wisconsin-Madison "Recent work has shown a small rise in oxygen at 3 billion years. The rocks we studied are 3.23 billion years old, and quite well preserved, and we believe they show definite signs for oxygen in the oceans much earlier than previous discoveries." The most reasonable candidate for liberating the oxygen found in the iron oxide is cyanobacteria, primitive photosynthetic organisms that lived in the ancient ocean. The earliest evidence for life now dates back 3.5 billion years, so oxygenic photosynthesis could have evolved relatively soon after life itself. Until recently, the conventional wisdom in geology held that oxygen was rare until the "great oxygenation event," 2.4 to 2.2 billion years ago. The rocks under study, called jasper, made of iron oxide and quartz, show regular striations caused by composition changes in the sediment that formed them. To detect oxygen, the UW-Madison scientists measured iron isotopes with a sophisticated mass spectrometer, hoping to determine how much oxygen was needed to form the iron oxides. 
"Iron oxides contained in the fine-grained, deep sediment that formed below the level of wave disturbance formed in the water with very little oxygen," says first author Aaron Satkoski, an assistant scientist in the Geoscience Department. But the grainier rock that formed from shallow, wave-stirred sediment looks rusty, and contains iron oxide that required much more oxygen to form. The visual evidence was supported by measurements of iron isotopes, Satkoski said. The study was funded by NASA and published in Earth and Planetary Science Letters. The samples, provided by University of Johannesburg collaborator Nicolas Beukes, were native to a geologically stable region in eastern South Africa. Because the samples came from a single drill core, the scientists cannot prove that photosynthesis was widespread at the time, but once it evolved, it probably spread. "There was evolutionary pressure to develop oxygenic photosynthesis," says Johnson. "Once you make cellular machinery that is complicated enough to do that, your energy supply is inexhaustible. You only need sun, water and carbon dioxide to live." Other organisms developed forms of photosynthesis that did not liberate oxygen, but they relied on minerals dissolved in hot groundwater -- a far less abundant source than ocean water, Johnson adds. And although oxygen was definitely present in the shallow ocean 3.2 billion years ago, the concentration was only estimated at about 0.1 percent of that found in today's oceans. Confirmation of the iron results came from studies of uranium and its decay products in the samples, says co-author Brian Beard, a senior scientist at UW-Madison. "Uranium is only soluble in the oxidized form, so the uranium in the sediment had to contain oxygen when the rock solidified." Measurements of lead formed from the radioactive decay of uranium showed that the uranium entered the rock sample 3.2 billion years ago. "This was an independent check that the uranium wasn't added recently. 
It's as old as the rock; it's original material," Beard says. "We are trying to define the age when oxygenic photosynthesis by bacteria started happening," he says. "Cyanobacteria could live in shallow water, doing photosynthesis, generating oxygen, but oxygen was not necessarily in the atmosphere or the deep ocean." However, photosynthesis was a nifty trick, and sooner or later it started to spread, Johnson says. "Once life gets oxygenic photosynthesis, the sky is the limit. There is no reason to expect that it would not go everywhere." --David Tenenbaum, 608-265-8549, firstname.lastname@example.org Clark Johnson | EurekAlert!
See other News & Comment articles from Evolution Wikipedia. Evolution is change in the heritable characteristics of biological populations over successive generations. Evolutionary processes give rise to biodiversity at every level of biological organisation, including the levels of species, individual organisms and molecules. Many scientists and philosophers of science have described evolution as fact and theory, a phrase which was used as the title of an article by paleontologist Stephen Jay Gould in 1981. In German usage, Evolution (from Latin evolvere, "to roll out, unwrap, develop") today refers primarily to biological evolution. In the Pokémon franchise, Evolution (進化) is the process by which one Pokémon, upon reaching a certain level, using a certain stone, learning a certain move or being traded, changes into a different kind of Pokémon.
In numerical analysis, Richardson extrapolation is a sequence acceleration method, used to improve the rate of convergence of a sequence. It is named after Lewis Fry Richardson, who introduced the technique in the early 20th century. In the words of Birkhoff and Rota, "its usefulness for practical computations can hardly be overestimated." Practical applications of Richardson extrapolation include Romberg integration, which applies Richardson extrapolation to the trapezoid rule, and the Bulirsch–Stoer algorithm for solving ordinary differential equations. Example of Richardson extrapolation Suppose that we wish to approximate $A^*$, and we have a method $A(h)$ that depends on a small parameter $h$ in such a way that $A(h) = A^* + C h^n + o(h^n)$. Let's define a new function $R(h,t) := \frac{t^n A(h/t) - A(h)}{t^n - 1}$, where $h$ and $h/t$ are two distinct step sizes. $R(h,t)$ is called the Richardson extrapolation of $A(h)$, and has a higher-order error estimate $o(h^n)$ compared to $A(h)$. Very often, it is much easier to obtain a given precision by using $R(h)$ rather than $A(h')$ with a much smaller $h'$, which can cause problems due to limited precision (rounding errors) and/or due to the increasing number of calculations needed (see examples below). Let $A(h)$ be an approximation of $A^*$ (exact value) that depends on a positive step size h with an error formula of the form $A^* = A(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + \cdots$ where the $a_i$ are unknown constants and the $k_i$ are known constants such that $h^{k_i} > h^{k_{i+1}}$. $k_0$ is the leading-order step-size behavior of the truncation error as $A^* = A(h) + O(h^{k_0})$. Using the step sizes $h$ and $h/t$ for some $t$, the two formulas for $A^*$ are: $A^* = A(h) + a_0 h^{k_0} + O(h^{k_1})$ and $A^* = A(h/t) + a_0 (h/t)^{k_0} + O(h^{k_1})$. Multiplying the second equation by $t^{k_0}$ and subtracting the first equation gives $(t^{k_0} - 1) A^* = t^{k_0} A(h/t) - A(h) + O(h^{k_1})$, which can be solved for $A^*$ to give $A^* = \frac{t^{k_0} A(h/t) - A(h)}{t^{k_0} - 1} + O(h^{k_1})$. Therefore, using $R(h,t) := \frac{t^{k_0} A(h/t) - A(h)}{t^{k_0} - 1}$, the truncation error has been reduced to $O(h^{k_1})$. This is in contrast to $A(h)$, where the truncation error is $O(h^{k_0})$ for the same step size. By this process, we have achieved a better approximation of $A^*$ by subtracting the largest term in the error, which was $O(h^{k_0})$. 
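As a quick numerical illustration of this one-step extrapolation (a sketch I am adding; the test function and step size are arbitrary choices, not from the article): for the forward-difference derivative the leading error exponent is $k_0 = 1$, so with $t = 2$ the combination $2A(h/2) - A(h)$ cancels the leading error term.

```python
import math

def A(h, f=math.sin, x0=1.0):
    """Forward-difference derivative: error is a0*h + O(h^2), so k0 = 1."""
    return (f(x0 + h) - f(x0)) / h

h = 0.1
R = 2 * A(h / 2) - A(h)   # Richardson extrapolation with t = 2, k0 = 1

exact = math.cos(1.0)
print(abs(A(h) - exact))  # ~4.3e-2  (first-order error)
print(abs(R - exact))     # ~4.5e-4  (second-order error)
```

A single extrapolation step reduces the error by roughly two orders of magnitude here, at the cost of just one extra evaluation of $A$.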
This process can be repeated to remove more error terms to get even better approximations. A general recurrence relation, beginning with $A_0(h) = A(h)$, can be defined for the approximations by $A_{i+1}(h) = \frac{t^{k_i} A_i(h/t) - A_i(h)}{t^{k_i} - 1}$. The Richardson extrapolation can be considered as a linear sequence transformation. Additionally, the general formula can be used to estimate $k_0$ (the leading-order step-size behavior of the truncation error) when neither its value nor $A^*$ (the exact value) is known a priori. Such a technique can be useful for quantifying an unknown rate of convergence. Given approximations of $A^*$ from three distinct step sizes $h$, $h/t$, and $h/s$, the exact relationship $A^* = \frac{t^{k_0} A(h/t) - A(h)}{t^{k_0} - 1} + O(h^{k_1}) = \frac{s^{k_0} A(h/s) - A(h)}{s^{k_0} - 1} + O(h^{k_1})$ yields the approximate relationship $\frac{t^{k_0} A(h/t) - A(h)}{t^{k_0} - 1} \approx \frac{s^{k_0} A(h/s) - A(h)}{s^{k_0} - 1}$ (please note that the notation here may cause a bit of confusion: the two $O$ terms appearing in the equation above only indicate the leading-order step-size behavior, but their explicit forms are different, and hence cancelling the two $O$ terms is only approximately valid), which can be solved numerically to estimate $k_0$. Using Taylor's theorem about $h = 0$, $f(x+h) = f(x) + f'(x)h + \frac{f''(x)}{2!}h^2 + \cdots$, the derivative of $f(x)$ is given by $f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{f''(x)}{2!}h - \cdots$. If the initial approximations of the derivative are chosen to be $A_0(h) = \frac{f(x+h) - f(x)}{h}$, then $k_i = i+1$. For $t = 2$, the first formula extrapolated for $A^*$ would be $A^* = 2A_0\left(\frac{h}{2}\right) - A_0(h) + O(h^2)$. For the new approximation $A_1(h) = 2A_0\left(\frac{h}{2}\right) - A_0(h)$ we can extrapolate again to obtain $A^* = \frac{4A_1\left(\frac{h}{2}\right) - A_1(h)}{3} + O(h^3)$. One can go on recursively in a similar fashion for higher-order corrections. Example pseudocode for Richardson extrapolation The following pseudocode in MATLAB style demonstrates Richardson extrapolation to help solve the ODE $y'(t) = -y^2$, $y(0) = 1$, with the trapezoidal method. In this example we halve the step size $h$ each iteration, so in the discussion above we would have $t = 2$. The error of the trapezoidal method can be expressed in terms of odd powers of $h$ per step, so that the error over multiple steps can be expressed in even powers of $h$; this leads us to raise $t = 2$ to the second power and to take powers of $4 = 2^2 = t^2$ in the pseudocode. We want to find the value of $y(5)$, which has the exact value $\frac{1}{5+1} = \frac{1}{6} \approx 0.1666\ldots$, since the exact solution of the ODE is $y(t) = \frac{1}{1+t}$. 
This pseudocode assumes that a function called Trapezoidal(f, tStart, tEnd, h, y0) exists which attempts to compute y(tEnd) by performing the trapezoidal method on the function f, with starting point tStart, step size h and initial value y0. Note that starting with too small an initial step size can potentially introduce error into the final solution. Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then to allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance.

tStart = 0          %Starting time
tEnd = 5            %Ending time
f = -y^2            %The derivative of y, so y' = f(t, y(t)) = -y^2
                    % The solution to this ODE is y = 1/(1 + t)
y0 = 1              %The initial position (i.e. y0 = y(tStart) = y(0) = 1)
tolerance = 10^-11  %10 digit accuracy is desired
maxRows = 20        %Don't allow the iteration to continue indefinitely
initialH = tEnd - tStart  %Pick an initial step size (tEnd - tStart, so that h is positive)
haveWeFoundSolution = false %Were we able to find the solution to within the desired tolerance? not yet.

h = initialH

%Create a 2D matrix of size maxRows by maxRows to hold the Richardson extrapolates
%Note that this will be a lower triangular matrix and that at most two rows are actually
% needed at any time in the computation.
A = zeroMatrix(maxRows, maxRows)

%Compute the top left element of the matrix
A(1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)

%Each row of the matrix requires one call to Trapezoidal
%This loop starts by filling the second row of the matrix, since the first row was computed above
for i = 1 : maxRows - 1  %Starting at i = 1, iterate at most maxRows - 1 times
    h = h/2  %Halve the previous value of h since this is the start of a new row

    %Call the Trapezoidal function with this new smaller step size
    A(i + 1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)

    for j = 1 : i  %Go across the row until the diagonal is reached
        %Use the value just computed (i.e. A(i + 1, j)) and the element from the
        % row above it (i.e. 
A(i, j)) to compute the next Richardson extrapolate
        A(i + 1, j + 1) = ((4^j).*A(i + 1, j) - A(i, j))/(4^j - 1);
    end

    %After leaving the above inner loop, the diagonal element of row i + 1 has been computed
    % This diagonal element is the latest Richardson extrapolate to be computed
    %The difference between this extrapolate and the last extrapolate of row i is a good
    % indication of the error
    if(absoluteValue(A(i + 1, i + 1) - A(i, i)) < tolerance)  %If the result is within tolerance
        print("y(5) = ", A(i + 1, i + 1))  %Display the result of the Richardson extrapolation
        haveWeFoundSolution = true
        break  %Done, so leave the loop
    end
end

if(haveWeFoundSolution == false)  %If we weren't able to find a solution to within the desired tolerance
    print("Warning: Not able to find solution to within the desired tolerance of ", tolerance)
    print("The last computed extrapolate was ", A(maxRows, maxRows))
end

- Richardson, L. F. (1911). "The approximate arithmetical solution by finite differences of physical problems including differential equations, with an application to the stresses in a masonry dam". Philosophical Transactions of the Royal Society A. 210 (459–470): 307–357. doi:10.1098/rsta.1911.0009.
- Richardson, L. F.; Gaunt, J. A. (1927). "The deferred approach to the limit". Philosophical Transactions of the Royal Society A. 226 (636–646): 299–349. doi:10.1098/rsta.1927.0008.
- Birkhoff, Garrett; Rota, Gian-Carlo (1978). Ordinary differential equations (3rd ed.), p. 126. John Wiley and Sons. ISBN 0-471-07411-X. OCLC 4379402.
- Brezinski, C.; Redivo Zaglia, M. (1991). Extrapolation Methods: Theory and Practice. North-Holland.
- Dimov, Ivan; Zlatev, Zahari; Faragó, István; Havasi, Ágnes (2017). Richardson Extrapolation: Practical Aspects and Applications. Walter de Gruyter. ISBN 9783110533002.
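For a runnable cross-check of the pseudocode above, here is a Python sketch of the same scheme. All names here are my own, and two implementation choices are assumptions rather than part of the article: the implicit trapezoidal step is solved by fixed-point iteration, which only converges for sufficiently small h, so this sketch starts at h = (tEnd - tStart)/4 instead of the full interval.

```python
def trapezoidal(f, t_start, t_end, h, y0):
    """Implicit trapezoidal rule y_{n+1} = y_n + h/2*(f(t_n,y_n) + f(t_{n+1},y_{n+1})),
    with the implicit equation solved by fixed-point iteration."""
    n = round((t_end - t_start) / h)
    t, y = t_start, y0
    for _ in range(n):
        fy = f(t, y)
        y_next = y  # initial guess for the implicit solve
        for _ in range(100):
            y_new = y + (h / 2) * (fy + f(t + h, y_next))
            if abs(y_new - y_next) < 1e-14:
                y_next = y_new
                break
            y_next = y_new
        t, y = t + h, y_next
    return y

def richardson_ode(f, t_start, t_end, y0, tol=1e-11, max_rows=14):
    """Builds the lower-triangular extrapolation tableau. The trapezoidal error
    is in even powers of h, so with t = 2 each level divides by 4**j - 1."""
    h = (t_end - t_start) / 4  # the fixed-point solve needs a modest starting h
    A = [[trapezoidal(f, t_start, t_end, h, y0)]]
    for i in range(1, max_rows):
        h /= 2
        row = [trapezoidal(f, t_start, t_end, h, y0)]
        for j in range(1, i + 1):
            row.append((4**j * row[j - 1] - A[i - 1][j - 1]) / (4**j - 1))
        A.append(row)
        if abs(row[i] - A[i - 1][i - 1]) < tol:  # diagonal difference estimates the error
            return row[i]
    return A[-1][-1]

y5 = richardson_ode(lambda t, y: -y**2, 0.0, 5.0, 1.0)
print(y5)  # close to 1/6 = 0.16666..., the exact value of y(5)
```

Only a handful of tableau rows are needed before the diagonal differences fall below the tolerance, whereas reaching the same accuracy with the trapezoidal rule alone would require a far smaller step size.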
Using Experimental Protein Data to Construct a Configuration Space for Protein Folding Simulation Using Motion Planning Tech. Hans Dulimarta, firstname.lastname@example.org

Deriving the structure of a protein from only its DNA sequence is theoretically possible, but the computational demands are so enormous that it’s impossible to complete a simulation of any average-sized protein in one’s lifetime. There is, therefore, a great deal of effort being expended on techniques that can shorten the time it takes to do a simulation. One technique being explored is the application of Probabilistic Road Map techniques, which were developed to solve the problem of moving a robot arm with multiple degrees of freedom from one configuration to another, to protein folding, using the insight that a protein is a chain of amino acids with fixed-length links between them. To date, the technique has been used to explore known proteins, starting from a known configuration, varying it, and seeing how quickly and closely the simulation brings you back to your starting point. Unfortunately, this is not terribly useful in the case of proteins with unknown structures, but finding a way to constrain the search space is difficult. A simple method is to use aggregate data about known protein structures to construct a search space. That search space imposes some geometric constraints on the possible paths and opens up several possible approaches. The most interesting is the possibility of doing multiple passes within the search space, with each one being more granular and based on the results of the prior passes. To do these experiments, a code base from a prior experiment by Apaydin et al. (2002) is being modified so that the experimental data and multiple passes can both be taken into account.

Bracey, Eric, "Using Experimental Protein Data to Construct a Configuration Space for Protein Folding Simulation Using Motion Planning Tech." (2006). Technical Library. 60.
Pandemonium! Scientists are Puzzled by Motion of Pluto's Moons - December 08, 2015

Most of the familiar moons in the solar system orbit their planets calmly. Normally, one side of the moon always faces the host world, the same way the same side of a horse on a carousel always faces the center. This “synchronous rotation,” in the case of moons, is due to the gravitational tug of the central planet. But Pluto’s small moons seem to break those rules and more, a study has found. In the months leading up to the July 14 flyby of Pluto by NASA’s New Horizons spacecraft, astronomers—who were searching for any new satellites around Pluto—also had a chance to carefully measure the spin rates of the known ones. The investigations, they said, revealed some startling behavior for the four tiny outer moons: Styx, Nix, Kerberos and Hydra. They were spinning wildly. “These are four of the strangest moons in the Solar System,” said Mark Showalter, Senior Research Scientist at the SETI Institute in Mountain View, Calif. and a co-investigator on the New Horizons mission. One moon, Nix, is tilted on its axis and spinning backwards, he said. The outermost moon, Hydra, is spinning extraordinarily fast, turning 89 times every time it circles the dwarf planet. “If Hydra were spinning much faster, material would fly off its surface,” the way dust would fly off a spinning top, Showalter said. He suspects that Charon, Pluto’s large inner moon, is responsible for the odd behavior. Recently, he and collaborator Douglas Hamilton of the University of Maryland predicted that Charon’s strong gravitational force would disrupt synchronous rotation, causing the small moons to tumble chaotically. In the fields of physics and mathematics, “chaos” is a technical term indicating unpredictable behavior. But chaos alone—while describing the motion of these moons—is not an explanation.
“There’s clearly something fundamental about the dynamics of the system that we do not understand,” Showalter said. “We expected chaos, but this is pandemonium.” Source : www.world-science.net
Hector is the name given to a cumulonimbus, or thundercloud, that forms regularly nearly every afternoon on the Tiwi Islands, Northern Territory, Australia, from approximately September to March each year. Hector, or sometimes "Hector the Convector", is known as one of the world's most consistently large thunderstorms, reaching heights of approximately 20 kilometres (66,000 ft). Named by pilots during the Second World War, the recurring position of the thunderstorm made it a navigational beacon for pilots and mariners in the region. Hector is caused primarily by a collision of several sea breeze boundaries across the Tiwi Islands and is known for its consistency and intensity. Lightning rates and updraft speeds are notable aspects of this thunderstorm and during the 1990s National Geographic Magazine published a comprehensive study of the storm with pictures of damaged trees and details of updraft speeds and references to tornadic events. Since the late 1980s the thunderstorm has been the subject of many meteorological studies, many centered on Hector itself but also utilising the consistency of the storm cell to study other aspects of thunderstorms and lightning.
- The cloud called Hector. The Cloud Appreciation Society. Retrieved on 2010-11-30.
- P. T. May; et al. (2009). "Aerosol and thermodynamic effects on tropical cloud systems during TWPICE and ACTIVE" (PDF). Atmos. Chem. Phys. 9: 15–24. doi:10.5194/acp-9-15-2009.
- Beringer, Jason; Tapper, Nigel J.; Keenan, Tom D. (30 June 2001). "Evolution of maritime continent thunderstorms under varying meteorological conditions over the Tiwi Islands" (PDF). International Journal of Climatology. 21 (8): 1021–1036. Bibcode:2001IJCli..21.1021B. doi:10.1002/joc.622.
- Crook, N. Andrew (1 June 2001). "Understanding Hector: The Dynamics of Island Thunderstorms". Monthly Weather Review. 129 (6): 1550–1563. Bibcode:2001MWRv..129.1550C. doi:10.1175/1520-0493(2001)129<1550:UHTDOI>2.0.CO;2.
- Barker, Anne (14 November 2005). "Researchers to investigate impact of storms". The World Today. Australian Broadcasting Corporation. Retrieved 11 July 2011.
- Casben, Liv (14 February 2006). "Scientists complete storm study". PM. Australian Broadcasting Corporation. Retrieved 11 July 2011.
- "Our changing atmosphere". Planet Earth Online. Natural Environment Research Council. 23 April 2007. Archived from the original on 6 October 2011. Retrieved 11 July 2011.
A population estimate of fruit bats was carried out in the Kampala Bat Valley roost. The model used was a single-stage systematic sampling of unequal primary units (trees). The trees were first listed in a 'serpentine manner' with neighbouring trees having contiguous serial numbers. After a random start, every seventeenth tree was selected, so that a sample of fourteen out of a total of 238 trees was counted. The exercise was carried out over 3 months, January, February and March, in 1979. Information was gathered on the number of branches with bats, the number of bat clusters and cluster size. From the analysis of the data, the following monthly averages were obtained: total number of bats in the colony = 70,388; average number of bats on each tree = 310; average number of clusters per branch = 4; average cluster size = 7.8. Measures of reliability of the estimates were made. The implications of these results and the conservation of the habitat are discussed.
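The sampling design described above (a random start, then every seventeenth tree, with the sample mean expanded to the whole roost) can be sketched in a few lines of Python. The per-tree counts below are hypothetical placeholders, not the study's data:

```python
import random

def systematic_sample_indices(n_total, step, rng):
    """Pick every `step`-th tree after a random start (1-based serial numbers)."""
    start = rng.randrange(1, step + 1)
    return list(range(start, n_total + 1, step))

def estimate_total(counts_on_sampled_trees, n_total, n_sampled):
    """Expand the sample mean to the whole roost: N * (sample mean)."""
    mean = sum(counts_on_sampled_trees) / n_sampled
    return n_total * mean

rng = random.Random(0)
trees = systematic_sample_indices(238, 17, rng)
print(len(trees))  # 14 trees, whatever the random start: matches the study's sample size
# Hypothetical per-tree bat counts for the 14 sampled trees:
counts = [310] * len(trees)
print(estimate_total(counts, 238, len(trees)))  # 238 * 310 = 73780.0
```

With 238 trees and a step of 17, any random start in 1..17 yields exactly 14 sampled trees, which is why the study's single random start gives a fixed sample size.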
The group that oversees the Climate Community and Biodiversity (CCB) Standards has more than 100 projects in its pipeline, and more than half of these are designed to generate carbon credits by reducing emissions from deforestation and forest degradation (REDD). In Copenhagen, they’ll be introducing the new “REDD + Social and Environmental” Standards.

27 November 2009 | The run-up to the United Nations Climate Change Conference that kicks off next week in Copenhagen has been anything but smooth – underscoring enduring North-South and transatlantic divides, especially regarding emission-reduction commitments. Most observers, however, expect Copenhagen to yield some degree of agreement on how best to account for carbon offsets generated through reduced emissions from deforestation and forest degradation (REDD) after a series of tedious talks over the past six months. Yet even while REDD advances, some discussion participants continue to raise serious concerns about the social, environmental and equity impacts of a global REDD agenda. At Copenhagen, CARE International and the Climate, Community and Biodiversity Alliance (CCBA) hope to answer these concerns head-on when they present the REDD + Social and Environmental Standards (REDD + SE) at a side event hosted by the government of Nepal. REDD + SE aims to help governments institute equitable REDD programs on a national level. Begun just this year, the joint CARE/CCBA initiative has also been consulting with a few national governments on testing the standards on a national scale and aims to finalize the new standards in March 2010.

Yet Another Standard?

The REDD + SE Standards build from – and, in fact, resemble – the CCBA’s flagship Climate, Community and Biodiversity Project Design Standards (CCB Standards), whose purpose is to secure positive co-benefits for conservation projects.
As REDD scales up, however, governments need guidance on how to equitably conserve forests and other conservation settings on a magnitude for which CCB is not tailored. REDD + SE will try to fill in this gap. At the same time, conservationists and investors interested in REDD projects have not waited for the national vagaries to be resolved. In fact, REDD advocates have broken ground on many projects since the 2007 Climate Change Conference in Bali, Indonesia, and the follow-up 2008 Climate Change Conference in Poznan, Poland opened the door to REDD credits. Many believe that the success or failure of REDD projects will turn on how the co-benefits are addressed, and the CCB Standards are the preeminent measure for these aspects.

CCB Standards: Co-Benefits Attracting Investors

The CCB Standards were designed to measure the social and biodiversity impacts of conservation projects with the same vigor that earlier standards set out to measure carbon sequestration. Today, they serve as the template for working social and equity concerns into land-based carbon mitigation projects. In fact, a recent survey of carbon market buyers and investors indicates that these participants have greater confidence and interest in forest carbon projects that have a CCB label matched with another carbon-focused certificate. The standards have attracted much attention since Bali, because they can be so readily applied to REDD projects. “In the last two years, we’ve seen a rapid uptake of the CCB Standards,” says conservation biologist and CCBA director Joanna Durbin, whose conservation experience includes almost 20 years of community conservation and sustainable development in Madagascar.

The Pipeline: Turning REDD

The CCBA launched the first edition of CCB in 2005, and now it has more than 100 projects in the pipeline. Durbin says more than half of these are REDD-based – although just 13 projects have been fully validated so far, and just three of these are REDD projects.
“Project developers, buyers, and investors use the CCB Standards to demonstrate a high-quality project,” she explains. “They haven’t just patched together an emissions-reduction project, but have put together a project based on deep knowledge of people and place, ecological knowledge, and strong relationships and partnerships to create a win-win situation that will bring a sustained flow of real benefits.”

Advantage to Investors

Sydney, Australia-based investor Brer Adams agrees with Durbin. He works on carbon-related investments for Macquarie Capital’s Utilities and Climate Change Team, and his interest in REDD was spurred on by the creation of the Bali Action Plan, which he feels has boosted the prospect of an eventual compliance market for forest carbon. He says that his investment group’s teaming up with the conservation NGO Fauna and Flora International and their joint work bringing CCB Standards into REDD projects has helped quite a bit in appraising REDD investments. “We bring a due diligence process to understand the drivers of deforestation and what could be achieved to displace these drivers,” he says. “CCB Standards are a key part of that due diligence process, right from the start, as they helped us to understand community support for a project.”

The Three Advantages

Adams lists three major reasons why utilizing CCB Standards and certification is so helpful: “First, it is important to have a good framework for the design of forest-carbon projects right from the start to make sure community and biodiversity impacts are front and center,” he says. “Second, as we move to compliance markets for carbon, it makes sense to design projects to meet high international standards in anticipation of what may be required in future markets. Third, we believe that markets will favor projects that have co-benefits, which the CCB process highlights.”
The CCB accreditation process is a hefty, multi-step process. Beginning with a project developer’s identifying land and coming up with an idea for emissions-reduction potential, the CCB process then entails a feasibility study, followed by project design informed by the CCB Standards with their specification of climate, community, and biodiversity measuring criteria. The next step is contracting a CCBA-approved independent auditor, which will then entail community outreach, the auditor’s field visit, and then a comment period for the auditor’s report. A period of corrections or redesign then occurs, followed by the final CCB statement on conformance. A very important part of the process occurs right before the field visit by the auditor, and that is the requirement for notifying the public for input and comment. For this step, the onus is on the project developer, and Durbin makes clear that “the CCB Standards require the project developer to publicize the project documents to the local communities and to facilitate their submission of comments to the CCBA and the auditor.”

The Amazonian Precedent

The Juma Sustainable Development Reserve Project in Amazonas, Brazil, is a very vivid example of a CCB REDD-validated project with co-benefits as a design priority. Awarded the CCB Standards’ highest accreditation, the Gold Rating, in 2008, the Juma project will prevent deforestation on approximately 366,000 hectares of tropical rainforest, with an expected mitigation of 3.6 million tons of CO2 emissions from 2006 to 2016, the first crediting period.
According to conservation biologist and Juma project director Gabriel Ribenboim, the Juma Project “has a strong component of local involvement and this is the primary element for the project success.” He adds that the CCB Standards have greatly facilitated the project, and the CCB Standards set was chosen “because of its strong focus on quantifying the co-benefits of socio-economic and biodiversity factors.” Administered by the State of Amazonas-supported Amazonas Sustainable Foundation (FAS) and receiving a mix of public and private financial support, the Juma REDD project secures avoided deforestation, carbon mitigation gains, and co-benefits by means of the Bolsa Floresta ecosystem services payment program, a statewide program. In the Juma project, Bolsa Floresta distributes ecosystem services financial payments at the family, association, income, and social level. The Bolsa Floresta Program offers “a practical way to reward traditional communities for their commitment to zero deforestation and for their roles in the conservation of the flora and fauna, river, lakes and creeks,” says Ribenboim. “In each type, there is a beneficiary, a value, a form of payment, and a resource use.” So, for example, with the “association” focus, the Bolsa Floresta program for the Juma project supports association work on the sustainable marketing of nuts, oils, fish, and seeds from the area. In the social area, the Juma project has led to the purchase or construction of school and teacher housing, health and communication centers, four commercial boats and two saw mills.

REDD + SE: The Next Level

With the clear attractions of focusing on REDD co-benefits at the project level, it didn’t take long to start looking at these aspects on a national level. In fact it was necessary. Many had deep reservations about the risks of a large-scale REDD program, with concerns that a few good projects would not prevent systematic abuse and inequities as to land use.
Even environmental organizations had serious differences as to whether to endorse REDD in the UNFCCC process. Given all this, the CCBA joined with CARE International to initiate REDD + SE to help governments set up equitable REDD programs on the national scale. Coming from her experience with the CCB Standards, the CCBA’s Durbin also co-leads the REDD + SE initiative with CARE’s Phil Franks. “We thought it helpful for governments to have a mechanism to help them show that their national level REDD program is following best practices for consultation for forest carbon,” Durbin explains. She adds that throughout this REDD + SE process, CCBA and CARE have been very diligent to make the process transparent and bring in many civil society representatives, including from indigenous and local peoples groups and social and environmental NGOs, who have a stake in REDD.

How REDD + SE Works

Similar in structure to the CCB Standards, the REDD + SE Standards first lay down overarching principles, followed by more specific directives. The current Draft REDD + Social & Environmental Standards has eight principles, such as “Rights to land, territories, and resources are recognized and respected” (Principle One) and “Biodiversity and ecosystem services are maintained and enhanced” (Principle Five). Each principle is complemented by instructive criteria, and a specific “Framework for indicators” which offers methods for a principle’s adherence.

Development to Date

Since its first workshop in Copenhagen in May 2009, the joint CCBA/CARE-led initiative has issued two sets of draft principles for comment (with one comment period ending November 30, 2009), and held consultations in Ecuador, Nepal and Tanzania, the countries that will be test cases for the standards on a nationwide basis. After their presentation in Copenhagen on Wednesday December 9, the CCBA/CARE initiative will offer the REDD + SE standards for another comment period.
Also, the CCBA/CARE initiative is exploring the possibility of adding two more pilot countries before the standards are finalized, with the current finalization goal being March 2010. While clearly non-binding, the REDD + SE Standards may turn out to be very influential in how nations tailor their REDD programs. “At the moment these are formulated as standards that will be adopted voluntarily by the REDD country,” Durbin points out. “At some point they could be adopted or used to inform a regulatory framework.”

Future Synergies, Future Variations

The REDD + SE Standards and the CCB standards pertain to different scales, and they cover different gaps within the REDD agenda. Certainly with the CCB standards, investors and project developers have already expressed confidence that these standards play an important role in generating the diverse returns that they hope to get from REDD. Similarly, the possibility of carbon-mitigating REDD projects undercut by festering problems on the national level points to a crucial role for the REDD + SE Standards. Although different in scale, the CCB and REDD + SE Standards should intersect in the future as national contexts and individual projects fit together. Early on, it is likely that the two standards would synergize attractions, especially for investments. The two standards already independently carry extra promise for securing financing. For example, for the REDD + SE Standards, Durbin says that by fostering transparency, attention to rights, and delivering co-benefits, the REDD + SE Standards could attract “international support that might encourage preferential access to funds, premiums for REDD credits that also deliver co-benefits, and even co-financing for the co-benefits.” On the project level, Adams points to the parallel experience of the CDM market. “What we do know from the CDM market is that the market can be discretionary in what it pays in the carbon market,” explains Adams.
“We believe that in many cases there will be a bias for projects with additional benefits. As investors in forest carbon, we believe that social and environmental benefits can command a premium.” Beyond the question of premiums, it is clear that, if there is progress on the national and international questions that pertain to REDD, REDD projects and programs with secure co-benefits look to have a good future. For example, Ribenboim points out that the Bolsa Floresta program has a goal to operate in over 20 protected areas by 2012 with direct benefits for approximately 10,000 families. “We are committed to showing the world that REDD projects are real, measurable and verifiable and fully capable of generating co-benefits for the local community and the environment.” Similarly, Adams reflects that “Until now there hasn’t been an economic incentive to reduce deforestation. We think that is about to change.” Bringing in some cross-sector sentiments, Adams adds, after all, “forests are the eco-infrastructure of our planet.” Richard Blaustein is a freelance journalist based in Washington, DC. He can be reached at firstname.lastname@example.org. Please see our Reprint Guidelines for details on republishing our articles.
The behavior of interstitial Mg atoms at an edge dislocation is studied in the wurtzite-type GaN crystal by molecular dynamics (MD) simulation. Parameters for a two-body interatomic potential are determined by the Hartree-Fock ab initio method. First, an edge dislocation extending to the 0001 direction is generated in an MD basic cell composed of about 11,000 atoms. Second, Mg atoms are placed at substitutional and interstitial positions in the MD basic cell, and the Mg atoms are traced. It is found that the diffusivity of Mg atoms at a dislocation is enhanced, along the dislocation. At 1000 K, the diffusivity of interstitial Mg atoms inside the dislocation core is approximately three orders of magnitude larger than that of interstitial Mg atoms located outside the dislocation. The enhanced diffusion along the dislocation originates from unbalanced atomic forces between the Mg atom and surrounding atoms.
This section explains creating a custom converter. When the user inputs a value into a component, it is a simple string value. You may need to use this value as a different kind of object, like Boolean, Date etc. Converters help in this conversion. The JSF framework provides many converters, like the Boolean converter, Byte converter, Number converter etc. These converters convert values into the appropriate type of object and also return them to the page in the appropriate format. JSF's flexible architecture gives you the freedom to create your own converters, which can also be used to check that a value is in the correct format. For example, in our application the user is provided an input box to fill in a time in "hours:minutes:seconds" format. This string is converted to an object by the converter and converted back to a string when it needs to be displayed in the web page. If the user doesn't fill in the time in the correct format, an error message is displayed showing that the conversion could not be completed.

To create a custom converter you need to implement the "Converter" interface of the javax.faces.convert package in your class.

Steps to follow: The steps below have been implemented in our application "customconverter", which will help you to understand the process of creating a custom converter. Just go through the following steps:

Step 1: Create a class "hr_mi_se_Converter" that implements the Converter interface and its two abstract methods, "getAsObject()" and "getAsString()". Save this file as "hr_mi_se_Converter.java" in the WEB-INF/classes directory of your application. In this class, "param" represents the string provided by the user in the component. This string is passed to the getAsObject() method, where we can manipulate it as required and return the appropriate object. The "obj" parameter passed to the getAsString() method represents the object converted in the previous method. This method is called when the value is displayed in the page.
So return the appropriate string by manipulating this object. If there is any problem in this process, we can handle it with a try and catch block. An error message is shown on the current page if the conversion is not successful.

Step 2: Configure the configuration file (faces-config.xml). Open this file and add the following code. Here <converter-id> gives an ID to the converter that will be used in our page, and <converter-class> specifies the implementing class.

Step 3: Now we can code the page "page.jsp", where the <f:converter> tag is used to associate the converter with the component using the converterId attribute. This attribute is given the ID of the converter that we specified in the configuration file.

Output: When the user enters invalid input, an error message is displayed on the current page, as shown below; otherwise the application processes the value according to its logic.
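The converter class itself is not listed above. The following framework-free Java sketch shows the conversion logic that the two Converter methods would wrap; the class name TimeConversionSketch and the choice to represent the time as a total number of seconds are illustrative assumptions, and in the real hr_mi_se_Converter these bodies would live in getAsObject() and getAsString(), with the failure case throwing a javax.faces.convert.ConverterException instead of IllegalArgumentException.

```java
// Framework-free sketch of the "hours:minutes:seconds" conversion logic.
// Class/method names are illustrative, not part of the tutorial's code.
public class TimeConversionSketch {

    // What getAsObject() would do: parse "hours:minutes:seconds" into an
    // object (here, the total number of seconds as an Integer).
    static Integer toSeconds(String param) {
        String[] parts = param.split(":");
        if (parts.length != 3) {
            // In the real converter this would become a ConverterException,
            // which JSF turns into the error message shown on the page.
            throw new IllegalArgumentException("Time must be in hours:minutes:seconds format");
        }
        int h = Integer.parseInt(parts[0]);
        int m = Integer.parseInt(parts[1]);
        int s = Integer.parseInt(parts[2]);
        return h * 3600 + m * 60 + s;
    }

    // What getAsString() would do: format the object back for display.
    static String toDisplay(Integer total) {
        int t = total.intValue();
        return String.format("%02d:%02d:%02d", t / 3600, (t % 3600) / 60, t % 60);
    }

    public static void main(String[] args) {
        System.out.println(toSeconds("01:02:03")); // 3723
        System.out.println(toDisplay(3723));       // 01:02:03
    }
}
```

In faces-config.xml, the <converter> entry then simply maps the chosen <converter-id> to this class via <converter-class>, so the page can reference it through the converterId attribute of <f:converter>.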
Found in nature only on the island nation of Madagascar, off Africa’s southeastern coast, lemurs and their close relatives the lorises represent the sister lineage to all other primates. And that makes lemurs key to understanding what distinguishes us and the rest of our primate cousins from all other animals, according to Julie Horvath, a post-doctoral researcher in the IGSP. “If we find a trait or characteristic shared between lemurs and other primates, it can tell us what is or isn’t primate-specific and when those traits arose,” said Horvath, who works in the laboratory of IGSP director Huntington Willard. The new “phylogenomic toolkit” the researchers developed will also play into conservation efforts aimed to save the critically endangered lemurs, by helping to define the number of existing species, said David Weisrock, a post-doctoral researcher working with Duke Lemur Center Director Anne Yoder. The researchers report their findings in the March 1 issue of Genome Research. Scientists uncover evolutionary relationships among species based on similarities and differences in their genetic codes. The increasing number of fully sequenced genomes available for major evolutionary groups has allowed resolution of relationships that had been considered unmanageable before. But except for humans’ close evolutionary ties to chimpanzees, many of the relationships among other apes, monkeys and pre-monkeys called prosimians have remained somewhat murky, according to Horvath. To find out where Madagascar’s lemurs fit in, the Duke team first needed to develop the tools for comparing sequences from the many lemur species to one another, and to those of other primates including humans. The researchers identified stretches of DNA sequence held in common between the genomes of the human, the ringtailed lemur and the mouse lemur. These "conserved sequences" served as primers, allowing them to sample comparable bits of sequence across the genomes of the various primate species. 
Their analysis confirmed that the first to branch off from the rest of the lemurs, some 66 million years ago, was the aye-aye--a nocturnal primate that taps on trees with its fingers to listen for insects inside, making it Madagascar’s version of a woodpecker. They also resolved the relationships among species within the remaining four evolutionary lineages, which includes a diverse cast of characters: the sifakas, named for the hissing “shee-fak” sound they make; the sportive lemurs, which are strictly nocturnal; the mouse lemurs, the smallest of all living primates; and the many so-called “true lemurs,” including the blue-eyed black lemur (one of only three blue-eyed primates in the world) and the ringtailed lemur, which is often found in zoos. “By throwing this much data at the problem, we have absolutely confirmed, beyond any statistical doubt, that the spectacular array of lemurs all descended from a single ancestral species,” said Yoder, noting that lemurs account for about 20 percent of primate species and live on less than one percent of the earth’s surface. “It further highlights the importance of Madagascar as a cradle for biodiversity.” The study lays the groundwork for doing future studies of lemurs and other primates. The methods the group developed for this study can also be applied to understanding evolutionary relationships among other animal groups for which genomic sequences are hard to come by. Kendall Morgan | EurekAlert!
A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...

13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
16.07.2018 | Physics and Astronomy
16.07.2018 | Life Sciences
16.07.2018 | Earth Sciences
<urn:uuid:1833cbdd-e033-474b-b83e-2908581d2d68>
3.828125
1,395
Content Listing
Science & Tech.
35.410834
95,601,786
Global Warming Worksheet

Information regarding the Global Warming Worksheet has been submitted by admin and tagged in this category. Home, house, or office are the places where we regularly spend most of our time, and their look should make you feel at home. Sometimes we may want to slightly change the design, color, or even the accessories, and one new idea for this is the Global Warming Worksheet.

The global warming category contains printable worksheets on topics such as: an effects-of-global-warming lesson plan and worksheet; a global warming lesson plan with teacher information; global climate change; ecology and pollution; and a Germanwatch global climate change worksheet noting an estimated global warming of 4–6 °C, a theory Arrhenius could not substantiate with measurements at the time. One effects-of-global-warming worksheet explains that as the average global temperature continues to rise, the glaciers and ice caps will continue to melt.

A worksheet, in the word's original meaning, is a sheet of paper on which one performs work. In education, a worksheet may have questions for students and places to record answers. In accounting, a worksheet is, or was, a sheet of ruled paper with rows and columns on which an accountant could record information or perform calculations.

In computing, spreadsheet software presents, on the computer monitor, a user interface that resembles one or more paper accounting worksheets. Microsoft Excel, a popular spreadsheet program, refers to an individual spreadsheet (more formally, a two-dimensional matrix or array) as a worksheet, and to a collection of worksheets as a workbook.

In the classroom setting, worksheets usually refer to a loose sheet of paper with questions or exercises for students to complete and record answers on. They are used, to some extent, in most subjects, and are in common use in the mathematics curriculum, where there are two major types. The first type of math worksheet contains a collection of similar math problems or exercises. These are intended to help students become proficient in a particular mathematical skill that was taught to them in class, and they are commonly given to students as homework. The second type of math worksheet is intended to introduce new topics, and is often completed in the classroom. It is made up of a progressive set of questions that builds an understanding of the topic being learned.

Worksheet generators can be used to produce the type of worksheets that contain a collection of similar problems. A worksheet generator is a software program that quickly generates a collection of problems, particularly in mathematics or numeracy. Such software is often used by teachers to make classroom materials and tests. Worksheet generators may be installed on local computers or accessed via a website.

In accounting, a worksheet often refers to a loose-leaf piece of stationery from a columnar pad, as opposed to one that has been bound into a physical ledger book. From this, the term was extended to designate a single, two-dimensional array of data within a computerized spreadsheet program. Common types of worksheets used in business include financial statements, such as profit and loss reports. Analysts, investors, and accountants track a company's financial statements, balance sheets, and other data on worksheets.

In spreadsheet programs like Microsoft's Excel or the open-source LibreOffice Calc, a single document is known as a 'workbook' and may have by default three arrays or 'worksheets'. One advantage of such programs is that they can contain formulae, so that if one cell value is modified, the whole file is automatically updated based on those formulae.
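That recalculation behaviour can be sketched in a few lines. This is an illustrative toy, not how Excel or Calc are actually implemented (real spreadsheets track dependency graphs instead of recomputing on every read), and the `Sheet` class and its methods are invented for this example:

```python
# Minimal sketch of spreadsheet-style recalculation (illustrative only).
# Cells hold either plain values or formulas; every read of a formula
# cell recomputes it from the current values, so changing one input
# automatically changes every result derived from it.

class Sheet:
    def __init__(self):
        self.cells = {}  # e.g. "A1" -> 10, or "A3" -> a formula (callable)

    def set(self, ref, value):
        self.cells[ref] = value

    def get(self, ref):
        v = self.cells[ref]
        # A formula is stored as a callable taking the sheet itself,
        # so it always sees the latest cell values when evaluated.
        return v(self) if callable(v) else v

sheet = Sheet()
sheet.set("A1", 10)
sheet.set("A2", 32)
sheet.set("A3", lambda s: s.get("A1") + s.get("A2"))  # like "=A1+A2"

print(sheet.get("A3"))  # 42
sheet.set("A1", 100)    # change one input cell...
print(sheet.get("A3"))  # ...and the formula result updates: 132
```

The same idea, with dependency tracking and caching layered on top, is what makes a workbook behave as a single consistent document rather than a pile of static numbers.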
<urn:uuid:bd3f477a-a75f-4efb-970c-ee216920f071>
3.1875
1,602
Product Page
Science & Tech.
37.104127
95,601,788
Low-Cost, Nontoxic Superhydrophobic Coating

A new class of superhydrophobic nanomaterials might simplify the process of protecting surfaces from water. The material was made by scientists at Rice University, the University of Swansea, the University of Bristol, and the University of Nice Sophia Antipolis. It is inexpensive, nontoxic, and can be applied to a variety of surfaces via spray- or spin-coating. The hydrocarbon-based material may be an environmentally friendly replacement for the costly, hazardous fluorocarbons commonly used for superhydrophobic applications.
<urn:uuid:a6457cdf-c45b-44fa-b093-0cd66075fd56>
3.015625
120
Knowledge Article
Science & Tech.
0.370139
95,601,840
Do you know that a Java program, when running on different architectures, runs on a different virtual machine (VM)? Why does it need a different VM when running the same syntactical code? Let me take the opportunity to share and explain what I have gone through. I have read about three main VMs:

Java Virtual Machine
Kilobyte Virtual Machine
Dalvik Virtual Machine

Java Virtual Machine: The JVM executes Java .class bytecode to run Java applications. The JVM, together with a set of libraries, forms the JRE, which is installed on a computer to execute any Java program.

Kilobyte Virtual Machine: The Java API contains a large set of standard libraries. Using the same number of libraries on a mobile device is not possible due to the memory constraints of the device. So a subset of the libraries was chosen to form a profile in the Java 2 Platform Micro Edition (J2ME), and the JVM was replaced with the KVM, which runs in less memory. The KVM is designed to run in kilobytes of memory on small devices.

Dalvik Virtual Machine: When it comes to the Android operating system, neither the JVM nor the KVM was suitable, because Android runs a separate virtual machine for every application, a concept that could not be implemented with the existing virtual machines. For this reason, this virtual machine is an integral part of the Android operating system. In Android, the Java .class bytecode is converted to Dalvik-compatible .dex files, which are executed by the Dalvik Virtual Machine (DVM). The set of .dex files forms an Android Package (.apk) file, which is installed on devices to run the applications.

All these virtual machines are primarily developed in Java; what differs is the set of libraries and the way memory is allocated. Please send your review and feedback to firstname.lastname@example.org
<urn:uuid:5ba5380a-29a6-4e43-b40f-a2eee2179945>
2.640625
480
Comment Section
Software Dev.
52.382465
95,601,849
Scientists in the United Kingdom at University College London have released a new study that suggests green spaces in cities may be capable of holding the same amount of carbon as rainforests, Metro reports. Published in Carbon Balance and Management, the study analyzed parts of UCL’s campus in Camden and north London, which included 85,000 trees. Using laser pulses, they estimated the amount of carbon absorbed by the trees in the course of their lifetime. The technique is known as LiDAR (Light Detection and Ranging), and the team used both their own measurements and those collected by the U.K. Environment Agency. The pulses take a detailed picture of the 3D structure of the trees, which makes the calculations of carbon storage far more accurate. They found that an area like Hampstead Heath, one of London's most popular green spaces, stores close to 178 tons of carbon for every hectare, or 2.47 acres. In comparison, rainforests capture about 190 tons of carbon in the same amount of space.

The study's lead author, Dr. Phil Wilkes, says they want the benefits of urban green spaces to be considered from every side. "Urban trees provide many ecosystem services essential for making cities liveable," he explained. "This includes providing shade, flood mitigation, filtering air pollution, habitat for birds, mammals and other plants, as well as wider recreational and aesthetic benefits. Urban trees are a vital resource for our cities that people walk past every day. We were able to map the size and shape of every tree in Camden, from forests in large parks to individual trees in back gardens. This not only allows us to measure how much carbon is stored in these trees but also assess other important services they provide such as habitat for birds and insects." It can also be cost-effective for cities, and helps to offset fossil fuel emissions in congested streets with considerable traffic.

It's estimated that the value of storing that carbon is about £4.8 million to London every year, or about £17.80 for every tree. The team hopes that the research will continue, praising the LiDAR system for what it can potentially reveal about how urban trees differ from their more wild counterparts. But ultimately, they hope this study is used to influence city planning. "An important outcome of our work was to highlight the value of urban trees, in their various and very different settings. The approach has been really successful so far, so we're extending it across London, to other cities in the UK and internationally,” said co-author Dr. Mat Disney. It's incredible what a nice park in a city can do.
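A quick back-of-the-envelope check of the figures quoted above. Only the 178 and 190 tonnes-per-hectare values and the £4.8 million / £17.80 figures come from the article; the rest is simple arithmetic:

```python
# Back-of-the-envelope check of the article's figures (illustrative only).
carbon_per_ha_urban = 178       # tonnes of carbon per hectare (Hampstead Heath)
carbon_per_ha_rainforest = 190  # tonnes per hectare (rainforest comparison)

# Urban green space stores roughly 94% of what a rainforest stores per hectare.
ratio = carbon_per_ha_urban / carbon_per_ha_rainforest
print(f"{ratio:.0%}")  # 94%

# The quoted totals imply the number of trees behind the valuation:
total_value_gbp = 4_800_000  # ~£4.8 million per year for London
value_per_tree_gbp = 17.80   # ~£17.80 per tree
implied_trees = total_value_gbp / value_per_tree_gbp
print(f"{implied_trees:,.0f} trees")  # ~269,663 trees
```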
<urn:uuid:0369916e-fadf-49f5-a662-fe7eed77cbeb>
3.984375
676
News Article
Science & Tech.
47.668548
95,601,863
Spread F at Tropical Latitude Stations in India

Ahmedabad

Regular ionospheric soundings over Ahmedabad were started in 1953, and several studies describe the various features of the ionosphere close to the Equatorial Ionization Anomaly crest. Finer characteristics of the spread F echoes could not be studied earlier due to the very wide pulse of the transmitter. The characteristics of spread F at Ahmedabad are described here using the recordings of a recently installed Digisonde. The spread F echoes at Ahmedabad are not due to in-situ produced irregularities, as at the equatorial station Thumba. Instead, they are due to reflection (not scattering) from blobs or tongues of ionization transported from the equatorial region along the magnetic lines of force. These produce multiple traces from off-vertical directions overlapping the main vertical p'-f trace, sometimes giving an appearance of diffuse echoes near the critical frequencies.
<urn:uuid:53035bd9-1878-4321-8b5f-0cea6360d55d>
2.625
208
Truncated
Science & Tech.
21.646522
95,601,870
Calcium silicates have proven to be potential candidates for biomedical applications because of their osteogenic properties. Sol–gel methods are typically used for the preparation of calcium silicate powders. However, in the sol–gel route, an acid or base and ethanol are used to catalyze the precursors. From the perspective of green chemistry, it is better to avoid the use of organic solvents. The objective of this study was to prepare calcium silicate powders using a green synthesis route (hydrothermal method) without organic solvents. For comparison, the powders were also prepared via the sol–gel process using tetraethoxysilane (TEOS) and calcium nitrate as the raw materials. The powders prepared by both methods were sintered at temperatures ranging from 600 to 1000 °C. To understand the feasibility of using the resulting materials in medical applications for bone repair, the powders were mixed with water to form cements. The results indicated that the powder composition was not significantly affected by the different techniques but was dependent on the Ca:Si ratio of the precursors and on the sintering temperature. The different techniques produced no differences in powder morphology. In addition, the setting times of the powder-derived cements were found to be independent of the sintering temperature and synthesis technique, but were affected by the Ca:Si ratio of the precursors. The mechanical strength of the cements was similar. These encouraging results suggest that the hydrothermal method is a potentially beneficial alternative to the sol–gel route for the production of calcium silicate powders.
<urn:uuid:0a00ba4b-2bf9-4508-b80a-72276f59a2db>
2.53125
340
Academic Writing
Science & Tech.
21.98605
95,601,883
Walking along the beaches of New England, it is easy to spot large amounts of a fine red seaweed clogging the coastline, the result of sweeping changes in the marine environment occurring beneath the water. To further investigate, researchers at the University of New Hampshire looked at seaweed populations over the last 30 years in the Southwestern Gulf of Maine and found the once predominant and towering kelp seaweed beds are declining and more invasive, shrub-like species have taken their place, altering the look of the ocean floor and the base of the marine food chain.

In the study, recently published in the Journal of Ecology, researchers compared photos of sections of the sea floor, collected over 30 years, at several subtidal sites in the Southwestern Gulf of Maine. They also collected individual seaweed species to determine their complexity and the biodiversity of meso-invertebrates (smaller ocean species that fish and shellfish, such as crabs, feed on) associated with each seaweed species. The data showed that the seaweed community, as well as the number and types of small creatures, had significantly changed. The invasive fiber-like red seaweeds (Dasysiphonia japonica) had covered up to 90 percent of some areas, altering the visual landscape, and the newly created habitat structure now supported two to three times more small creatures at the base of the food chain.

"We were very surprised by what we saw," said Jennifer Dijkstra, research assistant professor in the Center for Coastal and Ocean Mapping at UNH and the lead author of the study. "In some areas, what was once a forest of tall blades of kelp with a high canopy height was now composed of bushy invasive seaweed species which had a much shorter canopy and a very different physical form."

Studies have found that kelp forests are one of the most productive systems in the ocean with high biodiversity and ecological function. They occur along the coastlines of most continents.
Kelp provides a long three-dimensional structure that offers protection and a source of food for many juvenile species of fish (pollock, cod, and flounder), juvenile and adult shellfish (lobsters and crabs), seals and birds (terns and gulls).

"While the changing seascape has dramatically altered and increased the diversity and number of small creatures at the base of the marine food web, we still don't know how these changes in the ecosystem will propagate through the entire chain. Even though there may be more creatures at the base, it's not clear what their effects will be on fish or other crabs in the habitat, and how much protection the new landscape will provide," said Dijkstra.

Researchers say ongoing studies are looking at the effects of the invasive types of seaweed and why they are so successful in the Gulf of Maine. They speculate that a number of events related to historical fishing practices, both commercial and recreational, combined with the warming waters in the Gulf of Maine, may be increasing the negative effects on the growth of kelp.

Co-authors on this study, all from UNH, are Larry Harris, professor of zoology; Kristen Mello '14, research technician; Amber Litterer '16, Shoals Marine Laboratory; Christopher Wells, former research technician currently at the University of Washington; and Colin Ware, director of the Data Visualization Lab at the Center for Coastal and Ocean Mapping.

The University of New Hampshire is a flagship research university that inspires innovation and transforms lives in our state, nation and world. More than 16,000 students from all 50 states and 71 countries engage with an award-winning faculty in top ranked programs in business, engineering, law, liberal arts and the sciences across more than 200 programs of study.
UNH's research portfolio includes partnerships with NASA, NOAA, NSF and NIH, receiving more than $100 million in competitive external funding every year to further explore and define the frontiers of land, sea and space.

Images to Download:
Image of one species of invasive seaweed, Dasysiphonia japonica. Photo credit: Amber Litterer/UNH
Photo of historical kelp forest bed (before introduction of invasive seaweed). Photo credit: Larry Harris/UNH
Photo of what the seaweed community looks like after introduction of invasive seaweed (Dasysiphonia japonica). Photo credit: Kristen Mello/UNH

Robbin Ray | EurekAlert!

Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF

Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
<urn:uuid:b109aee1-004b-42a3-af9b-9ca2ebfbcc5d>
3.84375
1,570
Content Listing
Science & Tech.
38.178095
95,601,897
Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
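To make the risk-ratio machinery concrete, here is a minimal sketch. This is not the authors' method or data: the event counts are invented, the paper's quantile-based rescaling step is omitted, and the interval used is the standard normal approximation on log(RR) (the Katz interval) with a continuity correction standing in for the paper's more careful treatment of the infinite-RR case:

```python
import math

# Illustrative risk-ratio calculation for event attribution (sketch only).

def risk_ratio_lower_bound(x1, n1, x0, n0, z=1.645):
    """One-sided 95% lower bound for RR = (x1/n1)/(x0/n0).

    Uses the normal approximation on log(RR). If x0 == 0 the estimated
    RR is infinite; a common workaround is to add 0.5 to each count
    (a continuity correction), which still yields a finite lower limit.
    """
    if x0 == 0:
        x1, n1, x0, n0 = x1 + 0.5, n1 + 0.5, x0 + 0.5, n0 + 0.5
    rr = (x1 / n1) / (x0 / n0)
    se = math.sqrt(1 / x1 - 1 / n1 + 1 / x0 - 1 / n0)
    return rr, rr * math.exp(-z * se)

# Say a threshold is exceeded in 40 of 400 "factual" (with anthropogenic
# forcing) model runs, but only 8 of 400 "counterfactual" runs:
rr, lower = risk_ratio_lower_bound(40, 400, 8, 400)
print(f"RR = {rr:.1f}, one-sided 95% lower bound = {lower:.1f}")

# The fraction of attributable risk follows directly:
far = 1 - 1 / rr
print(f"FAR = {far:.2f}")
```

The one-sided lower bound is what lets an attribution statement say "climate change made this event at least N times more likely" even when the point estimate is large or infinite.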
<urn:uuid:f7dbed4e-b7c4-486e-8f98-3b8868d4d235>
3.109375
244
Academic Writing
Science & Tech.
5.989341
95,601,898
Marine scientists are bracing for the loss of the world-class research vessel Marcus G. Langseth. The National Science Foundation plans to sell the 235-foot ship in 2020, according to a "Dear Colleague" letter published on the agency's website last month. Without a vessel to replace the Langseth, ocean seismologists fear their field will suffer. "We're not trying to save the Langseth at all costs," said James Austin, a geoscientist at the University of Texas, Austin. "We're trying to save deep-ocean crustal imaging." Deep-ocean crustal imaging is where the Langseth excels. It is no ordinary ship. Its sophisticated array of pneumatic guns generates a blast that bounces off the Earth's crust and penetrates dozens of miles into the planet. Unspooled behind the ship, miles of cables strung with microphones capture the blast's reflection. This sonic bounce creates maps of mid-ocean-ridge magma chambers and tectonic plate edges, features that are otherwise difficult, if not impossible, to survey. "There really aren't any comparable vessels that are available to academic scientists," said geophysicist Douglas Wiens, a professor at Washington University in St. Louis and chair of the Iris Consortium, a network of 100-plus universities that collect seismological data. This ship has propelled "huge scientific advances" in marine seismology, he said. Marine imaging, for instance, helps scientists identify where underwater earthquakes could occur. Recent research conducted on the Langseth found a fault near the Alaskan coast similar to the fault responsible for the 2011 tsunami that devastated Japan and other areas across the Pacific. In 2004, the NSF purchased the ship from a contractor for the drilling industry, which uses ships like the Langseth to locate oil and other natural resources. Over the next three years, dockworkers in Nova Scotia modified the vessel into a research platform, able to support a host of sensors and gadgetry. 
The academic community had grand ambitions for the ship, said Sean Higgins, director of marine operations at Columbia University's Lamont-Doherty Earth Observatory, which runs the Langseth on behalf of the NSF. The ship, which accommodates 55 or so people, can make observations as varied as the salt content in seawater and the detection of nearby marine mammals. Researchers do not fire the ship's air guns when whales or dolphins are close, Austin said, to avoid harming the animals. The ship generates 3-D views into the Earth's crust, peering deeper than the Langseth's retired predecessor, the Maurice Ewing. The Langseth has buoyed the careers of scientists who never set foot aboard it. The marine science community shares seafloor data collected by the Langseth, similar to the way astronomers over the world can access images from NASA's Hubble Space Telescope. But financial problems plagued the Langseth from the start. Its planned $4.4 million refit in Canada ran over budget by $600,000. An agreement between the NSF and the international Integrated Ocean Drilling Program to support the ship fell through during the 2008 recession, Higgins said, leaving the NSF holding the check. Rising fuel prices drove up the cost of research excursions. The ship remains docked more often than not. The Langseth sails only about 150 days a year. And an operational day at sea costs $70,000, give or take, Austin said. Despite the high price tag, the Langseth has traveled from the Arctic to the Pacific to the Atlantic over the past decade. It recently weathered 30-foot swells south of New Zealand. But a 200-page document may have sealed its fate. In 2015, the National Research Council published an influential report called "Sea Change: 2015-2025," a map of the next decade of marine science. This report was a "game changer," Higgins said. The council recommended the "immediate lay-up of the R/V Langseth" to shift resources elsewhere. 
The NSF concluded that it could pay about $10 million of the Langseth's $13 million annual operational costs. In the three years since the report, the NSF has held workshops and invited scientists to propose solutions for the $3 million divide. From these workshops, "the conclusion was that the Langseth was still the best option" for academic seismology, Higgins said. One suggestion was to lease the ship to offshore companies. But because the Langseth's cruises often take it to areas rich in scientific interest and poor in natural resources, it would not be a good fit for industry demands. William Easterling, the NSF's geosciences assistant director, announced to the scientific community in an April 10 letter that the Langseth is no longer sustainable. The science agency will divest itself of the ship in mid-2020 and will no longer accept research proposals that involve the Langseth. "NSF has committed to several Langseth projects between now and 2020," Richard Murray, the NSF's division director of ocean sciences, said in an email to The Washington Post. "Cruises are complex and require several years to plan, which is why there will be a full two-year transition period." The April 10 letter took many marine scientists by surprise. "We can understand that the ship might be too expensive," Wiens said. But, he said, the NSF had previously assured scientists that the agency would provide alternative sources for marine seismologic tests. "It looks like they just completely abandoned that effort." The NSF's letter recommends that scientists secure time on industry ships or find international partners. Marine seismologists "feel that the NSF has betrayed us a bit here," Austin said. He said he knew of at least two proposals to fix the Langseth's finances or provide comparable access to seismic imaging, submitted as part of a 2017 formal solicitation by the NSF. The agency rejected both. "It's hard not to see that they are making a statement about the science," he said. 
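The arithmetic behind that funding gap is straightforward. A quick sketch, using only the dollar figures quoted above; the sea-time total is a rough lower bound on operating costs, not a full accounting:

```python
# Rough check of the Langseth's finances using the article's figures.
day_rate = 70_000         # ~$70,000 per operational day at sea
days_at_sea = 150         # the ship sails only about 150 days a year
annual_cost = 13_000_000  # ~$13 million total annual operating cost
nsf_budget = 10_000_000   # NSF says it can cover ~$10 million of that

sea_time_cost = day_rate * days_at_sea
print(f"Sea-time alone: ${sea_time_cost:,}")                # $10,500,000
print(f"Funding shortfall: ${annual_cost - nsf_budget:,}")  # $3,000,000
```

Sea time alone already exceeds the NSF's stated budget, which is why the $3 million divide proved so hard to close.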
The agency "will continue to support seismic research through a variety of mechanisms," Murray said. "The 2017 solicitation as well as other NSF communications clearly state our commitment to seismic research and education. This decision is about finding the best means to fulfill this commitment." In recent weeks, Columbia University and the Iris Consortium issued strongly worded letters to express their concerns. In its letter, the Iris Consortium said the loss of the Langseth will have a disproportionate impact on the careers of young scientists, who may not have the clout or contacts to gather seismic data beyond this ship. "The board doesn't have a personal stake in this ship," said Wiens, a signatory of the letter. "We see it as being a foundational capability for seismology and tectonics and studying the structure of the oceans." It is also a step backward for the U.S., he said, which has been at the forefront of marine seismology since the field's inception in the 1950s. Austin said he could not fathom why the NSF was unwilling to pay more than $10 million for seismic imaging but has pledged, to the International Ocean Discovery Program, six times as much money for deep-sea scientific drilling. "Without imaging, it's really irresponsible to drill holes in the ocean," he said, likening the scenario to turning on a Tesla's autopilot while shutting off the car's radar and GPS. Normally, in cases like these, scientists would seek the support of the White House's science adviser. Except the Trump administration has not yet filled this role. So expect more letters, Austin said. "We're going to battle." 2018 © The Washington Post This article was originally published by The Washington Post.
Updated with Post-Eclipse photos and video. A solar eclipse occurs when the moon gets between the sun and Earth, casting a shadow on the Earth’s surface. This can happen only during a new moon when the sun and the moon are in conjunction as seen from Earth in an alignment referred to as syzygy. In a total eclipse, the disk of the sun is fully obscured by the moon, as seen from Earth. In partial and annular eclipses, only part of the sun is obscured. Anytime there is a total solar eclipse, there is a partial solar eclipse nearby, outside a rather narrow path of totality. (A lunar eclipse happens when the Earth gets between the sun and the moon, and the Earth’s shadow crosses the moon’s surface.) Even if you’re not in the path of totality during 2017’s August 21 eclipse, where the total eclipse lasts just seconds to minutes, the partial eclipse — when the moon is covering just part of the sun — takes much, much longer (hours) over a much, much wider area: you’ll even be able to see a partial eclipse from Hawaii, and every other state of the U.S., every province in Canada, to south of Central America. Hundreds of millions of people will be able to see something — but definitely read on to understand what you need to do to ensure you don’t risk permanent eye damage by looking at it wrong! More on that below. When you are in the path of totality during a solar eclipse — a narrow strip of land where the entire disk of the sun is covered by the disk of the moon — things happen pretty fast. Let’s start there.

What to Look For

Those right in the middle of that strip, the path of totality (which is about 70 miles wide for this eclipse), see the total eclipse for the longest period. The longest period of totality for this eclipse occurs from Missouri to southeast Tennessee, for 2 minutes and 40 seconds. As you know, the sun and the moon both “rise” in the east and set in the west.
But the shadow of the moon during an eclipse starts in the west, and moves east at the speed of the moon’s orbital velocity minus the Earth’s rotational velocity. This eclipse starts in the morning on the west coast, and ends in the afternoon on the east coast. The path of totality starts in Madras, Ore., at 9:06 a.m., but totality doesn’t begin until 10:19 a.m.; totality ends at 10:21 a.m., and the eclipse ends there at 11:41 a.m. (all times local). The eclipse is last seen on land in Charleston, S.C., where it begins at 1:17 p.m., is in totality from 2:46 to 2:49 p.m., and ends at 4:09 p.m. Totality’s span across the U.S.: 94 minutes. In between, the path of totality runs through Idaho, Wyoming, Nebraska, Kansas, Missouri, Illinois, Kentucky, Tennessee, Georgia, and South Carolina. You have to be right in the middle of the path of totality to get the full length of totality. Along the edge, just 35 miles from the middle, you might only see a few seconds of totality. NASA has national and state maps where you can learn what you can see from where you are, and when.

Stages of the Eclipse

At what astronomers call first contact, the moon begins to cover the sun’s western limb (or the right side of the sun as we are observing from Earth). Until totality begins, you are going to need eye protection to directly view the eclipse. See below. Over the next hour, the moon will obscure more and more of the sun, as if eating away at it, creating a crescent of the sun. Stories passed down through native peoples worldwide have a similar theme of an animal eating the sun. The Cherokee tell of a frog eating the sun, the Chinese blame a dragon, and in Scandinavia it’s the wolves that chase the sun through the sky each day, who have finally caught it! Once about 80 percent of the sun is covered, about 15 minutes before totality, changes in your local environment will become noticeable.
Ambient light levels are obviously lower, like at sunset, but the landscape takes on a blue-gray tone, very much unlike sunset. The reds of sunset aren’t there because the light isn’t traveling through any more atmosphere, especially with the 2017 total solar eclipse occurring so close to solar noon. As the moon continues to nibble away at the sun’s disk (aka the photosphere), this is a good time to look around you. Birds and other animals may become quiet, bedding down for the night, others may become anxious. Some plants and flowers may even close up because they sense it’s turning to night! Five minutes before totality, look to the western horizon: it will look as if a large thunderstorm is approaching, darkening significantly. If you are viewing from a hilltop, you may be able to see the edge of the darkness approaching. You are seeing the shadow of totality coming toward you! The temperature may drop noticeably. About 15 seconds before totality, with only the thinnest crescent of the sun remaining uncovered, the first evidence of the sun’s corona, or outer atmosphere, becomes visible. Latin for crown, the corona is irregularly shaped and only visible during totality, which is very exciting for astronomers to see firsthand. About 5-10 seconds before totality, the last rays of sunlight from the photosphere merge into a brilliant point of light — known as the Diamond Ring effect. If you can take a second to look down to the western horizon, you’ll see that shadow now really rushing toward you. The Diamond Ring will then fade into what is known as Baily’s Beads, about 3 seconds before totality. Along the left side of the moon, sunlight breaks through the valleys and craters of the moon’s surface, forming points of light resembling dazzling jewels on a necklace. They’re named for Francis Baily, who first described the source of the phenomenon in 1836. 
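Just how fast is that approaching shadow? The coast-to-coast timing given earlier (94 minutes across the U.S.) supports a back-of-the-envelope average; note that the ~2,500-mile path length used below is my assumed Madras-to-Charleston distance, not a figure from this article:

```python
# Rough average ground speed of the moon's shadow for the 2017 eclipse.
# The 94-minute coast-to-coast span comes from the text above; the
# path length is an ASSUMED great-circle distance, Madras, OR to
# Charleston, SC (roughly 2,500 miles).
path_miles = 2500           # assumption, not from the article
span_minutes = 94           # totality's span across the U.S. (from the article)

# Average speed in miles per hour.
speed_mph = path_miles / (span_minutes / 60)
print(round(speed_mph))
```

Under those assumptions the umbra averages on the order of 1,600 mph across the country, which is why the darkness seems to rush at you.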
Second contact is when totality begins, and you can safely remove your solar glasses from your eyes (and your camera or telescope). You’ll have just a few seconds to look for the vivid red of the chromosphere, the gaseous layer below the corona and just above the photosphere. You may catch vibrant red prominences stretching into the corona. These prominences can be many times larger than Earth. If you miss it, you’ll have another chance on the right side of the moon, seconds before totality ends. Over the next seconds to about 2 minutes and 30 seconds, depending on where you are in the path of totality, observe the corona extending out many solar diameters. Each eclipse is different; sometimes the corona appears very round, sometimes it’s wider at the equator. This is also a good time to observe the sun’s magnetic influences in the form of loops and arcs — solar flares — tracing out those magnetic fields. Also look for planets and stars to appear. Venus and the bright star Regulus may appear right above the sun/moon. Also look for Mars about 8 degrees to the right. It’s been hidden for several weeks in the sun’s glare. Take note of the environment around you. How are animals reacting? How are your fellow observers reacting? Snap a photo of your friends and family, especially the younger kids with mouths agape. Take note of the temperature during totality. A 10-15 degree drop is pretty typical. Have a jacket ready so you don’t have to waste precious eclipse time finding it. As the right edge of the moon begins to brighten, it’s time to get those eclipse glasses back on: the end of totality (third contact) is seconds away, and the whole process reverses. Look for the shadow to speed away eastward, and also watch as nature pretty much returns to normal. Flowers open up and birds and other animals calm down. Last, fourth contact is when the eclipse is over.

Not In that Tiny Strip?
Even if you’re not in the path of totality (and it’s too late to plan a trip there if you don’t already have plans made), there’s still a lot to see. It’s simply cool for the midday sun to suddenly be dimmed, for the temperature to suddenly drop, and for the shadows to change in bizarre ways — like in the attached photo from an earlier partial solar eclipse. Use eclipse glasses or a pinhole camera to see the “bite” out of the sun, and watch how it changes over time. Know that this was all predicted many years in advance, where in past centuries the sun disappearing was thought to be a sign of evil, or even the end of the world. Feel a bit smug knowing that here in the 21st century, no one is so superstitious anymore! (Right? Right?!)

What Determines How Long an Eclipse Is

How do we know to the second how long this eclipse will be? Several factors determine the duration of a total solar eclipse (in order of decreasing importance, adapted from Wikipedia):
- The moon being almost exactly at perigee (closest to Earth, making its angular diameter as large as possible).
- Earth being very near aphelion (furthest away from the sun in its elliptical orbit, making its angular diameter as small as possible).
- The midpoint of the eclipse being very close to the equator, where the rotational velocity is greatest.
- The vector of the eclipse path at the midpoint of the eclipse aligning with the vector of Earth’s rotation (i.e. not diagonal but due east).
- The midpoint of the eclipse being near the subsolar point (the part of Earth closest to the sun).
The longest eclipse that has been calculated is the eclipse of July 16, 2186, which will have a maximum duration of 7 minutes and 4 seconds over northern Guyana.

And If You Miss It in 2017?
There’s another total solar eclipse coming to North America, running from Mexico through Texas, Arkansas, Missouri, Illinois, Kentucky, Indiana, Ohio, Pennsylvania, New York, northern Vermont and New Hampshire, and then the southern parts of Ontario, Quebec, New Brunswick, western Prince Edward Island, and Newfoundland, Canada. But that doesn’t happen until April 8, 2024. If you miss that one, the next one in the Continental United States won’t come until August 12, 2045. In between, though, there are other total solar eclipses that are visible in other parts of the globe. And that super-long one in Guyana in 2186, just 169 years away! NEVER look directly at the unshielded sun, even during a partial eclipse, without proper eye protection! Doing so can easily cause permanent damage to your eyes, up to and including blindness. Sunglasses definitely aren’t enough. Only when you are in the path of totality, and during totality, when the sun’s disk is completely covered right after the Diamond Ring fades, can you safely take off eye protection and look directly at the corona, Baily’s Beads, and other phenomena during the eclipse. Other than that, you need eye protection. Rather than try to describe what sort of eye protection to use myself, I’ll refer you to the safe viewing guide jointly developed by NASA, the American Astronomical Society, the American Academy of Ophthalmology, the American Academy of Optometry, and the National Science Foundation. That page is here. NASA’s 2017 Solar Eclipse information site is here.

– – –

August 21 Update

Kit and I went to western Wyoming, and found a great spot in the center of the path of totality — and on a bluff with a great view to the west so we could see the approaching shadow. All I can say about the experience of seeing a total solar eclipse: mind-bogglingly amazing. Drop-dead awesome. Electrifying.
Watching the shadow come at us from the west was astounding, and when it was suddenly darker, we whipped around to see the “Diamond Ring” in the eastern sky. We didn’t see a lot of stars, but could see Sirius — and a very bright Venus. And a deeply black disc where the sun was supposed to be, surrounded by the sun’s corona. A total eclipse is pretty much an accident of celestial mechanics: there has to be a moon of an apparent size that’s larger than the sun, and everything has to be right to see it. I understand why my astronomy-crazed father saved money to travel, and saw (as far as I know) two of them: one in Baja California (Mexico) and one in Kenya. He’s been dead for almost 10 years now, and it’s stuff like this that helps keep that connection alive. I get it now, dad. Wow. Photo taken by me this morning (handheld DSLR, polarizing filter only). As ever, I looked for something different that illustrates the effect of an eclipse, and where I looked was in the dashboard of the online monitor of my home photovoltaic power system. I was not disappointed! Here’s what I saw there — a dip in output late in the morning: (The afternoon reduction is due to clouds rolling in.) The University of Wisconsin at Madison did a great time lapse from the GOES-16 weather satellite showing just how big the shadow was (yet remember, only a tiny 70-mile swath in the center got totality): And here’s my own video of the effect I mentioned above: watching the shadow of the approaching totality — we were sure to park on a hillside with a great view to the west: – – – Portions of this article were adapted — with permission — from “What to look for during a total solar eclipse” by Tony Rice, Solar System Ambassador from NASA’s Jet Propulsion Laboratory. The rest was written by Randy Cassingham, Solar System Ambassador from NASA’s Jet Propulsion Laboratory and the proprietor of this site.
Zoanthus - Blue-Green (Zoanthus sp.)
Family: Zoanthidae (Colonial Anemones)
Habitat: Shallow, high-flow and mixed areas of reefs.
Light: High
Water Flow: High
Space: 1+ gal.
Reef Safe: Yes
Care Level: Easy
Temperament: Peaceful
Diet: Zoanthids rely mostly on their zooxanthellae for nutrition, as well as microplankton capture.

Natural History: The Zoanthus are a colonial genus. Their polyps are no more than one-half inch across, on short, fat stalks with short tentacles. They do not incorporate sediments into their mesoglea (as the Palythoa do). They reproduce by budding from the base of the parent colony (unlike Palythoa, which buds off of stolons). Zoanthus colonies are often so densely packed that the polyps crowd each other on all sides. Their zooxanthellae and absorptive ability are the basis of their nutrition. Their colors are often bright and commonly include greens, yellows, oranges, and browns. Their larvae have been observed to crawl on the sand to new growth locations.

Husbandry: Zoanthus prefer strong water flow and bright light. This genus is highly dependent on zooxanthellae photosynthesis. They will feed on sea urchin eggs. They do not have to be fed because of their reliance on photosynthesis and their ability to absorb dissolved nutrients, bacteria, and algae. They will change color shades according to their lighting regime, and they are resistant to coral bleaching because of this.

AKA: Sea Mat, Button Polyps
In Stock: 08/09/14, Yes
© SeaScape Studio
The team used two cameras, Wide Field Camera 3 (WFC3), and Advanced Camera for Surveys (ACS), plus observations from the Hubble’s Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS), the largest project in the scope’s history with 902 assigned orbits of observing time, to explore the shapes and colors of distant galaxies over the last 80 percent of the Universe’s history. Results appear in the current online issue of The Astrophysical Journal. NASA, ESA, M. Kornmesser This image shows a "slice" of the Universe some 11 billion years back in time. The shape is that of the Hubble tuning fork diagram, which describes and separates galaxies according to their morphology into spiral (S), elliptical (E), and lenticular (S0) galaxies. On the left of this diagram are the ellipticals, with lenticulars in the middle, and the spirals branching out on the right side. The spirals on the bottom branch have bars cutting through their centres. The galaxies at these distances from us are small and still in the process of forming. This image is illustrative; the Hubble images used were selected based on their appearance. The individual distance to these galaxies is only approximate. Lee points out that the huge CANDELS dataset allowed her team to analyze a larger number of these galaxies, a total 1,671, than ever before, consistently and in detail. “The significant resolution and sensitivity of WFC3 was a great resource for us to use in order to consistently study ancient galaxies in the early Universe,” says Lee. She and colleagues confirm for an earlier period than ever before that the shapes and colors of these extremely distant young galaxies fit the visual classification system introduced in 1926 by Edwin Hubble and known as the Hubble Sequence. It classifies galaxies into two main groups: Ellipticals and spirals, with lenticular galaxies as a transitional group. 
The system is based on their ability to form stars, which in turn determines their colors, shape and size. Why modern galaxies are divided into these two main types and what caused this difference is a key question of cosmology, says Giavalisco. “Another piece of the puzzle is that we still do not know why today ‘red and dead’ elliptical galaxies are old and unable to form stars, while spirals, like our own Milky Way, keep forming new stars. This is not just a classification scheme, it corresponds to a profound difference in the galaxies’ physical properties and how they were formed.” Lee adds, “This was a key question: When, and over what timescale did the Hubble Sequence form? To answer this, you need to peer at distant galaxies and compare them to their closer relatives, to see if they too can be described in the same way. The Hubble Sequence underpins a lot of what we know about how galaxies form and evolve. It turns out that we could show this sequence was already in place as early as 11.5 billion years ago.” Galaxies as massive as the Milky Way are relatively rare in the young Universe. This scarcity prevented previous studies from gathering a large enough sample of mature galaxies to properly describe their characteristics. Galaxies at these early times appear to be mostly irregular systems with no clearly defined morphology. There are blue star-forming galaxies that sometimes show structures such as discs, bulges and messy clumps, as well as red galaxies with little or no star formation. Until now, nobody knew if the red and blue colors were related to galaxy morphology, the UMass Amherst authors note. There was previous evidence that the Hubble Sequence holds true as far back as around 8 billion years ago, the authors point out, but their new observations push a further 2.5 billion years back in cosmic time, covering 80 percent of the history of the Universe. 
Previous studies had also reached into this epoch to study lower-mass galaxies, but none had conclusively looked at large, mature galaxies like the Milky Way. Lee and colleagues’ new observations confirm that all galaxies this far back, big and small, already fit into the sequence a mere 2.5 billion years after the Big Bang. “Clearly, the Hubble Sequence formed very quickly in the history of the cosmos, it was not a slow process,” adds Giavalisco. “Now we have to go back to theory and try to figure out how and why.” Besides Lee, Giavalisco, and C.C. Williams at UMass Amherst, with van der Wel in Heidelberg, the team includes astronomers from the University of California, the Space Telescope Science Institute, the University of Kentucky, the University of Nottingham (U.K.), the Max Planck Institute for Extraterrestrial Physics, The Hebrew University, Israel, the National Optical Astronomy Observatory, Tucson, and the University of Michigan. This work was funded by NASA through a grant administered by the Space Telescope Science Institute, which operates the Hubble Space Telescope. The telescope is a project of international cooperation between the European Space Agency and NASA.

BoMee Lee | Newswise
Forget El Niño, forecasters say 'The Blob' was to blame for strange seasonal shifts in the world's weather

- Warming pattern known as The Blob was first detected in late 2013
- This was followed by the strong El Niño, disrupting marine ecosystems
- Analysis has revealed that much of the impact was a result of The Blob

Since 2013, a phenomenon known as 'The Blob' has brought unusually warm ocean temperatures to the Pacific. This was soon followed by the onslaught of the 2015-2016 El Niño – and combined, the two created a period of major climate disruption, forcing reductions in the productivity of coastal ecosystems along California. Now, researchers say these two warming patterns are both on their way out, and for the first time, they've assessed the damage to reveal the dramatic effects on marine life.

The graphic reveals wintertime temperature anomalies off the U.S. west coast during the strong El Niños of 1997-98 and 2015-16. In 1997-98, warming was strongest near the coast, consistent with the effects of El Niño. In 2015-16, warming was more uniform and widespread, as seen in 'the Blob'.

WHAT IS THE BLOB?

The blob in the ocean was discovered in late 2013, with temperatures one to four degrees Celsius (two to seven degrees Fahrenheit) above surrounding 'normal' water. By 2015, the blob had extended about 1,000 miles (1,600 km) offshore, from Mexico up to Alaska. Researchers say the 'warm blob' of water off the West Coast of the US likely drove most of the impact seen in productivity just off the West Coast. Both the Blob and El Niño are now on their way out, and have left behind dramatic effects on the marine ecosystems.

While this year's powerful El Niño is known for causing extreme weather events around the world, from droughts in South America, Africa, and Asia, to excessive rain in the Pacific Northwest, researchers say The Blob drove most of the impact seen in productivity just off the West Coast.
Data from ocean models and autonomous gliders, which can track undersea conditions, has allowed researchers from NOAA Fisheries, the Scripps Institution of Oceanography, and the University of California, Santa Cruz to understand what really happened as a result of these two patterns. 'Last year there was a lot of speculation about the consequences of 'The Blob' and El Niño battling it out off the US West Coast,' said lead author Michael Jacox, of UC Santa Cruz and NOAA Fisheries' Southwest Fisheries Science Center. 'We found that off California El Niño turned out to be much weaker than expected. The Blob continued to be a dominant force, and the two of them together had strongly negative impacts on marine productivity. 'Now, both The Blob and El Niño are on their way out, but in their wake lies a heavily disrupted ecosystem.' These two disruptive patterns are both associated with warming conditions, which slow the flow of nutrients from the deep ocean, in turn reducing productivity in the region's ecosystems. Temperatures climbed to roughly 5 degrees Fahrenheit above average, leading to sightings of warm-water species much farther north than their typical range.

WHAT IS EL NIÑO?

El Niño is caused by a shift in the distribution of warm water in the Pacific Ocean around the equator. Usually the wind blows strongly from east to west, due to the rotation of the Earth, causing water to pile up in the western part of the Pacific. This pulls up colder water from the deep ocean in the eastern Pacific. However, in an El Niño, the winds pushing the water get weaker and cause the warmer water to shift back towards the east. This causes the eastern Pacific to get warmer.
But as the ocean temperature is linked to the wind currents, this causes the winds to grow weaker still and so the ocean grows warmer, meaning the El Niño grows. This change in air and ocean currents around the equator can have a major impact on the weather patterns around the globe by creating pressure anomalies in the atmosphere. Researchers say this warming also likely played a part in the formation of the largest harmful algal bloom ever recorded on the West Coast. 'These past years have been extremely unusual off the California coast, with humpback whales closer to shore, pelagic red crabs washing up on the beaches of central California, and sportfish in higher numbers in southern California,' said Elliot Hazen of the Southwest Fisheries Science Center, a co-author of the paper. 'This paper reveals how broad scale warming influences the biology directly off our shores.' In the new research, the scientists describe the real-time monitoring of the California Current Ecosystem. 'This work reflects technological advances that now let us rapidly assess the effects of major climate disruptions and project their impacts on the ecosystem,' Jacox said. In a separate paper by the same scientists, the researchers identify the optimal conditions for productivity in this region. They say this will help to plan the future effects of other climate variability patterns, like El Niño. 'Wind has a 'goldilocks effect' on productivity in the California Current,' Hazen said. 'If wind is too weak, nutrients limit productivity, and if wind is too strong, productivity is moved offshore or lost to the deep ocean. 'Understanding how wind and nutrients drive productivity provides context for events like the Blob and El Niño, so we can better understand how the ecosystem is likely to respond.' The studies highlight the importance of monitoring the West Coast marine ecosystems as the climate continues to change. 
While El Nino was associated with strong tropical signals, the drivers of this pattern were not as effective as they've been in the past. 'Not all El Niños evolve in the same way in the tropics, nor are their impacts the same off our coast,' said Steven Bograd, a research scientist at the Southwest Fisheries Science Center and co-author of both papers. 'Local conditions, in this case from the Blob, can modulate the way our ecosystem responds to these large scale climate events.'
Sequential Tests of Statistical Hypotheses By a sequential test of a statistical hypothesis is meant any statistical test procedure which gives a specific rule, at any stage of the experiment (at the n-th trial for each integral value of n), for making one of the following three decisions: (1) to accept the hypothesis being tested (null hypothesis), (2) to reject the null hypothesis, (3) to continue the experiment by making an additional observation. Thus, such a test procedure is carried out sequentially. On the basis of the first trial, one of the three decisions mentioned above is made. If the first or the second decision is made, the process is terminated. If the third decision is made, a second trial is performed. Again on the basis of the first two trials one of the three decisions is made and if the third decision is reached a third trial is performed, etc. This process is continued until either the first or the second decision is made. KeywordsSequential Test Statistical Hypothesis Powerful Test Moment Generate Function Sequential Probability Ratio Test Unable to display preview. Download preview PDF. - H.F. Dodge and H.G. Romig, “A method of sampling inspection,” The Bell System Tech. Jour., Vol. 8 (1929), pp. 613–631.Google Scholar - Harold Hotelling, “Experimental determination of the maximum of a function”, Annals of Math. Stat., Vol. 12 (1941).Google Scholar - Abraham Wald, “On cumulative sums of random variables”, Annals of Math. Star., Vol. 15 (1944).Google Scholar - Z.W. Birnbaum, “An inequality for Mill’s ratio”, Annals of Math. Stat., Vol. 13 (1942).Google Scholar - P.C. Mahalanobis, “A sample survey of the acreage under jute in Bengal, with discussion on planning of experiments,” Proc. 2nd Ind. Stat. Conf., Calcutta, Statistical Publishing Soc. (1940).Google Scholar - Abraham Wald, Sequential Analysis of Statistical Data: Theory. 
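The three-decision rule above is the heart of Wald's sequential probability ratio test (SPRT). As a hedged illustration (the Bernoulli parameters, error rates, and function name below are invented for this sketch, not taken from the paper), the rule can be written in a few lines of Python:

```python
import math

def sprt_bernoulli(observations, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on 0/1 observations.

    Returns (decision, number of trials used), where decision is one of
    "accept H0", "accept H1", or "continue" if the data ran out first.
    """
    upper = math.log((1 - beta) / alpha)   # cross above: decide for H1
    lower = math.log(beta / (1 - alpha))   # cross below: decide for H0
    llr = 0.0                              # cumulative log-likelihood ratio
    for n, x in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(observations)

# A run of successes pushes the ratio to the upper boundary quickly,
# so the test stops long before all 100 trials are inspected.
print(sprt_bernoulli([1] * 100))
```

Note how the sample size is not fixed in advance: a boundary crossing terminates the experiment, exactly as in decisions (1) and (2) above.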
<urn:uuid:53054e77-c351-4e15-b6c5-30799ca90684>
2.578125
707
Academic Writing
Science & Tech.
46.088385
95,601,943
The example below introduces the event phases in Flex. Flex has three phases: capturing, targeting and bubbling. These phases are the periods in which Flex does different jobs for an event, such as locating the event listeners that will handle the event, triggering the event, and then re-detecting listeners in the reverse order so they can handle the event again. Event listeners are the functions or methods that we create in our applications; inside an event listener we receive the associated event object, and an event is handled inside the listener function in which its event object is received. Taking the phases one by one: first is the capturing phase, in which Flex checks containers such as VBox for event listeners that can handle the event. Event listeners are attached to Flex controls through attributes such as click and initialize. In the targeting phase, Flex simply triggers the event on its target, and in the bubbling phase Flex re-begins its search for listeners, but in the order opposite to that of the capturing phase. The eventPhase property of the event object is used to detect the current phase of the event; it takes the values 1, 2 and 3 for the capturing, targeting and bubbling phases respectively. In the example, the event's current phase is detected inside different TextInput controls.
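The ordering of the three phases can be sketched in plain Python (not ActionScript; the Node and dispatch names are invented for illustration):

```python
# Phase constants mirroring Flex's eventPhase values.
CAPTURING, AT_TARGET, BUBBLING = 1, 2, 3

class Node:
    """A display-list node with an optional parent container."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def dispatch(target, log):
    """Record (node name, phase) in the order Flex would visit nodes."""
    # Build the ancestor chain from the root down to the target.
    chain = []
    node = target
    while node is not None:
        chain.append(node)
        node = node.parent
    chain.reverse()
    for node in chain[:-1]:               # phase 1: capturing, root -> parent
        log.append((node.name, CAPTURING))
    log.append((target.name, AT_TARGET))  # phase 2: targeting
    for node in reversed(chain[:-1]):     # phase 3: bubbling, parent -> root
        log.append((node.name, BUBBLING))
    return log

container = Node("VBox")
field = Node("TextInput", parent=container)
print(dispatch(field, []))  # [('VBox', 1), ('TextInput', 2), ('VBox', 3)]
```

The container is visited twice, once on the way down (phase 1) and once on the way up (phase 3), while the target itself fires only in phase 2.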
<urn:uuid:4d01030d-cc28-4bb5-8d68-2e47fd4b3439>
3.21875
257
Documentation
Software Dev.
35.507054
95,601,951
NASA is hoping to launch an unmanned test flight of its Orion spacecraft today, though at the time of this writing it hasn't yet pushed the button: #Orion is currently "no-go" due to a range issue. There's a boat in the launch area. Teams are working to remove the boat from the area.— NASA (@NASA) December 4, 2014 For the team of scientists behind the project, these short delays must seem like nothing when seen within the context of Orion's timeline. Today's scheduled flight is only the first step in a drawn-out process that should eventually lead to a manned mission to Mars during the mid-2030s. Justin Bachman, writing for Bloomberg Businessweek, calls today's launch NASA's "boldest test flight in decades." According to Bachman, one of NASA's major goals is to demonstrate to the world that the key technological components that will one day send a human to Mars are already in place. A side goal is to bolster the public's excitement for an eventual trip to the red planet. With the Space Shuttle program 3 years gone and American astronauts having to hitch rides on Russian rockets, some experts worry that the U.S. public may be losing hope in the sci-fi space dreams that fueled interest during the second half of the 20th century. NASA officials (and its Twitter feed) are never slow to point out that Orion is integral to an inevitable journey to Mars. One official mentioned to Bachman that, since manned flights to Mars are still 20 years away, today's launch will hopefully inspire the students of today into becoming the engineers and astronauts of tomorrow: “My hope is that when we fly the capsule on Thursday, it will energize the public and energize that middle schooler [who] isn’t quite sure what he wants to do, but he likes math and science,” says Richard Boitnott, an engineer at NASA’s Langley Research Center.
I'm sure Boitnott's choice of pronouns wasn't meant to exclude girls from the ambitious plan, as NASA has a good track record of promoting STEM careers for young women. The major point is that the 45-year-old astronaut of today is out of luck if he or she wants to be the first person to set foot on Mars. Those who fall in the 15-30 age range can still hold onto their hope. What's your take on the Orion program? Do you have faith in NASA's ambitious goals? Let us know below in the comments. Scrub. Today's planned launch of #Orion is postponed due to valve issue. Our next possible launch window opens at 7:05am ET Friday— NASA (@NASA) December 4, 2014 Read more at Businessweek Learn more at NASA Photo credit: NASA
<urn:uuid:a5d4b0f2-2a2c-430b-b7c6-99ce4e6ab553>
2.5625
577
News Article
Science & Tech.
60.984035
95,601,983
2. Acceleration (`v`-`t`) Graphs by M. Bourne Acceleration is the change in velocity per unit time. A common unit for acceleration is `"ms"^-2`. An acceleration of `7\ "ms"^-2` means that in each second, the velocity increases by `7\ "ms"^-1` (also written as `7\ "m/s"`). We can find the acceleration by using the expression: `text(acceleration)=text(change in velocity)/text(change in time)` We can write the above using the equivalent expression `a=(Deltav)/(Deltat)`, where the Greek letter `Δ` (Delta) means "change in". In other words, the slope of the velocity graph tells us the acceleration. The Area Under the `v`-`t` Graph A very useful aspect of these graphs is that the area under the v-t graph tells us the distance travelled during the motion. This concept is important when we find areas under curves later in the integration chapter. A particle in a generator is accelerated from rest at the rate of `55\ "ms"^-2`. a. What is the velocity at `t = 3\ "s"`? b. What is the acceleration at `t = 3\ "s"`? c. What is the distance travelled in `3` seconds? d. Graph the motion (as a v-t graph) for `0 ≤ t ≤ 3\ "s"`. a. Velocity `= 55 × 3 = 165\ "ms"^-1` b. The acceleration is a constant `55\ "ms"^-2`, so at `t = 3\ "s"`, the acceleration will be `55\ "ms"^-2`. c. The distance travelled in `3` seconds is the area under the line between `0` and `3` (i.e. the area of the shaded triangle below): `1/2 × 3 × 165 = 247.5\ "m"`. d. Note in the graph that we have velocity on the vertical axis, and the units are m/s. The graph finishes at (3, 165). A body moves as described by the following v-t graph. a) Describe the motion. b) What is the distance travelled during the motion? c) What is the average speed for the motion? a) From `t = 0` to `2`, the acceleration was `a=(Deltav)/(Deltat)=3/2=1.5\ text(ms)^-2` From `t = 2` to `5`, the acceleration was `0\ "ms"^-2`. The body was neither speeding up nor slowing down.
From `t = 5` to `8`, the acceleration was `a=(Deltav)/(Deltat)=(-3)/3=-1\ text(ms)^-2` The body was slowing down, so the acceleration was negative. b) The distance travelled is the area of the trapezoid (trapezium). Reading the velocities from part (a), the parallel sides have lengths `8` and `3` (seconds) and the height is `3\ "ms"^-1`, so the distance is `1/2(8+3)(3)=16.5\ "m"`. c) `text(average speed)=text(distance travelled)/text(time taken)=16.5/8~~2.06\ text(ms)^-1`
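The area idea generalizes to any sampled v-t curve. A small plain-Python sketch (the function name is invented) that sums trapezoids under the graph, reproducing the 247.5 m answer from the first example:

```python
def distance_from_vt(times, velocities):
    """Trapezoidal area under a sampled v-t curve = distance travelled."""
    total = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        total += 0.5 * (velocities[i] + velocities[i + 1]) * dt
    return total

a = 55.0                                    # ms^-2, accelerating from rest
times = [0.0, 1.0, 2.0, 3.0]
velocities = [a * t for t in times]         # v = at
print(distance_from_vt(times, velocities))  # 247.5
```

Because the velocity here is a straight line, the trapezoidal sum is exact; for curved v-t graphs it becomes an approximation that improves with more sample points.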
<urn:uuid:2e4e4e47-05ec-449e-b1ca-b905541c9a87>
4.59375
709
Tutorial
Science & Tech.
85.537607
95,601,990
A View from Peter Fairley Best Ways to Reengineer the Climate Revealed The benefits of some schemes aimed at cooling the planet have been miscalculated. When Time magazine included geoengineering in its “What’s Next for 2008” report, it wrote that “a few scientists are beginning to quietly raise the possibility of cooling the planet’s fever directly … as an option of last resort.” Today, scientists at the University of East Anglia (UEA) are smashing the hush surrounding geoengineering, publishing the first comprehensive assessment of the climate-cooling potential of the various schemes being contemplated to reengineer Earth. “The realisation that existing efforts to mitigate the effects of human-induced climate change are proving wholly ineffectual has fuelled a resurgence of interest in geo-engineering,” explains UEA environmental-sciences professor Tim Lenton, who wrote the report with UEA colleague Naomi Vaughan. Their report, published in today’s issue of the journal Atmospheric Chemistry and Physics Discussions, suggests that, while some approaches could play a contributing role in blunting climate change, the benefits of many schemes have been exaggerated in the past by “significant” errors in calculations: “We found that some geoengineering options could usefully complement mitigation, and together they could cool the climate, but geoengineering alone cannot solve the climate problem.” Reflecting sun away from the earth by launching sunshades into space or injecting reflective manufactured particles into the stratosphere tops UEA’s list, showing the greatest potential to cool Earth back to preindustrial temperatures by 2050, when combined with serious greenhouse-gas reductions. Lenton’s team judges stratospheric particle dispersal to also carry the most risk, because the particles would be both highly effective and short acting. 
Any interruption in the particle deployment (if, for example, we fell behind on the 135,000 space launches per year required to maintain an effective sunshade) would unleash extremely rapid warming. Next up are enhanced carbon sinks, such as burying carbon-rich charcoal (i.e., “bio-char”). What New Scientist calls “burn it and bury it” in its coverage of UEA’s geoengineering rankings could cut atmospheric CO2 to preindustrial levels. But not before 2100 and, again, only when combined with strong mitigation of CO2 emissions. Schemes that fail their back-of-the-envelope calculations include ocean fertilizing: phosphorus pollution from farms and laundries may already stimulate more carbon sequestration than proposed schemes to deliberately seed the ocean with iron or nitrogen. Making cities more reflective also comes up short: the UEA team says that this could make cities more livable but would have “minimal global effect.” What do you think? Should we be looking for more from geoengineering?
<urn:uuid:2ae098e2-0dad-40fa-8ff1-8ea39f7c11ed>
3.34375
633
News Article
Science & Tech.
26.598918
95,601,992
The process begins when CO2 (carbon dioxide; its old name was carbonic acid gas) dissolves in rainwater to form carbonic acid. Rainwater containing carbonic acid is able to react with most minerals at varying rates according to their chemical stability. Susie Welch, recently retired outreach coordinator at the New Mexico Bureau of Geology, was recognized by Governor Susana Martinez for outstanding accomplishments and invaluable contributions to the state of New Mexico. Congratulations to Susie on the well-deserved citation! Because uranium is radioactive, it is constantly emitting particles and changing into other elements. The ratio of carbon-12 to carbon-14 at the moment of death is the same as in every other living thing, but the carbon-14 decays and is not replaced. The carbon-14 decays with its half-life of 5,700 years, while the amount of carbon-12 remains constant in the sample. Radiometric dating: a method for determining the age of an object based on the concentration of a particular radioactive isotope contained within it. For inorganic materials, such as rocks containing the radioactive isotope rubidium, the amount of the isotope in the object is compared to the amount of the isotope's decay products (in this case strontium). There are a number of other assumptions implicit in the calculation.
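The carbon-14 bookkeeping above amounts to a one-line formula. A hedged sketch (the function name is invented; the 5,700-year half-life is the figure quoted in the text):

```python
import math

HALF_LIFE_C14 = 5700.0  # years, as quoted above

def age_from_fraction(remaining_fraction):
    """Age of a sample from the fraction of the original carbon-14 left.

    N/N0 = (1/2) ** (t / half_life)  =>  t = half_life * log2(N0 / N).
    """
    return HALF_LIFE_C14 * math.log2(1.0 / remaining_fraction)

print(age_from_fraction(0.5))   # one half-life:  5700.0 years
print(age_from_fraction(0.25))  # two half-lives: 11400.0 years
```

The carbon-12 amount stays constant, so the measured C-14/C-12 ratio divided by the living-tissue ratio gives the remaining fraction fed to this function.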
<urn:uuid:84b1d63a-aa2f-4747-8e9d-4b234a8d50e3>
3.5625
328
Knowledge Article
Science & Tech.
35.445554
95,601,997
Almost lost in translation. Cryo-EM of a dynamic macromolecular complex: the ribosome Ribosomes are dynamic biological machines that perform numerous tasks during translation, the biosynthesis of proteins. Translocation, the movement of transfer RNAs (tRNAs) and messenger RNA (mRNA) that progresses the reading frame of codons in the mRNA, takes place after the addition of each amino acid. This process involves large ribosome conformational changes, in which tRNAs proceed through intermediate states. The structural characterization of these translocation intermediates has remained elusive. Cryo-electron microscopy (cryo-EM) produces three-dimensional averages, yet translocating ribosomes adopt distinct conformational states and hence form structurally heterogeneous populations. During the last decade, the quest to visualize translocation intermediates has progressed together with the development of classification tools in cryo-EM. Some of these new tools have recently been tested on ribosomal translocation, uncovering a clearer picture of the process. This success goes along with the latest advances in cryo-EM and illustrates how the technique offers multiple possibilities for studying macromolecular complexes engaged in dynamic reactions. Keywords: Cryo-electron microscopy; 70S ribosome; Translocation; Classification; Hybrid tRNA This work is supported by the Department of Industry, Tourism and Trade of the Government of the Autonomous Community of the Basque Country, and the Innovation Technology Department of Bizkaia County.
<urn:uuid:ddae80ea-cc26-4b0e-b48e-ab05fc113009>
2.53125
314
Academic Writing
Science & Tech.
1.682224
95,602,027
New findings from NEOWISE, the asteroid- and comet-hunting portion of NASA's Wide-field Infrared Survey Explorer mission, show that comet Hartley 2 leaves a pebbly trail as it laps the sun, dotted with grains as big as golf balls. IAUC nr.9220, issued on 2011, July 07, announces the discovery of a new comet (discovery magnitude 17.9) by R. H. McNaught with the 0.5-m Uppsala Schmidt telescope at Siding Spring, on images obtained on 2011, July 04.46. The new comet has been designated C/2011 N2 (McNAUGHT). IAUC nr.9219, issued on 2011, July 07, announces that an apparently asteroidal object (discovery magnitude 19.9) reported by Ignacio de la Cueva, Ibiza, Spain (from exposures taken by J. L. Ortiz, P. Santos-Sanz, N. Morales, and himself with a 0.40-m f/3.7 reflector at San Pedro de Atacama, Chile) was found to show cometary appearance after initial posting on the Minor Planet Center NEOCP webpage. The new comet has been designated P/2011 N1. A new bright comet diving into the Sun is visible right now in C3 and C2 images taken by the SOHO spacecraft. IAUC nr.9218, issued on 2011, June 25, announces the discovery of a new comet (discovery magnitude 18.6) by the LINEAR survey with a 1.0-m f/2.15 reflector + CCD, on images obtained on 2011, June 22.4. The new comet has been designated C/2011 M1 (LINEAR). Nearly one year ago, a repurposed NASA spacecraft flew by the comet Hartley 2. As a result, a multitude of high-resolution images were gathered over 50 days that allow scientists to understand the nature of the comet's surface and its hidden interior. Comet Hartley 2's hyperactive state, as studied by NASA's EPOXI mission, is detailed in a new paper published in this week's issue of the journal Science. IAUC nr.9215, issued on 2011, June 08, announces the discovery of a new comet (discovery magnitude 19.4) on four CCD images taken with the 1.8-m Pan-STARRS 1 telescope at Haleakala, on images obtained on 2011, June 6.4. The new comet has been designated C/2011 L4 (PANSTARRS).
The final command placing ESA's Rosetta comet-chaser into deep-space hibernation was sent June 8, 2011. With virtually all systems shut down, the probe will now coast for 31 months until waking up in 2014 for arrival at its comet destination. C/1680 V1, also called the Great Comet of 1680, Kirch's Comet, and Newton's Comet, has the distinction of being the first comet discovered by telescope. Discovered by Gottfried Kirch on 14 November 1680, New Style, it became one of the brightest comets of the 17th century (reputedly visible even in daytime) and was noted for its spectacularly long tail.
<urn:uuid:2371b5f6-d1dc-44c6-a245-ce3f14cc2c99>
2.84375
680
Content Listing
Science & Tech.
70.68832
95,602,059
The result of every possible measurement on a quantum system is coded in its wave function, which until recently could be found only by taking many different measurements of a system and estimating a wave function that best fit all those measurements. Just two years ago, with the advent of a technique called direct measurement, scientists discovered they could reliably determine a system’s wave function by “weakly” measuring one of its variables (e.g. position) and “strongly” measuring a complementary variable (momentum). Researchers at the University of Rochester have now taken this method one step forward by combining direct measurement with an efficient computational technique. The new method, called compressive direct measurement, allowed the team to reconstruct a quantum state at 90 percent fidelity (a measure of accuracy) using only a quarter of the measurements required by previous methods. “We have, for the first time, combined weak measurement and compressive sensing to demonstrate a revolutionary, fast method for measuring a high-dimensional quantum state,” said Mohammad Mirhosseini, a graduate student in the Quantum Photonics research group at the University of Rochester and lead author of a paper appearing today in Physical Review Letters. The research team, which also included graduate students Omar Magaña-Loaiza and Seyed Mohammad Hashemi Rafsanjani, and Professor Robert Boyd, initially tested their method on a 192-dimensional state. Finding success with that large state, they then took on a massive, 19,200-dimensional state. Their efficient technique sped up the process 350-fold and took just 20 percent of the total measurements required by traditional direct measurement to reconstruct the state. “To reproduce our result using a direct measurement alone would require more than one year of exposure time,” said Rafsanjani. 
“We did the experiment in less than 48 hours.” While recent compressive sensing techniques have been used to measure sets of complementary variables like position and momentum, Mirhosseini explains that their method allows them to measure the full wave function. Compression is widely used in the classical world of digital media, including recorded music, video, and pictures. The MP3s on your phone, for example, are audio files that have had bits of information squeezed out to make the file smaller at the cost of losing a small amount of audio quality along the way. In digital cameras, the more pixels you can gather from a scene, the higher the image quality and the larger the file will be. But it turns out that most of those pixels don’t convey essential information that needs to be captured from the scene. Most of them can be reconstructed later. Compressive sensing works by randomly sampling portions from all over the scene, and using those patterns to fill in the missing information. Similarly for quantum states, it is not necessary to measure every single dimension of a multidimensional state. It takes only a handful of measurements to get a high-quality image of a quantum system. The method introduced by Mirhosseini et al. has important potential applications in the field of quantum information science. This research field strives to make use of fundamental quantum effects for diverse applications, including secure communication, teleportation of quantum states, and ideally to perform quantum computation. This latter process holds great promise as a method that can, in principle, lead to a drastic speed-up of certain types of computation. All of these applications require the use of complicated quantum states, and the new method described here offers an efficient means to characterize these states. Research funding was provided by the Defense Advanced Research Projects Agency’s (DARPA) Information in a Photon (InPho) program, U.S. 
Defense Threat Reduction Agency (DTRA), National Science Foundation (NSF), El Consejo Nacional de Ciencia y Tecnología (CONACYT) and Canadian Excellence Research Chair (CERC).
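The sampling idea can be illustrated with a toy model (pure Python; the sizes, names, and brute-force recovery routine are invented for this sketch and stand in for the convex solvers used in real compressive sensing): a signal with one nonzero coefficient in eight dimensions is recovered exactly from only three random linear measurements.

```python
import random

def measure(A, x):
    """Apply the m x n sensing matrix A to the signal x (y = A x)."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def recover_1sparse(A, y, n):
    """Brute-force recovery of a 1-sparse signal: try each support index."""
    for i in range(n):
        col = [row[i] for row in A]
        denom = sum(c * c for c in col)
        amp = sum(c * yy for c, yy in zip(col, y)) / denom  # least squares
        residual = sum((yy - amp * c) ** 2 for c, yy in zip(col, y))
        if residual < 1e-12:                # this support explains the data
            x = [0.0] * n
            x[i] = amp
            return x
    return None

random.seed(0)
n, m = 8, 3                                  # 8-dim signal, 3 measurements
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[5] = 2.5                              # the lone nonzero coefficient
y = measure(A, x_true)
print(recover_1sparse(A, y, n))              # recovers x_true
```

With random measurement directions, a wrong support almost surely leaves a large residual, so the correct coefficient is identified from far fewer samples than the signal's dimension, which is the essence of the compression described above.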
<urn:uuid:af8662d7-8563-418d-b2a4-7e8299e01b8a>
3.265625
1,462
Content Listing
Science & Tech.
34.257649
95,602,064
In mathematics and statistics, the arithmetic mean, or simply the mean or average, is the sum of a collection of numbers divided by the count of numbers in the collection. In addition to mathematics and statistics, the arithmetic mean is used frequently in fields such as economics, sociology, and history, and it is used in almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population. While the arithmetic mean is often used to report central tendencies, it is not a robust statistic, meaning that it is greatly influenced by outliers (values that are very much larger or smaller than most of the values). Notably, for skewed distributions, such as the distribution of income for which a few people's incomes are substantially greater than most people's, the arithmetic mean may not accord with one's notion of "middle", and robust statistics, such as the median, may be a better description of central tendency. The arithmetic mean (or mean or average) is the most commonly used and readily understood measure of central tendency. In statistics, the term average refers to any of the measures of central tendency. The arithmetic mean is defined as being equal to the sum of the numerical values of each and every observation divided by the total number of observations. Symbolically, if we have a data set containing the values x1, x2, …, xn, then the arithmetic mean x̄ is defined by the formula: x̄ = (x1 + x2 + ⋯ + xn)/n. For example, let us consider the monthly salary of 10 employees of a firm: 2500, 2700, 2400, 2300, 2550, 2650, 2750, 2450, 2600, 2400. The arithmetic mean is (2500 + 2700 + 2400 + 2300 + 2550 + 2650 + 2750 + 2450 + 2600 + 2400)/10 = 25300/10 = 2530. If the data set is a statistical population (i.e., consists of every possible observation and not just a subset of them), then the mean of that population is called the population mean. If the data set is a statistical sample (a subset of the population), we call the statistic resulting from this calculation a sample mean.
The arithmetic mean of a variable is often denoted by a bar, for example as in x̄ (read "x bar"), which is the mean of the values x1, x2, …, xn. The arithmetic mean has several properties that make it useful, especially as a measure of central tendency. These include: - If numbers x1, …, xn have mean x̄, then (x1 − x̄) + (x2 − x̄) + ⋯ + (xn − x̄) = 0. Since xi − x̄ is the distance from a given number to the mean, one way to interpret this property is as saying that the numbers to the left of the mean are balanced by the numbers to the right of the mean. The mean is the only single number for which the residuals (deviations from the estimate) sum to zero. - If it is required to use a single number as a "typical" value for a set of known numbers x1, …, xn, then the arithmetic mean of the numbers does this best, in the sense of minimizing the sum of squared deviations from the typical value: the sum of (xi − x̄)². (It follows that the sample mean is also the best single predictor in the sense of having the lowest root mean squared error.) If the arithmetic mean of a population of numbers is desired, then the estimate of it that is unbiased is the arithmetic mean of a sample drawn from the population. Contrast with median The arithmetic mean may be contrasted with the median. The median is defined such that no more than half the values are larger than, and no more than half are smaller than, the median. If elements in the sample data increase arithmetically, when placed in some order, then the median and arithmetic average are equal. For example, consider the data sample {1, 2, 3, 4}. The average is 2.5, as is the median. However, when we consider a sample that cannot be arranged so as to increase arithmetically, such as {1, 2, 4, 8, 16}, the median and arithmetic average can differ significantly. In this case, the arithmetic average is 6.2 and the median is 4. In general, the average value can vary significantly from most values in the sample, and can be larger or smaller than most of them. There are applications of this phenomenon in many fields.
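The mean-versus-median contrast can be checked with Python's statistics module. The two samples below are illustrative stand-ins (the article's original samples were lost in extraction) chosen to match the quoted figures of 6.2 and 4:

```python
import statistics

arithmetic_sample = [1, 2, 3, 4]   # increases arithmetically
skewed_sample = [1, 2, 4, 8, 16]   # cannot be arranged arithmetically

# For the arithmetic progression, mean and median coincide.
print(statistics.mean(arithmetic_sample), statistics.median(arithmetic_sample))  # 2.5 2.5

# For the skewed sample they differ: mean 6.2, median 4.
print(statistics.mean(skewed_sample), statistics.median(skewed_sample))  # 6.2 4
```

Doubling the largest element would pull the mean further right while leaving the median untouched, which is the non-robustness described earlier.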
For example, since the 1980s, the median income in the United States has increased more slowly than the arithmetic average of income. A weighted average, or weighted mean, is an average in which some data points count more strongly than others, in that they are given more weight in the calculation. For example, the arithmetic mean of two numbers x1 and x2 is (x1 + x2)/2, or equivalently x1·(1/2) + x2·(1/2). In contrast, a weighted mean in which the first number receives, for example, twice as much weight as the second (perhaps because it is assumed to appear twice as often in the general population from which these numbers were sampled) would be calculated as x1·(2/3) + x2·(1/3). Here the weights, which necessarily sum to the value one, are 2/3 and 1/3, the former being twice the latter. Note that the arithmetic mean (sometimes called the "unweighted average" or "equally weighted average") can be interpreted as a special case of a weighted average in which all the weights are equal to each other (equal to 1/2 in the above example, and equal to 1/n in a situation with n numbers being averaged). Continuous probability distributions When a population of numbers, and any sample of data from it, could take on any of a continuous range of numbers, instead of for example just integers, then the probability of a number falling into one range of possible values could differ from the probability of falling into a different range of possible values, even if the lengths of both ranges are the same. In such a case, the set of probabilities can be described using a continuous probability distribution. The analog of a weighted average in this context, in which there are an infinitude of possibilities for the precise value of the variable, is called the mean of the probability distribution. The most widely encountered probability distribution is called the normal distribution; it has the property that all measures of its central tendency, including not just the mean but also the aforementioned median and the mode, are equal to each other.
This property does not hold, however, for a great many probability distributions, such as the lognormal distribution illustrated here. Particular care must be taken when using cyclic data, such as phases or angles. Naïvely taking the arithmetic mean of 1° and 359° yields a result of 180°. This is incorrect for two reasons:
- Firstly, angle measurements are only defined up to an additive constant of 360° (or 2π, if measuring in radians). Thus one could as easily call these 1° and −1°, or 361° and 719°, each of which gives a different average.
- Secondly, in this situation, 0° (equivalently, 360°) is geometrically a better average value: there is lower dispersion about it (the points are both 1° from it, and 179° from 180°, the putative average).
In general application, such an oversight will lead to the average value artificially moving towards the middle of the numerical range. A solution to this problem is to use the optimization formulation (viz., define the mean as the central point: the point about which one has the lowest dispersion), and redefine the difference as a modular distance (i.e., the distance on the circle: so the modular distance between 1° and 359° is 2°, not 358°). See also:
- Fréchet mean
- Generalized mean
- Geometric mean
- Harmonic mean
- Sample mean and covariance
- Standard error of the mean
- Summary statistics
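The cyclic-data pitfall described above is easy to demonstrate numerically. A short sketch (function names invented) comparing the naive arithmetic mean of two angles with a circular mean computed by averaging unit vectors:

```python
import math

def naive_mean(angles_deg):
    """Plain arithmetic mean -- misleading for cyclic data."""
    return sum(angles_deg) / len(angles_deg)

def circular_mean(angles_deg):
    """Average the angles as unit vectors, then take the angle back."""
    x = sum(math.cos(math.radians(a)) for a in angles_deg)
    y = sum(math.sin(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(y, x)) % 360.0

print(naive_mean([1, 359]))     # 180.0 -- the putative but wrong average
print(circular_mean([1, 359]))  # ~0 (mod 360) -- the geometrically sensible answer
```

Averaging the unit vectors is one concrete realization of the lowest-dispersion formulation: the resultant vector points at the angle about which the modular distances are smallest.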
<urn:uuid:b341537a-ecf0-4bc3-9be5-652312e0240f>
4.1875
1,639
Knowledge Article
Science & Tech.
43.854366
95,602,072
Authors: George Rajna CfA astronomer Qirong Zhu led a group of four scientists investigating the possibility that today's dark matter is composed of primordial black holes, following up on previously published suggestions. If galaxy halos are made of black holes, they should have a different density distribution than halos made of exotic particles. A signal caused by the very first stars to form in the universe has been picked up by a tiny but highly specialised radio telescope in the remote Western Australian desert. This week, scientists from around the world who gathered at the University of California, Los Angeles, at the Dark Matter 2018 Symposium learned of new results in the search for evidence of the elusive material in Weakly Interacting Massive Particles (WIMPs) by the DarkSide-50 detector. If they exist, axions, among the candidates for dark matter particles, could interact with the matter comprising the universe, but to a much weaker extent than previously theorized. New, rigorous constraints on the properties of axions have been proposed by an international team of scientists. The intensive, worldwide search for dark matter, the missing mass in the universe, has so far failed to find an abundance of dark, massive stars or scads of strange new weakly interacting particles, but a new candidate is slowly gaining followers and observational support. "We invoke a different theory, the self-interacting dark matter model or SIDM, to show that dark matter self-interactions thermalize the inner halo, which ties ordinary dark matter and dark matter distributions together so that they behave like a collective unit." Technology proposed 30 years ago to search for dark matter is finally seeing the light. They're looking for dark matter—the stuff that theoretically makes up a quarter of our universe. Comments: 46 Pages. [v1] 2018-04-20 10:40:26
Usually ships within 7-15 days.

More than three-quarters of a million programmers have benefited from this book in all of its editions. Written by Bjarne Stroustrup, the creator of C++, this is the world's most trusted and widely read book on C++. For this special hardcover edition, two new appendixes on locales and standard library exception safety (also available at www.research.att.com/~bs/) have been added. The result is complete, authoritative coverage of the C++ language, its standard library, and key design techniques.

Based on the ANSI/ISO C++ standard, The C++ Programming Language provides current and comprehensive coverage of all C++ language features and standard library components. For example:

* abstract classes as interfaces
* class hierarchies for object-oriented programming
* templates as the basis for type-safe generic software
* exceptions for regular error handling
* namespaces for modularity in large-scale software
* run-time type identification for loosely coupled systems
* the C subset of C++ for C compatibility and system-level work
* standard containers and algorithms
* standard strings, I/O streams, and numerics
* C compatibility, internationalization, and exception safety

Bjarne Stroustrup makes C++ even more accessible to those new to the language, while adding advanced information and techniques that even expert C++ programmers will find invaluable.
Gas may lie near fast-spreading tectonic plates on the seafloor

Rocks formed beneath the ocean floor by fast-spreading tectonic plates may be a large and previously overlooked source of free hydrogen gas (H2), a new Duke University study suggests. The finding could have far-ranging implications, since scientists believe H2 might be the fuel source responsible for triggering life on Earth. And, if it were found in large enough quantities, some experts speculate that it could be used as a clean-burning substitute for fossil fuels today, because it gives off high amounts of energy when burned but emits only water, not carbon. Recent discoveries of free hydrogen gas, which was once thought to be very rare, have been made near slow-spreading tectonic plates deep beneath Earth's continents and under the sea. "Our model, however, predicts that large quantities of H2 may also be forming within faster-spreading tectonic plates -- regions that collectively underlie roughly half of the Mid-Ocean Ridge," said Stacey L. Worman, a postdoctoral fellow at the University of Texas at Austin, who led the study while she was a doctoral student at Duke's Nicholas School of the Environment. Total H2 production occurring beneath the oceans is at least an order of magnitude larger than production occurring under continents, the model suggests. "A major benefit of this work is that it provides a testable, tectonic-based model for not only identifying where free hydrogen gas may be forming beneath the seafloor, but also at what rate, and what the total scale of this formation may be, which on a global basis is massive," said Lincoln F. Pratson, professor of earth and ocean sciences at Duke, who co-authored the study. The scientists published their peer-reviewed study in the July 14 online edition of the journal Geophysical Research Letters.
The new model calculates the amount of free hydrogen gas produced and stored beneath the seafloor based on a range of parameters -- including the ratio of a site's tectonic spreading rate to the thickness of serpentinized rocks that might be found there. Serpentinized rocks -- so called because they often have a scaly, greenish-brown-patterned surface that resembles snakeskin -- are rocks that have been chemically altered by water as they are lifted up by the spreading tectonic plates in Earth's crust. Molecules of free hydrogen gas are produced as a by-product of the serpentinization process. "Most scientists previously thought all hydrogen production occurs only at slow-spreading lithosphere, because this is where most serpentinized rocks are found. Although faster-spreading lithosphere contains smaller quantities of this rock, our analysis suggests the amount of H2 produced there might still be large," Worman said. "Right now, the only way to get H2 -- to use in fuel cells, for example -- is through secondary processes," Worman explained. "You start with water, add energy to split the oxygen and hydrogen molecules apart, and get H2. You can then burn the H2, but you had to use energy to get energy, so it's not very efficient." Mining free hydrogen gas as a primary fuel source could change that, but first scientists need to understand where the gas goes after it's produced. "Maybe microbes are eating it, or maybe it's accumulating in reservoirs under the seafloor. We still don't know," Worman said. "Of course, such accumulations would have to be quite significant to make hydrogen gas produced by serpentinization a viable fuel source." If further research confirms the model's accuracy, it could also open new avenues for exploring the origin of life on Earth, and for understanding the role hydrogen gas might play in supporting life in a wide range of extreme environments, from the sunless deep-sea floor to distant planets. 
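Worman's point that electrolysis "uses energy to get energy" can be sketched with a small energy-balance calculation. The numbers below are illustrative assumptions, not from the study: roughly 286 kJ/mol is the standard heat of combustion of hydrogen, and the electrolyzer efficiency is a hypothetical round figure.

```python
# Illustrative sketch (assumed numbers, not the study's): burning H2 made by
# electrolysis always returns less energy than was spent splitting the water.
H2_HHV_KJ_PER_MOL = 286   # heat released by burning one mole of H2 (higher heating value)
ELECTROLYZER_EFF = 0.7    # assumed electrolyzer efficiency (hypothetical)

# Electrical energy spent per mole of H2 produced by electrolysis:
energy_in = H2_HHV_KJ_PER_MOL / ELECTROLYZER_EFF

# Energy recovered by burning that mole of H2:
energy_out = H2_HHV_KJ_PER_MOL

assert energy_out < energy_in  # net loss: secondary production is inefficient
```

Mining naturally produced H2 would sidestep the `energy_in` term entirely, which is why a primary geological source is attractive.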
Worman and Pratson conducted the study with Jeffrey Karson, professor of earth sciences at Syracuse University, and Emily Klein, professor of earth sciences at Duke. Worman received her Ph.D. in earth and ocean sciences from Duke in 2015. CITATION: "Global Rate and Distribution of H2 Gas Produced by Serpentinization within Oceanic Lithosphere," Stacey L. Worman, Lincoln F. Pratson, Jeffrey Karson, Emily Klein. Geophysical Research Letters, July 14, 2016. DOI: 10.1002/2016GL069066 Tim Lucas | EurekAlert!
Morphology, Anatomy, and Ultrastructure of CAM Plants

Part of the Ecological Studies book series (ECOLSTUD, volume 30)

CAM is usually regarded as a typical feature of succulents because of its occurrence in many succulent species. However, two questions must be asked: Is the occurrence of CAM restricted to succulents? Do all succulents exhibit CAM?

Keywords: Malic Acid, Mesophyll Cell, Water Tissue, Surface Expansion, Pith Tissue

© Springer-Verlag Berlin Heidelberg 1978
Technological Advancements Support Sustainability

The 2014 numbers are in and it's great news for alternative energy. All the primary sources of clean, renewable energy are being developed and consumed globally at greater levels. As the respected Worldwatch Institute cites in its global status report, released in June 2015, renewable energy continued to grow during 2014 against the backdrop of increasing global energy consumption, particularly in developing countries, and a dramatic decline in oil prices. The report notes that there is growing awareness around the world that increased development of renewable energy is critical for addressing climate change, stimulating economic development, and making energy accessible to billions of people still in need of modern energy services. Efficiency is critical as the world makes strides toward sustainability. This is true not only for how societies consume energy, but also in project development and management. There is a wide variety of equipment and technological capabilities, including infrastructure enhancements and web-based software, to increase efficiencies in project management and help advance the continued development of renewable energy. Founded in 1974 as an independent research institute devoted to global environmental concerns, Worldwatch Institute is recognized for its foresight and accessible, fact-based analysis. Through research and outreach that inspire action, Worldwatch seeks to accelerate the transition to sustainability.

Stable carbon emissions

The organization's new report offers the startling and encouraging observation that for the first time in decades, despite rising energy use, global carbon emissions associated with energy consumption remained stable in 2014. This occurred even as the global economy grew. This stabilization is attributed to increased development of renewable sources of energy and improvements in energy efficiency.
Worldwatch reports that 135 gigawatts (GW) of renewable energy power were added in 2014, increasing the total installed capacity to 1,712 GW. The added capacity in 2014 was an 8.5 percent increase from the previous year. More than one quarter of the world's electricity is generated through renewable, clean energy. By 2014's end, global renewable electricity generation comprised approximately 27.7 percent of total capacity. Hydropower leads at 16.6 percent, followed by wind at 3.1 percent and biopower at 1.8 percent. Geothermal and solar were less. New investment in renewable power and fuels increased 17 percent over 2013, to $270.2 billion. The increased development of renewables also is creating jobs. In 2014, an estimated 7.7 million people worldwide worked directly or indirectly in the renewables sector. Solar power supports the most jobs at 3.2 million, followed by bioenergy at 3 million and wind power at 1 million. Worldwatch reports that growth in renewables is driven by several factors, including renewable energy support policies and the increasing cost-competitiveness of renewable energy. In the United States, an example of an energy support policy is the Production Tax Credit that provides federal tax incentives for the development of energy from wind. Helping to make renewables more cost-competitive is technology that standardizes organizational processes, streamlines project management and improves the performance and longevity of infrastructure. Cloud-based software is one technological advancement that can pay dividends in the development and management of a wide variety of renewable energy projects, from wind farms, solar farms and geothermal plants to hydropower and biomass facilities. Standardized organizational processes, error elimination and other efficiencies provide opportunities for completion of projects at reduced costs. That translates into more capacity in total electricity generation from renewables.
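The capacity figures quoted above are internally consistent, as a quick arithmetic check (my calculation, not Worldwatch's) shows:

```python
# Consistency check of the reported 2014 Worldwatch capacity figures.
added_gw = 135            # renewable capacity added in 2014
total_gw = 1712           # total installed capacity at the end of 2014

prior_gw = total_gw - added_gw   # implied end-of-2013 capacity
growth = added_gw / prior_gw     # year-on-year growth rate

assert prior_gw == 1577
assert abs(growth - 0.085) < 0.002  # matches the reported ~8.5 percent increase
```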
That helps the global community increase its commitment to renewable energy and decrease dependence on fossil fuels and foreign sources of energy. More advances in technology and equipment have been introduced to the wind energy industry. Power output sizes and ratings have increased. Towers are taller. Blades are longer. Rotor diameters have increased. Gearboxes, generators and bearings are more reliable. Onboard sensors are more effective at measuring and recording data. These and other improvements have increased the energy capture of turbines and made their operation more efficient. That, in turn, provides important incentive for developing more wind farms, and makes their financing more attractive to banks and other loan institutions. Technology improvements also have helped spur development of solar energy. A number of considerations come into play when developing utility-scale solar energy infrastructure. These include environmental issues such as land disturbance, as well as impacts to land use, specially designated areas, soil, water, air, vegetation, and wildlife. While solar power facilities reduce the environmental impacts of combustion used in fossil-fuel power generation, there are some adverse impacts associated with utility-scale solar facilities. For example, large solar panel fields require large areas for solar radiation collection. Such facilities can interfere with existing land uses, such as grazing, wild horse management and minerals production. Construction of these facilities requires the clearing and grading of large areas, which can result in soil compaction, alteration of drainage channels and increased erosion. There can be ecological impacts, such as harm to wildlife and interference with rainfall and drainage. Cultural and paleontological artifacts and cultural landscapes may be disturbed. Geographical information system (GIS) mapping provides essential information on the project area.
Developers gain data on each parcel, including ownership and other title information. GIS reveals necessary information about topography, existing infrastructure, existing utilities, land use and environmental assessments. Planners can review and monitor the project area through color-coded maps and layers that provide 3-D visualization. By providing an integrated geospatial view of an organization's planned utility-scale solar panel field, GIS helps facilitate the siting of such a project, as well as its development and ongoing operation. Infrastructure advancements are helping to spur growth of the solar industry. Scientists at the University of Toledo recently unveiled a new type of light-sensitive nanoparticle called colloidal quantum dots. Many believe this offers a less expensive and more flexible material for solar cells, the key piece in solar PV systems. The new materials use n-type and p-type semiconductors. Plus, they can function outdoors. Panels using this new technology were found to be up to 8 percent more efficient at converting sunlight. There also have been advances in energy storage. These improvements are important because currently electricity generated through solar power is largely a "use it or lose it" resource. One such improvement is a new battery developed at The Ohio State University. This battery is 20 percent more efficient and 25 percent cheaper than other batteries on the market. Perhaps best of all is that it is a rechargeable battery built into the solar panel itself. The capabilities mentioned here are just a small sampling of the technological advancements helping to add energy capacity globally from renewable sources. Undoubtedly, as these capabilities are utilized and additional advancements developed, the Worldwatch Institute will be compiling annual reports in the future that show an even greater impact by renewables on global energy generation and consumption. Photo Credit: Sustainability and Tech/shutterstock
Increasing public awareness of environmental pollution influences the search for and development of technologies that help clean up organic and inorganic contaminants such as metals. Sludge from the paper industry is a toxic and hazardous waste from a specific source, containing Pb, Zn, and Cu metals derived from waste soluble ink. An alternative and eco-friendly remediation technology is the use of biosurfactants and biosurfactant-producing microorganisms. Soil washing is among the methods available to remove heavy metals from sediments. The purpose of this research is to study the effectiveness of biosurfactants, at a concentration equal to the critical micelle concentration (CMC), for the removal of the heavy metals lead, zinc, and copper in batch washing tests. Biosurfactants were produced by four different microorganisms: Pseudomonas putida T1(8), Bacillus subtilis 3K, Acinetobacter sp., and Actinobacillus sp., each grown on a mineral salt medium supplemented with 2% molasses, a low-cost substrate. The samples were kept in a shaker at 120 rpm at room temperature for 3 days. Supernatants and sediments of the sludge were separated using a centrifuge, and samples from the supernatants were measured by atomic absorption spectrophotometry. The highest removal of Pb, up to 14.04%, was achieved by Acinetobacter sp. The biosurfactant of Pseudomonas putida T1(8) achieved the highest removal of Zn and Cu, up to 6.5% and 2.01% respectively. Biosurfactants play a role in the removal process through wetting, contact of the biosurfactant with the sediment surface, and detachment of the metals from the sediment. Biosurfactants have proven their ability as washing agents for heavy-metal removal from sediments, but more research is needed to optimize the removal process.
HD 164595

Position of star HD 164595 in the constellation Hercules (epoch J2000, equinox J2000)
Right ascension: 18h 00m 38.894s
Declination: +29° 34′ 18.92″
Apparent magnitude (V): 7.08
Spectral type: G2V D
Surface gravity (log g): 4.44 ± 0.05 cgs
Temperature: 5790 ± 40 K
Metallicity [Fe/H]: −0.06 dex

HD 164595 is a G-type star located in the constellation of Hercules, 28.927 parsecs (94.35 light-years) from Earth, that is notably similar to the Sun. With an apparent magnitude of 7.075, the star can be found with binoculars or a small telescope in the constellation Hercules. HD 164595 has one known planet, HD 164595 b, which orbits HD 164595 every 40 days. It was detected with the radial velocity technique using the SOPHIE echelle spectrograph. The planet has a minimum mass equivalent to 16 Earths. The star has the same stellar classification as the Sun: G2V. It has a similar temperature, at 5790 K compared with 5778 K for the Sun, a slightly lower metallicity, at −0.06 compared with 0.00, and a slightly younger age, at 4.5 versus 4.6 billion years.

In 2016, HD 164595 briefly attracted media attention after it was reported that a possible SETI signal had been detected from the direction of the star in the previous year. The signal was only heard once and never confirmed by other telescopes, and is thought to have been due to terrestrial interference.

Signal observation and SETI

On 15 May 2015, a brief, single radio signal at 11 GHz (2.7 cm wavelength) was observed in the direction of HD 164595 by a team led by N. N. Bursov and involving Claudio Maccone at the RATAN-600 radio observatory. The signal may be due to terrestrial radio-frequency interference or gravitational lensing from a more distant source. It was observed only once (for two seconds), by a single team, at a single telescope, giving it a Rio Scale score of 1 (insignificant) or 2 (low).
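The two distance figures quoted above (28.927 parsecs and 94.35 light-years) agree, as a quick unit conversion (my arithmetic, not from the article) confirms:

```python
# Cross-check: convert the quoted distance of HD 164595 from parsecs to light-years.
LY_PER_PARSEC = 3.26156  # standard conversion factor

distance_pc = 28.927
distance_ly = distance_pc * LY_PER_PARSEC

assert abs(distance_ly - 94.35) < 0.05  # matches the quoted 94.35 light-years
```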
Discussions in the media from 29 August 2016 onwards featured speculation that the signal could be caused by an isotropic beacon from a Type II civilization. The senior astronomer of the SETI Institute, Seth Shostak, stated that confirmation by another telescope is required. Astronomer Nicholas Suntzeff of Texas A&M University stated that the signal is in a military frequency band, and that it could have been a satellite downlink, implying that some such systems may be kept secret and therefore would be unknown to SETI scientists. SETI and METI studies followed with the Allen Telescope Array and the Boquete Optical SETI Observatory. Also, scientists at the Berkeley SETI Research Center at the University of California, Berkeley observed HD 164595 using the Green Bank Telescope as part of the Breakthrough Listen program. No signal was detected at the position and frequency of the transient reported by the RATAN group.

See also
- Arecibo message, a 3-minute-long message sent into space
- KIC 8462852, "Tabby's star"
- Wow! signal, possible alien radio signal
- HD 162826
NGC 5643

NGC 5643 by Hubble Space Telescope

Observation data (J2000 epoch)
Right ascension: 14h 32m 40.7s
Declination: −44° 10′ 28″
Redshift: 1199 ± 2 km/s
Distance: 55 Mly (16.9 Mpc)
Apparent magnitude (V): 10.7
Apparent size (V): 4′.6 × 4′.0
Other designations: ESO 272-G 016, MCG -07-30-003, PGC 51969

NGC 5643 is an intermediate spiral galaxy in the constellation Lupus. It is located at a distance of circa 60 million light-years from Earth, which, given its apparent dimensions, means that NGC 5643 is about 100,000 light-years across. NGC 5643 has an active galactic nucleus and is a type II Seyfert galaxy.

The galaxy was first reported by James Dunlop on May 10, 1826, with his 9-inch reflector telescope; he described it as exceedingly faint. The galaxy was also spotted by John Herschel, who included it in the General Catalogue of Nebulae and Clusters as number 3572. The galaxy is located only 15 degrees from the galactic plane.

NGC 5643 is a grand design spiral galaxy, with two well-defined, symmetric arms. Other dust spirals are present in the circumnuclear region, but the two main dust arms are wider. The galaxy is seen nearly face-on, at an inclination of ∼27°.

Active galactic nucleus

The galaxy has a low-luminosity active galactic nucleus of Seyfert 2 type and is also a luminous infrared galaxy. The galaxy has a double-sided diffuse radio jet. The galaxy exhibits an extended emission-line region elongated in a direction close to the radio position angle of 87°±3°. Chris Simpson et al. analysed images taken with the WFPC2 camera of the Hubble Space Telescope in [O III] λ5007 and Hα and found emission extending eastward for at least 1.8 kpc, and in the [O III]/Hα map a well-defined V-shaped structure that they identified as the projection of a three-dimensional ionisation cone, which shares the same axis with the radio emission. A dust lane perpendicular to this axis obstructs the nucleus from direct view. A disk of material was found when the VLT data cubes were analysed.
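The quoted physical size follows from the distance and apparent size via the small-angle approximation. The sketch below is my own arithmetic, not from the article; it lands at the same order of magnitude as the quoted ~100,000 light-years:

```python
import math

# Rough cross-check of the galaxy's physical diameter:
# size ~ distance * angle(rad), the small-angle approximation.
distance_mly = 60        # "circa 60 million light years"
apparent_arcmin = 4.6    # larger apparent dimension, in arcminutes

angle_rad = math.radians(apparent_arcmin / 60.0)   # arcmin -> degrees -> radians
size_ly = distance_mly * 1e6 * angle_rad

# ~80,000 ly: the same order as the quoted ~100,000 ly figure.
assert 70_000 < size_ly < 90_000
```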
It is aligned with the nucleus, circles it, and possibly provides gas to the active galactic nucleus. The mass of the supermassive black hole has been estimated, based on the galaxy's stellar velocity dispersion, to be 10^6.4 M⊙. It has been proposed that the gas outflow has led to star formation at two locations on the bar of the galaxy, which lie where the gas from the nucleus encounters the dense material of the bar. Observations of the galaxy with the XMM-Newton telescope in 2009 showed it to host a Compton-thick active galactic nucleus. The galaxy also emits soft X-rays, mainly from photoionized matter. The presence of the Compton-thick column which obscures the nucleus was confirmed by observations with NuSTAR.

Ultraluminous X-ray source

In 2004, Guainazzi et al. detected in images from XMM-Newton an ultraluminous X-ray source, named NGC 5643 ULX1, located within 0.8 arcminutes of the nucleus. The source outshined the nucleus in X-rays, and if it is located within NGC 5643 its luminosity is over 10^40 erg/s. Its luminosity is variable. The X-rays could be produced either by an advection-dominated disc or a Comptonising corona, and the X-ray source is considered to be a black hole of stellar origin of approximately 30 solar masses.

Supernovae

NGC 5643 has hosted two supernovae, SN 2013aa and SN 2017cbv. SN 2013aa was discovered by Stuart Parker from New Zealand on a 30-s unfiltered CCD image taken on 13.621 UT February 2013, as part of the Backyard Observatory Supernova Search, at magnitude 11.9. It was classified as a Type Ia a few days before maximum brightness. SN 2017cbv was discovered on March 10, 2017, by the Swope 1-m telescope at Las Campanas Observatory and was classified as a very young Type Ia supernova. It brightened from magnitude 15.8 to 14.8 within the next day.
2004 Global Vegetation from Blue Marble Next Generation

The Blue Marble Next Generation data set provides a monthly global cloud-free true-color picture of the Earth's land cover at a 500-meter spatial resolution. This visualization of the data set shows seasonal variations such as snowfall, spring greening and droughts in a seamless fashion, thereby heightening awareness of changes in the Earth's climate. The image here shows a global view of the data. This data set is derived from imagery taken in 2004 by the MODIS instrument on the Terra satellite.

Please give credit for this item to: NASA/Goddard Space Flight Center Scientific Visualization Studio. The Blue Marble Next Generation data is courtesy of Reto Stockli (NASA/GSFC) and NASA's Earth Observatory. The Blue Marble data is courtesy of Reto Stockli (NASA/GSFC).

Terra and Aqua/MODIS/Blue Marble: Next Generation, 1/1/2004 - 12/31/2004. Note: While we identify the data sets used in these visualizations, we do not store any further details nor the data sets themselves on our site. This item is part of this series: Blue Marble Next Generation.

Keywords: Earth Science >> Land Surface >> Land Use/Land Cover >> Land Cover. GCMD keywords can be found on the Internet with the following citation: Olsen, L.M., G. Major, K. Shein, J. Scialdone, S. Ritz, T. Stevens, M. Morahan, A. Aleman, R. Vogel, S. Leicester, H. Weir, M. Meaux, S. Grebas, C. Solomon, M. Holland, T. Northcutt, R. A. Restrepo, R. Bilodeau, 2013. NASA/Global Change Master Directory (GCMD) Earth Science Keywords. Version 184.108.40.206.0
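As a rough sense of scale, the pixel count implied by 500-meter global coverage can be estimated. This sketch assumes a simple equirectangular grid and a round Earth-circumference figure for illustration; the published Blue Marble Next Generation rasters use their own exact tiling, so treat the numbers as order-of-magnitude only.

```python
# Back-of-envelope raster dimensions for a global grid at a given
# resolution, assuming an equirectangular (lat/lon) layout.
EARTH_EQUATORIAL_CIRCUMFERENCE_KM = 40_075  # assumed round figure

def global_grid_size(resolution_km):
    """Columns span 360 degrees of longitude; rows span 180 of latitude."""
    cols = round(EARTH_EQUATORIAL_CIRCUMFERENCE_KM / resolution_km)
    rows = cols // 2
    return cols, rows

cols, rows = global_grid_size(0.5)
print(cols, rows, cols * rows)
```

At 500 m this works out to roughly 80,150 by 40,075 pixels, over three billion pixels for each monthly frame, which is why such data sets are distributed in tiles.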
Production of Microcytospheres

Cytoplasmic vesicles can be isolated by several methods which utilize agents with cross-linking capabilities, formaldehyde or glutaraldehyde (Scott et al., 1979). These partially fixed vesicles are unable to attach to a substrate. Since, in certain types of experiments, cytoplasmic vesicles containing only the active ribosomal components and a normal cell membrane are desirable, a method was needed to ensure minimal contamination of vesicles with other organelles. Cytochalasin B has been used in the past to break down the microfilaments associated with the cell membrane (Carter, 1967) and to enucleate cells (Prescott et al., 1971; Veomett et al., 1976; Wigler and Weinstein, 1975; Gopalakrishnan and Thompson, 1975). The resulting cytoplasts can then be fused with cells or karyoplasts of different origin (Shay, 1977). However, these cytoplasts contain all of the cellular organelles. We describe here a procedure that is speedy, sterile, and applicable to a variety of experiments in which it is essential that cytoplasmic parts of one cell type containing only defined components are fused with other cells. These types of cytoplasmic vesicles are named microcytospheres.

Keywords: HeLa Cell, Mitotic Cell, Size Marker, Plastic Dish, Cytoplasmic Vesicle

- Veomett, G., Shay, J. W., Hough, P. V., and Prescott, D. M., 1976, Large-scale enucleation of mammalian cells, in: Methods in Cell Biology, Volume XIII (D. M. Prescott, ed.), Academic Press, New York, pp. 1–6.
This chapter provides a general introduction to software engineering. It describes the major activities in software development and the structure of a software engineering environment.

Keywords: Software Development, Software Engineering, High Level Programming Language, Software Engineering Research, Stepwise Refinement. These keywords were added by machine and not by the authors; this process is experimental and the keywords may be updated as the learning algorithm improves.

© Springer-Verlag Berlin Heidelberg 1994
The study of life and its existence in the universe, known as astrobiology, is now one of the hottest areas of both popular science and serious academic research, fusing biology, chemistry, astrophysics, and geology. In this masterful introduction, Lewis Dartnell tours its latest findings, and explores some of the most fascinating questions in science. What actually is 'life'? Could it emerge on other planets or moons? Could alien cells be based on silicon rather than carbon, or need ammonia instead of water? Introducing some of the most extreme lifeforms on Earth - those thriving in boiling acid or huddled around deep-sea volcanoes - Dartnell takes us on a tour of our solar system and beyond to reveal how deeply linked we are to our cosmic environment, and what we might hope to find out there. Dartnell explores the latest theories for how life came to evolve on Earth, and adds fascinating speculations on the prospects for finding it elsewhere.

The Times

"An essential, enjoyable and highly readable insight into life in its cosmic context." - Charles S. Cockell, author of Impossible Extinction and Professor of Microbiology, Planetary and Space Sciences Research Institute, Open University.

Lewis Dartnell is currently researching at CoMPLEX (the Centre for Mathematics & Physics in the Life Sciences and Experimental Biology), at University College London.
The Most Common of Our Gulls is Leaving, but Will Be Back
Author: Jorge Ventocilla

The laughing gull (Larus atricilla), without question the most common gull in Panama, chooses the month of March to go back to its nesting territories along the coasts of the U.S., the Gulf of Mexico, the Bahamas and other islands in the Caribbean. As a returning migratory bird species it will again travel south of the U.S., all the way to Peru and northern Brazil. After being away for a year and a half, it usually comes back to Panama around September. Some non-breeding adult and juvenile birds may stay close to our coasts all year round.

This bird is rather small compared to other species, weighing just above the half-pound mark. Its beauty shows in full flight, when its wide, angular wings can be easily seen. The laughing gull lives in coastal areas, marshes and salt lakes; it has also been seen close to fresh water sources, away from coastal areas (for example along the lakes of the Panama Canal). It can be seen in flocks of hundreds, sometimes in association with other gull species. I have seen them searching for food in urban settings like the transportation terminal (Terminal de Transporte). Could it be that they cannot find any food in their habitat?

They feed on fish and some aquatic invertebrates and can be found chasing crabs in muddy areas. Carrion and garbage are also part of their diet, which makes them a valuable aid in keeping things clean, as helpful as our friend the turkey vulture on the mainland. Lively and opportunistic as they are, they will try to steal fish from the pelicans, right from their beaks. The bird's name derives from its cackling call, which resembles a somewhat loud human laugh. This is the call they use during their mating season, when they are most likely to be away from Panama. I have not had the chance of hearing their call, but some of my colleagues have.
They mention that they have heard them shortly before they travel to their nesting areas, in March.
Humidity is the amount of water vapor in a gas such as a portion of the atmosphere. The temperature and makeup of the gas determine the maximum amount of water vapor the air can hold (saturation), and a measure of humidity as a percentage of this amount is termed relative humidity. A measure based purely on mass, i.e., the mass of water vapor as a percentage of that of the total gas mixture, is termed specific humidity. The concept of humidity applies to the atmospheres of other worlds, and is thus of interest in atmospheric models for them. This includes some solar system planets and moons as well as some extrasolar planets. It may be used by analogy for other substances that can occur as either a gas or a liquid on a world, e.g., for methane on Titan.
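The two measures just defined are straightforward to compute. The sketch below uses the Magnus approximation for the saturation vapor pressure of water, a standard formula added here for illustration rather than taken from this text:

```python
import math

def saturation_vapor_pressure_hpa(t_c):
    """Magnus approximation for saturation vapor pressure over
    water, in hPa; adequate for roughly -45 to 60 deg C."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity_pct(vapor_pressure_hpa, t_c):
    """Relative humidity: actual vapor pressure as a percentage
    of the saturation value at this temperature."""
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure_hpa(t_c)

def specific_humidity_kg_per_kg(vapor_pressure_hpa, pressure_hpa):
    """Specific humidity: mass of water vapor per mass of moist
    air; 0.622 is the water-to-dry-air molar mass ratio."""
    e, p = vapor_pressure_hpa, pressure_hpa
    return 0.622 * e / (p - 0.378 * e)
```

For example, air at 25 °C holding 15 hPa of water vapor at a total pressure of 1013 hPa comes out near 47% relative humidity and roughly 9 grams of water per kilogram of moist air.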
Class: Animals (Animalia) - Jointed Legs (Arthropoda) - Insects (Insecta)
Species: Mosquito (cf Ochlerotatus sp)
This Photo: Female, whole
Similar Species: Non-biting Midge

General Species Information: Found on Ellura (in the Murray Mallee) & in the Adelaide Hills. Mosquitoes look a lot like midges. An easy differentiator is that, at rest, mozzies hold their rear legs up, while midges hold their front legs up. Depending on the photo, a more reliable identifier is that mosquitoes have a very long mouthpart, the proboscis (a straw-like appendage through which males drink nectar and females suck blood).
Finding these hidden gems in the Hubble archive gives astronomers an invaluable time machine for comparing much earlier planet orbital motion data to more recent observations. It also demonstrates a novel approach for planet hunting in archival Hubble data.

Left: This is an image of the star HR 8799 taken by Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) in 1998. A mask within the camera (coronagraph) blocks most of the light from the star. In addition, software has been used to digitally subtract more starlight. Nevertheless, scattered light from HR 8799 dominates the image, obscuring any details.

Center: Recent, sophisticated software processing of the NICMOS data removes most of the scattered starlight to reveal three planets orbiting HR 8799. The positions of these planets coincide with orbits of planets observed by ground-based telescopes in 2007 and 2008.

Right: This is an illustration of the HR 8799 exoplanet system based on the reanalysis of Hubble NICMOS data and ground-based observations. The positions of the star and the orbits of the four known planets are shown schematically. The size of the dots is not to scale with their true size. The three outermost planets, b, c, and d, are detected in both the NICMOS and ground-based data. A fourth, inner planet, e, was detected in ground-based observations. The orbits appear elongated because of a slight tilt of the plane of the orbits relative to our line of sight. The size of the HR 8799 planetary system is comparable to our solar system, as indicated by the orbit of Neptune, shown to scale.

Credit: NASA; ESA; STScI, R. Soummer

Four giant planets are known to orbit the young, massive star HR 8799, which is 130 light-years away. In 2007 and 2008 the first three planets were discovered in near-infrared ground-based images taken with the W.M. Keck Observatory and the Gemini North telescope by Christian Marois of the National Research Council in Canada and his team.
Marois and his colleagues then uncovered a fourth innermost planet in 2010. This is the only multiple exoplanetary system for which astronomers have obtained direct snapshots. In 2009 David Lafreniere of the University of Montreal recovered hidden exoplanet data in Hubble images of HR 8799 taken in 1998 with the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). He identified the position of the outermost planet known to orbit the star. This first demonstrated the power of a new data-processing technique for retrieving faint planets buried in the glow of the central star. A new analysis of the same archival NICMOS data by Remi Soummer of the Space Telescope Science Institute in Baltimore has recovered all three of the outer planets. The fourth, innermost planet is 1.5 billion miles from the star and cannot be seen because it is on the edge of the NICMOS coronagraphic spot that blocks the light from the central star. By finding the planets in multiple images spaced over years of time, the orbits of the planets can be tracked. Knowing the orbits is critical to understanding the behavior of multiple-planet systems because massive planets can perturb each other's orbits. "From the Hubble images we can determine the shape of their orbits, which brings insight into the system stability, planet masses and eccentricities, and also the inclination of the system," says Soummer. These results are to be published in the Astrophysical Journal. The three outer gas-giant planets have approximately 100-, 200-, and 400-year orbits. This means that astronomers need to wait a very long time to see how the planets move along their paths. The added time span from the Hubble data helps enormously. "The archive got us 10 years of science right now," he says. "Without this data we would have had to wait another decade. It's 10 years of science for free." Nevertheless, the slowest-moving, outermost planet has barely changed position in 10 years. 
"But if we go to the next inner planet we see a little bit of an orbit, and the third inner planet we actually see a lot of motion," says Soummer. The planets weren't found in 1998 when the Hubble observations were first taken because the methods used to detect them were not available at that time. When astronomers subtracted the light from the central star to look for the residual glow of planets, the residual light scatter was still overwhelming the faint planets. Lafreniere developed a way to improve this type of analysis by using a library of reference stars to more precisely remove the "fingerprint" glow of the central star. Soummer's team took Lafreniere's method a step further and used 466 images of reference stars taken from a library containing over 10 years of NICMOS observations assembled by Glenn Schneider of the University of Arizona. Soummer's team further increased contrast and minimized residual starlight. They completely removed the diffraction spikes, which are artifacts common to telescope imaging systems. This allowed them to see two of the faint inner planets in the Hubble data. The planets recovered in the NICMOS data are about 1/100,000th the brightness of the parent star when viewed in near-infrared light. Soummer next plans to analyze approximately 400 other stars in the NICMOS archive with the same technique, improving image quality by a factor of 10 over the imaging methods used when the data were obtained. Soummer's work demonstrates the power of the Hubble Space Telescope data archive, which harbors images and spectral information from over twenty years of Hubble observations. Astronomers tap into this library to complement new observations with a wealth of invaluable data already gathered, yielding much more discovery potential than new observations alone. From the NICMOS archive data Soummer's team will assemble a list of planetary candidates to be confirmed by ground-based telescopes. 
If new planets are discovered they will once again have several years' worth of orbital motion to measure. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C. Cheryl Gundy | EurekAlert! Computer model predicts how fracturing metallic glass releases energy at the atomic level 20.07.2018 | American Institute of Physics What happens when we heat the atomic lattice of a magnet all of a sudden? 18.07.2018 | Forschungsverbund Berlin A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. 
Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:dabd52fa-88e6-4644-90be-ef4fc3f20183>
3.65625
1,907
Content Listing
Science & Tech.
43.012916
95,602,238
One of the most powerful tornadoes ever recorded in the United States barreled across the southern Plains on May 31, 2013, devastating areas near El Reno, Oklahoma. “The El Reno tornado is well-known to be the widest tornado on record,” AccuWeather Senior Storm Warning Meteorologist John Lavin said. The mammoth, 2.6-mile-wide twister also packed some of the most intense winds ever measured and capped off a week-long severe weather outbreak across the central United States.

The days leading up to the tornado outbreak

The El Reno tornado occurred on the last day of a tornado outbreak that started on May 26, 2013, and lasted through May 31, 2013. During this time, there were over 140 reports of tornadoes from Colorado through New York; however, a majority of the reports were focused on the central U.S., according to the Storm Prediction Center (SPC). This tornado outbreak was also right on the heels of the violent EF5 tornado that devastated Moore, Oklahoma, on May 20, 2013, which caused 24 fatalities and flattened over 300 homes. By the morning of May 31, 2013, it was becoming evident that the ingredients were coming together for another deadly tornado outbreak in central Oklahoma. When the SPC issued their daily thunderstorm outlook, they highlighted the potential for significant tornadoes in central Oklahoma late that afternoon into the early evening. The potential for life-threatening tornadoes prompted some emergency managers in the region to email schools, hospitals and businesses that morning to ensure that they had a plan in place when severe weather was approaching.

Evolution of the record-breaking tornado

The first severe storms of the day began to fire in the late afternoon with the El Reno tornado touching down at 6:03 p.m. CDT. After being on the ground for just over 15 minutes, the twister rapidly intensified, eventually becoming 2.6 miles wide, making it the widest tornado ever recorded.
Before this, the widest tornado had been the 2004 Hallam tornado, which peaked at 2.5 miles wide in Nebraska. In addition to being exceptionally large, the tornado behaved unusually, fluctuating in speed and direction and making its motion unpredictable to even the most experienced meteorologists and storm chasers. “At first, it took an east to southeast movement at 30-40 mph, but later on, it suddenly took a rapid turn to the north, moving at 50 mph,” Lavin said. “Then once the tornado moved north, it slowed down and only moved at 10 mph before finally dissipating,” Lavin added. “These changes in movement and speed were contributing factors that caught many [people] off guard.” The tornado killed eight people and injured 151 others before it lifted at 6:43 p.m. CDT. It was on the ground for 16.2 miles and left behind a scar on the ground large enough to be seen from space. All eight people who died that day were in their vehicles when they were caught in the tornado. This included three well-known storm chasers: Tim Samaras, Paul Samaras and Carl Young.

The tornado hit the El Reno area around the same time as the evening commute, meaning that more people were on the highway when the large tornado crossed Interstate 40 west of Oklahoma City. In addition to the rush hour traffic, some people in the area took to the roads to try to outrun the storm instead of seeking shelter. “The public felt the fear induced by the May 20 [Moore tornado] event and led people to take actions on May 31 they would not normally take,” the National Weather Service (NWS) said in an assessment of the tornado outbreak. “Many members of the public, as well as some emergency managers and other weather-savvy experts, mentioned that, given the fear they felt after the storms on May 19-20, this advice to evacuate overrode their typical emergency plans and advice about sheltering in place,” the NWS said.
“The atypical response of many residents evacuating from the tornadoes rather than following shelter plans led to the traffic jams and accidents,” they added. “On May 31, slow-moving storms around rush hour, and the context of recent events, resulted in confusion, frustration and, sadly, more fatalities.”

Flash flooding disaster unfolds following the historic tornado

The 2.6-mile-wide tornado nearly flattened buildings and flipped vehicles in its path, leading to approximately $35 million to $40 million in damages. However, some of the immediate response and recovery efforts had to be delayed due to the persistent threat of severe weather into that night. “This storm would go on to produce several other tornadoes in the Oklahoma City metro area, and a line of training supercells produced heavy rainfall and runoff that in turn caused historic flash flooding,” the NWS said. “Up to around 7 inches of rain fell in Oklahoma City, and 13 people died as a result of the flash flooding,” Lavin added. This was the deadliest flash flooding event in central Oklahoma since April 3, 1934, when flooding along the Washita River caused 17 fatalities. In the days following the monster tornado, the NWS announced that it was an EF5, the highest possible rating for a tornado. However, three months after the storm, it was downgraded to an EF3. The Enhanced Fujita Scale is a rating system for tornadoes that is based on wind damage. The severity of the damage indicates how powerful the tornado was when it hit.
However, the tornado was at its strongest when it was tracking across open land and no damage was found that supported winds that high, and thus it was given a final rating of an EF3. While some may disagree on the final rating, many have looked back and have learned from that terrifying day. New storm shelters were opened across El Reno two years later that were equipped with auxiliary power, bathrooms and water pumps in case of flooding. “I'll be surprised if we couldn't fit the entire population of El Reno in just our storm shelters alone,” Craig McVay, El Reno Schools Superintendent, told News9. The NWS also examined their use of social media throughout that day and identified ways to better use platforms, such as Facebook and Twitter, to relay crucial information to the public during future severe weather outbreaks. Comments that don't add to the conversation may be automatically or manually removed by Facebook or AccuWeather. Profanity, personal attacks, and spam will not be tolerated. Twelve people were injured when a flying lava bomb punctured the roof of a lava tour boat, causing a large hole in the vessel. While Trump and Putin met for a summit in Helsinki, Finland, on Monday morning, a message about climate change was hung from a church by an environmental activist group. En promedio, 37 niños mueren anualmente al ser olvidados dentro de un auto. The tournament is being held in eastern Scotland, a region which has a history of extreme weather that can create chaos for golfers and spectators. Following a push of dry air during the middle part of this week, a humid and rather wet weather pattern is forecast to evolve over the eastern third of the nation during the latter part of July. The ongoing Kilauea volcano eruptions in Hawaii have led to the formation of a tiny, new piece of land made of lava, which was initially considered to be an island. 
An organizing tropical threat will heighten the risk for flooding from the Philippines to Vietnam and Laos into midweek. A grueling heat wave caused at least eight deaths across Japan since Saturday, and the dangerous conditions are not forecast to subside through the duration of the week.
<urn:uuid:bce30fec-a274-40d9-b605-7c656df1ca76>
2.9375
1,726
News Article
Science & Tech.
46.623742
95,602,279
The wasp (Agenioideus nigricornis) was first described scientifically in 1775 by Danish entomologist Johan Christian Fabricius, thanks to samples collected in Australia during Captain Cook's first great voyage (1768–1771). "Since then, scientists have largely forgotten about the wasp," says Professor Andy Austin from the University of Adelaide's Australian Centre for Evolutionary Biology & Biodiversity. "It is widespread across Australia and can be found in a number of collections, but until now we haven't known the importance of this particular species." The wasp is now being dubbed the "redback spider-hunting wasp" after a family in Beaconsfield, Western Australia, discovered one with a paralyzed redback spider in their back yard. Florian Irwin, then aged 9, spotted the wasp dragging the spider several meters to its nest, and his father, Dr Peter Irwin, photographed the event and kept the specimens. Peter, who is an Associate Professor at Murdoch University, contacted the Western Australian Museum about the discovery; the Museum alerted Professor Austin and research fellow Dr Lars Krogmann at the University of Adelaide. "The Museum knew we were doing research into Agenioideus, which belongs to the family Pompilidae, the spider-hunting wasps. Little is known about them, despite various species of Agenioideus being distributed throughout the world," Professor Austin says. "We're very excited by this discovery, which has prompted us to study this species of wasp more closely. It's the first record of a wasp preying on redback spiders and it contributes greatly to our understanding of how these wasps behave in Australia." With a body less than a centimeter in length, an adult redback spider-hunting wasp is no bigger than its prey. It stings and paralyzes the redback spider and drags it back to its nest, where the wasp lays an egg on it. The spider remains alive but is paralyzed. Once the egg hatches, the larval wasp feeds on the spider.
The redback spider (Latrodectus hasseltii) is an Australian relative of North America's black widow spider. "The redback spider is notorious in Australia, and it has spread to some other countries, notably Japan and New Zealand. Redbacks are one of the most dangerous species in Australia and they're mostly associated with human dwellings, which has been a problem for many years," Professor Austin says. "The redback spider-hunting wasp is doing its part to keep the population of redback spiders down, but it doesn't hunt all the time and is unlikely to completely eradicate the spiders." Dr Krogmann (who is now based at the Stuttgart State Museum of Natural History) and Professor Austin have published a paper about the redback spider-hunting wasp in this month's issue of the Australian Journal of Entomology.

Professor Andy Austin | Newswise Science News
Coprolite Chemistry – what fossilised faeces can tell us about extinct animals Presenter: Fiona Gill Published: December 2012 Age: 14-19 and upwards Views: 1555 Source/institution: Royal Society Faeces contain a complex mixture of chemical compounds, including substances from the diet and digestive processes, so fossilised faeces preserve a chemical record of what an animal ate. By better understanding the biology of extinct animals we can gain insights into how they interacted with their environment and potentially why they became extinct. Dr Gill's research lies at the interface between palaeontology and chemistry. She uses organic geochemistry as a tool for investigating interactions between microbes and animals in the fossil record. Her current research focuses on the chemistry of modern and fossilised faeces (coprolites).
What Caused the Glacial to Interglacial CO2 Change? Scenarios put forward to explain the 80 µatm glacial to interglacial change in atmospheric CO2 content are evaluated. The conclusion is that no single mechanism is adequate. Rather, contributions from temperature, sea ice, biologic pumping, nutrient deepening, and CaCO3 cycling must be called upon. The observation that the 13C/12C ratio for Antarctic foraminifera was 0.9±0.1‰ lower during glacial than during interglacial time constitutes a huge fly in the ointment for all scenarios proposed to date. Keywords: Benthic Foraminifera, Planktonic Foraminifera, North Atlantic Deep Water, Glacial Time, Carbon Isotope Record
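The 0.9‰ shift quoted above is expressed in standard delta notation for stable carbon isotopes. A minimal sketch of that convention follows; the VPDB standard ratio is the accepted reference value, but the sample ratio used here is purely illustrative and not taken from the paper:

```python
# Delta notation for stable carbon isotopes, relative to the VPDB standard.
# The 0.9 per-mil glacial lowering is the value quoted in the abstract;
# the sample ratio below is illustrative only.
VPDB_RATIO = 0.0112372  # accepted 13C/12C ratio of the VPDB standard

def delta13c(sample_ratio, standard_ratio=VPDB_RATIO):
    """Return delta-13C in per mil relative to the standard."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

interglacial = delta13c(VPDB_RATIO * 1.0005)  # a sample enriched by 0.5 per mil
glacial = interglacial - 0.9                  # 0.9 per mil lower in glacial time
print(interglacial, glacial)
```

A ratio only 0.0005 above the standard already gives +0.5‰, which shows how small the absolute isotopic differences behind the glacial signal really are.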
Flavins Keep a Handy Helper in Their Pocket In the active center of the enzyme is a flavin cofactor. In the enlargement we can see that near it, oxygen (O2) is bound - enabling the flavin to be activated. Credit: Robin Teufel, Raspudin Saleem-Batcha. In human cells, vitamins often serve as the precursors of "cofactors" - non-proteins which are an essential part of enzymes. Among them are the flavins, which the organism derives from vitamin B2. A team headed by Dr. Robin Teufel and Dr. Raspudin Saleem-Batcha of the University of Freiburg at the Center for Biological Systems Analysis has now shown in detail how oxygen interacts with the flavin in an enzyme - revealing for the first time precisely how it works. The researchers have published their results in the latest Proceedings of the National Academy of Sciences USA (PNAS). Flavins play a key role in metabolic processes, in the immune system and in neural development in humans - and are equally important to bacteria, fungi and plants. Flavoenzymes often require oxygen to function. But until now many of the details of their interaction were not known. The researchers used x-ray diffraction analysis to show for the first time that oxygen is bound to a special pocket inside the enzyme. The nature of this binding makes it possible to activate the cofactor - making it essential for the enzyme to work. This knowledge may help, for example, to rationally modify flavoenzymes in the future - in basic research or for biotechnological applications. This article has been republished from materials provided by University of Freiburg. Note: material may have been edited for length and content. For further information, please contact the cited source. Raspudin Saleem-Batcha, Frederick Stull, Jacob N. Sanders, Bradley S. Moore, Bruce A. Palfey, K. N. Houk, Robin Teufel. Enzymatic control of dioxygen binding and functionalization of the flavin cofactor.
Proceedings of the National Academy of Sciences, 2018; 201801189 DOI: 10.1073/pnas.1801189115.
Weather on the Earth is driven by multiple factors, including thermal energy from within the Earth's core and from the sun. Certain areas of the Earth are known for specific weather patterns that occur as a result of these factors. One area that scientists, geologists and meteorologists study frequently is the Intertropical Convergence Zone, which is a band near the equator where the southern and northern trade winds meet. Low Air Pressure In the Intertropical Convergence Zone, the northern and southern trade winds come together. Because of the rotation of the Earth, the winds cannot really cross the equator without losing energy. Instead of continuing over the Earth horizontally, the winds thus move vertically toward the upper atmosphere. The heating of the Earth's ocean currents by the sun assists in this process, making the air warmer and letting it rise. The result is that the Intertropical Convergence Zone has low air pressure near the Earth's surface. The lack of horizontal wind movement in the region led sailors to nickname the Intertropical Convergence Zone "the doldrums." The frequent rising of air in the Intertropical Convergence Zone means that moisture is constantly carried high enough into the atmosphere to reach temperatures cool enough for it to condense into clouds. The Intertropical Convergence Zone therefore can see incredible amounts of precipitation and high humidity. Although some areas of the zone do have a dry season, others do not. Afternoon showers are a feature of the zone. Rainfall in the Intertropical Convergence Zone typically is not gentle rainfall that lasts for long periods. Instead, the high amounts of energy from thermal and solar heating cause moisture to condense quickly into clouds in the hottest part of the day. Circular typhoons thus often form as the air currents move. Some of the strongest winds on the Earth have been recorded in these storms. Thunderstorms with heavy lightning are also common.
The Intertropical Convergence Zone is characterized by inconsistent location around the equator. As the Earth moves with the seasons, the area which receives the highest amount of heat energy from the sun varies. The thermal equator around which the Intertropical Convergence Zone forms thus moves, depending on the season. In some cases, this shift can result in the complete reversal of normal trade wind patterns, particularly in the Indian Ocean. The characteristics of the Intertropical Convergence Zone have an enormous impact on weather all around the globe. Shifting of wind patterns in the Intertropical Convergence Zone can move thermal energy and moisture to different parts of the Earth than usual and can slow or even stop ocean currents. This affects all plant and animal life either directly or indirectly, since ecosystems are dependent largely on weather patterns and temperature.
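The claim that winds cannot really cross the equator reflects the Coriolis effect, whose horizontal component vanishes at the equator. A quick sketch of the standard textbook Coriolis parameter (a general formula, not something given in this article):

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate in rad/s

def coriolis_parameter(lat_deg):
    """f = 2 * Omega * sin(latitude): the horizontal Coriolis parameter (1/s)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

# f is exactly zero at the equator, so converging trade winds there
# are not deflected sideways and instead rise vertically.
for lat in (0, 10, 30):
    print(f"latitude {lat:2d}: f = {coriolis_parameter(lat):.2e} 1/s")
```

At 30 degrees latitude f equals Omega itself (since sin 30° = 0.5), while at the equator it is zero, which is why the convergence zone behaves so differently from mid-latitude weather systems.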
Solar energy has some problems. First, no matter how clear the skies, a solar panel won't produce electricity at night, so a solar energy system needs to have some method of storing energy. And if there is bad weather for an extended time, a solar energy system will provide little output, which means you need to have backup energy generation alternatives available. But those disadvantages are balanced against the low maintenance costs of solar facilities and that the energy source itself -- sunlight -- costs nothing. Add in solar's environmental advantages, and the balance tips in favor of solar energy, leading to record growth in installed solar capacity for more than a decade at the time of publication. Principles and History Photovoltaic solar energy, or PV, is generated when electrons absorb sunlight within a semiconductor material. Scientists developed photovoltaic technology in the 1950s and adapted it almost immediately to supply electrical power to satellites -- a use that continues today. Another type of solar energy facility is the solar-thermal plant, also called a concentrating solar power, or CSP, facility. CSP plants use arrays of mirrors to focus sunlight into a heating chamber or linear receiver tubes. Within those elements, heated fluid directly or indirectly drives a turbine generator. Large-scale CSP was successfully demonstrated in the 1980s and continues to be used for some of the largest solar energy plants in the world. Solar Energy in the U.S. At the end of 2012, the U.S. Energy Information Administration estimated that the nation had more than 3,500 megawatts of grid-connected solar photovoltaic. Add to that the more than 1,000 megawatts of U.S. CSP energy production estimated by the International Energy Agency, and you reach an impressive total of more than 4,500 megawatts, or 4.5 gigawatts.
Although that's a small percentage of the overall energy generation capacity of the United States, the Solar Energy Industries Association says there's enough solar capacity to supply the energy needs of 1 million households. Globally, at the end of 2012 Germany led the world with about 25 gigawatts of installed capacity, but other countries are no slackers. Bloomberg reports that there was more than 100 gigawatts of solar capacity online at the end of 2012, with significant growth in China and Japan as well as the United States. The United States and Spain have the largest contribution from CSP, while photovoltaics provide the largest component of installed solar capacity overall. Although the rapid growth in solar energy capacity in the 2000s -- and particularly since 2005 -- is a good indication that the technology is widely accepted, perhaps an even better indication is provided by the plans for increased growth in the future. In 2013, more than 800 megawatts of CSP energy is scheduled to come online in the United States, and South Africa, Spain and India all have large-scale CSP projects planned. China is expected to be the largest PV consumer in 2013, with projects slated to bring 10 gigawatts of solar electric capacity online. Bloomberg estimates that overall global growth in capacity will reach a new record of 34 gigawatts -- a huge vote of confidence reflecting the widespread acceptance of solar power technology.
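The capacity arithmetic above can be checked in a few lines. The megawatt figures are the ones quoted in the text; the per-household number is an implied average, not a figure the article states:

```python
# US installed solar capacity at end of 2012, as quoted in the text.
pv_mw = 3_500   # grid-connected PV (EIA estimate)
csp_mw = 1_000  # concentrating solar power (IEA estimate)

total_mw = pv_mw + csp_mw
total_gw = total_mw / 1_000
print(f"{total_mw} MW = {total_gw} GW")

# SEIA's figure of 1 million households served implies, on average:
households = 1_000_000
kw_per_household = total_mw * 1_000 / households
print(f"about {kw_per_household} kW of capacity per household")
```

The implied 4.5 kW per household is an average over capacity, not delivered energy; actual output depends on capacity factor, which for solar is well below 100 percent.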
Almost two thirds of our universe is made up of dark energy. It is invisibly interwoven with empty space, and forces the universe to expand at an ever-increasing speed. This discovery, which two teams published simultaneously in 1998, was nothing short of a sensation. Nobody yet knows what is behind dark energy. When a star is burned out and its time has come, it explodes in a supernova. Type Ia supernovae are particularly interesting for astronomers. They only arise in binary systems in which one star is a white dwarf and the other a red giant that is in a phase of expansion. The red giant's mass flows to the white dwarf until this star reaches its maximum mass limit, which was already predicted by Subrahmanyan Chandrasekhar at the end of the 1930s; he was awarded the 1983 Nobel Prize for it. Consequently, the white dwarf explodes in a type Ia supernova. As the intrinsic brightness of these explosions is physically always the same, observing the apparent brightness of supernovae allows astronomers to deduce their distance. A supernova occurs in every galaxy approximately once every 500 years. The gigantic universe has around ten type Ia supernovae every minute, however. One incredible feat achieved by the 2011 Laureates was to detect these supernovae at distances of more than five billion light years, estimate their ages, and extract their luminosities from the vast quantity of digital data. The other was not to cast doubt on their own results: "Adam, did you do wrong?" Schmidt asked his colleague Adam Riess, when the latter showed him a diagram of his first measurements. Most physicists consider the source of the dark energy to be the vacuum, because it is here that matter and energy continuously convert into each other at almost infinite speed - as the laws of quantum physics suggest.
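The standard-candle reasoning described above boils down to the distance modulus relation m - M = 5 log10(d / 10 pc). A sketch follows; the peak absolute magnitude of about -19.3 for type Ia supernovae is a widely used textbook value, not a figure from this article, and the calculation ignores redshift and other cosmological corrections:

```python
import math

def distance_pc(apparent_mag, absolute_mag=-19.3):
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A type Ia supernova observed at an apparent magnitude of about 21.7:
d_pc = distance_pc(21.7)
d_ly = d_pc * 3.2616  # light-years per parsec
print(f"{d_pc:.2e} pc, about {d_ly:.1e} light-years")
```

With these illustrative numbers the distance comes out at roughly five billion light-years, the scale quoted in the text, which gives a feel for how faint such a supernova appears against the digital noise.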
The energy which the vacuum can theoretically obtain through these fluctuations differs from the observed dark energy by the unimaginably large factor of 10 to the power of 122, however. Where are the gaps between theory and observation? Does the dark energy - albeit with opposite sign - correspond to the cosmological constant that Einstein had introduced into his equations, in order to not have to abandon his belief in a static universe? Or is it not constant at all, but originates from temporary force fields? If, as some theoreticians believe, the early universe experienced a sudden expansion resulting in an enormous, temporary increase in its energy density - could something similar be occurring now? And could this explain dark energy? These questions revolve around the greatest physics mystery facing us today. The fine line between speculation and science which they reveal is what makes them so fascinating for a dialogue between young scientists and Nobel Laureates.
Crops That Shut Down Pests' Genes Monsanto is developing genetically modified plants that use RNA interference to kill the insects that eat them. Researchers have created plants that kill insects by disrupting their gene expression. The crops, which initiate a gene-silencing response called RNA interference, are a step beyond existing genetically modified crops that produce toxic proteins. Because the new crops target particular genes in particular insects, some researchers suggest that they will be safer and less likely to have unintended effects than other genetically modified plants. Others warn that it is too early to make such predictions and that the plants should be carefully tested to ensure that they do not pose environmental problems. But most researchers agree that it’s unlikely that eating these plants would have adverse effects on humans. RNA interference occurs naturally in animals ranging from worms to humans. It’s a process whereby double-stranded RNA copies of specific genes prevent cells from translating those genes into proteins. The new genetically modified plants carry genes for double-stranded RNA targeted to particular insect genes. Two papers published concurrently in Nature Biotechnology this week show that in some insects, eating double-stranded RNA is enough to cause gene silencing. This is surprising: in previous research, RNA interfered with organisms’ gene expression only when it was injected. “People have been trying this, but there have been no reports of success before,” says Karl Gordon, a research scientist in entomology at the Commonwealth Scientific and Industrial Research Organisation, in Canberra, Australia. The recent work, he says, is the first to demonstrate the promise of RNA interference as a means of pest control. Researchers at the Chinese Academy of Sciences, in Shanghai, made cotton plants that silence a gene that allows cotton bollworms to process the toxin gossypol, which occurs naturally in cotton. 
Bollworms that eat the genetically engineered cotton can’t make their toxin-processing proteins, and they die. Researchers at Monsanto and Devgen, a Belgian company, made corn plants that silence a gene essential for energy production in corn rootworms; ingestion wipes out the worms within 12 days. The most effective genetic approach to pest control has been to make plants that produce a protein called Bt toxin, which causes insects to slow down, then stop eating crops, then die. More than 120,000 square miles of crops genetically engineered to produce Bt were grown last year. But Bt isn’t effective against many pests, including corn rootworm, which can cause such extensive damage to corn plants’ root systems that the plants blow over in the wind. And researchers are concerned that insect pests are becoming resistant to Bt. “We need a way to come around resistance to Bt,” says Abhaya Dandekar, professor of pomology at the University of California, Davis. RNA interference is attractive, he says, because insects are unlikely to become resistant to it. “The only way to go around RNA interference is to shut down the whole system.” What he means is that the new plants take advantage of a gene-silencing mechanism that the insects’ bodies already use: RNA interference is thought to be a critical part of insects’ and other animals’ immune systems. Insects that shut down RNA interference in order to safely eat genetically engineered plants would probably get sick, says Dandekar. Another drawback to Bt is its nonspecificity. The toxin may have what are called off-target effects: it can kill insects that pose no threat to crops. RNA interference, says Ty Vaughn, a researcher at Monsanto, “can be species specific,” allowing for “a higher level of control.” Other researchers agree and say that Monsanto has, so far, demonstrated a high level of specificity. “They should be able to avoid nonspecific, off-target effects,” says Gordon. 
But other researchers warn against jumping to that conclusion too soon. “RNA interference to control pests is an interesting idea, but it’s important to understand the ecology,” says Bernard Mathey-Prevot, director of the Drosophila (fruit fly) RNA Interference Screening Center at Harvard Medical School. “It’s very hard to know in advance whether other insects might be targeted.” In addition to killing nonpest insects, Mathey-Prevot says, the gene-silencing mechanism could spread between different species of plant, or from plants to other organisms, such as bacteria in the soil. Such spread might be harmless, but then again, it might not. “We need to understand it a little bit more,” Mathey-Prevot says. Vaughn says that the research is in its early stages and that Monsanto has not set a timeline for bringing gene-silencing crops to the market. Monsanto will put its new transgenic corn “through a battery of tests” to establish that its effects are specific to corn rootworms, he says. Tobacco cutworms that ingested the corn did not seem to be affected. But to prove conclusive, researchers say, such testing would have to be arduous. “You would have to anticipate all the species you wouldn’t want it to affect” and then test them, says David Root, project leader of the RNA Interference Consortium at the Broad Institute, Harvard and MIT’s jointly operated center for research on genomic medicine. And Gordon anticipates that regulatory agencies will demand broad screening. Although humans have genes similar to insect genes, researchers say that it is highly unlikely that ingesting Monsanto’s corn would cause gene silencing in people. “If you fed tons of it to a mouse, I don’t think you’d get anywhere,” says Root. RNA “just gets digested” by mice and humans. The U.S. government does not require the labeling of foods containing genetically modified organisms, but it does require safety testing. 
Fred Gould, professor of agriculture at North Carolina State University, says that because the new crops produce what's effectively a pesticide, they would be regulated by the U.S. Environmental Protection Agency. Such foods must be tested both in animals and through exposure to what Gould calls "reconstituted human stomach juices." It's also unclear how widely applicable the use of RNA interference as a pesticide will be. In many insects, ingestion of RNA may not cause gene silencing. But cotton bollworms and corn rootworms are major agricultural pests, feeding on two of the most widely grown crops in the world. Even if RNA interference is helpless against any other insects, it could still have a major impact on agriculture. Mathey-Prevot counsels patience. At this point, he says, it's too early to make claims about the safety of the technique. But, he says, that also means it's too early to conclude that the ability to cause RNA interference is any more dangerous than current genetic modifications of food crops.
Scientists in Australia have discovered mushroom-like creatures from the deep sea that do not fit into any existing subdivision of the animal kingdom. According to the researchers, these shapes did not appear overnight but must have taken at least 100 years to form. Samples have been collected for further study before any conclusions are drawn. According to co-author Jorgen Olesen: "Finding something like this is extremely rare, it's maybe only happened about four times in the last 100 years. We think it belongs in the animal kingdom somewhere; the question is where. What we can say about these organisms is that they do not belong with the bilateria." Photographs have been taken and study is continuing to understand the creatures' traits and give them a proper name. The researchers believe these animals could represent a very old branch on the tree of life. They grow from shared root-like structures, much as plants do, and ultimately take their mushroom shape.
Black holes are surrounded by many mysteries, but now researchers from the Niels Bohr Institute, among others, have come up with new groundbreaking theories that can explain several of their properties. The research shows that black holes have properties that resemble the dynamics of both solids and liquids. "With these new theories, we expect to be able to explain other black hole phenomena, and we expect to be able to better understand the physical properties of neutron stars. We also expect to gain a greater understanding of the so-called particle theories, which are, for example, relevant for understanding the quark-gluon-plasma in the primordial universe," explains Niels Obers, a professor of theoretical particle physics and cosmology at the Niels Bohr Institute at the University of Copenhagen. Black holes are extremely compact objects in the universe. They are so compact that they generate an incredibly strong gravitational pull and everything that comes near them is swallowed up. Not even light can escape, so light that hits a black hole will not be reflected, but will be entirely absorbed. As a result, they cannot be seen, and we call them black holes. "But black holes are not completely black, because we know that they emit radiation and there are indications that the radiation is thermal, i.e. it has a temperature," explains Obers. Researchers know that the black holes are very compact, but they do not know what their quantum properties are. Obers works with theoretical modelling to better understand the physics of black holes. He explains that you can look at a black hole like a particle. A particle has in principle no dimensions. It is a point. If you give a particle an extra dimension, it becomes a string. If you give the string an extra dimension, it becomes a plane. Physicists call such a plane a 'brane' (the word 'brane' is related to 'membrane' from the biological world).
"In string theory, you can have different branes, including planes that behave like black holes, which we call black branes. The black branes are thermal, that is to say, they have a temperature and are dynamical objects. When black branes are folded into multiple dimensions, they form a 'blackfold'," explains Niels Obers, who worked out this new way of looking at black branes with associate professor in theoretical physics at the Niels Bohr Institute, Troels Harmark, back in 2009. Obers and his two doctoral students Jay Armas and Jakob Gath have now made a new breakthrough in the description of the physics of black holes based on the theories of the black branes and blackfolds. "The black branes are hydrodynamic objects, that is to say that they have the properties of a liquid. We have now discovered that black branes also have properties which can be explained in terms of solids. They can behave like elastic material when we bend them," explains Jay Armas. He explains that when the black branes are bent and folded into a blackfold, a so-called piezoelectric effect (electricity that occurs due to pressure) is created. This new effect can be understood as a slightly bent and charged black string with a greater concentration of electric charge on the innermost side in relation to the outermost side. This produces two electrically charged poles on the black strings. Black holes are predicted by Einstein's theory of gravity. This means that there is a very surprising relationship between gravity and fluid mechanics and solid-state physics. Image Credit: NASA/MIT/F. Baganoff et al. Shows supermassive black hole at center of Milky Way Galaxy. Source: The Daily Galaxy via Niels Bohr Institute
Episode 221: Elastic collisions

This episode extends the idea of conservation of momentum to elastic collisions, in which, because KE is conserved, useful information can also be found by calculating the changes in KE of the colliding objects.

Demonstration + discussion: To introduce totally elastic collisions. (20 minutes)
Student experiment: To test conservation of momentum and kinetic energy in an elastic collision. (20 minutes)
Worked examples + student questions: Calculations of final velocities in elastic collisions and loss of KE in inelastic collisions. (20 minutes)
Discussion: More abstract problems and situations which commonly cause difficulties. (15 minutes)
Demonstrations: Showing some applications, such as catching a ball or finding the speed of an air rifle pellet. (15 minutes)

Demonstration + discussion: To introduce totally elastic collisions

This follows a similar approach to the demonstration in Episode 220 (TAP 220-1: Observing collisions). Demonstrate 'springy' (elastic) collisions using trolleys, one of which has its spring-load released so its spring can soften the collisions. (Alternatively, use air-track gliders with repelling magnets.) Direct a single trolley at a second, stationary trolley: the first trolley stops, and the second moves off at the speed of the first. Momentum is conserved. Now try a light trolley colliding with a heavy one, and vice versa. What pattern is seen? A light trolley bounces back from a heavier one (its momentum is negative); a heavier one moves on, but at a slower speed. Students may accept that momentum is conserved; alternatively, with suitable light gates you should be able to measure the initial speed of the one trolley and the final speeds of both, so that momentum conservation can be shown. (Or try using a computer and motion sensor.) Now ask: how do the trolleys know at what speed they must move? There are many combinations of velocity which conserve momentum; there must be something else going on here. 
Introduce the idea that kinetic energy (KE) is also involved, and that, in a springy collision, there is as much KE after as before; in other words, KE is conserved. Ask whether KE is conserved in an explosion (obviously not; it is "created" in the explosion in the change from chemical PE to KE) and in an inelastic collision (the total amount decreases, but you may need to work through a numerical example to show this). Discuss where KE comes from in an explosion (from energy stored in a squashed spring, chemical explosive or whatever), and where it goes to in an inelastic collision (work done in deforming material leads to heating, plus some sound).

Terminology: usually, 'elastic' is taken to imply that KE is conserved. In some texts, this is written as 'perfectly elastic'. 'Inelastic' describes a collision in which some KE is lost. Students should learn to use these terms, rather than 'springy' and 'sticky'.

Student experiment: To test conservation of momentum and kinetic energy in an elastic collision

Ask students to carry out an experiment to determine whether, in a springy collision between trolleys or gliders, momentum and KE are both truly conserved. They can use the same approach as the experiment in Episode 220, but they will have to calculate values of KE as well as momentum.

Worked examples + student questions: Calculations of final velocities in elastic collisions and loss of KE in inelastic collisions

Work through examples involving elastic collisions to show:
- conservation of KE and momentum in an elastic collision, when all values of mass and velocity are known;
- calculation of final velocities in an elastic collision (note that this requires solution of simultaneous equations, one of which is a quadratic; students may find this difficult, so check that it is required by the specification that you are following);
- non-conservation of KE in an inelastic collision.
Students can now work through more examples. 
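As a supplementary aid for checking the worked examples (this script is illustrative and not part of the original TAP materials; the function name is my own), the final velocities in a one-dimensional elastic collision can be computed directly from the two conservation laws:

```python
def elastic_final_velocities(M, u, m):
    """1-D elastic collision: mass M moving at velocity u hits a
    stationary mass m. Returns (v_M, v_m), the final velocities,
    derived from conservation of momentum and kinetic energy:
        v_M = (M - m) * u / (M + m)
        v_m = 2 * M * u / (M + m)
    """
    v_M = (M - m) * u / (M + m)
    v_m = 2 * M * u / (M + m)
    return v_M, v_m

# Equal masses: the moving trolley stops and the other moves off at u.
print(elastic_final_velocities(1.0, 6.0, 1.0))   # (0.0, 6.0)

# A light trolley hitting a heavier one bounces back (negative velocity).
print(elastic_final_velocities(1.0, 6.0, 2.0))   # (-2.0, 4.0)
```

The second call reproduces the trolley demonstration: the 1 kg trolley rebounds at 2 m s⁻¹ while the 2 kg trolley moves off at 4 m s⁻¹.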
TAP 221-1: Worked examples in momentum: elastic and inelastic collisions
TAP 221-2: Student questions

Discussion: More abstract problems and situations which commonly cause difficulties

Reinforce students' ideas by discussing some of the following. How a rocket ship works: a controlled explosion in which reaction mass travels backwards; the rocket needs nothing external to push against in order to lift off. There is a video of this with a man sitting on a cart firing a fire extinguisher at http://www.wfu.edu/Academic-departments/Physics/demolabs/demos/ (1N22.10 [M-21 W] Fire Extinguisher Wagon). The same site shows another video of two people on roller-blades throwing a cushion back and forth.

Discuss situations in which the Earth is involved, as it may appear that momentum is not conserved. Where does momentum come from or go to in these situations? It helps to think about the forces involved.
- You push a car to start it moving. (Your feet push back on the Earth, so that its momentum also changes, in the opposite direction. This is equivalent to an explosion.)
- When a ball falls, it accelerates, i.e. it gains momentum. (The Earth is also accelerated minutely in the opposite direction, so momentum is conserved. The force is gravity.)
- When a ball bounces off a wall, its momentum is reversed. (Momentum is transferred to the wall and Earth by the contact force.)
- When a ball rolls to a halt, it loses momentum. (Its momentum is transferred to the Earth via friction.)
These all emphasise the need to think of the closed system with which we are concerned. Emphasise that momentum is always conserved, but KE is not. One way to think of this is that KE is just one form of energy, so it can be transformed; there is only one form of momentum, so it cannot be transformed into anything else. 
Demonstrations: Showing some applications such as catching a ball or finding the speed of an air rifle pellet

TAP 221-3: Momentum demonstrations

The following is a brief summary of the Resourceful Physics activities given in the link in TAP 221-3:

RP13: Pendulum on a trolley – to show conservation of momentum. This could be filmed with a webcam and the film studied in slow motion to great effect. There is a link here between the motion of the trolley and the motion of a rowing boat, which shoots forwards when the rowers move back on their slides, rather than when they pull on their oars.
RP18: Momentum in catching – compare to deforming barriers and elastic ones. This contrasts the momentum change on catching something and the greater change on reflecting it back.
RP10: If you have access to an air rifle demonstration, this is a lovely way to round off the topic.
RP24: Speed of a cricket ball – can be used in place of the air rifle experiment.

TAP 221-1: Worked examples in momentum: elastic and inelastic collisions

1. When an object of mass M and velocity u collides head-on elastically with a stationary object of mass m, the mass m moves off with velocity v_b given by:

    v_b = 2Mu / (M + m)

and the mass M changes speed from u to v_a, where v_a is given by:

    v_a = (M − m)u / (M + m)

a) A golf club head has a mass very much larger than that of the ball. Use the above formulae to show that the golf ball will be driven from the tee at about twice the speed of the golf club head.
b) An alpha particle makes a head-on collision with a stationary helium atom. Show that the alpha particle will stop and the helium atom will move off with the original speed of the alpha particle. (Assume an alpha particle and a helium nucleus have the same mass.)
c) A fast-moving electron collides with a stationary heavy atom. Show that the electron bounces back with nearly the same speed.

2. A cricket ball, mass 0.5 kg, was bowled at 50 m s⁻¹ at a batsman who misreads the ball; the 5 kg bat is knocked out of his hands and the ball rebounds at 25 m s⁻¹. 
a) What is the initial momentum of the ball?
b) What is the momentum of the ball just after it hit the bat?
c) How much momentum is transferred to the bat?
d) Calculate the velocity of the bat as it leaves the batsman.
e) Calculate the kinetic energies before and after the collision of bat and ball. Is the collision elastic or inelastic?

3. A ball of mass 0.20 kg is dropped from a height of 3.2 m onto a flat surface, which it hits at 8.0 m s⁻¹. It rebounds to 1.8 m. (g = 9.8 m s⁻²)
a) What is the rebound speed just after impact?
b) What is the change in energy of the ball?
c) What momentum change has the ball between just touching the surface and leaving it?

4. In October 1999 the United Nations announced that the world's population had reached 6 billion (6 × 10⁹) and that the population of the People's Republic of China was rapidly approaching 1.3 billion. Imagine a situation in which the entire population of China were gathered together and, all at the same time, they each jumped as high as they were able. Estimate the speed that would be (temporarily) imparted to the Earth as a result of this jump. Show all your calculations and reasoning and state clearly any assumptions you have made. Take the mass of the Earth to be 6 × 10²⁴ kg.

Answers and Worked Solutions

1. a) The ball has a much smaller mass than the club head, so m can be ignored in v_b = 2Mu / (M + m), giving v_b ≈ 2Mu / M = 2u. So the speed is twice that of the club head. As the ball does not have zero mass, the ball speed will be slightly under 2u.
b) The alpha particle's speed after the collision is given by v_a = (M − m)u / (M + m); as M = m, the speed will be zero. The helium nucleus moves off with speed v_b = 2Mu / (M + m) = 2Mu / 2M = u. Therefore the alpha particle stops and the helium nucleus moves off with the same speed.
c) The electron's mass M is much smaller than the atom's mass m, so M can be ignored: v_a = (M − m)u / (M + m) ≈ −mu / m = −u. The electron bounces back with nearly the same speed, while the larger mass will hardly move.

2. 
a) Initial momentum of the ball = 0.5 × 50 = 25 kg m s⁻¹.
b) Momentum of the ball just after it hit the bat = 0.5 × (−25) = −12.5 kg m s⁻¹ (taking velocity to the right as positive).
c) Momentum transferred to the bat = 25 − (−12.5) = 37.5 kg m s⁻¹.
d) Momentum of the bat = 37.5 kg m s⁻¹, so the velocity of the bat = 37.5 / 5 = 7.5 m s⁻¹.
e) KE = ½mv². Before the collision, KE = ½ × 0.5 × 50² = 625 J. After the collision, the KE of the ball is ½ × 0.5 × 25² = 156.25 J = 156 J to 3 s.f., and the KE of the bat is ½ × 5 × 7.5² = 140.63 J = 141 J to 3 s.f. The total KE after the collision is 156.25 + 140.63 = 296.9 J = 297 J to 3 s.f., so the collision is inelastic, as KE before > KE after.

3. a) This is an energy reminder: v = √(2gh) = √(2 × 9.8 × 1.8) = 5.9 m s⁻¹, so the rebound velocity is −5.9 m s⁻¹.
b) KE = ½mv². Before impact, KE = ½ × 0.20 × 8.0² = 6.4 J; after impact, KE = ½ × 0.20 × 5.9² = 3.5 J, so the KE change is 6.4 − 3.5 = 2.9 J.
c) Momentum before the collision = 0.2 × 8.0 = 1.6 kg m s⁻¹; momentum of the ball after = 0.2 × (−5.9) = −1.2 kg m s⁻¹; change in momentum = 1.6 − (−1.2) = 2.8 kg m s⁻¹.

4. This question shows the effect of many small masses on a very much larger mass. Estimate the average mass of a person as m = 50 kg (the population will contain many children) and the jump height as h = 0.2 m, giving a take-off speed v = √(2gh) = √(2 × 10 × 0.2) = 2 m s⁻¹ (accept any reasonable method, or a blind guess of about 1 to 3 m s⁻¹). Total upward momentum of all the jumpers: p = 1.3 × 10⁹ × 50 kg × 2 m s⁻¹ = 1.3 × 10¹¹ kg m s⁻¹. To conserve momentum, the Earth must acquire equal momentum in the opposite direction. The Earth, of mass M, acquires speed v = p/M = 1.3 × 10¹¹ / (6 × 10²⁴) ≈ 2 × 10⁻¹⁴ m s⁻¹, i.e. the velocity of the Earth is negligible. An elastic or inelastic jump? 
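The cricket-ball worked solution can be verified numerically. This short check is an illustrative aid, not part of the original TAP sheet; it reproduces answers 2(a)–(e):

```python
m_ball, m_bat = 0.5, 5.0     # masses in kg
u, v = 50.0, -25.0           # ball velocity before and after, m/s (rebound is negative)

p_before = m_ball * u                 # (a) initial momentum: 25 kg m/s
p_after = m_ball * v                  # (b) momentum after: -12.5 kg m/s
p_to_bat = p_before - p_after         # (c) transferred to the bat: 37.5 kg m/s
v_bat = p_to_bat / m_bat              # (d) bat velocity: 7.5 m/s

ke_before = 0.5 * m_ball * u**2                          # 625 J
ke_after = 0.5 * m_ball * v**2 + 0.5 * m_bat * v_bat**2  # 296.875 J

# KE before > KE after, so the collision is inelastic.
print(p_before, p_to_bat, v_bat, ke_before, ke_after)
```

Running this reproduces 625 J before and about 297 J after, confirming the collision is inelastic.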
Question 1 was an adaptation of question 55(L) from Revised Nuffield Advanced Physics Unit A. Question 3 was an adaptation of question 50(I) from Revised Nuffield Advanced Physics Unit A. Question 4 is taken from Salters Horners Advanced Physics, Section TRA, Additional sheets 8 & 9.

TAP 221-2: Student Questions

Elastic collision questions

1. A trolley of mass 1 kg rolls along a level, frictionless ramp at a speed of 6 m s⁻¹. It collides with a second trolley of mass 2 kg which is initially at rest. The first trolley rebounds at a speed of 2 m s⁻¹.
a) Find, by conservation of momentum, the velocity of the second trolley after the collision.
b) Compare the kinetic energy before and after the collision. Is the collision elastic?

2. A bowling ball of mass 5 kg strikes a skittle of mass 1 kg. The bowling ball is travelling at 3 m s⁻¹ before the collision, which is elastic. Find the velocity of the ball and the skittle after the collision. (NB: in this question, ignore the rolling motion of the ball – assume it is sliding.) These questions are quite advanced and may not be appropriate for all specifications.

Answers and Worked Solutions

1. a) Let us use the initial direction of the first trolley as the positive direction. The momentum before = m₁u₁ = 1 × 6 = 6 kg m s⁻¹. Afterwards, let the velocities be v₁ and v₂ (the masses m₁ and m₂ are unchanged). Then:

    6 = m₁v₁ + m₂v₂
    6 = 1 × (−2) + 2 × v₂
    v₂ = 4 m s⁻¹

(Note the care with the signs.)
b) KE before = ½m₁u₁² = ½ × 1 × 6² = 18 J. KE after = ½m₁v₁² + ½m₂v₂² = ½ × 1 × (−2)² + ½ × 2 × 4² = 2 + 16 = 18 J. So KE is conserved and the collision is elastic.

2. Conservation of momentum: m_b·u_b = m_b·v_b + m_s·v_s, where the subscripts b and s refer to the ball and the skittle. This gives:

    15 kg m s⁻¹ = 5·v_b + v_s

Conservation of KE: ½m_b·u_b² = ½m_b·v_b² + ½m_s·v_s², i.e. ½ × 5 × 3² = ½ × 5 × v_b² + ½ 
× 1 × v_s², so:

    22.5 J = 2.5·v_b² + 0.5·v_s²

To solve this we use the first equation to eliminate v_s (v_s = 15 − 5·v_b) and substitute into the energy equation:

    22.5 = 2.5·v_b² + 0.5·(15 − 5·v_b)²

We'll multiply through by 0.4 to keep the numbers easy:

    9 = v_b² + 0.2·(15 − 5·v_b)²
    9 = v_b² + 45 − 30·v_b + 5·v_b²
    6·v_b² − 30·v_b + 36 = 0
    v_b² − 5·v_b + 6 = 0
    (v_b − 3)(v_b − 2) = 0

so v_b = 2 m s⁻¹ (or 3 m s⁻¹, but that was the original value). Substituting back into the first equation gives v_s = 5 m s⁻¹.

TAP 221-3: Momentum demonstrations

13. Pendulum on a trolley – momentum conservation. Hang a pendulum on a dynamics trolley or on the rider of a linear air track and observe its motion as the trolley moves along. (Apparatus: linear air track; pendulum supported on an air-track rider.)

18. Momentum in catching. This simple demonstration emphasises the vector nature of momentum. Use a ball (soccer or netball) and throw it to a pupil. Get them first to catch it and throw it back, and then to punch it backwards without catching it first. Ask students which requires the bigger force – catching the ball or punching it back. It should be clear that it is the second, and that this gives the bigger change of momentum, emphasising the velocity change from u to −u: momentum change = mu − (−mu) = 2mu when the ball is punched back, but only mu when it is caught. [NB: this fact will be used when calculating the pressure due to a gas using the kinetic theory of gases.] Consider whether or not this activity can be done safely in your lab with your pupils. (Apparatus: netball or football.)

10. The air rifle and momentum. How safe are these two experiments? The writer believes that if done with care and due regard to everyone's safety they are fine! So here they are. (a) Fix two timing gates – simply a plastic frame with a strip of aluminium foil across the centre – a metre apart and connected to a scaler. 
Fire an air rifle pellet through the two gates so that it starts the timer when the first strip of aluminium foil is broken and stops it when the second is broken. The time for the pellet to travel the one metre between the two pieces of foil is given directly, and the speed of the pellet can easily be worked out. Safety: the rifle should be clamped to a baseboard and a means of catching the pellet should be found – a box of polystyrene backed with a wooden board is suitable for this. Use a safety screen.
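Student question 2 from TAP 221-2 (the bowling ball and skittle) is the hardest calculation in this episode because of the quadratic, so it is worth cross-checking. This short script is an illustrative aid, not part of the original TAP materials:

```python
import math

M, u, m = 5.0, 3.0, 1.0   # ball mass (kg), ball speed (m/s), skittle mass (kg)

# Closed-form result for a 1-D elastic collision with a stationary target:
v_ball = (M - m) * u / (M + m)     # 2.0 m/s
v_skittle = 2 * M * u / (M + m)    # 5.0 m/s

# Cross-check against the quadratic v^2 - 5v + 6 = 0 from the worked solution:
a, b, c = 1.0, -5.0, 6.0
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

# roots are [2.0, 3.0]; 3.0 is the incoming speed, so the ball moves at 2.0 m/s
print(v_ball, v_skittle, roots)
```

Both routes give the same answer, and momentum checks out: 5 × 3 = 5 × 2 + 1 × 5.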
Presumably it was to minimize the operators used. Yup. I thought it would make it easier to understand the problem (the programming challenge). It is also to limit the number of possible solutions. If there are many building blocks, the search space is large but also full of good solutions; then even a random search works. With this I hoped to demonstrate that even with limited building blocks the algorithm can work towards a good solution. (It would be interesting to create a Perl program that can determine the solution density, say using Monte Carlo or so.) I was a bit surprised at |= :) I added that to show that the algorithm can come up with solutions that are not easily visible to humans. You can even add things like $x ^= 715; $x >>= 1; and it will come up with surprising results.
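The Monte Carlo idea could be sketched like this. Everything here is hypothetical — the operator set, program length and target are invented for illustration, and it's in Python rather than Perl:

```python
import random

# A random "program" is a fixed-length sequence of operators drawn from a
# small building-block set (including the ^= / >>= style ops mentioned above).
OPS = [
    lambda x: x ^ 715,
    lambda x: x >> 1,
    lambda x: x | 4,
    lambda x: x + 7,
]

def run(program, x):
    """Apply each operator in the program to x in turn."""
    for op in program:
        x = op(x)
    return x

def solution_density(start, target, length, trials=20_000):
    """Estimate the fraction of random programs that map start -> target."""
    hits = sum(
        run([random.choice(OPS) for _ in range(length)], start) == target
        for _ in range(trials)
    )
    return hits / trials

# With length-1 programs, exactly one of the four ops maps 0 -> 7,
# so the estimated density comes out near 0.25 for this toy setup.
print(solution_density(0, 7, 1))
```

A high density means even random search stumbles onto solutions; a sparse density is where a genetic or hill-climbing search actually earns its keep.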
The Relationship Between Language Paradigm and Parallelism: The EQ Prototyping Language

Both the imperative (Fortran, APL, Matlab, etc.) and functional (Sisal, Id, EPL, etc.) language paradigms have attractive features. Imperative languages are able to easily express changing quantities, something which functional languages find more difficult. However, the side-effects of imperative languages make parallelization more difficult than in functional languages. The EQ language is an attempt to blend these two paradigms, in order to try to achieve the advantages of both. While designed for sequential execution, it turns out that EQ makes the parallelism in a program explicit to the user, without introducing special "parallel" constructs. We feel this property makes EQ a good target for further parallel language research.

Keywords: Sequential Execution; Functional Language; Imperative Language; Implicit Parallelism; Language Paradigm

- ANSI. American National Standard Programming Language Fortran, ANSI X3.9-1978.
- A. P. W. Bohm, R. R. Oldehoeft, D. C. Cann, and J. T. Feo. Sisal Reference Manual, Version 2.0.
- Thomas Derby, Robert Schnabel, and Benjamin Zorn. Design Ideas for Prototyping Scientific Computations: the EQ Language. Technical Report, University of Colorado at Boulder, Dept. of Computer Science.
- Math Works Inc. MATLAB User's Guide, 1992.
- Rishiyur S. Nikhil. Id Language Reference Manual, Version 90.1. Postscript available via FTP from Massachusetts Institute of Technology, July 1991.
- Boleslaw K. Szymanski. "EPL – parallel programming with recurrent equations". In Boleslaw K. Szymanski, editor, Parallel Functional Languages and Compilers, chapter 3, pages 51–104. ACM Press, New York, New York, 1991.
Elementary Electromagnetic Phenomena

All the effects discussed in this text rely on the presence of electric, magnetic or electromagnetic fields in the system. It is therefore natural to first discuss the governing equations and some basic electromagnetic phenomena. In this regard, "elementary" in the title of this chapter refers to subjects related to beam-wave interaction and not necessarily to undergraduate-level topics, though a few really elementary concepts are discussed in Sects. 2.1 and 2.2.

Keywords: Evanescent Wave; Transverse Electric; Poynting Vector; Magnetic Vector Potential; Transverse Magnetic Mode
By: Julie Kerr Casper. 254 pages, full-colour photographs & line illustrations, tables & charts.

Global warming has increased dramatically during the last century, at an unnatural rate that leads specialists to believe humans are a real cause of global warming today. Many activities humans are involved in - from burning fossil fuels for energy to massive deforestation - are contributing to atmospheric warming at an alarming rate. Experts believe that in the future, human-induced damage will cause severe problems in the distribution of species and their critical habitats, increase the occurrence of severe weather and droughts, contribute to rising sea levels, and trigger a host of health and quality-of-life impacts that will affect everyone on Earth. Unfortunately, no ecosystem will escape the impact of human-induced global warming. "Changing Ecosystems" looks at this serious issue and the far-reaching effects it is having right now, and will have in the future, on every ecosystem on Earth. It is crucial that readers understand the relevant issues now, so they can help prevent this problem before it is too late and many species and habitats are gone forever. By discussing the effects of global warming on ecosystems, this new volume enlightens students on the many ways they can become more eco-responsible now and in the future. Chapters of this title include: Signs and Effects of Global Warming; Ecosystems, Adaptation, and Extinction; Impacts to Forests; Impacts to Rangelands, Grasslands, and Prairies; Impacts on Polar Ecosystems; Impacts to Desert Ecosystems; Impacts to Mountain Ecosystems; Impacts to Marine Ecosystems; and Conclusions - Where to Go from Here.
GOV.UK Message Queue Consumer

Standardises the way GOV.UK consumes messages from RabbitMQ. RabbitMQ is a messaging framework that allows applications to broadcast messages that can be picked up by other applications. For detailed documentation, check out the gem documentation on rubydoc.info. This gem is used by Rummager.

- Producer: an application that sends messages to RabbitMQ. On GOV.UK this could be publishing-api.
- Message: an object sent over RabbitMQ. It consists of a payload and headers. In the case of the publishing-api, the payload is a content item.
- Consumer: the app that receives the messages and does something with them. On GOV.UK, this is email-alert-service.
- Exchange: in RabbitMQ's model, producers send messages to an exchange. Consumers can create a queue that listens to the exchange, instead of subscribing to the exchange directly. This is done so that the queue can buffer any messages and we can make sure all messages get delivered to the consumer.
- Queue: a queue listens to an exchange. In most cases the queue will listen to all messages, but it's also possible to listen to a specific pattern.
- Processor: the specific class that processes a message.

This is a Ruby gem that deals with the boilerplate code of communicating with RabbitMQ. The user of this gem is left with the task of supplying the configuration and a class that processes messages. The gem is automatically released by Jenkins; to release a new version, raise a pull request with the version number incremented. It depends on the Bunny gem to interact with RabbitMQ.

Add a rake task like the following example:

    # lib/tasks/message_queue.rake
    namespace :message_queue do
      desc "Run worker to consume messages from rabbitmq"
      task consumer: :environment do
        GovukMessageQueueConsumer::Consumer.new(
          queue_name: "some-queue",
          processor: MyProcessor.new,
        ).run
      end
    end

More options are documented here. The consumer expects a number of environment variables to be present. 
On GOV.UK, these should be set up in puppet:

    RABBITMQ_HOSTS=rabbitmq1.example.com,rabbitmq2.example.com
    RABBITMQ_VHOST=/
    RABBITMQ_USER=a_user
    RABBITMQ_PASSWORD=a_super_secret

Define a class that will process the messages:

    # eg. app/queue_consumers/my_processor.rb
    class MyProcessor
      def process(message)
        # do something cool
      end
    end

The worker should also be added to the Procfile to run in production:

    # Procfile
    worker: bundle exec rake message_queue:consumer

Because you need the environment variables when running the consumer, you should use govuk_setenv to run your app in development:

    $ govuk_setenv app-name bundle exec rake message_queue:consumer

Processing a message

Once you receive a message, you must tell RabbitMQ when you've processed it. This is called acking. You can also discard the message, or retry it:

    class MyProcessor
      def process(message)
        result = do_something_with(message)
        if result.ok?
          # Ack the message when it has been processed correctly.
          message.ack
        elsif result.failed_temporarily?
          # Retry the message to make RabbitMQ send the message again later.
          message.retry
        elsif result.failed_permanently?
          # Discard the message when it can't be processed.
          message.discard
        end
      end
    end

You can pass a statsd_client to the GovukMessageQueueConsumer::Consumer initializer. The consumer will emit counters to statsd with these keys:

- your_queue_name.started – message picked up from your_queue_name
- your_queue_name.retried – message has been retried
- your_queue_name.acked – message has been processed and acked
- your_queue_name.discarded – message has been discarded
- your_queue_name.uncaught_exception – an uncaught exception occurred during processing

Remember to use a namespace for the statsd client:

    statsd_client = Statsd.new("localhost")
    statsd_client.namespace = "govuk.app.my_app_name"

    GovukMessageQueueConsumer::Consumer.new(
      statsd_client: statsd_client,
      # ... other setup code omitted
    ).run

Testing your processor

This gem provides a test helper for your processor: 
    # eg. spec/queue_consumers/my_processor_spec.rb
    require 'test_helper'
    require 'govuk_message_queue_consumer/test_helpers'

    describe MyProcessor do
      it_behaves_like "a message queue processor"
    end

This will verify that your processor class implements the correct methods. You should add your own tests to verify its behaviour. You can use GovukMessageQueueConsumer::MockMessage to test the processor's behaviour; when using the mock, you can verify that it was acked, retried or discarded. For example:

    it "acks incoming messages" do
      message = GovukMessageQueueConsumer::MockMessage.new

      MyProcessor.new.process(message)

      expect(message).to be_acked
      # or if you use minitest:
      assert message.acked?
    end

For more test cases see the spec for the mock itself.

Running the test suite:

    bundle exec rake spec

- Bunny is the RabbitMQ client we use.
- The Bunny Guides explain all AMQP concepts really well.
- The Developer Docs document the usage of "heartbeat" messages, which this gem also supports.
Amateur Astronomer Reconnects NASA to Zombie Satellite

An amateur astronomer has picked up signals from a satellite NASA gave up for dead more than a decade ago. Scott Tilley was scanning the skies for secret military satellites when he picked up a transmission from the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) satellite. “Every once in a while, the universe gives you a gift, and this one, I think, is ours,” said the University of Arizona’s Bill Sandel, team lead for one of IMAGE’s instruments, the Extreme Ultraviolet Imager (EUV). Sandel, a retired senior research scientist at UA’s Lunar and Planetary Laboratory, has continued to analyze IMAGE data, but admits that work has slowed down recently. All that could change if NASA succeeds in restoring the satellite’s functions and reactivating its instrument array. IMAGE launched in 2000 to map the Earth’s magnetic field — and the charged particles it contains — on a larger scale than was previously possible. After a successful two-year run, NASA extended its mission. Then, in 2005, the space agency unexpectedly lost contact with the craft. The IMAGE failure review board hoped that the solar-powered satellite would reboot on its own if it spent enough time in Earth’s shadow to drain its batteries. But when a 2007 opportunity came and went with no signal, the agency declared the satellite lost. It is now clear that IMAGE has been transmitting since at least October 2016, and possibly earlier. NASA is still trying to piece together what happened, but for now it appears that a reboot did occur at some point after 2007. NASA’s Goddard Space Flight Center in Greenbelt, Maryland, confirmed the satellite’s identity after using five antennas from the Deep Space Network to acquire its radio signals and checking them against IMAGE’s expected characteristics. 
Subsequent tests have revealed that the craft appears to be in good shape, and that its battery is fully charged. Engineers will next test whether IMAGE can receive and carry out commands — that is, once they recreate the 12-plus-year-old software and find a machine that can run it. “Then we’ll be in a position to think about restarting the science payload, getting the instruments up and running again, and making a new set of observations,” said Sandel. At stake is a better understanding of the magnetic field that surrounds the Earth, shielding the planet from solar wind and cosmic rays. Within its confines lies the plasmasphere, a doughnut-shaped region roiling with a gaseous soup of charged atoms and molecules known as plasma. This plasmasphere starts at Earth’s upper atmosphere, where solar radiation ionizes atmospheric molecules in the ionosphere. Improved knowledge of this complex, coupled system would answer basic scientific questions and potentially improve radio communications, GPS accuracy, satellite lifespans and other factors affected by the ionosphere, the magnetosphere, or the Van Allen radiation belts. Prior to IMAGE, satellites studied the Earth’s electromagnetic environs by flying through them, their detectors providing data for wherever they passed through, whenever they passed through it. Such snapshots limited the conclusions scientists could draw, since a change in a reading could be chalked up either to the passage of time or a change of place. IMAGE expanded upon that research by providing a more comprehensive view. “The secret is to use imaging to record the state of the entire system at once, and then to continue looking over long periods of time so that, instead of making a measurement at a single point in space, you measure simultaneously the whole system,” said Sandel. Sandel’s EUV instrument photographs helium ions in far-ultraviolet light. 
The wavelength at which these ions scatter sunlight, 30.4 nanometers, is about one-tenth the shortest wavelength the human eye can detect. Because these ions populate the plasmasphere, imaging them provides a picture of activity within. “It shrinks and contracts and changes on time scales of a few hours, in a very dramatic way,” Sandel said.
Novel Species Paving the Way Identification and accurate documentation of species is vital to understanding how an ecosystem functions and in developing sound strategies for conservation management. Dilmah Conservation’s ‘Novel Species Paving the Way for Biodiversity Conservation’ programme was established to address the dearth of knowledge of Sri Lanka’s herpetofaunal (reptile and amphibian) species, thereby generating scientific evidence of their existence and thus aiding and elevating their conservation. Dilmah is privileged to be one of the only private organizations that have contributed to the discovery of new species. Thus far, the novel species programme has facilitated the discovery of 11 species of frog - including a shrub frog named Pseudophilautus dilmah, ‘named after Dilmah Conservation, for its dedicated efforts to biodiversity conservation on the Island’ as stated by the authors – 2 species of gecko, and one species of snake along with the re-characterization of another. DC also recognizes the ecological significance of lichen and contributed to their protection and conservation by enabling the discovery of 8 lichen species new to science (including one discovered within Dilmah’s Queensbury estate and so named Heterodermia queensberryi) and 88 new lichen species records from Sri Lanka.
The nucleus of Comet Borrelly: a study of morphology and surface brightness
Keywords: comets; nucleus; surfaces; topography; morphology; photometry; IMAGES; GANYMEDE; Astronomy & Astrophysics
Abstract: Stereo images obtained during the DS1 flyby were analyzed to derive a topographic model for the nucleus of Comet 19P/Borrelly for morphologic and photometric studies. The elongated nucleus has an overall concave shape, resembling a peanut, with the lower end tilted towards the camera. The bimodal character of surface-slopes and curvatures support the idea that the nucleus is a gravitational aggregate, consisting of two fragments in contact. Our photometric modeling suggests that topographic shading effects on Borrelly's surface are very minor (< 10%) at the given resolution of the terrain model. Instead, albedo effects are thought to dominate Borrelly's large variations in surface brightness. With 90% of the visible surface having single scattering albedos between 0.008 and 0.024, Borrelly is confirmed to be among the darkest of the known Solar System objects. Photometrically corrected images emphasize that the nucleus has distinct, contiguous terrains covered with either bright or dark, smooth or mottled materials. Also, mapping of the changes in surface brightness with phase angle suggests that terrain roughness at subpixel scale is not uniform over the nucleus. High surface roughness is noted in particular near the transition between the upper and lower end of the nucleus, as well as near the presumed source region of Borrelly's main jets. Borrelly's surface is complex and characterized by distinct types of materials that have different compositional and/or physical properties. (C) 2003 Elsevier Inc. All rights reserved. "The nucleus of Comet Borrelly: a study of morphology and surface brightness" (2004). Faculty Bibliography 2000s. 4617.
Relativity: The Special and General Theory
by Albert Einstein
Publisher: Methuen & Co Ltd 1924
Number of pages: 56

How better to learn the Special Theory of Relativity and the General Theory of Relativity than directly from their creator, Albert Einstein himself? In Relativity: The Special and the General Theory, Einstein describes the theories that made him famous, illuminating his case with numerous examples and a smattering of math. Download or read it online for free here:

by Edwin Emery Slosson - Brace and Howe
What is this theory of relativity and why is it so important? The mathematics of it are too much for most of us, but we can get some notion of it by a familiar illustration. A discussion of the more intelligible features of the theory of relativity.

by Matej Pavsic - arXiv
This book is for those who would like to learn something about special and general relativity beyond the usual textbooks, about quantum field theory, the elegant Fock-Schwinger-Stueckelberg proper time formalism, and much more.

by Frank W. K. Firk - Yale University
A book for the inquisitive reader who wishes to understand the main ideas of the special and general theory of relativity. Only a modest understanding of high school mathematics is required. A formal account of special relativity is given in an appendix.

by Francois Dehouck - arXiv
This thesis deals with the construction of conserved charges for asymptotically flat spacetimes at spatial infinity in four spacetime dimensions in a pedagogical way. It highlights the difficulties with understanding the gravitational duality...
A fern is a member of a group of vascular plants that reproduce via spores and have neither seeds nor flowers. They differ from mosses by being vascular, i.e., having specialized tissues that conduct water and nutrients, in having branched stems and in having life cycles in which the sporophyte is the dominant phase. Like other vascular plants, ferns have complex leaves called megaphylls, that are more complex than the microphylls of clubmosses. Most ferns are leptosporangiate ferns, sometimes referred to as true ferns. They produce coiled fiddleheads that uncoil and expand into fronds. The group includes about 10,560 known extant species. Ferns are defined here in the broad sense, being all of the Polypodiopsida, comprising both the leptosporangiate (Polypodiidae) and eusporangiate ferns, the latter itself comprising ferns other than those denominated true ferns, including horsetails or scouring rushes, whisk ferns, marattioid ferns, and ophioglossoid ferns. Ferns first appear in the fossil record about 360 million years ago in the late Devonian period, but many of the current families and species did not appear until roughly 145 million years ago in the early Cretaceous, after flowering plants came to dominate many environments. The fern Osmunda claytoniana is a paramount example of evolutionary stasis; paleontological evidence indicates it has remained unchanged, even at the level of fossilized nuclei and chromosomes, for at least 180 million years. Ferns are not of major economic importance, but some are used for food, medicine, as biofertilizer, as ornamental plants and for remediating contaminated soil. They have been the subject of research for their ability to remove some chemical pollutants from the atmosphere. Some fern species are significant weeds. They also play certain roles in mythology and art. Like the sporophytes of seed plants, those of ferns consist of stems, leaves and roots. 
Stems: Fern stems are often referred to as rhizomes, even though they grow underground only in some of the species. Epiphytic species and many of the terrestrial ones have above-ground creeping stolons (e.g., Polypodiaceae), and many groups have above-ground erect semi-woody trunks (e.g., Cyatheaceae). These can reach up to 20 meters (66 ft) tall in a few species (e.g., Cyathea brownii on Norfolk Island and Cyathea medullaris in New Zealand).
Leaves: The green, photosynthetic part of the plant is technically a megaphyll and in ferns, it is often referred to as a frond. New leaves typically expand by the unrolling of a tight spiral called a crozier or fiddlehead. This uncurling of the leaf is termed circinate vernation. Leaves are divided into two types: trophophylls and sporophylls. A trophophyll frond is a vegetative leaf analogous to the typical green leaves of seed plants that does not produce spores, instead only producing sugars by photosynthesis. A sporophyll frond is a fertile leaf that produces spores borne in sporangia that are usually clustered to form sori. In most ferns, fertile leaves are morphologically very similar to the sterile ones, and they photosynthesize in the same way. In some groups, the fertile leaves are much narrower than the sterile leaves, and may even have no green tissue at all (e.g., Blechnaceae, Lomariopsidaceae). The anatomy of fern leaves can either be simple or highly divided. In tree ferns, the main stalk that connects the leaf to the stem (known as the stipe) often bears multiple leaflets. The leafy structures that grow from the stipe are known as pinnae and are often again divided into smaller pinnules.
Where is all of the New Horizons data? Last summer, speeding along at 9 miles per second, the New Horizons spacecraft flew by Pluto and its 5 moons, gathering data and snapping incredible high resolution photos of the never-before-seen surface of Pluto, which has led to extraordinary discoveries about the mysterious dwarf planet. However, much more information still remains out of scientists' reach aboard the spacecraft flying ever further away towards the outer reaches of our solar system. In an effort to meet time and cost constraints, data collected was stored aboard the spacecraft instead of being immediately transmitted to Earth. At a separating distance of 3 billion miles, communication is understandably slow between the spacecraft and scientists in Boulder, Colorado, who write the command sequences that compress and transmit the Pluto data. With more than half of the data yet to be received, scientists estimate another year until all information is secured on Earth. It should be noted that flying a spacecraft is not easy, and in the event that the connection is lost, data would be as well. Therefore, it is of the highest importance that the data is tracked and accounted for at every stage of its transmission process. Emma Birath leads the Science Operations team at Southwest Research Institute, which uses software she designed called DataTrack. The program helps to organize what data has been sent, scheduled to transmit, compressed, and remains unprocessed. According to Birath's blog, DataTrack has greatly improved communication among scientist teams and will be invaluable in the safe accumulation of Pluto's paramount data. As for the Pluto data that scientists do have, interesting observations have been made of the dwarf planet's geological features. New Horizons, 21,100 miles from the planet, took the image below with the frame's bottom edge measuring 750 miles of terrain.
The region shown in the image is called Lowell Regio, named in honor of the Lowell Observatory's founder, who is also responsible for encouraging the search that would eventually lead to Pluto's discovery. The yellow areas seen in the image of Pluto's north pole below indicate the highest points of elevation, while the transitioning blue-gray colors reference the lowest points. The colors are real but have been enhanced to clearly show the difference in elevation. Scientists are unsure why the high terrain is yellow, but one possibility is that older deposits of methane in these areas have been more processed by solar radiation. Canyons across the Lowell Regio similarly have degraded walls, signifying that they are significantly older than other canyons across Pluto, and remain from an age in which tectonics defined the landscape. The widest canyon stretches roughly 45 miles, and of similar width are several shallow pits, which were likely caused by ground collapse when subsurface ice melted and sublimed. Stay tuned for more amazing discoveries about everyone's favorite ex-planet as more data is sent from New Horizons over the course of the coming year.
"As I was telling you guys a week or so ago, this Blood Moon that's approaching on July 27 - before it gets here you have a partial solar eclipse which will be over Tasmania". Partial solar Eclipses are witnessed more often and happens frequently. The last solar eclipse that fell on Friday the 13th was in December 1974. Can the partial solar eclipse be visible from India? The partial solar eclipse or Surya Grahan would begin on 13th July 2018 at 07:18:23 am and go on till 08:31:05 am, according to Indian local time. Syrian army hoists national flag in Dara'a city A Russian military delegation earlier held negotiations with the militants in Dera'a to persuade them to surrender their weapons. Latest estimates say some 2,000 militants are still holed up in Dara'a city, along with their families. Experts have prohibited directly viewing the eclipse without safety measures. Some sections of modern sciences do nottstick to the idea of following the idea of refraining from food or water but Ayurveda practitioners believe that staying away from food during the eclipse is advisable. To view the partial eclipse, people should wear proper glasses to protect their eyes from the Sun. The eclipse reached its peak over Antarctica in the early hours of this morning. The eclipse will be a partial one - when the Moon slips past, only partially concealing the Sun. There are many beliefs and myths associated with this date across the world. The blood moon is estimated to last over 100 minutes and is expected to cast a larger shadow over the Earth than previously recorded moons. Jonathan Powell, author of South Wales Argus night sky column, said: "For many this will be the eclipse of a lifetime, as it's the longest of the 21st century and there won't be another like it for 82 years". May's Brexit plans could 'kill' US trade deal Khan in the past has publicly feuded with Trump after the president criticized him over the city's response to a terror attack. 
Share this article: The only supermoon of the year will rise on Sunday night, appearing bigger and brighter than any other full moon in 2017. A supermoon is a full moon that falls near or on perigee, the point in the moon’s orbit where it is closest to the Earth. This causes it to appear slightly larger and brighter than normal. December’s full moon goes by many different names, including the Full Cold Moon, the Long Night Moon, the Oak Moon and the Moon Before Yule. It is most commonly called the Full Cold Moon because December is when the winter cold fastens its grip and the nights become long and dark, according to the Old Farmer’s Almanac. This weekend’s full moon will rise on Sunday evening shortly after sunset and will be visible all night long, weather permitting. An optical illusion, known as the “moon illusion,” will make the moon appear even bigger when it is close to the horizon compared to when it is high in the sky. Shortly after moonrise or before moonset is also a good time to photograph the moon, not only because of this illusion, but also because the moon will appear in frame with the natural landscape on the ground. In addition to appearing larger and brighter than normal, supermoons can affect the oceans. “The supermoon plays a role in the tides and has a stronger influence than other full moons,” AccuWeather Senior Meteorologist David Samuhel said. December’s full moon is the first of three full moons in a row that will be considered a supermoon. January will feature the next two supermoons, the first falling on Jan. 1 and the second occurring on Jan. 31. The second full moon in January will also be a blue moon, the name given to the second full moon in a calendar month. Additionally, on the same night as the super blue moon, part of the world will experience a total lunar eclipse as the moon passes through the shadow of the Earth.
People from across Australia and eastern Asia will be able to view the entire event, while the rest of Asia, eastern Europe and North America will be able to view only part of the eclipse.
Now, researchers from the University of the Basque Country (Universidad del País Vasco) have studied the responses to the midsummer heat of the Mediterranean and Atlantic trees and bushes of the Iberian Peninsula to conclude that the latter species will suffer most with the increase in temperatures. Researchers from the Department of Plant Biology and Ecology from the University of the Basque Country have shown the response capacity of Mediterranean and Atlantic plants. "We were able to notice that all species responded in a similar way, through the accumulation of photoprotective compounds (tocopherol or Vitamin E), reduction in chlorophyll content and the activation of the so-called xanthophyll cycle," points out José Ignacio García-Plazaola, the first signatory of the study. The study, which is published in the journal Trees - Structure and Function, compares the effects of the summer of 2003 with the same period for 1998, 1999 and 2001. Generally, all the summers were dry, but in 2003 there was an average increase of 5 °C, and this was considered to be the most stressful time for the trees, which turned yellow and started to shed their leaves before the autumn. Differences between the Mediterranean and Atlantic species The researchers noticed a notable difference between the Mediterranean and Atlantic species. "The Mediterranean species were much more plastic, having a much greater ability to stimulate the defence systems," states García-Plazaola. With regard to the distribution of Atlantic species, scientists recorded the partial extinction of trees or bushes, such as the bearberry (Arctostaphylos), after the heat wave. The study shows that the Atlantic species have less ability to respond to acute summer stress because of their responses to photosynthesis and the induction of photoprotective molecules.
However, the majority of Mediterranean species, as they keep their green leaves throughout the year, are much more protected in the presence of environmental adversities and have developed mechanisms which allow them to acclimatise in an efficient way in the presence of heat waves and episodic cold waves as well. According to the research, this phenomenon could be of special significance in the context of future global warming, when the Atlantic species would be affected more. "This result creates doubts about the future viability of certain Atlantic species that find their distribution limit on the Iberian Peninsula, as is the case of the beech tree (F. sylvatica)," concludes García-Plazaola. The unusually hot period that affected Europe in the summer of 2003 may have been the most extreme heat wave in the last 200 years. The plant species had to deal with an unequalled level of environmental stress (or adversity) in their entire existence, circumstances that they will have to face more and more frequently as a consequence of climate change. Five years after the heat wave the Mediterranean species (Box and Holm Oak) remain the same, but it has not been possible for the Atlantic species (Bearberry) to recover and it has disappeared. Photo: SINC/José Ignacio
Our Vision and Goals For quick summaries of the carnivores we work with, click on the names below. Dr. Adrian Treves founded the Carnivore Coexistence Lab in April 2007. Large carnivores are the most challenging species with which to coexist. For millions of years, they competed with our ancestors for food and space. Humans were generally subordinate in this struggle. But, the past few hundred years have seen the tables turned. Now humans cause most carnivore mortality worldwide. We have degraded ecosystems as a result because large carnivores play essential roles in maintaining functioning, diverse ecosystems. Therefore large carnivores are among the most challenging to preserve. Two species of large carnivores have gone extinct in recent times and most have suffered major population reductions. Loss of large carnivores disrupts ecosystems and depletes biodiversity, because of cascading influences on prey and smaller-bodied carnivores. The larger species of carnivores typically require vast areas to survive, thereby competing indirectly with people for space and resources. Direct competition is also apparent when carnivores prey on livestock or damage crops and when people retaliate by clearing habitat or killing carnivores. Human causes of mortality predominate in virtually all large carnivore populations. Mainly, people retaliate against carnivores for real and perceived threats to property, safety, or hunted species. Thus, carnivore conservation has often depended on reducing human causes of mortality. Both private citizens and governments are implicated. Government-sponsored bounties, pest eradication campaigns, and trophy hunts extirpated carnivores across vast areas of many countries. Even in the last decade, private eradication efforts have occurred in many regions. Large carnivores can be conserved within human-dominated areas, while also protecting people's livelihoods and safety. The solutions are never simple; indeed they can be maddeningly complex. 
But when we combine local knowledge with technical support and state-of-the-art research, we can balance the needs of people and wildlife. The challenge of preserving nature in the face of climate change and the sixth mass extinction requires that current generations recognize their public duty to protect nature for future generations. It also requires that governments and public scientists account transparently in a sophisticated way for nature's assets as a trust for current and future generations not narrow interests. The constitutional and public trust frameworks that establish these duties are legal obligations as well as ethical and moral responsibilities. Since 2012, Dr. Adrian Treves has combined ecology and the law to understand the roles of legal, ethical and scientific duties in preserving nature for posterity. The Carnivore Coexistence Lab members are public scientists who strive to uphold these duties to the broadest public not narrow interests or donors. To do so, we transparently describe our value judgments as follows: - Nature is an asset held in trust for current and future generations of all life. - Youth, current adults, and future generations have equal rights. - Democratic governments are empowered by the sovereign public to preserve nature and regulate its use as a trust for the broad public. - Respect the law. - Use ethical justifications for nature preservation and use. - Use best available science and account transparently and reproducibly for the condition of Nature's trust.
This is an introductory tutorial on Docker containers. By the end of this article, you will know how to use Docker on your local machine. Along with Python, we are going to run Nginx and Redis containers. Those examples assume that you are familiar with the basic concepts of those technologies. There will be lots of shell examples, so go ahead and open the terminal.

Table of Contents
- What is Docker?
- How does it differ from virtualization?
- Why do we need Docker?
- Supported platforms
- Example 1: hello world
- Example 2: Environment variables and volumes
- Example 3: Writing your first Dockerfile
- Best practices for creating images
- Alpine images
- Example 4: Connection between containers
- Docker way

What is Docker?
Docker is an open-source tool that automates the deployment of an application inside a software container. The easiest way to grasp the idea behind Docker is to compare it to, well... standard shipping containers. Back in the day, transportation companies faced the following challenges:
- How to transport different (incompatible) types of goods side by side (like food and chemicals, or glass and bricks).
- How to handle packages of various sizes using the same vehicle.

After the introduction of containers, bricks could be put over glass, and chemicals could be stored next to food. Cargo of various sizes can be put inside a standardized container and loaded/unloaded by the same vehicle. Let's go back to containers in software development. When you develop an application, you need to provide your code along with all possible dependencies like libraries, the web server, databases, etc. You may end up in a situation where the application works on your computer, but won't even start on the staging server, or the dev or QA's machine. This challenge can be addressed by isolating the app to make it independent of the system. Traditionally, virtual machines were used to avoid this unexpected behavior.
The main problem with VM is that an “extra OS” on top of the host operating system adds gigabytes of space to the project. Most of the time your server will host several VMs that will take up even more space. And by the way, at the moment, most cloud-based server providers will charge you for that extra space. Another significant drawback of VM is a slow boot. Docker eliminates all the above by simply sharing the OS kernel across all the containers running as separate processes of the host OS. Keep in mind that Docker is not the first and not the only containerization platform. However, at the moment Docker is the biggest and the most powerful player on the market.

Why do we need Docker?
The short list of benefits includes:
- Faster development process
- Handy application encapsulation
- The same behaviour on local machine / dev / staging / production servers
- Easy and clear monitoring
- Easy to scale

Faster development process
There is no need to install 3rd-party apps like PostgreSQL, Redis, or Elasticsearch on the system – you can run them in containers. Docker also gives you the ability to run different versions of the same application simultaneously. For example, say you need to do some manual data migration from an older version of Postgres to a newer version. You can have such a situation in a microservice architecture when you want to create a new microservice with a new version of the 3rd-party software. It could be quite complex to keep two different versions of the same app on one host OS. In this case, Docker containers could be a perfect solution – you receive isolated environments for your applications and 3rd-parties.

Handy application encapsulation
You can deliver your application in one piece. Most programming languages, frameworks and all operating systems have their own packaging managers. And even if your application can be packed with its native package manager, it could be hard to create a port for another system.
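The "two versions side by side" point above can be sketched with two throwaway Postgres containers. This is a minimal illustration, not part of the original tutorial: the container names, host ports, and password are made up for the example, and it assumes a running Docker daemon.

```shell
# Hypothetical sketch: run two PostgreSQL versions side by side.
# Names (pg-old, pg-new), host ports and the password are illustrative.
docker run -d --name pg-old -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:9.6
docker run -d --name pg-new -e POSTGRES_PASSWORD=secret -p 5433:5432 postgres:10

# Each container has its own isolated filesystem and data directory,
# so the two versions never conflict on the host. Clean up afterwards:
docker rm -f pg-old pg-new
```

Because each version listens on its own host port, a migration script can connect to both at once without either installation touching the host OS.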
Docker gives you a unified image format to distribute your applications across different host systems and cloud services. You can deliver your application in one piece with all the required dependencies (included in an image) ready to run.

Same behaviour on local machine / dev / staging / production servers
Docker can’t guarantee 100% dev / staging / production parity, because there is always the human factor. But it reduces to almost zero the probability of error caused by different versions of operating systems, system dependencies, etc. With the right approach to building Docker images, your application will use the same base image with the same OS version and the required dependencies.

Easy and clear monitoring
Out of the box, you have a unified way to read log files from all running containers. You don't need to remember all the specific paths where your app and its dependencies store log files and write custom hooks to handle this. You can integrate an external logging driver and monitor your app log files in one place.

Easy to scale
A correctly wrapped application will cover most of the Twelve Factors. By design, Docker forces you to follow its core principles, such as configuration over environment variables, communication over TCP/UDP ports, etc. And if you’ve done your application right, it will be ready to scale, and not only in Docker.

Supported platforms
Docker’s native platform is Linux, as it’s based on features provided by the Linux kernel. However, you can still run it on macOS and Windows. The only difference is that on macOS and Windows, Docker is encapsulated into a tiny virtual machine. At the moment, Docker for macOS and Windows has reached a significant level of usability and feels more like a native app. You can check out the installation instructions for Docker here.
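The unified log access mentioned under monitoring boils down to one command: docker logs reads whatever the containerized process writes to stdout/stderr, regardless of what app is inside. The container name below is hypothetical.

```shell
# Reading logs the same way for any container ("webapp" is an illustrative name).
docker logs webapp             # everything written to stdout/stderr so far
docker logs -f webapp          # follow the stream live, like `tail -f`
docker logs --tail 100 webapp  # only the most recent 100 lines
```

This works identically for Nginx, Redis, Postgres or your own app, which is exactly why containerized apps should log to stdout/stderr rather than to files buried inside the container.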
If you're running Docker on Linux, you need to run all the following commands as root, or add your user to the docker group and re-login:

sudo usermod -aG docker $(whoami)

- Container – a running instance that encapsulates required software. Containers are always created from images. A container can expose ports and volumes to interact with other containers or/and the outer world. Containers can be easily killed / removed and re-created again in a very short time. Containers don't keep state.
- Image – the basic element for every container. When you create an image, every step is cached and can be reused (Copy On Write model). Depending on the image, it can take some time to build. Containers, on the other hand, can be started from images right away.
- Port – a TCP/UDP port in its original meaning. To keep things simple, let’s assume that ports can be exposed to the outer world (accessible from the host OS) or connected to other containers – i.e., accessible only from those containers and invisible to the outer world.
- Volume – can be described as a shared folder. Volumes are initialized when a container is created. Volumes are designed to persist data, independent of the container’s lifecycle.
- Registry – the server that stores Docker images. It can be compared to GitHub – you can pull an image from the registry to deploy it locally, and push locally built images to the registry.
- Docker Hub – a registry with a web interface provided by Docker Inc. It stores a lot of Docker images for different software. Docker Hub is the source of the "official" Docker images made by the Docker team or in cooperation with the original software manufacturer (which doesn't necessarily mean that these "official" images come from the official software manufacturers). Official images list their potential vulnerabilities. This information is available to any logged-in user. There are both free and paid accounts available.
You can have one private image per account and an unlimited number of public images for free. Docker Store – a service very similar to Docker Hub. It's a marketplace with ratings, reviews, etc. My personal opinion is that it's marketing stuff. I'm totally happy with Docker Hub.

It's time to run your first container:

docker run ubuntu /bin/echo 'Hello world'

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
6b98dfc16071: Pull complete
4001a1209541: Pull complete
6319fc68c576: Pull complete
b24603670dc3: Pull complete
97f170c87c6f: Pull complete
Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d
Status: Downloaded newer image for ubuntu:latest
Hello world

- docker run is a command to run a container.
- ubuntu is the image you run. For example, the Ubuntu operating system image. When you specify an image, Docker looks first for the image on your Docker host. If the image does not exist locally, then the image is pulled from the public image registry – Docker Hub.
- /bin/echo 'Hello world' is the command that will run inside a new container. This container simply prints “Hello world” and stops the execution.

Let’s try to create an interactive shell inside a Docker container:

docker run -i -t --rm ubuntu /bin/bash

- -t flag assigns a pseudo-tty or terminal inside the new container.
- -i flag allows you to make an interactive connection by grabbing the standard input (STDIN) of the container.
- --rm flag automatically removes the container when the process exits. By default, containers are not deleted.

This container exists as long as we keep the shell session open, and terminates when we exit the session (like an SSH session with a remote server). If you want to keep the container running after the end of the session, you need to daemonize it:

docker run --name daemon -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"

- --name daemon assigns the daemon name to the new container.
If you don’t specify a name explicitly, Docker will generate and assign one automatically.
- -d flag runs the container in the background (i.e., daemonizes it).

Let’s see what containers we have at the moment:

docker ps -a
CONTAINER ID IMAGE  COMMAND                CREATED             STATUS                          PORTS NAMES
1fc8cee64ec2 ubuntu "/bin/sh -c 'while..." 32 seconds ago      Up 30 seconds                         daemon
c006f1a02edf ubuntu "/bin/echo 'Hello ..." About a minute ago  Exited (0) About a minute ago         gifted_nobel

- docker ps is a command to list containers.
- -a shows all containers (without the -a flag, ps will show only running containers).

The ps output shows us that we have two containers:
- gifted_nobel (the name for this container was generated automatically – it will be different on your machine). It's the first container we created, the one that printed 'Hello world' once.
- daemon – the third container we created, which runs as a daemon.

Note: there is no second container (the one with the interactive shell) because we set the --rm option. As a result, this container is automatically deleted right after execution.

Let’s check the logs and see what the daemon container is doing right now:

docker logs -f daemon
...
hello world
hello world
hello world

- docker logs fetches the logs of a container.
- -f flag follows the log output (it works like tail -f).

Now let’s stop the daemon container:

docker stop daemon

Make sure the container has stopped:

docker ps -a
CONTAINER ID IMAGE  COMMAND                CREATED       STATUS                      PORTS NAMES
1fc8cee64ec2 ubuntu "/bin/sh -c 'while..." 5 minutes ago Exited (137) 5 seconds ago       daemon
c006f1a02edf ubuntu "/bin/echo 'Hello ..." 6 minutes ago Exited (0) 6 minutes ago         gifted_nobel

The container is stopped. We can start it again:

docker start daemon

Let’s ensure that it’s running:

docker ps -a
CONTAINER ID IMAGE  COMMAND                CREATED       STATUS       PORTS NAMES
1fc8cee64ec2 ubuntu "/bin/sh -c 'while..." 5 minutes ago Up 3 seconds       daemon
c006f1a02edf ubuntu "/bin/echo 'Hello ..."
6 minutes ago Exited (0) 7 minutes ago gifted_nobel

Now, stop it again and remove all the containers manually:

docker stop daemon
docker rm <your first container name>
docker rm daemon

To remove all containers, we can use the following command:

docker rm -f $(docker ps -aq)

- docker rm is the command to remove a container.
- -f flag (for rm) stops the container if it's running (i.e., force deletion).
- -q flag (for ps) prints only container IDs.

It’s time to create and run a more meaningful container, like Nginx. Change the directory to examples/nginx:

docker run -d --name "test-nginx" -p 8080:80 -v $(pwd):/usr/share/nginx/html:ro nginx:latest

Warning: This command looks quite heavy, but it's just an example to explain volumes and env variables. In 99% of real-life cases, you won't start Docker containers manually – you'll use orchestration services (we'll cover docker-compose in example #4) or write a custom script to do it.

Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
683abbb4ea60: Pull complete
a470862432e2: Pull complete
977375e58a31: Pull complete
Digest: sha256:a65beb8c90a08b22a9ff6a219c2f363e16c477b6d610da28fe9cba37c2c3a2ac
Status: Downloaded newer image for nginx:latest
afa095a8b81960241ee92ecb9aa689f78d201cff2469895674cec2c2acdcc61c

- -p is a port mapping: HOST PORT:CONTAINER PORT.
- -v is a volume mounting: HOST DIRECTORY:CONTAINER DIRECTORY.

Important: the run command accepts only absolute paths. In our example, we've used $(pwd) to set the current directory absolute path.

Now check this url in your web browser. We can try to change /example/nginx/index.html (which is mounted as a volume to the /usr/share/nginx/html directory inside the container) and refresh the page.

Let’s get the information about the test-nginx container:

docker inspect test-nginx

This command displays detailed, low-level information about a container.
This information includes the container's configuration, exposed ports, mounted volumes, network settings, etc. (For system-wide information about the Docker installation, such as the kernel version and the number of containers and images, use docker info.)

To build a Docker image, you need to create a Dockerfile. It is a plain text file with instructions and arguments. Here is a description of the instructions we’re going to use in the next example:

- FROM -- set the base image
- RUN -- execute a command in the container
- ENV -- set an environment variable
- WORKDIR -- set the working directory
- VOLUME -- create a mount point for a volume
- CMD -- set the executable for the container

You can check the Dockerfile reference for more details.

Let’s create an image that will get the contents of a website with curl and store it in a text file. We need to pass the website url via the environment variable SITE_URL. The resulting file will be placed in a directory mounted as a volume. Place a file named Dockerfile in the examples/curl directory with the following contents:

FROM ubuntu:latest
RUN apt-get update \
    && apt-get install --no-install-recommends --no-install-suggests -y curl \
    && rm -rf /var/lib/apt/lists/*
ENV SITE_URL http://example.com/
WORKDIR /data
VOLUME /data
CMD sh -c "curl -Lk $SITE_URL > /data/results"

The Dockerfile is ready. It’s time to build the actual image. Go to the examples/curl directory and execute the following command to build an image:

docker build . -t test-curl

Sending build context to Docker daemon 3.584kB
Step 1/6 : FROM ubuntu:latest
 ---> 113a43faa138
Step 2/6 : RUN apt-get update && apt-get install --no-install-recommends --no-install-suggests -y curl && rm -rf /var/lib/apt/lists/*
 ---> Running in ccc047efe3c7
Get:1 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [83.2 kB]
...
Removing intermediate container ccc047efe3c7
 ---> 8d10d8dd4e2d
Step 3/6 : ENV SITE_URL http://example.com/
 ---> Running in 7688364ef33f
Removing intermediate container 7688364ef33f
 ---> c71f04bdf39d
Step 4/6 : WORKDIR /data
Removing intermediate container 96b1b6817779
 ---> 1ee38cca19a5
Step 5/6 : VOLUME /data
 ---> Running in ce2c3f68dbbb
Removing intermediate container ce2c3f68dbbb
 ---> f499e78756be
Step 6/6 : CMD sh -c "curl -Lk $SITE_URL > /data/results"
 ---> Running in 834589c1ac03
Removing intermediate container 834589c1ac03
 ---> 4b79e12b5c1d
Successfully built 4b79e12b5c1d
Successfully tagged test-curl:latest

- docker build command builds a new image locally.
- -t flag sets the name tag for the image.

Now we have the new image, and we can see it in the list of existing images:

REPOSITORY TAG    IMAGE ID     CREATED        SIZE
test-curl  latest 5ebb2a65d771 37 minutes ago 180 MB
nginx      latest 6b914bbcb89e 7 days ago     182 MB
ubuntu     latest 0ef2e08ed3fa 8 days ago     130 MB

We can create and run a container from the image. Let’s try it with the default parameters:

docker run --rm -v $(pwd)/vol:/data/:rw test-curl

To see the results saved to file, run:

cat ./vol/results

Let’s try it with facebook.com:

docker run --rm -e SITE_URL=https://facebook.com/ -v $(pwd)/vol:/data/:rw test-curl

To see the results saved to file, run:

cat ./vol/results

- Include only the necessary context – use a .dockerignore file (like .gitignore in git).
- Avoid installing unnecessary packages – they will consume extra disk space.
- Use the cache. Add context that changes a lot (for example, the source code of your project) at the end of the Dockerfile – it will utilize the Docker cache effectively.
- Be careful with volumes. You should remember what data is in volumes. Because volumes are persistent and don’t die with the containers, the next container will use data from the volume created by the previous container.
- Use environment variables (in RUN, EXPOSE, VOLUME). It will make your Dockerfile more flexible.
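The curl image is a good illustration of that last point: its behaviour is driven entirely by the SITE_URL environment variable. A minimal Python sketch of what its CMD does (curl -Lk $SITE_URL > /data/results), with the fetch step injectable so that neither a network nor Docker is needed to follow along; the function and its parameters are hypothetical, not part of the tutorial's code:

```python
import os

def run(fetch, environ=None, out_path="results"):
    # Mirror the container's CMD: read SITE_URL (with the Dockerfile's
    # default), fetch the page, and write the body to the results file.
    env = os.environ if environ is None else environ
    url = env.get("SITE_URL", "http://example.com/")
    body = fetch(url)
    with open(out_path, "w") as f:
        f.write(body)
    return url

# A stand-in fetcher instead of a real HTTP request:
url = run(lambda u: f"<fetched {u}>", environ={}, out_path="results")
print(url)  # http://example.com/ (the default, as in the first docker run above)
```

Passing environ={"SITE_URL": "https://facebook.com/"} changes the target exactly the way `docker run -e SITE_URL=...` does for the container.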
A lot of Docker images (versions of images) are created on top of Alpine Linux – this is a lightweight distro that allows you to reduce the overall size of Docker images. I recommend that you use images based on Alpine for third-party services, such as Redis, Postgres, etc. For your app images, use images based on buildpack-deps – it will be easy to debug inside the container, and you'll have a lot of pre-installed system-wide requirements. Only you can decide which base image to use, but you can get the maximum benefit by using one basic image for all images, because in this case the cache will be used more effectively.

Install docker-compose:

sudo pip install docker-compose

In this example, I am going to connect Python and Redis containers.

version: '3.6'
services:
  app:
    build:
      context: ./app
    depends_on:
      - redis
    environment:
      - REDIS_HOST=redis
    ports:
      - "5000:5000"
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis_data:/data
volumes:
  redis_data:

Go to examples/compose and execute the following command:

docker-compose up

Building app
Step 1/9 : FROM python:3.6.3
3.6.3: Pulling from library/python
f49cf87b52c1: Pull complete
7b491c575b06: Pull complete
b313b08bab3b: Pull complete
51d6678c3f0e: Pull complete
09f35bd58db2: Pull complete
1bda3d37eead: Pull complete
9f47966d4de2: Pull complete
9fd775bfe531: Pull complete
Digest: sha256:cdef88d8625cf50ca705b7abfe99e8eb33b889652a9389b017eb46a6d2f1aaf3
Status: Downloaded newer image for python:3.6.3
 ---> a8f7167de312
Step 2/9 : ENV BIND_PORT 5000
 ---> Running in 3b6fe5ca226d
Removing intermediate container 3b6fe5ca226d
 ---> 0b84340fa920
Step 3/9 : ENV REDIS_HOST localhost
 ---> Running in a4f9a1d6f541
Removing intermediate container a4f9a1d6f541
 ---> ebe63bf5959e
Step 4/9 : ENV REDIS_PORT 6379
 ---> Running in fd06aa65fd33
Removing intermediate container fd06aa65fd33
 ---> 2a581c31ff4f
Step 5/9 : COPY ./requirements.txt /requirements.txt
 ---> 671093a12829
Step 6/9 : RUN pip install -r /requirements.txt
 ---> Running in b8ea53bc6ba6
Collecting flask==1.0.2 (from -r
/requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB) Collecting redis==2.10.6 (from -r /requirements.txt (line 2)) Downloading https://files.pythonhosted.org/packages/3b/f6/7a76333cf0b9251ecf49efff635015171843d9b977e4ffcf59f9c4428052/redis-2.10.6-py2.py3-none-any.whl (64kB) Collecting click>=5.1 (from flask==1.0.2->-r /requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB) Collecting Jinja2>=2.10 (from flask==1.0.2->-r /requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB) Collecting itsdangerous>=0.24 (from flask==1.0.2->-r /requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz (46kB) Collecting Werkzeug>=0.14 (from flask==1.0.2->-r /requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB) Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->flask==1.0.2->-r /requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz Building wheels for collected packages: itsdangerous, MarkupSafe Running setup.py bdist_wheel for itsdangerous: started Running setup.py bdist_wheel for itsdangerous: finished with status 'done' Stored in directory: /root/.cache/pip/wheels/2c/4a/61/5599631c1554768c6290b08c02c72d7317910374ca602ff1e5 Running setup.py bdist_wheel for MarkupSafe: started Running setup.py bdist_wheel for MarkupSafe: finished with status 
'done' Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c612fffe1c5a632146b16596f9e64676768661e4e46 Successfully built itsdangerous MarkupSafe Installing collected packages: click, MarkupSafe, Jinja2, itsdangerous, Werkzeug, flask, redis Successfully installed Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 click-6.7 flask-1.0.2 itsdangerous-0.24 redis-2.10.6 You are using pip version 9.0.1, however version 10.0.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Removing intermediate container b8ea53bc6ba6 ---> 3117d3927951 Step 7/9 : COPY ./app.py /app.py ---> 84a82fa91773 Step 8/9 : EXPOSE $BIND_PORT ---> Running in 8e259617b7b5 Removing intermediate container 8e259617b7b5 ---> 55f447f498dd Step 9/9 : CMD [ "python", "/app.py" ] ---> Running in 2ade293ecb25 Removing intermediate container 2ade293ecb25 ---> b85b4246e9f8 Successfully built b85b4246e9f8 Successfully tagged compose_app:latest WARNING: Image for service app was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`. Creating compose_redis_1 ... done Creating compose_app_1 ... done Attaching to compose_redis_1, compose_app_1 redis_1 | 1:C 08 Jul 18:12:21.851 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf redis_1 | _._ redis_1 | _.-``__ ''-._ redis_1 | _.-`` `. `_. ''-._ Redis 3.2.12 (00000000/0) 64 bit redis_1 | .-`` .-```. 
```\/ _.,_ ''-._ redis_1 | ( ' , .-` | `, ) Running in standalone mode redis_1 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379 redis_1 | | `-._ `._ / _.-' | PID: 1 redis_1 | `-._ `-._ `-./ _.-' _.-' redis_1 | |`-._`-._ `-.__.-' _.-'_.-'| redis_1 | | `-._`-._ _.-'_.-' | http://redis.io redis_1 | `-._ `-._`-.__.-'_.-' _.-' redis_1 | |`-._`-._ `-.__.-' _.-'_.-'| redis_1 | | `-._`-._ _.-'_.-' | redis_1 | `-._ `-._`-.__.-'_.-' _.-' redis_1 | `-._ `-.__.-' _.-' redis_1 | `-._ _.-' redis_1 | `-.__.-' redis_1 | redis_1 | 1:M 08 Jul 18:12:21.852 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. redis_1 | 1:M 08 Jul 18:12:21.852 # Server started, Redis version 3.2.12 redis_1 | 1:M 08 Jul 18:12:21.852 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. redis_1 | 1:M 08 Jul 18:12:21.852 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled. redis_1 | 1:M 08 Jul 18:12:21.852 * The server is now ready to accept connections on port 6379 app_1 | * Serving Flask app "app" (lazy loading) app_1 | * Environment: production app_1 | WARNING: Do not use the development server in a production environment. app_1 | Use a production WSGI server instead. app_1 | * Debug mode: on app_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit) app_1 | * Restarting with stat app_1 | * Debugger is active! app_1 | * Debugger PIN: 170-528-240 The current example will increment view counter in Redis. 
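The tutorial doesn't reproduce the app.py behind this counter. A minimal sketch of what such view-counter logic might look like, with the storage backend injected so the logic can be exercised without a running Redis server (the class and function names here are hypothetical; a real app would use redis.StrictRedis and a Flask route instead):

```python
class FakeRedis:
    """Stand-in for a Redis client, implementing only incr()."""
    def __init__(self):
        self._data = {}

    def incr(self, key):
        # Redis INCR: create the key at 0 if absent, add 1, return the new value.
        self._data[key] = self._data.get(key, 0) + 1
        return self._data[key]

def handle_request(store, key="hits"):
    """What the web handler does on each page view: increment and report."""
    count = store.incr(key)
    return f"This page has been viewed {count} time(s)."

store = FakeRedis()
print(handle_request(store))  # first view
print(handle_request(store))  # second view
```

In the compose setup, the only change is that the store is a real Redis client pointed at REDIS_HOST=redis – the counter then survives app-container restarts because the data lives in the redis service's volume.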
Open the following url in your web browser and check it. How to use docker-compose is a topic for a separate tutorial. To get started, you can play with some images from Docker Hub. If you want to create your own images, follow the best practices listed above. The only thing I can add in terms of using docker-compose is that you should always give explicit names to your volumes in docker-compose.yml (if the image has volumes). This simple rule will save you from an issue in the future when you inspect your volumes.

version: '3.6'
services:
  ...
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis_data:/data
volumes:
  redis_data:

In this case, redis_data will be the name inside the docker-compose.yml file; the real volume name will be prefixed with the project name. To see volumes, run:

docker volume ls

DRIVER VOLUME NAME
local  apptest_redis_data

Without an explicit volume name, there will be a UUID. Here’s an example from my local machine:

DRIVER VOLUME NAME
local  ec1a5ac0a2106963c2129151b27cb032ea5bb7c4bd6fe94d9dd22d3e72b2a41b
local  f3a664ce353ba24dd43d8f104871594de6024ed847054422bbdd362c5033fc4c
local  f81a397776458e62022610f38a1bfe50dd388628e2badc3d3a2553bb08a5467f
local  f84228acbf9c5c06da7be2197db37f2e3da34b7e8277942b10900f77f78c9e64
local  f9958475a011982b4dc8d8d8209899474ea4ec2c27f68d1a430c94bcc1eb0227
local  ff14e0e20d70aa57e62db0b813db08577703ff1405b2a90ec88f48eb4cdc7c19
local  polls_pg_data
local  polls_public_files
local  polls_redis_data
local  projectdev_pg_data
local  projectdev_redis_data

Docker has some restrictions and requirements, depending on the architecture of your system (the applications that you pack into containers). You can ignore these requirements or find some workarounds, but in this case, you won't get all the benefits of using Docker. My strong advice is to follow these recommendations:
- 1 application = 1 container.
- Run the process in the foreground (don't use systemd, upstart or any other similar tools).
- Keep data out of containers – use volumes.
- Do not use SSH (if you need to step into a container, you can use the docker exec command).
- Avoid manual configurations (or actions) inside the container.

To summarize this tutorial: alongside the IDE and Git, Docker has become a must-have developer tool. It's a production-ready tool with a rich and mature infrastructure. Docker can be used on all types of projects, regardless of size and complexity. In the beginning, you can start with Compose and Swarm. When the project grows, you can migrate to cloud services like Amazon Container Services or Kubernetes. Like the standard containers used in cargo transportation, wrapping your code in Docker containers will help you build faster and more efficient CI/CD processes. This is not just another technological trend promoted by a bunch of geeks – it's a new paradigm that is already being used in the architecture of large companies like PayPal, Visa, Swisscom, General Electric, Splink, etc.
What is HTML? HTML is the standard markup language for making Web pages.
- HTML stands for Hyper Text Markup Language
- It describes the structure of Web pages using markup
- HTML elements are the building blocks of HTML pages
- These elements are represented by tags
- HTML tags label pieces of content such as “heading”, “paragraph”, “table”, and so on
- Browsers do not display the HTML tags but use them to render the content of the page

A Simple HTML Document

<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>My First Heading</h1>
<p>My first paragraph.</p>
</body>
</html>

- The <!DOCTYPE html> declaration defines this document to be HTML5
- The <html> element is the root element of an HTML page
- The <head> element contains meta information about the document
- The <title> element specifies a title for the document
- The <body> element contains the visible page content
- The <h1> element defines a large heading
- The <p> element defines a paragraph

HTML tags are element names surrounded by angle brackets: <tagname>content goes here…</tagname>
- HTML tags normally come in pairs like <p> and </p>
- The first tag in a pair is the start tag; the second tag is the end tag
- The end tag is written like the start tag, but with a forward slash inserted before the tag name

Tip: The start tag is also called the opening tag, and the end tag the closing tag.

The purpose of a web browser (Chrome, IE, Firefox, Safari) is to read HTML documents and display them. The browser never displays the HTML tags, but uses them only to determine how to display the document in the browser:

HTML Page Structure

The visualization of an HTML page structure is given below: Note: Only the content inside the <body> section is displayed in a browser.

The <!DOCTYPE> Declaration

The <!DOCTYPE> declaration shows the type of the document and helps browsers display web pages correctly. It must appear only once, at the top of the page (before any HTML tags).
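The point that browsers render the content, not the tags, can be seen with Python's standard-library HTML parser, which separates markup from text as it reads a document (this is an illustration, not how the page suggests writing HTML):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text content, discarding the tags themselves –
    loosely mirroring how a browser displays content rather than markup."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Called for the text between tags; tags go to other callbacks.
        if data.strip():
            self.parts.append(data.strip())

doc = "<h1>My First Heading</h1><p>My first paragraph.</p>"
p = TextExtractor()
p.feed(doc)
print(p.parts)  # ['My First Heading', 'My first paragraph.']
```

The `<h1>` and `<p>` tags never appear in the output – they only told the parser where each piece of content begins and ends.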
The <!DOCTYPE> declaration is not case sensitive. The <!DOCTYPE> declaration for HTML5 is:

<!DOCTYPE html>

Since the early days of the web, there have been many versions of HTML:
Cell reprogramming calls The Curious Case of Benjamin Button to mind. It's a new technology that uses molecular therapy to coax adult cells to revert to an embryonic stem cell-like state, allowing scientists to later re-differentiate these cells into specific types with the potential to treat heart attacks or diseases such as Parkinson's. But at this point in the technology's development, only one percent of cells are successfully reprogrammed. Now, for the first time, scientists at Tel Aviv University, in collaboration with researchers at Harvard University, have succeeded in tracking the progression of these cells through live imaging to learn more about how they are reprogrammed, and how the new cells evolve over time. Dr. Iftach Nachman of TAU's Department of Biochemistry says that this represents a huge stride forward. It will not only allow researchers to develop techniques and choose the right cells for replacement therapy, increasing the efficiency of cell reprogramming, but will give invaluable insight into how these cells will eventually react in the human body. Results from the research project were recently published in the journal Nature Biotechnology.

Looking at your cell's family tree

Dr. Nachman and his fellow researchers used fluorescent markers to develop their live imaging approach. During the reprogramming process, the team was able to visually track whole lineages of a cell population from their single-cell point of origin. Cell lineage proved to be crucial for predicting how the cells would behave and whether or not they could be reprogrammed successfully, says Dr. Nachman. "By combining quantitative analysis of the data, we were able to see that these 'decisions' are made very early on. We analyzed the cells over time, and we were able to detect subtle changes that occur as early as the first or second day in a long, two-week process."
This is the first time that scientists have looked within a cell population to determine why some cells successfully reprogram while most fail or die along the way. The state in which cells enter the reprogramming process is thought to have an impact on the outcome.

Nature in reverse

Scientists are only beginning to understand this enigmatic process, which reverses nature by causing cells to regress back to their embryonic stage. Increased knowledge of how reprogramming cells proliferate will allow scientists to develop better real-life therapies and reduce risk to patients, says Dr. Nachman. While embryonic stem cells culled from live embryos can be manipulated to become new "replacement" tissues such as nerve or heart cells, these reprogrammed stem cells from adults represent a safer and ethically more responsible approach, some scientists believe. The next step for Dr. Nachman and his team is research into specific cell-type characteristics before adult cells even enter the reprogramming process. They will try to discover the molecular markers that differentiate between cells that successfully reprogram and those that do not. Several projects in their lab are now attempting to track different cell types and how they change under live imaging.

George Hunka | EurekAlert!

Scientists uncover the role of a protein in production & survival of myelin-forming cells 19.07.2018 | Advanced Science Research Center, GC/CUNY

NYSCF researchers develop novel bioengineering technique for personalized bone grafts 18.07.2018 | New York Stem Cell Foundation

A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos.
A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 
Washington: NASA’s John H Glenn Research Centre is going to light a “large scale fire” in space as part of an experiment that seeks to understand how fire spreads in a micro-gravity environment. Called the Spacecraft Fire Experiment (Saffire), the experiment on board the next Orbital ATK Cygnus cargo mission will begin after the unmanned resupply vehicle undocks from the International Space Station (ISS), having dropped off key science supplies. “Saffire I, II, and III will launch separately in 2016 aboard resupply missions to the ISS. But they will not be unloaded, and after the Orbital ATK Cygnus pulls far away from the space station, the experiments will begin,” NASA Glenn said in a YouTube video. The fire will take place in a box full of “cotton-fiberglass composite”, and the data generated from the experiment will be beamed back to Earth before Cygnus starts re-entry. Instruments on the returning Cygnus will measure flame growth, oxygen use and more. NASA scientists know that flames can be erratic in space, but they don’t fully understand their properties and mechanics, Tech Insider reported. Come March 22, NASA’s commercial partner Orbital ATK will launch its Cygnus spacecraft into orbit atop a United Launch Alliance Atlas V rocket for its fifth contracted resupply mission to the International Space Station. The flight, known as Orbital ATK CRS-6, will deliver investigations to the space station to study fire, meteors, regolith, adhesion, and 3D printing in microgravity. Results could determine microgravity flammability limits for several spacecraft materials, help to validate NASA’s material selection criteria, and help scientists understand how microgravity and limited oxygen affect flame size. A less heated investigation called “Meteor Composition Determination” will enable the first space-based observations of meteors entering Earth’s atmosphere from space. From grounded to gripping, another investigation launching takes its inspiration from small lizards.
The “Gecko Gripper” investigation tests a gecko-adhesive gripping device that can stick on command in the harsh environment of space.
Population and Energetic Aspects of the Relationship Between Black-browed and Grey-headed Albatrosses and the Southern Ocean Marine Environment

The methods and results of a study of the tropho-dynamic relationships between two Diomedea albatrosses and the marine environment at South Georgia are described. They illustrate the technical and theoretical developments necessary to obtain certain empirical data essential for accurate assessments of the role of seabirds in marine ecosystems. Differences in breeding success during eight years (consistent in D. chrysostoma, more variable in D. melanophris) are linked with important differences in breeding frequencies, which affect the size and activities of populations at the breeding sites. Extensive dietary studies, based on sampling adults about to feed chicks, showed major inter-specific differences, resulting in chicks receiving meals of similar size and frequency but of different energy content. The frequency of chick feeding was determined initially by daily and 3-h weighing. Recently, automatic equipment has recorded weights every 10 min, giving the frequency and size of meals and the resulting digestive performance of the chicks. Experiments involving exchanging chicks between the two species were combined with new methods for analyzing growth curves. They showed that, while there was a species-specific genetic component to growth, the overall rate could be significantly modified by the nature of the diet. The slower growth rate of D. chrysostoma chicks, and the species' diet, are probably important factors affecting breeding frequency. Adult feeding performance is being studied by devices recording simple activity budgets at sea. Preliminary results are described, and projected work linking this with the automatic weighing equipment and with assessment of foraging energy costs is outlined.

Keywords: Energy Content; Breeding Population; Breeding Success; Meal Size; King Penguin
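The growth-curve analyses mentioned above can be illustrated with a minimal sketch: fitting a logistic curve to chick mass-at-age data. The data, parameter values, and three-parameter logistic form here are invented for illustration and are not taken from the study itself.

```python
# Hypothetical sketch of a growth-curve fit; synthetic data, not study data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, k, t0):
    """Logistic growth: A = asymptotic mass (g), k = growth rate, t0 = inflection day."""
    return A / (1.0 + np.exp(-k * (t - t0)))

# Synthetic daily weighings (chick age in days, mass in grams) with noise
rng = np.random.default_rng(0)
t = np.arange(0, 120, 1.0)
true_A, true_k, true_t0 = 3500.0, 0.08, 45.0
mass = logistic(t, true_A, true_k, true_t0) + rng.normal(0, 50, t.size)

# Least-squares fit recovers the growth parameters from the weighings
(A, k, t0), _ = curve_fit(logistic, t, mass, p0=[3000.0, 0.1, 40.0])
print(f"asymptotic mass ~ {A:.0f} g, inflection at day {t0:.1f}")
```

Comparing fitted asymptotes and rates between species, or between cross-fostered chicks, is one way a genetic growth component can be separated from a dietary one.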
posted by hannah

Identify the functional group and indicate polar bonds by assigning partial positive and negative charges.

1) The structure CH3OCH3 is drawn out. I know that this is an ether because the O is bonded to two carbons, but I am not sure how to assign the charges. I think that the oxygen would be negative and the two C's would be positive, but I am not sure. Would you agree with my answer?

Answer: This looks much like the H-O-H (water) molecule. As you know, both H atoms in water are partially positive and the O is partially negative. The same holds for ethers: O is partially negative, and the two C atoms bonded to it are partially positive.
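The reasoning in the answer comes down to comparing electronegativities: the more electronegative atom in a bond carries the partial negative charge. A minimal sketch of that rule, using standard Pauling electronegativity values (the 0.4 cutoff for calling a bond "polar" is a common textbook convention, not a hard rule):

```python
# Sketch of bond-polarity assignment from Pauling electronegativities.
# Values are the standard Pauling scale; 0.4 is a common textbook cutoff.
PAULING = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44}

def bond_polarity(a, b, threshold=0.4):
    """Return (delta-minus atom, delta-plus atom) if the bond is polar, else None."""
    diff = PAULING[a] - PAULING[b]
    if abs(diff) < threshold:
        return None  # essentially nonpolar (e.g. C-H)
    return (a, b) if diff > 0 else (b, a)

# The C-O bonds in dimethyl ether, CH3-O-CH3:
print(bond_polarity("O", "C"))  # O is delta-minus, C is delta-plus
print(bond_polarity("C", "H"))  # difference 0.35 -> treated as nonpolar
```

This matches the answer above: in CH3OCH3 the oxygen carries the partial negative charge and the two carbons the partial positives, while the C-H bonds are usually treated as nonpolar.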