Geneva: Research scientists announced on Monday they had identified the missing piece of a major puzzle involving the make-up of the universe by observing a neutrino particle change from one type to another.

The CERN physics research center near Geneva, relaying the announcement from the Gran Sasso laboratory in central Italy, said the breakthrough was a major boost for its own LHC particle collider programme to unveil key secrets of the cosmos.

According to physicists at Gran Sasso, after three years of monitoring many billions of muon neutrinos beamed to them through the earth from CERN 730 km (454 miles) away, they had spotted one that had turned into a tau neutrino. Behind that scientific terminology lies the long-sought proof that the three varieties of neutrinos -- sub-atomic particles that with others form the universe's basic elements -- can switch appearance, like the chameleon lizard.

The discovery is important, scientists say, because it helps explain why neutrinos arrive at earth from the sun in apparently far smaller numbers than they should under the Standard Model of physics that has held sway for some 80 years. The fact that neutrinos are now proven to switch identities -- as posited by two Moscow scientists in the late 1960s based on earlier work by a U.S. physicist -- suggests that other types of neutrinos could exist but slip detection.

LIGHT ON DARK MATTER

In its turn, specialists say, this could help shed light on the nature of the dark matter that makes up about a quarter of the universe, alongside the roughly 5 percent that is observable and the remaining 70 percent of invisible "dark energy."

"This is really exciting because it shows that there are things beyond the Standard Model," said James Gillies, spokesman for CERN -- the European Organization for Nuclear Research on the border between Switzerland and France.

The search for concrete evidence of dark matter and of what it might be is part of the work of CERN's LHC, or Large Hadron Collider, the world's biggest scientific machine, which began operating near full force at the end of March. But the beaming of muon neutrinos to the Italian center is not part of the LHC experiment. The beam is directed south under the Alps from another, smaller CERN particle accelerator.

CERN quoted Lucia Votano, director of the Gran Sasso laboratories near the town of L'Aquila 112 km south of Rome that was hit by a devastating earthquake in April last year, as saying that the work had achieved its first goal. Scientists there were confident that the detection of a tau neutrino in the center's OPERA experiment would be followed by others showing that neutrinos can change, she said.

Work on the behavior of neutrinos has already brought Nobel prizes. The late U.S. scientist Ray Davis first recorded in the 1960s that fewer neutrinos were coming from the sun than the then-current theories of the universe predicted. He shared the prize in 2002, at the age of 87 and four years before his death, with fellow U.S. researcher Riccardo Giacconi and Japanese physicist Masatoshi Koshiba for their contributions to astrophysics.
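For readers who want a feel for the scale of the effect, below is a minimal sketch of the standard two-flavor oscillation formula. The baseline is the CERN-to-Gran-Sasso distance from the article, but the beam energy and mass splitting are illustrative assumptions, not figures from the article.

```python
import math

def p_mu_to_tau(L_km, E_GeV, sin2_2theta=1.0, dm2_eV2=2.4e-3):
    """Two-flavor appearance probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# 730 km baseline from the article; ~17 GeV mean beam energy is an assumed value.
print(f"P(nu_mu -> nu_tau) ~ {p_mu_to_tau(730.0, 17.0):.3f}")  # ~0.017, a rare event
```

The small probability is consistent with the article's account: only a single tau neutrino candidate after three years of beam monitoring.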
A little bit of algebra explains this 'magic':

- Ask a friend to pick 3 consecutive numbers and to tell you a multiple of 3. Then ask them to add the four numbers and multiply by 67, and to tell you. . . .
- Pick the number of times a week that you eat chocolate. This number must be more than one but less than ten. Multiply this number by 2. Add 5 (for Sunday). Multiply by 50... Can you explain why it. . . .
- There are four children in a family, two girls, Kate and Sally, and two boys, Tom and Ben. How old are the children?
- Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice?
- In how many ways can you arrange three dice side by side on a surface so that the sum of the numbers on each of the four faces (top, bottom, front and back) is equal?
- You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by. . . .
- The Tower of Hanoi is an ancient mathematical challenge. Working on the building blocks may help you to explain the patterns you notice.
- Liam's house has a staircase with 12 steps. He can go down the steps one at a time or two at a time. In how many different ways can Liam go down the 12 steps? (A short counting sketch follows this list.)
- When number pyramids have a sequence on the bottom layer, some interesting patterns emerge...
- Use the numbers in the box below to make the base of a top-heavy pyramid whose top number is 200.
- Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him?
- Janine noticed, while studying some cube numbers, that if you take three consecutive whole numbers and multiply them together and then add the middle number of the three, you get the middle number. . . .
- A, B & C own a half, a third and a sixth of a coin collection. Each grab some coins, return some, then share equally what they had put back, finishing with their own share. How rich are they?
- Take any two numbers between 0 and 1. Prove that the sum of the numbers is always less than one plus their product.
- The sum of the squares of three related numbers is also a perfect square - can you explain why?
- Spotting patterns can be an important first step - explaining why it is appropriate to generalise is the next step, and often the most interesting and important.
- Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
- Arrange the numbers 1 to 16 into a 4 by 4 array. Choose a number. Cross out the numbers on the same row and column. Repeat this process. Add up your four numbers. Why do they always add up to 34?
- Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
- A serious but easily readable discussion of proof in mathematics with some amusing stories and some interesting examples.
- This is an interactivity in which you have to sort the steps in the completion of the square into the correct order to prove the formula for the solutions of quadratic equations.
- ABC is an equilateral triangle and P is a point in the interior of the triangle. We know that AP = 3cm and BP = 4cm. Prove that CP must be less than 10 cm.
- The final of five articles, which contains the proof of why the sequence introduced in article IV either reaches the fixed point 0 or enters a repeating cycle of four values.
- Show that among the interior angles of a convex polygon there cannot be more than three acute angles.
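As a taster for the staircase problem above, here is a minimal counting sketch. The recurrence is the standard one (each descent ends with either a single step or a double step); the function name is just illustrative.

```python
def ways_down(n):
    """Count descents of an n-step staircase taking 1 or 2 steps at a time.
    ways(n) = ways(n-1) + ways(n-2): the last move is a 1-step or a 2-step."""
    a, b = 1, 1  # ways for 0 steps and for 1 step
    for _ in range(n - 1):
        a, b = b, a + b
    return b

print(ways_down(12))  # 233 -- the Fibonacci pattern in disguise
```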
- Write down a three-digit number. Change the order of the digits to get a different number. Find the difference between the two three-digit numbers. Follow the rest of the instructions then try. . . .
- Is the mean of the squares of two numbers greater than, or less than, the square of their mean?
- Which set of numbers that add to 10 have the largest product?
- Here are three 'tricks' to amaze your friends. But the really clever trick is explaining to them why these 'tricks' are maths not magic. Like all good magicians, you should practice by trying. . . .
- Euler discussed whether or not it was possible to stroll around Königsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
- Replace each letter with a digit to make this addition correct.
- There are 12 identical-looking coins, one of which is a fake. The counterfeit coin is of a different weight to the rest. What is the minimum number of weighings needed to locate the fake coin?
- A huge wheel is rolling past your window. What do you see?
- When is it impossible to make number sandwiches?
- Can you convince me of each of the following: if a square number is multiplied by a square number the product is ALWAYS a square number...
- The diagram shows a regular pentagon with sides of unit length. Find all the angles in the diagram.
- Prove that the quadrilateral shown in red is a rhombus.
- Four jewellers share their stock. Can you work out the relative values of their gems?
- Kyle and his teacher disagree about his test score - who is right?
- Semicircles are drawn on the sides of a rectangle. Prove that the sum of the areas of the four crescents is equal to the area of the rectangle.
- Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem?
- Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas.
- Show that if you add 1 to the product of four consecutive numbers the answer is ALWAYS a perfect square. (A short algebraic check follows this list.)
- Imagine two identical cylindrical pipes meeting at right angles and think about the shape of the space which belongs to both pipes. Early Chinese mathematicians called this shape the mouhefanggai.
- L-triominoes can fit together to make larger versions of themselves. Is every size possible to make in this way?
- If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable.
- In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence, we will always end up with some set of numbers being repeated over and over again.
- You have twelve weights, one of which is different from the rest. Using just 3 weighings, can you identify which weight is the odd one out, and whether it is heavier or lighter than the rest?
- Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number?
- Can you use the diagram to prove the AM-GM inequality?
- You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance?
- Four identical right-angled triangles are drawn on the sides of a square. Two face out, two face in. Why do the four vertices marked with dots lie on one line?
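For the "product of four consecutive numbers plus one" puzzle above, the identity n(n+1)(n+2)(n+3) + 1 = (n² + 3n + 1)² does all the work: writing m = n² + 3n + 1, the product n(n+3)·(n+1)(n+2) equals (m - 1)(m + 1) = m² - 1. A quick numerical check:

```python
def square_root_witness(n):
    """n(n+1)(n+2)(n+3) + 1 equals (n^2 + 3n + 1)^2."""
    return n * n + 3 * n + 1

for n in range(1, 8):
    product_plus_one = n * (n + 1) * (n + 2) * (n + 3) + 1
    assert product_plus_one == square_root_witness(n) ** 2
    print(n, product_plus_one, square_root_witness(n))
```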
Editor rating: 7 / 10 - Graciela Piñeiro: This is an important contribution to the study of dinosaur reproductive behavior, and offers a detailed study of the evidence to suggest the presence of a cuticle-like layer in oviraptorid and alvarezsaurid dinosaur eggs.

Editor rating: 7 / 10 - Hans-Dieter Sues: Important study on troodontid cranial structure.

Editor rating: 7 / 10 - Mark Young: Gorgonopsians are an understudied clade, which this paper helps to begin to rectify.

Editor rating: 8 / 10 - Andrew Farke: Detailed description of an important taxon for sauropod workers.

Editor rating: 8 / 10 - Philip Cox: This is a thorough study on an extensive dataset that will provide a model for other such macroevolutionary investigations.

Editor rating: 8 / 10 - Andrew Farke: This study presents data applicable for many paleontologists trying to establish life history information for snakes extant and extinct.

Editor rating: 7 / 10 - Kenneth De Baets: The oldest well-preserved representatives of chalcid wasps, which are relevant for entomologists, evolutionary biologists and paleontologists. The authors run a phylogenetic analysis to place them among their extant relatives.

Section Editor rating: 8 / 10 - Andrew Farke: Excellent imagery and descriptions of an important taxon.

Editor rating: 7 / 10 - Laura Wilson: Evaluates the impact of autapomorphies on the behaviour of tip-dating methods. The results are relevant for all future studies that employ tip-dating analyses.

Editor rating: 9 / 10 - Luis Eguiarte: Gunnera is a genus of plants that has fascinated scientists for a long time, in particular for the very large leaves of some species and because of its symbiotic relationship with the nitrogen-fixing cyanobacterium Nostoc. For many years, some botanists suspected it to be a very old, primitive genus, perhaps basal in the phylogeny of the angiosperms. While later molecular phylogenies did not support this position, this paper shows that Gunnera is indeed an old genus, with a complex evolutionary and phylogeographic history and a recent radiation in the Andes. All these new results are relevant for understanding why the Neotropics have more plant species than any other similar region on the planet.
Authors: Gerges Francis Tawdrous

Why does the Earth's rotation follow a 4-year cycle (1461 days)? Because there is an interaction and coherence between the motions of the Earth, the Moon and Mars, and this interaction causes the 4-year cycle. This paper tries to prove this claim and to explain its mechanism. We also discuss the definitions of "distance" and "time", explaining how to distinguish between matter and space.

Comments: 38 Pages. [v1] 2018-03-22 07:32:36. Unique-IP document downloads: 26 times.

Note: vixra.org is a pre-print repository rather than a journal. Articles hosted there may not yet have been verified by peer review and should be treated as preliminary.
Content © Andrew Bone. All rights reserved. Created: September 6, 2015. Last updated: April 17, 2016.

The subject 'Universe' on ScienceLibrary.info covers astronomy, cosmology, and space exploration. Learn Science with ScienceLibrary.info.

Isaac Newton (1642 - 1727) is possibly the most influential scientist of all time. In the second half of the 17th century, he produced a breathtaking number of physical and mathematical laws and methods, explaining forces and physical phenomena and deriving mathematical explanations still in use today.

There are thousands of oil, gas and coal producers in the world. But the decision makers, the CEOs, or the ministers of coal and oil if you narrow it down to just one person, could all fit on a Greyhound bus or two.
Assignment of α-Helices in Multiply Aligned Protein Sequences — Applications to DNA Binding Motifs

The first solved protein structures were helical globular proteins, and soon after they became available for structural analysis it was noted that the buried surfaces of α-helices were composed of hydrophobic residues. Schiffer and Edmundson (1967) introduced the helical wheel representation, in which residues are positioned at 100° intervals around a circle (i.e. 3.6 residues per turn), and suggested its use as a predictive tool for helices. Given the variability of helical length and degree of burial in the tertiary structure, the use of this tool alone proved too simplistic, and the presence or absence of helices could not be predicted with a high degree of certainty. Nevertheless, the frequent use of helical wheels to plot segments of sequence with good helical amphipathicity testifies to the continuing utility of this simple representation.

Keywords: zinc finger, hydrophobic residue, leucine zipper, secondary structure prediction, zinc finger motif
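As an illustration of the helical wheel idea described above (not code from the chapter), the sketch below assigns each residue its wheel angle at 100° per residue and labels it with a coarse hydrophobic/polar split. The example sequence is a hypothetical amphipathic stretch, and the two-class residue table is a deliberate simplification.

```python
# Minimal helical wheel sketch: residue i sits at (100 * i) mod 360 degrees.
HYDROPHOBIC = set("AILMFVWC")  # coarse two-class split, for illustration only

def wheel(seq):
    for i, aa in enumerate(seq):
        angle = (100 * i) % 360
        face = "hydrophobic" if aa in HYDROPHOBIC else "polar"
        print(f"{i:2d}  {aa}  {angle:3d} deg  {face}")

# Hypothetical amphipathic stretch; in a genuinely amphipathic helix the
# hydrophobic residues cluster within roughly one half of the wheel.
wheel("LKALEEKLKALEEK")
```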
Abrupt Change in the Atlantic Meridional Overturning Circulation

The Atlantic Meridional Overturning Circulation (AMOC) is an important component of the Earth's climate system, characterized by a northward flow of warm, salty water in the upper layers of the Atlantic, a transformation of water mass properties at higher northern latitudes of the Atlantic in the Nordic and Labrador Seas that induces sinking of surface waters to form deep water, and a southward flow of colder water in the deep Atlantic (Fig. 1.6). There is also an interhemispheric transport of heat associated with this circulation, with heat transported from the Southern Hemisphere to the Northern Hemisphere. This ocean current system thus transports a substantial amount of heat from the Tropics and Southern Hemisphere toward the North Atlantic, where the heat is released to the atmosphere (Fig. 1.7).

Changes in the AMOC have a profound impact on many aspects of the global climate system. There is growing evidence that fluctuations in Atlantic sea surface temperatures, hypothesized to be related to fluctuations in the AMOC, have played a prominent role in significant climate fluctuations around the globe on a variety of time scales. Evidence from the instrumental record (based on the last ~130 years) shows pronounced, multidecadal swings in large-scale Atlantic temperature that may be at least partly a consequence of fluctuations in the AMOC. Recent modeling and observational analyses have shown that these multidecadal shifts in Atlantic temperature exert a substantial influence on the climate system, ranging from modulating African and Indian monsoonal rainfall to tropical Atlantic atmospheric circulation conditions of relevance for hurricanes. Atlantic SSTs also influence summer climate conditions over North America and Western Europe.

Evidence from paleorecords suggests that there have been large, decadal-scale changes in the AMOC, particularly during glacial times. These abrupt change events have had a profound impact on climate, both locally in the Atlantic and in remote locations around the globe (Fig. 1.1). Research suggests that these abrupt events were related to discharges of freshwater into the North Atlantic from surrounding land-based ice sheets. Subpolar North Atlantic air temperature changes of more than 10 °C on time scales of a decade or two have been attributed to these abrupt change events.

Uncertainties in Modeling the AMOC

As with any projection of future behavior of the climate system, our understanding of the AMOC in the 21st century and beyond relies on numerical models that simulate the important physical processes governing the overturning circulation. An important test of model skill is to conduct transient simulations of the AMOC in response to the addition of freshwater and compare with paleoclimatic data. Such a test requires accurate, quantitative reconstructions of the freshwater forcing, including its volume, duration, and location, plus the magnitude and duration of the resulting reduction in the AMOC. This information is not easy to obtain; coupled general circulation model (GCM) simulations of most events have been forced with idealized freshwater pulses and compared with qualitative reconstructions of the AMOC (e.g., Hewitt et al., 2006; Peltier et al., 2006; see also Stouffer et al., 2006).
There is somewhat more information about the freshwater pulse associated with an event 8,200 years ago, but important uncertainties remain (Clarke et al., 2004; Meissner and Clark, 2006). Thus, simulations of such paleoclimatic events provide important qualitative perspectives on the ability of models to simulate the response of the AMOC to forcing changes, but their ability to provide quantitative assessments is limited. Improvements in this area would be an important advance, but the difficulty in measuring even the current AMOC makes this task daunting.

Although numerical models show good skill in reproducing the main features of the AMOC, there are known errors that introduce uncertainty in model results. Some of these model errors, particularly in temperature and heat transport, are related to the representation of western boundary currents and deep-water overflow across the Greenland-Iceland-Scotland ridge. Increasing the resolution of current coupled ocean-atmosphere models to better address these errors will require an increase in computing power by an order of magnitude. Such higher resolution offers the potential of more realistic and robust treatment of key physical processes, including the representation of deep-water overflows. Efforts are being made to improve this model deficiency (Willebrand et al., 2001; Thorpe et al., 2004; Tang and Roberts, 2005). Nevertheless, recent work by Spence et al. (2008) using an Earth-system model of intermediate complexity (EMIC) found that the duration and maximum amplitude of their coupled model response to freshwater forcing showed little sensitivity to increasing resolution. They concluded that the coarse-resolution model response to boundary layer freshwater forcing remained robust at finer horizontal resolutions.

Future Changes in the AMOC

A particular focus on the AMOC in Chapter 4 of this report is to address the widespread notion, both in the scientific and popular literature, that a major weakening or even complete shutdown of the AMOC may occur in response to global warming. This discussion is driven in part by model results indicating that global warming tends to weaken the AMOC both by warming the upper ocean in the subpolar North Atlantic and through increasing the freshwater input (by more precipitation, more river runoff, and melting inland ice) into the Arctic and North Atlantic. Both processes reduce the density of the upper ocean in the North Atlantic, thereby stabilizing the water column and weakening the AMOC. It has been theorized that these processes could cause a weakening or shutdown of the AMOC that could significantly reduce the poleward transport of heat in the Atlantic, thereby possibly leading to regional cooling in the Atlantic and surrounding continental regions, particularly Western Europe. This mechanism can be inferred from paleodata and is reproduced at least qualitatively in the vast majority of climate models (Stouffer et al., 2006). One of the most misunderstood issues concerning the future of the AMOC under anthropogenic climate change, however, is its often-cited potential to cause the onset of the next ice age. As discussed by Berger and Loutre (2002) and Weaver and Hillaire-Marcel (2004), it is not possible for global warming to cause an ice age by this mechanism.
In the past, there was disagreement over which of the two processes governing upper-ocean density would dominate under increasing GHG concentrations, but a recent 11-model intercomparison project found that an AMOC reduction in response to increasing GHG concentrations was caused more by changes in surface heat flux than by changes in surface freshwater flux (Gregory et al., 2005). Nevertheless, different climate models show different sensitivities toward an imposed freshwater flux (Gregory et al., 2005). It is therefore not fully clear to what degree salinity changes will affect the total overturning rate of the AMOC. In addition, by today's knowledge, it is hard to assess how large future freshwater fluxes into the North Atlantic might be. This is due to uncertainties in modeling the hydrological cycle in the atmosphere, in modeling the sea-ice dynamics in the Arctic, and in estimating the melting rate of the Greenland ice sheet.

It is important to distinguish between an AMOC weakening and an AMOC collapse. Historically, coupled models that eventually lead to a collapse of the AMOC under global warming scenarios have fallen into two categories: (1) coupled atmosphere-ocean general circulation models (AOGCMs) that required ad hoc adjustments in heat or moisture fluxes to prevent them from drifting away from observations, and (2) intermediate-complexity models with longitudinally averaged ocean components. Current AOGCMs used in the IPCC AR4 assessment typically do not use flux adjustments and incorporate improved physics and resolution. When forced with plausible estimates of future changes in greenhouse gases and aerosols, these newer models project a gradual 25-30% weakening of the AMOC, but not an abrupt change or collapse. Although a transient collapse with climatic impacts on the global scale can always be triggered in models by a large enough freshwater input (e.g., Vellinga and Wood, 2007), the magnitude of the required freshwater forcing is not currently viewed as a plausible estimate of the future. In addition, many experiments have been conducted with idealized forcing changes, in which atmospheric CO2 concentration is increased at a rate of 1%/year to either two times or four times the preindustrial levels and held fixed thereafter. In virtually every simulation, the AMOC reduces but recovers to its initial strength when the radiative forcing is stabilized at two times or four times the preindustrial levels.

Perhaps more important for 21st-century climate change is the possibility of a rapid transition to seasonally ice-free Arctic conditions. In one climate model simulation, a transition from conditions similar to pre-2007 levels to a near-ice-free September extent occurred in a decade (Holland et al., 2006). Increasing ocean heat transport was implicated in this simulated rapid ice loss, which ultimately resulted from the interaction of large, intrinsic variability and anthropogenically forced change. It is notable that climate models are generally conservative in the modeled rate of Arctic ice loss as compared to observations (Stroeve et al., 2007; Figure 1-3), suggesting that future ice retreat could occur even more abruptly than simulated. This nonlinear response occurs because sea ice has a strong inherent threshold in that its existence depends on the freezing temperature of seawater. Additionally, strong positive feedbacks associated with sea ice act to accelerate its change.
The most notable of these is the positive surface albedo feedback, in which changes in ice cover and surface properties modify the surface reflection of solar radiation. For example, in a warming climate, reductions in ice cover expose the dark underlying ocean, allowing more solar radiation to be absorbed. This enhances the warming and leads to further ice melt. Because the AMOC interacts with the circulation of the Arctic Ocean at its northern boundary, future changes in the AMOC and its attendant heat transport thus have the potential to further influence the future of sea ice.

Our analysis indicates that it is very likely that the strength of the AMOC will decrease over the course of the 21st century. In models where the AMOC weakens, warming still occurs downstream over Europe due to the radiative forcing associated with increasing greenhouse gases. No model under plausible estimates of future forcing exhibits an abrupt collapse of the AMOC during the 21st century, even accounting for estimates of accelerated Greenland ice sheet melting. We conclude that it is very unlikely that the AMOC will abruptly weaken or collapse during the course of the 21st century. Based on available model simulations and sensitivity analyses, estimates of maximum Greenland ice sheet melting rates, and our understanding of the mechanisms of abrupt climate change from the paleoclimatic record, we further conclude that it is unlikely that the AMOC will collapse beyond the end of the 21st century as a consequence of global warming, although the possibility cannot be entirely excluded.

The above conclusions depend upon our understanding of the climate system and on the ability of current models to simulate the climate system. An abrupt collapse of the AMOC in the 21st century would require either a sensitivity of the AMOC to forcing that is far greater than current models suggest or a forcing that greatly exceeds even the most aggressive of current projections (such as extremely rapid melting of the Greenland ice sheet). While we view these as very unlikely, we cannot exclude either possibility. Further, even if a collapse of the AMOC is very unlikely, the large climatic impacts of such an event, coupled with the significant climate impacts that even decadal-scale AMOC fluctuations induce, argue for a strong research effort to develop the observations, understanding, and models required to predict more confidently the future evolution of the AMOC.

NOTE: The term "forcing" is used throughout this report to indicate any mechanism that causes the climate system to change, or respond. Examples of forcings discussed in this report include freshwater forcing of ocean circulation, and changes in sea-surface temperatures and radiative forcing as forcings of drought. As defined by the IPCC Third Assessment Report (Church et al., 2001), radiative forcing refers to a change in the net radiation at the top of the troposphere caused by a change in the solar radiation, the infrared radiation, or other changes that affect the radiation energy absorbed by the surface (e.g., changes in surface reflection properties), resulting in a radiation imbalance. A positive radiative forcing tends to warm the surface on average, whereas a negative radiative forcing tends to cool it. Changes in GHG concentrations represent a radiative forcing through their absorption and emission of infrared radiation.
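The bistability discussed above is often introduced through Stommel's classic two-box model, in which the overturning strength depends on the temperature and salinity contrast between a polar box and a tropical box. The sketch below is a textbook-style illustration of that idea, not one of the GCMs or EMICs cited in this report; all values are nondimensional and chosen only to make the hysteresis visible.

```python
# Stommel-type two-box model, nondimensional: overturning q = 1 - S, where S is
# the salinity contrast and the temperature contrast is fixed at 1.
# dS/dt = H - |1 - S| * S, with H the imposed freshwater forcing.
def steady_state(H, S0, dt=0.01, steps=200_000):
    S = S0
    for _ in range(steps):
        S += dt * (H - abs(1.0 - S) * S)
    return S

# Ramp the freshwater forcing up, then back down, restarting from the previous
# equilibrium: the circulation collapses near H ~ 0.25 but does not recover at
# the same forcing on the way back -- a simple hysteresis loop.
S = 0.0
for H in [0.0, 0.1, 0.2, 0.26, 0.2, 0.1, 0.0]:
    S = steady_state(H, S)
    print(f"H = {H:4.2f}  overturning q = {1.0 - S:+.2f}")
```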
Pi Day on 14 March (written 3/14 in the US) also coincides with the birthday of Albert Einstein.

What is it? Since its inception at the San Francisco Exploratorium in 1988, the day has commonly been celebrated by eating pies and embracing the mathematical constant that helps us to understand circles. In recent years it has been used to raise awareness of how fun maths can be.

How can you celebrate?
- mgilmevans has shared a dingbats-themed pi game.
- Start the day by singing along to the pi song - which pupil can remember pi to the most decimal places?
- Visit www.piday.org to see what other teachers have done for Pi Days gone by. From baking cookies and discussing their circumferences to "pi-ing" teachers for charity, there are lots of entertaining ideas.
- Round up your lesson with fedoraboy's number search mental maths game to familiarise pupils with pi's first 64 digits.
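For a classroom extension in the same spirit, a Monte Carlo estimate of pi takes only a few lines of code. This is a generic illustration, not one of the activities listed above.

```python
import random

def estimate_pi(samples=1_000_000):
    """Throw random points at the unit square; the fraction landing inside
    the quarter circle x^2 + y^2 <= 1 approximates pi/4."""
    hits = sum(1 for _ in range(samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

print(estimate_pi())  # ~3.14, improving slowly as the sample count grows
```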
Have you heard about the joint-shaped asteroid traveling through space? No, this isn't the beginning of a nerdy stoner joke. It's an actual thing that's out in the cosmos right now. Here's what we know about this crazy thing.

There is an asteroid in the form of a massive joint hurtling through our solar system at this very moment. It is almost as if the "man upstairs," whoever that poor bastard might be at this point in time, found himself so stoned after a long day at the office of Universal Architecture & Son that he or she fumbled a dynamic doobie loaded up with some of that high-grade space grass, sending it on a fiery freefall through space and time.

We have magnificent technology that the scientific minds here on Earth are equipped with. However, reports indicate that the joint-shaped asteroid, known as "Oumuamua," came blazing into our solar system last month mostly undetected. Typically, astronomers catch things like asteroids and comets long before they ever come so close to buzzing the planet. But this time around, the planetary remnants came rushing in fast. Had it been on a direct course for Earth, it could possibly have rendered the entire population extinct. Without any warning.

Scientists first spotted the asteroid on October 19 with the volcano-perched Pan-STARRS1 telescope in Hawaii. In fact, the name Oumuamua, which was given to the blunt-shaped mass by the University of Hawaii Institute for Astronomy, is Hawaiian for "a messenger from afar arriving first." From what we know now, perhaps scientists should have considered naming this beastly space missile "Sum Hegh vo' wovbe'," which when translated from Klingon means "near death from above."

Experts Weigh In

Researchers say the rock likely traveled for "millions of years." They estimate that it traveled at speeds of around 85,700 miles per hour before arriving in our solar system. They have also confirmed it as the first interstellar object ever observed in our solar system. In addition, we now know that this unique asteroid is about the size of a football field. And it spins around similar to how a pencil might roll on a flat surface. The asteroid completes a rotation every 7.3 hours. All of the details of Oumuamua were published in the latest issue of the journal Nature. "The orbit calculations revealed beyond any doubt that this body did not originate from inside the solar system...but instead had come from interstellar space," the report reads.

The situation with Oumuamua could change our understanding of asteroids. We have always considered these large masses of rock the projectile vomit, of sorts, from when space erupted almost five billion years ago and created our solar system. However, Oumuamua originated from another star system altogether. So researchers are now curious to learn more about where it came from. It could mean there may be other planets out there that we are not aware of. And possibly even life beyond our comprehension.

NASA plans to use the Hubble Space Telescope to observe the joint-shaped asteroid for the rest of the week. "For decades we've theorized that such interstellar objects are out there, and now—for the first time—we have direct evidence they exist," Thomas Zurbuchen, associate administrator for NASA's Science Mission Directorate, told CNN. "This history-making discovery is opening a new window to study [the] formation of solar systems beyond our own."

Although Oumuamua is now 124 million miles away from Earth, scientists predict that other space joints, just like it, could threaten our planet in the future.
They believe there may be around 10,000 more of these high-speed rocks blasting around the universe on course for our solar system - a situation that, under the right circumstances, could spell disaster. Oumuamua's speed is what makes it so dangerous. The asteroid that hit Earth millions of years ago and put the dinosaurs out of commission was much larger than Oumuamua, about the size of a city. Some of the latest data shows that that space rock came down at just the right angle, creating the same amount of energy as tens of thousands of nuclear weapons being detonated simultaneously. This event, according to a recent study in Geophysical Research Letters, a journal of the American Geophysical Union, set off firestorms, hurricane-level winds and other disasters that, in addition to the dinosaurs, eliminated 75 percent of the Earth's plant and animal species.

Final Hit: Joint-Shaped Asteroid Traveled Millions of Years To Our Solar System

But the million-dollar question remains. If a joint-shaped asteroid like Oumuamua ever actually threatens Earth, could we prevent it from snuffing us out like the dinosaurs? Science fiction author Stuart Hardwick recently told Quora that it would "depend on how large it is, what it was made of, and how much time we had to act." For an object the size of Oumuamua, he suggests the most likely scenario would be a gravitational tractor. "In this scheme, we need only build a fairly robust spacecraft, get it into the vicinity of the asteroid, and have it keep station for a few years. Given enough time, it will draw the asteroid toward it by a non-trivial amount—enough to avert disaster," Hardwick said.

"The difficulty, of course, is that the spacecraft must remain on the job for a very long time, and while it might utilize sunlight or a radioisotope thermoelectric generator for power and rely on highly efficient ion or plasma thrusters for station keeping, it will still have to get into the object's orbit years, decades, or centuries ahead of time, and that poses a serious problem," he continued. "We cannot always predict orbits with the required precision, sufficiently far in advance. Indeed, we also need a few years to design and build the tractor spacecraft, and we need to know with certainty that we are at risk in order to get funding for that."

For a larger asteroid, Hardwick indicates that we would probably have to lean on the savagery of a nuclear weapon. If that ever happens, we're all going to need an asteroid-sized joint to get us through the madness.
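To put rough numbers on the gravity-tractor idea Hardwick describes, the sketch below computes the tiny acceleration a hovering spacecraft imparts and the velocity change accumulated over years of station-keeping. The spacecraft mass, hover distance, and durations are hypothetical, chosen only for illustration.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
YEAR = 3.156e7  # seconds

# Hypothetical tractor: a 20-tonne spacecraft hovering 200 m from the asteroid.
# The asteroid's acceleration toward it is G*m/r^2, independent of asteroid mass.
m_spacecraft = 20_000.0  # kg
r = 200.0                # m
a = G * m_spacecraft / r**2

for years_towing in (5, 10, 20):
    dv = a * years_towing * YEAR  # accumulated velocity change, m/s
    # The deflection keeps growing after towing stops; over a further 10-year
    # coast the along-track displacement is roughly dv * t (orbital effects ignored).
    drift = dv * 10 * YEAR / 1000.0  # km
    print(f"{years_towing:2d} yr tow: dv ~ {dv*100:.2f} cm/s, "
          f"~{drift:,.0f} km of drift over a further decade")
```

Even centimeter-per-second nudges, applied early enough, translate into thousands of kilometers of miss distance, which is why the lead time Hardwick emphasizes matters so much.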
Pigments extracted from marine black shales in Mauritania, West Africa, by an international team of researchers likely represent the oldest colors in the geological record. These pigments are more than half a billion years older than previous pigment discoveries and, more significantly, confirm that cyanobacteria dominated the base of the marine food chain a billion years ago. The colors are linked to the molecular fossils of chlorophyll, a pigment that enables plants to photosynthesize sunlight into energy. In the ground, the fossil materials appear blood red or deep purple; once extracted, crushed and diluted, they take on a bright pink color. The findings have implications for fleshing out the world's evolutionary record: the dominance of cyanobacteria accounts for the absence of more advanced biotic forms 1.1 billion years ago. Despite their prevalence, these microorganisms could not have provided a food supply sufficient to support the emergence of larger organisms. The researchers explained that a lack of algae and the small cell size of bacterial phytoplankton may have curtailed the flow of energy to higher trophic levels, potentially contributing to a diminished evolutionary pace toward complex eukaryotic ecosystems and large, active organisms. Scientists from the Japan Agency for Marine-Earth Science and Technology, Australian National University, Florida State University, Geoscience Australia and the University of Liège (Belgium) contributed to this research.
A series of rockets will be launched by the Indian space agency from its two centres between Thursday and Sunday to study Friday's solar eclipse and its after-effects. The Indian Space Research Organisation (ISRO) is arranging to send up a series of sounding rockets carrying instruments to measure the physical parameters of the upper atmosphere, launched from Sriharikota in Andhra Pradesh and Thumba in Kerala to study the effects of the solar eclipse.

The solar eclipse on Friday will last 11.8 minutes. The sounding rockets will be fired before and after it. The nine-metre RH 560 rockets weigh 1.5 tonnes and each carry a 100-kg payload of instruments. The two-stage rocket will take the instruments 500 km above the earth's surface. From Sriharikota, there will be one launch each on Friday and Sunday.

ISRO's Thumba Equatorial Rocket Launching Station (TERLS) in Kerala is to launch most of the rockets: four on Thursday and five on Friday. The rockets fired from TERLS are smaller than the RH 560 and will reach 75 to 120 km above the earth.
However, an asteroid discovered earlier this month - designated ZJ99C60 when it was spotted on May 8 - turned out to be a blast from the past, as astronomers realized it was actually 2010 WC9 coming back after nearly eight years. When it was first discovered in 2010, astronomers could not predict when it might return because the data on its orbit was not clear. After a few weeks, the scientists lost track of the asteroid.

Astronomers think that in spite of its size and its closeness to Earth, 2010 WC9 will safely zoom past the planet. However, it is rapidly brightening and is expected to get even brighter than eleventh magnitude when it is at its closest distance to the Earth. The asteroid, which will pass at an estimated distance of about 203,453 kilometers (126,419 miles), could reportedly be seen even with a small telescope. The space rock will pass by the planet at around half the moon's distance on Tuesday. Estimates of its size range from 197 to 427 feet (60-130 meters), making the May 15 pass one of the closest approaches ever observed of an asteroid of this size.

Thanks to Northolt Branch Observatories, you can actually watch it fly past. The observatory says it has discussed this unusual object with the astronomy site EarthSky. "Check out the link below to learn more about this asteroid, why it's special, and how to see it for yourself," they say on their Facebook page. Daniel Bamberger of Northolt stated on his Facebook page that the object has been imaged twice.

As per the experts, the asteroid 2010 WC9 is even larger than the Chelyabinsk meteor that injured nearly one thousand people and shattered windows when it broke up over Chelyabinsk in the Russian Federation in 2013.
The 90 experts participating in the survey anticipate a median sea-level rise of 200-300 centimeters by the year 2300 for a scenario with unmitigated emissions. In contrast, for a scenario with strong emissions reductions, experts expect a sea-level rise of 40-60 centimeters by 2100 and 60-100 centimeters by 2300. The survey was conducted by a team of scientists from the USA and Germany.

"While the results for the scenario with climate mitigation suggest a good chance of limiting future sea-level rise to one meter, the high emissions scenario would threaten the survival of some coastal cities and low-lying islands," says Stefan Rahmstorf from the Potsdam Institute for Climate Impact Research. "From a risk management perspective, projections of future sea-level rise are of major importance for coastal planning, and for weighing options of different levels of ambition in reducing greenhouse-gas emissions."

Projecting sea-level rise, however, comes with large uncertainties, since the physical processes causing the rise are complex. They include the expansion of ocean water as it warms, the melting of mountain glaciers and ice caps and of the two large ice sheets in Greenland and Antarctica, and the pumping of ground water for irrigation purposes. Different modeling approaches yield widely differing answers. The recently published IPCC report had to revise its projections upwards by about 60 percent compared to the previous report published in 2007, and other assessments of sea-level rise compiled by groups of scientists resulted in even higher projections. The observed sea-level rise as measured by satellites over the past two decades has exceeded earlier expectations.

Largest elicitation on sea-level rise ever: 90 key experts from 18 countries

"It is therefore useful to know what the larger community of sea-level experts thinks, and we make this transparent to the public," says lead author Benjamin Horton from the Institute of Marine and Coastal Sciences at Rutgers University in New Jersey. "We report the largest elicitation on future sea-level rise conducted, from ninety objectively selected experts from 18 countries." The experts were identified from peer-reviewed literature published since 2007 using the publication database 'Web of Science' of Thomson Reuters, an online scientific indexing service, to make sure they are all active researchers in this area. The 90 international experts, all of whom published at least six peer-reviewed papers on the topic of sea level during the past five years, provided their probabilistic assessments.

The survey finds most experts expecting a higher rise than the latest IPCC projections of 28-98 centimeters by the year 2100. Two thirds (65%) of the respondents gave a higher value than the IPCC for the upper end of this range, confirming that IPCC reports tend to be conservative in their assessment. The experts were also asked for a "high-end" estimate below which they expect sea level to stay with 95 percent certainty until the year 2100. This high-end value is relevant for coastal planning. For unmitigated emissions, half of the experts (51%) gave 1.5 meters or more and a quarter (27%) 2 meters or more. The high-end value in the year 2300 was given as 4.0 meters or higher by the majority of experts (58%). While we tend to look at projections with a focus on the relatively short period until 2100, sea-level rise will obviously not stop at that date.
"Overall, the results for 2300 by the expert survey as well as the IPCC illustrate the risk that temperature increases from unmitigated emissions could commit coastal populations to a long-term, multi-meter sea-level rise," says Rahmstorf. "They do, however, also illustrate the potential for escaping such large sea-level rise through substantial reductions of emissions."

Article: B. P. Horton, S. Rahmstorf, S. E. Engelhart, A. C. Kemp: Expert assessment of sea-level rise by AD 2100 and AD 2300. Quaternary Science Reviews (2013). [doi: 10.1016/j.quascirev.2013.11.002]
Link to the article: http://dx.doi.org/10.1016/j.quascirev.2013.11.002
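To make the reported statistics concrete, here is a small sketch of how percentile answers from an elicitation like this one can be summarized. The numbers in the list are made-up placeholders, not the survey's data.

```python
import statistics

# Hypothetical high-end (95th percentile) answers, in meters, from ten experts.
high_end_2100 = [1.2, 1.5, 1.5, 1.8, 2.0, 1.4, 2.5, 1.6, 1.3, 2.0]

median = statistics.median(high_end_2100)
share_ge_1_5 = sum(v >= 1.5 for v in high_end_2100) / len(high_end_2100)

print(f"median high-end estimate: {median:.1f} m")
print(f"fraction of experts at 1.5 m or more: {share_ge_1_5:.0%}")
```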
The oceanic or limnological mixed layer is a layer in which active turbulence has homogenized some range of depths. The surface mixed layer is a layer where this turbulence is generated by winds, surface heat fluxes, or processes such as evaporation or sea-ice formation that result in an increase in salinity. The atmospheric mixed layer is a zone having nearly constant potential temperature and specific humidity with height. The depth of the atmospheric mixed layer is known as the mixing height. Turbulence typically plays a role in the formation of fluid mixed layers.

Oceanic mixed layer

Importance of the mixed layer

The mixed layer plays an important role in the physical climate. Because the specific heat of ocean water is much larger than that of air, the top 2.5 m of the ocean holds as much heat as the entire atmosphere above it (a short arithmetic check appears at the end of this subsection). Thus the heat required to change a mixed layer of 25 m by 1 °C would be sufficient to raise the temperature of the atmosphere by 10 °C. The depth of the mixed layer is thus very important for determining the temperature range in oceanic and coastal regions. In addition, the heat stored within the oceanic mixed layer provides a source of heat that drives global variability such as El Niño.

The mixed layer is also important because its depth determines the average level of light seen by marine organisms. In very deep mixed layers, the tiny marine plants known as phytoplankton are unable to get enough light to maintain their metabolism. The deepening of the mixed layer in the wintertime in the North Atlantic is therefore associated with a strong decrease in surface chlorophyll a. However, this deep mixing also replenishes near-surface nutrient stocks. Thus when the mixed layer becomes shallow in the spring, and light levels increase, there is often a concomitant increase in phytoplankton biomass, known as the "spring bloom".

Oceanic mixed layer formation

There are three primary sources of energy for driving turbulent mixing within the open-ocean mixed layer. The first is ocean waves, which act in two ways: by generating turbulence near the ocean surface, which stirs light water downwards, and, where ocean currents vary with depth, by interacting with those currents to drive the process known as Langmuir circulation, large eddies that stir down to depths of tens of meters. Although wave breaking injects a great deal of energy into the upper few meters, most of it dissipates relatively rapidly. The second is wind-driven currents, which create layers in which there are velocity shears. When these shears reach sufficient magnitude, they can erode the stratified fluid below. This process is often described and modelled as an example of Kelvin-Helmholtz instability, though other processes may play a role as well. Finally, if cooling, the addition of brine from freezing sea ice, or evaporation at the surface causes the surface density to increase, convection will occur. The deepest mixed layers (exceeding 2000 m in regions such as the Labrador Sea) are formed through this final process, which is a form of Rayleigh-Taylor instability. Early models of the mixed layer, such as those of Mellor and Durbin, included the final two processes. In coastal zones, large velocities due to tides may also play an important role in establishing the mixed layer.
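The heat-capacity comparison flagged above can be checked with a few lines of arithmetic; the constants are standard reference values, rounded.

```python
# Column heat capacity (J per K per m^2) of an ocean layer vs. the atmosphere.
rho_sw, cp_sw = 1025.0, 3990.0            # seawater density (kg/m^3), specific heat (J/kg/K)
p0, g, cp_air = 101_325.0, 9.81, 1004.0   # surface pressure (Pa), gravity, air specific heat

ocean_layer = rho_sw * cp_sw * 2.5        # top 2.5 m of ocean
atmosphere = (p0 / g) * cp_air            # whole air column (mass per m^2 is p0/g)

print(f"ocean 2.5 m: {ocean_layer:.2e} J/K/m^2")  # ~1.0e7
print(f"atmosphere:  {atmosphere:.2e} J/K/m^2")   # ~1.0e7 -- comparable, as stated
```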
The mixed layer is characterized by being nearly uniform in properties such as temperature and salinity throughout the layer. Velocities, however, may exhibit significant shears within the mixed layer. The bottom of the mixed layer is characterized by a gradient, where the water properties change. Oceanographers use various criteria to decide what depth to call the mixed layer depth at any given time, based on measurements of the physical properties of the water. Often, an abrupt temperature change called a thermocline marks the bottom of the mixed layer; sometimes there may be an abrupt salinity change, called a halocline, as well. The combined influence of temperature and salinity changes results in an abrupt density change, or pycnocline. Additionally, sharp gradients in nutrients (nutricline) and oxygen (oxycline) and a maximum in chlorophyll concentration are often co-located with the base of the seasonal mixed layer.

Oceanic mixed layer depth determination

The depth of the mixed layer is often determined by hydrography - making measurements of water properties. Two criteria often used to determine the mixed layer depth are a temperature change and a sigma-t (density) change from a reference value (usually the surface measurement). The temperature criterion used in Levitus (1982) defines the mixed layer as the depth at which the temperature has changed by 0.5 °C from the surface temperature. The sigma-t (density) criterion used in Levitus (1982) uses the depth at which a change from the surface sigma-t of 0.125 has occurred. Neither criterion implies that active mixing is occurring to the mixed layer depth at all times. Rather, the mixed layer depth estimated from hydrography is a measure of the depth to which mixing occurs over the course of a few weeks. A worked example of the threshold criterion is sketched below.

The mixed layer depth is in fact greater in winter than in summer in each hemisphere. During the summer, increased solar heating of the surface water leads to more stable density stratification, reducing the penetration of wind-driven mixing. Because seawater is most dense just before it freezes, wintertime cooling over the ocean always reduces stable stratification, allowing a deeper penetration of wind-driven turbulence but also generating convective turbulence that can penetrate to great depths.
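A minimal sketch of the threshold criterion just described, assuming a discrete profile sampled at increasing depths; the example profile values are invented for illustration.

```python
def mixed_layer_depth(depth, temp, dT=0.5):
    """First depth at which temperature differs from the surface value
    by at least dT (the Levitus-style threshold criterion)."""
    t_surface = temp[0]
    for z, t in zip(depth, temp):
        if abs(t - t_surface) >= dT:
            return z
    return depth[-1]  # criterion never met within the profile

# Invented profile: warm, uniform upper layer over a thermocline.
depth = [0, 10, 20, 30, 40, 50, 60, 80, 100]
temp  = [28.0, 28.0, 27.9, 27.9, 27.8, 27.1, 26.2, 24.5, 22.0]
print(mixed_layer_depth(depth, temp))  # 50, where the 0.5 degree change is first met
```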
The density-derived MLD is defined as the depth at which the density has increased from the surface value by the amount that would result from a prescribed temperature decrease (e.g. 0.2 °C) at constant surface salinity. This depth, denoted Dsigma, corresponds to a layer that is both isothermal and isohaline. The BLT is then the temperature-defined MLD minus the density-defined MLD (i.e. DT-02 - Dsigma). Large values of the BLT are typically found in the equatorial regions and can be as high as 50 m. Above the barrier layer, the well-mixed layer may be due to local precipitation exceeding evaporation (e.g. in the western Pacific), monsoon-related river runoff (e.g. in the northern Indian Ocean), or advection of salty water subducted in the subtropics (found in all subtropical ocean gyres). Barrier layer formation in the subtropics is associated with seasonal change in the mixed layer depth, a sharper-than-normal gradient in sea surface salinity (SSS), and subduction across this SSS front. In particular, the barrier layer forms in the winter season on the equatorward flank of the subtropical salinity maxima. During early winter, the atmosphere cools the surface, and strong wind and negative buoyancy forcing mix temperature down through a deep layer. At the same time, fresh surface water is advected in from the rainy regions of the tropics. The deep isothermal layer, combined with strong salinity stratification, provides the conditions for barrier layer formation. In the western Pacific, the mechanism for barrier layer formation is different. Along the equator, the eastern edge of the warm pool (typically the 28 °C isotherm) marks a demarcation between warm, fresh water to the west and cold, salty, upwelled water in the central Pacific. A barrier layer is formed in the isothermal layer when salty water is subducted (i.e. a denser water mass moves below another) from the east into the warm pool due to local convergence, or when warm, fresh water overrides denser water to the east. Here, weak winds, heavy precipitation, eastward advection of low-salinity water, westward subduction of salty water, and downwelling equatorial Kelvin or Rossby waves are all factors that contribute to deep BLT formation.

Importance of BLT

Prior to El Niño, the warm pool stores heat and is confined to the far western Pacific. During El Niño, the warm pool migrates eastward along with the concomitant precipitation and current anomalies. The fetch of the westerlies increases during this time, reinforcing the event. Using data from ships of opportunity and Tropical Atmosphere-Ocean (TAO) moorings in the western Pacific, the east-west migration of the warm pool was tracked over 1992-2000 using sea surface salinity (SSS), sea surface temperature (SST), currents, and subsurface data from conductivity-temperature-depth (CTD) casts taken on various research cruises. This work showed that during westward flow, the BLT in the western Pacific along the equator (138°E-145°E, 2°N-2°S) was between 18 m and 35 m, corresponding with warm SST and serving as an efficient storage mechanism for heat. Barrier layer formation is driven by westward (i.e. converging and subducting) currents along the equator near the eastern edge of the salinity front that defines the warm pool.
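The BLT arithmetic above can be sketched with two calls to the mixed_layer_depth helper from the previous example (the two snippets are meant to be run together). Everything specific here is an assumption for illustration: the toy linear equation of state, the expansion and haline coefficients, and the profile values. A real calculation would use a proper seawater equation of state.

```python
import numpy as np

ALPHA = 2.5e-4   # 1/K, assumed thermal expansion coefficient
BETA = 0.78      # kg/m^3 per psu, assumed haline contraction in density units
RHO0 = 1023.0    # kg/m^3, assumed reference density

z     = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 75.0, 100.0])
T     = np.array([29.0, 29.0, 29.0, 28.9, 28.9, 28.7, 26.0, 24.0])  # deep isothermal layer
S     = np.array([34.0, 34.0, 34.2, 34.5, 34.8, 35.0, 35.2, 35.3])  # fresh lens on top
sigma = RHO0 * (1 - ALPHA * (T - 29.0)) + BETA * (S - 34.0) - 1000.0  # toy linear EOS

mld_T = mixed_layer_depth(z, T, 0.2)                   # DT-02 in the text, ~45 m here
dsigma_crit = RHO0 * ALPHA * 0.2                       # density change of a 0.2 degC cooling
mld_sigma = mixed_layer_depth(z, sigma, dsigma_crit)   # Dsigma in the text, ~13 m here
print(f"BLT ~ {mld_T - mld_sigma:.1f} m")              # BLT = DT-02 - Dsigma
```

The fresh surface lens makes the density-based MLD much shallower than the temperature-based MLD, which is exactly the barrier layer signature described in the text.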
These westward currents are driven by downwelling Rossby waves and represent either a westward advection of the barrier layer or a preferential deepening of the deeper thermocline relative to the shallower halocline due to Rossby wave dynamics (i.e. these waves favor vertical stretching of the upper water column). During El Niño, westerly winds drive the warm pool eastward, allowing fresh water to ride on top of the locally colder, saltier, denser water to the east. Using coupled atmosphere-ocean models with the mixing tuned to eliminate the BLT for one year prior to El Niño, it was shown that the heat buildup associated with the barrier layer is a requirement for a large El Niño. It has been shown that there is a tight relationship between SSS and SST in the western Pacific, and that the barrier layer is instrumental in maintaining heat and momentum in the warm pool within the salinity-stratified layer. Later work, including Argo drifters, confirms the relationship between the eastward migration of the warm pool during El Niño and barrier layer heat storage in the western Pacific. The main impact of the barrier layer is to maintain a shallow mixed layer, allowing an enhanced coupled air-sea response. In addition, the BLT is a key factor in establishing the mean state that is perturbed during El Niño/La Niña.

Limnological mixed layer formation

Formation of a mixed layer in a lake is similar to that in the ocean, but mixing is more likely to occur in lakes solely due to the molecular properties of water. Water changes density as it changes temperature. In lakes, the temperature structure is complicated by the fact that fresh water is heaviest at 3.98 °C. Thus in lakes where the surface gets very cold, the mixed layer briefly extends all the way to the bottom in the spring, as the surface warms, as well as in the fall, as the surface cools. This overturning is often important for maintaining the oxygenation of very deep lakes. The study of limnology encompasses all inland water bodies, including bodies of water with salt in them. In saline lakes and seas (such as the Caspian Sea), mixed layer formation generally behaves similarly to the ocean.

Atmospheric mixed layer formation

The atmospheric mixed layer results from convective air motions, typically seen towards the middle of the day when air at the surface is warmed and rises. It is thus mixed by Rayleigh–Taylor instability. The standard procedure for determining the mixed layer depth is to examine the profile of potential temperature: the temperature the air would have if it were brought to the pressure found at the surface without gaining or losing heat. Since such an increase of pressure compresses the air, the potential temperature is higher than the in-situ temperature, with the difference increasing with height in the atmosphere. The atmospheric mixed layer is defined as a layer of (approximately) constant potential temperature or, equivalently, provided it is free of clouds, a layer in which the temperature falls with height at a rate of approximately 10 °C/km. Such a layer may nevertheless have gradients in humidity. As with the ocean mixed layer, velocities will not be constant throughout the atmospheric mixed layer (a short numerical sketch of the potential-temperature calculation follows the links below).

External links

- Lake effect snow for a link to a NASA image from the SeaWiFS satellite showing clouds in the atmospheric mixed layer.
- See the Ifremer/Los Mixed Layer Depth Climatology website at http://www.ifremer.fr/cerweb/deboyer/mld for access to up-to-date ocean mixed layer depth climatology, data, maps, and links.
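As promised above, a short numerical sketch of the potential-temperature procedure. It is ours, not the article's: the toy sounding, the dry-air exponent R/cp ≈ 0.286, and the 0.5 K tolerance used to call a layer "nearly constant" are all assumptions.

```python
import numpy as np

R_OVER_CP = 0.286   # R/cp for dry air (dimensionless)
P0 = 1000.0         # hPa, assumed reference pressure

def potential_temperature(T_kelvin, p_hpa):
    """Dry-air Poisson relation: theta = T * (p0/p)^(R/cp)."""
    return np.asarray(T_kelvin) * (P0 / np.asarray(p_hpa)) ** R_OVER_CP

# toy midday sounding: pressure (hPa) and in-situ temperature (K)
p = np.array([1000.0, 950.0, 900.0, 850.0, 800.0, 750.0])
T = np.array([300.0, 295.8, 291.5, 287.2, 284.5, 282.5])
theta = potential_temperature(T, p)

# mixing height = top of the layer of (nearly) constant theta;
# here "nearly" means within an assumed 0.5 K of the surface value.
mixed = theta - theta[0] < 0.5
top = int(np.argmin(mixed)) - 1   # index of the last level still inside the layer
print(theta.round(1), "mixed-layer top at", p[top], "hPa")
```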
- Levitus, Sydney (1982), Climatological Atlas of the World Ocean, NOAA Professional Paper 13, U.S. Department of Commerce.
- Mellor, G. L.; Durbin, P. A. (1975). "The structure and dynamics of the ocean surface mixed layer". Journal of Physical Oceanography. 5: 718–728. Bibcode:1975JPO.....5..718M. doi:10.1175/1520-0485(1975)005<0718:TSADOT>2.0.CO;2.
- Wallace, J. M., and P.V. Hobbs (1977), Atmospheric Science: An Introductory Survey, Academic Press, San Diego.
- Kato, H.; Phillips, O.M. (1969). "On the penetration of a turbulent layer into a stratified fluid". Journal of Fluid Mechanics. 37 (4): 643–655. Bibcode:1969JFM....37..643K. doi:10.1017/S0022112069000784.
- Agrawal, Y.C.; Terray, E.A.; Donelan, M.A.; Hwang, P.A.; Williams, A.J.; Drennan, W.M.; Kahma, K.K.; Kitaigorodskii, S.A. (1992). "Enhanced dissipation of kinetic energy beneath surface waves". Nature. 359 (6392): 219–220. Bibcode:1992Natur.359..219A. doi:10.1038/359219a0.
- Craik, A.D.D.; Leibovich, S. (1976). "A rational model for Langmuir circulations". Journal of Fluid Mechanics. 73 (3): 401–426. Bibcode:1976JFM....73..401C. doi:10.1017/S0022112076001420.
- Gnanadesikan, A.; Weller, R.A. (1995). "Structure and variability of the Ekman spiral in the presence of surface gravity waves". Journal of Physical Oceanography. 25 (12): 3148–3171. Bibcode:1995JPO....25.3148G. doi:10.1175/1520-0485(1995)025<3148:saiote>2.0.co;2.
- Sprintall, J.; Tomczak, M. (1992). "Evidence of the barrier layer in the surface layer of the tropics". Journal of Geophysical Research: Oceans. 97 (C5): 7305–7316.
- Lukas, R.; Lindstrom, E. (1991). "The mixed layer of the western equatorial Pacific Ocean". Journal of Geophysical Research: Oceans. 96: 3343–3357. Bibcode:1991JGR....96.3343L. doi:10.1029/90jc01951.
- Sato, K.; Suga, T.; Hanawa, K. (2006). "Barrier layers in the subtropical gyres of the world's oceans". Geophysical Research Letters. 33 (8).
- Bosc, C.; Delcroix, T.; Maes, C. (2009). "Barrier layer variability in the western Pacific warm pool from 2000 to 2007". Journal of Geophysical Research: Oceans. 114. Bibcode:2009JGRC..114.6023B. doi:10.1029/2008jc005187.
- Delcroix, T.; McPhaden, M. (2002). "Interannual sea surface salinity and temperature changes in the western Pacific warm pool during 1992-2000". Journal of Geophysical Research: Oceans. 107 (C12). Bibcode:2002JGRC..107.8002D. doi:10.1029/2001jc000862.
- Maes, C.; Picaut, J.; Belamari, S. (2005). "Importance of the salinity barrier layer for the buildup of El Niño". Journal of Climate. 18 (1): 104–118. Bibcode:2005JCli...18..104M. doi:10.1175/jcli-3214.1.
- Maes, C.; Ando, K.; Delcroix, T.; Kessler, W.S.; McPhaden, M.J.; Roemmich, D. (2006). "Observed correlation of surface salinity, temperature and barrier layer at the eastern edge of the western Pacific warm pool". Geophysical Research Letters. 33 (6). Bibcode:2006GeoRL..33.6601M. doi:10.1029/2005gl024772.
- Mignot, J.; Montegut, C.d.B.; Lazar, A.; Cravatte, S. (2007). "Control of salinity on the mixed layer depth in the world ocean: 2. Tropical areas". Journal of Geophysical Research: Oceans. 112 (C10). Bibcode:2007JGRC..11210010M. doi:10.1029/2006jc003954.
- Maes, C.; Belamari, S. (2011). "On the Impact of Salinity Barrier Layer on the Pacific Ocean Mean State and ENSO". Sola. 7: 97–100. Bibcode:2011SOLA....7...97M. doi:10.2151/sola.2011-025.
<urn:uuid:8678f8b1-288b-4093-8c50-325e170c5eba>
3.890625
4,160
Knowledge Article
Science & Tech.
59.343822
95,551,444
2 pipes can fill a tank in 35 minutes. The larger pipe alone can fill the tank in 24 minutes less time than the smaller pipe. How long does each pipe take to fill the tank alone? (A worked solution sketch follows the list of similar examples below.)

Next similar examples:
- The tank: The tank had 9 inflows and would be filled in 21 days. After 9 days, 3 of the inflows were shut off. In how many days did the remaining 6 inflows fill the rest of the tank?
- If water flows into the pool through two inlets, the whole pool fills in 18 hours. The first inlet alone takes 10 hours longer to fill the pool than the second. How long does it take to fill the pool from each inlet separately?
- The larger: The larger of two numbers is nine more than four times the smaller number. The sum of the two numbers is fifty-nine. Find the two numbers.
- The sum 2: The sum of five consecutive even integers is 150. Find the largest of the five integers. A. 28 B. 30 C. 34 D. 54. Show your solution and explain your answer.
- The length: The length of a rectangle is 6 meters less than twice the width. If the area of the rectangle is 216 square meters, find the dimensions of the rectangle.
- 40% volume: 40% volume with 104 uph (units per labor hour) and 8 people working. What is the volume?
- Trapezoid MO: The rectangular trapezoid ABCD has a right angle at point B, |AC| = 12, |CD| = 8, and diagonals perpendicular to each other. Calculate the perimeter and area of the trapezoid.
- Area of iso-trap: Find the area of an isosceles trapezoid if the lengths of its bases are 16 cm and 30 cm and the diagonals are perpendicular to each other.
- Right triangle eq2: Find the lengths of the sides and the angles in a right triangle, given area S = 210 and perimeter o = 70.
- The area of a square garden is 6/4 of the area of a triangular garden with sides 56 m, 35 m, and 35 m. How many meters of fencing are needed to fence the square garden?
- Prove that k1 and k2 are the equations of two circles. Find the equation of the line that passes through the centers of these circles. k1: x² + y² + 2x + 4y + 1 = 0; k2: x² + y² - 8x + 6y + 9 = 0
- CuSO4 mixture: How many grams of solid CuSO4 do we have to add to 450 g of a 15% CuSO4 solution to produce a 25% solution?
- One brick weighs 6 kg plus half a brick. What is the weight of one brick?
- Juan bought 9 identical chocolates for 9 Eur. How many euros does he pay for 29 chocolates?
- Intercept with axis: f(x) = log(x + 4) - 2; what is the x-intercept?
- Solve 2: Solve the integer equation a + b + c = 30, where a, b, c are odd natural numbers from the set {1, 3, 5, 7, 9, 11, 13, 15}.
- Book reading: If we read a book at a speed of 15 pages a day, we finish it 3 days sooner than at a speed of 10 pages per day. How many days will it take to read the book at 6 pages per day?
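A worked solution sketch for the two-pipes problem at the top of the page (ours, not the site's): let the smaller pipe take t minutes alone, so the larger takes t - 24 minutes. Rates add, so 1/t + 1/(t - 24) = 1/35, which rearranges to the quadratic t² - 94t + 840 = 0.

```python
import math

# Solve t^2 - 94t + 840 = 0, where t = minutes for the smaller pipe alone.
a, b, c = 1.0, -94.0, 840.0
disc = math.sqrt(b * b - 4 * a * c)                      # sqrt(5476) = 74
roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]   # 10 and 84
t = max(roots)                        # t = 10 is rejected: t - 24 would be negative
print(t, t - 24)                      # smaller pipe: 84 min, larger pipe: 60 min
print(1 / t + 1 / (t - 24), 1 / 35)   # both 0.028571..., i.e. the tank fills in 35 min
```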
<urn:uuid:17ac3a66-b3ad-418a-bb0f-5152c9c5a2b9>
3.375
760
Tutorial
Science & Tech.
86.891344
95,551,448
A global analysis, published in the Proceedings of the National Academy of Sciences, raises the minimum estimated number of tropical tree species to at least 40,000 to 53,000 worldwide. The paper's co-authors include researchers from the Center for Tropical Forest Science-Forest Global Earth Observatory (CTFS-ForestGEO) and the Smithsonian Tropical Research Institute (STRI). Many of these species risk extinction because of their rarity and restriction to small geographic areas, reaffirming the need for comprehensive, pan-tropical conservation efforts. Although scientists could confidently say "the tropics are diverse," the answer to "how diverse" has remained open to speculation. Tropical tree identification is notoriously difficult, hampered by hard-to-access terrain and the sheer number of rare species. Much of the data came from CTFS-ForestGEO study sites, where standardized pan-tropical survey methods create opportunities to gauge tropical diversity much more accurately. By raising the estimated minimum number of tree species in the world, the study also raises estimates for the number of insect and microbe species associated with tropical trees, placing an even higher premium on protection of these forest ecosystems. Co-author William Laurance, senior research associate at STRI and distinguished research professor at James Cook University, explains that the "stunningly high tree diversity" of the tropics is represented by thousands of rare species, whose sparse populations may not be sustained in the long term by isolated protected areas: "This study once again validates a strategy of making forest reserves as big as possible, and also trying to prevent their isolation from adjoining areas of forest." The study's lead author Ferry Slik, professor at Universiti Brunei Darussalam, collaborated with over 170 scientists from 126 institutions to study a dataset composed of 207 forested locations across tropical America, Africa and the Indo-Pacific. Each forest plot contains at least 250 individual trees identified to species, ensuring comprehensive coverage of the total species diversity in each geographical area. Among their findings, the researchers note that, contrary to previous assumptions, the Indo-Pacific tropics contain as much species diversity as tropical America -- at least 19,000 species. Both tropical America and the Indo-Pacific are about five times as species-rich as Africa, whose forests are hypothesized to have experienced extensive extinction events during the Pleistocene era of glaciation and climate change. All three regions contain distinct tree lineages reflecting unique evolutionary histories. The researchers note that their calculations excluded some 10 percent of unidentifiable trees in a dataset comprising 657,630 individuals. Since these trees could reasonably represent rare or previously unknown species, there is a high likelihood that estimates of total tree species diversity will keep increasing as more of the tropics are surveyed and studied. Laurance notes that the CTFS-ForestGEO network continues to grow, adding new forest plots not just for basic research but also "as barometers of the long-term effects of global change on forest communities." Meanwhile, as deforestation and development increase the extinction risk for many unique species, lessons may be learned from Africa's reduced tropical diversity. When forest areas shrink, rare species are usually the first to disappear.
Consequently, even if the extinction pressure is eventually lifted, a much more limited palette of species remains to repopulate the region. While the tropics are vast and diverse, their individual components are irreplaceable.

The Center for Tropical Forest Science-Forest Global Earth Observatories (CTFS-ForestGEO) is a global network of forest research plots and scientists dedicated to the study of tropical and temperate forest function and diversity. The multi-institutional network comprises over 60 forest research plots across the Americas, Africa, Asia, and Europe, with a strong focus on tropical regions. CTFS-ForestGEO monitors the growth and survival of approximately 6 million trees and 10,000 species. http://www.

The Smithsonian Tropical Research Institute, headquartered in Panama City, Panama, is a unit of the Smithsonian Institution. The Institute furthers the understanding of tropical nature and its importance to human welfare, trains students to conduct research in the tropics and promotes conservation by increasing public awareness of the beauty and importance of tropical ecosystems. Website: http://www.

Reference: Ferry J. W. Slik, et al. 2015. An estimate of the number of tropical tree species. Proceedings of the National Academy of Sciences USA. DOI 10.1073/pnas.1423147112.

Beth King | EurekAlert!
<urn:uuid:3f904b4a-e77c-469f-9142-4ae3a8dc9059>
3.828125
1,499
Content Listing
Science & Tech.
30.0583
95,551,449
Lowering the Freezing Point of Water

It is common knowledge that the freezing point of pure water is 0 degrees centigrade, or 32 degrees Fahrenheit. However, is it possible to keep water in its liquid state below that freezing point? It is indeed possible, and people have been using this principle for centuries! Traveling back to the 1600s, we find King Charles I of England dining with his lords and ladies. The final course is the epicurean delight of ice cream. It is doubtful that King Charles I understood the scientific principle of depressing the freezing point of a solution; nevertheless, at that time it was impossible to make ice cream without freezing the cream by depressing the freezing point of water below 0 degrees centigrade (Zinger, 2005). Today, municipalities spread salt on icy roads in order to "melt" the ice. In actuality, the salt is merely depressing the freezing point of the water, allowing the roads to remain ice free even while temperatures are below 0 degrees centigrade. To comprehend freezing point depression, you must first understand the freezing point itself. Simply put, it is the temperature at which a liquid changes into its solid phase. However, it can also be thought of as the temperature at which the liquid and solid phases are at equilibrium with the atmospheric, or vapor, pressure around them. Freezing occurs as water molecules become ordered into a crystalline lattice. Scientists have long known that when you add a solute to a solvent, the freezing point lowers, or depresses. Freezing point depression is a colligative property. Colligative properties are the properties of solutions that depend on the number of solute particles in a solvent, not on the properties of the individual molecules in the solution (Prentice-Hall, 1972). As an example, when you create a solution by adding sodium chloride (the solute) to water (the solvent), the freezing temperature of the solution decreases. The increased number of solute particles in the solution interferes with the development of the crystalline structure, so the freezing process is delayed (Newton, 1999). Freezing point depression can be expressed mathematically as ΔT = i · Kf · m, where ΔT is the change in freezing temperature, i is the number of particles into which the solute dissociates, m is the molality (moles of solute per kilogram of solvent), and Kf is the molal freezing point constant (for water, Kf = 1.853 °C·kg/mol) (thinkquest, 2010). As discussed, solutes interfere with the shifting of a liquid to a solid state. The colligative properties relate to the number of solute particles in a solution: the more solute particles there are, the greater the decrease in freezing temperature. If 10 grams of sodium chloride were added to 100 grams of water, the freezing point would drop to -5.9 degrees centigrade. However, if 10 grams of sucrose were added to 100 grams of water, the solution's freezing point would only drop to -0.56 degrees centigrade. Why the dramatic difference between the two? After all, the same amount of sucrose and sodium chloride was added to the same amount of water.
The answer lies in the number of particles in each solute. There are more particles in 10 grams of sodium chloride than there are in 10 grams of sucrose. Sucrose, C12H22O11, has a molecular weight of 342.3 grams per mole. Sodium chloride, on the other hand, has a molecular weight of 58.44 grams per mole. Gram for gram, sodium chloride therefore contributes almost six times as many particles as sucrose. Therefore, the sodium chloride solution has a lower freezing point than the sucrose solution (Chemistry Explained, 2010). Not only is it possible to quantify the depression of the freezing point of a solution, it is possible to predict how far the freezing point will be decreased. According to the principles of the colligative properties, it doesn't matter what the physical properties of the solute added to the solution may be; the only determining factor is the number of particles in the solution. Therefore, if you double the amount of sodium chloride in a solution, the depression of the freezing point will be double that of the original solution. The original question, "is it possible to keep water in its liquid state below that freezing point?", has most assuredly been answered with a resounding yes. Not only can the freezing point be lowered, that lowering can be understood, quantified and predicted. In the experiment phase of this project, the scientific method will be used to assess the validity of this research. King Charles I of England would be surprised to know that his epicurean delight of ice cream paved the way for the discoveries of colligative properties and lowering the freezing point of water. (A numerical check of the two examples above follows the reference list below.)

References:
- A Brief History of Ice Cream, http://www.zingersicecream.com/history.htm
- Colligative Properties, http://www.chemistryexplained.com/Ce-Co/Colligative-Properties.html
- Solutions and Colligative Properties: Colligative Properties, http://library.thinkquest.org/C006669/data/Chem/colligative/colligative.html
- W.J. Moore, Physical Chemistry, Prentice-Hall, 1972
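As promised above, a numerical check of the essay's two worked examples using ΔT = i · Kf · m. This is our sketch layered on the essay's data; the empirical van 't Hoff factor used to reproduce the -5.9 °C figure is an assumption on our part (ideal dissociation of NaCl would give i = 2).

```python
# Freezing-point depression: delta T = i * Kf * m
KF = 1.853                          # degC kg/mol, molal constant for water
M_NACL, M_SUCROSE = 58.44, 342.3    # g/mol, molecular weights from the essay

def delta_t(grams_solute, molar_mass, kg_water, i):
    """Depression of the freezing point for a given mass of solute."""
    molality = (grams_solute / molar_mass) / kg_water   # mol solute per kg water
    return i * KF * molality

print(delta_t(10, M_SUCROSE, 0.100, i=1))     # ~0.54 degC (essay quotes 0.56)
print(delta_t(10, M_NACL,    0.100, i=2))     # ~6.3 degC with ideal i = 2
print(delta_t(10, M_NACL,    0.100, i=1.86))  # ~5.9 degC, matching the essay;
# an effective i below 2 reflects ion pairing in concentrated NaCl solutions
```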
<urn:uuid:6c23ea58-65db-4248-b68d-d0a89b1c9d94>
3.84375
1,190
Truncated
Science & Tech.
47.657967
95,551,453
The Structuring Role of Marine Life in Open Ocean Habitat: Importance to International Policy

- Environment Department, University of York, York, United Kingdom

Areas beyond national jurisdiction (ABNJ) lie outside the 200 nautical mile limits of national sovereignty and cover 58% of the ocean surface. Global conservation agreements recognize biodiversity loss in ABNJ and aim to protect ≥10% of oceans in marine protected areas (MPAs) by 2020. However, limited mechanisms to create MPAs in ABNJ currently exist, and existing management is widely regarded as inadequate to safeguard biodiversity. Negotiations are therefore underway for an "international legally binding instrument" (ILBI) to the United Nations Convention on the Law of the Sea to enable biodiversity conservation beyond national jurisdiction. While this agreement will, hopefully, establish a mechanism to create MPAs in ABNJ, discussions to date highlight a further problem: namely, defining what to protect. We have a good framework for terrestrial and coastal habitats; however, habitats in ABNJ, particularly the open ocean, are less understood and poorly defined. Often, predictable broad oceanographic features are used to define open ocean habitats. But what, exactly, constitutes the habitat—the water, or the species that live there? Complicating matters, species in the open sea are often highly mobile. Here, we argue that mobile marine organisms provide the structure-forming biomass and constitute "habitat" in the open ocean. For an ABNJ ILBI to offer effective protection to marine biodiversity it must consider habitats a function of their inhabitants and represent all marine life within its scope. Only by enabling strong protection for every element of biodiversity can we hope to be fully successful in conserving it.

Areas beyond national jurisdiction (ABNJ) cover 58% of the ocean surface and lie outside the 200 nautical mile limits of national sovereignty (exclusive economic zones). International concern has steadily increased over the multiplication and intensification of threats to marine biodiversity in ABNJ, fragmented and uncoordinated management, and the lack of a comprehensive legal framework to properly address threats (Ban et al., 2014; Merrie et al., 2014; Gjerde et al., 2016; Wright et al., 2016). Global agreements on environmental protection recognize steep biodiversity losses in ABNJ and have set targets to protect at least 10% of coastal and marine areas in marine protected areas (MPAs) (Convention on Biological Diversity, 2010; United Nations, 2015). However, there is presently no agreed mechanism to protect biodiversity in ABNJ. Following 10 years of informal negotiations, in March 2016 delegates from 193 countries, and representatives from numerous intergovernmental and non-governmental organizations, met at the United Nations (UN) in New York for the first of four meetings to negotiate the elements of an "international legally binding instrument" (ILBI) for the conservation and sustainable use of biodiversity beyond national jurisdiction under the UN Convention on the Law of the Sea (UNCLOS).
These negotiations are framed by four issues which delegates have agreed must be considered "together and as a whole": (1) marine genetic resources, including benefit sharing; (2) area-based management tools, including MPAs; (3) environmental impact assessments; and (4) capacity building and the transfer of marine technology (UNGA, 2015; Gjerde et al., 2016; Wright et al., 2016). This process should result in recommendations to the UN General Assembly by the end of 2017. If successful, these negotiations will lead to an intergovernmental negotiating conference in 2018 to improve governance and management of biodiversity beyond national jurisdiction. However, while the ILBI will, hopefully, facilitate conservation and sustainable use of biodiversity in ABNJ, including a mechanism for establishing MPAs, these discussions highlight a further problem: namely, defining what to protect. Existing global targets measure progress toward biodiversity conservation using the extent of ecosystems and habitats covered by protected areas (Convention on Biological Diversity, 2010). However, while we have a good working framework for terrestrial and coastal habitats, habitats in ABNJ, and particularly the open ocean, are less understood and poorly defined (e.g., IUCN, 2017). To inform these discussions we consider what constitutes "habitat" in the largely fluid environment of the open ocean.

Relating Habitat Concepts to Areas Beyond National Jurisdiction

Habitat and ecosystem concepts overlap substantially. The Convention on Biological Diversity (CBD) (Article 2) defines "ecosystem" as "a dynamic complex of plant, animal and micro-organism communities and their non-living environment interacting as a functional unit," and "habitat" as "the place or type of site where an organism or population naturally occurs" (Convention on Biological Diversity, 2010). The definitions are therefore interrelated, and application of the terms is scale-dependent. Even without clear definitions, the idea that the world is divided into a series of ecosystems and habitats is most easily grasped when fixed entities comprise a habitat with discrete boundaries (although visible boundaries often conceal complex networks of wider connections that may be overlooked; see Box 1). For instance, on land it is easy to conceive where a lake or forest ends and a different habitat or ecosystem begins. This principle clearly translates to coastal systems where, for example, mangrove trees, seagrass meadows, or coral and oyster reefs act as easily defined structuring elements. Similarly, some habitats in ABNJ, especially seabed features such as seamounts, hydrothermal vents, and ocean ridges, offer defined features around which boundaries can be drawn using traditional principles.

Box 1. Considering complex connections amongst ecosystems in the MPA context.

Whether on land or in the sea, ecosystems are an interconnected continuum in space and time across living and non-living realms. While ecosystems on land may be easier to perceive and define as distinct entities, it is increasingly understood that successful management and conservation must incorporate connectivity with the surrounding environment.
For example, migratory salmon are an important mediator of marine-derived nutrients to freshwater and riparian habitats and the animals that rely on them; without considering the underlying ecology and importance of salmon in these systems, management is unlikely to achieve desired outcomes of maintaining habitat diversity, structure and function (Darimont et al., 2010; Artelle et al., 2016). Similarly, anthropogenic nutrient inputs from land may result in eutrophication of freshwater, estuarine, and coastal ecosystems leading to dead zones (Diaz and Rosenberg, 2008), harmful algal blooms (Heisler et al., 2008), contaminated water and seafood (Heisler et al., 2008), and increased mortality of wildlife (Fey et al., 2015). Broader considerations than simply the apparent spatial footprint of a habitat are therefore needed to attain management and conservation objectives. The same broad approach applies to management in ABNJ. A seamount or hydrothermal vent ecosystem, for example, is not just a reflection of the bathymetric feature but rather a combination of influences which includes the water column and the creatures on and within it (Clark et al., 2010; Levin et al., 2016). Nutrient and food subsidies from seeps and vents influence surrounding fish and fisheries (Grupe et al., 2015) in a similar manner to coastal habitats such as seagrass meadows (Heck et al., 2008), although quantification is still limited. Other seabed features likely exert similar influences (Morato et al., 2010; Letessier et al., 2016). The vertical and horizontal footprint of such ecosystems is therefore much larger than simply where the physical habitat manifests, with the extent and scales of influence varying from place to place and among ecosystems (e.g., Levin et al., 2016). At the sea surface, distinctive habitats based on oceanographic features and areas of high productivity and biodiversity are identifiable using sea surface temperature, temperature at depth, chlorophyll, and nitrates among other variables (e.g., Hobday et al., 2011). Beneath these areas often lie diverse seabed ecosystems (Woolley et al., 2016) with nutrients, dissolved organic matter, and minerals moved through the water column, mediated by marine life as well as topographically induced currents, which influence both seabed and water column habitat characteristics (Turner, 2015; Soetaert et al., 2016). For an international legally binding instrument for biodiversity protection beyond national jurisdiction under the UN Convention on the Law of the Sea to be effective, it needs to ensure these broader considerations are incorporated. To achieve this, species and habitat conservation need to be integrated, recognizing that habitats will not be protected if their component species are not. Fluid realms, such as the open ocean (Norse, 2005) and airspace (Diehl, 2013), challenge our conception and application of habitat and ecosystem ideas. Predictable, broad oceanographic features, such as frontal zones, which aggregate nutrients and food and attract predators, offer opportunities to delineate boundaries (Scales et al., 2014). But what exactly defines the habitat? Is it the water, or the species that live there? With the exception of floating Sargassum weed (Hemphill, 2005), there is little structure-forming biomass in pelagic systems and even this is not fixed in space. The biomass present is held in the bodies of the creatures that live in the water and is highly mobile. 
It is those creatures, and the ecological roles they fulfill, we argue, that constitute "habitat" in the open sea.

Habitats as a Function of Their Inhabitants

Living and non-living realms interact to characterize ecosystems. The occupants of any habitat create and alter the system they live within. Species that create more complex habitat by modulating the availability of resources to other species are known as "ecosystem engineers" (Jones et al., 1994). Most research identifying marine organisms as ecosystem engineers has focused on species that either attach to or interact with seabed communities, such as corals, bivalves, seagrasses, and species that modify sediments (e.g., Soetaert et al., 2016). Only recently have some begun to explore the potential for other marine organisms to act as ecosystem engineers. Examples in the open ocean include phytoplankton and zooplankton (Jones et al., 1994; Breitburg et al., 2010), and baleen and sperm whales (Roman et al., 2014). The structuring role of plankton in pelagic food-webs has long been recognized. They are critical to ecosystem function, and their abundance and biomass determine the distribution and productivity of marine life (Chassot et al., 2010; Watson et al., 2015). However, plankton may also be considered ecosystem engineers—affecting the photic, chemical and thermal regimes of water and consequently the suitability of that habitat for other life (Haury et al., 1978; Duffy and Stachowicz, 2006; Breitburg et al., 2010). For example, Antarctic krill (Euphausia superba) are a fundamental food source for predators from squid to baleen whales (Constable et al., 2000), play a major role in ocean productivity by recycling iron in surface waters (Nicol et al., 2010), alter organic matter and trace element concentrations in surface waters during molting (Nicol and Stolp, 1989), and may be an important carbon sink (Swadling, 2006; Tarling and Johnson, 2006). However, most research examines krill-based food-webs or environmental factors affecting their populations rather than their influence on the non-living realm. Advances in technology and increased demand are anticipated to expand zooplankton fisheries in the future, leading to calls for precautionary management to avert adverse ecosystem and habitat consequences of exploitation (Nicol et al., 2012; Brotz, 2016; Kawaguchi and Melle, 2016). Emerging evidence suggests that mobile marine species can transform the environment as they move through it, transferring nutrients within the water column (deep to shallow and vice versa) and across oceans (Wilson et al., 2009; Pershing et al., 2010; Roman and McCarthy, 2010; Roman et al., 2014; St John et al., 2016). For example, depletion of whales due to commercial whaling resulted in substantial deep-sea habitat loss through a reduction in dead whale "falls" (Smith, 2007), declines in primary productivity due to reduced nutrient shuttling (Nicol et al., 2010; Roman and McCarthy, 2010), changes in food-web structure and biogeochemical cycles (Lavery et al., 2010; Roman and McCarthy, 2010), and reduced potential for organic carbon sequestration (Lavery et al., 2010; Pershing et al., 2010). The consequences of extraction therefore extended far beyond the decline of individual whale species. Similar ecosystem-wide changes can be expected from exploitation of other large-bodied or highly abundant marine animals. For instance, mesopelagic fish (200–1,000 m) undertake daily migrations between near-surface and deep water (Robinson et al., 2010).
Estimates suggest the global biomass of mesopelagic fish is on the order of 10 billion tonnes and, while there is uncertainty in this number, they likely represent the most abundant vertebrates on Earth (Irigoien et al., 2014) and the largest structuring biomass in the open sea. Their mass migration provides critical links in biogeochemical cycles across the water column, promoting carbon uptake and storage and thereby affecting climate regulation (Robinson et al., 2010; Giering et al., 2014; St John et al., 2016), modifies fluxes of nutrients and oxygen (Robinson et al., 2010; Bianchi et al., 2013a,b), and helps sustain the metabolic requirements of mesopelagic ecosystems (Burd et al., 2010; Bianchi et al., 2013b). They are also a key resource for higher trophic levels such as tunas and billfish (Potier et al., 2007; Duffy et al., 2017). There is increasing interest in exploiting mesopelagic fish, particularly for fishmeal and oil, but technical and economic constraints still prevent large-scale commercial activity (St John et al., 2016). Nonetheless, licenses to fish mesopelagics have been issued by Norway and Pakistan (The Economist, 2017), although the US has proactively prohibited directed commercial fisheries in its Pacific waters due to concerns over potential adverse ecosystem consequences (NOAA, 2016). Many exploited marine species such as billfish, tuna, sharks, and rays that spend time in ABNJ regularly undertake extensive horizontal movements and deep dives into the meso- and even bathypelagic (1,000–4,000 m) realms (Thorrold et al., 2014; Abascal et al., 2015; Fuller et al., 2015; Howey et al., 2016). Their roles in biogeochemical cycles are largely unquantified, but the movements link surface waters and the deep ocean and are likely to influence habitat characteristics in a similar manner to whales and mesopelagic fish. Mobile, open ocean predators also structure habitats through their physical presence and behaviors. For example, hunting pelagic fish such as tuna provide visual cues to seabirds, enabling prey detection over greater distances and enhancing bird foraging success by forcing prey close to the surface (Maxwell and Morgan, 2013). Other marine organisms may control the abundance of prey or act as food, either directly or through detritus, influences that change across life stages (Young et al., 2015). Although rarely considered this way, these roles are analogous in their significance to the structuring influence of kelp or seagrass in coastal habitats. There are many examples of altered ocean food-web dynamics following depletion of apex predators. For example, the recent range expansion of the Humboldt squid into the eastern North Pacific has been linked to reduced predation and competition with large predatory fish targeted by fisheries, and expanded low oxygen waters (Zeidberg and Robison, 2007). Predator depletion has well-known effects on coastal habitats, e.g., sea otter loss led to kelp decline due to reduced predation on herbivores (Estes et al., 2011), and overfishing of apex predators has altered trophic structure leading to increased abundance of mid-size predators (Heithaus et al., 2008; Ritchie and Johnson, 2009; Ferretti et al., 2010; Ortuño Crespo and Dunn, 2017). In the open ocean, where ecosystems are defined by their inhabitants, community level impacts equate to ecosystem impacts. While the ecological impacts of apex predator depletion in ABNJ are poorly understood, by inference from known cases they can be expected to be significant.
Unifying Habitat and Species Protection in Areas Beyond National Jurisdiction

A habitat is malleable—preserving it in a particular state requires protection of the things that make it distinctive and recognizable. A woodland will not remain a woodland without protection for trees. It is easy to understand this because we can see the difference cutting down trees makes. However, protecting trees does not produce the same forest as protecting trees and all the species that live in and around them (Brodie, 2016), although the assumption is often made that it does. Similarly, protecting a seafloor habitat without protecting the species that regulate it, such as parrotfish in coral reefs (Mumby, 2009) or spiny lobster in kelp forests (Halpern et al., 2006), will alter the functioning and resilience of that habitat if those species are depleted. In the open ocean it is much harder to understand that removing big, predatory or highly abundant fish or marine mammals transforms an open sea habitat, because it looks much the same as before. But the nature of open water habitats is also dictated by what occupies the space. Nature conservation often operates on two levels, habitat and species protection, with protection added in layers through different laws and in varying mixes. Such an approach can create perverse outcomes, however, when the inanimate is emphasized at the expense of the animate. For example, it is unclear to many, including nature conservation bodies, what protecting shallow, sub-tidal sandbanks under the EU Habitats Directive should entail. Does it mean ensuring the sand remains where it is, or is there some obligation to protect the animals and plants that live on or around sandbanks? Most people would assume the latter, yet in many cases, there is no protection given to Special Areas of Conservation from highly disturbing and destructive practices like bottom trawling and dredging (Plumeridge and Roberts, 2017). It is the sand, not wildlife, that prevails under this stewardship. The physical characteristics of an area act only as a placeholder for the life that could or does occupy it. Areas little affected by human activity will possess the most intact communities (D'agata et al., 2016); others will need to rebuild their wildlife under protection. The habitat that results from protection therefore depends on the level of protection given, and even the most diligent network design schemes will fail if the sites chosen get little protection. Strongly and fully protected MPAs (Lubchenco and Grorud-Colvert, 2015) therefore promote the highest levels of complexity and the most intact ecosystems (Edgar et al., 2014).

Conclusions and Suggestions

Given the horizontal and vertical spread of human activities through ABNJ (Merrie et al., 2014), the structure and function of open ocean habitats have certainly altered over time (Ortuño Crespo and Dunn, 2017). While some land-based habitats retain high conservation value as a function of human use, e.g., highly diverse flower meadows from seasonal cutting and grazing regimes, or understory flowers, insects and birds from coppiced woodlands, we are not aware of any comparable examples in the sea. Evidence that mobile species benefit from spatial protection in national waters is increasing (Jensen et al., 2010; Edgar et al., 2014; Dunphy-Daly, 2015).
Likewise, protection could offer benefits to such species in ABNJ, but the extensive movements of many of the animals inhabiting these regions reinforce the need for strong complementary protection measures to be applied outside MPA boundaries. Such measures could include dynamic management, effective fisheries regulation and, increasingly, precautionary regulation of emerging activities (Dunn et al., 2011, 2016; Maxwell et al., 2015; Jaeckel et al., 2017). On purely biological grounds, the case is clear for fish and other exploited species to be an integral part of any agreement to protect biodiversity in ABNJ. Current negotiations consider what marine life and activities should be covered by any new protective legislation, and whether MPAs should be established through a new overarching mechanism or through existing regional and sectoral frameworks. The argument frequently made is that fisheries management bodies have a legal remit and competence to manage fisheries and are therefore best placed to look after fish (Vincent et al., 2014). But these bodies have so far failed to safeguard fisheries or fish (Gilman et al., 2014), are often limited to certain species, do not comprehensively cover the oceans, and introduce measures only applicable to members (Vincent et al., 2014). Furthermore, other activities in ABNJ affect marine life (e.g., Ramirez-Llodra et al., 2011) over which fisheries bodies have no remit. Objectives of MPAs go beyond tackling fishery problems, addressing threats from other activities such as maritime traffic or oil, mineral and genetic resource exploration and exploitation, as well as protecting biodiversity and ecosystem structure and function, and supporting cultural values and ecosystem services. Given the indivisibility between species and habitats, and the potential for cumulative impacts from human activities currently managed separately, protection of biodiversity in ABNJ will require comprehensive and strategic management across sectors. Moving from a regional to global approach would also: promote universal participation; allow comprehensive environmental impact assessments to established standards that address cumulative impacts from different activities; provide a mandate to implement ecologically representative MPA networks; and help harmonize the implementation of UNCLOS with the CBD, Sustainable Development Goals, the Paris Agreement, and other instruments. Furthermore, the four issues framing the ABNJ ILBI negotiations are interrelated and cannot be addressed in isolation from each other. For example, environmental impact assessments will promote informed decisions regarding acceptable levels of harm from activities on marine life which represent genetic and provisioning resources, prior to activities being undertaken. While negotiations are constrained by the requirement that any new agreement "should not undermine existing relevant legal instruments and frameworks and relevant global, regional and sectoral bodies" (UNGA, 2015), the opportunity is there to unify existing regulatory and governance mechanisms and fill gaps where they exist. Protection of animal life is crucial in the open ocean, because animals structure the habitats there. Therefore, targets for habitat protection beyond national jurisdiction can only be fully met by protecting animal and plant communities in their entirety. Globally, efforts to align habitat and species conservation have increased in recent years.
For example, 66 Ecologically and Biologically Significant Areas covering places in ABNJ have been defined under the CBD (Bax et al., 2016), and the IUCN is developing a Red List for Ecosystems which includes marine habitats (Keith et al., 2015). Other efforts are pioneering approaches to identify important areas based on species distributions (e.g., Key Biodiversity Areas, Edgar et al., 2008; Important Marine Mammal Areas, Corrigan et al., 2014). These efforts are designed to inform global policy and future decisions regarding protection. Achieving habitat representation under global conservation targets will involve selecting sites for protection identified through these and similar efforts. Habitat conservation in whichever places are chosen for protection, however, will only be successful if MPAs and other measures safeguard more than just water, offering real refuges and protection for the creatures that define open sea habitats. The ongoing UN negotiations for the conservation and sustainable use of biodiversity beyond national jurisdiction present a unique opportunity to move from a sectoral and fragmented ABNJ management system to one that is holistic and based on the ecosystem approach. To be effective the ILBI should consider habitats a function of their inhabitants and represent all marine life within its scope. To do otherwise will fail to improve governance and management of ABNJ and undermine our ability to recover depleted species and repair degraded habitats.

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. BO' is supported by The Pew Charitable Trusts.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. We would like to thank our three reviewers for their constructive comments that greatly improved this manuscript.

References

Abascal, F. J., Mejuto, J., Quintans, M., Garcia-Cortes, B., and Ramos-Cartelle, A. (2015). Tracking of the broadbill swordfish, Xiphias gladius, in the central and eastern North Atlantic. Fish Res. 162, 20–28. doi: 10.1016/j.fishres.2014.09.011
Artelle, K. A., Anderson, S. C., Reynolds, J. D., Cooper, A. B., Paquet, P. C., and Darimont, C. T. (2016). Ecology of conflict: marine food supply affects human-wildlife interactions on land. Sci. Rep. 6:25936. doi: 10.1038/srep25936
Ban, N. C., Bax, N. J., Gjerde, K. M., Devillers, R., Dunn, D. C., Dunstan, P. K., et al. (2014). Systematic conservation planning: a better recipe for managing the high seas for biodiversity conservation and sustainable use. Conserv. Lett. 7, 41–54. doi: 10.1111/conl.12010
Bax, N. J., Cleary, J., Donnelly, B., Dunn, D. C., Dunstan, P. K., Fuller, M., et al. (2016). Results of efforts by the Convention on Biological Diversity to describe ecologically or biologically significant marine areas. Conserv. Biol. 30, 571–581. doi: 10.1111/cobi.12649
Bianchi, D., Galbraith, E. D., Carozza, D. A., Mislan, K. A. S., and Stock, C. A. (2013a). Intensification of open-ocean oxygen depletion by vertically migrating animals. Nat. Geosci. 6, 545–548. doi: 10.1038/ngeo1837
Bianchi, D., Stock, C., Galbraith, E. D., and Sarmiento, J. L. (2013b). Diel vertical migration: ecological controls and impacts on the biological pump in a one-dimensional ocean model. Global Biogeochem. Cy. 27, 478–491. doi: 10.1002/gbc.20031
Breitburg, D. L., Crump, B. C., Dabiri, J. O., and Gallegos, C. L. (2010). Ecosystem engineers in the pelagic realm: alteration of habitat by species ranging from microbes to jellyfish. Integr. Comp. Biol. 50, 188–200. doi: 10.1093/icb/icq051
Brotz, L. (2016). "Jellyfish fisheries: a global assessment," in Global Atlas of Marine Fisheries: A Critical Appraisal of Catches and Ecosystem Impacts, eds D. Pauly and D. Zeller (Washington, DC: Island Press), 110–124.
Burd, A. B., Hansell, D. A., Steinberg, D. K., Anderson, T. R., Aristegui, J., Baltar, F., et al. (2010). Assessing the apparent imbalance between geochemical and biochemical indicators of meso- and bathypelagic biological activity: what the @#! is wrong with present calculations of carbon budgets? Deep Sea Res. II 57, 1557–1571. doi: 10.1016/j.dsr2.2010.02.022
Chassot, E., Bonhommeau, S., Dulvy, N. K., Mélin, F., Watson, R., Gascuel, D., et al. (2010). Global marine primary production constrains fisheries catches. Ecol. Lett. 13, 495–505. doi: 10.1111/j.1461-0248.2010.01443.x
Clark, M. R., Rowden, A. A., Schlacher, T., Williams, A., Consalvey, M., Stocks, K. I., et al. (2010). The ecology of seamounts: structure, function, and human impacts. Ann. Rev. Mar. Sci. 2, 253–278. doi: 10.1146/annurev-marine-120308-081109
Constable, A. J., de la Mare, W. K., Agnew, D. J., Everson, I., and Miller, D. (2000). Managing fisheries to conserve the Antarctic marine ecosystem: practical implementation of the Convention on the Conservation of Antarctic Marine Living Resources (CCAMLR). ICES J. Mar. Sci. 57, 778–791. doi: 10.1006/jmsc.2000.0725
Convention on Biological Diversity (2010). TARGET 11 - Technical Rationale Extended (Provided in Document COP/10/INF/12/Rev.1). Available online at: https://www.cbd.int/sp/targets/rationale/target-11/ (Accessed Jan 12, 2017)
Corrigan, C. M., Ardron, J. A., Comeros-Raynal, M. T., Hoyt, E., Notarbartolo di Sciara, G., and Carpenter, K. E. (2014). Developing important marine mammal area criteria: learning from ecologically or biologically significant areas and key biodiversity areas. Aquat. Conserv. 24, 166–183. doi: 10.1002/aqc.2513
D'agata, S., Mouillot, D., Wantiez, L., Friedlander, A. M., Kulbicki, M., and Vigliola, L. (2016). Marine reserves lag behind wilderness in the conservation of key functional roles. Nat. Commun. 7:12000. doi: 10.1038/ncomms12000
Darimont, C. T., Bryan, H. M., Carlson, S. M., Hocking, M. D., MacDuffee, M., Paquet, P. C., et al. (2010). Salmon for terrestrial protected areas. Conserv. Lett. 3, 379–389. doi: 10.1111/j.1755-263X.2010.00145.x
Duffy, J. E., and Stachowicz, J. J. (2006). Why biodiversity is important to oceanography: potential roles of genetic, species, and trophic diversity in pelagic ecosystem processes. Mar. Ecol. Prog. Ser. 311, 179–189. doi: 10.3354/meps311179
Duffy, L. M., Kuhnert, P. M., Pethybridge, H. R., Young, J. W., Olson, R. J., Logan, J. M., et al. (2017). Global trophic ecology of yellowfin, bigeye, and albacore tunas: understanding predation on micronekton communities at ocean-basin scales. Deep Sea Res. II 140, 55–73. doi: 10.1016/j.dsr2.2017.03.003
Dunn, D. C., Boustany, A. M., and Halpin, P. N. (2011). Spatio-temporal management of fisheries to reduce by-catch and increase fishing selectivity. Fish Fish. 12, 110–119. doi: 10.1111/j.1467-2979.2010.00388.x
Dunn, D. C., Maxwell, S. M., Boustany, A. M., and Halpin, P. N. (2016). Dynamic ocean management increases the efficiency and efficacy of fisheries management. Proc. Natl. Acad. Sci. U.S.A. 113, 668–673. doi: 10.1073/pnas.1513626113
F., Allen, A., Brooks, T. M., Brodie, J., Crosse, W., et al. (2008). Key biodiversity areas as globally significant target sites for the conservation of marine biological diversity. Aquat. Conserv. 18, 969–983. doi: 10.1002/aqc.902 Edgar, G. J., Stuart-Smith, R. D., Willis, T. J., Kininmonth, S., Baker, S. C., Banks, S., et al. (2014). Global conservation outcomes depend on marine protected areas with five key features. Nature 506, 216–220. doi: 10.1038/nature13022 Ferretti, F., Worm, B., Britten, G. L., Heithaus, M. R., and Lotze, H. K. (2010). Patterns and ecosystem consequences of shark declines in the ocean. Ecol. Lett. 13, 1055–1071. doi: 10.1111/j.1461-0248.2010.01489.x Fey, S. B., Siepielski, A. M., Nusslé, S., Cervantes-Yoshida, K., Hwan, J. L., Huber, E. R., et al. (2015). Recent shifts in the occurrence, cause, and magnitude of animal mass mortality events. Proc. Natl. Acad. Sci. U.S.A. 112, 1083–1088. doi: 10.1073/pnas.1414894112 Fuller, D. W., Schaefer, K. M., Hampton, J., Caillot, S., Leroy, B. M., and Itano, D. G. (2015). Vertical movements, behavior, and habitat of bigeye tuna (Thunnus obesus) in the equatorial central Pacific Ocean. Fish Res. 172, 57–70. doi: 10.1016/j.fishres.2015.06.024 Giering, S. L. C., Sanders, R., Lampitt, R. S., Anderson, T. R., Tamburini, C., Boutrif, M., et al. (2014). Reconciliation of the carbon budget in the ocean's twilight zone. Nature 507, 480–483. doi: 10.1038/nature13123 Gilman, E., Passfield, K., and Nakamura, K. (2014). Performance of regional fisheries management organisations: ecosystem-based governance of bycatch and discards. Fish Fish. 15, 327–351. doi: 10.1111/faf.12021 Gjerde, K., Nordtvedt Reeve, L. L., Harden-Davies, H., Ardron, J., Dolan, R., Durussel, C., et al. (2016). Protecting Earth's last conservation frontier: scientific, management and legal priorities for MPAs beyond national boundaries. Aquat. Conserv. 26, 45–60. doi: 10.1002/aqc.2646 Grupe, B. M., Krach, M. L., Pasulka, A. L., Maloney, J. M., Levin, L. A., and Frieder, C. A. (2015). Methane seep ecosystem functions and services from a recently discovered southern California seep. Mar. Ecol. 36, 91–108. doi: 10.1111/maec.12243 Haury, L. R., McGowan, J. A., and Wiebe, P. H. (1978). “Patterns and processes in the time-space scales of plankton distributions,” in Spatial Pattern in Plankton Communities, ed J. H. Steele (New York, NY: Plenum Press & NATO Scientific Affairs Division), 277–327. Heck, K. L. Jr., Carruthers, T. J. B., Duarte, C. M., Hughes, A. R., Kendrick, G., Orth, R. J., et al. (2008). Trophic transfers from seagrass meadows subsidize diverse marine and terrestrial consumers. Ecosystems 11, 1198–1210. doi: 10.1007/s10021-008-9155-y Heisler, J., Glibert, P. M., Burkholder, J. M., Anderson, D. M., Cochlan, W., Dennison, W. C., et al. (2008). Eutrophication and harmful algal blooms: a scientific consensus. Harmful Algae 8, 3–13. doi: 10.1016/j.hal.2008.08.006 Hobday, A. J., Young, J. W., Moeseneder, C., and Dambacher, J. M. (2011). Defining dynamic pelagic habitats in oceanic waters off eastern Australia. Deep Sea Res. II 58, 734–745. doi: 10.1016/j.dsr2.2010.10.006 Howey, L. A., Tolentino, E. R., Papastamatiou, Y. P., Brooks, E. J., Abercrombie, D. L., Watanabe, Y. Y., et al. (2016). Into the deep: the functionality of mesopelagic excursions by an oceanic apex predator. Ecol. Evol. 6, 5290–5304. doi: 10.1002/ece3.2260 Irigoien, X., Klevjer, T. A., Røstad, A., Martinez, U., Boyra, G., Acuña, J. L., et al. (2014).
Large mesopelagic fishes biomass and trophic efficiency in the open ocean. Nat. Commun. 5:3271. doi: 10.1038/ncomms4271 IUCN (2017). Habitats Classification Scheme (Version 3.1). Available Online at: http://www.iucnredlist.org/technical-documents/classification-schemes/habitats-classification-scheme-ver3 (Accessed 02/07/2017) Jensen, O. P., Ortega-Garcia, S., Martell, S. J. D., Ahrens, R. N. M., Domeier, M. L., Walters, C. J., et al. (2010). Local management of a “highly migratory species”: the effects of long-line closures and recreational catch-and-release for Baja California striped marlin fisheries. Prog. Oceanogr. 86, 176–186. doi: 10.1016/j.pocean.2010.04.020 Keith, D. A., Rodriguez, J. P., Brooks, T. M., Burgman, M. A., Barrow, E. G., Bland, L., et al. (2015). The IUCN red list of ecosystems: motivations, challenges, and applications. Conserv. Lett. 8, 214–226. doi: 10.1111/conl.12167 Lavery, T. J., Roudnew, B., Gill, P., Seymour, J., Seuront, L., Johnson, G. C., et al. (2010). Iron defecation by sperm whales stimulates carbon export in the Southern Ocean. Proc. R. Soc. B 277, 3527–3531. doi: 10.1098/rspb.2010.0863 Letessier, T. B., Cox, M. J., Meeuwig, J. J., Boersch-Supan, P. H., and Brierley, A. S. (2016). Enhanced pelagic biomass around coral atolls. Mar. Ecol. Prog. Ser. 546, 271–276. doi: 10.3354/meps11675 Levin, L. A., Baco, A. R., Bowden, D. A., Colaco, A., Cordes, E. E., Cunha, M. R., et al. (2016). Hydrothermal vents and methane seeps: rethinking the sphere of influence. Front. Mar. Sci. 3:72. doi: 10.3389/fmars.2016.00072 Maxwell, S. M., and Morgan, L. E. (2013). Foraging of seabirds on pelagic fishes: implications for management of pelagic marine protected areas. Mar. Ecol. Prog. Ser. 481, 289–303. doi: 10.3354/meps10255 Maxwell, S. M., Hazen, E. L., Lewison, R. L., Dunn, D. C., Bailey, H., Bograd, S. J., et al. (2015). Dynamic ocean management: defining and conceptualizing real-time management of the ocean. Mar. Pol. 58, 42–50. doi: 10.1016/j.marpol.2015.03.014 Merrie, A., Dunn, D. C., Metian, M., Boustany, A. M., Takei, Y., Elferink, A. O., et al. (2014). An ocean of surprises – Trends in human use, unexpected dynamics and governance challenges in areas beyond national jurisdiction. Glob. Environ. Change 27, 19–31. doi: 10.1016/j.gloenvcha.2014.04.012 Morato, T., Hoyle, S. D., Allain, V., and Nicol, S. J. (2010). Seamounts are hotspots of pelagic biodiversity in the open ocean. Proc. Natl. Acad. Sci. U.S.A. 107, 9707–9711. doi: 10.1073/pnas.0910290107 Nicol, S., and Stolp, M. (1989). Sinking rates of cast exoskeletons of Antarctic krill (Euphausia superba Dana) and their role in the vertical flux of particulate matter and fluoride in the Southern Ocean. Deep Sea Res. I 36, 1753–1762. doi: 10.1016/0198-0149(89)90070-8 Nicol, S., Bowie, A., Jarman, S., Lannuzel, D., Meiners, K. M., and Van Der Merwe, P. (2010). Southern Ocean iron fertilization by baleen whales and Antarctic krill. Fish Fish. 11, 203–209. doi: 10.1111/j.1467-2979.2010.00356.x NOAA (2016). Fisheries Off West Coast States; Comprehensive Ecosystem-Based Amendment 1; Amendments to the Fishery Management Plans for Coastal Pelagic Species, Pacific Coast Groundfish, U.S. West Coast Highly Migratory Species, and Pacific Coast Salmon [Online]. Available Online at: https://www.gpo.gov/fdsys/pkg/FR-2016-04-04/pdf/2016-07516.pdf (Accessed April 28, 2017). Pershing, A. J., Christensen, L. B., Record, N. R., Sherwood, G. D., and Stetson, P. B. (2010). 
The impact of whaling on the ocean carbon cycle: why bigger was better. PLoS ONE 5:e12444. doi: 10.1371/journal.pone.0012444 Plumeridge, A. A., and Roberts, C. M. (2017). Conservation targets in marine protected area management suffer from shifting baseline syndrome: a case study on the Dogger Bank. Mar. Pollut. Bull. 116, 395–404. doi: 10.1016/j.marpolbul.2017.01.012 Potier, M., Marsac, F., Cherel, Y., Lucas, V., Sabatié, R., Maury, O., et al. (2007). Forage fauna in the diet of three large pelagic fishes (lancetfish, swordfish and yellowfin tuna) in the western equatorial Indian Ocean. Fish Res. 83, 60–72. doi: 10.1016/j.fishres.2006.08.020 Ramirez-Llodra, E., Tyler, P. A., Baker, M. C., Bergstad, O. A., Clark, M. R., Escobar, E., et al. (2011). Man and the last great wilderness: human impact on the deep sea. PLoS ONE 6:e22588. doi: 10.1371/journal.pone.0022588 Robinson, C., Steinberg, D. K., Anderson, T. R., Arístegui, J., Carlson, C. A., Frost, J. R., et al. (2010). Mesopelagic zone ecology and biogeochemistry – a synthesis. Deep Sea Res. II 57, 1504–1518. doi: 10.1016/j.dsr2.2010.02.018 Scales, K. L., Miller, P. I., Hawkes, L. A., Ingram, S. N., Sims, D. W., and Votier, S. C. (2014). On the Front Line: frontal zones as priority at-sea conservation areas for mobile marine vertebrates. J. Appl. Ecol. 51, 1575–1583. doi: 10.1111/1365-2664.12330 Smith, C. R. (2007). “Bigger is better: the role of whales as detritus in marine ecosystems,” in Whales, Whaling, and Ocean Ecosystems, eds J. A. Estes, D. P. DeMaster, D. F. Doak, T. M. Williams, and R. L. Brownell (Berkeley, CA: University of California Press), 286–300. Soetaert, K., Mohn, C., Rengstorf, A., Grehan, A., and van Oevelen, D. (2016). Ecosystem engineering creates a direct nutritional link between 600-m deep cold-water coral mounds and surface productivity. Sci. Rep. 6:35057. doi: 10.1038/srep35057 St John, M. A., Borja, A., Chust, G., Heath, M., Grigorov, I., Mariani, P., et al. (2016). A dark hole in our understanding of marine ecosystems and their services: perspectives from the mesopelagic community. Front. Mar. Sci. 3:31. doi: 10.3389/fmars.2016.00031 The Economist (2017). The Mesopelagic: Cinderella of the Oceans [Online]. Available online at: https://www.economist.com/news/science-and-technology/21720618-one-least-understood-parts-sea-also-one-most-important Thorrold, S. R., Afonso, P., Fontes, J., Braun, C. D., Santos, R. S., Skomal, G. B., et al. (2014). Extreme diving behaviour in devil rays links surface waters and the deep ocean. Nat. Commun. 5:4274. doi: 10.1038/ncomms5274 UNGA (2015). Resolution 69/292. Development of an International Legally Binding Instrument Under the United Nations Convention on the Law of the Sea on the Conservation and Sustainable Use of Marine Biological Diversity of Areas Beyond National Jurisdiction. UNGA. Available online at: https://documents-dds-ny.un.org/doc/UNDOC/GEN/N15/187/55/PDF/N1518755.pdf?OpenElement United Nations (2015). Sustainable Development Goal 14: Conserve and Sustainably Use the Oceans, Seas, and Marine Resources for Sustainable Development [Online]. Available Online at: https://sustainabledevelopment.un.org/sdg14 (Accessed March 02, 2017) Vincent, A. C. J., Sadovy de Mitcheson, Y. J., Fowler, S. L., and Lieberman, S. (2014). The role of CITES in the conservation of marine fishes subject to international trade. Fish Fish. 15, 563–592. doi: 10.1111/faf.12035 Watson, R. A., Nowara, G. B., Hartmann, K., Green, B. S., Tracey, S. R., and Carter, C. G. (2015).
Marine foods sourced from farther as their use of global ocean primary production increases. Nat. Commun. 6:7365. doi: 10.1038/ncomms8365 Wilson, R. W., Millero, F. J., Taylor, J. R., Walsh, P. J., Christensen, V., Jennings, S., et al. (2009). Contribution of fish to the marine inorganic carbon cycle. Science 323, 359–362. doi: 10.1126/science.1157972 Woolley, S. N. C., Tittensor, D. P., Dunstan, P. K., Guillera-Arroita, G., Lahoz-Monfort, J. J., Wintle, B. A., et al. (2016). Deep-sea diversity patterns are shaped by energy availability. Nature 533, 393–396. doi: 10.1038/nature17937 Young, J. W., Hunt, B. P. V., Cook, T. R., Llopiz, J. K., Hazen, E. L., Pethybridge, H. R., et al. (2015). The trophodynamics of marine top predators: current knowledge, recent advances and challenges. Deep Sea Res. II 113, 170–187. doi: 10.1016/j.dsr2.2014.05.015 Keywords: areas beyond national jurisdiction, ABNJ, area-based management, biodiversity beyond national jurisdiction, BBNJ, high seas, marine protected areas, UNCLOS Citation: O'Leary BC and Roberts CM (2017) The Structuring Role of Marine Life in Open Ocean Habitat: Importance to International Policy. Front. Mar. Sci. 4:268. doi: 10.3389/fmars.2017.00268 Received: 26 May 2017; Accepted: 03 August 2017; Published: 05 September 2017. Edited by: Sara M. Maxwell, Old Dominion University, United States Reviewed by: Daniel Carl Dunn, Duke University, United States; Tammy Davies, BirdLife International, United Kingdom; Colleen Corrigan, The University of Queensland, Australia Copyright © 2017 O'Leary and Roberts. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. *Correspondence: Bethan C. O'Leary, firstname.lastname@example.org
<urn:uuid:bf0e1984-0df3-4f65-bb33-b5805c2c1e1b>
3.25
10,973
Academic Writing
Science & Tech.
61.364094
95,551,487
The initial image, showing a dimly illuminated cloud-covered region, was successfully downloaded on 6 March. A second picture – the first to be produced on command from the ground – was taken soon after dawn on 7 March and shows a scattering of white and pink clouds close to the Aleutian Islands in the north Pacific. “It was really exciting to see the first image arriving from space after the long period of developing the camera and testing it in orbit,” said Massimo Sabbatini, ESA Principal Investigator for the EVC. “This success would not have been possible without the major contribution of Carlo Gavazzi Space and the hard work of the integration and operations teams at the European Space Research and Technology Centre (ESTEC) in the Netherlands.” “We are just starting to experiment with the various camera parameters to adjust for the vast range of lighting conditions we encounter. That’s why the second picture is slightly blurred,” explains Sabbatini. “The ISS is travelling at about 7 km per second, so we have to adjust the exposure time to compensate for this rapid motion. At that speed the camera moves over hundreds of metres on the ground in a matter of milliseconds.” The camera is intended to be a valuable resource for public outreach and education. Sabbatini says, "We hope to encourage teachers and students to use the EVC as a tool for studying all aspects of Earth observation from space – imaging, telemetry, telecommunications links and orbit predictions. We are also hoping to receive requests for images of particular regions over which the ISS is passing.” The story of the EVC began in 2003, when the company Carlo Gavazzi Space of Milan, Italy, approached ESA with a proposal to fly a digital camera as a low-cost payload on one of the external platforms on Columbus. ESA and Carlo Gavazzi Space signed an agreement in March 2004 whereby each partner would provide half of the required funding for the development of the camera. The EVC points continuously at a fixed angle toward the Earth. The camera weighs 7.8 kg and measures 0.4 x 0.28 x 0.16 m. It uses a commercial off-the-shelf sensor provided by Kodak, with a 2k x 2k detector. It is able to capture colour images of the Earth’s surface that cover an area of 200 x 200 km. The images are received in Europe by the Columbus Control Centre at Oberpfaffenhofen in Germany and then forwarded to the ESA User Support Operation Centre in the Erasmus Centre at ESTEC. In the future, the EVC image acquisition process and exploitation will be coordinated from the EVC User Home Base, also located at the Erasmus Centre. EVC is part of the European Technology Exposure Facility (EuTEF) installed on the European Columbus laboratory’s external platform during a spacewalk on 15 February 2008 by NASA astronauts Rex Walheim and Stanley Love. Located on the starboard side of the International Space Station, Columbus sweeps around the Earth once every 90 minutes. Since the Station’s orbital path is inclined at about 52 degrees to the equator, the Earth Viewing Camera has the potential to take pictures of anywhere on the Earth’s surface from England to the southern tip of South America. This includes almost all of the densely populated parts of the world.
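To make the motion-blur constraint concrete, here is an illustrative calculation (the numbers are not from the article beyond the quoted ~7 km/s orbital speed):

    d = v \cdot t_{exp} \approx 7\ \mathrm{km\,s^{-1}} \times 0.03\ \mathrm{s} \approx 210\ \mathrm{m}

So an exposure of just a few tens of milliseconds already smears the scene by a couple of hundred metres on the ground, which is why the team has to tune the exposure time so carefully against the available light.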
Markus Bauer | alfa
<urn:uuid:3408fe0e-2994-4f77-83ae-2af379163883>
2.9375
1,335
Content Listing
Science & Tech.
42.965354
95,551,522
A new Swansea University study has suggested that warming temperatures could drive sea turtles to extinction. The study, by Dr Jacques-Olivier Laloë of the University's College of Science and published in the journal Global Change Biology, argues that warmer temperatures associated with climate change could lead to higher numbers of female sea turtles and increased nest failure, and could impact negatively on the turtle population in some areas of the world. The effects of rising temperatures Rising temperatures were first identified as a concern for sea turtle populations in the early 1980s because the temperature at which sea turtle embryos incubate determines the sex of an individual, a mechanism known as temperature-dependent sex determination (TSD). The pivotal temperature for TSD is 29°C, at which both males and females are produced in equal proportions; above 29°C mainly females are produced, while below 29°C more males are born. Within the context of climate change and warming temperatures, this means that, all else being equal, sea turtle populations are expected to be more female-biased in the future. While it is known that males can mate with more than one female during the breeding season, if there are too few males in the population this could threaten population viability. The new study also explored another important effect of rising temperatures: in-nest survival rates. Sea turtle eggs only develop successfully in a relatively narrow thermal range of approximately 25-35°C: if incubation temperatures are too low the embryo does not develop, but if they are too high then development fails. The researchers recorded sand temperatures at a globally important loggerhead sea turtle nesting site in Cape Verde over 6 years. They also recorded the survival rates of over 3,000 nests to study the relationship between incubation temperature and hatchling survival. Using local climate projections, the research team then modelled how turtle numbers are likely to change throughout the century at this nesting site. Dr Laloë said: "Our results show something very interesting. Up to a certain point, warmer incubation temperatures benefit sea turtles because they increase the natural growth rate of the population: more females are produced because of TSD, which leads to more eggs being laid on the beaches. "However, beyond a critical temperature, the natural growth rate of the population decreases because of an increase of temperature-linked in-nest mortality. Temperatures are too high and the developing embryos do not survive. This threatens the long-term survival of this sea turtle population." The researchers expect that the numbers of nests in Cape Verde will increase by approximately 30% by the year 2100 but, if temperatures keep rising, could start decreasing afterwards. The new study identifies temperature-linked hatchling mortality as an important threat to sea turtles and highlights concerns for species with TSD in a warming world. It suggests that, in order to safeguard sea turtle populations around the world, it is critical to monitor how hatchling survival changes over the next decades. Dr Laloë said: "In recent years, in places like Florida--another important sea turtle nesting site--more and more turtle nests are reported to have lower survival rates than in the past.
This shows that we should really keep a close eye on incubation temperatures and the in-nest survival rates of sea turtles if we want to successfully protect them. "If need be, conservation measures could be put in place around the world to protect the incubating turtle eggs. Such measures could involve artificially shading turtle nests or moving eggs to a protected and temperature-controlled hatchery." Climate change and temperature-linked hatchling mortality at a globally important sea turtle nesting site was published this week by Global Change Biology. Authors: Jacques-Olivier Laloë, Jacquie Cozens, Berta Renom, Albert Taxonera and Graeme C. Hays Delyth Purchase | EurekAlert!
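The rise-then-crash dynamic the study describes can be sketched in a few lines of code. This is an illustrative toy model only, not the authors' model: the logistic sex-ratio curve and the triangular survival window are my assumptions, loosely matching the 29°C pivot and the 25-35°C viability range quoted above.

    using System;

    class TsdToyModel
    {
        // Fraction of hatchlings developing as female at a given incubation
        // temperature: a logistic curve centred on the 29 C pivot.
        static double FemaleFraction(double tempC)
        {
            return 1.0 / (1.0 + Math.Exp(-(tempC - 29.0)));
        }

        // In-nest survival: a crude triangular window over the quoted
        // 25-35 C viability range, peaking near 30 C. Illustrative shape only.
        static double Survival(double tempC)
        {
            if (tempC <= 25.0 || tempC >= 35.0) return 0.0;
            return 1.0 - Math.Abs(tempC - 30.0) / 5.0;
        }

        static void Main()
        {
            Console.WriteLine("tempC  femaleFrac  survival  femaleOutput");
            for (double t = 27.0; t <= 36.0; t += 1.0)
            {
                // Relative output of surviving female hatchlings per nest:
                // it rises with moderate warming (more females via TSD), then
                // collapses once temperature-linked nest mortality dominates.
                double output = FemaleFraction(t) * Survival(t);
                Console.WriteLine("{0,5:F1} {1,11:F2} {2,9:F2} {3,13:F2}",
                    t, FemaleFraction(t), Survival(t), output);
            }
        }
    }

Running this shows effective female output peaking a degree or so above the pivot and then crashing toward zero as the viability ceiling is approached, the same qualitative rise-then-fall the study projects for Cape Verde nest numbers.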
<urn:uuid:c436ffb5-037d-4250-8dbd-e091ccc51f8d>
3.484375
1,472
Content Listing
Science & Tech.
34.921345
95,551,535
ASP.Net provides a technique for developing multilingual web applications. Such applications share the same code base but serve different language cultures. To accomplish this, ASP.Net provides resource files. These files, which have the .resx extension, reside on the server's local machine. Depending on the browser culture, the web application picks up the appropriate resource file and displays the text accordingly. Internally, resources are stored as key-value pairs. Custom resource stores, such as a database or XML files, can also be used. Steps to demonstrate the localization concept: - Create a web application using Visual Studio (I am using VS2005). - Add a new folder named App_LocalResources to the project solution. - Add a resource file to this folder, say Default.aspx.resx (a file with the .resx extension). - Add a label to the aspx page in design view. - Add a resource entry with the key label1.Text and the value Welcome. - Add one more resource file, say Default.aspx.es.resx, and add its corresponding value. es is the standard culture code for Spanish. Go to the source view of the aspx page and add meta:resourcekey="label1" to the label. The code will look something like this: <asp:Label ID="label1" runat="server" Text="Welcome" meta:resourcekey="label1" /> - Go to the page directive of the page and add the culture details Culture="auto:en-US" UICulture="auto". - Here the auto keyword detects the browser culture and applies the appropriate resource file to the page, with en-US as the default setting. - Now run the page in any browser; it will show the text in whichever language the browser is configured to use. - Try running this application after adding Spanish to the browser's language list. - Here are the steps to change/add a language in Internet Explorer.
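For illustration, here is a minimal sketch of how the two resource files and an explicit, programmatic lookup could fit together. The Spanish value, the page class name, and the use of Page_Load are assumptions for this sketch, not part of the original steps; GetLocalResourceObject is the standard ASP.Net API for explicit local-resource lookup.

    Default.aspx.resx     label1.Text = Welcome
    Default.aspx.es.resx  label1.Text = Bienvenido   (assumed translation)

    using System;
    using System.Web.UI;

    // Hypothetical code-behind for Default.aspx. With meta:resourcekey the
    // lookup below happens implicitly; this shows the explicit equivalent.
    public partial class _Default : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Resolves the key against the .resx file in App_LocalResources
            // that matches the current UI culture (for example
            // Default.aspx.es.resx when the browser requests Spanish).
            label1.Text = (string)GetLocalResourceObject("label1.Text");
        }
    }

Because the page directive sets UICulture="auto", ASP.Net maps the browser's Accept-Language header to a culture before this lookup runs, so no extra code is needed to select the Spanish file. Implicit localization via meta:resourcekey is usually preferable for static controls; the explicit call is useful when keys are computed at runtime.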
<urn:uuid:082056f2-b37b-41c0-9501-f1a6ba85f085>
2.765625
390
Tutorial
Software Dev.
48.3858
95,551,556
The findings offer a new window on the inner life of the honey bee hive, which once was viewed as a highly regimented colony of seemingly interchangeable workers taking on a few specific roles (nurse or forager, for example) to serve their queen. Now it appears that individual honey bees actually differ in their desire or willingness to perform particular tasks, said University of Illinois entomology professor and Institute for Genomic Biology director Gene Robinson, who led the study. These differences may be due, in part, to variability in the bees’ personalities, he said. “In humans, differences in novelty-seeking are a component of personality,” he said. “Could insects also have personalities?” The study focused on bees called nest scouts, which seek out new nest sites; these bees are on average 3.4 times more likely than their peers to also become food scouts, the researchers found. “There is a gold standard for personality research and that is if you show the same tendency in different contexts, then that can be called a personality trait,” said Robinson, who is also affiliated with the Neuroscience Program at Illinois. Not only do certain bees exhibit signs of novelty-seeking, he said, but their willingness or eagerness to “go the extra mile” can be vital to the life of the hive. The researchers wanted to determine the molecular basis for these differences in honey bee behavior. They used whole-genome microarray analysis to look for differences in the activity of thousands of genes in the brains of scouts and non-scouts. “People are trying to understand what is the basis of novelty-seeking behavior in humans and in animals,” Robinson said. “And a lot of the thinking has to do with the relationship between how the (brain’s) reward system is engaged in response to some experience.” The researchers found thousands of distinct differences in gene activity in the brains of scouting and non-scouting bees. “We expected to find some, but the magnitude of the differences was surprising given that both scouts and non-scouts are foragers,” Robinson said. Among the many differentially expressed genes were several related to catecholamine, glutamate and gamma-aminobutyric acid (GABA) signaling, and the researchers zeroed in on these because they are involved in regulating novelty-seeking and responding to reward in vertebrates. To test whether the changes in brain signaling caused the novelty-seeking, the researchers subjected groups of bees to treatments that would increase or inhibit these chemicals in the brain. Two treatments (with glutamate and octopamine) increased scouting in bees that had not scouted before. Blocking dopamine signaling decreased scouting behavior, the researchers found. “Our results say that novelty-seeking in humans and other vertebrates has parallels in an insect,” Robinson said. “One can see the same sort of consistent behavioral differences and molecular underpinnings.” The findings also suggest that insects, humans and other animals made use of the same genetic “toolkit” in the evolution of behavior, Robinson said. The tools in the toolkit – genes encoding certain molecular pathways – may play a role in the same types of behaviors, but each species has adapted them in its own, distinctive way. “It looks like the same molecular pathways have been engaged repeatedly in evolution to give rise to individual differences in novelty-seeking,” he said.
Collaborators on this study included researchers from Wellesley College and Cornell University. Editor’s notes: To reach Gene Robinson, call 217-202-9130; Diana Yates | University of Illinois
<urn:uuid:3755198f-6286-421e-b270-e6731342fbd6>
3.59375
1,330
Content Listing
Science & Tech.
34.158509
95,551,574
From Event: SPIE Optical Engineering + Applications, 2017 The Strontium Iodide Radiation Instrumentation (SIRI) is designed to space-qualify new gamma-ray detector technology for space-based astrophysical and defense applications. This new technology offers improved energy resolution, lower power consumption and reduced size compared to similar systems. The SIRI instrument consists of a single europium-doped strontium iodide (SrI2:Eu) scintillation detector. The crystal has an energy resolution of 3% at 662 keV compared to the 6.5% of traditional sodium iodide and was developed for terrestrial-based weapons of mass destruction (WMD) detection. SIRI’s objective is to study the internal activation of the SrI2:Eu material and measure the performance of the silicon photomultiplier (SiPM) readouts over a 1-year mission. The combined detector and readout measure the gamma-ray spectrum over the energy range of 0.04 - 4 MeV. The SIRI mission payoff is a space-qualified, compact, high-sensitivity gamma-ray spectrometer with improved energy resolution relative to previous sensors. Scientific applications in solar physics and astrophysics include solar flares, gamma-ray bursts, novae, supernovae, and the synthesis of the elements. Department of Defense (DoD) and security applications are also possible. Construction of the SIRI instrument has been completed, and it is currently awaiting integration onto the spacecraft. The expected launch date is May 2018 onboard STPSat-5. This work discusses the objectives, design details and the STPSat-5 mission concept of operations of the SIRI spectrometer. Lee J. Mitchell, Bernard F. Phlips, Richard S. Woolf, Theodore T. Finne, W. Neil Johnson, and Emily G. Jackson, "Strontium Iodide Radiation Instrumentation (SIRI)," Proc. SPIE 10397, UV, X-Ray, and Gamma-Ray Space Instrumentation for Astronomy XX, 103970B (Presented at SPIE Optical Engineering + Applications: August 06, 2017; Published: 29 August 2017); https://doi.org/10.1117/12.2272606.
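To put the quoted resolutions in absolute terms (assuming, as is conventional for scintillators, that the percentages are FWHM values at the 662 keV Cs-137 line, which the abstract does not state explicitly):

    \Delta E_{\mathrm{SrI_2:Eu}} \approx 0.03 \times 662\ \mathrm{keV} \approx 20\ \mathrm{keV}, \qquad
    \Delta E_{\mathrm{NaI}} \approx 0.065 \times 662\ \mathrm{keV} \approx 43\ \mathrm{keV}

That is roughly a factor-of-two narrower photopeak, which directly improves line identification across the instrument's 0.04-4 MeV band.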
<urn:uuid:410aef9a-ed74-4f8d-a241-d2aa0e4ba998>
2.625
549
Academic Writing
Science & Tech.
33.025
95,551,578
- Questions about carbon dating: …could have dramatically changed over the past few tens of thousands of years… runs out of steam at about 50,000 years.
- How it works; its limitations: …the C-14/C-12 ratio has been constant through the past 50,000 years… the C-14 dating technique on a number of carbon-bearing samples…
- Losing Carbon-14: stable carbon does not decay… Not only is carbon-14 dating limited in its theoretical usefulness any farther back in time than 50,000 years, but its…
- The Carbon Dating Game | theTrumpet.com
- 0–50,000 years cal BP: …Radiocarbon Dating Laboratory… assumes the 14C content of the atmosphere has been constant (Stuiver and Polach 1977). However, past…
- Carbon dating is based on the atmospheric C-14/C-12 ratio: "14C activity and global carbon cycle changes over the past 50,000 years." Science 303: 202–207.
- Los Angeles has enjoyed the same amazing climate for 50,000 years… over the past 50,000 years… them to the carbon-dated fossils. The carbon-dating is…
- Find out how carbon-14 dating works and why: …of a biological origin up to about 50,000 years old. It is used in dating things… relatively recent past by…
- How far back can radiocarbon dating go? …you date with carbon dating… are providing a more accurate timeline for dating objects as far back as 50,000 years.
- Radiocarbon dating of… and the <50,000-year ages reported from carbon-14 dating are less… back in the past than dinosaurs, over a billion years…
- Scientific Methods - Mungo Man: 92 protons… How many neutrons does U-238 have? Why can't we use carbon-14 to date the rocks? Carbon dating cannot be used to date things past 50,000 years.
- William J. Jenkins : NOSAMS - Woods Hole Oceanographic Institution: Carbon-14 dating isn't effective on material past 50,000 years of age because the material lacks the necessary "carbon half-life" to perform carbon-14 dating.
- The Carbon 14 Myth: we know that… fluctuations in atmospheric carbon levels over the past 5,000 years have… if the Earth is older than 50,000 years, then the…
- Four types of radiometric dating? …5,730 years, so this method is mainly used for dating things from the last 50,000 years… carbon dating is an absolute dating…
- MYTHS REGARDING RADIOCARBON DATING - The Institute for Creation Research
- Radiocarbon-14 dating in… It has a greater impact on our understanding of the human past than in any other… http://www.radiocarbon.com/about-carbon-dating.htm
- Learn about the importance of carbon dating and the physics: Chemistry Article; Carbon Dating… remains the main tool for dating the past 50,000 years.
- 30,000-year limit to carbon dating: it is somewhat accurate back to a few thousand years, but carbon dating is not accurate past this.
- Scientific methods that have been used to… radiocarbon-14 dating… the layer gave archaeologists the impression that he was roughly 25,000 to 50,000 years old.
- The Carbon Dating Game: it would take 30,000 to 50,000 years to go from zero C-14 until… it is necessary to "estimate" what it would have been in the past.
- Carbon-14 dating can be used on objects ranging from a few hundred years old to 50,000 years old. Here's an example of calculating carbon-14 dating.
- Radiocarbon dating, tree rings, dendrochronology: Dating the Hobbit… ages based on radiocarbon dating were essentially… have had the capacity to get across the water 50,000 years ago.
- Radiocarbon (Carbon-14) dating of manuscripts: …J. Turnbull, "14C Activity And Global Carbon Cycle Changes Over The Past 50,000 Years."
- Archaeologists have long used carbon-14 dating: …between 500 and 50,000 years old and exploits the fact that… the National Institute of Justice…
- INTCAL13 AND MARINE13 RADIOCARBON AGE CALIBRATION CURVES 0–50,000 YEARS CAL BP: …Radiocarbon Dating Laboratory…
- MYTH #2: Radiocarbon dating has established the date of some organic materials (e.g., some peat deposits) to be well in excess of 50,000 years… the past 3,000 years.
- Decay of carbon-14 takes thousands of years, and it is this wonder of nature that forms the basis of radiocarbon dating and made carbon-14 analysis a powerful tool in revealing the past. The process of radiocarbon dating starts with the analysis of the carbon-14 left in a sample. All atoms of an element may not be identical: atoms of the same element can have different numbers of neutrons, and the different versions of the same element are called isotopes.
- Answers For Kids: Dating Methods: "This animal lived 50,000 years ago"… a scientist can interpret the result of the carbon dating within a Biblical…
- How do scientists determine the age of dinosaur…? …the half-life of carbon-14 is only 5,730 years, so carbon-14 dating is only effective on samples that are less than 50,000 years old.
- How is radioactive dating…? A popular way to determine the ages of biological substances no more than 50,000 years old. Carbon-14 has a half-life of 5,730 years.
- Radiocarbon dating compares the amount of radioactive carbon-14 in organic plants and animals… about environmental changes over the past 50,000 years…
- Material about 50,000 years old should theoretically have no detectable 14C left… This will make old… What about carbon dating?
3
1,230
Content Listing
Science & Tech.
73.406333
95,551,582
The accelerating ice loss adds half a foot of sea level rise to the 2 feet already expected by the end of the century, increasing flood risk for coastal communities. The most complete assessment to date of Antarctica's ice sheets confirms that the meltdown accelerated sharply in the past five years, and there is no sign of a slowdown. That means sea level is expected to rise at a rate that will catch some coastal communities unprepared despite persistent warnings, according to the international team of scientists publishing a series of related studies this week in the journal Nature. The scientists found that the rate of ice loss over the past five years had tripled compared to the previous two decades, suggesting an additional 6 inches of sea level rise from Antarctica alone by 2100, on top of the 2 feet already projected from all sources, including Greenland. "That may not sound like a lot, but it's a big deal for people living along coasts," said University of Leeds climate researcher Andrew Shepherd, who led the assessment, supported by NASA and the European Space Agency. "And the signals are not abating. The ice shelves continue to retreat, warmer ocean water continues to melt them from below, which all means we are progressively going to lose more and more ice from the interior." Coastal flooding is becoming an increasing problem for some U.S. cities as sea level rises. A study released last week by the National Oceanic and Atmospheric Administration found that the frequency of high-tide flooding had doubled in the past 30 years, with some cities experiencing more than 20 days of it over the past year. Pine Island and Thwaites Glaciers University of California, Irvine climate scientist Eric Rignot said the compilation of data on Antarctica's ice loss gives a detailed and thorough understanding of how the rapid changes in ice flow raise sea level worldwide. "The large signal we see now will only grow bigger with time," he said. As the Southern Ocean warms, there will be additional ice shelf disintegration. That will speed up the flow of land-based ice from the interior to the sea, where it raises sea level. Between 1992, when detailed satellite measurements started, and 2012, Antarctica lost about 76 billion tons of ice per year. But since 2012, that rate has tripled to about 219 billion tons of ice loss per year, the scientists found. The recent acceleration is partly driven by natural climate cycles, including El Niño and the Pacific Decadal Oscillation, as well as rising temperatures, Rignot said. In West Antarctica, most of the ice loss has been from the massive Pine Island and Thwaites Glaciers, which have been retreating rapidly. In the Amundsen and Bellingshausen seas, ice shelves that slow the flow of the glaciers behind them have thinned by up to 18 percent since the early 1990s as warming water has reached underneath them. In some areas, the ice is thinning by as much as 19 to 20 feet per year. A November 2017 study suggested that a rapid collapse of both glaciers is possible, which would raise sea level more than 3 feet by 2100. Around the Antarctic Peninsula, where air temperatures have risen sharply, more than 13,000 square miles of ice shelf area has been lost since the 1950s, according to the study. 
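For scale, those mass losses can be converted to sea level using the standard approximation that about 360 gigatons of ice raise global sea level by one millimetre (a conversion not stated in the article):

    76\ \mathrm{Gt\,yr^{-1}} \div 360\ \mathrm{Gt\,mm^{-1}} \approx 0.2\ \mathrm{mm\,yr^{-1}}, \qquad
    219\ \mathrm{Gt\,yr^{-1}} \div 360\ \mathrm{Gt\,mm^{-1}} \approx 0.6\ \mathrm{mm\,yr^{-1}}

So Antarctica alone has gone from contributing roughly 0.2 mm to roughly 0.6 mm of sea level rise per year.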
The new assessment used data from satellite altimeter instruments that can measure tiny changes in ice sheet elevations, as well as from gravity-sensing instruments that can track subtle shifts in the distribution of Earth's gravitational field to show where ice loss is happening. All the data together show that the problem of sea level rise is "growing in severity each year," said UC Irvine researcher and co-author Isabella Velicogna. Ice Regrew in Past, But Took Thousands of Years One of the related studies in the series found a surprising pattern of ice sheet retreat and advance in West Antarctica during the past 10,000 years. As the heavy ice melted rapidly at the end of the last ice age, the land beneath rebounded. The mountains grew higher, colder and snowier, and underwater mountains offshore also rose up, creating new anchor points for floating shelves that blocked the glaciers flowing from land and enabled them to build up temporarily again, even during the post-ice age warm-up. That could happen again, but not fast enough to help coastal communities facing the threat of sea level rise in the coming decades, said Torsten Albrecht, from the Potsdam Institute for Climate Impact Research. Albrecht's team used data from ice radar, chemical analysis of sediment deposits at the base of the ice and climate simulations to show that the ice retreated inland by more than 600 miles in a period of 1,000 years, then regrew by 250 miles, but that regrowth took 10,000 years, he said. Some of the radar data came from a 2014 field survey by co-author Jonny Kingslake, with Columbia University's Lamont-Doherty Earth Observatory. Unusual cracks in the base of the Weddell Sea Ice Shelf suggested the ancient ice had, at one point, been stretched or squished rapidly in an area where scientists assumed only slow ice movement. Sediment data showed a similar record, and together, the information helped the Potsdam climate modelers create an accurate picture of the ice sheet cycle. "People are using the history of the ice flow to tune their models. Knowing more about that past can help us understand what's going to happen in the future," Kingslake said. Warm Water Intrusions Lead to More Fractures Another study, published June 13 in the journal Science Advances, raises even more concerns about the vulnerability of Antarctic ice to global warming. Data from radar and laser readings of the ice enabled scientists with the University of Texas at Austin and the University of Waterloo to map a vast network of channels in the base of many ice sheets formed by intrusions of warm water. Some are several kilometers wide, said University of Texas at Austin researcher Jamin Greenbaum. They found the channels everywhere they looked, including beneath the ice shelf of the Totten Glacier, in East Antarctica, as well as in Greenland. While scientists have known about the channels for quite some time, the new study found a relationship between what's happening below the ice and on the surface. Warm water intrusions are melting the ice from below so much that the ice in those channels is cracking. That allows surface meltwater to flow into the fractures, which can destabilize the ice shelf and increase the chances that big chunks will break off, Greenbaum said. "These things are conspiring to increase loss of ice," Greenbaum said. "It's all a positive feedback system, with the ice getting thinner, more strained, and more susceptible to all these processes."
"In a warming climate, we expect to see more and more melting rivers of surface meltwater, and if they interact with these fractures, you could see more rapid melting," he said. "It could mean we are underestimating the magnitude and the speed of the meltdown."
<urn:uuid:b5afe42f-0462-4545-aac9-436cc312227f>
3.6875
1,453
News Article
Science & Tech.
39.290543
95,551,640
Berkeley Lab scientists find that an iron-binding protein can transport actinides into cells Scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) have reported a major advance in understanding the biological chemistry of radioactive metals, opening up new avenues of research into strategies for remedial action in the event of possible human exposure to nuclear contaminants. Research led by Berkeley Lab's Rebecca Abergel, working with the Fred Hutchinson Cancer Research Center in Seattle, has found that plutonium, americium, and other actinides can be transported into cells by an antibacterial protein called siderocalin, which is normally involved in sequestering iron. Their results were published online recently in the journal Proceedings of the National Academy of Sciences in a paper titled, "Siderocalin-mediated recognition, sensitization, and cellular uptake of actinides." The paper contains several other findings and achievements, including characterization of the first ever protein structures containing transuranic elements and how use of the protein can sensitize the metal's luminescence, which could lead to potential medical and industrial applications. Abergel's group has already developed a compound to sequester actinides and expel them from the body. They have put it in a pill form that can be taken orally, a necessity in the event of radiation exposure amongst a large population. Last year the FDA approved a clinical trial to test the safety of the drug, and they are seeking funding for the tests. However, a basic understanding of how actinides act in the body was still not well known. "Although [actinides] are known to rapidly circulate and deposit into major organs such as bone, liver, or kidney after contamination, the specific molecular mechanisms associated with mammalian uptake of these toxic heavy elements remain largely unexplored," Abergel and her co-authors wrote. The current research described in PNAS identifies a new pathway for the intracellular delivery of the radioactive toxic metal ions, and thus a possible new target for treatment strategies. The scientists used cultured kidney cells to demonstrate the role of siderocalin in facilitating the uptake of the metal ions in cells. "We showed that this protein is capable of transporting plutonium inside cells," she said. "So this could help us develop other strategies to counteract actinide exposure. Instead of binding and expelling radionuclides from the body, we could maybe block the uptake." The team used crystallography to characterize siderocalin-transuranic actinide complexes, gaining unprecedented insights into the biological coordination of heavy radioelements. The work was performed at the Advanced Light Source (ALS), a Department of Energy synchrotron located at Berkeley Lab. "These are the first protein structures containing thorium or the transuranic elements plutonium, americium, or curium," Abergel said. "Until this work there was no structure in the Protein Data Bank that had those elements. That's an exciting thing for us." The researchers also made the unexpected finding that siderocalin can act as a "synergistic antenna" that sensitizes the luminescence of actinides and lanthanides. "We showed that by adding the protein we enhance the sensitization pathways, making it much brighter," Abergel said. "That is a new mechanism that hasn't been explored yet and could be very useful; it could have applications down the line for diagnostics and bioimaging." 
Abergel notes that a study like this would have been possible in very few other places. "Very few people have the capabilities to combine the different approaches and techniques--the spectroscopy techniques at the ALS, handling of heavy elements that are radioactive, plus the chemical and biological tools we have onsite," she said. "The combination of all those techniques here is very unique." The work was funded by the Department of Energy's Basic Energy Sciences program in the Office of Science and the National Institutes of Health. The paper's co-authors are Benjamin Allred, Stacey Gauny, Dahlia An, Corie Ralston, and Manuel Sturzbecher-Hoehne of Berkeley Lab, and Roland Strong and Peter Rupert of the Hutchinson Center. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov. Julie Chao | EurekAlert!
<urn:uuid:b001d8fb-f127-4bac-b75e-f87c7c999d4d>
3.1875
1,667
Content Listing
Science & Tech.
31.577839
95,551,650
Over the past several 100-ka glacial-interglacial cycles, the concentration of atmospheric CO2 was closely coupled to global temperature, which indicates the importance of CO2 as a greenhouse gas. The prevailing notion is that the oceans acted as a sink for CO2 during glacials, and that the position of the southern westerlies played an important role in deep-ocean ventilation. I first present glacial chronologies from the Andes that allow reconstructing the position of the westerlies during the last glacial, and then new results from a permafrost loess-paleosol sequence in NE Siberia, which show that huge amounts of carbon were sequestered at high northern latitudes during glacials. Together, these findings challenge the prevailing notions concerning the global carbon cycle and indicate that changing permafrost conditions may have controlled atmospheric CO2 and the rhythm of the ice ages during the Quaternary. Invited by Prof. Ludwig Zöller.
<urn:uuid:b02ab3ca-04f8-4db6-9273-4bdffe897afd>
3.109375
216
Knowledge Article
Science & Tech.
24.416661
95,551,658
* Meteoroids are pieces of rock or iron debris which travel around the Sun in our solar system. Most are particles broken off from asteroids and range in size from small grains to the size of a golf ball. However, meteoroids also come from a variety of other sources, including comets, the Moon and even the planet Mars.
* As the Earth orbits around the Sun, it regularly passes through clouds of these meteoroids left behind by comets (rock and ice) or asteroids (rocky objects sometimes called minor planets or planetoids). These particles then evaporate high in the Earth's atmosphere, creating glowing streaks of light known as meteors or shooting stars.
* On average, we can see a meteor once every 15 minutes in the night sky, but when we pass through a particularly concentrated area in space, such as a comet's tail, we witness an unforgettable cosmic show known as a meteor shower. During a meteor shower more than 20 meteors can be seen each hour in Earth's atmosphere, but when this number increases to more than 1,000 meteors per hour the phenomenon is known as a meteor storm.
* Since Earth's orbit remains virtually the same each year, these night-sky spectacles are predictable and are usually named after the constellation from which they appear to radiate. Some of the most prolific meteor showers of the year include the following:
Lyrids: Caused by comet C/1861 G1 Thatcher and peaking around April 22 with up to 20 meteors per hour.
Eta Aquarids: Caused by Halley's Comet and peaking on May 5 with up to 60 meteors per hour.
Perseids: Caused by comet Swift-Tuttle and peaking around August 12 with over 60 meteors per hour.
Orionids: Caused by Halley's Comet and peaking around October 21 with around 70 meteors per hour.
Leonids: Caused by comet Tempel-Tuttle and peaking around November 17/18 with up to 30 meteors every hour.
Geminids: Caused by a minor planet called 3200 Phaethon and peaking around December 13/14, with around 70 multi-coloured meteors per hour.
* A meteor that reaches the Earth's surface without being vaporized is known as a meteorite. Meteorites are classified as either stony, iron or stony-iron. Freshly fallen meteorites are often dark brown or black in colour and covered with a thin coating of glass called the fusion crust. This crust is formed when meteors are superheated falling through Earth's atmosphere. However, after lying in the ground for a while and being attacked by the Earth's elements, this fusion crust often crumbles away.
<urn:uuid:a59fa921-f54d-4ed8-bef4-1282ac9c009c>
3.90625
584
Knowledge Article
Science & Tech.
51.758536
95,551,680
Electromagnetic Induction Heat
Edited by StephWrites, Sharingknowledge, Jen Moreau
We may think that the only way to heat an object is by applying the source of the heat directly to the targeted object. However, there is a way to heat an object without even touching it. Induction heating is employed to bond, harden or soften metal parts or other conductive materials. It is widely used because it offers a combination of speed, reliability, and control for many existing industrial processes. In traditional procedures, an open flame directly warms up the metal part. In induction heating, by contrast, heat is induced in the object by circulating electrical currents. This technique relies on the properties of radio frequency (RF) energy, which lies below infrared and microwave energy in the electromagnetic spectrum. The metal part never contacts a flame, since heat is transmitted to the object through electromagnetic waves. This way, the inductor doesn't get hot, and the object is not contaminated. The process can be repeated and controlled easily when properly set up. As a result, objects can be heated very quickly. At the same time, there's no external contact, and this is key where contamination is a problem.
Who discovered it?
In 1820, a huge discovery was made: when an electric current runs through a wire, it generates an invisible magnetic field all around it. Hans Christian Oersted, a scientist from Denmark, first made this observation. One year later, Andre-Marie Ampère, a French physicist, discovered that two wires transporting electric currents, located close to each other, will attract or repel each other, because the magnetic fields they create produce a force between them. In 1831, Michael Faraday determined the main principle involved in induction heating. His work included the use of a switched DC supply with a battery and two windings of copper wire that he wrapped around an iron core. He noticed that when the switch was closed a temporary current flowed in the secondary winding; at that moment, it could be measured with a galvanometer. If the circuit stayed energized, the current ceased to flow. After opening the switch, a current once more flowed in the secondary winding, but in the opposite direction. Finally, Faraday deduced that, because there was no physical contact between the two windings, the current was induced by the first coil, and that the current created was directly proportional to the rate of change of the magnetic flux.
How does it work?
An oscillating magnetic field is created when an alternating electrical current is applied to the primary coil of a transformer. According to Faraday's principle, if the secondary coil of the transformer is placed inside the magnetic field, an electric current will be induced. Let's take a look at the example of a solid-state radio-frequency supply. The power supply transmits an alternating current via an inductor, which is often a copper coil, and the metal part to be heated is placed within the inductor. The inductor functions as the transformer's primary coil, and the metal part to be heated acts as a short-circuited secondary coil. Whenever an object is placed within the inductor and gets inside the magnetic field, circulating eddy currents are induced inside the object. These eddy currents flow against the electrical resistance of the metal, creating precise and localized heat without any direct contact between the object and the inductor.
Heating that occurs in this way within magnetic and non-magnetic parts is frequently referred to as the "Joule effect", after Joule's formula relating the heat created in a conductor to the electrical current flowing through it. Extra heat is generated inside magnetic parts through hysteresis, the internal friction that is produced when magnetic parts pass through the inductor. Magnetic materials offer electrical resistance to the quickly fluctuating magnetic fields inside the inductor; this resistance creates internal friction which in turn generates heat. Throughout the heating procedure there is therefore no contact between the inductor and the metal part, and no combustion gasses. The material to be heated can be positioned in a setting separated from the power supply: immersed in a liquid, covered by insulating substances, in gaseous atmospheres or even in a vacuum.
An eddy current is a swirling current set up in a conductor in reaction to a changing magnetic field. The current swirls in a way that creates a magnetic field opposing the change; to do this, electrons in the conductor swirl in a plane perpendicular to the magnetic field. Because of this opposing nature, eddy currents cause energy to be lost. To be more precise, eddy currents convert more useful forms of energy, like kinetic energy, into heat, which is usually less convenient. Although in many applications the loss of useful energy is not especially wanted, there are some practical uses for it. Consider, for instance, the brakes of a train. While braking, the metal wheels are exposed to a magnetic field from an electromagnet, producing eddy currents in the wheels. The interaction between the applied field and the eddy currents acts to slow the wheels down. The faster the wheels are rotating, the stronger the effect, which means that as the train slows the braking force is reduced, generating a smooth stopping motion.
- Electromagnetic induction is employed in many industrial procedures such as heat treatment in metallurgy, Czochralski crystal growth and zone refining in the semiconductor industry, and the melting of refractory metals which need very high temperatures. It is also utilized in something called "induction cooking", which basically means induction cooktops for heating containers of food.
- The basic principles of induction heating have been applied industrially since the 1920s. During World War II, the technology was used to harden metal engine parts.
- Lately, the emphasis on lean manufacturing systems and the importance of better quality control, along with the development of precisely controlled power supplies, have led to a reawakening of induction technology.
- Induction heating allows the targeted heating of an item for applications involving surface hardening, melting, brazing, soldering, and heating to fit. Iron and its compounds react best to induction heating, due to their ferromagnetic characteristics. However, eddy currents can be produced in any conductor, and magnetic hysteresis can happen in any magnetic material.
- It has been employed to heat liquid conductors and also gaseous conductors.
- It is often utilized to heat graphite crucibles and is applied widely in the semiconductor industry to heat silicon and other semiconductors.
- Other applications include induction furnaces, welding, cooking, brazing, sealing, heating to fit, heat treatment and plastic processing.
*An induction furnace employs induction to heat metal to its melting point. Once molten, the high-frequency magnetic field can be used to stir the hot metal. This is suitable for ensuring that alloying additions are fully blended into the melt. Metals melted include iron and steel, copper, aluminum, and precious metals.
- In the food and pharmaceutical industries, induction heating is employed in cap sealing of containers. A layer of aluminum foil is placed over the jar opening and heated by induction to fuse it to the container. This delivers a tamper-resistant seal, as removing the contents requires breaking the foil.
- Induction heating is also used in plastic injection molding machines, where it enhances energy efficiency for injection and extrusion processes. Heat is created directly in the barrel of the machine, reducing warm-up time and energy consumption.
Finally, the effectiveness of an induction heating system for a particular application depends on several factors: the properties of the part itself, the design of the inductor, the capacity of the power supply, and the amount of temperature change needed for the application. So, now you know that heating goes beyond direct contact, and how clean and useful induction heating can be.
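To make the frequency dependence concrete, here is a minimal sketch (my own illustration, not part of the original article) that computes the skin depth, the depth within which the induced eddy currents concentrate. The material constants are assumed room-temperature values; the relative permeability of steel in particular varies widely in practice.

```cpp
#include <cmath>
#include <cstdio>

// Skin depth: delta = sqrt(2 * rho / (omega * mu0 * mu_r)).
// Eddy currents concentrate within roughly one skin depth of the surface,
// which is why higher frequencies give shallower, faster surface heating.
double skin_depth(double rho, double mu_r, double f_hz) {
    const double pi = 3.141592653589793;
    const double mu0 = 4e-7 * pi;            // vacuum permeability (H/m)
    const double omega = 2.0 * pi * f_hz;    // angular frequency (rad/s)
    return std::sqrt(2.0 * rho / (omega * mu0 * mu_r));
}

int main() {
    // Illustrative (assumed) constants: copper rho ~ 1.7e-8 ohm*m, mu_r ~ 1;
    // carbon steel below the Curie point: rho ~ 2e-7 ohm*m, mu_r ~ 100.
    for (double f : {1e3, 10e3, 100e3}) {
        std::printf("f = %8.0f Hz: copper %6.2f mm, steel %5.2f mm\n", f,
                    1e3 * skin_depth(1.7e-8, 1.0, f),
                    1e3 * skin_depth(2.0e-7, 100.0, f));
    }
    return 0;
}
```

Even this toy calculation reproduces the rule of thumb that ferromagnetic steel heats far more readily than copper at the same frequency, since its larger resistivity and permeability confine the eddy currents to a much thinner surface layer.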
<urn:uuid:dc23fb77-ca81-441b-9bfa-11bd5b5f38a1>
4
1,797
Knowledge Article
Science & Tech.
32.636911
95,551,703
As you already know, binary search is an O(log N) way to check whether a particular value exists in a sorted array. There are some reports on how "most of our implementations" of binary search are broken. The mistake discussed in that post is the overflow when calculating the index of the middle element. But regardless of whether you get this bit right, there's a different problem with all binary search implementations I've seen. They're just too damn complicated and hard to get right out of your memory--which is exactly what you need to do, say, at a job interview.
Recap
Let's recap what the binary search algorithm is, at a high level. We are given a sorted array, and a value we're looking for. Is the value in the array? Here's the basic recursive algorithm:
- Check if the middle element is the value? Yes--good, found it.
- Is it smaller than what we're looking for? Yes--repeat the search in the array between the middle and the end of the original array.
- So it must be larger then--repeat the search in the array between the beginning and the middle of the original array.
Almost always, if you're relying on your memory, you should prefer simple algorithms to complex ones, like you should implement a treap instead of an AVL tree. So here's a sequence of steps that helps me to write binary search correctly.
- Simplicity: solve "bisection" rather than "binary search"
- Consistency: use a consistent definition of search bounds
- Overflows: avoid overflows for integer operations
- Termination: ensure termination
1. Simplicity: Solve "Bisection" rather than "Binary Search"
From now on, let's redefine the problem. We aren't searching for anything anymore. "Searching" for an element in an array is a bit hard to define. Say, what should "searching" return if the element doesn't exist, or there are several equal elements? That's a bit too confusing. Instead, let's bisect an array--pose a problem that always has a single solution no matter what. Bisection is defined as follows: given a boolean function f, find the smallest k such that, for all i < k, f(a[i]) == false, and for all j >= k, f(a[j]) == true. In other words, we are to find the smallest k such that, for all i within array bounds, f(a[i]) == (i < k). If the array is sorted in such a way that f(a[i]) is monotonic, being first false and then true, this problem always has a single solution. In the case when f(a[i]) is false for all array elements, the resultant k is greater by one than the last index of the array, but this is still not a special case, because this is exactly the answer as defined above.
This is more powerful than the binary "search", which can be reduced to "bisection" as follows. In order to find if an element x exists in the array a of length N, bisect a with the function f(y) defined as `y >= x`, and then check whether the resulting k satisfies `k < N` and `a[k] == x`.
Now let's proceed with solving the bisection, now that we've defined it.
2. Consistency: use consistent definition of search bounds
The answer to our bisection problem is between 0 and N inclusively, so we will start by assigning i to 0, and j to N. No special cases allowed: we can do without them at all. At each step, the answer we seek will be between our values, i <= k <= j, and we will not know any information that would help us to reduce the range size.
3. Overflows: Avoid Overflows for Integer Operations
We literally have one integer operation here: given the bounds, compute the pivot point, i.e. the point in the middle of the array we'll apply f to in order to make a step. How hard can it be to compute the middle of an interval, right? Turns out, quite hard.
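To see the failure mode concretely, here is a tiny self-contained demonstration (my own sketch, not from the original post) using unsigned 32-bit indexes, where the wraparound is at least well-defined:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Two large but valid 32-bit indexes; their true midpoint is 3,500,000,000.
    std::uint32_t i = 3000000000u, j = 4000000000u;
    // Naive midpoint: i + j wraps around 2^32 (well-defined for unsigned
    // types, but numerically wrong), giving a bogus "midpoint".
    std::uint32_t naive = (i + j) / 2;
    // Subtraction form: j - i never exceeds the representable range.
    std::uint32_t safe = i + (j - i) / 2;
    std::printf("naive: %lu\nsafe:  %lu\n",
                static_cast<unsigned long>(naive),
                static_cast<unsigned long>(safe));
    return 0;
}
```

With signed `int` indexes the naive form is even worse: the overflow is undefined behavior rather than a merely wrong answer.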
This problem is discussed in the article I linked above in more detail. If we lived in the world of unbounded integers, the pivot would simply be (i+j)/2, but we live in the world of integer overflows. If i and j are positive integers, which makes sense for array indexes in C++, the never-overflowing expression is `i + (j - i) / 2`. Another useful property of this expression (in C++, at least) is that it's never equal to j if i < j. Here's why it's important.
4. Termination: ensure termination
A basic way to make sure that your program terminates some day is to construct a "loop variant" that strictly decreases with each loop iteration, and which can't decrease forever. For our bisection, the natural variant is the length of the part of the original array we are currently looking for the answer in, which is max(j-i, 0). It can't decrease forever, because its value would reach zero at some point, from which there's nowhere to proceed. We only need to prove that this variant decreases each iteration by at least one. Fun fact: special programs can prove that a program terminates automatically, without human intervention! One of these programs is named... Terminator. You can read more about this in a very accessible article, "Proving Program Termination" by Byron Cook et al. I first learned about it at a summer school where Mr. Cook himself gave a talk about this.
The first step is to make sure that the iteration on which the variant reaches zero is the last iteration of the loop (otherwise it wouldn't decrease!), so we use while (i < j). Now assume that we've found that the pivot point s does not satisfy the target function f. Our key difference from most binary search algorithms is that we should assign the new i to s+1 rather than s. Given our definition of the problem, s is never the answer in this case, and we should keep our candidate range as small as possible. If f(a[s]) is true, then s could still be the answer, so this reasoning does not apply there. Here's our iteration step now: if `f(a[s])` holds, set `j = s`; otherwise set `i = s + 1`.
We mentioned in the previous section that i <= s < j, hence i < s + 1 and s < j, so each of the assignments decreases j-i by at least 1 regardless of f's value. This proves that this loop never hangs up forever.
The complete program
So here's our bisection that works for all sorted arrays (in the sense of f(a[i]) being first false, then true, as i increases). The search itself, in terms of finding if a value exists in a sorted array, is then written on top of it (I'm just repeating what I wrote above, no surprises here). A sketch of both functions follows at the end of this post.
Proof By Authority
The GNU C++ STL implementation of binary_search follows the same guidelines I posted here: in my version of GCC 4.3.4, binary_search is implemented in terms of lower_bound, where lower_bound is the bisection function.
Here's my version of bisection and binary search I usually use when I need one. It is designed to be simple, and simple things are easier to get right. Just make sure your loop variant decreases, you do not overflow, and you get rid of as many special cases as possible.
Author Paul Shved | Modified April 7, 2013 | License CC BY-SA 3.0
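A minimal sketch of the bisection and the search built on top of it, assembled from the four steps described above (it follows the post's reasoning rather than reproducing the author's exact listing; the predicate is passed as a template parameter):

```cpp
#include <cstddef>
#include <vector>

// Bisection: smallest k in [0, N] such that f(a[i]) is false for all i < k
// and true for all i >= k. Requires f(a[i]) to be monotonic (false, then
// true) over the sorted array; no special cases, no overflow.
template <typename T, typename F>
std::size_t bisect(const std::vector<T>& a, F f) {
    std::size_t i = 0, j = a.size();      // invariant: i <= k <= j
    while (i < j) {
        std::size_t s = i + (j - i) / 2;  // overflow-safe pivot, i <= s < j
        if (f(a[s]))
            j = s;                        // s may still be the answer
        else
            i = s + 1;                    // s is ruled out entirely
    }                                     // loop variant j - i hits zero
    return i;                             // i == j == k
}

// Binary "search" reduced to bisection: f(y) is "y >= x".
template <typename T>
bool contains(const std::vector<T>& a, const T& x) {
    std::size_t k = bisect(a, [&x](const T& y) { return !(y < x); });
    return k < a.size() && !(a[k] < x) && !(x < a[k]);  // a[k] == x
}
```

Note how the two branches mirror the termination argument: `j = s` is allowed because `s` may still be the answer, while `i = s + 1` is forced because `s` has been ruled out, and both shrink `j - i` by at least one.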
<urn:uuid:e87e43ba-bd88-46fe-bfcb-0ec5ae9a3711>
3.375
1,554
Personal Blog
Software Dev.
65.208137
95,551,728
the Kansas State Physics Education Research Group and Dean A. Zollman
This resource discusses the meaning of quantum wavefunctions in terms of measurement probabilities. The double-slit experiment is used to motivate the concept of probabilistic measurements. A Shockwave applet then allows students to explore the relationship between a 1D wavefunction and probabilities. Both the magnitude of the probability density and the probability in a finite interval in position can be displayed.
Kansas State Physics Education Research Group, & Zollman, D. (n.d.). Interpreting Wave Functions. Retrieved July 16, 2018, from https://web.phys.ksu.edu/vqm/tutorials/interpretingwavefunctions/index.html
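The relationship the applet visualizes is that the probability of finding the particle between a and b is the integral of |psi(x)|^2 over [a, b]. Here is a small numerical sketch of that idea (my own illustration, not the tutorial's code), using the textbook ground state of an infinite square well as the wavefunction:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Trapezoidal integral of |psi|^2 between sample indexes [i0, i1]
// for a real-valued wavefunction sampled on a uniform grid of spacing dx.
double probability(const std::vector<double>& psi, double dx, int i0, int i1) {
    double p = 0.0;
    for (int i = i0; i < i1; ++i) {
        double a = psi[i] * psi[i], b = psi[i + 1] * psi[i + 1];
        p += 0.5 * (a + b) * dx;
    }
    return p;
}

int main() {
    // Ground state of an infinite square well of width L (already
    // normalized): psi(x) = sqrt(2/L) * sin(pi * x / L).
    const double pi = 3.141592653589793, L = 1.0;
    const int n = 1000;
    const double dx = L / n;
    std::vector<double> psi(n + 1);
    for (int i = 0; i <= n; ++i)
        psi[i] = std::sqrt(2.0 / L) * std::sin(pi * i * dx / L);

    // Total probability should be ~1; the middle half carries most of it
    // (analytically 1/2 + 1/pi, about 0.818).
    std::printf("P(0, L)      = %.4f\n", probability(psi, dx, 0, n));
    std::printf("P(L/4, 3L/4) = %.4f\n", probability(psi, dx, n / 4, 3 * n / 4));
    return 0;
}
```

The first integral confirms normalization, and the second shows how a finite-interval probability follows directly from the probability density, which is exactly the relationship the applet lets students explore graphically.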
<urn:uuid:b3f201ac-0283-4e80-9514-50f350949bc9>
3.046875
272
Truncated
Science & Tech.
39.328667
95,551,734
Canada is building the world's first space telescope designed to detect and track asteroids as well as satellites. Called NEOSSat (Near Earth Object Surveillance Satellite), this spacecraft will provide a significant improvement in surveillance of asteroids that pose a collision hazard with Earth and innovative technologies for tracking satellites in orbit high above our planet. Weighing in at a mere 65 kilograms, this dual-use $12-million mission builds upon Canada's expertise in compact "microsatellite" design. NEOSSat will be the size of a large suitcase, and is cost-effective because of its small size and ability to "piggyback" on the launch of other spacecraft. The mission is funded by Defence Research Development Canada (DRDC) and the Canadian Space Agency (CSA). Together CSA and DRDC formed a Joint Project Office to manage the NEOSSat design, construction and launch phases. NEOSSat is expected to be launched into space in 2010. The two projects that will use NEOSSat are HEOSS (High Earth Orbit Space Surveillance) and the NESS (Near Earth Space Surveillance) asteroid search program. "Canada continues to innovate and demonstrate its technological expertise by developing small satellites that can peer into near and far space for natural and man-made debris," says Guy Bujold, President of the Canadian Space Agency. "We are building the world's first space-based telescope designed to search for near-Earth asteroids." NEOSSat is the first follow-up mission to the groundbreaking MOST (Microvariability and Oscillation of STars) spacecraft, a 60-kilogram satellite designed to measure the age of stars in our galaxy. NEOSSat also marks the first project using Canada's Multi-Mission Microsatellite Bus. CSA's Space Technology branch launched the Multi-Mission Bus project to capitalize on technology developed for the MOST project by making it adaptable to future satellite missions. Captain Tony Morris of DRDC Ottawa, and Deputy Program Manager of the NEOSSat Joint Project Office, says, "NEOSSat is a technological pathfinder for us to demonstrate the potential of microsatellite technologies to satisfy operational requirements of the Canadian Forces. NEOSSat will demonstrate the ability of a microsatellite to enhance the CF's contribution to the NORAD mission – providing accurate knowledge of the traffic orbiting our planet. This would contribute to the safety of critical Canadian assets, military and civilian, in an increasingly congested space environment." Dr. Brad Wallace leads the science team at DRDC for HEOSS, which will use NEOSSat for traffic control of Earth's high-orbit satellites. Dr. Wallace says, "We have already done satellite tracking tests using MOST, so we know that a microsatellite can track satellites. The challenge now is to demonstrate that it can be done efficiently, reliably, and to the standards required to maximize the safety of the spacecraft that everyone uses daily, like weather and communication satellites." The HEOSS project will demonstrate how a microsatellite could contribute to the Space Surveillance Network (SSN), a network of ground-based telescopes and radars located around the world. Until the 1980s, Canada contributed to the SSN with two ground-based telescopes in eastern and western Canada. The fact that HEOSS will be a space-based capability on a microsatellite represents an exciting enhancement to the contribution and offers significant advantages to the SSN.
Ground-based sensors' tracking opportunities are constrained by their geographic location and the day-night cycle. In Sun-synchronous orbit around our planet, NEOSSat will offer continuous tracking opportunities and the ability to track satellites in a wide variety of orbit locations. "NEOSSat requires remarkable agility and pointing stability that has never before been achieved by a microsatellite," says David Cooper, General Manager of Mississauga-based Dynacon Inc., the prime contractor for the NEOSSat spacecraft and the manufacturer and operator of the MOST satellite. "It must rapidly spin to point at new locations hundreds of times per day, each time screeching to a halt to hold rock steady on a distant target, or precisely track a satellite along its orbit, and image-on-the-run." Cooper says. "Dynacon is the world leader in this microsatellite attitude-control-system technology." Dr. Alan Hildebrand, holder of a Canada Research Chair in Planetary Science in the University of Calgary's Department of Geoscience, leads an international science team for the NESS asteroid search project and is excited by its prospects. "NEOSSat being on-orbit will give us terrific skies for observing 24-hours a day, guaranteed," Hildebrand says. "Keeping up with the amount of data streaming back to us will be a challenge, but it will provide us with an unprecedented view of space encompassing Earth's orbit." Although NEOSSat's 15-centimetre telescope is smaller than most amateur astronomers', its location approximately 700 kilometres above Earth's atmosphere will give it a huge advantage in searching the blackness of space for faint signs of moving asteroids. Twisting and turning hundreds of times each day, orbiting from pole to pole every 50 minutes, and generating power from the Sun, NEOSSat will send dozens of images to the ground each time it passes over Canada. Due to the ultra-low sky background provided by the vacuum of space, NEOSSat will be able to detect asteroids delivering as few as 50 photons of light in a 100-second exposure. Hildebrand, who oversees the U of C's ground-based asteroid observation program using the Rothney Astrophysical Observatory's wide-field Baker Nunn telescope, said NEOSSat will greatly enhance the study of asteroids and comets as they approach Earth. "NEOSSat will discover many asteroids much faster than can be done from the ground alone. Its most exciting result, however, will probably be discovering new targets for exploration by both manned and unmanned space missions," he observes. "By looking along Earth's orbit, NEOSSat will find 'low and slow' asteroids before they pass by our planet and sprint missions could be launched to explore them when they are in the vicinity of the Earth."
<urn:uuid:15cf7723-4d87-4b5c-983f-8763952cfe28>
3.234375
1,836
Content Listing
Science & Tech.
33.890152
95,551,799
How will YOUR country cope with climate change? Map reveals the best and worst places to live as our planet warms up
- London-based company The Eco Experts has revealed the countries best-equipped to cope with climate change
- Scandinavian countries like Norway and Finland, and also the UK, score highly in the map
- But places in sub-Saharan Africa will be most affected by a warming climate
- The map uses data that tracks a country's vulnerability and readiness to climate change
- Results have been published every year since 1995 - with Norway coming top every single year
Climate change experts have released maps of the world revealing how prepared different countries are to cope with the effects of climate change. In the maps, 192 countries are ranked by their 'vulnerability' and 'readiness', to produce an overall judgement on their fate. The results reveal that Scandinavian countries and the UK are among the most likely to survive - but areas of sub-Saharan Africa will be hardest hit.
London-based company The Eco Experts has revealed the countries best-equipped to cope with climate change on a map (shown). Scandinavian countries like Norway and Finland, and also the UK, score highly. Green is best, scaling down to red being worst.
They took into account location, terrain, pollution rates and national resources when calculating which countries would be most affected. Countries like Norway, Sweden, Finland and Denmark score well on the scale. But places like Central America, Africa and India all appear at risk from natural disaster - and are poorly equipped to cope, said The Eco Experts. Jon Whiting, of The Eco Experts, warned: 'Hurricanes, earthquakes, blizzards, droughts and flooding are all real dangers for some of these areas, and this is compounded by a lack of national strategy to counteract the effects.' Burundi, Chad, Sudan and the Democratic Republic of Congo produced some of the lowest scores, meaning these countries will be the biggest victims of weather disasters.

| Least affected | Score | Most affected | Score |
|---|---|---|---|
| 1 - Norway | 82.7 | 1 - Chad | 31.6 |
| 2 - New Zealand | 82.2 | 2 (tied) - Eritrea | 33.8 |
| 3 - Sweden | 81.6 | 2 (tied) - Burundi | 33.8 |
| 4 - Finland | 81.5 | 4 (tied) - Democratic Republic of Congo | 34.0 |
| 5 - Denmark | 81.4 | 4 (tied) - Central African Republic | 34.0 |
| 6 - Australia | 80.1 | 6 - Sudan | 35.5 |
| 7 - United Kingdom | 80.0 | 7 (tied) - Niger | 35.6 |
| 8 - United States | 78.9 | 7 (tied) - Haiti | 35.6 |
| 9 (tied) - Germany | 78.8 | 7 (tied) - Afghanistan | 35.6 |
| 9 (tied) - Iceland | 78.8 | 10 - Guinea-Bissau | 37.3 |

Most countries across Europe will not be severely affected by climate change, according to the map. It takes into account many factors, such as access to clean drinking water and the risk of heat waves.
But places in sub-Saharan Africa (shown left) will be most affected by a warming climate, while some countries in South America like Bolivia (right) will also be severely affected by global warming.
The map is based on data compiled by the ND-Gain index, which has been monitoring 45 internal and external indicators of climate change exposure of 192 countries since 1992. The index is built on two variables, 'vulnerability' and 'readiness', for which a country gets a separate mark for each. These scores tally up to produce an overall total indicating how a particular nation would fare. On the scale, the country best equipped to cope with the effects of climate change was Norway. In fact, Norway has topped the ranking every year since the Index began in 1995.
North America will also apparently be able to cope with the effects of climate change, thanks to high readiness scores for the USA and Canada. The results have been published every year since 1995 - with Norway coming top every single year.
Asia has a wide range of scores for different countries, owing to the vastly different climates and levels of infrastructure in various countries. Surprisingly, Australia comes out fairly well in the map, despite being a notoriously hot country.
Various islands such as Haiti will be severely affected by climate change, perhaps due to the effects of rising sea levels. Others like Barbados, though, will apparently avoid some of the worst effects.
'Adaptation challenges still exist, but Norway is well positioned to adapt. Norway is the 4th least vulnerable country and the 5th most ready country,' according to the ND-Gain Index website. Second was New Zealand, third was Sweden, fourth was Finland and fifth Denmark. The UK came in seventh, followed by the US in eighth. Lower down, Ireland came in 17th and Russia in 32nd. At the bottom of the table Chad was deemed the country that would suffer the most from climate change. 'It has both a great need for investment and innovations to improve readiness and a great urgency for action,' said the site. It was followed by Eritrea, Burundi, the Democratic Republic of Congo and Central African Republic. On the site, detailed factors for each country can be viewed. These include the projected change of cereal yields, access to reliable drinking water and vulnerability to heatwave hazards.
<urn:uuid:689b8956-9222-4499-a586-a2a5901f0f08>
2.5625
1,356
Truncated
Science & Tech.
49.761734
95,551,806
Supplementary material from "Head movements quadruple the range of speeds encoded by the insect motion vision system in hawkmoths"
Published on 2017-09-25T07:25:26Z (GMT) by
Flying insects use compensatory head movements to stabilize gaze. Like other optokinetic responses, these movements can reduce image displacement, motion and misalignment, and simplify the optic flow field. Because gaze is imperfectly stabilized in insects, we hypothesized that compensatory head movements serve to extend the range of velocities of self-motion that the visual system encodes. We tested this by measuring head movements in hawkmoths Hyles lineata responding to full-field visual stimuli of differing oscillation amplitudes, oscillation frequencies and spatial frequencies. We used frequency-domain system identification techniques to characterize the head's roll response, and simulated how this would have affected the output of the motion vision system, modelled as a computational array of Reichardt detectors. The moths' head movements were modulated to allow encoding of both fast and slow self-motion, effectively quadrupling the working range of the visual system for flight control. By using its own output to drive compensatory head movements, the motion vision system thereby works as an adaptive sensor, which will be especially beneficial in nocturnal species with inherently slow vision. Studies of the ecology of motion vision must therefore consider the tuning of motion-sensitive interneurons in the context of the closed-loop systems in which they function.
Cite this collection
Windsor, Shane P.; Taylor, Graham K. (2017): Supplementary material from "Head movements quadruple the range of speeds encoded by the insect motion vision system in hawkmoths". The Royal Society. Collection.
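The motion vision model named in the abstract is an array of Reichardt correlators. As a rough illustration of that detector type (my own sketch; the paper's actual model and parameters are not reproduced here), a single correlator can be simulated as two low-pass-delayed photoreceptor signals cross-multiplied with their undelayed neighbours:

```cpp
#include <cmath>
#include <cstdio>

// One Reichardt (correlation-type) motion detector: each of two adjacent
// photoreceptor signals is delayed by a first-order low-pass filter and
// multiplied with the undelayed neighbour; the difference of the two
// products is direction-selective.
struct Reichardt {
    double lp_a, lp_b;  // low-pass (delayed) filter states
    double tau, dt;     // filter time constant and time step (s)
    double step(double a, double b) {
        const double alpha = dt / (tau + dt);
        lp_a += alpha * (a - lp_a);
        lp_b += alpha * (b - lp_b);
        return lp_a * b - lp_b * a;  // sign encodes motion direction
    }
};

int main() {
    // Stimulus: a drifting sinusoidal grating sampled at two points a
    // quarter-wavelength apart (illustrative parameters only).
    Reichardt rd{0.0, 0.0, 0.05, 0.001};
    const double pi = 3.141592653589793, f = 2.0;  // temporal frequency (Hz)
    const int n = 5000;
    double mean = 0.0;
    for (int i = 0; i < n; ++i) {
        const double t = i * rd.dt;
        const double a = std::sin(2 * pi * f * t);
        const double b = std::sin(2 * pi * f * t - pi / 2);  // a leads b
        mean += rd.step(a, b) / n;
    }
    std::printf("mean detector output: %+.4f\n", mean);
    return 0;
}
```

A correlator like this responds strongly only over a band of image speeds set by its delay filter, which is why shifting the retinal image speed via compensatory head movements can extend the working range of the system, as the paper argues.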
<urn:uuid:275c720e-1285-40ba-82d5-0034772c5f7a>
3.09375
362
Truncated
Science & Tech.
22.607772
95,551,813
Discovery could help scientists better understand exotic behaviors of electrons
New research published this week shows a rare state of matter in which electrons in a superconducting crystal organize collectively. The findings lay the groundwork for answering one of the most compelling questions in physics: How do correlated electron systems work, and are they related to one another?
Crystalline samples of CeRhIn5 from Los Alamos were cut into microscopic, crystalline conducting paths with a focused ion beam at MPI-CPfS. Credit: MPI CPfS
The paper, Electronic in-plane symmetry breaking at field-tuned quantum criticality in CeRhIn5, was published in the journal Nature. Electrons in most metals act individually, free to move through a metal to conduct electric currents and heat. But in a special sample of layered cerium, rhodium and indium (CeRhIn5), scientists discovered that electrons unite to flow in the same direction (a behavior called "breaking symmetry") when in high magnetic fields of 30 tesla. Known as "electronic nematic," this is a rare state of matter between liquid and crystal. "It's sort of like in ancient times," explains Phillip Moll, principal investigator of this work and leader of the Physics of Microstructured Quantum Matter Group at the Max-Planck Institute for Chemical Physics of Solids in Germany. "People would draw maps in whatever direction best served them. But this state is like the moment when the world's mapmakers unified to arbitrarily pick north as the orientation for all maps."
Scientists believe that the electronic nematicity state may be closely related to superconductivity, another strongly correlated electron state in which electrons flow with no resistance. This cerium crystal becomes a superconductor under high pressure. However, when placed in a high magnetic field, it demonstrates this electronic nematic state. Because it exhibits both behaviors, CeRhIn5 appears uniquely positioned to reveal possible interactions between these two correlated electron phases. "This fundamental question in materials in which the electrons interact was the starting point for my PhD thesis," adds Maja Bachmann, a doctoral student on the research team. "Do the electrons have to decide either to pair or to all go in one direction? In other words, are superconductivity and nematicity competitive phenomena, or could the same interaction that leads to pairing also create nematicity?"
This research featured a specialized sample fabricated from a single crystal of CeRhIn5 using focused ion beam (FIB) machining, and required experiments in both pulsed and resistive magnets. Work in the DC Field Facility's 45-tesla hybrid showed that the nematic phase appears in very high fields, beginning at 30 tesla and remaining through the hybrid's full field. Researchers wanted to understand how far this phase extended and, through experiments at the Pulsed Field Facility, found that at around 50 tesla the nematicity vanishes, possibly even undergoing another exotic phase transition. But something else happened during the pulsed experiments: researchers noticed that they could control the direction of the electrons when they tilted the field slightly. Returning to the DC Field Facility, the scientists were able to continuously change this tilt angle while keeping the field steady at 45 tesla, a unique experimental capability at the MagLab.
"One big advantage of the MagLab is that it offers all the state-of-the-art magnet technologies, and throughout a project, the magnet type can be changed easily if it becomes clear that a different technology was required," Moll said. "Really, the close technological, scientific and administrative integration of these very different but complementary high-field technologies was the key to this success, and is a major strength of the MagLab." Moll's team performed additional work in the lab's 100-tesla pulsed magnet that will be featured in a future paper. The researchers are continuing to explore how the nematic phase merges into the superconducting phase, part of an ongoing project that will involve additional MagLab experiments. In addition to Bachman, Moll's co-authors on the paper included F. Ronning and E.D. Bauer of Los Alamos National Laboratory; T. Helm and K. R. Shirer of the Max-Planck-Institute for Chemical Physics of Solids; and L. Balicas, M. K. Chan, B. J. Ramshaw, R. D. McDonald, F. F. Balakirev and M. Jaime of the National MagLab. About MPI CPfS The research at the Max Planck Institute for Chemical Physics of Solids (MPI CPfS) in Dresden aims to discover and understand new materials with unusual properties. In close cooperation, chemists and physicists (including chemists working on synthesis, experimentalists and theoreticians) use the most modern tools and methods to examine how the chemical composition and arrangement of atoms, as well as external forces, affect the magnetic, electronic and chemical properties of the compounds. New quantum materials, physical phenomena and materials for energy conversion are the result of this interdisciplinary collaboration. The MPI CPfS is part of the Max Planck Society and was founded in 1995 in Dresden. It consists of around 280 employees, of which about 180 are scientists, including 70 doctoral students. Ingrid Rothe | EurekAlert! Nano-kirigami: 'Paper-cut' provides model for 3D intelligent nanofabrication 16.07.2018 | Chinese Academy of Sciences Headquarters Theorists publish highest-precision prediction of muon magnetic anomaly 16.07.2018 | DOE/Brookhaven National Laboratory For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. 
<urn:uuid:884813b1-e532-4d5b-9445-6fa342be6025>
3.5
1,765
Content Listing
Science & Tech.
37.667894
95,551,838
I was wondering how to prove Euclid's theorem: The medians of a triangle are concurrent. My work so far: First of all, my interpretation of the theorem is that if a line segment is drawn from each of the 3 sides' midpoints to the vertex opposite to it, the three segments intersect at one point. Since a triangle has three sides and each side must have a median, I figure that at least 2 of them have to intersect, as the lines can't be parallel. Can anyone explain further? Thank you!
I could do that by using Thales's Theorem. Sorry, I did it on paper; it is really hard to do on this page.
This is my short proof from 1963: In the triangle ABC draw medians BE and CF, meeting at point G. Construct a line from A through G, such that it intersects BC at point D. We are required to prove that D bisects BC, therefore AD is a median, hence the medians are concurrent at G (the centroid). Produce AD to a point P below triangle ABC, such that AG = GP. Construct lines BP and PC. Since AF = FB, and AG = GP, FG is parallel to BP. (Euclid) Similarly, since AE = EC, and AG = GP, GE is parallel to PC. Thus BPCG is a parallelogram. Since the diagonals of a parallelogram bisect one another (Euclid), BD = DC. Thus AD is a median. QED
Corollary: GD = AD/3. Since AG = GP and GD = GP/2, AG = 2GD. AD = (AG + GD) = (2GD + GD) = 3GD. Hence GD = AD/3. QED
If you know Ceva's Theorem, apply it. Note that if $D, E$ and $F$ are the midpoints of the sides, $AD/DB = BE/EC = CF/FA = 1$. Therefore it is proved by the converse of Ceva's Theorem.
I have been trying to answer this as an AS-level question, using only methods already shown in the course book. I'm sorry for the lousy handwriting, hope it's readable.
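For comparison, here is a compact vector-algebra argument (my own addition, not from any of the answers above) that all three medians pass through a single point. Let $A$, $B$, $C$ denote the position vectors of the vertices. The midpoint of $BC$ is $M_A = \tfrac{1}{2}(B + C)$, and the point two-thirds of the way from $A$ along the median $AM_A$ is
\[
  A + \tfrac{2}{3}\bigl(M_A - A\bigr)
    = A + \tfrac{2}{3}\left(\tfrac{B + C}{2} - A\right)
    = \frac{A + B + C}{3}.
\]
The result is symmetric in $A$, $B$, $C$, so the same point lies two-thirds of the way along each of the three medians; hence they are concurrent at the centroid $G = \tfrac{1}{3}(A + B + C)$. This also recovers the corollary $GD = AD/3$ given above.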
<urn:uuid:4075674c-9ed1-4745-b321-86fff9822312>
3.234375
499
Q&A Forum
Science & Tech.
84.259844
95,551,846
Younger Dryas impact hypothesis
The Younger Dryas impact hypothesis or Clovis comet hypothesis originally proposed that a large air burst or Earth impact of one or more comets initiated the Younger Dryas cold period about 12,900 calibrated (10,900 14C uncalibrated) years BP. The hypothesis has been contested by research showing that most of the conclusions cannot be repeated by other scientists, and criticized because of misinterpretation of data and the lack of confirmatory evidence.
The current impact hypothesis states that the air burst(s) or impact(s) of a swarm of carbonaceous chondrites or comet fragments set areas of the North American continent on fire, causing the extinction of most of the megafauna in North America and the demise of the North American Clovis culture after the last glacial period. The Younger Dryas ice age lasted for about 1,200 years before the climate warmed again. This swarm is hypothesized to have exploded above or possibly on the Laurentide Ice Sheet in the region of the Great Lakes, though no impact crater has yet been identified and no physical model by which such a swarm could form or explode in the air has been proposed. Nevertheless, the proponents suggest that it would be physically possible for such an air burst to have been similar to, but orders of magnitude larger than, the Tunguska event of 1908. The hypothesis proposed that animal and human life in North America not directly killed by the blast or the resulting coast-to-coast wildfires would likely have starved on the burned surface of the continent.
The evidence cited for an impact event includes charred carbon-rich layers of soil that have been found at some 50 Clovis sites across the continent. The layers contain unusual materials (nanodiamonds, metallic microspherules, carbon spherules, magnetic spherules, iridium, platinum, charcoal, soot and fullerenes enriched in helium-3), found at the very bottom of the black mats of organic material that mark the beginning of the Younger Dryas, which are interpreted as potential evidence of an impact event; it is claimed these cannot be explained by volcanic, anthropogenic, or other natural processes. Research at Lake Cuitzeo, in the central Mexican state of Guanajuato, has been reported as supporting a modified version of the Younger Dryas impact hypothesis--involving a much smaller, non-cometary impactor--based on evidence found in lake bed cores dating to 12,900 BP. The reported evidence included nanodiamonds (including the hexagonal form called lonsdaleite), carbon spherules, and magnetic spherules. Multiple hypotheses were examined to account for these observations, though none of the terrestrial explanations was believed to account for them. Lonsdaleite occurs naturally in asteroids and cosmic dust and as a result of extraterrestrial impacts on Earth. The analysis of the study has not been confirmed or repeated by other researchers. Lonsdaleite has also been made artificially in laboratories. A 100-fold spike in the concentration of platinum has also been found in Greenland ice cores, dated to 12,890 BP with 5-year accuracy. This is interpreted as evidence against the Younger Dryas impact hypothesis by the study's authors, but cited as evidence for the hypothesis by its proponents.
Consequences of hypothetical impact
It is conjectured that this impact event brought about the extinction of many species of North American Pleistocene megafauna.
These animals included camels, mammoths, the giant short-faced bear and numerous other species that the proponents suggest died out at this time. The proposed markers for the impact event are claimed to appear at the end of the Clovis culture.
History of the hypothesis
The initial description of this hypothesis was published in a 2006 book. The following year, a paper with the same principal authors suggested that the impact event may have led to an immediate decline in human populations in North America at that time. Additional data purported to support the synchronous nature of the black mats was published. The authors stated that the data required further analysis, and independent analysis of other Clovis sites for verification of this evidence. The authors stated that they remained skeptical of the bolide impact hypothesis as the cause of the Younger Dryas and the megafaunal extinction. They also concluded that "...something major happened at 10,900 B.P. (14C uncalibrated) that we have yet to understand."
Transmission electron microscopy evidence purported to show nanodiamonds from a layer assumed to correspond to the geologic moment of the event was published in the journal Science. Also, in the same issue, D.J. Kennett reported that the nanodiamonds were evidence for bolide impacts from a rare swarm of carbonaceous chondrites or comets at the start of the Younger Dryas, resulting from multiple airbursts and surface impacts. This resulted in substantial loss of plant life, megafauna and other animals. This study has been strenuously disputed by some scientists for a variety of technical and professional reasons. Daulton's skepticism increased with the revelation of documentation demonstrating misconduct and past criminal conduct (conviction for fraud and misrepresentation of credentials) by the researcher who prepared samples for the proponents of the hypothesis. However, those charges were later dismissed and expunged by the court. The disputing scientists claim that the study's conclusions could not be repeated, that further research suggests that no nanodiamonds were found, and that the supposed carbon spherules were, in fact, either fungus or insect feces and included modern contaminants. A re-evaluation of spherules from 18 sites worldwide, published by the original proponents in June 2013, supports their hypothesis. Further analysis of Younger Dryas boundary sediments at 9 sites, released in June 2016, found no evidence of an extraterrestrial impact at the YDB. In December 2016, an analysis of nanodiamond evidence failed to uncover lonsdaleite or a spike in nanodiamond concentration at the YDB. Radiocarbon dating, microscopy of paleobotanical samples, and analytical pyrolysis of fluvial sediments "[found] no evidence in Arlington Canyon for an extraterrestrial impact or catastrophic impact-induced fire." The exposed fluvial sequences in Arlington Canyon on Santa Rosa Island "features centrally in the controversial hypothesis of an extra-terrestrial impact at the onset of the Younger Dryas." A study of Paleoindian demography found no evidence of a population decline among the Paleoindians at 12,900 ± 100 BP, which was inconsistent with predictions of an impact event; the authors suggested that the hypothesis would probably need to be revised. There is also no evidence of continent-wide wildfires at any time during terminal Pleistocene deglaciation, though there is evidence that most larger wildfires had a human origin, which calls into question the origin of the "black mat."
Iridium, magnetic minerals, microspherules, carbon, and nanodiamonds are all subject to differing interpretations as to their nature and origin, and may be explained in many cases by purely terrestrial or non-catastrophic factors. If it is assumed that all effects of the putative impact on Earth's biota would have been brief, all extinctions caused by the impact should have occurred simultaneously. However, there is much evidence that the megafaunal extinctions that occurred across northern Eurasia, North America and South America at the end of the Pleistocene were not synchronous. The extinctions in South America appear to have occurred at least 400 years after the extinctions in North America. The extinction of woolly mammoths in Siberia also appears to have occurred later than in North America. A greater disparity in extinction timings is apparent in island megafaunal extinctions that lagged nearby continental extinctions by thousands of years; examples include the survival of woolly mammoths on Wrangel Island, Russia, until 3700 BP, and the survival of ground sloths in the Antilles, in the Caribbean, until 4700 cal BP. The Australian megafaunal extinctions occurred approximately 30,000 years earlier than the hypothetical Younger Dryas event.
The megafaunal extinction pattern observed in North America poses a problem for the bolide impact scenario, since it raises the question why large mammals should be preferentially exterminated over small mammals or other vertebrates. Additionally, some extant megafaunal species such as bison and brown bear seem to have been little affected by the extinction event, while the environmental devastation caused by a bolide impact would not be expected to discriminate. It also appears that there was a collapse in the North American megafaunal population from 14,800 to 13,700 BP, well before the date of the hypothetical extraterrestrial impact, possibly from anthropogenic activities, including hunting.
Scientists have asserted that the carbon spherules originated as fungal structures and/or insect fecal pellets and contained modern contaminants, and that the claimed nanodiamonds are actually misidentified graphene and graphene/graphane oxide aggregates. An analysis of a similar Younger Dryas boundary layer in Belgium yielded carbon crystalline structures such as nanodiamonds, but the authors concluded that these also did not show unique evidence for a bolide impact. Researchers also have not found any extraterrestrial platinum-group metals in the boundary layer, which is inconsistent with the hypothesized impact event. Further independent analysis was unable to confirm prior claims of magnetic particles and microspherules, concluding that there was no evidence for a Younger Dryas impact event. Other research has likewise shown no support for the impact hypothesis: one group examined carbon-14 dates for charcoal particles and showed that wildfires occurred well after the proposed impact date, that the glass-like carbon was produced by wildfires, and that no lonsdaleite was present. Analysis of fluvial sediments on Santa Rosa Island by another group also found no evidence of lonsdaleite, impact-induced fires, or an extraterrestrial impact. Research published in 2012 has shown that the so-called "black mats" are easily explained by typical earth processes in wetland environments.
A study of black mats, which are common in prehistoric wetland deposits representing shallow marshlands formed between 6,000 and 40,000 years ago in the southwestern USA and the Atacama Desert in Chile, showed elevated concentrations of iridium and magnetic sediments, magnetic spherules and titanomagnetite grains. Because these markers are found within or at the base of black mats irrespective of age or location, it was suggested that they arise from processes common to wetland systems, and probably not as a result of catastrophic bolide impacts. A 2013 study found a spike in platinum in Greenland ice; its authors concluded that such a small impact of an iron meteorite is "unlikely to result in an airburst or trigger wide wildfires proposed by the YDB impact hypothesis." Finally, researchers have criticized the conclusions of various studies for incorrect age-dating of the sediments, contamination by modern carbon, an inconsistent hypothesis that makes it difficult to predict the type and size of the bolide, lack of proper identification of lonsdaleite, confusion of an extraterrestrial impact with other causes such as fire, and inconsistent use of the carbon spherule "proxy". Naturally occurring lonsdaleite has also been identified in non-bolide diamond placer deposits in the Sakha Republic.

See also: - Pleistocene megafauna - Holocene extinction event - Tollmann's hypothetical bolide - Murray Springs Clovis Site

References: - Firestone, Richard; West, Allen; Warwick-Smith, Simon (4 June 2006). The Cycle of Cosmic Catastrophes: How a Stone-Age Comet Changed the Course of World Culture. Bear & Company. p. 392. ISBN 1591430615. - Firestone RB, West A, Kennett JP; et al. (October 2007). "Evidence for an extraterrestrial impact 12,900 years ago that contributed to the megafaunal extinctions and the Younger Dryas cooling". Proc. Natl. Acad. Sci. U.S.A. 104 (41): 16016–21. Bibcode:2007PNAS..10416016F. doi:10.1073/pnas.0706977104. PMC . PMID 17901202. - Bunch TE, Hermes RE, Moore AM; et al. (June 2012). "Very high-temperature impact melt products as evidence for cosmic airbursts and impacts 12,900 years ago". Proc Natl Acad Sci U S A. 109 (28): E1903–12. Bibcode:2012PNAS..109E1903B. doi:10.1073/pnas.1204453109. PMC . PMID 22711809. - Kerr, R. A. (3 September 2010). "Mammoth-Killer Impact Flunks Out". Science. 329 (5996): 1140–1. Bibcode:2010Sci...329.1140K. doi:10.1126/science.329.5996.1140. PMID 20813931. - Pinter, Nicholas; Scott, Andrew C.; Daulton, Tyrone L.; Podoll, Andrew; Koeberl, Christian; Anderson, R. Scott; Ishman, Scott E. (2011). "The Younger Dryas impact hypothesis: A requiem". Earth-Science Reviews. 106 (3–4): 247. Bibcode:2011ESRv..106..247P. doi:10.1016/j.earscirev.2011.02.005. - Pigati JS; Latorre C; Rech JA; Betancourt JL; Martínez KE; Budahn JR (April 2012). "Accumulation of impact markers in desert wetlands and implications for the Younger Dryas impact hypothesis". Proc Natl Acad Sci U S A. 109 (19): 7208–12. Bibcode:2012PNAS..109.7208P. doi:10.1073/pnas.1200296109. PMC . PMID 22529347. Retrieved 7 February 2017. - Boslough, M.; K. Nicoll; V. Holliday; T. L. Daulton; D. Meltzer; N. Pinter; A. C. Scott; T. Surovell; P. Claeys; J. Gill; F. Paquay; J. Marlon; P. Bartlein; C. Whitlock; D. Grayson & A. J. T. Jull (2012). "Arguments and Evidence Against a Younger Dryas Impact Event" (PDF). Geophysical Monograph Series. 198: 13–26. Retrieved 7 February 2017. - Kennett DJ, Kennett JP, West A; et al. (January 2009). "Nanodiamonds in the Younger Dryas boundary sediment layer".
Science. 323 (5910): 94. Bibcode:2009Sci...323...94K. doi:10.1126/science.1162819. PMID 19119227. - Dalton, Rex (17 May 2007). "Archaeology: Blast in the past?" (PDF). Nature. 447 (7142): 256–7. Bibcode:2007Natur.447..256D. doi:10.1038/447256a. PMID 17507957. News article in Nature - Wittke, James H. (20 May 2013). "Evidence for deposition of 10 million tonnes of impact spherules across four continents 12,800 y ago" (PDF). Proceedings of the National Academy of Sciences. 110 (23): E2088–E2097. Bibcode:2013PNAS..110E2088W. doi:10.1073/pnas.1301760110. PMC . PMID 23690611. - Israde-Alcántara I, Bischoff JL, Domínguez-Vázquez G; et al. (March 2012). "Evidence from central Mexico supporting the Younger Dryas extraterrestrial impact hypothesis". Proc. Natl. Acad. Sci. U.S.A. 109 (13): E738–47. Bibcode:2012PNAS..109E.738I. doi:10.1073/pnas.1110614109. PMC . PMID 22392980. - Bundy, F. P. (1967). "Hexagonal Diamond—A New Form of Carbon". Journal of Chemical Physics. 46 (9): 3437. Bibcode:1967JChPh..46.3437B. doi:10.1063/1.1841236. - Kaminskii, F.V., G.K. Blinova, E.M. Galimov, G.A. Gurkina, Y.A. Klyuev, L.A. Kodina, V.I. Koptil, V.F. Krivonos, L.N. Frolova, and A.Y. Khrenov (1985). "Polycrystalline aggregates of diamond with lonsdaleite from Yakutian [Sakhan] placers". Mineral. Zhurnal. 7: 27–36. - Simon Redfern (1 August 2013). "Ice core data supports ancient space impact idea". BBC. - Michail I. Petaev, Shichun Huang, Stein B. Jacobsen, Alan Zindler (2013). "Large Pt anomaly in the Greenland ice core points to a cataclysm at the onset of Younger Dryas". Proc. Natl. Acad. Sci. U.S.A. 110 (32): 12917–12920. Bibcode:2013PNAS..11012917P. doi:10.1073/pnas.1303924110. PMC . - Haynes, G (5 November 2010). "The catastrophic extinction of North American mammoths and mastodonts". World Archeology. 33 (3): 391–416. doi:10.1080/00438240120107440. - Carrasco MA, Barnosky AD, Graham RW (2009). "Quantifying the Extent of North American Mammal Extinction Relative to the Pre-Anthropogenic Baseline". PLOS One. 4 (12): e8331. Bibcode:2009PLoSO...4.8331C. doi:10.1371/journal.pone.0008331. PMC . PMID 20016820. - Haynes CV (May 2008). "Younger Dryas "black mats" and the Rancholabrean termination in North America". Proc. Natl. Acad. Sci. U.S.A. 105 (18): 6520–5. Bibcode:2008PNAS..105.6520H. doi:10.1073/pnas.0800560105. PMC . PMID 18436643. - Kerr, Richard A. (2 January 2009). "Did the Mammoth Slayer Leave a Diamond Calling Card?". Science. 323 (5910): 26. doi:10.1126/science.323.5910.26. PMID 19119192. - Dalton R (2011). "Comet Theory Comes Crashing to Earth". Miller-McCune. Archived from the original on 7 April 2012. Retrieved 15 April 2012. - "Allen West smeared by Dalton, former Nature writer". - "2010 Court document" (PDF). - Daulton, T. L.; Pinter, N.; Scott, A. C. (30 August 2010). "No evidence of nanodiamonds in Younger–Dryas sediments to support an impact event". Proc. Natl. Acad. Sci. U.S.A. 107 (37): 16043–7. Bibcode:2010PNAS..10716043D. doi:10.1073/pnas.1003904107. PMC . PMID 20805511. - Roach, John (22 June 2010). "Fungi, Feces Show Comet Didn't Kill Ice Age Mammals?". National Geographic Daily News. National Geographic Society. Retrieved 25 June 2010. - Wittke; et al. (June 2013). "Evidence for deposition of 10 million tonnes of impact spherules across four continents 12,800 y ago". Proc. Natl. Acad. Sci. U.S.A. 110 (23): E2088–E2097. Bibcode:2013PNAS..110E2088W. doi:10.1073/pnas.1301760110. PMC . PMID 23690611. - Holliday, Vance; Surovell, Todd; Johnson, Eileen (2016-07-08). 
"A Blind Test of the Younger Dryas Impact Hypothesis". PLOS One. 11 (7): e0155470. Bibcode:2016PLoSO..1155470H. doi:10.1371/journal.pone.0155470. ISSN 1932-6203. PMC . PMID 27391147. - Daulton, Tyrone L.; Amari, Sachiko; Scott, Andrew C.; Hardiman, Mark; Pinter, Nicholas; Anderson, R. Scott (2017-01-01). "Comprehensive analysis of nanodiamond evidence relating to the Younger Dryas Impact Hypothesis". Journal of Quaternary Science. 32 (1): 7–34. Bibcode:2017JQS....32....7D. doi:10.1002/jqs.2892. ISSN 1099-1417. - Scott, Andrew C.; Hardiman, Mark; Pinter, Nicholas; Anderson, R. Scott; Daulton, Tyrone L.; Ejarque, Ana; Finch, Paul; Carter-champion, Alice (2017-01-01). "Interpreting palaeofire evidence from fluvial sediments: a case study from Santa Rosa Island, California, with implications for the Younger Dryas Impact Hypothesis". Journal of Quaternary Science. 32 (1): 35–47. Bibcode:2017JQS....32...35S. doi:10.1002/jqs.2914. ISSN 1099-1417. - Wolbach, Wendy S; Ballard, Joanne P; et al. (2018). "Extraordinary Biomass-Burning Episode and Impact Winter Triggered by the Younger Dryas Cosmic Impact ∼12,800 Years Ago. 1. Ice Cores and Glaciers". Journal of Geology. 126 (2): 165–184. Bibcode:2018JG....126..165W. doi:10.1086/695703. - Wolbach, Wendy S; Ballard, Joanne P; et al. (2018). "Extraordinary Biomass-Burning Episode and Impact Winter Triggered by the Younger Dryas Cosmic Impact ∼12,800 Years Ago. 2. Lake, Marine, and Terrestrial Sediments". Journal of Geology. 126 (2): 185–205. Bibcode:2018JG....126..185W. doi:10.1086/695704. - Holliday VT, Meltzer DJ (2010). "The 12.9-ka ET Impact Hypothesis and North American Paleoindians" (pdf). Current Anthropology. 51 (5): 575–606. doi:10.1086/656015. Retrieved 10 April 2012. - Buchanan B, Collard M, Edinborough K (August 19, 2008). "Paleoindian demography and the extraterrestrial impact hypothesis". Proc. Natl. Acad. Sci. U.S.A. 105 (33): 11651–4. Bibcode:2008PNAS..10511651B. doi:10.1073/pnas.0803762105. PMC . PMID 18697936. - Gary Haynes (2009). American megafaunal extinctions at the end of the Pleistocene. Springer. p. 125. ISBN 978-1-4020-8792-9. Retrieved 20 April 2012. - Marlon J.R.; et al. (2009). "Wildfire responses to abrupt climate change in North America". Proc. Natl. Acad. Sci. U.S.A. 106 (8): 2519–24. Bibcode:2009PNAS..106.2519M. doi:10.1073/pnas.0808212106. PMC . PMID 19190185. - Perkins S (23 April 2012). "No Love for Comet Wipeout - ScienceNOW". Retrieved 28 April 2012. - Pinter N., Ishman S.E (2008). "Impacts, mega-tsunami, and other extraordinary claims". GSA Today. 18 (1): 37–38. doi:10.1130/GSAT01801GW.1. - Haynes, Gary (2009). "Introduction to the Volume". In Haynes, Gary. American Megafaunal Extinctions at the End of the Pleistocene. Springer. pp. 1–20. doi:10.1007/978-1-4020-8793-6_1. ISBN 978-1-4020-8792-9. - Fiedel, Stuart (2009). "Sudden Deaths: The Chronology of Terminal Pleistocene Megafaunal Extinction". In Haynes, Gary. American Megafaunal Extinctions at the End of the Pleistocene. Springer. pp. 21–37. doi:10.1007/978-1-4020-8793-6_2. ISBN 978-1-4020-8792-9. - Hubbe A, Hubbe M, Neves W (2007). "Early Holocene survival of megafauna in South America". Journal of Biogeography. 34 (9): 1642–1646. doi:10.1111/j.1365-2699.2007.01744.x. - Stuart AJ, Kosintsev PA, Higham TF, Lister AM (October 2004). "Pleistocene to Holocene extinction dynamics in giant deer and woolly mammoth". Nature. 431 (7009): 684–9. Bibcode:2004Natur.431..684S. doi:10.1038/nature02890. PMID 15470427. - Martin, Paul (2005). 
"4 Ground Sloths at Home Cryptozoology, Ground Sloths, and Mapinguari National Park". Twilight of the mammoths: ice age extinctions and the rewilding of America. Berkeley: University of California Press. ISBN 0-520-23141-4. - Barnosky AD (August 2008). "Colloquium paper: Megafauna biomass tradeoff as a driver of Quaternary and future extinctions". Proc. Natl. Acad. Sci. U.S.A. 105 Suppl 1: 11543–8. Bibcode:2008PNAS..10511543B. doi:10.1073/pnas.0801918105. PMC . PMID 18695222. - Scott, E. (2010). "Extinctions, scenarios, and assumptions: Changes in latest Pleistocene large herbivore abundance and distribution in western North America". Quat. Int. 217 (1–2): 225. Bibcode:2010QuInt.217..225S. doi:10.1016/j.quaint.2009.11.003. - Gill JL, Williams JW, Jackson ST, Lininger KB, Robinson GS (November 2009). "Pleistocene megafaunal collapse, novel plant communities, and enhanced fire regimes in North America". Science. 326 (5956): 1100–3. Bibcode:2009Sci...326.1100G. doi:10.1126/science.1179504. PMID 19965426. - Kerr, Richard A. (30 October 2010). "Mammoth-Killer Impact Rejected". Science NOW. AAAS. Retrieved 31 August 2010. - Tian H, Schryvers D, Claeys P (January 2011). "Nanodiamonds do not provide unique evidence for a Younger Dryas impact". Proc. Natl. Acad. Sci. U.S.A. 108 (1): 40–4. Bibcode:2011PNAS..108...40T. doi:10.1073/pnas.1007695108. PMC . PMID 21173270. - Paquay FS, Goderis S, Ravizza G; et al. (December 2009). "Absence of geochemical evidence for an impact event at the Bølling-Allerød/Younger Dryas transition". Proc. Natl. Acad. Sci. U.S.A. 106 (51): 21505–10. Bibcode:2009PNAS..10621505P. doi:10.1073/pnas.0908874106. PMC . PMID 20007789. - Surovell TA; Holliday VT; Gingerich JA; Ketron C; Haynes CV, Jr.; Hilman I; Wagner DP; Johnson E & Claeyse P (October 2009). "An independent evaluation of the Younger Dryas extraterrestrial impact hypothesis". Proc. Natl. Acad. Sci. U.S.A. 106 (43): 18155–8. Bibcode:2009PNAS..10618155S. doi:10.1073/pnas.0907857106. PMC . PMID 19822748. - van Hoesel A, Hoek WZ, Braadbaart F, van der Plicht J, Pennock GM, Drury MR (May 2012). "Nanodiamonds and wildfire evidence in the Usselo horizon postdate the Allerod-Younger Dryas boundary". Proc. Natl. Acad. Sci. U.S.A. 109 (20): 7648–53. Bibcode:2012PNAS..109.7648V. doi:10.1073/pnas.1120950109. PMC . PMID 22547791. - Blaauw M, Holliday VT, Gill JL, Nicoll K (July 2012). "Age models and the Younger Dryas Impact Hypothesis". Proc Natl Acad Sci U S A. 109 (34): E2240; author reply E2245–7. Bibcode:2012PNAS..109E2240B. doi:10.1073/pnas.1206143109. PMC . PMID 22829673. - Boslough M (July 2012). "Inconsistent impact hypotheses for the Younger Dryas". Proc Natl Acad Sci U S A. 109 (34): E2241; author reply E2245–7. Bibcode:2012PNAS..109E2241B. doi:10.1073/pnas.1206739109. PMC . PMID 22829675. - Daulton TL (July 2012). "Suspect cubic diamond "impact" proxy and a suspect lonsdaleite identification". Proc Natl Acad Sci U S A. 109 (34): E2242; author reply E2245–7. Bibcode:2012PNAS..109E2242D. doi:10.1073/pnas.1206253109. PMC . PMID 22829671. - Gill JL, Blois JL, Goring S; et al. (July 2012). "Paleoecological changes at Lake Cuitzeo were not consistent with an extraterrestrial impact". Proc Natl Acad Sci U S A. 109 (34): E2243; author reply E2245–7. Bibcode:2012PNAS..109E2243G. doi:10.1073/pnas.1206196109. PMC . PMID 22829674. - Hardiman M, Scott AC, Collinson ME, Anderson RS (July 2012). "Inconsistent redefining of the carbon spherule "impact" proxy". Proc Natl Acad Sci U S A. 109 (34): E2244; author reply E2245–7. 
Bibcode:2012PNAS..109E2244H. doi:10.1073/pnas.1206108109. PMC . PMID 22829672. - James Kennett, UC Santa Barbara, May 21, 2013, Comprehensive Analysis of Impact Spherules Supports Theory of Cosmic Impact 12,800 Years Ago - Holliday, V. T., 2011, A Cosmic Catastrophe: The Great Clovis Comet Debate: A personal perspective on an Outrageous Hypothesis. Argonaut Archaeological Research Fund, Department of Anthropology at the University of Arizona, University of Arizona, Tucson, Arizona. - Pringle, H., 2008, Firestorm from space wiped out prehistoric Americans. The New Scientist. vol. 194, no. 2605, pp. 8–9. - West, A., and A. Goodyear, 2008, The Clovis Comet: Part I:Evidence for a Cosmic Collision 12,900 Years Ago. Mammoth Trumpet. v. 23, no. 1, pp. 1–4. - "Younger Dryas Boundary: Extraterrestrial Impact or Not" (pdf). www.georgehoward.net. Retrieved 15 April 2012. - Hoffman, Carey (2 July 2008). "Exploding Asteroid Theory Strengthened by New Evidence Located in Ohio, Indiana". University of Cincinnati. Retrieved 5 August 2008. - "Science & Environment: Diamond clues to beasts' demise". BBC NEWS. Retrieved 15 April 2012. - "Sciency Thoughts: Evidence for a Younger Dryas impact event?". Retrieved 15 April 2012. - "The Younger Dryas Impact Hypothesis". Scientific American Blog Network. Retrieved 15 April 2012. - "New Clovis-Age Comet Impact Theory". Retrieved 15 April 2012.
<urn:uuid:0e715e6c-3f43-4168-9b3a-753de2889060>
4.125
7,190
Knowledge Article
Science & Tech.
69.913949
95,551,854
Welcome back to Messier Monday! Today, we continue in our tribute to our dear friend, Tammy Plotner, by looking at the intermediate spiral galaxy known as Messier 66. In the 18th century, while searching the night sky for comets, French astronomer Charles Messier kept noting the presence of fixed, diffuse objects he initially mistook for comets. In time, he would come to compile a list of approximately 100 of these objects, hoping to prevent other astronomers from making the same mistake. This list – known as the Messier Catalog – would go on to become one of the most influential catalogs of Deep Sky Objects. One of these objects is the intermediate spiral galaxy known as Messier 66 (NGC 3627). Located about 36 million light-years from Earth in the direction of the Leo constellation, this galaxy measures 95,000 light-years in diameter. It is also the brightest and largest member of the Leo Triplet of galaxies and is well-known for its bright star clusters, dust lanes, and associated supernovae. Enjoying life some 35 million light years from the Milky Way, the group known as the "Leo Trio" is home to bright galaxy Messier 66 – the easternmost of the two M objects. In the telescope or binoculars, you'll find this barred spiral galaxy far more visible and much easier to see details within its knotted arms and bulging core. Because of interaction with its neighboring galaxies, M66 shows signs of an extremely high central mass concentration as well as a resolved noncorotating clump of H I material apparently removed from one of the spiral arms. Even one of its spiral arms got it noted in Halton Arp's collection of Peculiar Galaxies! So exactly what did it collide with? As Xiaolei Zhang (et al) indicated in a 1993 study: "The combined CO and H I data provide new information, both on the history of the past encounter of NGC 3627 with its companion galaxy NGC 3628 and on the subsequent dynamical evolution of NGC 3627 as a result of this tidal interaction. In particular, the morphological and kinematic information indicates that the gravitational torque experienced by NGC 3627 during the close encounter triggered a sequence of dynamical processes, including the formation of prominent spiral structures, the central concentration of both the stellar and gas mass, the formation of two widely separated and outwardly located inner Lindblad resonances, and the formation of a gaseous bar inside the inner resonance. These processes in coordination allow the continuous and efficient radial mass accretion across the entire galactic disk. The observational result in the current work provides a detailed picture of a nearby interacting galaxy which is very likely in the process of evolving into a nuclear active galaxy. It also suggests one of the possible mechanisms for the formation of successive instabilities in postinteraction galaxies, which could very efficiently channel the interstellar medium into the center of the galaxy to fuel nuclear starburst and Seyfert activities." Ah, yes! Star forming regions… And what better way to look deeper than through the eyes of the Spitzer Space Telescope? As R. Kennicutt (University of Arizona) and the SINGS Team observed: "M66's blue core and bar-like structure illustrates a concentration of older stars. While the bar seems devoid of star formation, the bar ends are bright red and actively forming stars.
A barred spiral offers an exquisite laboratory for star formation because it contains many different environments with varying levels of star-formation activity, e.g., nucleus, rings, bar, the bar ends and spiral arms. The SINGS image is a four-channel false-color composite, where blue indicates emission at 3.6 microns, green corresponds to 4.5 microns, and red to 5.8 and 8.0 microns. The contribution from starlight (measured at 3.6 microns) in this picture has been subtracted from the 5.8 and 8 micron images to enhance the visibility of the dust features." Messier 66 has also been deeply studied for evidence of forming super star clusters. As David Meier indicated: "Super star clusters are thought to be precursors of globular clusters and are some of the most extreme star formation regions in the universe. They tend to occur in actively starbursting galaxies or near the cores of less active galaxies. Radio super star clusters cannot be seen in optical light because of extreme extinction, but they shine brightly in infrared and radio observations. We can be certain that there are many massive O stars in these regions because massive stars are required to provide the UV radiation that ionizes the gas and creates thermally bright HII regions. Not many natal SSCs are currently known, so detection is an important science goal in its own right. In particular, very few SSCs are known in galactic disks. We need more detections to be able to make statistical statements about SSCs and fill in the mass range of forming star clusters. With more detections, we will be able to investigate the effects of other environments (e.g. bars, bubbles, and galactic interaction) on SSCs, which could potentially be followed up in the far future with the Square Kilometer Array to discover their effects on individual forming massive stars." But there's still more. Try magnetic properties in M66's spiral patterns. As M. Soida (et al) indicated in their 2001 study: "By observing the interacting galaxy NGC 3627 in radio polarization we try to answer the question: to which degree does the magnetic field follow the galactic gas flow. We obtained total power and polarized intensity maps at 8.46 GHz and 4.85 GHz using the VLA in its compact D-configuration. In order to overcome the zero-spacing problems, the interferometric data were combined with single-dish measurements obtained with the Effelsberg 100-m radio telescope. The observed magnetic field structure in NGC 3627 suggests that two field components are superposed. One component smoothly fills the interarm space and shows up also in the outermost disk regions, the other component follows a symmetric S-shaped structure. In the western disk the latter component is well aligned with an optical dust lane, following a bend which is possibly caused by external interactions. However, in the SE disk the magnetic field crosses a heavy dust lane segment, apparently being insensitive to strong density-wave effects. We suggest that the magnetic field is decoupled from the gas by high turbulent diffusion, in agreement with the large Hi line width in this region. We discuss in detail the possible influence of compression effects and non-axisymmetric gas flows on the general magnetic field asymmetries in NGC 3627.
On the basis of the Faraday rotation distribution we also suggest the existence of a large ionized halo around this galaxy."

History of Observation:

Both M65 and M66 were discovered on the same night – March 1, 1780 – by Charles Messier, who described M66 as, "Nebula discovered in Leo; its light is very faint and it is very close to the preceding: They both appear in the same field in the refractor. The comet of 1773 and 1774 has passed between these two nebulae on November 1 to 2, 1773. M. Messier didn't see them at that time, no doubt, because of the light of the comet." Both galaxies would be observed and cataloged by the Herschel family and further expounded upon by Admiral Smyth: "A large elongated nebula, with a bright nucleus, on the Lion's haunch, trending np [north preceding, NW] and sf [south following, SE]; this beautiful specimen of perspective lies just 3deg south-east of Theta Leonis. It is preceded at about 73s by another of a similar shape, which is Messier's No. 65, and both are in the field at the same time, under a moderate power, together with several stars. They were pointed out by Mechain to Messier in 1780, and they appeared faint and hazy to him. The above is their appearance in my instrument. "These inconceivably vast creations are followed, exactly on the same parallel, at Delta AR=174s, by another elliptical nebula of even a more stupendous character as to apparent dimensions. It was discovered by H. [John Herschel], in sweeping, and is No. 875 in his Catalogue of 1830 [actually, probably an erroneous position for re-observed M66]. The two preceding of these singular objects were examined by Sir William Herschel, and his son [JH] also; and the latter says, "The general form of elongated nebulae is elliptic, and their condensation towards the centre is almost invariably such as would arise from the superposition of luminous elliptic strata, increasing in density towards the centre. In many cases the increase of density is obviously attended with a diminution of ellipticity, or a nearer approach to the globular form in the central than in the exterior strata." He then supposes the general constitution of those nebulae to be that of oblate spheroidal masses of every degree of flatness from the sphere to the disk, and of every variety in respect of the law of their density, and ellipticity towards the centre. This must appear startling and paradoxical to those who imagine that the forms of these systems are maintained by forces identical with those which determine the form of a fluid mass in rotation; because, if the nebulae be only clusters of discrete stars, as in the greater number of cases there is every reason to believe them to be, no pressure can propagate through them. Consequently, since no general rotation of such a system as one mass can be supposed, Sir John suggests a scheme which he shows is not, under certain conditions, inconsistent with the law of gravitation. "It must rather be conceived," he tells us, " as a quiescent form, comprising within its limits an indefinite magnitude of individual constituents, which, for aught we can tell, may be moving one among the other, each animated by its own inherent projectile force, and deflected into an orbit more or less complicated, by the influence of that law of internal gravitation which may result from the compounded attractions of all its parts."

Locating Messier 66:

Even though you might think by its apparent visual magnitude that M66 wouldn't be visible in small binoculars, you'd be wrong.
Surprisingly enough, thanks to its large size and high surface brightness, this particular galaxy is very easy to spot directly between Iota and Theta Leonis. In even 5X30 binoculars under good conditions you'll easily see both it and M65 as two distinct gray ovals. A small telescope will begin to bring out structure in both of these bright and wonderful galaxies, but to get a hint at the "Trio" you'll need at least 6″ in aperture and a good dark night. If you don't spot them right away in binoculars, don't be disappointed – it means you probably don't have good sky conditions, so try again on a more transparent night. The pair is well suited to modestly moonlit nights with larger telescopes. May you equally be attracted to this galactic pair! And here are the quick facts on M66 to help you get started:

Object Name: Messier 66
Alternative Designations: M66, NGC 3627, (a member of the) Leo Trio, Leo Triplet
Object Type: Type Sb Spiral Galaxy
Right Ascension: 11 : 20.2 (h:m)
Declination: +12 : 59 (deg:m)
Distance: 35000 (kly)
Visual Brightness: 8.9 (mag)
Apparent Dimension: 8×2.5 (arc min)

We have written many interesting articles about Messier Objects here at Universe Today. Here's Tammy Plotner's Introduction to the Messier Objects, M1 – The Crab Nebula, and David Dickison's articles on the 2013 and 2014 Messier Marathons.
- NASA – Messier 66
- ESA – Spiral Galaxy Messier 66
- Messier Objects – Messier 66
- Wikipedia – Messier 66
The post Messier 66 – the NGC 3627 Intermediate Spiral Galaxy appeared first on Universe Today.
<urn:uuid:85e1049e-5c6e-44fe-9278-5ab0820a1e24>
3.09375
2,641
Personal Blog
Science & Tech.
44.554614
95,551,859
A Rosetta Stone for Quantum Mechanics with an Introduction to Quantum Computation
by Samuel J. Lomonaco, Jr.
Publisher: arXiv 2000
Number of pages: 97

The purpose of these lecture notes is to provide readers, who have some mathematical background but little or no exposure to quantum mechanics and quantum computation, with enough material to begin reading the research literature in quantum computation and quantum information theory. Download or read it online for free.

by Riley T. Perry
A quantum computing tutorial for everyone, including those who have no background in physics. In quantum computers we exploit quantum effects to compute in ways that are faster or more efficient than, or even impossible, on conventional computers.

by Michele Mosca - arXiv
This text surveys the state of the art in quantum computer algorithms, including both black-box and non-black-box results. A representative sample of quantum algorithms is given. This includes a summary of the early quantum algorithms, etc.

by Richard L Amoroso - viXra.org
From the table of contents: From Concept to Conundrum; Cornucopia of Quantum Logic Gates; Surmounting Uncertainty Supervening Decoherence; Measurement With Certainty; New Classes of Quantum Algorithms; References; and more ...

by Clare Hewitt-Horsman - arXiv
This paper introduces one interpretation of quantum mechanics, a modern 'many-worlds' theory, from the perspective of quantum computation. Reasons for seeking to interpret quantum mechanics are discussed, then the specific theory is introduced.
<urn:uuid:8e08770d-0db5-4786-97c9-798db3520d4c>
2.65625
328
Content Listing
Science & Tech.
25.902119
95,551,863
London: Scientists are closer to demonstrating that DNA can form spontaneously from chemicals thought to be present on the primordial Earth, suggesting that DNA could have predated the birth of life itself. Deoxyribonucleic acid (DNA) is essential to almost all life on Earth, yet most biologists think that life began with ribonucleic acid (RNA). Just like DNA, it stores genetic information. Prebiotic chemists have so far largely ignored DNA, because its complexity suggests it cannot possibly form spontaneously, the New Scientist reported. Conventional wisdom is that RNA-based life eventually switched to DNA because DNA is better at storing information. In other words, RNA organisms made the first DNA. "The story makes more sense if DNA nucleotides were naturally present in the environment. Organisms could have taken up and used them, later developing the tools to make their own DNA once it became clear how advantageous the molecule was - and once natural supplies began to run low," Christopher Switzer of the University of California, Riverside said. RNA can also fold into complex shapes that can clamp onto other molecules and speed up chemical reactions, just like a protein, and it is structurally simpler than DNA, so might be easier to make. In 2009 researchers finally managed to generate RNA using chemicals that probably existed on the early Earth. Matthew Powner, now at University College London, and his colleagues synthesised two of the four nucleotides that make up RNA. Their achievement suggested that RNA may have formed spontaneously - powerful support for the idea that life began in an "RNA world". In his latest work, Powner is trying to make DNA nucleotides through similar methods to those he used to make RNA nucleotides in 2009. Nucleotides consist of a sugar attached to a phosphate and a nitrogen-containing base molecule - these bases are the familiar letters of the genetic code. DNA nucleotides, which link together to form DNA, are harder to make than RNA nucleotides, because DNA uses a different sugar that is tougher to work with. Starting with a mix of chemicals, many of them thought to have been present on the early Earth, Powner has now created a sugar like that in DNA, linked to a molecule called AICA, which is similar to a base. "A DNA nucleotide is just a few years away. It's practically a fait accompli at this point," Switzer said.
<urn:uuid:c12b2396-0fa2-4fe5-8eda-258321f352b4>
3.734375
501
News Article
Science & Tech.
35.708224
95,551,906
C# Access Modifier List

This is a quick reminder sheet for the different access modifiers, their descriptions and scope. An access modifier defines which objects, properties and methods can "see" a particular object, property or method.

|public||A public member is accessible from anywhere. This is the least restrictive access modifier.|
|protected||A protected member is accessible from within the class and all derived classes. No access from the outside is permitted.|
|private||A private member is accessible only from within the same class. Not even derived classes can access it.|
|internal||An internal member is accessible from within any part of the same Microsoft .NET-based assembly. You can think of it as public at the assembly level and private from outside the assembly.|
|protected internal||A protected internal member is accessible from within the current assembly or from within types derived from the containing class.|
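To make the table concrete, here is a minimal, self-contained C# sketch. The Animal, Dog and Program types and all their members are invented purely for illustration; the commented-out lines are the ones the compiler would reject.

using System;

public class Animal
{
    public string Name = "Rex";         // public: visible everywhere
    protected int Age = 3;              // protected: this class and derived classes only
    private bool vaccinated = true;     // private: this class only
    internal string Shelter = "North";  // internal: anywhere in the same assembly
    protected internal int Tag = 42;    // protected internal: same assembly OR derived classes

    public void Check()
    {
        // OK: private members are visible inside their own class.
        Console.WriteLine(vaccinated);
    }
}

public class Dog : Animal // derived class
{
    public void Describe()
    {
        Console.WriteLine(Name);    // OK: public
        Console.WriteLine(Age);     // OK: protected, because Dog derives from Animal
        Console.WriteLine(Shelter); // OK: internal, same assembly
        Console.WriteLine(Tag);     // OK: protected internal
        // Console.WriteLine(vaccinated); // compile error: private to Animal
    }
}

public static class Program // unrelated class in the same assembly
{
    public static void Main()
    {
        var a = new Animal();
        Console.WriteLine(a.Name);    // OK: public
        Console.WriteLine(a.Shelter); // OK: internal, and we are in the same assembly
        Console.WriteLine(a.Tag);     // OK here only because this is the same assembly
        // Console.WriteLine(a.Age);        // compile error: protected, Program is not derived
        // Console.WriteLine(a.vaccinated); // compile error: private
    }
}

Note that a.Tag compiles in Program only because Program lives in the same assembly; from a second assembly, protected internal members would be reachable only from types derived from Animal.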
<urn:uuid:fe7ef733-32ad-45f8-bcba-523f9f1ce6f8>
2.734375
238
Content Listing
Software Dev.
42.036667
95,551,920
May 23, 2017. "NASA Missions Provide New Insights into 'Ocean Worlds' in Our Solar System," announced a NASA press release in April. Life as we know it requires three primary ingredients: liquid water; a source of energy for metabolism; and the right chemical ingredients, primarily carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur. With this finding, Cassini has shown that Enceladus – a small, icy moon a billion miles farther from the sun than Earth – has nearly all of these ingredients for habitability. Come hear Chris Glein of Southwest Research Institute discuss Cassini's discovery of molecular hydrogen, a possible food source for (hypothetical) microbes in Enceladus' salty subsurface sea.
<urn:uuid:d92a4493-971d-4dbe-af82-1c6f1db93955>
3.640625
153
News (Org.)
Science & Tech.
19.63
95,551,923
The Faculty of Natural Resources, PSU, has learned about the death of farmed mussels in Tambon South Kantang, Amphoe Kantang, Trang Province, where there are 89 mussel farmers. The Faculty studied the cause of the deaths, which had begun on August 18, 2009, and found that the area had increased the number of farmed mussels 2-3 times. Overcrowding resulted in poor water circulation, leaving a smaller quantity of nutrients available for each mussel, and in sudden environmental changes. These, in turn, caused stress and eventually weakness in the mussels. When they died and decayed, the water condition deteriorated and bred more parasites and microbes, causing even more rapid and massive deaths. Part of the changes resulted from a decrease in water salinity due to heavy rains in the watersheds. Mussels died most on heavy rainy days, beginning from the rafts 3 kilometers before the village. More of those at the top of the sacks died than those at the bottom, because the changes at that level were more drastic, i.e., the salinity level changed more quickly than at a deeper level. There were no reports of deaths of white sea bass, however, because they can live in water across a range of salinity levels and so were not affected. The rain runoff also carried in sediments of various colors, which are a problem for mussels, which feed by filtering, even though they have mechanisms to manage suspended sediments. The study team has made the following suggestions to mussel farmers. Mussel farming relies on natural settings, so farmers should consider the capacity of the system to support it. This means managing the natural environment to keep it sustainable, balanced and undisturbed, so as not to trigger any phenomenon that has never occurred before. In this case, the farmers will have to reduce the number and the size of the mussel rafts, as well as not make the mussel sacks too overcrowded on each raft.
<urn:uuid:37d055e3-d7f9-483a-b779-2ac2a7a59ea5>
2.828125
989
Content Listing
Science & Tech.
41.078809
95,551,927
Movement and storage of sediment in rivers of the United States and Canada

Sediment in river systems is of interest to earth and water scientists working at problems that span several time scales. On the longest time scale (10⁸ to 10⁹ years), sediment is the major form in which material is transferred from continents to oceans, and the rates at which river sediment has been produced in the geologic past are major concerns of those who study long-term geochemical cycling. At somewhat shorter time scales (10⁶ to 10⁸ years), the properties of the existing sedimentary rocks and deposits are the major clues to past hydrologic and geologic conditions, and, to the extent that the present really is the key to the past, it behooves us to understand how today's observable conditions influence today's river sediments. On a more secular time scale, sediment in rivers is of immediate concern as a reflection of soil erosion, as a major design consideration for reservoir sedimentation, river navigation, and other engineering works, as a transporter of various materials (toxic and otherwise) that are adsorbed onto sediment particles in river systems, and as an influence on the habitat of aquatic wildlife. These secular-scale considerations have prompted research and monitoring activities during the past half century that have provided much of our basic knowledge of sediment in the river systems of North America. Studies of soil erosion and valley sedimentation became especially intensive and extensive during the 1930s; these studies continue today, mostly under the aegis of the national agricultural agencies of the United States and Canada.
<urn:uuid:95a9ec4b-b4a8-4cbd-bc62-a13e78a9c904>
4
316
Academic Writing
Science & Tech.
3.870455
95,551,940
The phonon, like the photon or electron, is a physical particle that travels like a wave, representing mechanical vibration. Phonons transmit everyday sound and heat. Recent progress in phononics has led to the development of new ideas and devices that use phononic properties to control sound and heat, according to a new review in Nature. Martin Maldovan, of the Georgia Institute of Technology, has published a review article on phononics in Nature. Credit: Rob Felt. One application that has scientists buzzing is the possibility of controlling sound waves by designing and fabricating cloaking shells to guide acoustic waves around a certain object – an entire building, perhaps – so that whatever is inside the shell is invisible to the sound waves. The future possibilities for phonons might also address the biggest challenges in energy consumption and buildings today. Understanding and controlling the phononic properties of materials could lead to novel technologies to thermally insulate buildings, reduce environmental noise, transform waste heat into electricity and develop earthquake protection, all by developing new materials to manipulate sound and heat. These ideas are all possible in theory, but to make them a reality, phononics will have to inspire the same level of scientific innovation as electronics, and today that's not the case. "People know about electrons because of computers, and electromagnetic waves because of cell phones, but not so much about phonons," said Martin Maldovan, a research scientist in the School of Chemical and Biomolecular Engineering at the Georgia Institute of Technology. Maldovan's review article appeared online Nov. 13 in the journal Nature. In the article he blends eight different subjects in the field of phononics, describing advances in sonic and thermal diodes, optomechanical crystals, acoustic and thermal cloaking, hypersonic phononic crystals, thermoelectrics and thermocrystals. These technologies "herald the next technological revolution in phononics," he said. All of these areas share a common theme: manipulating mechanical vibrations, but at different frequencies. The hottest field in phononics, Maldovan said, is the development of acoustic and thermal metamaterials. These materials are capable of cloaking sound waves and thermal flows. The phononics approach to cloaking is based on electromagnetic cloaking materials that are already in use for light. Maldovan, formerly a research scientist at the Massachusetts Institute of Technology, also conducts phononics research of his own. This past summer, Maldovan published an article in the journal Physical Review Letters, describing an invention for controlling the conduction of heat through solid objects. Known as thermocrystals, this new area of phononics research seeks to manage heat waves in a similar manner to sound and light waves, by channeling the flow of heat at certain frequencies. The technology could lead to devices that convert heat into energy, or the thermal equivalent of diodes, which could help data centers solve the problem of the massive heat generated by their servers. "The field of Phononics is relatively new, and when you have something new you don't know what you will find," Maldovan said. "You're always thinking 'what can I do with that?'" CITATION: M Maldovan. "Sound and heat revolutions in phononics," (Nature, 2013). DOI:10.1038/nature12608 Brett Israel | EurekAlert!
<urn:uuid:c0a853a6-60d5-40e2-a3cc-c234ec041a95>
3.578125
1,264
Content Listing
Science & Tech.
33.842613
95,551,943
Solving the "faint young sun paradox" -- explaining how early Earth was warm and habitable for life beginning more than 3 billion years ago even though the sun was 20 percent dimmer than today -- may not be as difficult as believed, says a new University of Colorado Boulder study. This is an artist's conception of the Earth during the late Archean, 2.8 billion years ago. Weak solar radiation requires the Earth have increased greenhouse gas amounts to remain warm. CU-Boulder doctoral student Eric Wolf Wolf and CU-Boulder Professor Brian Toon use a three-dimensional climate model to show that the late Archean may have maintained large areas of liquid surface water despite a relatively weak greenhouse. With carbon dioxide levels within constraints deduced from ancient soils, the late Archean may have had large polar ice caps but lower latitudes would have remained temperate and thus hospitable to life. The addition of methane allows the late Archean to warmed to present day mean surface temperatures. Credit: Charlie Meeks In fact, two CU-Boulder researchers say all that may have been required to sustain liquid water and primitive life on Earth during the Archean eon 2.8 billion years ago were reasonable atmospheric carbon dioxide amounts believed to be present at the time and perhaps a dash of methane. The key to the solution was the use of sophisticated three-dimensional climate models that were run for thousands of hours on CU's Janus supercomputer, rather than crude, one-dimensional models used by almost all scientists attempting to solve the paradox, said doctoral student Eric Wolf, lead study author. "It's really not that hard in a three-dimensional climate model to get average surface temperatures during the Archean that are in fact moderate," said Wolf, a doctoral student in CU-Boulder's atmospheric and oceanic sciences department. "Our models indicate the Archean climate may have been similar to our present climate, perhaps a little cooler. Even if Earth was sliding in and out of glacial periods back then, there still would have been a large amount of liquid water in equatorial regions, just like today." Evolutionary biologists believe life arose on Earth as simple cells roughly 3.5 billion years ago, about a billion years after the planet is thought to have formed. Scientists have speculated the first life may have evolved in shallow tide pools, freshwater ponds, freshwater or deep-sea hydrothermal vents, or even arrived on objects from space. A cover article by Wolf and Professor Brian Toon on the topic appears in the July issue of Astrobiology. The study was funded by two NASA grants and by the National Science Foundation, which supports CU-Boulder's Janus supercomputer used for the study. Scientists have been trying to solve the faint young sun paradox since 1972, when Cornell University scientist Carl Sagan -- Toon's doctoral adviser at the time -- and colleague George Mullen broached the subject. Since then there have been many studies using 1-D climate models to try to solve the faint young sun paradox -- with results ranging from a hot, tropical Earth to a "snowball Earth" with runaway glaciation -- none of which have conclusively resolved the problem. "In our opinion, the one-dimensional models of early Earth created by scientists to solve this paradox are too simple -- they are essentially taking the early Earth and reducing it to a single column atmospheric profile," said Toon. "One-dimensional models are simply too crude to give an accurate picture." 
Wolf and Toon used a general circulation model known as the Community Atmospheric Model version 3.0, developed by the National Center for Atmospheric Research in Boulder, which contains 3-D atmosphere, ocean, land, cloud and sea ice components. The two researchers also "tuned up" the model with a sophisticated radiative transfer component that allowed for the absorption, emission and scattering of solar energy and an accurate calculation of the greenhouse effect for the unusual atmosphere of early Earth, where there was no oxygen and no ozone, but lots of CO2 and possibly methane. The simplest solution to the faint sun paradox, which duplicates Earth's present climate, involves maintaining roughly 20,000 parts per million of the greenhouse gas CO2 and 1,000 ppm of methane in the ancient atmosphere some 2.8 billion years ago, said Wolf. While that may seem like a lot compared to today's 400 ppm of CO2 in the atmosphere, geological studies of ancient soil samples support the idea that CO2 likely could have been that high during that time period. Methane is considered to be at least 20 times more powerful as a greenhouse gas than CO2 and could have played a significant role in warming the early Earth as well, said the CU researchers. There are other reasons to believe that CO2 was much higher in the Archean, said Toon, who along with Wolf is associated with CU's Laboratory for Atmospheric and Space Physics. The continental area of Earth was smaller back then, so there was less weathering of the land and a lower release of minerals to the oceans. As a result there was a smaller conversion of CO2 to limestone in the ocean. Likewise, there were no "rooted" land plants in the Archean, which could have accelerated the weathering of the soils and indirectly lowered the atmospheric abundance of CO2, Toon said. Another solution for achieving a habitable but slightly cooler climate under the faint sun conditions is for the Archean atmosphere to have contained roughly 15,000 to 20,000 ppm of CO2 and no methane, said Wolf. "Our results indicate that a weak version of the faint young sun paradox, requiring only that some portion of the planet's surface maintain liquid water, may be resolved with moderate greenhouse gas inventories," the authors wrote in Astrobiology. "Even if half of Earth's surface was below freezing back in the Archean and half was above freezing, it still would have constituted a habitable planet since at least 50 percent of the ocean would have remained open," said Wolf. "Most scientists have not considered that there might have been a middle ground for the climate of the Archean." "The leap from one-dimensional to three-dimensional models is an important step," said Wolf. "Clouds and sea ice are critical factors in determining climate, but the one-dimensional models completely ignore them." Has the faint young sun paradox finally been solved? "I don't want to be presumptuous here," said Wolf. "But we show that the paradox is definitely not as challenging as was believed over the past 40 years. While we can't say definitively what the atmosphere looked like back then without more geological evidence, it is certainly not a stretch at all with our model to get a warm early Earth that would have been hospitable to life." "The Janus supercomputer has been a tremendous addition to the campus, and this early Earth climate modeling project would have been impossible without it," said Toon.
The researchers estimated the project required roughly 6,000 hours of supercomputer computation time, an effort equal to about 10 years on a home computer. Eric Wolf | EurekAlert!
<urn:uuid:9b0cf127-fe9e-4873-b639-a3faafd650a1>
4.0625
2,101
Content Listing
Science & Tech.
40.730957
95,551,967
Insert the thermometer into the can and note the starting temperature. Secure the can with the clamp in mid-air and place the spirit burner underneath, recording which spirit is used. Then start the clock and time how long it takes to reach 50 °C. After the experiment, weigh the spirit burner. Then cool down the can and restart the experiment.
Formulas: I have used the following formulas to calculate the energy released in different forms:
Energy released (J) = mass of water (g) x temperature rise (°C) x 4.2
Energy released (J/g) = energy released (J) / mass of fuel burned (g)
Energy released (J/mol) = energy released (J/g) x molar mass of the molecule (e.g. 46 g for ethanol)
Analysing the results: I found out that as the size of the molecule increases, so does the energy released. My tables of results, the average results and the graph clearly show that as the molecule chain length increases, so does the energy released per mole. The graph shows a steady rate of increase, suggesting my results are correct. My results also show that less fuel is used by the heavier compounds. My prediction was correct: the heavier the alcohol, the more energy it produces, because of its increased chain length – the more carbon and hydrogen atoms there are, the more energy it releases, and the bigger and heavier the molecule is. From my research I know that the number of carbon atoms increases from 1 in methanol to 4 in butanol. The more carbon atoms a molecule had, the more energy it produced.
Evaluation: I believe I took enough results at 50 °C for each compound to get a fairly reliable result. Three results for each compound were enough for me to get a good average. I tried to keep to my original fair-test plan, but because of the different types of spirit burner, on some occasions I had to lower the can. Even though the wick was the same length each time, there were sometimes problems with the flame touching the can and heating it. Sometimes, because of draughts etc., the stability of the flame was disturbed and it pointed in other directions, which may explain the large gap in my ethanol results. The chart is based on the averages, which don't really show any discrepancies; however, individual results do. The difference between results 2 and 3 was over 83,000 J/mol of energy released, which is a very large gap compared with the other results. Although I believe the average is quite accurate, I cannot say the results are totally reliable, as those results didn't fit the pattern. Both methanol and propanol gave consistently good results, but once again there was a large gap of 75,000 J/mol of energy released in the butanol experiment, although two of its results were virtually the same. I believe these anomalies were caused by problems with the flame heating the can. As the flame was yellow, the fuel was not burning completely and not enough oxygen was getting to it. If I were to repeat the experiment, I would use an isolated enclosure supplied with oxygen, so that combustion is complete and the results are more accurate. Draughts and other gases would also be kept out of the reaction. Because of draughts etc. the flame was not always heating the can, so my results may be a little inaccurate for that reason. I might also use the same type of bottle and wick for each experiment, so that the wick is always the same distance from the can of water.
This would make sense, as some bottles were smaller than others and the flames had trouble touching and heating the water. We changed the bottle, but it was still not quite right, and the flames were different for each experiment. This could also be the fault of the wick, as some were thick and others thin. With all these differences I don't believe my results are as accurate as they could be. If I had more time I would extend the range of compounds tested, probably at lower temperatures, to get more accurate results. This would allow me to see further what happens when you increase the number of atoms in a molecule and the effect this has on the amount of energy released. I might also test how long it takes the fuel to heat different amounts of water, or even test a totally different compound.
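For reference, the three formulas above can be checked with a short C program. This is purely an illustrative sketch: the input figures (100 g of water, a 30 °C rise, 0.8 g of ethanol burned) are made-up numbers, not measurements from this experiment.

#include <stdio.h>

/* Energy calculations from the formulas above:
 *   energy (J)     = mass of water (g) x temperature rise (°C) x 4.2
 *   energy (J/g)   = energy (J) / mass of fuel burned (g)
 *   energy (J/mol) = energy (J/g) x molar mass (g/mol)
 */
int main(void) {
    double water_mass_g  = 100.0; /* hypothetical mass of water in the can */
    double temp_rise_c   = 30.0;  /* hypothetical rise, e.g. 20 °C to 50 °C */
    double fuel_burned_g = 0.8;   /* hypothetical mass lost by the burner */
    double molar_mass_g  = 46.0;  /* ethanol, C2H5OH */

    double energy_j  = water_mass_g * temp_rise_c * 4.2;
    double energy_jg = energy_j / fuel_burned_g;
    double energy_jm = energy_jg * molar_mass_g;

    printf("Energy released: %.0f J\n", energy_j);
    printf("Energy per gram: %.0f J/g\n", energy_jg);
    printf("Energy per mole: %.0f J/mol\n", energy_jm);
    return 0;
}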
<urn:uuid:466d4097-b41f-4cdf-af4d-74e3e8bd0d9b>
2.75
908
Academic Writing
Science & Tech.
51.532707
95,551,978
The Danjon Scale
The Danjon Scale of Lunar Eclipse Brightness is a five-point scale used to gauge the appearance and luminosity of the Moon during a lunar eclipse. It was proposed by André-Louis Danjon while he was measuring the Earthshine on the Moon.
During a lunar eclipse, the Moon doesn't become lost against the night sky; it actually takes on a reddish or coppery hue. This is because light passing through the thin layers of atmosphere at the edge of the Earth is refracted, or bent inwards, making the Earth's shadow lighter. Our atmosphere is good at scattering blue light, and the predominant colours that get through are from the red end of the spectrum. The depth (darkness) of the eclipse is affected by many factors, such as how much cloud cover there is on the Earth and how polluted the atmosphere is. A major volcanic eruption just before a lunar eclipse can have a profound effect on the appearance of the totally eclipsed Moon.
The Danjon Scale is a method for measuring the depth of the eclipse. The L value for an eclipse is best determined near mid-totality with the naked eye. The scale is subjective, and different observers may assign different values. In addition, different parts of the Moon may have different L values, depending on their distance from the centre of the Earth's umbra.
L0: Very dark eclipse. Moon almost invisible, especially at mid-totality.
L1: Dark eclipse, grey or brownish in colouration. Details distinguishable only with difficulty.
L2: Deep red or rust-coloured eclipse. Very dark central shadow, while outer edge of umbra is relatively bright.
L3: Brick-red eclipse. Umbral shadow usually has a bright or yellow rim.
L4: Very bright copper-red or orange eclipse. Umbral shadow has a bluish, very bright rim.
Last updated on: Friday 8th September 2017
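If the scale needs to be carried around in software, a simple lookup table indexed by L value is enough. This snippet is my own illustration, not something from the original page:

#include <stdio.h>

/* Danjon scale descriptions, indexed by L value (0-4). */
static const char *danjon[] = {
    "L0: Very dark eclipse; Moon almost invisible at mid-totality",
    "L1: Dark eclipse, grey or brownish; details seen with difficulty",
    "L2: Deep red or rust-coloured eclipse; dark centre, brighter outer umbra",
    "L3: Brick-red eclipse; umbral shadow with a bright or yellow rim",
    "L4: Very bright copper-red or orange eclipse; bluish, very bright rim"
};

int main(void) {
    for (int l = 0; l <= 4; l++)
        printf("%s\n", danjon[l]);
    return 0;
}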
<urn:uuid:683e236f-bf50-4377-8147-089f6f4c5860>
3.375
432
Knowledge Article
Science & Tech.
50.752049
95,551,983
Research published today in Molecular Ecology shows that female hawksbill turtles mate at the beginning of the season and store sperm for up to 75 days to use when laying multiple nests on the beach. New University of East Anglia research into the mating habits of this critically endangered sea turtle will help conservationists understand more about its mating patterns. The turtle is critically endangered, largely due to the (now banned) international trade in tortoiseshell as a decorative material. The research also reveals that these turtles are mainly monogamous and don't tend to re-mate during the season.
Because the turtles live underwater, and often far out to sea, little has been understood about their breeding habits until now. The breakthrough was made by studying DNA samples taken from turtles on Cousine Island in the Seychelles.
Lead researcher Dr David Richardson, from UEA's school of Biological Sciences, said: "We now know much more about the mating system of this critically endangered species. By looking at DNA samples from female turtles and their offspring, we can identify and count the number of breeding males involved. This would otherwise be impossible from observation alone because they live and mate in the water, often far out to sea.
"We now know that female turtles mate at the beginning of the season - probably before migrating to the nesting beaches. They then store sperm from that mating to use over the next couple of months when laying multiple nests.
"It also lets us calculate how many different males contribute to the next generation of turtles, as well as giving an idea of how many adult males are out there, which we never see because they live out in the ocean.
"Perhaps most importantly, it gives us a measure of how genetically viable the population is - despite all the hunting of this beautiful and enigmatic species over the last 100 years.
"The good news is that each female is pairing up with a different male - which suggests that there are plenty of males out there. This may be why we still see high levels of genetic variation in the population, which is crucial for its long-term survival. This endangered species does seem to be doing well in the Seychelles at least."
Lead author Karl Phillips, a PhD student in UEA's school of Biological Sciences, added: "This is an excellent example of how studying DNA can reveal previously unknown aspects of species' life histories."
The research was funded by UEA and the Natural Environment Research Council (NERC) Biomolecular Analysis Facility (NBAF). 'Reconstructing paternal genotypes to infer patterns of sperm storage and sexual selection in the hawksbill turtle' by David S. Richardson, Karl P. Phillips, and Tove H. Jorgensen (all UEA) and Kevin G. Jolliffe, San-Marie Jolliffe and Jock Henwood (Cousine Island) is published in the journal Molecular Ecology on Monday, February 4, 2013.
Lisa Horton | EurekAlert!
<urn:uuid:61a55582-7205-4b4c-a59e-729f7774a23a>
3.890625
1,308
Content Listing
Science & Tech.
43.117832
95,551,985
IOPL(2)                    Linux Programmer's Manual                   IOPL(2)

NAME
iopl - change I/O privilege level

SYNOPSIS
#include <sys/io.h>

int iopl(int level);

DESCRIPTION
iopl() changes the I/O privilege level of the calling process, as specified by the two least significant bits in level.
This call is necessary to allow 8514-compatible X servers to run under Linux. Since these X servers require access to all 65536 I/O ports, the ioperm(2) call is not sufficient.
In addition to granting unrestricted I/O port access, running at a higher I/O privilege level also allows the process to disable interrupts. This will probably crash the system, and is not recommended.
Permissions are inherited by fork(2) and execve(2).
The I/O privilege level for a normal process is 0.
This call is mostly for the i386 architecture. On many other architectures it does not exist or will always return an error.

RETURN VALUE
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.

ERRORS
EINVAL level is greater than 3.
ENOSYS This call is unimplemented.
EPERM  The calling process has insufficient privilege to call iopl(); the CAP_SYS_RAWIO capability is required to raise the I/O privilege level above its current value.

CONFORMING TO
iopl() is Linux-specific and should not be used in programs that are intended to be portable.

NOTES
Libc5 treats it as a system call and has a prototype in <unistd.h>. Glibc1 does not have a prototype. Glibc2 has a prototype both in <sys/io.h> and in <sys/perm.h>. Avoid the latter; it is available on i386 only.

SEE ALSO
ioperm(2), outb(2), capabilities(7)

COLOPHON
This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.

Linux                             2013-03-15                           IOPL(2)
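A minimal usage sketch (not part of the man page itself): the program below raises the I/O privilege level, writes one byte to an I/O port, and drops the level again. It must run as root or with CAP_SYS_RAWIO, and port 0x80 (the conventional POST diagnostic port) is chosen here purely for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

int main(void) {
    /* Raise the I/O privilege level to 3 so any port may be accessed. */
    if (iopl(3) < 0) {
        perror("iopl");          /* typically EPERM without CAP_SYS_RAWIO */
        return EXIT_FAILURE;
    }

    outb(0x42, 0x80);            /* write a byte to port 0x80 (POST port) */
    printf("wrote to port 0x80\n");

    /* Drop the privilege level back to the default before exiting. */
    if (iopl(0) < 0)
        perror("iopl(0)");
    return EXIT_SUCCESS;
}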
<urn:uuid:d3497bf6-d627-44ea-99bb-368a897a0718>
2.609375
481
Documentation
Software Dev.
63.831747
95,551,987
Image: detail of the work The Day We Hit the Moon by Sheila Goloborotko
Parts of DNA and other essential molecules in living beings may have formed in space billions of years ago and got a ride to Earth on comets or meteorites. One hypothesis that has acquired new arguments is that the fragments of these molecules may have appeared in galactic clouds bombarded by cosmic rays, high-energy particles that have been abundant since the start of the Universe. Such clouds are extremely cold and consist of grains of solid water and condensed gases, such as carbon monoxide, carbon dioxide, ammonia and methane. Brazilian and French physicists reached these conclusions through experiments in particle accelerators at PUC-Rio (the Catholic University of Rio de Janeiro) and at the University of Caen-Lower Normandy, in Caen, northwest France. The ion beams produced in these machines interact with ice kept as cold as -260 °C, producing effects similar to those of cosmic rays upon galactic clouds. "We're rebuilding the conditions of the emergence of the earliest steps of life," says the physicist Enio Silveira, from PUC-Rio. "We want to discover what results from sidereal space ice bombarded by cosmic rays." According to him, the meeting of cosmic rays and ice clouds is akin to sandblasting a wall: the grains of sand erode the wall's surface. Another possibility is that the organic molecules might have been formed from the interaction with another type of beam of elementary particles, the electrons. These are more abundant, but have less energy than the cosmic rays. The experiments of the PUC-Rio and Caen teams indicated that the water can decompose and form hydrogen peroxide (H2O2), ozone (O3) or chemical radicals with a strong affinity for molecules with the opposite electrical charge. In 2009 and 2010, as part of his doctoral studies, the astronomer Eduardo Seperuelo Duarte, from PUC, worked for 18 months with Alicja Domaracka at GANIL (the Grand National Accelerator of Heavy Ions) in Caen, to determine what new chemical species come out of frozen clouds of carbon monoxide (CO) or carbon dioxide (CO2) bombarded by nickel ions. "Cosmic rays formed by elements with a high atomic mass, like nickel, are rare in the Universe, but their effect is devastating, like that of a cannon shot in a war compared with the far more plentiful machine-gun shots," says Silveira. In other GANIL tests held in December, the physicist Ana Lúcia Barros, from Silveira's group, found that five different molecules, such as CH3 and C2H4, are formed in methane (CH4) clouds bombarded by ion beams simulating cosmic rays. "Cosmic rays may induce the synthesis of new molecules if the ice clouds' exposure to them is temporary," comments Silveira. "Long-lasting bombardment hinders the formation of macromolecules." In December 2009, Alicja Domaracka visited Brazil and worked with Silveira at the PUC accelerator, bombarding lithium fluoride crystals, which shattered in a similar way to the ice clouds. "Our planet was heavily bombarded by comets, which brought the water that forms part of the oceans," states Silveira. "Life arose here in a relatively short time, only about 1 billion years after the Earth was formed." If this hypothesis is correct, comets may have carried organic molecules to any corner of the Universe, enhancing the possibility of extraterrestrial life.
<urn:uuid:41ac898a-8b0f-4f7f-8d0e-4a8cfd818efe>
4.03125
776
Nonfiction Writing
Science & Tech.
27.729435
95,551,988
In an effort to better understand how the Pacific Northwest fits into the larger climate-change picture, scientists from the University of New Hampshire and University of Maine are heading to Denali National Park on the second leg of a multi-year mission to recover ice cores from glaciers in the Alaska wilderness. Cameron Wake of the UNH Institute for the Study of Earth, Oceans, and Space (EOS) and Karl Kreutz of the University of Maine Climate Change Institute are leading the expedition, which is funded by the National Science Foundation. This year’s month-long reconnaissance mission will identify specific drill sites for surface-to-bedrock ice cores that will provide researchers with the best climate records going back some 2,000 years. The fieldwork is part of a decade-long goal to gather climate records from ice cores from around the entire Arctic region. “Just as any one meteorological station can’t tell you about regional or hemispheric climate change, a series of ice cores is needed to understand the regional climate variability in the Arctic,” says Wake, research associate professor at UNH. “This effort is part of a broader strategy that will give us a fuller picture.” Kreutz says the 2,000-year ice core record will provide a good window for determining how the climate system has been affected by volcanic activity, the variability of solar energy, changes in greenhouse gas concentrations and the dust and aerosols in the atmosphere that affect how much sunlight reaches the Earth. “This is a joint effort in the truest sense,” says Kreutz, who has collaborated with Wake in both Arctic and Asian research for the better part of a decade. Kreutz’s UMaine team will consist of Erich Osterberg, who received his Ph.D. in December, second-year M.S. candidate Ben Gross, and Seth Campbell, an undergraduate majoring in Earth science. Wake conducted an initial aerial survey of the Denali terrain two years ago but notes there have been “no boots on the ground.” Through May, Wake, his Ph.D. student Eric Kelsey, the UMaine team, and Canadian ice-core driller Mike Waszkiewicz will visit potential deep drilling sites and use a portable, ground-penetrating radar to determine the ice thickness and internal structure on specific glaciers. They will be looking for “layer-cake” ice with clear, well-defined annual stratigraphy. A clear record from Denali will help round out the bigger paleoclimate picture by adding critical information gathered from ice cores recovered in the North Pacific, all of which can be compared to a wealth of climate data already gathered in the North Atlantic region. According to Wake, scientists have long thought the North Atlantic drives global climate changes. However, there are now indications that a change in the North Pacific might happen first and be followed by a North Atlantic response. “We need to better understand the relationship in terms of the timing and magnitude of climate change between these two regions,” he says. At the potential drill sites, the scientists will also collect samples for chemical analysis from 20-foot-deep snowpits and shallow ice cores, and install automatic weather stations at 7,800 feet and 14,000 feet. The chemical analyses, which will be carried out at both UNH and UMaine labs, are needed to decipher changes in temperature, atmospheric circulation, and environmental change such as the phenomenon known as “Arctic haze,” which has brought heavily polluted air masses to the region for decades from North America, Europe, and Asia. 
<urn:uuid:ee599a44-145e-451d-914d-04e17e3f5aa0>
3.078125
1,320
Content Listing
Science & Tech.
39.957602
95,552,020
Brent Meeker writes:
I find it hard to believe that something as stable as memories that last decades is encoded in a way dependent on ionic gradients across cells and the type, number, distribution and conformation of receptor and ion proteins. What evidence is there for this? It seems much more likely that long-term memory would be stored as a configuration of neuronal connections.
You have to keep in mind that every living organism is being continually remodelled by cellular repair mechanisms. Jesse Mazer recently quoted an article which cited radiolabelling studies demonstrating that the entire brain is turned over every couple of months, and the synapses in particular are turned over in a matter of minutes. The appearance of "permanent" anatomical structures is an illusion due to the constant expenditure of energy rebuilding that which is constantly falling apart. If anything, parameters such as ionic gradients and protein conformation are more closely regulated over time than gross anatomy. Cancer cells may forget who they are, what their job is, what they look like and where they live, but if an important enzyme curled up a little tighter than usual due to corruption of intracellular homeostasis mechanisms, the cell would instantly die.
Recent theory based on the work of Eric Kandel is that long-term memory is mediated by new protein synthesis in synapses, which modulates the responsiveness of the synapse to neurotransmitter release; that is, it isn't just the "wiring diagram" that characterises a memory, but also the unique properties of each individual "connection".
But let's suppose, for the sake of argument, that each distinct mental state were encoded by the simplest possible mechanism: the "on" or "off" state of each individual neuron. This would allow 2^(10^11) possible different mental states - more than enough for trillions of humans to live trillions of lifetimes and never repeat a thought. In theory, it should be possible to scan a brain in vivo using some near-future MRI analogue, determine the state of each of the 10^11 neurons, and store the information as a binary string on a hard disk. Once we had this data, what would we do with it? The details of ionic gradients, type, number and conformation of cellular proteins, anatomy and type of synaptic connections, etc., would be needed for each neuron, along with an accurate model of how they all worked and interacted, in order to calculate the next state, and the state after that, and so on. This would be difficult enough to do if each neuron were considered in isolation, but in fact there may be hundreds of synaptic connections between neurons, and the activity of each connected neuron needs to be taken into account, along with the activity of each of the hundreds of neurons connected to each of *those* neurons, and so on.
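For a sense of scale - my own aside, not the poster's - the number of decimal digits in 2^(10^11) is just 10^11 x log10(2), which a few lines of C can print:

#include <math.h>
#include <stdio.h>

int main(void) {
    double neurons = 1e11;                 /* binary on/off units assumed */
    double digits  = neurons * log10(2.0); /* log10(2^N) = N * log10(2) */
    printf("2^(10^11) has about %.2e decimal digits\n", digits); /* ~3.01e10 */
    return 0;
}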
<urn:uuid:e9d9fdce-fa74-4113-8b7f-786114f20c8d>
2.6875
649
Comment Section
Science & Tech.
35.482054
95,552,022
Climate Models and Climate Catastrophe: A Reality Check
1. Modern warming
2. Greenhouse theory
3. Climate models and their failings
4. Past climate change
5. CO2: the elixir of life
6. Solar-terrestrial interactions
7. Extreme weather and alarmist hype
8. The cost of the climate cult

1. Modern warming

The United Nations' Intergovernmental Panel on Climate Change (IPCC) claims that carbon dioxide is now the main driver of climate change, and that the earth will warm by up to 5.4°C (relative to 1850-1900) by the end of the 21st century, leading to dangerous and potentially catastrophic consequences.
The graph below shows the HadCRUT4 near-surface temperature record from 1850 to the end of 2017. The dataset is managed by the Climatic Research Unit at the University of East Anglia in the UK. The temperature 'anomalies' (red bars) show by how much the average global temperature for each year differs from the average temperature during the reference period (1961-1990). Based on the linear trend, the average global temperature rose by about 0.9°C over that 168-year period.
Fig. 1.1 (metoffice.gov.uk)
According to the IPCC (2013), the warming in the first half of the 20th century was mainly caused by natural factors (including solar variations and ocean oscillations), while the warming in the second half of the 20th century was mainly caused by humans (notably greenhouse gas emissions, but also land-use changes). However, there is no statistically significant difference between the rate and magnitude of warming in the first half and the second half of the century. Temperature rose by 0.16°C per decade from 1916 to 1944 and by 0.18°C per decade from 1975 to 2000, and fell by 0.02°C per decade from 1944 to 1975 (HadCRUT4).
Fig. 1.2 CO2 emissions from fossil fuel combustion did not start increasing substantially until after 1950. (epa.gov)
The modern global warming scare began in the late 1980s and was preceded by a global cooling scare. The cooling was blamed mainly on emissions of man-made aerosols (smoke, dust, industrial pollutants), and there were concerns that the earth might be descending into a new ice age. James Hansen et al. (1981) wrote: 'the temperature in the Northern Hemisphere decreased by about 0.5°C between 1940 and 1970, a time of rapid CO2 buildup'. Numerous other scientific papers put the cooling over that period at up to 0.5°C (notrickszone.com; realclimatescience.com).
James Hansen – a vocal supporter of catastrophic anthropogenic global warming (CAGW) – was the director of NASA's Goddard Institute for Space Studies (GISS) from 1981 to 2013. The latest version of the GISTEMP dataset shows only about 0.13°C of cooling in the northern hemisphere from 1940 to 1970 (climexp.knmi.nl) – far lower than the original figure of 0.5°C. Similarly, Hansen & Lebedeff (1987) reported an overall warming of 0.5°C between 1880 and 1950, but the latest version of GISTEMP has halved the value of this natural warming to 0.25°C (ysbl.york.ac.uk).
Fig. 1.3 Graph showing how NASA's northern hemisphere temperature curves changed between 1981 and 2017. (notrickszone.com)
These changes are due to the ongoing, poorly documented adjustments to the historical temperature record. Scientists armed with complex algorithms and supercomputers think they can determine historical temperatures more accurately than they were known at the time.
In the case of the temperature records prepared by GISS and the National Climatic Data Center (NCDC) (climate4you.com), the adjustments tend to cool the past and warm the present, thereby increasing the overall warming trend – a sign that strong confirmation bias is at work. We know from the Climategate emails leaked from the University of East Anglia’s computer system in 2009 that leading climate scientists were concerned about the 1940s temperature spike followed by cooling. Tom Wigley wrote: ‘It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”’ (di2.nu/foia/1254108338.txt). One way or another, the temperature record has been massaged so that it better fits the global warming narrative. The following graph compares the GISTEMP versions in 1988 and 2018 with HadCRUT4 in 2018. Fig. 1.4 The overall trend in the GISTEMP 2018 data is about 40% larger than the trend in the HadCRUT4 data. (wattsupwiththat.com) Since 1979 satellite-based records of the temperature of the lower troposphere have been produced by the University of Alabama in Huntsville (UAH) and by Remote Sensing Systems (RSS), a private research firm. This involves measuring microwave radiation emitted by the lower atmosphere. RSS version 3.3 gave a temperature trend of 0.14°C per decade since 1979. However, in 2016 its chief scientist decided that this was too low and the methodology needed adjusting; version 4.0 shows a trend of 0.19°C per decade – 35.7% higher than the previous rate (McIntyre, 2017). Fig. 1.5 Satellite-based lower troposphere temperature record produced by UAH, version 6.0 (drroyspencer.com; nsstc.uah.edu). The distinct temperature spikes in 1997-98 and 2015-16 were caused by super El Niños – natural, sunlight-fuelled warming events in the tropical Pacific. From about 1997 to 2013 there was a major slowdown in global warming, sometimes referred to as the ‘pause’ or ‘hiatus’, even though atmospheric CO2 increased by 9% during this time. The linear temperature trend over this period was 0.06°C per decade according to HadCRUT4, and -0.02°C/decade according to UAHv6.0 (ysbl.york.ac.uk). The ‘pause’ was brought to an end by the El Niño that began in 2014 and peaked in 2015-16. The average absolute surface temperature of the earth was reportedly between 13.7°C and 14.0°C for the 1961-1990 period and between 13.9°C and 14.2°C for 1981-2010 (Jones & Harpham, 2013). The ‘global’ temperature is basically an average of lots of local temperatures. At any particular location, temperature commonly varies by 10°C or more in the course of a day and by even more during the course of a year. Across the globe, local temperatures can be anywhere between about +55°C and -90°C. So an average global temperature is rather meaningless. Fig. 1.6 This graph presents the same data as fig. 1.1 but the temperature axis puts the changes in average global temperature in better perspective. (debunkhouse.com) The popular claim that an increase in global temperature of 2°C, or even 1.5°C, would probably have an overall catastrophic impact is unfounded. Most of the warming over the past 150 years has occurred in high northern latitudes, in the winter and at night. There has been less warming in the tropics and in the southern hemisphere, and in maximum temperatures in general. Using the Berkeley Earth surface temperature dataset (climexp.knmi.nl), global maximum temperature (1853-2017) increased at 0.066°C per decade while global minimum temperature increased twice as fast, at 0.121°C per decade. 
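All the per-decade figures quoted in this section are ordinary least-squares slopes fitted to temperature anomalies. The sketch below shows the calculation on ten made-up annual anomalies; the numbers are illustrative, not a real series:

#include <stdio.h>

/* Ordinary least-squares slope of y against x, per unit of x. */
static double ols_slope(const double *x, const double *y, int n) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

int main(void) {
    /* Hypothetical annual anomalies (°C) for 2001-2010 - not real data. */
    double year[10], anom[10] = {0.40, 0.43, 0.41, 0.45, 0.48,
                                 0.44, 0.47, 0.42, 0.50, 0.52};
    for (int i = 0; i < 10; i++) year[i] = 2001 + i;

    double slope = ols_slope(year, anom, 10);      /* °C per year */
    printf("trend: %.3f °C/decade\n", slope * 10); /* °C per decade */
    return 0;
}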
The benefits of warming include milder winters, less cold-related mortality (cold causes nearly 20 times more deaths than heat (Gasparrini et al., 2015)) and longer growing seasons.
When analysing global temperature, readings are usually expressed not as absolute values but as 'anomalies' relative to a reference period. This focuses attention on the change in average temperature and reduces the problems arising from the fact that weather stations are not spread evenly over the globe, and that the number and location of weather stations have changed greatly over time. One of the many confounding factors to be taken into account in determining a global temperature is the impact of increasing urbanization in the vicinity of weather stations. Urban areas can be up to 12°C warmer than nearby rural areas, partly due to waste heat from the energy used for heating and cooling and partly because concrete, brick and asphalt surfaces absorb solar radiation during the daytime and release that heat at night. The IPCC (2013) states:
It is unlikely that any uncorrected urban heat island effects and land use change effects have raised the estimated centennial globally averaged land surface air temperature trends by more than 10% of the reported trend. This is an average value; in some regions that have rapidly developed urban heat island and land use change impacts on regional trends may be substantially larger. (Tech. summary)
However, surveys show that less than 10% of the weather stations in the United States Historical Climatology Network (USHCN) are free of urban heat island effects and other biases. The National Oceanic and Atmospheric Administration (NOAA) applies a 'homogenization' algorithm, which causes data from well-sited (compliant) stations to be adjusted upwards to match the trends of poorly sited (non-compliant) stations. This increases the warming trend for the period 1979 to 2008 by 59%, as the following graph shows (Watts, 2017).
Fig. 1.7 Temperature trends for compliant, non-compliant and adjusted USHCN weather stations. (Watts, 2017)
In Australia, about 0.4°C of the 0.9°C temperature increase since the late 19th century is due to data adjustments by the Bureau of Meteorology (Nova, 2017). As elsewhere, instead of compensating for urban heat contamination, the automated adjustments to the raw data make the problem worse. In the case of the town of Rutherglen, the adjustments managed to turn a cooling trend of 0.3°C per century in the observational data into a warming trend of 1.6°C per century (Marohasy, 2017b).
It has been proposed that the recent period of the current Holocene interglacial should be named the Anthropocene (anthropos = human being) to reflect humans' impact on climate etc. Another suggestion is: the Adjustocene ... (therightinsight.org)
The earth's surface has seen overall warming for some 300 years, whereas thermometer measurements usually go back no further than about 150 years, so countless temperature records have been set over the past century and a half. The uncertainty in temperature anomalies is optimistically said to be 0.1°C for recent measurements, and at least 0.2°C for older measurements. 2005, 2010, 2014, 2015 and 2016 were all loudly proclaimed to be 'the warmest year ever' but, with the exception of the El Niño year 2015, the claims were meaningless because the differences from the previous record were at most a few hundredths of a degree – smaller than the margin of error (HadCRUT4, GISTEMP).
It takes far less energy to raise the temperature of cold, dry air (such as in the polar regions) by 1°C than it does to raise the temperature of warm, moist air (such as in the tropics) by 1°C. That is why there is growing recognition that the heat content of the oceans (covering about 71% of the earth's surface) is a better metric of warming than average global temperature. The heat content of the oceans is about 1000 times greater than the heat content of the atmosphere. The graph below shows the estimated change in the global heat content in the top 2000 metres of the oceans from 1955 to 2017, expressed as a temperature anomaly.
Fig. 1.8 (nodc.noaa.gov)

2. Greenhouse theory

For the earth to maintain a constant average temperature, the amount of solar energy it absorbs has to match the amount of energy lost to space (other things being equal). Atmospheric greenhouse gases – mainly water vapour, carbon dioxide and methane – absorb infrared (IR) radiation emitted or reflected by the earth's surface and then reemit it in all directions (including back towards the surface). This delays the loss of IR energy to space, causing the earth's lower atmosphere and surface to be warmer than they would otherwise be, while the upper atmosphere becomes cooler. This, in turn, is said to cause the amount of IR radiation escaping to space to increase, until the planetary energy balance is restored – though perfect equilibrium is never attained.
Fig. 2.1 Earth's energy budget describes the balance between the radiant energy reaching earth from the sun and the energy flowing from earth back to space (climate.nasa.gov). There is said to be a net imbalance of 0.6 watts per square metre (W/m2) warming the planet – a figure far smaller than the measurement error.
The potency of a greenhouse gas depends on its atmospheric concentration and on which wavelengths of infrared radiation it absorbs. Water vapour is the most powerful greenhouse gas: its concentration ranges from 0.01% (at -32°C) to 4.24% (at 30°C), whereas the current concentration of CO2 is just over 0.04% (400 parts per million); and water vapour absorbs IR radiation over a far broader range of wavelengths than CO2 does (fig. 2.3). Water vapour accounts for 70 to 90% of the natural greenhouse effect, which helps to keep the earth habitable. Most of the greenhouse effect takes place in the lowest two kilometres of the atmosphere (the lower troposphere).
According to the IPCC (2013, Summary for Policymakers), the extra greenhouse gases emitted by burning fossil fuels have produced an additional climate 'forcing' of 2.3 W/m2, which is less than 1% of the total energy flows in the climate system. Theoretically, a doubling of the atmospheric CO2 concentration is expected to produce a temperature rise of about 1°C. However, the actual amount of warming depends on feedbacks, i.e. secondary effects such as changes in cloudiness, water vapour, precipitation or ice extent, which may increase an initial warming (positive feedback) or reduce it (negative feedback). The IPCC assumes that a CO2-induced temperature rise causes an increase in atmospheric water vapour and a reduction in cloud cover, producing a positive feedback that triples the amount of warming. Other scientists argue that a related increase in low-level clouds and reduction in upper-level cirrus clouds, together with changes in precipitation systems, provide a strong negative feedback (Lindzen, 2015; Spencer, 2008, ch. 4).
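The 'triples the amount of warming' claim maps onto the standard feedback relation ΔT = ΔT0/(1 - f), where ΔT0 is the no-feedback response and f the net feedback fraction. The sketch below back-calculates illustrative values of f consistent with the sensitivities discussed in this section; the fractions themselves are my own illustration, not values stated in the text:

#include <stdio.h>

int main(void) {
    double dT0 = 1.0; /* no-feedback warming for a CO2 doubling, °C */

    /* Illustrative net feedback fractions and the warming they imply:
     * f = +0.67 roughly reproduces the IPCC-style ~3 °C sensitivity,
     * f =  0.00 leaves the bare ~1 °C response,
     * f = -1.00 gives the ~0.5 °C argued by some sceptics.          */
    double f[] = { 0.67, 0.0, -1.0 };

    for (int i = 0; i < 3; i++)
        printf("f = %+5.2f  ->  warming = %.1f °C\n",
               f[i], dT0 / (1.0 - f[i]));
    return 0;
}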
The earth’s albedo is a measure of its reflectivity: the higher the albedo, the greater the amount of sunlight reflected back into space. Clouds reflect about 30% of the incoming solar radiation. A 1% change in cloud albedo has a radiative effect of 3.4 W/m2 (Farmer & Cook, 2013), only slightly less than the direct forcing of 3.7 W/m2 expected from a doubling of CO2. Fig. 2.2 Average cloud cover between July 2002 and April 2015. About 67% of the earth’s surface is typically covered by clouds. (earthobservatory.nasa.gov) The earth has various mechanisms for regulating its temperature, with the water cycle (evaporation, condensation and precipitation) being the most important. Water is the only common molecule on earth that occurs naturally in three different states: solid, liquid and gas. In the tropics, temperature is largely regulated on a daily basis by the timing and strength of clouds and thunderstorms (Eschenbach, 2013, 2018a,b). Data shows that when the tropical ocean is warm, clouds and thunderstorms form earlier in the day and in greater numbers. The evaporation of water at the surface requires energy and has a cooling effect. Water vapour rises and condenses to form clouds, releasing latent heat, which is radiated out to space. Clouds reflect sunlight back into space, while the surface is further cooled by rain and winds. ‘Climate sensitivity’ is the temperature increase caused by a doubling of atmospheric CO2: equilibrium climate sensitivity (ECS) is the average temperature response after the atmosphere and oceans have fully adjusted and reached a new equilibrium state (there is no agreement on how many centuries this takes); transient climate response (TCR) is the average temperature response at the time CO2 doubles. Climate models give values of ECS ranging from 1.5 to 9°C. According to the IPCC’s Fourth Assessment Report (2007), equilibrium climate sensitivity ‘is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C’. The 2013 Fifth Assessment Report (AR5) stated with ‘high confidence’ that ECS is ‘likely in the range 1.5°C to 4.5°C’, but refrained from giving a best estimate due to the discrepancy between observation-based estimates and the far higher estimates from climate models (IPCC, 2013, SPM). AR5 also stated with ‘high confidence’ that the transient climate response was 1.0°C to 2.5°C. In the climate models used for AR5, the average ECS is 3.2°C and the average TCR is 2.3°C. Despite 35 years of research costing billions of dollars, the IPCC’s range of ‘likely’ values for ECS has therefore increased, rather than narrowed, with the main uncertainty being the role of clouds. Yet the IPCC claims to be more certain than ever before that all the warming since 1950 is caused by humans. In recent years, several papers have been published in mainstream journals that provide observation-based estimates of climate sensitivity below 2°C (see McKitrick, 2018). For instance, Lewis & Curry (2018) calculated a best estimate of 1.5°C for ECS and 1.2°C for TCR. Using satellite data since 1979, Christy & McNider (2017) determined TCR in the lower troposphere to be 1.1°C. Since they made the simplifying assumption that most of the warming over that period was caused by humans, the real value must be far lower. Some researchers have in fact put the value of climate sensitivity at 0.5°C or less (Lindzen & Choi, 2009; friendsofscience.org; john-daly.com). 
Every doubling of atmospheric CO2 – whether it be from 100 to 200 ppm, or from 200 to 400 ppm – is said to produce the same temperature rise. In other words, each increase in CO2 has a smaller effect. This logarithmic behaviour arises from the fact that there is an absolute limit on the total amount of IR radiation that CO2 can absorb, because it absorbs only a narrow range of wavelengths.
Fig. 2.3 The top panel shows the incoming shortwave solar radiation in red, and outgoing longwave radiation in blue, the rest being absorbed or scattered. The lower panels show the wavelengths of the radiation absorbed by the main greenhouse gases (wikimedia.org). CO2 absorbs infrared radiation in only three narrow bands of frequencies, and only the one corresponding to a wavelength of 15 micrometres (µm) has much significance. Where the grey shading extends to the top of a panel, it indicates that the energy at that wavelength is already fully absorbed.
Parts of the CO2 spectrum are already fully saturated, and further CO2 increases will result in ever diminishing effects as more of the remaining available wavelengths become saturated.

3. Climate models and their failings

Climate computer models – or general circulation models (GCMs) as they're officially known – attempt to simulate the earth's climate. They divide the atmosphere, oceans and land into a three-dimensional grid system. Each grid cell is typically about 100 to 200 km wide and 1 km deep – the smaller the size, the greater the computing power and processing time required. Many important weather phenomena occur on scales smaller than the model resolution. These processes are represented by approximations known as 'parameterizations' (i.e. fudge factors). Only one figure can be placed in each grid cell for each parameter (temperature, cloud cover, rain, snow, humidity, etc.).
The commonly heard claim that climate models are based purely on 'physics' is false. A great deal of arbitrary tuning or calibration is required. If climate science were truly settled, there would be one model, and it would match reality. Instead, there are over 100 different models, each tuned differently, but they can all approximately reproduce the temperature trend over the past century, since that is what they have been designed to do. They produce very different simulations of 21st-century climate, but observations show that, overall, they have a notable tendency to exaggerate warming. In 1990 the IPCC predicted a warming rate of 0.3°C per decade for the next century, whereas the observed rate from 1990 to the end of 2017 was 0.165°C per decade (HadCRUT4).
A key factor used to tune climate models is the cooling effect of man-made aerosols, which can reflect solar radiation back into space. The value of this forcing is highly uncertain, and this allows modellers to offset models' excessively high climate sensitivity to greenhouse gases with aerosol cooling, reducing the mismatch between model projections and observations (see Lindzen, 2015). Clouds, which can cause cooling or warming, are a major uncertainty in all climate models. Due to the inadequate size of model grid cells, crude estimations of cloud cover at varying altitudes have to be inserted into the models. Climate models assume that a warming world will lead to greater evaporation and therefore more water vapour in the atmosphere, producing a strong positive feedback. This, in turn, assumes that relative humidity will remain constant.
However, atmospheric physicist Richard Lindzen (2015) points out that there is no basis for this claim and that a change in relative humidity from 80% to 83% would completely eliminate any increase in evaporation resulting from a warming of 3°C. He adds that 'such changes in relative humidity at the surface are commonplace and easily produced' and indicate 'the ease with which the system can adjust to changes'.
Climate models are run according to different scenarios for future greenhouse gas emissions and other factors. The four scenarios for AR5 are known as Representative Concentration Pathways: RCP2.6, RCP4.5, RCP6.0 and RCP8.5. The figures 2.6, 4.5, 6.0 and 8.5 are the total radiative forcing (in W/m2) in 2100 relative to 1750. RCP2.6 is a scenario with strong mitigation of greenhouse gas emissions, RCP4.5 and 6.0 are medium-emission scenarios, and RCP8.5 is an extremely high-emission scenario. Since 2011 our emissions have been growing at a rate similar to RCP4.5.
Fig. 3.1 Approximately 1000 climate model runs for the four RCPs. (Fuss et al., 2014)
RCP8.5 gets the most attention from climate catastrophists because it produces the scariest predictions; it is sometimes falsely called the 'business as usual' scenario, but is actually a worst-case scenario (Kummer, 2015; Pielke, 2018). It is the most commonly used scenario in climate impact studies published in the academic literature. Media reports about its dire predictions rarely mention the unrealistic assumptions on which they are based. RCP8.5 assumes that in 2100 the world will be powered mainly by coal, with CO2 levels tripling to about 1,200 parts per million. However, thanks to the discovery of abundant shale gas, the use of natural gas for electrical generation is rapidly increasing at the expense of coal. Current trends indicate that natural gas usage will overtake coal worldwide around 2030, and oil use will level off at the same time. Moreover, a recent study found that there simply isn't enough coal to support RCP8.5 (Michaels, 2018).
Fig. 3.2 Primary energy consumption in billion tonnes of oil equivalent (toe). (bp.com)
Climate models predict a region of enhanced greenhouse-gas warming above the tropics in the mid-troposphere (10-15 km above the earth's surface), caused by the heat released when water vapour condenses. This 'hotspot' has been called a 'fingerprint' of human-caused warming. However, observations show that it does not exist. Earlier IPCC reports contained dramatic diagrams of the hotspot, but more recent ones do not.
Fig. 3.3 The hotspot is supposed to be located in the outlined tropical band (20°S-20°N) of the atmosphere. (Christy, 2017)
The climate model experiments for AR5 are part of the Coupled Model Intercomparison Project Phase 5 (CMIP5). CMIP5 models were initialized in 2006: before that date, model outputs are hindcasts, using historical data as inputs; model projections begin after that date. The 102 climate models used for CMIP5 are divided into 32 institutional groups. The graph below shows the model results for these 32 groups (dotted lines), the average of all the models (red line), and actual observations (circles, squares and diamonds). Clearly, the models as a whole fail miserably to match the bulk atmospheric temperature trend.
Fig. 3.4 (Christy, 2017)
A diagram confirming that the models exaggerate the effects of CO2 can be found in AR5 itself (IPCC, 2013, fig. 10.SM.1). It shows temperature trends at different levels of the atmosphere according to observations and model outputs.
It was included at the request of climate scientist John Christy, a critic of CAGW and one of the reviewers, but the IPCC presented the information in a way that made it difficult to understand and buried the graph in the supplementary material for chapter 10, where it would receive little attention. Christy presents the key data in the following simplified graph. Fig. 3.5 This graph shows the temperature trends in the tropical atmosphere from the surface (1000 hPa [hectopascals]) up to 50,000 ft (15 km) (Christy, 2017). The grey lines are the bounds for the range of observations, the blue for the range of IPCC model results without extra greenhouse gases (GHGs), and the red for IPCC model results with extra GHGs. The model trends in which extra GHGs are included lie completely outside the range of the observational trends. In other words, the bulk tropical atmospheric temperature change is modelled best when no extra GHGs are included. Yet, on the basis of the same models that failed this simple validation test, the IPCC claims to know with ‘high confidence’ that the rise in global temperature since 1950 was entirely due to human greenhouse gas emissions. Commenting on AR5, Richard Lindzen wrote: [T]he latest IPCC report has truly sunk to the level of hilarious incoherence. They are proclaiming increased confidence in their models as the discrepancies between their models and observations increase. ... It is quite amazing to see the contortions the IPCC has to go through in order to keep the international climate agenda going. (climatedepot.com) The IPCC (2013, FAQ 10.1) presents the following two graphs to ‘prove’ the human impact on climate. The black line shows the change in the observed surface temperature. The blue and red lines show the temperatures projected by the climate models for the Third and Fifth Assessment Reports respectively. The top graph shows the model results that include only natural forcings (mainly the sun and volcanoes). The bottom graph shows the model results that include both natural forcings and human forcings (mainly greenhouse gases). Only the model results that include GHGs match observed temperatures, therefore – says the IPCC – all the warming since 1950 is caused by humans. The graphs imply that if GHG emissions were not increasing, the climate would be cooling. Fig. 3.6 IPCC pseudoscience at its finest. If models that assign a major role to GHGs are tuned to match the climate record, and the GHG tuning knob is then turned down without turning up the tuning knobs for natural factors, the model output will of course no longer match historical temperatures. If, on the other hand, natural forcings are increased, models can still be made to match past temperatures. The above graphs therefore prove nothing about how the climate really works. They prove only that modellers can program their models to give whatever results they want. As the bottom graph in fig. 3.6 shows, climate models are tuned to match the rapid warming from 1970 to 2000, which they then extrapolate over the 21st century, on the false assumption that the warming was entirely caused by increases in CO2 emissions. A close look at the graph shows that the climate models do not correctly simulate the cooling from the 1880s to 1910, the significant warming from 1910 to 1945, the cooling from 1945 to the late 1970s and the flat temperatures in the early 21st century (until the 2015-16 El Niño; see fig. 3.7). 
From 1910 to 1945 the climate warmed up to three times faster than the multi-model mean (Tisdale, 2015, 219-20). If the models are tweaked to better match this warming, but retain a high sensitivity to CO2, they end up grossly exaggerating the warming in the late 20th century. Fig. 3.7 HadCRUT4 surface temperature anomalies (13-month average) from 1950 to May 2018 (black curve), with average linear trend (thin black line), compared with the CMIP5 multi-model mean (historical/RCP4.5, thin blue line), showing confidence intervals (light and dark blue areas) (Javier, 2018c). The powerful 2015-16 El Niño briefly lifted temperatures above the model mean. AR5 (IPCC, 2013, Tech. summary, box TS.3) mentioned the ‘hiatus’ in warming from 1998 to 2012, pointing out that the mean model trend was 0.21°C per decade – four times higher than the observed trend (0.05°C per decade). It admitted that the models might exaggerate the warming from greenhouse gas increases and underestimate the climate’s natural internal variability. But no steps have been taken to fix this as it would risk putting an end to the scaremongering and flow of funding. While the pause was in progress, CAGW supporters came up with over 60 reasons why temperatures had stalled. These included: low solar activity, Chinese coal use, volcanic aerosols, faster trade winds, and coincidence, but one of the favourites was that the ‘missing’ heat was hiding deep in the oceans – though no convincing evidence of this has ever been found (Istvan, 2014). In one of the Climategate emails, Kevin Trenberth, a leading IPCC author, comments: ‘The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.’ He goes on to say that the data collected by the CERES satellite (which measures radiant energy) ‘are surely wrong’ (junksciencearchive.com; see Climategate). In other words, he blames the data rather than the models. Unfortunately for Trenberth, the temperature slowdown he thought was an artefact of wrong data in 2009 is still in place in 2018, according to UAH6.0 (ysbl.york.ac.uk). The following graph compares the observed warming of the oceans with the warming projected by the CMIP5 climate models. The models exaggerate the warming trend by 55%. Fig. 3.8 Data: Reynolds OI.v2 (climexp.knmi.nl). Model: CMIP5 multi-model mean (tos, historical/RCP6.0) (climexp.knmi.nl). Linear data trend: 0.105°C/decade. Linear model trend: 0.163°C/decade. In the Pacific, the models exaggerate the warming trend by 100%. Fig. 3.9 Data: Reynolds OI.v2 (climexp.knmi.nl). Model: CMIP5 multi-model mean (tos, historical/RCP4.5) (climexp.knmi.nl). Linear data trend: 0.104°C/decade. Linear model trend: 0.216°C/decade. The reason climate models fail to capture the different levels of warming of the individual oceans (Tisdale, 2018) is because they don’t correctly simulate ocean temperature oscillations like the El Niño-Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). Jim Steele writes: Abrupt ocean regime shifts cause abrupt climate shifts. The ocean surface temperatures determine the strength and location of rising and sinking air currents. So shifting ocean temperatures rapidly alter high and low pressure systems that distribute the earth’s heat and moisture. Global warming models have failed to predict those shifts. (2013, 167) Fig. 
3.10 Above: El Niño (top) and La Niña (bottom), together called ENSO, involve warmer-than-normal and cooler-than-normal sea surface temperatures in the equatorial Pacific Ocean, which affect weather patterns around the world by influencing high and low pressure systems, winds and precipitation (esrl.noaa.gov). Below: ENSO index (esrl.noaa.gov). El Niños occur every two to seven years, with major events occurring in 1982-83, 1986-87, 1997-98, 2009-10 and 2014-16. El Niños and La Niñas are the main causes of annual variations in surface temperatures around the globe. El Niños release a huge amount of heat from the tropical Pacific Ocean to the atmosphere, mainly through evaporation, and redistribute warm water within the oceans. The warm water for an El Niño is typically created during a preceding La Niña, when cloud cover above the tropical Pacific decreases, allowing more sunlight to heat the water. While sunlight penetrates and warms the oceans to depths of 200 metres (though most is absorbed in the top 10 metres), longwave infrared radiation emitted downwards by greenhouse gases can only penetrate the top few millimetres of the ocean surface (Tisdale, 2015, 2018). Fig. 3.11 Atlantic Multidecadal Oscillation index. (climatedataguide.ucar.edu) The Pacific shifted from a cool to a warm cycle in 1976 (meaning that El Niños began to outnumber La Niñas), while the Atlantic started its warm cycle in 1995. The Pacific flipped back into its cool cycle in 2007, and the Atlantic will do so around 2020. The AMO can explain about 70% of the surface warming in the northern hemisphere since 1975 (Tisdale, 2015, 485). Spencer & Braswell (2014) argue that about half of the worldwide warming since the late 1970s was due to stronger El Niño events. The IPCC predicts a 0.3 to 0.7°C rise in global mean surface temperature from 2016 to 2035, while researchers who assign a major role to natural internal variability believe that the recent slowdown in global warming could extend into the 2030s (Wyatt & Curry, 2014; Bastardi, 2018; Javier, 2018c). The following graph shows that climate models exaggerate the quantity of precipitation by over 10%. Yet according to the IPCC's worst-case scenario, global precipitation is only supposed to increase by about 5.5% by 2100. In other words, models already show more precipitation today than is supposed to occur by the end of the century. This indicates that the models exaggerate the amount of water vapour in the atmosphere. Modellers are unlikely to fix this problem because the models need lots of water vapour to triple the amount of warming expected from a doubling of atmospheric CO2. Fig. 3.12 Data: GPCP v2.3 (climexp.knmi.nl). Model: CMIP5 multi-model mean (pr, historical/RCP4.5) (climexp.knmi.nl). How well do climate models represent the changing climate in the north and south polar regions? Northern polar winters are warming strongly (0.26°C/decade), faster than northern polar summers (0.18°C/decade), and both are warming much faster than the tropics (0.10°C/decade). This is in line with global warming theory. Unfortunately for the theory, the opposite is happening in the southern polar region: summers are warming slightly (0.058°C/decade), while winters are cooling strongly (0.166°C/decade) (kenskingdom.wordpress.com). Ice core data shows no overall warming of Antarctica from 1800 to 2000 (fig. 3.13). The large East Antarctic Ice Sheet is currently gaining mass.
Whether it is growing fast enough to compensate for ice loss in coastal areas of West Antarctica and the Antarctic Peninsula is controversial. NASA glaciologist Jay Zwally and his team argue that it is, and that Antarctica is gaining land ice overall (Zwally et al., 2015; dailycaller.com; McIntyre, 2015). For challenging the IPCC consensus on this matter, they have been accused of doing ‘a real disservice to the science community’ (scientificamerican.com). Geothermal heat is increasingly being recognized as a potential factor in ice-sheet melting in West Antarctica, and also in Greenland (see Curry, 2018c). Fig. 3.13 Antarctic temperature anomalies for 1800-2000 (blue curve) and CO2 concentration expressed as a natural logarithm (ln; red curve). (Schneider et al., 2006) The following two graphs compare the sea ice changes observed in the Arctic and Antarctic regions with model projections. Fig. 3.14 Data: NSIDC Sea Ice Index v3.0 (climexp.knmi.nl). Model: CMIP5 multi-model mean (sic, historical/RCP4.5) (climexp.knmi.nl). Models project a sea ice loss of 206,000 sq km per decade in the Arctic, while observations show that the ice loss is 2.6 times greater (533,000 sq km per decade). Fig. 3.15 Data: NSIDC Sea Ice Index v3.0 (climexp.knmi.nl). Model: CMIP5 multi-model mean (sic, historical/RCP4.5) (climexp.knmi.nl). Models project a sea ice loss of 98,000 sq km per decade around Antarctica, while the long-term trend shows that ice is being gained at the rate of 114,000 sq km per decade. In short, CO2-based climate models are clueless about what is happening in the polar regions. 4. Past climate change We are currently living in an interglacial period known as the Holocene, which began 11,700 BP (years before 2000). It was preceded by the Pleistocene, which saw a succession of long glacial periods separated by shorter interglacials. The four interglacials that preceded the Holocene were, on average, more than 2°C warmer than the present one (NIPCC, 2013, 4.2.1). At the peak of the previous (Eemian) interglacial some 125,000 years ago, trees were able to grow 13° further north than they do today, the hippopotamus was found as far north as the rivers Rhine and Thames, and sea level was 6 to 9 metres higher than today (en.wikipedia.org). Fig. 4.1 Reconstruction of a temperate landscape in Germany during the previous interglacial, showing the now extinct straight-tusked elephant and an extinct rhinoceros. (pnas.org) The large climatic shifts needed to bring about alternating glacial and interglacial periods are usually attributed mainly to changes in the earth’s orbital parameters (the ellipticity of its orbit, the precessional cycle, and changes in the inclination of the earth’s axis), which alter the amount of solar radiation reaching the earth’s surface and how it is distributed seasonally and by latitude. This theory was first proposed by Milutin Milankovitch, who argued that the key factor determining glacial cycles is summer insolation in the Arctic. The orbital variation of this quantity is about 100 W/m2 (Lindzen, 2015), which is huge compared with the 3.7 W/m2 contribution from a doubling of CO2. However, the Milankovitch theory faces a number of problems and is unable to explain all the climatic changes that have occurred (see Poleshifts, pt. 4). 
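The 3.7 W/m2 figure for a doubling of CO2 comes from the widely used logarithmic approximation ΔF = 5.35 ln(C/C0) W/m2 (Myhre et al., 1998). A minimal sketch of the calculation, using the 280 ppm pre-industrial baseline and the 408 ppm early-2018 level cited later in this document:

```python
# Sketch of the standard logarithmic approximation for CO2 radiative forcing:
# dF = 5.35 * ln(C/C0) W/m^2, which gives the ~3.7 W/m^2 per doubling quoted
# above. Concentrations are in ppm; 280 ppm is the usual pre-industrial value.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) relative to the pre-industrial baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"Doubling (560 ppm):   {co2_forcing(560):.2f} W/m^2")  # ~3.71
print(f"Early 2018 (408 ppm): {co2_forcing(408):.2f} W/m^2")  # ~2.01
# For comparison, the orbital variation in Arctic summer insolation quoted
# above is about 100 W/m^2 - roughly 27 times the forcing from a CO2 doubling.
```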
Changes in CO2 concentration are certainly not the primary cause of glacial-interglacial temperature changes, because the data shows that changes in temperature tend to precede changes in CO2 concentration by between a few hundred and several thousand years (NIPCC, 2013, 4.2.1). As atmospheric temperature rises, more CO2 is released from the oceans, which contain 50 times more CO2 than the atmosphere. The long lag between temperature changes and the CO2 response has been linked to the circulation of surface and deep ocean currents (Climate change controversies, section 7). Some scientists have tried to resolve the ‘problem’ of CO2 (and methane) lagging temperature changes by ‘recalibrating’ the data (a highly dubious practice), but because the time lags vary so greatly in length, these efforts can never be entirely successful (euanmearns.com). Even the mainstream argument that rising atmospheric CO2 levels significantly amplify the temperature increases once they get started is questionable, because during the ice age cycle we see again and again that when CO2 levels peak, temperature suddenly plummets and the earth descends into the next glacial period, and when CO2 reaches a minimum, the world rapidly warms and enters an interglacial. Fig. 4.2 Antarctic temperature vs. CO2 for the last 800,000 years, from the EPICA3 ice core. (ars.els-cdn.com) Ellis & Palmer (2016) propose that, in addition to being determined by the Milankovitch cycle, ice ages are regulated by changes in the earth’s albedo resulting from an increase in ice or dust. During a glacial period, the high albedo of the northern ice sheets lowers global temperatures and leads to more and more CO2 being absorbed by the oceans. When atmospheric CO2 reaches a critically low level, terrestrial plant life is starved of a vital nutrient, causing a die-back of upland forests and savannahs. This results in widespread desertification and soil erosion, and storms deposit large amounts of dust on the ice sheets. This reduces their albedo, allowing them to absorb considerably more insolation and undergo rapid melting, forcing the climate into an interglacial period. Every interglacial warming period over the last 800,000 years was preceded by several thousand years of dust storms. The Holocene started with 2000 years of rapid warming, leading to the Holocene Climatic Optimum, a 4000-year period of high humidity and temperatures about 1°C to 2°C warmer than today. Current desert regions of Central Asia were extensively forested, and the Sahara desert was green and dotted with numerous lakes containing crocodile and hippopotamus fauna. Pine trees grew in Scotland at altitudes 650 metres higher than those where we find the stunted trees of today, and sea level was over 2 metres higher than today (Brady, 2017, ch. 4). The Holocene is divided into three stages or ages. The most recent is known as the Meghalayan Age and began 4250 BP, when an abrupt and critical mega-drought and cooling precipitated the collapse of agricultural civilizations in Egypt, Greece, Syria, Palestine, Mesopotamia, the Indus Valley, and the Yangtze River Valley (wattsupwiththat.com). The general trend over the past 6000 years has been one of progressively accelerating cooling and drying, punctuated by a number of warm periods, such as the Minoan Warm Period, Roman Warm Period, Medieval Warm Period (900-1250) and Current Warm Period. 
The last three warm periods were separated by the cooler Dark Ages (5th to 8th centuries) and the Little Ice Age (1450-1850) (Javier, 2017a). The European Alps were nearly glacier free in the Minoan, Roman and medieval warm periods. During the Roman Warm Period there were vineyards in Britain as far north as Hadrian’s Wall near the Scottish border, and during the Medieval Warm Period Vikings farmed much of the now-frozen expanse of Greenland. Retreating glaciers have exposed artefacts, bodies and remains of trees dating from earlier ice-free periods (news.bbc.co.uk; notrickszone.com; climateaudit.org). Fig. 4.3 The receding Mendenhall Glacier in Alaska has exposed tree stumps and logs – the remains of an ancient forest that grew there over 1000 years ago until it was destroyed by the advancing glacier. (livescience.com) The Little Ice Age was the coldest period of our interglacial and led to many famines. Some scientists believe it was caused mainly by volcanic activity, but this is contradicted by the fact that volcanic activity was two to four times higher during the Holocene Climatic Optimum, the warmest period of our interglacial (Javier, 2018b). Other researchers invoke solar and oceanic cycles. Fig. 4.4 The upper panel shows the air temperature derived from the GISP2 ice core from Greenland (the record ends in 1855). The lower panel shows the past atmospheric CO2 concentration, derived from the EPICA Dome C ice core in the Antarctic (the record ends in 1777). (climate4you.com) The IPCC’s First and Second Assessment Reports (1990 and 1995) contained graphs showing a Medieval Warm Period warmer than today. That changed with the Third Assessment Report (2001), which featured the notorious ‘hockey stick’ graph of northern-hemisphere temperatures over the past millennium (see The global warming scare, section 3; McKitrick, 2015; Montford, 2010). This once iconic but now discredited graph eliminated the Medieval Warm Period and Little Ice Age, and showed relatively stable temperatures until the 20th century, when the modern rapid temperature increase began. The hockey stick rewrote well-established climate history, but easily passed peer review because it supported the alarmist narrative. The graph turned out to be based on statistical malpractices applied to carefully selected tree-ring data. Other temperature proxies – e.g. oxygen isotope ratios in carbonates (like stalagmites in caves), borehole temperatures, lake-bottom pollen and diatoms, and plankton alkenones – reveal a clear Medieval Warm Period, in agreement with historical records. Fig. 4.5 The hockey stick – a glowing example of ‘climate change denial’. The long, flat shaft of the graph is mostly reconstructed from tree-ring proxy data and the almost upright blade represents the instrumental temperature record, ending with the 1998 super El Niño. Other CAGW scientists have ‘confirmed’ the hockey stick by using much of the same dubious data and flawed methodology (Climate change controversies, section 4). Fig. 4.6 Non-tree-ring temperature reconstruction with 95% confidence intervals, showing a distinct Medieval Warm Period and Little Ice Age (Loehle & McCulloch, 2008). The last point on the graph represents the 29-year average temperature centred on 1935. The 2013 Fifth Assessment Report resurrects the Medieval Warm Period (or Medieval Climate Anomaly) to some extent. It states that, at times, some regions were as warm as in the mid-20th century and others were as warm as in the late 20th century. 
It also states with 'medium confidence' that, in terms of average annual northern-hemisphere temperatures, the period 1981-2010 was the warmest 30-year period of the last 1300 years (IPCC, 2013, ch. 5). As the Non-Intergovernmental Panel on Climate Change (NIPCC, 2013, ch. 4) points out, the IPCC ignores an enormous body of literature that clearly demonstrates that its assessment of the Medieval Warm Period (MWP) is wrong. There are over 350 peer-reviewed papers showing that the MWP was mostly warmer than today (by 0.91°C on average) and global in nature (CO2science.org). This means that there is nothing unusual, unnatural or unprecedented about the warming seen since 1950. Fig. 4.7 Reconstruction of the extra-tropical, northern-hemisphere mean temperature reaching back to 300 CE. Calibration period: 1880-1960. Thin black curves are annual values; thick curves are 50-year smoothed. Red curves show bias and confidence intervals for the smoothed values. The green curve shows the observed extra-tropical (>30°N) annual mean temperature. The yellow curve shows the temperature average over grid cells with proxies used in the reconstruction. The graph shows a well-defined MWP, with a peak warming around 950-1050 reaching 0.6°C relative to the reference period, and the Little Ice Age, culminating in 1580-1720, with a temperature minimum of 1.0°C below the reference period. (Christiansen & Ljungqvist, 2012) Climate models tuned to present-day global warming perform very poorly when trying to reproduce Holocene climate evolution because they are too sensitive to changes in greenhouse gases and underestimate solar, oceanic and tectonic factors. They show a constant increase in temperature during the entire Holocene, in line with the rising greenhouse gas concentration. This disagreement has been called the 'Holocene temperature conundrum'. Fig. 4.8 Estimated global average surface air temperature over the last 540 million years (Phanerozoic Eon) (wikimedia.org). Temperature is plotted as anomalies relative to the 1960-1990 average, in degrees Celsius (left) and Fahrenheit (right). The graph consists of five separate segments, with the timescale expanding by about an order of magnitude at each vertical break. The two red dots in blue circles on the right-hand axis indicate the IPCC's predicted temperatures for 2050 and 2100, based on its worst-case scenario (RCP8.5), and are for scaremongering/entertainment purposes only. The fact that temperature has changed by only about ±10°C over hundreds of millions of years indicates that the climate system is dominated by negative, stabilizing feedbacks. 5. CO2: the elixir of life The three main constituent gases of the earth's atmosphere are nitrogen (78.0%), oxygen (21.0%) and argon (0.9%). The final 0.1% of the atmosphere is made up of many other gases, known as trace gases. The atmospheric concentration of CO2 at the start of 2018 was 408 parts per million (ppm) by volume of dry air, or just over 0.04% (esrl.noaa.gov). The estimated composition of the earth's atmosphere has changed considerably over geologic time, as the following chart shows. It indicates that in the early history of the earth the atmospheric concentration of CO2 (by volume) may have been over 500 times higher than today. Fig. 5.1 Changing composition of earth's atmosphere (in terms of mass). (Plimer, 2017) The following chart shows how average global temperature and CO2 concentration have changed over the past 600 million years. There is little correlation between CO2 and temperature. Fig.
5.2 Global temperature and atmospheric CO2 during the Phanerozoic. (geocraft.com) In the Cambrian, CO2 levels reached nearly 7000 ppm, about 17 times higher than today. The late Carboniferous and early Permian constitute the only period when both atmospheric CO2 and temperatures were as low as they are today. In the late Ordovician, an ice age occurred while CO2 concentrations were 4400 ppm, over 10 times higher than today. It is sometimes claimed that this was because the sun was weaker than it is now, but this doesn't explain why the earth warmed up and emerged from that ice age even though the CO2 concentration began to fall. The evolution and subsequent diversification of land plants some 450 million years ago contributed to the decline in CO2 levels as a result of plant photosynthesis. The far higher concentrations of CO2 in the geologic past never led to runaway warming or disaster. The official CO2 record during the Pleistocene and Holocene, as derived from ice core data, is shown in figures 4.2 and 4.4 above. If this record is taken at face value, it would mean that CO2 levels are now higher than they have been in at least 800,000 years (though temperatures are not). In the Medieval Warm Period, the CO2 concentration was allegedly only 280 ppm and did not change much during the subsequent Little Ice Age, standing at 278 ppm in 1750. Many researchers challenge the official ice core record on the grounds that the CO2 concentration in air bubbles trapped in ice declines over the course of countless thousands of years due to many physico-chemical processes (see Climate change controversies, section 7). CO2 concentrations determined on the basis of fossil plant stomata tend to be higher and more variable than those determined from ice cores. For instance, stomata data indicates that CO2 levels may have reached 425 ppm 12,750 years ago (Steinthorsdottir et al., 2013). Moreover, in the 19th and early 20th centuries various chemical analyses were made of CO2 levels which also gave higher values than ice cores (Middleton, 2010; Scotese, 2010). None of these records is perfect, so controversy remains. However, this does not alter the fact that changes in CO2 concentration tend to lag temperature changes during the ice age cycles. The demonization of carbon dioxide as a 'pollutant' or 'poison' is ignorant and idiotic. CO2 is a colourless, odourless, tasteless, nontoxic gas that is essential to life on earth. It is one of the raw materials for photosynthesis, the process whereby plants and other organisms use energy from sunlight to convert carbon dioxide and water into carbohydrates to sustain their growth. This is accompanied by the release of oxygen, which has built up in the atmosphere, allowing higher life forms to evolve. Thousands of laboratory and field experiments over the past 200 years demonstrate numerous growth-enhancing, water-conserving and stress-alleviating effects of elevated atmospheric CO2 on terrestrial and aquatic plants (NIPCC, 2014; CO2science.org). Horticulturists add extra CO2 to their glasshouses to raise the concentration to as much as 1500 ppm and boost crop yields (maximumyield.com). A 300 ppm increase in CO2 levels is expected to increase agricultural yields by an average of one-third (Idso, 2017). Fig. 5.3 Positive impact of CO2 on plants and trees. (NIPCC, 2014) Satellite data shows that between 25% and 50% of earth's vegetated lands have undergone significant greening over the last 35 years.
CO2 fertilization explains 70% of this effect, followed by nitrogen deposition (9%), climate change (8%) and land-cover change (4%) (Zhu et al., 2016). Fig. 5.4 (nasa.gov) Fig. 5.5 The IPCC's (2013, 6.1) simplified diagram of the global carbon cycle (climatechange2013.org). Numbers represent reservoir mass ('carbon stocks') in petagrams of carbon (1 PgC = 10^15 g = 1 billion tonnes) and annual carbon exchange fluxes (in PgC/yr). Black numbers and arrows indicate estimated reservoir mass and exchange fluxes prior to the industrial era (1750). Red arrows and numbers indicate annual 'anthropogenic' fluxes (2000-2009 average). The uptake of anthropogenic CO2 by the ocean and by terrestrial ecosystems ('carbon sinks') is indicated by the red arrow numbers in 'Net land flux' and 'Net ocean flux'. Red numbers in the reservoirs denote cumulative changes of anthropogenic carbon over the industrial period (1750-2011). The IPCC notes that 'Individual gross fluxes and their changes since the beginning of the Industrial Era have typical uncertainties of more than 20%', and that figures have been adjusted 'to achieve an overall balance'. Whatever its shortcomings, the above diagram helps to put human-related CO2 emissions in perspective. Total 'anthropogenic' emissions amount to 8.9 billion tonnes (gigatonnes, Gt) of carbon per year: 7.8 GtC from fossil fuel combustion and cement production, and 1.1 GtC from land-use changes.* Total annual emissions from the oceans are 78.4 GtC and those from the terrestrial biosphere are 118.7 GtC, giving a total of 197.1 GtC. This means that anthropogenic emissions make up only about 4.3% of total annual CO2 emissions (8.9 out of 206 GtC); fossil fuel combustion and cement production alone account for 3.8%. Moreover, the IPCC admits that the figures for natural emissions have an uncertainty of 20%, equal to 39.4 GtC. In other words, total anthropogenic emissions are far smaller than the error margin in the level of natural emissions. Finally, the average annual increase in atmospheric carbon is 4 GtC (or about 2 ppm), which the IPCC attributes entirely to humans, on the grounds that it is only about half the total anthropogenic emissions, with the other half being absorbed by the oceans and terrestrial biosphere. *8.9 GtC per year = 32.7 GtCO2 per year. 1 GtC = 3.67 GtCO2 (carbon and oxygen have atomic masses of 12 and 16 respectively; CO2/C = 44/12 = 3.67). A key assumption behind the IPCC's analysis is that the carbon cycle was in a state of equilibrium prior to 1750, and that humans have 'perturbed' this balance. But clearly the carbon cycle could not have been in perfect equilibrium, because the earth was already warming in 1750, and in a warming world the oceans start releasing more CO2. There are currently huge uncertainties regarding CO2 sources and sinks. The following graph shows that annual increases in atmospheric CO2 concentration correlate poorly with annual emissions from fossil fuel burning and cement production. Fig. 5.6 (Spencer, 2014) Climate scientist Roy Spencer (2014) comments: There are obviously some very large natural yearly imbalances in CO2 sources and sinks, with the atmospheric yearly increase ranging anywhere from 23% to 100% of anthropogenic emissions. If the yearly fluctuations are this large, how do we know that nature is in long-term balance for CO2 sources and sinks? The answer is, we don't. This is why NASA launched the OCO-2 [Orbiting Carbon Observatory 2] satellite [in 2014], to try to get a better handle on the regional sources and sinks of CO2 around the world.
Furthermore, in contradiction to IPCC predictions, the ability of the Earth to absorb extra CO2 seems to be increasing with time: the equivalent of 40% of our emissions were being absorbed early in the record, a fraction which has increased to 50% late in the record. Many other researchers have challenged the belief that 100% of the annual increase in atmospheric CO2 is due to humans (see Climate change controversies, section 7; Berry, 2018; Harde, 2017; Salby, 2016; Björnbom, 2013; Humlum et al., 2013; Glassman, 2010; Rorsch et al., 2005). Exactly how much of the CO2 increase may come from stronger outgassing from the oceans, faster plant growth and decomposition, or other sources is unknown. Regardless of the precise causes of increasing atmospheric CO2, the net effect is likely to be beneficial for life on earth. There are plenty of genuine environmental problems that need fixing, but rising atmospheric CO2 is not one of them – it is a diversion. Examples of highly uncertain sources of CO2 include the following (Quinn, 2010):
- Microbial/insect activity. Termites alone are estimated to emit more CO2 per year than human activities (50 Gt versus 32.7 Gt) (Zimmerman et al., 1982).
- Planetary degassing (besides CO2, this includes water vapour, methane, sulphur-bearing gases, nitrogen). Grape-sized, highly viscous, liquid CO2 balls have been observed drifting up from the bottom of the Marianas Trench and surrounding submarine volcanoes in the western Pacific. This outgassing may be occurring all along the world's volcanic ocean-ridge and trench system.
- Methane hydrates in marine sediments, which can be released as oceans warm. The amount of carbon bound in gas hydrates is conservatively estimated at twice the amount of carbon in all known fossil fuels on earth.
It is noteworthy that changes in atmospheric CO2 concentration continue to lag behind temperature changes. Humlum et al. (2013) found that short-term changes in atmospheric CO2 lagged about one year behind short-term changes in global temperature, but found no correlation with short-term changes in anthropogenic CO2 emissions. Fig. 5.7 Green curve: southern-hemisphere sea surface temperature (HadSST3). Red curve: rate of change in CO2 concentration (Mauna Loa). (woodfortrees.org) The oceans are not acidic but alkaline (i.e. their average pH is higher than 7.0), and they will always remain so; what is reportedly happening is that they are becoming slightly less alkaline – that is the real meaning of the potentially misleading term 'ocean acidification'. If ocean pH ever fell below 7, shells would start to dissolve. The IPCC (2013, FAQ 3.3) claims that the absorption of atmospheric CO2 by the oceans has reduced the average pH of ocean surface waters from about 8.2 to 8.1 since the beginning of the industrial revolution, and that the average pH could be 0.2 to 0.4 lower than it is today by the end of this century. Like other scare stories, this one is overly simplistic and exaggerated. The pH of present-day seawater varies widely. On average it ranges from 8.2 to 8.4, but it can be as high as 9.5 in isolated coral reef pools during the day and can fall to 7.5 at night. In fact, night-time pH minima on the reef flat fringing Heron Island on the Great Barrier Reef are already lower than the pH values predicted for the open ocean by 2100.
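Because pH is a logarithmic scale (pH = -log10[H+]), the pH changes quoted above correspond to fixed multiplicative changes in hydrogen-ion concentration. A minimal sketch of the conversion, using values taken from the text:

```python
# Sketch: convert a change in pH into the factor by which the hydrogen-ion
# concentration [H+] changes. A drop in pH means a rise in [H+], but seawater
# remains alkaline as long as its pH stays above 7.
def h_ion_factor(ph_start, ph_end):
    """Factor by which [H+] increases when pH falls from ph_start to ph_end."""
    return 10 ** (ph_start - ph_end)

print(h_ion_factor(8.2, 8.1))  # ~1.26: the claimed drop since industrialization
print(h_ion_factor(8.2, 7.8))  # ~2.51: the IPCC's worst-case 0.4 drop by 2100
print(h_ion_factor(9.5, 7.5))  # 100.0: the day-night range in some reef pools
```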
Experiments showing harmful effects of ocean acidification on biological systems often deal with single species in contrived laboratory conditions that ignore natural variability in pH and take no account of the ability of species to adapt. Such studies tend to appear in high-profile journals, while studies reporting no adverse effect tend to appear in lower-ranking journals (Abbot & Marohasy, 2017). Fig. 5.8 Variations in pH at Flinders Reef reconstructed from boron isotopes in coral. Comparatively rapid changes in pH have occurred on a decadal timescale before the recent increase in atmospheric CO2. (Abbot & Marohasy, 2017) We know that many aquatic organisms survived when atmospheric CO2 was 15 times higher than it is today. The NIPCC (2014) states: Rising temperatures and atmospheric CO2 levels do not pose a significant threat to aquatic life. Many aquatic species have shown considerable tolerance to temperatures and CO2 values predicted for the next few centuries, and many have demonstrated a likelihood of positive responses in empirical studies. Any projected adverse impacts ... will be largely mitigated through phenotypic adaptation or evolution during the many decades to centuries it is expected to take for pH levels to fall. Abnormally high seawater temperatures can cause corals to bleach. This results from corals expelling their algal symbionts, and forces them to take on board a better adapted strain of symbionts. Most corals that bleach fully recover, and are then relatively unsusceptible to similar high temperatures in subsequent years. A survival mechanism such as bleaching indicates that corals have adapted to periods of unusually high temperatures in the past (Ridd, 2017). Fig. 5.9 Australia’s Great Barrier Reef has been a poster child for the global warming cause (businessdestinations.com, climatecentral.org). Its imminent demise has been predicted as often as the demise of Arctic sea ice. The head of the Great Barrier Reef Marine Park Authority has accused activist groups of exaggerating the extent of coral bleaching (joannenova.com.au). In the non-binding Paris Climate Agreement (2016), many countries pledged to reduce their greenhouse gas emissions, with a view to limiting global warming to less than 2°C, and if possible 1.5°C, by the end of the century. What impact will these measures really have on global temperature? Bjørn Lomborg (2017) has addressed this question using MAGICC 6.3, the latest version of a climate model used in all the IPCC’s past reports. He stresses that previous decarbonization promises have routinely been flouted; for instance, almost every Organisation for Economic Cooperation and Development (OECD) country missed its target under the 1997 Kyoto Protocol. He does not take seriously promises by Western countries to cut emissions by 85 to 90% by 2050. He finds that even if we assume that the promises countries have made for the period until 2030 are continued for the rest of the century, the temperature in 2100 will be only 0.17°C lower than it would otherwise have been. Even this puny figure is based on an exaggerated climate sensitivity of 3°C. What will the emission reductions agreed in Paris cost? Amazingly, there are no official estimates of this. We know from past experience that governments tend to enormously underestimate the economic costs of adopting climate policies. In addition, politicians rarely pick the most efficient policies. 
Lomborg says that EU countries could have reduced their emissions by switching to gas and improving efficiency; this would have cost 0.7% of their gross domestic product (GDP) per year. But instead, they opted for extremely inefficient subsidies for solar power and biofuels, which almost doubled the cost to 1.3% of GDP. He calculates that the global cost of the Paris Agreement will reach around US$1 trillion per year by 2030 if the most efficient policies are adopted, or nearly US$2 trillion per year if less efficient policies are adopted. This works out at a minimum of $4.1 trillion for each hundredth of a degree saving in temperature! Integrated assessment models (IAMs) are used to map out ways of stabilizing CO2 emissions and preventing more than 2°C of warming by the end of the century. But virtually all IAM mitigation scenarios depend on the wide deployment of technologies that do not exist, notably bioenergy with carbon capture and storage (BECCS). Some scenarios assume that BECCS will be able to remove over 1000 Gt of CO2 from the atmosphere (‘negative emissions’) over the course of the century and store it underground. Full-scale implementation of BECCS would require a global land area 1½ times the size of India, which would not be available for agriculture or other uses. In other words, climate policies – including the Paris Agreement – focused on stabilizing atmospheric CO2 at low levels are based not on science, but on science fiction (Pielke, 2018; theclimatefix.com). Nearly all the increase in energy-related CO2 emissions in the coming decades will come from emerging economies like China and India, as the graph below shows. Non-OECD countries are expected to account for 68% of emissions in 2040, compared with 46% in 1990 (eia.gov). In this context, the emission cuts by developed (OECD) countries are of little consequence. Fig. 5.10 (mongabay.com) 6. Solar-terrestrial interactions Strong empirical correlations have been reported from all over the world between solar variability and climate indices, including temperature, precipitation, droughts, floods, streamflow and monsoons (NIPCC, 2013, ch. 3; Scafetta, 2013, 2014). There is climate data matching the sun’s variable 87-year Gleissberg cycle, its 105-year periodicity, the approx. 200-year De Vries/Suess cycle, the approx. 1000-year Eddy cycle, and the approx. 2500-year Bray cycle. The lows of the Bray cycle correspond to significant glacier readvances and vegetation changes. The lows in the De Vries cycle correlate with tree growth in many regions. The De Vries and Gleissberg cycles affect the intensity of summer monsoons and regional precipitation patterns. The lows in the Eddy cycle correspond to increased iceberg discharges in the North Atlantic (Javier, 2016). The Eddy cycle dominated Holocene climate evolution between 11,500 and 4000 years ago, and also in the last two millennia, where it defines the Roman, medieval and modern warm periods (Javier, 2017c). Some researchers believe that the sun could have contributed at least 50% of the post-1850 global warming, whereas IPCC climate models predict at most a 5% solar contribution. Solar scientist Willie Soon (2015) writes: [T]he IPCC asserts against all evidence that the sun has little influence on climate change. This represents neither a consensus nor an authoritative review of the subject. ... Centuries of observation and more recent research strongly suggest that our climate is modulated in important ways by the sun’s variability. 
The basic physics of this connection is still poorly understood and stands at the frontier of research. According to the IPCC’s Second and Third Assessment Reports, the sun’s radiative influence on earth’s climate since pre-industrial times is 0.3±0.2 W/m2. The Fifth Assessment Report (IPCC, 2013, SPM) reduced this number still further, to 0.05±0.05 W/m2, while it estimated the forcing due to increased greenhouse gas concentrations at 2.29 W/m2. The IPCC reaches this conclusion because it focuses solely on the direct impact of total solar irradiance (TSI) variations on the earth’s climate, and ignores possible amplification mechanisms and indirect influences. Satellite estimates of TSI vary from 1360 to 1365 W/m2; the error margin is therefore greater than the entire forcing attributed to greenhouse gases. Svensmark & Shaviv (2017) argue that sea level variations during the approx. 11-year sunspot cycle indicate that the solar forcing is about 1.25±0.25 W/m2, 25 times higher than the IPCC figure. During the sunspot cycle, TSI varies by only 0.1%, whereas ultraviolet (UV) radiation varies by 6% (longer wave UV) to 75% (shortest wave UV). These wavelengths are absorbed by aerosols, clouds and gases such as nitrous oxide in the earth’s lower atmosphere. The middle range of UV radiation forms ozone in the stratosphere, and the shortest waves of UV form a band of charged particles around the earth known as the ionosphere. The ozone formed in the stratosphere is a minor greenhouse gas. Some researchers argue that this ozone can cause high-altitude cloud formation, thereby significantly varying the amount of solar radiation reflected into space (Brady, 2017, ch. 11). Studies suggest that the major changes caused in the stratosphere may propagate down into the lower atmosphere through complex physical and chemical interactions (NIPCC, 2013, ch. 3). Cyclical changes in the sun’s magnetic field modulate the amount of cosmic rays reaching the earth, which govern the amount of atmospheric ionization. Despite IPCC denials, experiments confirm that this could significantly affect the formation of cloud condensation nuclei and either increase or decrease cloud cover in the earth’s lower atmosphere (Kirkby, 2007; Svensmark & Shaviv, 2017). John Quinn (2010) writes: Evidence indicates that global warming is closely related to a wide range of solar-terrestrial phenomena, from the Sun’s magnetic storms and fluctuating solar wind all the way to the Earth's core motions. Changes in the Solar and Earth magnetic fields, changes in the Earth’s orientation and rotation rate, as well as the gravitational effects associated with the relative barycenter motions of the Earth, Sun, Moon, and other planets, all play key roles. Solar-terrestrial magnetic-field and charged-particle interactions generate electric currents that produce heating in the atmosphere, oceans and earth’s interior, particularly along faults and fissures, and interfaces between different zones within the earth. Such interactions also influence earth’s angular momentum, which affects jet stream wind patterns and global climate (see Leybourne et al., 2018). Fig. 6.1 Solar activity (yellow curves) plotted against northern hemisphere temperature reconstructions (other curves) since 1600 (from Wanner et al., 2008). 
(Javier) There is a definite correlation between low sunspot numbers and cold periods on earth, such as the Oort Minimum (1040-1080), the Wolf Minimum (1280-1350), the Sporer Minimum (1460-1550), the Maunder Minimum (1645-1710) and the Dalton Minimum (1795-1825). Solar activity has been increasing for the past 300 years according to sunspot observations and solar proxies. However, the current sunspot cycle (the 24th since 1755) saw the lowest level of solar activity since the Dalton Minimum. During sunspot cycle 25, which will begin in 2019 and last until about 2030, solar activity is widely expected to be higher than in cycle 24, but still below average. This could result in the global warming slowdown continuing into the mid-2030s. As many wise people have noted, it’s difficult to make predictions, especially about the future – and especially when there are many variables and interacting cycles involved. Some researchers predict even lower solar activity in the decades ahead, resulting in a grand solar minimum (of which there have been about 30 during the Holocene), with temperatures falling as low as in the Maunder or Dalton Minimum (e.g. Abdussamatov, 2013; Yndestad & Solheim, 2017). However, this does not look very likely at present (Javier, 2018c). One analysis of Holocene climate cycles indicates that the period 1600-2100 should be a period of overall warming, with natural warming peaking in 2050-2100, followed by 500 years of cooling, provided the cycle maintains its beat (Javier, 2018a). What happens in the coming decades will help to clarify the sun’s impact on climate. 7. Extreme weather and alarmist hype As Richard Lindzen (2015) notes, ‘The failure of the public to get unduly excited over a degree or two of warming has led the environmental alarmists to turn to the bogey man of extreme weather.’ Nowadays, we’re frequently told that every turn of the weather is proof of global warming / climate change / climate chaos / climate disruption / climate weirding / the climate crisis (or whatever the latest buzzword may be). Each heat wave or cold snap, drought or flood, storm or bushfire, each decrease in sea ice or calving glacier is hailed as a sign of impending doom. In fact, whether we experience unusual warmth or unusual cold, more rain or less rain, more drought or less drought, more snow or less snow, more ice or less ice, more hurricanes or fewer hurricanes – it’s all cited as evidence that the earth is fast approaching a ‘tipping point’ that could lead to climate catastrophe. Climate change has been blamed for causing or worsening an endless range of problems, including airplane turbulence, murder, prostitution, rape, car thefts, barroom brawls and child marriage (Morano, 2018, ch. 11). For amusing lists of hundreds of contradictory things that have been blamed on ‘global warming’ (i.e. anthropogenic greenhouse gas emissions), see numberwatch.co.uk, whatreallyhappened.com and notrickszone.com. Many people are convinced that the weather is already becoming more extreme due to CO2 emissions. But even the IPCC (2013) doesn’t go this far, though its models do predict that the human fingerprint on extreme weather events will eventually become visible in the coming decades. Like negative events in general, extreme weather events are instantly broadcast around the world and sensationalized, and this can give the impression that lots of things are getting worse. However, extreme weather expert Roger Pielke Jr. 
(2017) writes: The world is presently in an era of unusually low weather disasters. This holds for the weather phenomena that have historically caused the most damage: tropical cyclones, floods, tornadoes and drought. Given how weather events have become politicized in debates over climate change, some find this hard to believe. Fortunately, government and IPCC ... analyses allow such claims to be adjudicated based on science, and not politics. Fig. 7.1 Economic damage from weather disasters as a proportion of global GDP has decreased since 1990. (theclimatefix.com) Fig. 7.2 Over the past five decades, the number of global hurricane-strength tropical cyclones (top curve) has decreased, while the number of major-hurricane-strength tropical cyclones (bottom curve) has increased slightly (though the trend is not significant). (wx.graphics) Fig. 7.3 The top curve shows the accumulated cyclone energy (ACE) for the entire globe, the lower curve shows ACE for the northern hemisphere, and the area in between shows ACE for the southern hemisphere. (wx.graphics) Fig. 7.4 (philklotzbach) After five category 4 or 5 hurricanes hit the US in 2005, climate activists issued numerous warnings that hurricane destruction would increase further in the years that followed. Instead, no category 3 or higher hurricanes made landfall in the US for the next 12 years – a pause this long is expected to happen only once every 250 to 300 years. Then in August and September 2017 three category 4 hurricanes hit the US – Harvey in Texas, Irma in Florida, and Maria in Puerto Rico. This prompted the global warming propaganda machine to go into overdrive. However, the worst decade for major (category 3, 4 and 5) hurricanes in the US was the 1940s. Fig. 7.5 There is no statistically significant trend in the number of hurricanes making landfall in the continental US. (philklotzbach) The biggest natural disaster in the history of the United States was the Great Galveston Hurricane of 8 September 1900, which killed an estimated 6000 to 12,000 people and wiped out almost the entire city, destroying 3600 buildings. The second strongest landfalling hurricane in the US struck coastal Mississippi in 1969, at a time when the popular press was warning of a new ice age. It destroyed virtually every structure and killed 259 people. The historical record shows that in some years and decades there are many storms, while others are relatively quiet. As Roy Spencer (2017b) says, 'This isn't what human-caused climate change looks like. It's what weather looks like.' Hurricanes require a unique set of circumstances to occur, and sufficiently warm sea surface temperatures are only one of them. Meteorologist Joe Bastardi (2018, 5) explains: 'Hurricanes are nature's way of taking heat out of the tropics and redistributing it to the temperate regions. Weather and climate are nature's way of seeking a balance it can never attain because of the very design of the system.' If the world continues to warm, the intensity of cyclones may increase in tropical regions, but in mid-latitude areas there could be fewer severe weather events and lower wind speeds because the equator-to-pole temperature gradient is expected to decline, and it is this that drives the jet stream (Brady, 2017, ch. 5). Storm data from the last 6500 years clearly shows that the frequency and strength of storms increase with cooling, and decrease with warming (Javier, 2017b, fig. 75b). In 1970 a tropical cyclone struck East Pakistan (now Bangladesh), killing 500,000 people.
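The accumulated cyclone energy (ACE) curves in fig. 7.3 can be reproduced from storm-track data. Under NOAA's definition, ACE is the sum of the squares of a storm's 6-hourly maximum sustained winds, in knots, counted while the system is at tropical-storm strength or above, and scaled by 10^-4. A minimal sketch, with invented wind values:

```python
# Sketch of NOAA's accumulated cyclone energy (ACE) metric: 1e-4 times the sum
# of squared 6-hourly maximum sustained winds (knots) while a system is at
# tropical-storm strength or above. The storm history below is hypothetical.
def ace(six_hourly_winds_kt, threshold_kt=35):
    """ACE (in units of 1e4 kt^2) for one storm's 6-hourly wind records."""
    return 1e-4 * sum(v ** 2 for v in six_hourly_winds_kt if v >= threshold_kt)

storm = [30, 40, 55, 75, 95, 110, 90, 60, 40, 25]  # invented 6-hourly winds
print(f"ACE for this storm: {ace(storm):.1f}")
# A season's (or the globe's) ACE is simply the sum over all its storms.
```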
Although coastal population has skyrocketed in recent decades, a humanitarian disaster on the scale of the 1970 cyclone is unlikely today because we have developed satellite technology to monitor storms, created warning systems and built infrastructure to protect people or evacuate them. As for floods, droughts and tornadoes, the official data shows little or no indication of them becoming more severe or frequent. The IPCC concludes:
- 'There continues to be a lack of evidence and thus low confidence regarding the sign of trend [i.e. whether there is an upward or downward trend] in the magnitude and/or frequency of floods on a global scale.' (2013, ch. 2)
- 'There is low confidence in observed trends in small spatial-scale phenomena such as tornadoes and hail ...' (2012)
- '[T]here is low confidence in detection and attribution [to humans] of changes in drought over global land areas since the mid-20th century.' (2013, ch. 10)
It's worth remembering that in the 1970s many climatologists blamed droughts, floods and other extreme weather on global cooling (Morano, 2018, ch. 12). It seems that the frequency and intensity of heatwaves ought to increase in a warming world. However, as already noted, global warming is having more effect on minimum temperatures than on maximum temperatures, producing generally warmer winters. In the US, the worst heatwaves and worst droughts to date happened in the 1930s. Fig. 7.6 US annual heat wave index, 1895-2015. (epa.gov) Fig. 7.7 Average drought conditions in the contiguous United States, 1895-2015. (epa.gov) In 2000 a senior scientist at the UK's Climatic Research Unit predicted that within a few years winter snowfall would become 'a very rare and exciting event'. 'Children just aren't going to know what snow is,' he declared (climatedepot.com). This prophecy is not faring very well, as the graph below shows. However, climate activists have now decided that more snow is 'consistent with' man-made climate change. A headline in the pro-alarmist Guardian newspaper proclaimed: 'That snow outside is what global warming looks like' (climatedepot.com). Fig. 7.8 (ncdc.noaa.gov) In a study of global trends in wildfires, Doerr & Santín (2016) wrote: many consider wildfire as an accelerating problem, with widely held perceptions both in the media and scientific papers of increasing fire occurrence, severity and resulting losses. However, important exceptions aside, the quantitative evidence available does not support these perceived overall trends. Instead, global area burned appears to have overall declined over past decades, and there is increasing evidence that there is less fire in the global landscape today than centuries ago. It has been claimed that global warming could cause up to 40,000 plant and animal species to go extinct by 2100 (newscientist.com). As the NIPCC (2014) points out, such extreme predictions are based on models that exaggerate future warming and on 'assumptions about the immobility of species that are routinely contradicted by real-world observations'. The adaptive responses of many species, such as range shifts and phenotypic or genetic adaptations, provide evidence of species resilience. Data indicates that in some cases warmer temperatures and higher atmospheric CO2 concentrations will be highly beneficial, and favour a proliferation of species. After all, the greatest diversity of life is found in the tropics. The polar bear has long been an icon for the global warmist cult.
The polar bear population was allegedly being decimated by rising temperatures and declining sea ice in the Arctic. It turns out, however, that in the 1950s and 60s there were 5000 to 10,000 polar bears, while in 2016 there were between 22,000 and 31,000 – the highest estimate in 50 years. What's more, over the past 10,000 years polar bears have managed to survive Arctic temperatures from 0.5 to 5°C higher than today. In Greenland, the warmest decades since the 18th century were in the 1930s and 40s (Vinther et al., 2006). Even climate activists seem to be realizing their mistake, but – fortunately for diehard alarmists – modellers are still able to program their models to show a serious drop in polar bear numbers at some point in the distant future (Morano, 2018, ch. 4; polarbearscience.com). Still waiting for extinction ... Polar bears have survived past changes in climate that have exceeded those of the 20th century or are forecast by computer models to occur in the future. Sea level has risen by around 130 metres since the last glacial maximum, 22,000 years ago. Nowadays, there are many scare stories about the rise in sea level accelerating and how this will threaten coastal communities and wipe out low-lying Pacific islands. The IPCC claims that sea level could rise by up to 82 cm by the year 2100, but this would require a massive acceleration of the sea level rise observed over the past 150 years. According to the IPCC (2013, SPM), the mean rate of sea level rise was 1.7 mm/yr between 1901 and 2010, based on tide gauge data, while the average sea level rise was 3.2 mm/yr between 1993 and 2010, based on satellite altimetry data. Transforming raw satellite measurements into sea level variations is a complex process involving many corrections and adjustments that are orders of magnitude greater than the actual rise in sea level. Some 0.3 mm/yr of the 3.2 mm/yr trend in satellite-era sea level is a global isostatic adjustment to account for rising shorelines and sinking ocean floors caused by the melting of the ice sheets since the end of the last glacial period (Tisdale, 2015, 181). In many coastal regions the land is not rising but subsiding – yet no correction is made for that. Furthermore, the volume of the oceans is affected by numerous tectonic motions that are impossible to quantify. Tide gauges measure the local rate of sea level rise, relative to the local coast, regardless of whether the land is rising or sinking or of changes in ocean volume. This is the information of most value to coastal communities. According to Nils-Axel Mörner (2017), global tide gauge datasets show a sea level rise rate of between 0.25 and 1.7 mm/yr, depending on the choice of stations. Based on tide gauge records, Jevrejeva et al. (2008) found that the fastest sea level rise during the past 300 years occurred between 1920 and 1950, reaching a maximum of 2.5 mm/yr – something that cannot be blamed on CO2. The IPCC (2013, SPM) acknowledged that high rates of sea level rise similar to today's occurred between 1920 and 1950. There is substantial multidecadal variability in the sea level change record, including an approx. 60-year oscillation (Curry, 2018a,b), which does not correlate with increasing CO2. Fig. 7.9 Top: Global sea level since 1700 (grey shading represents uncertainty). Bottom: Evolution of the rate of sea level change since 1700. (Jevrejeva et al., 2008) Between 1995 and 2007 Arctic sea ice extent declined by 30%, leading to claims that it was in a 'death spiral'.
Numerous predictions that the Arctic Ocean would be ice-free in summer have already failed (wattsupwiththat.com). In fact, summer Arctic sea ice extent has not declined any further since 2007. If the prediction of an ice-free Arctic eventually comes true, it would be good news for shipping. Looking back at earth’s history, it is unusual for there to be year-round ice at both poles. John Holdren, who was an extreme coolist in the 1970s but later became an ardent warmist, predicted in 1987 that CO2-induced famines could kill as many as a billion people by 2020. In 2009, while serving as science adviser to President Obama, he claimed that this prediction might still come true. In reality, the number of undernourished people fell by 21% from the early 1990s to 2012-14. Food availability worldwide has risen from 2220 kcal per person per day in the early 1960s to 2790 kcal in 2006-08 (worldhunger.org). Instead of undermining food security, warming and CO2 enrichment are boosting food production. The prevailing theory is that the increase in anthropogenic greenhouse gas emissions has reduced the climate’s ability to radiate heat to outer space by about 1%. Those who accept this argument could say that humans bear 1% of the responsibility for bad weather. But that argument is never heard because it’s not scary or dramatic enough. What’s more, since the oceans have only warmed by hundredths of a degree since the 1950s, the observed average change in the global energy budget is only about 0.25% (Spencer, 2017a). And as we’ve seen, there is no empirical proof that all or most of this change is due to human activity. The only proof comes from man-made models that reflect the beliefs and assumptions of those who create them and are unable to accurately simulate the earth’s climate. 8. The cost of the climate cult ‘The clock is ticking. We have 10 years to save the planet.’ Doomsayers have been spouting this sort of nonsense for half a century; it began during the global cooling scare in the 1970s, and continued when the global warming scare got under way in the late 1980s (Morano, 2018, ch. 13). Climate crusader Al Gore makes millions of dollars from companies that depend on government subsidies that he personally lobbies to perpetuate. In 2006 he announced that the climate ‘tipping point’ was 10 years away, but two years later he announced that it was still 10 years away. Prince Charles, another climate clown, proclaimed in 2009 that we had 100 months (8⅓ years) to prevent climate catastrophe, while James Hansen declared that we had four years, and UK prime minister Gordon Brown announced that we had just 50 days. In 2015 Prince Charles announced that the climate apocalypse had been postponed to 2050. In 2007 Rajendra Pachauri, then chief of the IPCC, declared: ‘If there’s no action before 2012, that’s too late.’ 2012 came and went without disaster, but disaster did strike three years later when Pachauri had to resign following a sex scandal. Armageddon postponed (again) ... The climate change panic has more to do with politics, prestige and the power of money than with objective science. The emails leaked during the 2009 Climategate scandal exposed the efforts by a well-funded clique of alarmists to manipulate science to fit the catastrophist narrative, and to silence and sideline scientists with opposing views (see Climategate). 
Geologist Bob Carter (2015) writes: All the classic tools of propaganda and spin have been deployed for the advancement of public alarm about global warming, including scientific malfeasance, noble cause corruption, the makeover of formerly independent expert groups such as academies of science, the indoctrination of school children from kindergarten onwards and the ad hominem demonisation of scientists who fail to conform to the orthodox IPCC view. John McLean, an IPCC expert reviewer, states: 'The reality is that the UN IPCC is in effect little more than a UN-sponsored lobby group, created specifically to investigate and push the "man-made warming" line' (Morano, 2018, ch. 3). A document has been published listing over 1000 international scientists, of various political persuasions, who disagree with the IPCC consensus (groupthink). Geophysicist and green guru James Lovelock, the author of the Gaia hypothesis, was once a climate alarmist, but in 2010 he stated: 'The great climate science centers around the world are more than well aware how weak their science is. ... We haven't got the physics worked out yet' (cfact.org). Greenpeace cofounder Patrick Moore left Greenpeace in 1986 because he wanted to base his environmentalist positions 'on science and logic rather than sensationalism, misinformation, and fear'. Martin Hertzberg, a retired Navy meteorologist, states: 'As a scientist and life-long liberal Democrat, I find the constant regurgitation of the anecdotal, fear mongering clap-trap about human-caused global warming to be a disservice to science' (Morano, 2018, ch. 9). Physicist Denis Rancourt believes that the global warming movement is 'as much a psychological and social phenomenon as anything else'. He says that 'by far the most destructive force on the planet is power-driven financiers and profit-driven corporations and their cartels backed by military might' and that 'the global warming myth is a red herring that contributes to hiding this truth'; he also believes that the 'climate change scam is now driven by the top-level financiers newly eyeing a multi-trillion-dollar paper economy of carbon trading' (Morano, 2018, ch. 9). From 1993 to 2013 total US expenditure on climate change amounted to more than $165 billion, with over $35 billion going to climate science (Morano, 2018, ch. 15). Since this money predominantly supports the alarmist point of view, it has stunted and distorted climate science and spawned an entire industry cashing in on climate fears. As earth scientist Ian Plimer (2015) says, 'there are now armies of bureaucrats, politicians, scientists, and businesses living off the climate catastrophe scare'. The funding for climate alarmists is estimated to be 3500 times higher than that for their opponents (abc.net.au). Where there's a trough, there are pigs. Fossil fuels like coal, oil and gas are highly concentrated forms of energy that have brought enormous benefits to humanity, enabling the majority of people in the industrialized nations to live longer and healthier lives and escape grinding poverty. Current efforts to decarbonize the economy include the hasty, large-scale installation of expensive, heavily subsidized, low-efficiency solar and wind technologies that are still only in their infancy.
Since solar and wind energy are intermittent and unreliable, and there is no large-scale storage technology on the horizon that could solve this problem, these methods of electrical generation have to be backed up by fossil fuel generators to avoid blackouts when the wind stops blowing and the sun stops shining (see The energy future). A 1% increase in the installed capacity of wind power results in electricity generation from oil and natural gas increasing by 0.26% and 0.22% respectively (Marques et al., 2018). This is costly and inefficient. Mainstream media reports about wind and solar farms typically highlight their nameplate capacity (i.e. the power they would produce under optimal conditions), and fail to mention that the actual output is usually 70 to 80% lower. Since it costs two to eight times more to generate a megawatt-hour of electricity from wind and solar energy than from coal and natural gas (cornwallalliance.org), the rush to ‘go green’ is inflating the price of electricity and driving more people into fuel poverty. In Germany, for example, electricity prices rose by 51% during its expansion of solar and wind energy from 2006 to 2016 (environmentalprogress.org). In 2017, wind produced an estimated 0.68% of global energy, and solar PV 0.24%, and together they received $125 billion in subsidies (linkedin.com/pulse). By 2021, subsidies for wind power in the UK will reach £7.1 billion, or £265 per household (Homewood, 2017).
Many governments and agencies in the developed world are doing their best to prevent developing countries from using fossil fuels to generate the affordable energy that they need to lift themselves out of dire poverty, as the West has already done. Electricity helped raise life expectancy in China from 59 to 75 years, but 1.1 billion poor people across the world still don’t have electricity (yahoo.com). Some 3 billion people still cook and keep warm by burning fuels like wood, dung and charcoal, which give off deadly fumes. The World Health Organization estimates that this indoor air pollution causes around 3.8 million deaths a year (who.int; Lomborg, 2014; Paunio, 2018).
The climate is a highly complex, nonlinear, dynamic system governed by hundreds of factors. Governments will never be able to control the climate or stop it from changing, no matter how many trillions of dollars they spend or how many taxes they impose. And they certainly won’t achieve this by tinkering with one atmospheric trace gas. Whether the earth warms or cools, humans must either adapt or die.
References
Abbot, John, and Marohasy, Jennifer (2017). Ocean acidification: not yet a catastrophe for the Great Barrier Reef. In Marohasy, 2017a, ch. 2
Abdussamatov, H.I. (2013). Grand minimum of the Total Solar Irradiance leads to the Little Ice Age. Journal of Geology & Geophysics, v. 2, no. 113, omicsonline.org
Bastardi, Joe (2018). The Climate Chronicles: Inconvenient revelations you won’t hear from Al Gore – and others. Relentless Thunder Press
Björnbom, Pehr (2013). A comparison of Gösta Pettersson’s carbon cycle model with observations. klimatupplysningen.se
Brady, Howard T. (2017). Mirrors and Mazes: A guide through the climate debate. Canberra: Jamison Centre, 2nd ed. (Kindle ed.)
Carter, Robert M. (2015). The scientific context. In Moran, 2015, ch. 5
Christiansen, B., and Ljungqvist, F.C. (2012). The extra-tropical northern hemisphere temperature in the last two millennia: reconstructions of low-frequency variability. Climate of the Past, v. 8, 765-86, clim-past.net
Christy, John (2017). Testimony given to a hearing of the Committee on Science, Space and Technology of the US House of Representatives on 29 March 2017. In: Climate Science: Assumptions, policy implications and the scientific method. Global Warming Policy Foundation, thegwpf.org
Christy, J.R., and McNider, R.T. (2017). Satellite bulk tropospheric temperatures as a metric for climate sensitivity. Asia-Pacific Journal of Atmospheric Science, v. 53, no. 4, 511-8, wattsupwiththat.com
Curry, Judith (2017). Climate Models for the Layman. Global Warming Policy Foundation, thegwpf.org
Curry, Judith (2018a). Sea level rise acceleration (or not): Part III – 19th & 20th century observations. judithcurry.com
Curry, Judith (2018b). Sea level rise acceleration (or not): Part IV – Satellite era record. judithcurry.com
Curry, Judith (2018c). Sea level rise acceleration (or not): Part VI – Projections for the 21st century. judithcurry.com
Ellis, R., and Palmer, M. (2016). Modulation of ice ages via precession and dust-albedo feedbacks. Geoscience Frontiers, v. 7, no. 6, 891-909, sciencedirect.com
Eschenbach, Willis (2013). Air conditioning Nairobi, refrigerating the planet. wattsupwiththat.com
Eschenbach, Willis (2018a). Glimpsed through the clouds. wattsupwiththat.com
Eschenbach, Willis (2018b). Clouds and El Nino. wattsupwiththat.com
Farmer, G.T., and Cook, J. (2013). Earth’s albedo, radiative forcing and climate change. In: Climate Change Science: A modern synthesis. Springer, Dordrecht, 217-29, link.springer.com
Fuss, S., et al. (2014). Betting on negative emissions. Nature Climate Change, v. 4, 850-3, nature.com
Gasparrini, A., et al. (2015). Mortality risk attributable to high and low ambient temperature: a multicountry observational study. The Lancet, v. 386, no. 9991, 369-75, thelancet.com
Glassman, J.A. (2010). On why CO2 is known not to have accumulated in the atmosphere and what is happening with CO2 in the modern era. rocketscientistsjournal.com
Hansen, J., et al. (1981). Climate impact of increasing atmospheric carbon dioxide. Science, v. 213, no. 4511, 958-66, pubs.giss.nasa.gov
Hansen, J.E., and Lebedeff, S. (1987). Global trends of measured surface air temperature. Journal of Geophysical Research, v. 92, 13345-72, pubs.giss.nasa
Harde, Hermann (2017). Scrutinizing the carbon cycle and CO2 residence time in the atmosphere. Global and Planetary Change, v. 152, 19-26, sciencedirect.com
Homewood, Paul (2017). Wind power – some basic facts. notalotofpeopleknowthat.com
Humlum, O., Stordahl, K., and Solheim, J.-E. (2013). The phase relation between atmospheric carbon dioxide and global temperatures. Global and Planetary Change, v. 100, 51-69, sciencedirect.com
Idso, Craig D. (2017). Carbon dioxide and plant growth. In Marohasy, 2017a, ch. 13
IPCC (2012). Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation. Special Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press, ipcc.ch
IPCC (2013). Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press, ipcc.ch
Istvan, Rud (2014). Blowing Smoke: Essays on energy and climate. Houston, TX: Strategic Book Publishing & Rights Co (e-book)
Javier (2016). Periodicities in solar variability and climate change: a simple model. euanmearns.com
Javier (2017a). Nature unbound III: Holocene climate variability (Part A). judithcurry.com
Javier (2017b). Nature unbound V – The elusive 1500-year Holocene cycle. judithcurry.com
Javier (2017c). Nature unbound VI – Centennial to millennial solar cycles. judithcurry.com
Javier (2018a). Nature unbound VIII – Modern global warming. judithcurry.com
Javier (2018b). The effect of volcanoes on climate and climate on volcanoes. wattsupwiththat.com
Javier (2018c). Nature unbound IX – 21st century climate change. judithcurry.com
Jevrejeva, S., et al. (2008). Recent global sea level acceleration started over 200 years ago? Geophysical Research Letters, v. 35, L08715, psmsl.org
Jones, P.D., and Harpham, C. (2013). Estimation of the absolute surface air temperature of the earth. Journal of Geophysical Research: Atmospheres, v. 118, 3213-17, agupubs.onlinelibrary.wiley.com
Kirkby, J. (2007). Cosmic rays and climate. Surveys in Geophysics, v. 28, 333-75, arxiv.org
Kummer, Larry (2015). Manufacturing climate nightmares: misusing science to create horrific predictions. fabiusmaximus.com
Lewis, Nicholas, and Curry, Judith (2018). The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity. Journal of Climate, JCLI-D-17-0667, journals.ametsoc.org, niclewis.files.wordpress.com
Leybourne, B., Smoot, C., and Longhinos, B. (2014). World encircling tectonic vortex street – geostreams revisited: the southern ring current EM plasma-tectonic coupling in the western Pacific Rim. EGU General Assembly 2014, adsabs.harvard.edu
Lindzen, Richard S. (2015). Global warming, models and language. In Moran, 2015, ch. 3
Lindzen, R.S., and Choi, Y.-S. (2009). On the determination of climate feedbacks from ERBE data. Geophysical Research Letters, v. 36, L16705, agupubs.onlinelibrary.wiley.com
Loehle, C., and McCulloch, J.H. (2008). Correction to: A 2000-year global temperature reconstruction based on non-tree ring proxies. Energy & Environment, v. 19, 93-100, scienceandpublicpolicy.org
Lomborg, Bjorn (2014). The deadliest environmental threat (it’s not global warming). nypost.com
Lomborg, Bjørn (2017). The impact and cost of the 2015 Paris Climate Summit, with a focus on US policies. In Marohasy, 2017a, ch. 15
Marohasy, Jennifer, ed. (2017a). Climate Change: The Facts 2017. Melbourne: Institute of Public Affairs (Kindle ed.)
Marohasy, Jennifer (2017b). The homogenisation of Rutherglen. In Marohasy, 2017a, ch. 9
Marques, A.C., Fuinhas, J.A., and Pereira, D.A. (2018). Have fossil fuels been substituted by renewables? An empirical assessment for 10 European countries. Energy Policy, v. 116, 257-65, sciencedirect.com
McIntyre, Steve (2015). Antarctic ice mass controversies. climateaudit.org
McIntyre, Steve (2017). Reconciling model-observation reconciliations. climateaudit.org
McKitrick, Ross (2015). The hockey stick: a retrospective. In Moran, 2015, ch. 14
McKitrick, Ross (2018). All those warming-climate predictions suddenly have a big, new problem. financialpost.com
Michaels, Patrick J. (2018). Time to cool it: The U.N.’s moribund high-end global warming emissions scenario. cato.org
Middleton, David (2010). CO2: ice cores vs. plant stomata. wattsupwiththat.com
Montford, A.W. (2010). The Hockey Stick Illusion: Climategate and the corruption of science. London: Stacey International
Moran, Alan, ed. (2015). Climate Change: The facts 2014. Stockade Books (Kindle ed.)
Morano, Marc (2018). The Politically Incorrect Guide to Climate Change. Washington: Regnery Publishing (Kindle ed.)
Mörner, N.-A. (2017). Sea level manipulation. International Journal of Engineering Science Invention, v. 6, no. 8, 48-51, ijesi.org
NIPCC (Nongovernmental International Panel on Climate Change) (2013). Climate Change Reconsidered II: Physical Science. Chicago, IL: The Heartland Institute, climatechangereconsidered.org, heartland.org
NIPCC (2014). Climate Change Reconsidered II: Biological Impacts. Summary for Policymakers. Chicago, IL: The Heartland Institute, heartland.org
Nova, Joanne (2017). Mysterious revisions to Australia’s long hot history. In Marohasy, 2017a, ch. 8
Paunio, Mikko (2018). Kicking Away the Energy Ladder: How environmentalism destroys hope for the poorest. Global Warming Policy Foundation, thegwpf.org
Pielke, Roger, Jr. (2017). Weather-related natural disasters: Should we be concerned about a reversion to the mean? riskfrontiers.com
Pielke, Roger, Jr. (2018). Opening up the climate policy envelope. Issues in Science and Technology, summer, google.com
Plimer, Ian (2015). The science and politics of climate change. In Moran, 2015, ch. 1
Quinn, John M. (2010). Global Warming: Geophysical counterpoints to the enhanced greenhouse theory. Pittsburgh, PA: Dorrance Publishing Co.
Ridd, Peter (2017). The extraordinary resilience of Great Barrier Reef corals, and problems with policy science. In Marohasy, 2017a, ch. 1
Rorsch, A., Courtney, R.S., and Thoenes, D. (2005). The interaction of climate change and the carbon dioxide cycle. Energy & Environment, v. 16, no. 2, 217-38, journals.sagepub.com
Salby, Murry (2016). Atmosphere carbon dioxide. Video presentation, University College London, edberry.com
Scafetta, Nicola (2013). Discussion on climate oscillations: CMIP5 general circulation models versus a semi-empirical harmonic model based on astronomical cycles. Earth-Science Reviews, v. 126, 321-57, arxiv.org
Scafetta, Nicola (2014). The sun has a significant influence on the climate. mwenb.nl
Schneider, D.P., et al. (2006). Antarctic temperatures over the past two centuries from ice cores. Geophysical Research Letters, v. 33, L16707, agupubs.onlinelibrary.wiley.com
Scotese, C.R. (2010). The CO2 record in plant fossils. geocraft.com
Soon, William (2015). Sun shunned. In Moran, 2015, ch. 4
Spencer, Roy W. (2008). Climate Confusion: How global warming hysteria leads to bad science, pandering politicians and misguided policies that hurt the poor. New York: Encounter Books
Spencer, Roy W. (2014). How much of atmospheric CO2 increase is natural? drroyspencer.com
Spencer, Roy W. (2017a). An Inconvenient Deception: How Al Gore distorts climate science and energy policy. 2nd ed. (Kindle ed.)
Spencer, Roy W. (2017b). Inevitable Disaster: Why hurricanes can’t be blamed on global warming. Kindle ed.
Spencer, R.W., and Braswell, W.D. (2014). The role of ENSO in global ocean temperature changes during 1955-2011 simulated with a 1D climate model. Asia-Pacific Journal of Atmospheric Sciences, v. 50, no. 2, 229-37, link.springer.com
Steele, Jim (2013). Landscapes & Cycles: An environmentalist’s journey to climate skepticism. CreateSpace
Steinthorsdottir, M., et al. (2013). Stomatal proxy record of CO2 concentrations from the last termination suggests an important role for CO2 at climate change transitions. Quaternary Science Reviews, v. 68, 43-58, sciencedirect.com
Svensmark, Henrik, and Shaviv, Nir (2017). Finally! The missing link between exploding stars, clouds and climate on earth. sciencebits.com
Tisdale, Bob (2015). On Global Warming and the Illusion of Control: Part 1. bobtisdale.com
Tisdale, Bob (2018). Dad, Why Are You a Global Warming Denier? A short story that’s right for the times. Kindle ed.
Vinther, B.M., et al. (2006). Extending Greenland temperature records into the late eighteenth century. Journal of Geophysical Research, v. 111, D11105, crudata.uea.ac.uk
Wanner, H., et al. (2008). Mid- to Late Holocene climate change: an overview. Quaternary Science Reviews, v. 27, nos. 19-20, 1791-1828, citeseerx.ist.psu.edu
Watts, Anthony (2017). Creating a false warming signal in the US temperature record. In Marohasy, 2017a, ch. 5
Yndestad, H., and Solheim, J.-E. (2016). The influence of solar system oscillation on the variability of the Total Solar Irradiance. New Astronomy, v. 51, 135-52, researchgate.net
Zhu, Z., et al. (2016). Greening of the earth and its drivers. Nature Climate Change, v. 6, 791-5, nature.com
Zimmerman, P.R., et al. (1982). Termites: a potentially large source of atmospheric methane, carbon dioxide, and molecular hydrogen. Science, v. 218, no. 4572, 563-5, science.org
Zwally, H.J., et al. (2015). Mass gains of the Antarctic ice sheet exceed losses. Journal of Glaciology, v. 61, no. 230, 1019-36, cambridge.org
Climategate and the corruption of climate science
Climate change controversies
The global warming scare
The energy future
Earth’s meteoric veil
<urn:uuid:dbe25894-eba5-4d91-bca7-f4b5a92e83e3>
3.828125
24,450
Knowledge Article
Science & Tech.
55.96725
95,552,026
By: LA Frakes, JE Francis and JI Syktus. 274 pages, 50 illus.
The changes in the Earth's climate over the past 600 million years, from the Cambrian to the Quaternary, come under scrutiny in this book. The geological evidence for ancient climates is examined, such as the distribution of climate-sensitive sediments. The Earth's climate has changed many times throughout the Phanerozoic. In this book the climate history has therefore been divided into Warm and Cool modes: intervals when either the Earth was in a 'greenhouse' state, with higher levels of atmospheric CO2 and polar regions free of ice, or the global climate was cooler and ice was present at high latitudes. The studies presented here highlight the complex interactions between the carbon cycle, continental distribution, tectonics, sea level variation, ocean circulation and temperature change, as well as other parameters. In particular, the potential of the carbon isotope records as an important signal of the past climates of the Earth is explored. This book will be useful to all students and researchers with an interest in palaeoclimates and palaeoenvironments.
'... will undoubtedly be useful to students and teachers seeking an overview of palaeoclimates.' Mercian Geologist
<urn:uuid:da75b608-b58e-4beb-b8fd-c3b719ebc233>
3.34375
342
Product Page
Science & Tech.
35.674289
95,552,041
In keeping with the desirability of using bioregenerative systems for CELSS, the primary thrust in advanced water reclamation technology is in the area of bioreactors. A major effort in this area has been made over the past several years in the Hybrid Regenerative Water Recovery Development Laboratory at NASA-JSC. Owing to the vulnerability of systems dependent upon living organisms, physico-chemical water recovery methods will be required as back-up systems for use during long duration missions. A variety of electrochemical, membrane, sorption, and catalytic processes have been investigated. References to both bioregenerative and physico-chemical water recovery processes are given below.
Author: Tugrul Sezen
<urn:uuid:4a258f0b-5042-46b8-833c-636e13730130>
2.65625
231
Knowledge Article
Science & Tech.
10.88589
95,552,045
Calcium oxide, also known as quicklime, is an alkaline substance which has been in use since the medieval age. It is believed that quicklime is one of the oldest chemicals known to the human race. It can also be referred to as burnt lime.
Preparation of calcium oxide
Calcium oxide can be produced by thermal decomposition of materials like limestone or seashells that contain calcium carbonate (CaCO3; mineral calcite) in a lime kiln. The process used to prepare burnt lime is known as calcination: the reactant is thermally decomposed at a high temperature that is nevertheless kept well below its melting point. Calcium carbonate undergoes calcination at temperatures ranging between 1070 °C and 1270 °C. These reactions are usually carried out in a rotary kiln. The products formed are burnt lime and carbon dioxide. The carbon dioxide is removed as soon as it forms so that, in accordance with Le Chatelier's principle, the reaction proceeds to completion.
CaCO3 → CaO + CO2
This reaction is reversible, and it is endothermic in the forward direction: heat must be supplied to drive the decomposition.
Properties of calcium oxide
- Quicklime is an amorphous white solid with a high melting point of about 2600 °C.
- It is a very stable compound and withstands high temperatures.
- In the presence of water, it forms slaked lime. This process is called slaking of lime.
CaO + H2O → Ca(OH)2
- It is a basic oxide and forms salts when it comes in contact with an acid.
CaO + H2SO4 → CaSO4 + H2O
Uses of calcium oxide
- It is extensively used for medicinal purposes and in insecticides.
- It finds application in the manufacture of cement, paper, and high-grade steel.
- Lime is used as a laboratory reagent for dehydration, precipitation, etc.
- It is the cheapest available alkali and an important ingredient in the manufacture of caustic soda.
There are a few things that users should keep in mind regarding calcium oxide. The reaction between quicklime and water is usually vigorous. Quicklime can cause severe irritation, especially when inhaled or when it comes in contact with wet skin or eyes. Effects of inhalation include sneezing, coughing, and labored breathing. It can also cause abdominal pain, nausea, vomiting, and burns with perforation of the nasal septum. When quicklime reacts with water it can release enough heat to ignite combustible materials.
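Because calcination converts CaCO3 to CaO in a fixed 1:1 mole ratio, the theoretical quicklime yield from a given charge of limestone is simple to estimate. The short Python sketch below is illustrative only: it assumes pure calcium carbonate and complete decomposition, and uses standard molar masses.

```python
# Theoretical CaO yield from pure CaCO3 via: CaCO3 -> CaO + CO2
M_CACO3 = 100.09  # g/mol, molar mass of calcium carbonate
M_CAO = 56.08     # g/mol, molar mass of calcium oxide

def quicklime_yield_g(mass_caco3_g):
    """Grams of CaO from complete calcination of pure CaCO3 (1:1 mole ratio)."""
    return mass_caco3_g / M_CACO3 * M_CAO

print(quicklime_yield_g(1000.0))  # ~560 g of CaO per kg of limestone
```

Real kilns fall short of this figure, since limestone is never pure and decomposition is rarely complete.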
<urn:uuid:7b92574d-fe69-4556-b84a-d564df366443>
3.796875
586
Knowledge Article
Science & Tech.
38.158846
95,552,082
Influence of the “Tropical Pump” on Trace Constituents and Temperature
Part of the NATO ASI Series book series (volume 54)
Suppose that there is a trace constituent that is stratified vertically and is initially at rest, with constant mixing ratio surfaces corresponding to horizontal surfaces. When we “turn on” a meridional mass circulation with upward motion at low latitudes and downward motion at high latitudes, the effect would be to push the constituent upward at low latitudes and downward at high latitudes, so that the mixing ratio surfaces become tilted (Fig. 1). Breaking planetary waves in the stratosphere will produce irreversible mixing of the tracer along isentropic surfaces. This mixing will tend to flatten the surfaces of constant mixing ratio. Thus, mixing competes with the effects of the diabatic circulation. Mixing, however, is a local process, occurring in the region of wave breaking, while the diabatic circulation is a global scale process.
Keywords: Gravity Wave, Rossby Wave, Potential Vorticity, Planetary Wave, Geostrophic Wind
Andrews DG, Holton JR, Leovy CB (1987) Middle atmospheric dynamics. Academic Press, Orlando
Charney JG, Drazin PG (1961) Propagation of planetary-scale disturbances from the lower into the upper atmosphere. J Geophys Res 66:83–109
Holton JR (1992) Introduction to dynamic meteorology. Academic Press
Holton JR (1986) Meridional distribution of stratospheric trace constituents. J Atmos Sci 43:1238–1242
Lindzen RS (1981) Turbulence and stress owing to gravity wave and tidal breakdown. J Geophys Res 86:9707–9714
Rosenlof KH (1995) The seasonal cycle of the residual mean meridional circulation in the stratosphere. J Geophys Res 100:5173–5191
Yulaeva E, Holton JR, Wallace JM (1994) On the cause of the annual cycle in the tropical lower stratospheric temperature. J Atmos Sci 51:169–174
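The competition described above, advective tilting of mixing-ratio surfaces by the circulation versus flattening by wave-driven mixing, can be caricatured with a linear relaxation balance. The sketch below is our own illustrative construction, not a model from the chapter; the upwelling speed, scale height and mixing timescale are assumed round numbers.

```python
# Toy balance (illustrative only): the tilt s of a mixing-ratio surface
# steepens at a rate w/H set by tropical upwelling and relaxes back at a
# rate s/tau set by isentropic mixing, so ds/dt = w/H - s/tau.
# Equilibrium tilt: s_eq = w * tau / H.
w = 3.0e-4          # m/s, assumed residual-circulation upwelling
H = 7.0e3           # m, pressure scale height
tau = 90 * 86400.0  # s, assumed mixing timescale (~3 months)

s_eq = w * tau / H  # dimensionless measure of the equilibrium tilt
print(f"equilibrium tilt parameter: {s_eq:.2f}")  # ~0.33
```

Stronger upwelling or weaker mixing steepens the surfaces; vigorous wave breaking flattens them, just as the text describes.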
<urn:uuid:e22126a7-2fad-4e63-98ad-583cb895a8f0>
2.8125
505
Academic Writing
Science & Tech.
31.979519
95,552,113
In the Spanish Mediterranean environment, scrub vegetation occupies a greater area than does forest. The impact of wildfire on the scrub vegetation and recovery afterward affects a number of other processes, including water erosion. While recovered vegetation considerably influences soil protection and erosion control, this function has scarcely been studied. This study discusses the behavior and architecture of recovering (or regenerating) typical Mediterranean shrub vegetation and the subsequent impact on soil protection. The study compared two protective forage species (Medicago arborea L. and Psoralea bituminosa L.). The research was performed in field conditions on a set of four experimental plots. A control plot was maintained with no vegetation cover. Runoff and soil loss by water erosion between 1989 and 1992 were studied on each of these plots. The natural vegetation was found to have a more significant protective effect (69.2% decrease in soil loss) than the other species tested. Soil loss on the Medicago plot decreased by 41.7%, and soil loss on the Psoralea plot decreased by 29.3%. That the Psoralea was only recently planted must be considered in evaluating its protective effects.
Effect of Mediterranean Shrub on Water Erosion Control, J. L. Rubio. Springer Netherlands.
<urn:uuid:8250a437-5023-46f0-8c5a-243dbcf8f359>
3.109375
296
Truncated
Science & Tech.
31.751162
95,552,116
Stem Cell Researchers Just Figured Out How to Create New Embryos
Apr 07, 2014 13:22
Researchers from the University of Virginia have made a scientific discovery: they figured out how to turn stem cells into full-blown fish embryos. That means scientists can control embryonic development, and that they could potentially grow organs or even an entire organism from stem cells.
"We have generated an animal by just instructing embryonic cells the right way," said Chris Thisse, who made the discovery with her husband Bernard. She added, "If we know how to instruct embryonic cells, we can pretty much do what we want."
The researchers think it is possible to manipulate the signals to direct the formation of organs, which in this case will mean great things for people on the organ donor waiting list.
<urn:uuid:8a3df0f3-90ce-4c1a-a7d9-f30a243775c4>
2.65625
395
News Article
Science & Tech.
44.888654
95,552,118
Authors: David Johnson
Energy to Matter (E2M) proposes a structure for quarks and nucleons, and uses these to generate 3-dimensional models of the atomic structure of elements in the Periodic Table and their bonding characteristics. This short article and associated videos present some of the modelled atomic structures and show how the structure of the nucleus relates to the observed physical and bonding characteristics of the elements involved.
Comments: A 9 page paper containing 9 explanatory diagrams
[v1] 2018-04-23 04:06:10
<urn:uuid:618ffb52-ee72-43bb-8abb-59de261fed0f>
2.890625
257
Truncated
Science & Tech.
34.607109
95,552,120
How do substances (ions, proteins, ligands, etc.) cross from one side of the membrane to the other?
When thinking about movement across the cell membrane, whether from the outside of the cell or the inside, it is important to remember the fundamental principle of energy conservation. Essentially, energy can be transferred from one form to another but cannot be created or destroyed. In dealing with transport across the cell membrane it is important to understand that any movement of substances requires energy. This energy can come from establishing a gradient or from molecules (such as ATP) that transfer energy. A gradient can be established through concentration or by electrical potential. This means that on one side of the membrane there is a higher amount of substance or charge and on the other side there is a lower amount of substance or charge. Only by having the membrane in place do we ...
Transport across the cell membrane (Diffusion, Osmosis, Facilitated Diffusion, Active Transport).
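To make the idea of an electrochemical gradient concrete, the Nernst equation gives the membrane voltage at which the electrical force on an ion exactly balances its concentration gradient. A minimal sketch follows, using textbook-style potassium concentrations that are assumed here for illustration.

```python
import math

R = 8.314    # J/(mol*K), gas constant
F = 96485.0  # C/mol, Faraday constant
T = 310.0    # K, roughly body temperature (assumed)

def nernst_potential_mV(z, conc_out_mM, conc_in_mM):
    """Equilibrium (Nernst) potential for one ion species, in millivolts."""
    return (R * T / (z * F)) * math.log(conc_out_mM / conc_in_mM) * 1000.0

# Illustrative K+ concentrations: 5 mM outside, 140 mM inside
print(round(nernst_potential_mV(+1, 5.0, 140.0), 1))  # ~ -89.0 mV
```

A real resting potential reflects several ions at once (as in the Goldman equation), but the single-ion case already shows how a concentration difference alone stores usable energy across the membrane.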
<urn:uuid:0ab8e840-f0ea-4339-9390-8036146845ea>
3.96875
226
Truncated
Science & Tech.
47.83703
95,552,141
Using scanning tunneling spectroscopy, we study the transport of electrons through C(60) molecules on different metal surfaces. When electrons tunnel through a molecule, they may excite molecular vibrations. A fingerprint of these processes is a characteristic sub-structure in the differential conductance spectra of the molecular junction reflecting the onset of vibrational excitation. Although the intensity of these processes is generally weak, they become more important as the resonant character of the transport mechanism increases. The detection of single vibrational levels crucially depends on the energy level alignment and lifetimes of excited states. In the limit of large current densities, resonant electron-vibration coupling leads to an energy accumulation in the molecule, which eventually leads to its decomposition. With our experiments on C(60) we are able to depict a molecular scale picture of how electrons interact with the vibrational degrees of freedom of single molecules in different transport regimes. This understanding helps in the development of stable molecular devices, which may also carry a switchable functionality.
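The onset bias at which a vibrational channel opens corresponds directly to the energy of the mode: inelastic tunneling becomes possible once the electron energy e·V reaches the vibrational quantum. A small conversion sketch follows; the example wavenumber is an assumed, illustrative value, not one of the modes reported by the authors.

```python
# Convert a molecular vibrational wavenumber to the tunneling bias at
# which that mode can first be excited (e*V = h*c*wavenumber).
MEV_PER_INV_CM = 0.1239842  # 1 cm^-1 in meV (from 1 eV = 8065.54 cm^-1)

def onset_bias_mV(wavenumber_inv_cm):
    """Bias (mV) at which inelastic tunneling into this mode switches on."""
    return wavenumber_inv_cm * MEV_PER_INV_CM  # 1 meV per electron <-> 1 mV

print(onset_bias_mV(273.0))  # ~33.8 mV for an assumed 273 cm^-1 mode
```

In a dI/dV spectrum this appears as a symmetric step at positive and negative bias of that magnitude.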
<urn:uuid:06b54a10-e53b-459d-a5e1-19219dcbff43>
2.609375
228
Academic Writing
Science & Tech.
8.772609
95,552,156
Southwestern streams originating in high mountains are fed by snowpack and often flow throughout the year because of this summertime water source. There are also an abundance of intermittent or ephemeral southwestern streams that flow only in spring or during heavy rains. At lower altitudes these streams flow through desert and steppe habitats with very low rainfall, and are often the only water source across large landscapes. The stream corridors harbor plants, such as native cottonwood and willows that are unable to survive in the dryer surrounding uplands. The value of western streams is so great that it is essentially incalculable. Just their importance for supplying drinking water to major cities—such as Phoenix, Las Vegas, and Los Angeles—is huge. They also supply essential irrigation water for large-scale farming, especially during the summer growing period when temperatures are high and snowpack melt becomes the primary source of water. Western streams provide critical in stream flows that nurture diverse ecosystems and wildlife, without which the ecology of the Southwest would be radically different. The diversity and productivity of western streams are important for many Native American tribes. For example, the Lower Colorado River is vital to the Cocopah Tribe for subsistence, cultural, economic, and recreational activities. Whether permanent or intermittent, western streams are important recreational areas throughout the southwest, with both the permanent and higher altitude streams supporting recreational fishing. The stream habitats are used extensively by migratory birds during nesting and migration, as well as a variety of other wildlife including beaver and deer. The threat to western streams from climate change is extensive. As a result of rising winter temperatures reducing the winter accumulation of snowpack in western mountains, the amount of available spring and summer melt-water is declining. An equally significant threat is that spring temperatures are arriving as much as three to four weeks earlier than in the past, also reducing the amount of water available in the summer and fall. With only a 1.5 degree Fahrenheit increase in average global temperature, the Colorado River may shrink to its lowest level in at least 500 years. This is expected to occur within the lifetime of children born now, even with immediate reductions in greenhouse gas emissions. Lake Mead, a major reservoir on the Colorado River, is less than half full and could run dry by 2020. Increased severity of droughts and flood events from climate change will also affect water supply and water quality. The decline of water availability caused by climate change will exacerbate an already severe shortage of water in many areas of the Southwest. Water prices will likely sky-rocket as growing water demand conflicts with declining water availability. Mandatory water restrictions will likely affect people’s daily lives and the economic survival of irrigated croplands. These water wars may leave fish and wildlife last in line. Addressing exacerbated water shortages brought on by climate change will require a diversity of approaches. Considerable investment will be needed to reduce water demand by assisting homes, businesses, and agriculture in developing new water conservation strategies and low water use technologies. Stream corridors should be restored to natural conditions wherever possible to maintain optimal water flows and habitat for fish and wildlife. 
Restoration of native plant species along southwestern streams is vital for wildlife survival and diversity. Ensuring base flows and restoring natural flood flows is also critical to preserving natural stream habitat processes. On the Lower Colorado River alone, riparian and river flow restoration has a price tag of more than $250 million.
<urn:uuid:ce48e230-02f3-4eff-9143-4533b8e5bf11>
4.25
836
Knowledge Article
Science & Tech.
26.6175
95,552,157
In the geometry of planar curves, a vertex is a point where the first derivative of curvature is zero. This is typically a local maximum or minimum of curvature, and some authors define a vertex to be more specifically a local extreme point of curvature. However, other special cases may occur, for instance when the second derivative is also zero, or when the curvature is constant. For space curves, on the other hand, a vertex is a point where the torsion vanishes.
A hyperbola has two vertices, one on each branch; they are the closest of any two points lying on opposite branches of the hyperbola, and they lie on the principal axis. On a parabola, the sole vertex lies on the axis of symmetry; for a quadratic of the form y = ax² + bx + c, it is located at x = −b/(2a). For a circle, which has constant curvature, every point is a vertex.
Cusps and osculation
Vertices are points where the curve has 4-point contact with the osculating circle at that point. In contrast, generic points on a curve typically have only 3-point contact with their osculating circle. The evolute of a curve will generically have a cusp when the curve has a vertex; other, more degenerate and non-stable singularities may occur at higher-order vertices, at which the osculating circle has contact of higher order than four. Although a single generic curve will not have any higher-order vertices, they will generically occur within a one-parameter family of curves, at the curve in the family for which two ordinary vertices coalesce to form a higher vertex and then annihilate.
According to the classical four-vertex theorem, every simple closed planar smooth curve must have at least four vertices. A more general fact is that every simple closed space curve which lies on the boundary of a convex body, or even bounds a locally convex disk, must have four vertices. If a planar curve is bilaterally symmetric, it will have a vertex at the point or points where the axis of symmetry crosses the curve. Thus, the notion of a vertex for a curve is closely related to that of an optical vertex, the point where an optical axis crosses a lens surface.
- Agoston (2005), p. 570; Gibson (2001), p. 126.
- Gibson (2001), p. 127.
- Fuks & Tabachnikov (2007), p. 141.
- Agoston (2005), p. 570; Gibson (2001), p. 127.
- Gibson (2001), p. 126.
- Fuks & Tabachnikov (2007), p. 142.
- Agoston (2005), Theorem 9.3.9, p. 570; Gibson (2001), Section 9.3, "The Four Vertex Theorem", pp. 133–136; Fuks & Tabachnikov (2007), Theorem 10.3, p. 149.
- Sedykh (1994); Ghomi (2015)
- Agoston, Max K. (2005), Computer Graphics and Geometric Modelling: Mathematics, Springer, ISBN 9781852338176.
- Fuks, D. B.; Tabachnikov, Serge (2007), Mathematical Omnibus: Thirty Lectures on Classic Mathematics, American Mathematical Society, ISBN 9780821843161.
- Ghomi, Mohammad (2015), Boundary torsion and convex caps of locally convex surfaces, arXiv: , Bibcode:2015arXiv150107626G.
- Gibson, C. G. (2001), Elementary Geometry of Differentiable Curves: An Undergraduate Introduction, Cambridge University Press, ISBN 9780521011075.
- Sedykh, V.D. (1994), "Four vertices of a convex space curve", Bull. London Math. Soc., 26 (2): 177–180.
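For a parametric plane curve the vertices can be computed directly from the definition: find where the derivative of the signed curvature κ(t) = (x′y″ − y′x″)/(x′² + y′²)^(3/2) vanishes. Below is a sketch with SymPy, using an ellipse (semi-axes chosen arbitrarily) whose four vertices are known to sit at the ends of its axes.

```python
import sympy as sp

t = sp.symbols('t', real=True)
a, b = 2, 1  # assumed semi-axes of an illustrative ellipse

x, y = a * sp.cos(t), b * sp.sin(t)
xp, yp = sp.diff(x, t), sp.diff(y, t)
xpp, ypp = sp.diff(xp, t), sp.diff(yp, t)

# Signed curvature of a parametric plane curve
kappa = sp.simplify((xp * ypp - yp * xpp) / (xp**2 + yp**2) ** sp.Rational(3, 2))

# Vertices: parameter values where the curvature derivative vanishes
vertices = sp.solveset(sp.diff(kappa, t), t, sp.Interval.Ropen(0, 2 * sp.pi))
print(vertices)  # expect {0, pi/2, pi, 3*pi/2}: the ends of the two axes
```

The same recipe applies to any smooth parametrization; for a circle the curvature derivative is identically zero, matching the statement that every point of a circle is a vertex.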
<urn:uuid:06260b4e-b840-4bf0-acd0-6a8abb4bf2fb>
3.5625
821
Knowledge Article
Science & Tech.
58.284422
95,552,169
Forest ecologists have long sought to understand why so many different species of trees can coexist in the same niche. A modeling study is now providing clues. Forests, especially tropical forests, are home to thousands of species of trees—sometimes tens to hundreds of tree species in the same forest—a level of biodiversity ecologists have struggled to explain. In a new study published in the journal Proceedings of the National Academy of Sciences (PNAS), researchers at the International Institute for Applied Systems Analysis (IIASA) and their colleagues in Australia are now providing a first model that elucidates the ecological and evolutionary mechanisms underlying these natural patterns. “Forests in particular and vegetation in general are central for understanding terrestrial biodiversity, ecosystem services, and carbon dynamics,” says IIASA Evolution and Ecology Program Director Ulf Dieckmann. Forest plants grow to different heights and at different speeds, with the tallest trees absorbing the greatest amounts of sunlight, and shorter trees and shrubs making do with the lower levels of sunlight that filter through the canopy. These slow-growing shade-tolerant species come in an unexpectedly large number of varieties—in fact, far more than ecological models have been able to explain until now. Traditional ecological theory holds that each species on this planet occupies its own niche, or environment, where it can uniquely thrive. However, identifying separate niches for each and every species has been difficult, and may well be impossible, especially for the observed plethora of shade-tolerant tropical trees. This raises the fundamental question: are separate niches really always needed for species coexistence? In the new study, the researchers combined tree physiology, ecology, and evolution to construct a new model in which tree species and their niches coevolve in mutual dependence. While previous models had not been able to predict a high biodiversity of shade-tolerant species to coexist over long periods of time, the new model demonstrates how physiological differences and competition for light naturally lead to a large number of species, just as in nature. At the same time, the new model shows that fast-growing shade-intolerant tree species evolve to occupy narrow and well-separated niches, whereas slow-growing shade-tolerant tree species have evolved to occupy a very broad niche that offers enough room for a whole continuum of different species to coexist—again, just as observed in nature. Providing a more comprehensive understanding of forest ecosystems, the resulting model may prove useful for researchers working on climate change and forest management. Dieckmann says, “We hope this work will result in a better understanding of human impacts on forests, including timber extraction, fire control, habitat fragmentation, and climate change.” The study was led by Daniel Falster at Macquarie University in Australia, who was a participant in the 2006 IIASA Young Scientists Summer Program. Falster DS, Brännström Å, Westoby M, Dieckmann U (2017). Multi-trait successional forest dynamics enable diverse competitive coexistence. Proceedings of the National Academy of Sciences (PNAS). doi: 10.1073/pnas.1610206114.
Katherine Leitzell | idw - Informationsdienst Wissenschaft
<urn:uuid:d03cd6a1-a464-47fa-9b52-73f5571fca15>
4
1,279
Content Listing
Science & Tech.
31.367464
95,552,174
Causes of Global Warming
There are several causes of global warming, and it is important to learn about what is affecting our world. Global warming is a very serious threat to our way of life. Scientists have spent decades figuring out what is causing global warming. They've looked at the natural cycles and events that are known to influence climate. But the amount and pattern of warming that's been measured can't be explained by these factors alone. The only way to explain the pattern is to include the effect of greenhouse gases (GHGs) emitted by humans. One of the first things scientists learned is that there are several greenhouse gases responsible for warming, and humans emit them in a variety of ways. Most come from the combustion of fossil fuels in cars, factories and electricity production. The gas responsible for the most warming is carbon dioxide, also called CO2. Other contributors include methane released from landfills and agriculture (especially from the digestive systems of grazing animals), nitrous oxide from fertilizers, gases used for refrigeration and industrial processes, and the loss of forests that would otherwise store CO2. Different greenhouse gases have very different heat-trapping abilities. Some of them can even trap more heat than CO2. A molecule of methane produces more than 20 times the warming of a molecule of CO2. Nitrous oxide is 300 times more powerful than CO2. Other gases, such as chlorofluorocarbons (which have been banned in much of the world because they also degrade the ozone layer), have heat-trapping potential thousands of times greater than CO2. But because their concentrations are much lower than CO2's, none of these gases adds as much warmth to the atmosphere as CO2 does. In order to understand the effects of all the gases together, scientists tend to talk about all greenhouse gases in terms of the equivalent amount of CO2. Since 1990, yearly emissions have gone up by about 6 billion metric tons of "carbon dioxide...
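The CO2-equivalent bookkeeping mentioned above is simple arithmetic: each gas's mass is weighted by its heat-trapping multiplier (its global warming potential, GWP) relative to CO2. Here is a sketch using the rough multipliers quoted in the text; real inventories use GWP values tied to a specific time horizon, so these numbers are illustrative.

```python
# CO2-equivalent of a mix of greenhouse gases: sum(mass_i * GWP_i).
# GWP multipliers below follow the rough figures quoted in the text.
GWP = {"co2": 1, "methane": 20, "nitrous_oxide": 300}

def co2_equivalent(emissions_tonnes):
    """Total emissions expressed as tonnes of CO2-equivalent."""
    return sum(mass * GWP[gas] for gas, mass in emissions_tonnes.items())

# Hypothetical inventory, in tonnes per year
print(co2_equivalent({"co2": 1000.0, "methane": 10.0, "nitrous_oxide": 1.0}))
# -> 1000 + 200 + 300 = 1500 tonnes CO2e
```

The example shows why small masses of potent gases matter: one tonne of nitrous oxide counts as much as 300 tonnes of CO2.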
<urn:uuid:fece7e87-3057-47a1-9eb2-242da933598e>
3.390625
424
Truncated
Science & Tech.
39.527905
95,552,188
So far, it’s been a frigid winter. Nine people across the U.S. have already died from cold weather this past week, according to Time, while cities like Chicago, Des Moines and Omaha have recorded near- or record lows. Des Moines had a high of just -1 degrees, NBC reported, while the town of Erie, Pennsylvania, has seen nearly seven feet of snowfall since Christmas Eve. And even Jacksonville, Florida, was colder than Anchorage, Alaska, on Tuesday, the AP reported. Now cue the bomb cyclone, caused by a weather phenomenon known as “bombogenesis.”
According to the National Oceanic and Atmospheric Administration, “bombogenesis” happens when the air pressure drops more than 24 millibars in 24 hours. Millibars are a way meteorologists measure atmospheric pressure, and such a steep drop — possibly caused by cold air over warm ocean waters — can trigger a bomb cyclone. And this one could be big. “Some computer models are projecting a minimum central air pressure of below 950 millibars at its peak, which would be nearly unheard of for this part of the world outside of a hurricane,” Andrew Freedman wrote for Mashable. “For comparison, Hurricane Sandy had a minimum central pressure of about 946 millibars when it made its left hook into New Jersey in 2012.”
“Hearing the word ‘bombogenesis’ and confused about it? Here’s an explanation and how it relates to the upcoming East Coast storm. Stay tuned for forecast updates and read our key messages for this storm in our short range discussion: https://t.co/VNkruevXS6” — NWS WPC (@NWSWPC) January 3, 2018
Forecasters expect the temperature to dip into the low 40s and high 30s by Thursday as a cold front moves through South Florida. So what could this ominously named weather phenomenon do? For starters, it could keep us really cold. The bomb cyclone could trap the cold weather that is currently hitting the middle of the country when it heads for the East Coast, according to Time, basically putting much of the eastern part of the U.S. in a “deep freeze.” You’ll also see a marked increase in rain, winds and snow. Plus, there might be lightning and a blizzard at the same time thanks to the bomb cyclone; such storms are in season from October to March, The Weather Channel wrote.
“All day Thursday meteorologists are going to be glued to the new GOES-East satellite watching a truly amazing extratropical ‘bomb’ cyclone off the New England coast. It will be massive -- fill up the entire Western Atlantic off the U.S. East Coast. Pressure as low as Sandy & hurricane winds” — Ryan Maue | weather.us (@RyanMaue) January 2, 2018
This particular bomb cyclone is expected to start off the coast of Florida on Wednesday, then make its way up to New England the following day, Newsweek wrote. Along the way, the chilling storm is expected to bring snow and ice to Georgia, the Carolinas, the DC area and other states along its frigid path.
“Some Key Impacts for wintry precipitation across SE GA and NE FL: Downed power lines, widespread electrical outages, snapped trees or branches possible with wintry mix. Stranded motorists, disruption of emergency services possible with small amounts of ice and snow.” — NWS Jacksonville (@NWSJacksonville) January 2, 2018
And here’s the bottom line for those dreading the cold: Temperatures are expected to dip to 20 to 40 degrees lower than normal, The Washington Post wrote, while wind speeds along the coast will be somewhere between 30 and 50 miles per hour. Basically, expect a wintry hurricane.
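The 24-millibars-in-24-hours threshold quoted above is easy to encode. Note that the formal meteorological criterion also scales the threshold with latitude; the simple fixed-threshold version below matches the NOAA description given in the article, and the pressure values are invented for illustration.

```python
def is_bomb_cyclone(p_start_mb, p_end_mb, hours):
    """Fixed-threshold bombogenesis test: >= 24 mb of deepening per 24 h."""
    if hours <= 0:
        return False
    drop_per_24h = (p_start_mb - p_end_mb) * 24.0 / hours
    return drop_per_24h >= 24.0

print(is_bomb_cyclone(996.0, 968.0, 24))  # True: 28 mb in 24 hours
print(is_bomb_cyclone(996.0, 986.0, 24))  # False: only 10 mb in 24 hours
```

By this measure, a storm bottoming out near 950 mb after starting in the 990s, as the models quoted above projected, would qualify easily.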
<urn:uuid:843ad688-6414-4780-b353-b9c5db6e5374>
3.484375
890
News Article
Science & Tech.
58.351277
95,552,190
The researchers are the first to reproduce a specific component of this natural process in a test tube – an essential step to fully understanding how these structures grow. With the new method described, these and other researchers now can delve even deeper into the various interactions that must occur for these structures – called lipopolysaccharides – to form, potentially discovering new antibiotic targets along the way.
Lipopolysaccharides are composed primarily of polysaccharides – strings of sugars that are attached to bacterial cell surfaces. They help bacteria hide from the immune system and also serve as identifiers of a given type of bacteria, making them attractive targets for drugs. But before a drug can be designed to inhibit their growth, scientists must first understand how polysaccharides are developed in the first place. The study is published in the April 25 online edition of the journal Nature Chemical Biology.
The researchers used a harmless strain of Escherichia coli as a model for this work, which would apply to other E. coli strains and similar Gram-negative bacteria, a reference to how their cell walls are structured. The surface of these bacteria houses the lipopolysaccharide, which is a three-part molecular structure embedded into the cell membrane. Two sections of this structure are well understood, but the third, called the O-polysaccharide, has to date been impossible to reproduce. Two significant challenges have hindered research efforts in this area: the five sugars strung together to compose this section of the molecule are difficult to chemically prepare in the lab, and one of the key enzymes that initiates the structure’s growth process doesn’t easily function in a water-based solution in a test tube.
Ohio State synthetic chemists and biochemists put their heads together to solve these two problems, Woodward said. To produce the five-sugar chain, the researchers started with a chemically prepared building block containing a single sugar and introduced enzymes that generated a five-sugar unit from that single carbohydrate. “The first part was done chemically, and in the second part, we used the exact same enzymes that are normally present in a bacterial cell to transform the single sugar into a five-sugar string,” Woodward said. Once these sugars join to make a five-sugar chain, a specific number of these chains are joined together to fully form the O-polysaccharide. A protein is required to connect those chains – the protein that doesn’t respond well to the test-tube environment. Early attempts to produce this protein in the lab resulted in clumping structures that did not function. So Woodward and colleagues produced this protein in the presence of what are known as “chaperone” proteins. “And basically what the chaperones do is help the protein fold into its correct state. We were able to produce the desired enzyme and also were able to verify that it was functional,” Woodward said. This protein is called Wzy. It is a sugar polymerase, or an enzyme that interacts with the five-sugar chain to begin the process of linking several five-sugar units together.
Getting this far into the process was important, but the researchers also completed one additional step to define yet another protein’s role. Wzy connected the five-sugar chains, but it did so with no defined limit to the number of five-sugar units involved, a feature that does not match the natural process.
On an actual bacterial cell wall, the length of the polysaccharide falls within a relatively narrow range of the number of chains connected. So the scientists introduced another protein, called Wzz, to the mixture. This protein is known as a “chain length regulator.” With this protein in the mix, the lengths of the resulting polysaccharides were confined to a much more narrow range. “We were able to replicate the exact polysaccharide biosynthetic pathway in vitro, getting the correct lengths,” Woodward said. “This is important because now you can begin to look at a whole host of other properties in the system.”
The group already started trying to answer one compelling question: whether the two proteins, Wzy and Wzz, have to interact to fully achieve formation of the polysaccharide. “We’ve shown in some preliminary results that they do interact, but we haven’t determined whether that interaction has any functional relevance,” Woodward said. With this knowledge in hand, researchers now have access to information about how all three parts of the lipopolysaccharide, the large biomolecule on Gram-negative bacteria cell surfaces, is formed. One thing they already knew is that the entire process takes place on an inner membrane and is then exported to the outer membrane on the cell surface. Now that scientists can reproduce formation of the lipopolysaccharide, they can more directly characterize the export process – a step in the pathway that serves as another potential antibiotic target, Woodward noted.
This work was supported by the National Institutes of Health, including its Predoctoral Trainee Program, the China Scholarship Council, the National Cancer Institute, the National Science Foundation and the Bill & Melinda Gates Foundation. Co-authors on the study are Wen Yi, Lei Li, Guohui Zhao, Hironobu Eguchi, Perali Ramu Sridhar, Hongjie Guo, Jing Katherine Song, Edwin Motari, Li Cai, Patrick Kelleher, Xianwei Liu, Weiqing Han, Wenpeng Zhang and Mei Li, all former or current Ohio State graduate students or postdoctoral researchers in biochemistry and chemistry; Yan Ding of Shandong University in China; and Peng George Wang, Ohio Eminent Scholar and professor of biochemistry and chemistry at Ohio State.
Contact: Robert Woodward, (614) 292-8704
Robert Woodward | EurekAlert!
3.875
1,899
Content Listing
Science & Tech.
35.342949
95,552,222
Forests around the world are being decimated as the planet grows steadily warmer. Mass die-offs in California, the Southwest and Europe are not only tied to global warming by new studies, they will add to it. Just a few years after mountain pine beetles killed millions of acres of lodgepole pine forests in the Rocky Mountains, the U.S. Forest Service is reporting widespread tree deaths in drought-hammered Southern California. Even Europe's cool, moist forests have been losing trees at a fast rate. Large-scale simultaneous forest loss on different continents could have an impact on forests' ability to absorb atmospheric carbon, scientists say. A recent aerial survey by the U.S. Forest Service tallied 26 million more dead trees in Southern and Central California between last October and April. In those six months, evergreen forests across 1,200 square miles—beset by drought and pine beetles—perished, transforming from living, breathing organisms into sticks of dead tinder and providing fuel for wildfires. In total, the agency has counted 66 million dead trees in the state since 2010. "We're seeing almost an entire species lost, of the predominant species for this area," said Mariposa County Supervisor Kevin Cann, in a May 23 video by the California State Association of Counties. "It's inevitable that you want to deny this reality. You don't want to believe that your entire ponderosa forest can be wiped out. This summer, we're going to hold our breath for a while and try to be really prepared to react as fires happen." Scientists have increasingly linked forest mortality with climate impacts—and in Southern California, it is most directly tied to a steady increase in droughts that weaken trees, making them more susceptible to pine beetles. A 2015 study in the journal Geophysical Research Letters found that human-caused warming "substantially increased the overall likelihood of extreme California droughts." Widespread climate impacts to forests have also been documented in other regions, including the Southwest, and in Europe. Global forest mortality has potentially profound ecological and societal implications, given the importance of forests to everything from water supplies, wildlife habitat and food to the global carbon cycle. Collectively, the world's forests suck up about a quarter of human-caused carbon emissions. "From a forester's perspective it is a catastrophe. Environmental services are under threat for an extended period of time," said Klaus Katzensteiner, a forest ecologist at the University of Vienna, Austria. In many places, he said the global forest changes will affect water flows and water quality, increase wildfire danger and alter the carbon balance of forests for a long time. Drought is believed to be the main culprit in California, weakening millions of trees across the state. One study published in the Proceedings of the National Academy of Sciences shows how California forests lost huge amounts of water during the drought; another study in the International Journal of Wildland Fire shows the link between regional moisture deficits and wildfire activity. Warmer temperatures since the 1980s have also enabled pine beetles to breed in record numbers. The insects lay their eggs beneath the bark of trees during the summer, introducing a fungus that interrupts the flow of moisture and nutrients from the roots to the rest of the tree. Within a year, the trees die. 
Pine beetles and related insects are native to forests and play an important ecological role by killing older trees to make way for new growth. In the past, their populations have been checked by extreme cold in winter. But in recent decades, those cold periods have been less common in many mountain regions. More larvae survive the winter and get a head start on attacking new trees earlier in the spring. Warmer temperatures have also enabled the insect to spread to higher elevations and, in some areas, to breed more than once a year.

In 2002, then a record-warm year for the planet, Ips beetles started to spread across the Southwest and within two years, they killed nearly 90 percent of the piñon pines across millions of acres in Colorado, Utah and Arizona. Researchers have reported little forest regeneration.

According to one study published in the journal Ecosphere in 2012, the piñon pines across New Mexico and Oklahoma were producing fewer seeds, with the biggest drop in production occurring in areas that warmed the most during the March to October growing season. Overall, average temperatures in the study areas had increased by about 2.3 degrees Fahrenheit in the past 40 years.

The semi-arid Southwest, where forests historically have experienced extreme conditions, is a global hotspot for climate impacts. Along with the piñon decline, other forest researchers have projected that 72 percent of the region's needleleaf evergreen forests could die by 2050, with nearly 100 percent mortality by 2100, due to warmer and drier conditions, according to a 2014 study published in the journal Nature Climate Change.

Even in the relatively moist and cool climate of central Europe, global warming and extreme drought have significantly affected forests, as outlined in two studies published this month. The research shows that many forest types are growing more vulnerable, including to changes that are more subtle than mass die-offs.

"Climate change in the Alps has been more pronounced than the global average," said Katzensteiner of the University of Vienna. "The frequency of extreme events will increase...It's always the extremes that affect a system."

In the mountains of southern Germany—one of the wettest regions in central Europe—researchers from the Technical University in Munich measured a 14 percent drop since the 1980s in organic forest topsoil, called humus, at a series of sites first established to monitor radioactivity from the Chernobyl nuclear reactor meltdown. Since there were no changes in land-use at the sites, the researchers concluded that the region's warming temperatures are most likely to blame. With some local variations, temperatures across the study area have gone up by about 3 degrees Fahrenheit since the 1980s, the study said.

The warming is speeding up microbial activity in the forest soils, which breaks down the organic carbon and releases it to the atmosphere, said researcher Jörg Prietzel, who published his peer-reviewed findings in Nature Geoscience.

Prietzel said that, because the sites were carefully marked to ensure precise measurements of the Chernobyl fallout, he was able to return to the exact spots to take new samples, showing the value of long-term environmental monitoring. "Only long-term research shows statistically relevant changes," he said. "The critical issue is, the basic function of the soil's organic matter is storage for water and nutrients."

With less humus, there's less water and fewer nutrients available for the trees, Prietzel said.
"The forest in general is under more environmental stress, with warmer temperatures that are not familiar to many tree species," he said. "The loss of organic soil thus is part of another climate feedback loop." The second study, published this month in Global Change Biology, looked at beech trees, which form Europe's most prolific forests from England all the way to Southern France, finding that an extreme drought in the late 1970s is affecting the trees to this day. With climate models projecting a more frequent recurrence of severe droughts, the consequences for beech forests are likely to be significant, according to University of Stirling forest ecologist Alistair Jump. "There's a big risk of widespread mortality when the next dry spell hits," Jump said. "It really should make us think about what's going to happen in terms of ranges changing for all types of forests," he added. "Forests won't just necessarily sweep northward. We will probably see landscape-level reductions in density in many forests. Many of these issues affecting beech will affect other trees. We're seeing this on a global scale, with die-backs all over the world." Some scientists have raised concern that the issue is being underestimated. "We surmise that mortality vulnerability is being discounted in part due to difficulties in predicting threshold responses to extreme climate events," according to a comprehensive forest study published last year in the journal Ecosphere by leading forest and climate researchers. The research documented tree deaths due to hotter and more frequent droughts, as well as warming feedbacks from forest die-offs. In the decades of warming ahead, "there are key climate change drivers we know with high confidence will drive forest deaths around the world," the researchers said.
<urn:uuid:430a2a3b-284a-4ec1-ad72-5da86e6252cf>
3.421875
1,729
Truncated
Science & Tech.
38.969165
95,552,230
The climate is changing. Warming temperatures and altered weather patterns will likely have dramatic effects on landscapes at high elevation and in northern latitudes. Thus, a significant change over upcoming decades is certain for subarctic parks such as Denali. Tools like repeat photography show that glaciers are receding, lakes are shrinking, and vegetation boundaries are shifting at very fast rates.

How are animals responding to these shifts? For the vast majority of Denali’s animals (the invertebrates), there is an incomplete picture. Researchers don’t have a good idea about which species live here or how they are distributed across habitats. To be able to track invertebrate responses to effects from climate change, it is necessary to start with a basic understanding of who lives here, now.

Patterns of arthropod diversity and activity along elevational gradients in Denali

Arthropods make up the most diverse and abundant component of Denali’s fauna. They perform important roles in almost all park ecosystems (e.g., pollination, nutrient cycling). However, far less is known about their diversity and habitat associations than for vertebrate species such as grizzly bears. Many arthropods are known to be good indicators of habitat health and diversity. Elsewhere in the world, they are used to monitor changes associated with climate change, through shifts in range (i.e., where they live) and phenology (i.e., the timing of their life cycle). Denali makes an ideal natural laboratory to investigate arthropods in different habitats, especially those habitats that are expanding (e.g., forest and shrub habitats) and shrinking (e.g., open tundra).

This project focuses on two important arthropod groups in Denali: pollinators (bees, flower flies, butterflies) and ground-dwelling beetles and spiders (primarily predators). Researchers want to know how arthropod communities change as elevation increases, because the highest elevation habitats are likely most affected by climate change. To do this, they have set up plots in four different habitat types: low elevation forest; mid-elevation shrub; high elevation tundra; and rocky habitats above the tundra. Researchers measure the diversity of arthropod species in each habitat plot every two weeks throughout the growing season using a variety of collecting techniques. This allows them to track where each species lives, and when it is active.

The results of the project will show researchers what patterns of arthropod distribution and activity (phenology) look like now, and will allow them to monitor changes in the future. Because these tiny animals are major players in ecosystem processes, it is expected that the project will provide valuable insights into how climate change may affect Denali ecosystems. Another goal of the project is to educate park staff and visitors about arthropod diversity through various activities (e.g., bioblitzes, field seminars, presentations) and media. The project is a collaborative effort between Denali NPP and the entomology program at University of Alaska Fairbanks (UAF), including one graduate student.
<urn:uuid:ea7121f5-9d37-4a5b-aafa-e6d4a0497e73>
4.15625
640
Knowledge Article
Science & Tech.
28.396004
95,552,231
A View from Emerging Technology from the arXiv

Best of 2012: Chinese Physicists Smash Distance Record For Teleportation

In May, a Chinese team teleported photons through 100 kilometres of free space, opening the way for satellite-based quantum communications

Teleportation is the extraordinary ability to transfer objects from one location to another without travelling through the intervening space. The idea is not that the physical object is teleported but the information that describes it. This can then be applied to a similar object in a new location which effectively takes on the new identity. And it is by no means science fiction. Physicists have been teleporting photons since 1997 and the technique is now standard in optics laboratories all over the world.

The phenomenon that makes this possible is known as quantum entanglement, the deep and mysterious link that occurs when two quantum objects share the same existence and yet are separated in space.

Teleportation turns out to be extremely useful. Because teleported information does not travel through the intervening space, it cannot be secretly accessed by an eavesdropper. For that reason, teleportation is the enabling technology behind quantum cryptography, a way of sending information with close-to-perfect secrecy.

Unfortunately, entangled photons are fragile objects. They cannot travel further than a kilometre or so down optical fibres because the photons end up interacting with the glass, breaking the entanglement. That severely limits quantum cryptography’s usefulness.

However, physicists have had more success teleporting photons through the atmosphere. In 2010, a Chinese team announced that it had teleported single photons over a distance of 16 kilometres. Handy but not exactly Earth-shattering.

Now the same team says it has smashed this record. Juan Yin at the University of Science and Technology of China in Shanghai, and a bunch of mates say they have teleported entangled photons over a distance of 97 kilometres across a lake in China.
<urn:uuid:2b600c8c-3907-41cd-bb98-b4be87e25079>
3.09375
409
Truncated
Science & Tech.
25.53919
95,552,232
1. Let X have a Poisson distribution with a mean of 4. Find P(X=2).
2. Let X have a Poisson distribution with a variance of 4. Find P(X=2).
3. Customers arrive at a travel agency at a mean rate of 11 per hour. Assuming that the number of arrivals per hour has a Poisson distribution, give the probability that more than 10 customers arrive in a given hour.
4. If X has a Poisson distribution such that 3P(X=1) = P(X=2), find P(X=4).
5. Flaws in a certain type of drapery material appear on the average of one in 150 square feet. If we assume a Poisson distribution, find the probability of at most one flaw appearing in 225 square feet.

Please see the attachment for the solution. The solution contains various statistical problems using the Poisson distribution.
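Since a Poisson(λ) variable has P(X=k) = e^(−λ) λ^k / k! and its variance equals its mean (so problems 1 and 2 describe the same distribution), the answers can be checked numerically. A minimal plain-Python sketch (function names are my own; it reads "more than 10" as P(X > 10), and problem 4 forces λ = 6 because 3λe^(−λ) = λ²e^(−λ)/2):

    from math import exp, factorial

    def poisson_pmf(k, lam):
        """P(X = k) for X ~ Poisson(lam)."""
        return exp(-lam) * lam**k / factorial(k)

    def poisson_cdf(k, lam):
        """P(X <= k), summing the pmf term by term."""
        return sum(poisson_pmf(i, lam) for i in range(k + 1))

    # Problems 1 and 2: mean 4 and variance 4 are both Poisson(4).
    print(poisson_pmf(2, 4))          # P(X=2) ~ 0.1465

    # Problem 3: lam = 11 arrivals/hour, want P(X > 10).
    print(1 - poisson_cdf(10, 11))    # ~ 0.5401

    # Problem 4: 3*P(X=1) = P(X=2) implies lam = 6.
    print(poisson_pmf(4, 6))          # P(X=4) ~ 0.1339

    # Problem 5: 1 flaw per 150 sq ft gives lam = 225/150 = 1.5.
    print(poisson_cdf(1, 1.5))        # P(X<=1) ~ 0.5578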
<urn:uuid:39edd337-7af2-4939-885f-7c718f51ef86>
2.515625
209
Tutorial
Science & Tech.
72.615455
95,552,238
Nonlinear Waves in Circular Basins

Waves generated near resonance in bays or harbours, by storms or earthquakes, may rise to large wave heights. This investigation examines waves progressing around a circular basin of finite uniform depth, with particular attention being given to those waves near resonance. Multiple families of free steady waves are associated with each of the depths at which resonance occurs, with different families having different orderings of the wave components composing them. Linear stability calculations indicate that those free waves dominated by resonating wave components are linearly unstable, but calculations of the nonlinear time evolution over many wave periods do not confirm the instability in the weakly unstable examples. When the model is made more realistic by including periodic forcing and wavenumber dependent damping, some of these weakly unstable examples are found to be stable, while others are unstable but evolve slowly in time into periodically modulated waves. Although the occurrence of multiple families of free waves is interesting for theoretical reasons, practical calculations of waves near resonance should include both forcing and damping. For any given depth ratio, forcing amplitude, and forcing period, there may be no, one, or multiple steady progressive waves.

Keywords: Internal Resonance, Wave Component, Depth Ratio, Force Amplitude, Free Wave
<urn:uuid:aaf5e2fa-7a9c-4119-ad6e-e42100cbf372>
2.6875
261
Truncated
Science & Tech.
4.803359
95,552,256
The TRAPPIST-1 system looks, at first glance, like a decent place to search for extraterrestrial life. It has seven rocky planets roughly the same size as Earth, and some of them sit in the star's "habitable zone," where liquid water could exist on the surface. Now, researchers from the School of Earth and Space Exploration at Arizona State University say there may actually be too much water for the TRAPPIST-1 planets to harbor life.

Astronomers first reported the discovery of three exoplanets around TRAPPIST-1 in 2016. A year later, we learned of four more exoplanets in the system. TRAPPIST-1 is just 39 light years away, making it a great laboratory for studying the behavior of, and conditions on, exoplanets. While all of the planets are similar in size to Earth (no gas giants have been detected), only three of them (TRAPPIST-1e, f, and g) orbit at a distance that would allow them to have liquid water. Liquid water is a requirement for life as we know it, but it turns out you may need some land, too.

All the TRAPPIST-1 planets were found with the transit method: a telescope watches a star for small dips in brightness, which can give away an exoplanet passing in front of the star from our perspective. That can tell us how big an exoplanet is. By watching the way transit signals vary over time, astronomers can also estimate an exoplanet's mass. Put those together, and you have an approximate density.

The Arizona State University team used this data to build computer models of six of the seven planets in the TRAPPIST-1 system. They didn't analyze the outermost planet, TRAPPIST-1h, because not enough is known about its properties. The innermost planets in the system (b and c) are believed to be around 10 percent water by mass. The more distant TRAPPIST-1f and g are an astonishing 50 percent water. Exoplanets TRAPPIST-1d and e lie in the middle of the system and have water mass fractions in between.

You've probably heard that Earth is 70 percent water, but that figure refers to surface area. Water makes up only 0.2 percent of Earth's mass. The TRAPPIST-1 planets, then, could be staggeringly wet: the outer planets would have more than 1,000 times the volume of water we have on Earth. The researchers point out that this could hinder the development of life, because certain chemical processes happen only on dry land. Likewise, the weight of all that water pushing on the mantle could prevent most volcanic activity. Without the carbon dioxide from volcanic activity, even planets in the habitable zone could succumb to a runaway snowball effect.

Since TRAPPIST-1 is a cool red dwarf, all of these planets orbit close in, at less than the distance of Mercury's orbit around the Sun. That means they're exposed to more radiation and stellar flares. They're likely also tidally locked, so the same face always points toward the star. Having a considerable amount of water could help dissipate the heat and absorb radiation, so perhaps there's still some hope.
<urn:uuid:e8d853ce-0ab3-4404-8cca-afd03cb87c95>
3.828125
723
Truncated
Science & Tech.
49.664873
95,552,261
We may be one large step closer to a future driven by fusion power — the elusive, limitless, and zero-carbon energy source that’s even a step-up from renewables. A collaboration between MIT and a new private company, Commonwealth Fusion Systems (CFS), aims to bring the world’s first fusion power plant online in the next 15 years, using a novel approach.

Fusion powers the sun and other stars. It involves lighter atoms like hydrogen smashing together to form heavier elements, like helium, and releasing massive amounts of energy while doing so. This energy release happens, however, at very, very extreme temperatures — in the range of hundreds of millions of degrees Celsius — which would melt any material it came in contact with. So, in order to experiment with fusion in the laboratory, researchers use magnetic fields to hold that smashed-together soup of subatomic particles, called plasma, suspended and away from the walls of the experimental chamber.

The trickiness of using fusion as a form of energy is that, to date, every experiment has yielded net negative energy — meaning more energy goes into heating that subatomic soup than comes out for potential use. Now, this collaboration is launching an experiment known as SPARC, which will use new, high-temperature superconductors to build smaller, more powerful high-field magnets to power an experimental fusion reactor. SPARC’s goal? The first-ever net positive energy gain from fusion.

This fusion experiment is designed to produce 100 megawatts of heat, thanks to those new magnets. It won’t turn that heat into electricity, but in 10-second pulses, it could produce twice the power needed to heat the plasma, and as much power as is used by a small city.

“This is an important historical moment: Advances in superconducting magnets have put fusion energy potentially within reach, offering the prospect of a safe, carbon-free energy future,” MIT President L. Rafael Reif told MIT News.

While this team’s approach to fusion power seems promising, a number of previous collaborations were unable to get fusion energy off the ground. Researchers at the University of New South Wales tried, and failed, to create fusion through hydrogen-boron reactions. The International Thermonuclear Experimental Reactor (ITER) in France is also making progress, but SPARC is set to dethrone the project in terms of size. SPARC will be only 1/65th of ITER’s volume, because these new high-field magnets make it possible to build smaller fusion plants needed to achieve a given level of power.

If SPARC is successful, and the fusion project design proliferates worldwide, it’s possible fusion energy could start to help meet global energy demands. Researching carbon-free fusion energy is critical during an era in which greenhouse gases continue to drive climate change.

“The aspiration is to have a working power plant in time to combat climate change,” Bob Mumgaard, CEO of Commonwealth Fusion Systems, told The Guardian. “We think we have the science, speed and scale to put carbon-free fusion power on the grid in 15 years.”
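As a rough check on those figures: if a 100 MW pulse delivers "twice the power needed to heat the plasma," the implied heating power is about 50 MW, i.e. a fusion gain Q of roughly 2 (Q greater than 1 is what "net positive" means here). A short Python sketch, where the 50 MW figure is inferred rather than stated in the article:

    # SPARC figures as reported: 100 MW of fusion heat in 10-second pulses.
    p_fusion_mw = 100.0
    p_heating_mw = p_fusion_mw / 2.0   # inferred from "twice the power needed to heat the plasma"

    q_gain = p_fusion_mw / p_heating_mw
    pulse_energy_gj = p_fusion_mw * 1e6 * 10.0 / 1e9   # watts * seconds -> gigajoules

    print(f"Q = {q_gain:.1f}")                          # Q = 2.0 (> 1 means net energy gain)
    print(f"energy per pulse = {pulse_energy_gj:.1f} GJ")  # 1.0 GJ per 10 s pulse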
<urn:uuid:e8678d08-576d-488d-b9d5-25a5b3f7b5a3>
3.71875
681
Truncated
Science & Tech.
39.741151
95,552,266
a. Show that the skin depth in a poor conductor (σ ≪ ωε) is (2/σ)√(ε/μ), which is independent of frequency. Find the skin depth (in meters) for (pure) water.

b. Show that the skin depth in a good conductor (σ ≫ ωε) is λ/2π (where λ is the wavelength in the conductor). Find the skin depth (in nanometers) for a typical metal (σ = 10^7 (Ω·m)^-1) in the visible range (ω = 10^15 /s), assuming ε = ε₀ and μ = μ₀. Why are metals opaque?

c. Show that in a good conductor the magnetic field lags the electric field by 45 degrees, and find the ratio of their amplitudes. For a numerical example, use the "typical metal" in part (b).

See attachment for better symbol representation.

This solution contains step-by-step calculations to determine the skin depth in meters for pure water and the skin depth for a good conductor, and it also shows the ratio of the amplitudes of the lagging magnetic field and the electric field in a good conductor. All workings and formulas are shown with brief explanations.
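Plugging numbers into the two limiting formulas gives the answers directly. In the Python sketch below, the material constants for pure water (resistivity ≈ 2.5×10^5 Ω·m, relative permittivity ≈ 80.1) are assumed standard values, since the problem itself doesn't list them:

    from math import sqrt, pi

    eps0 = 8.854e-12       # F/m
    mu0 = 4 * pi * 1e-7    # H/m

    # (a) Poor conductor: d = (2/sigma) * sqrt(eps/mu), frequency-independent.
    sigma_water = 4e-6             # S/m (assumed; resistivity ~2.5e5 ohm*m)
    eps_water = 80.1 * eps0
    d_water = (2 / sigma_water) * sqrt(eps_water / mu0)
    print(f"pure water: {d_water:.2e} m")     # ~1.2e4 m -- kilometers, so water is nearly transparent

    # (b) Good conductor: d = sqrt(2 / (mu*sigma*omega)) = lambda / (2*pi).
    sigma_metal = 1e7              # (ohm*m)^-1
    omega = 1e15                   # rad/s, visible range
    d_metal = sqrt(2 / (mu0 * sigma_metal * omega))
    print(f"typical metal: {d_metal*1e9:.0f} nm")   # ~13 nm -- light can't penetrate, so metals are opaque

    # (c) In a good conductor B lags E by 45 degrees; amplitude ratio B0/E0 = sqrt(mu*sigma/omega).
    ratio = sqrt(mu0 * sigma_metal / omega)
    print(f"B0/E0 = {ratio:.1e} s/m")         # ~1.1e-7 s/m (vs 1/c ~ 3.3e-9 s/m in vacuum)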
<urn:uuid:a3c6f733-2e41-42ed-a9b6-84ad11f1cbe8>
3.625
281
Tutorial
Science & Tech.
71.168491
95,552,272
posted by ree

Problem Set #1
1. A 100 N block lies on a frictional surface. A force of 50 N was applied horizontally, and the block moved 5 m in the positive direction. Find the work done by the applied force and the work done by the weight of the block.
2. Using Example 1, if the applied force is directed along the positive x direction, find the work done by the applied force.

Problem Set #2
1. A block of mass 10 kg lying on a horizontal surface is pulled to the right by a force of 10 N at an angle of 45° with the horizontal. The coefficient of sliding friction between block and surface is 0.1. Find the total work done on the block when it has moved a distance of 1 m from its initial position.
2. Find the kinetic energy of a 1000 kg car moving at 25 km/hr and when it is moving at 100 km/hr.
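For Problem Set #2 the bookkeeping is W = F·d·cosθ for each force and KE = ½mv². A short Python sketch of those two problems (g = 9.8 m/s² is my assumption; the problems don't specify it):

    from math import cos, sin, radians

    g = 9.8  # m/s^2, assumed

    # Set #2, Problem 1: 10 kg block pulled by 10 N at 45 degrees, mu_k = 0.1, d = 1 m.
    m, F, theta, mu, d = 10.0, 10.0, radians(45), 0.1, 1.0
    W_applied = F * cos(theta) * d          # ~ +7.07 J
    N = m * g - F * sin(theta)              # normal force, reduced by the upward pull component
    W_friction = -mu * N * d                # ~ -9.09 J
    # Gravity and the normal force act perpendicular to the motion, so they do no work.
    W_total = W_applied + W_friction
    print(f"total work = {W_total:.2f} J")  # ~ -2.02 J (friction here outweighs the pull)

    # Set #2, Problem 2: KE = (1/2) m v^2 for a 1000 kg car.
    for v_kmh in (25, 100):
        v = v_kmh / 3.6                     # km/h -> m/s
        print(f"KE at {v_kmh} km/h = {0.5 * 1000 * v**2:.0f} J")
        # ~24,113 J and ~385,802 J; KE scales with v^2, so 4x the speed gives 16x the energy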
<urn:uuid:cd7ba3ec-f3b9-4542-b35f-b3fd72498f58>
3.8125
191
Content Listing
Science & Tech.
85.699953
95,552,277
In a study published in the February 18, 2011, online issue of Science, they report that a population of Hudson River fish apparently evolved rapidly in response to the toxic chemicals, which were first introduced in 1929, and were banned fifty years later. PCBs, or polychlorinated biphenyls, were used in hundreds of industrial and commercial applications, especially as electrical insulators. “We’ve found evolutionary change going on very quickly due to toxic exposure, and just one gene is responsible for it,” says Isaac Wirgin, a population geneticist, associate professor of environmental medicine at NYU School of Medicine, and the study’s lead investigator. “There are not many examples of this in the scientific literature." General Electric released approximately 1.3 million pounds of PCBs into the Hudson River from 1947 to 1976. The Atlantic tomcod, Microgadus tomcod, is a common bottom-feeding fish in the Hudson that is not usually eaten by humans. The fish, which typically reaches a length of 10 inches, had long been known to survive exposure to PCBs, and levels of the chemical in its liver are among the highest reported in nature. However, scientists did not understand the biological mechanism that allowed the tomcod to survive chemical exposures that kill most other fishes. Dr. Wirgin and scientists at NOAA Fisheries Service in New Jersey and the Woods Hole Oceanographic Institution in Massachusetts spent four years capturing tomcod from contaminated and relatively clean areas of the Hudson River during the winter months, when tomcod spawn in the river. The fish were screened for genetic variants in a gene encoding a protein known to regulate the toxic effects of PCBs, which is called the aryl hydrocarbon receptor2, or AHR2. This gene also is involved in mediating the effects of other halogenated hydrocarbon compounds, a group that includes PCBs. Slight alterations—the deletion of only six base pairs in DNA of the AHR2 gene—appear to protect tomcod from PCBs, according to the study. Normally, when unaltered AHR2 binds to PCBs, it triggers a cascade of reactions that transmit the toxic effects of the compound. However, the study found that PCBs bind poorly to the variant AHRs, which apparently blunts the chemicals’ effects. Tomcod from cleaner waters occasionally carried mutant AHR2, suggesting that these variants existed in minor proportions prior to PCB pollution, says Dr. Wirgin. After the chemical was released, tomcod carrying the mutation had an advantage over others in the population because PCBs otherwise lead to lethal heart defects in young fish. The study’s findings suggest that this advantage drove genetic changes in these fish over some fifty years. “We think of evolution as something that happens over thousands of generations,” says Dr. Wirgin. “But here it happened remarkably quickly.” The study co-authors are: Nirmal K. Roy and Matthew Loftus, the NYU School of Medicine; R. Christopher Chambers, the NOAA Fisheries Service, Highland, New Jersey; and Diana G. Franks and Mark E. 
Hahn, Woods Hole Oceanographic Institution.

Lorinda Klein | Newswise Science News
<urn:uuid:8572ee09-aa16-4f02-be22-5adbd68a66ea>
3.6875
1,250
Content Listing
Science & Tech.
38.9307
95,552,288
Disclinations, Amorphization and Cracks at Grain Boundaries in Nanocrystalline Materials

The processes developing in the late stages of plastic deformation in conventional (i.e., non-nanocrystalline in their initial state) crystalline materials are characterized by defect structures which have passed over some intermediate steps in their evolution [56, 263–266, 381, 389, 430–440]. Their principal difference from the defect structures inherent in the initial stages of plastic deformation is the mainly disclinational (rotational) character of the defect configurations (misorientation bands, fragment boundaries, high-angle grain boundaries of deformation origin, etc.) observed in experiments, which may be considered as carriers of rotational plasticity. At this late stage of plastic deformation in metallic materials, there exist two general paths for further evolution of the defect structures. The first path is the generation of microcracks in the regions of highest stress concentration, leading to fracture of the material. The second alternative is a transition to a nanocrystalline and/or amorphous structure of the metal [441, 442]. Obviously, the second path for structural evolution allows the deformed sample to achieve larger steps of plastic deformation than the first and this explains why it is very promising for metal-forming technologies. Nowadays, this approach is widely used in various techniques for fabricating NCMs and amorphous alloys (e.g., ball milling and mechanical alloying of powders, equal-channel angular pressing [28, 443], etc.). In the 1980s, probably the first amorphous-nanocrystalline composites [36, 37] were obtained under the combined action of high pressure and intensive shear.

Keywords: Triple Junction, Boundary Segment, Crack Generation, Equilibrium Length, Microcrack Generation
<urn:uuid:e51d56f0-b9f3-416a-8960-dbc71682159e>
2.90625
384
Truncated
Science & Tech.
22.52778
95,552,309
In flies, 22-23-nucleotide (nt) microRNA duplexes typically contain mismatches and begin with uridine, so they bind Argonaute1 (Ago1), whereas 21-nt siRNA duplexes are perfectly paired and begin with cytidine, promoting their loading into Ago2. A subset of Drosophila endogenous siRNAs-the hairpin-derived hp-esiRNAs-are born as mismatched duplexes that often begin with uridine. These would be predicted to load into Ago1, yet accumulate at steady-state bound to Ago2. In vitro, such hp-esiRNA duplexes assemble into Ago1. In vivo, they encounter complementary target mRNAs that trigger their tailing and trimming, causing Ago1-loaded hp-esiRNAs to be degraded. In contrast, Ago2-associated hp-esiRNAs are 2'-O-methyl modified at their 3' ends, protecting them from tailing and trimming. Consequently, the steady-state distribution of esiRNAs reflects not only their initial sorting between Ago1 and Ago2 according to their duplex structure, length, and first nucleotide, but also the targeted destruction of the single-stranded small RNAs after their loading into an Argonaute protein.
<urn:uuid:c276bf56-2a8e-4e03-9178-f854d8b51985>
2.625
289
Academic Writing
Science & Tech.
23.157027
95,552,311
- Research news - Open Access
Making enzymes from proteins
© BioMed Central Ltd 2004
Published: 29 June 2004

Using computational design, researchers have transformed a protein with no catalytic abilities into a highly active enzyme. Scientists said the experiment, reported in the June 25 Science, represents a valuable step in the quest to design enzymes from scratch (Science 2004, 304:1967-1971).

"In principle, the design tools are general and may be used to design many different enzymes at will. If this turns out to be true, then we can really start to design catalysts at will," Homme Hellinga at Duke University Medical Center in Durham, NC, senior author on the paper, told us.

Hellinga and his team began with ribose-binding protein (RBP), a molecule they had, in prior computational biology experiments, made into a high-affinity receptor for nonnatural ligands such as serotonin and trinitrotoluene. Their latest research transformed RBP into an enzyme highly active as a triose phosphate isomerase (TIM). TIM is active in glycolysis, catalyzing the interconversion between the ketose dihydroxyacetone phosphate (DHAP) and the aldose glyceraldehyde-3-phosphate (GAP).

"This is really the best demonstration to date that these algorithms can be useful for real practical problems and also for providing fundamental insight into how enzymes do what they do," said Bill DeGrado at the University of Pennsylvania School of Medicine in Philadelphia, who was not involved in the study.

TIM's catalytic abilities have their origins in the precise orientation of three critical amino acid residues - glutamate, histidine, and lysine - in its active site and in controlled movements of the protein chain during catalysis. To transform RBP into TIM, the researchers used algorithms to predict mutations that altered RBP's layer of residues so it could bind GAP and DHAP. The authors then introduced catalytically active residues into this receptor design.

First, they defined the most favorable geometrical orientations of key interactions contributing to catalysis: specifically, those of the three catalytically essential residues glutamate, histidine, and lysine with respect to the enediol intermediate. Next, the researchers used a combinatorial search algorithm to find positions for these residues that satisfy these geometrical constraints. Finally, they used their receptor design algorithm to optimize the potential active site. Fourteen RBP variants were designed, produced, and assayed for TIM activity.

Just like TIM, the RBP variants had a mechanism to close the active site after the substrate was bound, except they used a hinge-bending mechanism. The most active variant was much less thermostable than native RBP and was stabilized with computational design by adding mutations to the protein matrix, correcting interactions with the binding surfaces it surrounded.

The resulting RBP variant NovoTim1.2, which had additional surface amino acid substitutions remote from the active site, catalyzed the TIM reaction 100,000- to 1,000,000-fold more efficiently than the uncatalyzed reaction. It also proved biologically active, supporting the growth of Escherichia coli under gluconeogenic conditions just as regular TIM would.

"In the long run, this work could help to establish natural as well as nonnatural reactions on natural as well as artificial protein scaffolds.
This could help to make chemical transformations more environmentally friendly or to produce tailored pharmaceuticals," Reinhard Sterner at the University of Regensburg in Germany, who was not involved in the study, told us.

To further improve the catalytic activity, the authors turned to directed evolution. NovoTim1.2 was further mutagenized randomly by an error-prone polymerase chain reaction. Because TIM is absolutely necessary for gluconeogenic growth on glycerol, according to Hellinga, the researchers plated about 100,000 cells on glycerol to select for improved mutants, and four colonies survived.

"It remains to be demonstrated, but I think their approach has the potential for tremendous generality. There are many reactions one would like to catalyze that have no enzyme in nature, or the enzymes that are in nature are highly fragile. So far, the approaches that have been taken are random screening or directed evolution. What they're showing here is computation, when combined with these experimental methods, now may allow one to have new starting points for evolution of highly effective and highly novel catalysts," DeGrado said.

Although the variant's efficiency parameters were about three orders of magnitude below those of wild-type TIM, Sterner and Franz Schmid of the University of Bayreuth in Germany, in a perspective accompanying Hellinga's team's paper, wrote that "this reduced efficiency does not diminish the achievement, considering that native TIM is a kinetically perfect enzyme with a turnover that is limited only by the diffusion-determined rate at which substrate and enzyme encounter each other."

Researchers first tried to design enzymes de novo in the 1980s and came up with the first catalytic antibodies, known as abzymes. Recently, scientists at Caltech used computational design to transform the catalytically inert protein thioredoxin into an esterase. However, both abzymes and the designed thioredoxin are much less active than the RBP variants transformed into TIM, Sterner said.

- Science, [http://www.sciencemag.org]
- Homme W. Hellinga, [http://www.biochem.duke.edu/Hellinga/hellinga.html]
- Triose Phosphate Isomerase, [http://www.bio.cmu.edu/Courses/03231/ProtStruc/1ypi.htm]
- William F. DeGrado, [http://www.uphs.upenn.edu/biocbiop/faculty/pages/degrado.html]
- Reinhard Sterner, [http://www.uni-koeln.de/math-nat-fak/biochemie/sterner/Home.htm]
- Catalytic antibodies (Google Scholar)
- Selective chemical catalysis by an antibody (Google Scholar)
- Abzymes: Catalytic Antibodies, [http://www.whfreeman.com/immunology/CH05/catab.htm]
<urn:uuid:d695f870-24ad-4ca7-a626-5329d11f3a2a>
3.015625
1,337
Academic Writing
Science & Tech.
28.191215
95,552,332
Just Earth News | @JustEarthNews | 22 Dec 2017

New York, Dec 22(Just Earth News): The Security Council on Thursday said that United Nations peacekeeping missions will continue to consider ways to reduce the environmental impact of their operations, in line with relevant UN resolutions and mindful of the goals set out in international accords on the environment, including the Paris Agreement on climate change.

Through an agreed press statement, the 15-member Council reaffirmed the basic principles of peacekeeping, while stressing that it remains cognizant of the possible environmental impact of the peacekeeping operations it mandates. The Council underscored the importance that peacekeeping operations endeavor to minimize their impact on the sustainability of the ecosystems where they are deployed, based on sound consideration of the risks, benefits and costs.

Mindful of the goals set out by the international agreements on the environment, including the Paris Agreement, the members of the Security Council expressed willingness that UN peacekeeping missions, in full conformity with the established mandates, continue consideration for the reduction of their environmental impact.

The members of the Council underlined the importance to comprehensively address the environmental impact of peacekeeping operations, in close coordination with the relevant parties involved, including troop and police contributing countries, also through meetings of the Security Council’s Working Group on Peacekeeping Operations and of the relevant bodies of the General Assembly.

In addition, the Council recognized that consideration for environmental management includes taking into account the impact of peacekeeping operations on the historical and cultural heritage in the areas of deployment and how segments of the population may be differently affected by environmental degradation. The Council encouraged UN Member States to incorporate, as appropriate, environmental guidelines into their national training programmes for military and police personnel in preparation for deployment to UN peacekeeping operations.
<urn:uuid:e3aa0c1b-3844-4624-a856-768485918d15>
2.8125
427
News (Org.)
Science & Tech.
-17.575731
95,552,336
03 September 2006

Researchers at the Advanced Technology Institute, University of Surrey have reversed the growth of carbon nanotubes from catalysts, using electron beam irradiation in an electron microscope. High resolution imaging of this reverse process led to the conclusion that carbon nanotube growth is essentially a surface-driven process.

Carbon nanotubes – tubes formed from a repeating arrangement of carbon atoms with diameter of the order of a billionth of a metre – have remarkable mechanical, electronic and optical properties. Their potential applications range from ultra-strong ropes to ultra-small transistors, as well as field-emission displays, biosensors and optical switches. Unfortunately it is not yet possible to produce carbon nanotubes on a large scale with controlled properties (such as diameter and chirality – the degree of spiral in the arrangement of the carbon atoms). One important method for producing tubes is to use small particles of a metal such as nickel, which at high temperatures catalyse the decomposition of a carbon-containing gas, forming carbon nanotubes which ‘grow’ on each metal particle. This process has not yet been fully understood, but recent work at the University of Surrey sheds new light on the interaction between the catalysts and the carbon atoms involved in the growth.

"There is still a hot debate about whether carbon nanotubes grow from catalysts as a result of carbon diffusing through or on the surface of the catalyst", said Dr Vlad Stolojan, who led the research team. "This is mainly because the result of the growth process can only be observed at room temperature, after the process is completed. Through analysing the physics behind the controlled growth reversal that we observed, we concluded that the steady-state part of the growth process is surface-driven and demonstrated that the carbon nearest to the catalyst’s surface is highly mobile".

Stolojan and his co-workers studied the reversal process with high spatial resolution, in a transmission electron microscope, and have shown that the catalyst remains attached to the nanotube throughout the irradiation sequence, whilst an equivalent of 1 carbon atom is consumed per every nickel atom in the catalyst. By considering the effects of heating and irradiation, they have discovered that the carbon atoms at the catalyst surface are very easily removed (also confirmed by theoretical simulations), followed by a rapid rearrangement of the nanotube’s atoms around the catalyst. They have also discovered that changes in the nanotube’s growth direction are linked to a sudden rotation of the catalyst.

The observed controlled growth reversal under the high-energy electron irradiation will allow for controlling the height of individual nanotubes within patterned arrays, thus offering three-dimensional control of nanotube arrays for field-emission applications.

"The ability to observe the behaviour of the catalyst during the growth-reversal of the nanotube is exciting, as it allows the reverse-engineering of the steady-state growth process. Ultimately, this can help establish the relationship between the catalyst’s crystalline structure and the chirality of the resulting nanotube; the control of the chirality being the true ‘holy grail’ of carbon nanotube growers." said Prof Ravi Silva, the Director of the Advanced Technology Institute, University of Surrey in the UK.
<urn:uuid:cdb594c8-b28d-4c2b-9ad0-2c7207235c89>
3.734375
813
News (Org.)
Science & Tech.
17.854485
95,552,343
Re: Perl Idioms Explained - && and || "Short Circuit" operators
by demerphq (Chancellor) on Oct 22, 2003 at 22:10 UTC

You split "C-style" and "short circuit" into two sections. The whole point of referring to them as "C-style" is because they are "short circuit". I believe the reason was that C is the most commonly known old school language that defines its logical operators to "short circuit". Turbo Pascal's logical operators were short circuit, but I don't know if this is part of the Pascal language definition, or just one of many Borland extensions. OTOH, Basic generally does not have short circuit operators (and thus presumably Fortran does not have them either). However I'd guess that most modern languages have operators which short circuit, considering the convenience they offer. Without them you have to write the same logic as a nested if, which is a source of perpetual frustration to me when I have to hack on VB code. (A chore that luckily I do very rarely these days.)

Anyway, good meditation.

First they ignore you, then they laugh at you, then they fight you, then you win.
<urn:uuid:67ce21af-ea89-4cbd-81db-56554776bb26>
2.984375
267
Comment Section
Software Dev.
58.177075
95,552,360
Amateur astronomy is a hobby whose participants enjoy watching the sky, and the abundance of objects found in it with the unaided eye, binoculars, or telescopes. Even though scientific research is not their main goal, many amateur astronomers make a contribution to astronomy by monitoring variable stars, tracking asteroids and discovering transient objects, such as comets and novae. The typical amateur astronomer is one who does not depend on the field of astronomy as a primary source of income or support, and does not have a professional degree or advanced academic training in the subject. Many amateurs are beginners or hobbyists, while others have a high degree of experience in astronomy and often assist and work alongside professional astronomers. Check Astrobiology and Outreach for more details.
<urn:uuid:dca6f05f-2113-4dcf-8433-b3d358a39692>
3.3125
151
Knowledge Article
Science & Tech.
9.019643
95,552,373
Using exceptions in Perl 6

Exceptions in Perl 6 are objects that hold information about errors. An error can be, for example, the unexpected receiving of data or a network connection no longer available, or a missing file. The information that an exception object stores is, for instance, a human-readable message about the error condition, the backtrace of the raising of the error, and so on.

All built-in exceptions inherit from Exception, which provides some basic behavior, including the storage of a backtrace and an interface for the backtrace printer.

Ad hoc exceptions can be used by calling die with a description of the error:

    die "oops, something went wrong";
    # RESULT: «oops, something went wrong in block <unit> at my-script.p6:1␤»

It is worth noting that die prints the error message to the standard error stream ($*ERR).

Typed exceptions provide more information about the error stored within an exception object. For example, if while executing .zombie copy on an object, a needed path foo/bar becomes unavailable, then an X::IO::DoesNotExist exception can be raised:

    die X::IO::DoesNotExist.new(:path("foo/bar"), :trying("zombie copy"))
    # RESULT: «Failed to find 'foo/bar' while trying to do '.zombie copy'
    # in block <unit> at my-script.p6:1»

Note how the object has provided the backtrace with information about what went wrong. A user of the code can now more easily find and correct the problem.

It's possible to handle exceptional circumstances by supplying a CATCH block:

    die X::IO::DoesNotExist.new(:path("foo/bar"), :trying("zombie copy"));
    CATCH {
        when X::IO { $*ERR.say: "some kind of IO exception was caught!" }
    }
    # OUTPUT: «some kind of IO exception was caught!»

Here, we are saying that if any exception of type X::IO occurs, then the message some kind of IO exception was caught! will be sent to stderr, which is what $*ERR.say does, getting displayed on whatever constitutes the standard error device in that moment, which will probably be the console by default.

A CATCH block uses smartmatching similar to how given/when smartmatches on options, thus it's possible to catch and handle various categories of exceptions inside a CATCH block.

To handle all exceptions, use a default statement. This example prints out almost the same information as the normal backtrace printer. Note that the match target is a role. To allow user defined exceptions to match in the same manner, they must implement the given role. Just existing in the same namespace will look alike but won't match in a CATCH block.

After a CATCH has handled the exception, the block enclosing the CATCH block is exited. In other words, even when the exception is handled successfully, the rest of the code in the enclosing block will never be executed.

    die "something went wrong ...";
    CATCH {
        default { $*ERR.say: .message }
    }
    say "This won't be said.";  # but this line will be never reached since
                                # the enclosing block will be exited immediately
    # OUTPUT: «something went wrong ...␤»

Compare with this:

    {
        die "something went wrong ...";
        CATCH {
            default { $*ERR.say: .message }
        }
    }
    say "Hi! I am at the outer block!";  # OUTPUT: «Hi! I am at the outer block!␤»

See "Resuming of Exceptions", for how to return control back to where the exception originated.

A try block is a normal block with the use fatal pragma turned on and an implicit CATCH block that drops the exception, which means you can use it to contain them. Any exception that is thrown in such a block will be caught by the implicit CATCH block or a CATCH block provided by the user. In the latter case, any unhandled exception will be rethrown. If you choose not to handle the exception, it will be contained by the block:
    class E is Exception { method message() { "Just stop already!" } }

    try {
        E.new.throw;                # this exception is contained and dropped
        say "This won't be said.";
    }
    say "I'm alive!";

    try {
        CATCH {
            when X::AdHoc { .Str.say; .resume }
        }

        die "No, I expect you to DIE Mr. Bond!";
        say "I'm immortal.";
        E.new.throw;                # E is not handled by the CATCH, so it is rethrown
        say "No, you don't!";
    }

Which would output:

    I'm alive!
    No, I expect you to DIE Mr. Bond!
    I'm immortal.
    Just stop already!
      in block <unit> at exception.p6 line 21

The CATCH block is handling just the X::AdHoc exception thrown by the die statement, but not the E exception. In the absence of a CATCH block, all exceptions will be contained and dropped, as indicated above. resume will resume execution right after the exception has been thrown; in this case, in the die statement. Please consult the section on resuming of exceptions for more information on this.

A try-block is a normal block and as such treats its last statement as the return value of itself. We can therefore use it as a right-hand side.

    say try { +"99999" } // "oh no";  # OUTPUT: «99999␤»
    say try { +"♥" } // "oh no";      # OUTPUT: «oh no␤»

Try blocks support else blocks indirectly by returning the return value of the expression or Nil if an exception was thrown.

    with try +"♥" {
        say "this is my number: $_"
    } else {
        say "not my number!"
    }
    # OUTPUT: «not my number!␤»

try can also be used with a statement instead of a block:

    say try "some-filename.txt".IO.slurp // "sane default";
    # OUTPUT: «sane default␤»

What try actually does is, via the use fatal pragma, cause an immediate throw of the exceptions that happen within its scope; but by doing so, the CATCH block is invoked from the point where the exception is thrown, which defines its scope.

    my $error-code = "333";

    sub bad-sub {
        die "Something bad happened";
    }

    try {
        my $error-code = "111";
        bad-sub;

        CATCH {
            default {
                say "Error $error-code ", .^name, ': ', .Str;
            }
        }
    }
    # OUTPUT: «Error 111 X::AdHoc: Something bad happened␤»

Exceptions can be thrown explicitly with the .throw method of an Exception object.

This example throws an AdHoc exception, catches it and allows the code to continue from the point of the exception by calling the .resume method.

    {
        X::AdHoc.new(:payload<foo>).throw;
        "OHAI".say;
        CATCH {
            when X::AdHoc { .resume }
        }
    }
    "OBAI".say;
    # OUTPUT: «OHAI␤OBAI␤»

If the CATCH block doesn't match the exception thrown, then the exception's payload is passed on to the backtrace printing mechanism.

    {
        X::AdHoc.new(:payload<foo>).throw;
        "OHAI".say;
        CATCH {
            when X::NYI { .resume }
        }
    }
    "OBAI".say;
    # RESULT: «foo
    #   in block <unit> at my-script.p6:1»

This next example doesn't resume from the point of the exception. Instead, it continues after the enclosing block, since the exception is caught, and then control continues after the CATCH block.

    {
        X::AdHoc.new(:payload<foo>).throw;
        "OHAI".say;
        CATCH {
            when X::AdHoc { }
        }
    }
    "OBAI".say;
    # OUTPUT: «OBAI␤»

throw can be viewed as the method form of die, just that in this particular case, the sub and method forms of the routine have different names.

Exceptions interrupt control flow and divert it away from the statement following the statement that threw it. Any exception handled by the user can be resumed and control flow will continue with the statement following the statement that threw the exception. To do so, call the method .resume on the exception object.

    CATCH { default { .resume } }                # this is step 2
    die "We leave control after this.";          # this is step 1
    say "We have continued with control flow.";  # this is step 3

Resuming will occur right after the statement that has caused the exception, and in the innermost call frame:

    sub bad-sub {
        die "Something bad happened";
        return "not returning";
    }

    my $return = bad-sub;
    say "Returned ", $return;

    CATCH {
        default {
            say "Error ", .^name, ': ', .Str;
            $return = "0";
            .resume;
        }
    }
    # OUTPUT:
    # Error X::AdHoc: Something bad happened
    # Returned not returning

In this case, .resume is getting to the return statement that happens right after the die statement. Please note that the assignment to $return is taking no effect, since the CATCH statement is happening inside the call to bad-sub, which, via the return statement, assigns the not returning value to it.

If an exception is thrown and not caught, it causes the program to exit with a non-zero status code, and typically prints a message to the standard error stream of the program. This message is obtained by calling the gist method on the exception object.
You can use this to suppress the default behavior of printing a backtrace along with the message:

    class X::WithoutLineNumber is X::AdHoc {
        multi method gist(X::WithoutLineNumber:D:) {
            $.payload
        }
    }
    die X::WithoutLineNumber.new(payload => "message")
    # prints "message\n" to $*ERR and exits, no backtrace

Control-flow keywords such as return raise control exceptions behind the scenes; left unhandled, these surface as ordinary errors. For example, a bare return outside of any routine:

    return;
    # OUTPUT: «X::ControlFlow::Return: Attempt to return outside of any Routine␤»
    # was CX::Return
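To tie the pieces together, here is a small, self-contained sketch of a user-defined typed exception. The class name and attribute are invented for illustration, but the pattern — subclass Exception, provide a message method, throw with .throw, match by type in CATCH — is exactly the one described above:

    class X::MyApp::Timeout is Exception {
        has $.seconds;
        method message() { "Operation timed out after $.seconds seconds" }
    }

    {
        X::MyApp::Timeout.new(seconds => 30).throw;
        CATCH {
            when X::MyApp::Timeout { $*ERR.say: .message }
        }
    }
    # prints «Operation timed out after 30 seconds» to standard error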
<urn:uuid:4c37bebc-02a3-4e26-8e48-ed80c6f13397>
3.421875
1,809
Documentation
Software Dev.
58.237178
95,552,385
In linear algebra, the Schmidt decomposition (named after its originator, Erhard Schmidt) refers to a particular way of expressing a vector in the tensor product of two inner product spaces. It has numerous applications in quantum information theory, for example in entanglement characterization and in state purification, and in plasticity.

Let $H_1$ and $H_2$ be Hilbert spaces of dimensions $n$ and $m$ respectively, and assume $n \geq m$. For any vector $w$ in the tensor product $H_1 \otimes H_2$, there exist orthonormal sets $\{u_1, \ldots, u_m\} \subset H_1$ and $\{v_1, \ldots, v_m\} \subset H_2$ such that

$$w = \sum_{i=1}^{m} \alpha_i \, u_i \otimes v_i,$$

where the scalars $\alpha_i$ are real, non-negative, and, as a (multi-)set, uniquely determined by $w$.

The Schmidt decomposition is essentially a restatement of the singular value decomposition in a different context. Fix orthonormal bases $\{e_1, \ldots, e_n\} \subset H_1$ and $\{f_1, \ldots, f_m\} \subset H_2$. We can identify an elementary tensor $e_i \otimes f_j$ with the matrix $e_i f_j^{\mathsf{T}}$, where $f_j^{\mathsf{T}}$ is the transpose of $f_j$. A general element of the tensor product,

$$w = \sum_{1 \le i \le n,\; 1 \le j \le m} \beta_{ij} \, e_i \otimes f_j,$$

can then be viewed as the $n \times m$ matrix $M_w = (\beta_{ij})$.

Write the singular value decomposition

$$M_w = U \begin{bmatrix} \Sigma \\ 0 \end{bmatrix} V^{*},$$

where $U$ is an $n \times n$ unitary matrix, $V$ is an $m \times m$ unitary matrix, $\Sigma$ is a positive semidefinite diagonal $m \times m$ matrix, and the bracketed factor is $n \times m$. Let $u_1, \ldots, u_m$ be the first $m$ column vectors of $U$, let $v_1, \ldots, v_m$ be the column vectors of $\overline{V}$ (the entrywise complex conjugate of $V$), and let $\alpha_1, \ldots, \alpha_m$ be the diagonal elements of $\Sigma$. The previous expression is then

$$w = \sum_{k=1}^{m} \alpha_k \, u_k \otimes v_k,$$

which proves the claim.

Some properties of the Schmidt decomposition are of physical interest.

Spectrum of reduced states

Consider a vector $w$ of the tensor product in the form of the Schmidt decomposition $w = \sum_i \alpha_i \, u_i \otimes v_i$. Form the rank-1 matrix $\rho = w w^{*}$. Then the partial trace of $\rho$, with respect to either system A or B, is a diagonal matrix whose non-zero diagonal elements are $|\alpha_i|^2$. In other words, the Schmidt decomposition shows that the reduced states of $\rho$ on the two subsystems have the same spectrum.

Schmidt rank and entanglement

The strictly positive values $\alpha_i$ in the Schmidt decomposition of $w$ are its Schmidt coefficients. The number of Schmidt coefficients of $w$, counted with multiplicity, is called its Schmidt rank, or Schmidt number. If $w$ can be expressed as a product $u \otimes v$, then $w$ is called a separable state. Otherwise, $w$ is said to be an entangled state. From the Schmidt decomposition, we can see that $w$ is entangled if and only if $w$ has Schmidt rank strictly greater than 1. Therefore, two subsystems that partition a pure state are entangled if and only if their reduced states are mixed states.

Von Neumann entropy

A consequence of the above comments is that, for bipartite pure states, the von Neumann entropy of the reduced states is a well-defined measure of entanglement. For $w = \sum_i \alpha_i \, u_i \otimes v_i$, the von Neumann entropy of both reduced states of $\rho$ is $-\sum_i |\alpha_i|^2 \log |\alpha_i|^2$, and this is zero if and only if $\rho$ is a product state (not entangled).

In the field of plasticity, crystalline solids such as metals deform plastically primarily along crystal planes. Each plane, defined by its normal vector $\nu$, can "slip" in one of several directions, defined by a vector $\mu$. Together, a slip plane and direction form a slip system, which is described by the Schmidt tensor $P = \mu \otimes \nu$. The velocity gradient is a linear combination of these across all slip systems, where the scaling factor for each system is its rate of slip.

- Pathak, Anirban (2013). Elements of Quantum Computation and Quantum Communication. London: Taylor & Francis. pp. 92–98. ISBN 978-1-4665-1791-2.
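As a worked illustration — a standard textbook example rather than part of the article above — take $n = m = 2$ and the Bell state. In LaTeX form:

    % Bell state: already in Schmidt form with respect to the product bases.
    \[
      w = \tfrac{1}{\sqrt{2}}\left(e_1 \otimes f_1 + e_2 \otimes f_2\right),
      \qquad
      M_w = \tfrac{1}{\sqrt{2}}
            \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
    \]
    % M_w is already diagonal, so the Schmidt coefficients can be read off:
    % \alpha_1 = \alpha_2 = 1/\sqrt{2}. The Schmidt rank is 2 > 1, so w is
    % entangled, and both reduced states have spectrum {1/2, 1/2}.
    \[
      S = -\sum_{i=1}^{2} |\alpha_i|^2 \log |\alpha_i|^2 = \log 2 .
    \]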
<urn:uuid:775ebdaa-7ec0-47a2-aa84-9653be28d7b3>
2.890625
683
Knowledge Article
Science & Tech.
50.036406
95,552,386
Ocean acidification, a direct result of increased CO2 emissions, is set to change the Earth's marine ecosystems forever and may have a direct impact on our economy, resulting in substantial revenue declines and job losses. Intensive fossil-fuel burning and deforestation over the last two centuries have increased atmospheric CO2 levels by almost 40%, which has in turn fundamentally altered ocean chemistry by acidifying surface waters. Fish and other sea organisms such as plankton, crabs, lobsters, shrimp and corals are expected to suffer, which could leave fishing communities on the brink of economic disaster.

Published today, Monday, 1 June, in IOP Publishing's Environmental Research Letters, the paper 'Anticipating ocean acidification's economic consequences on commercial fisheries' suggests a series of measures to manage the impact that declining fishing harvests and revenue losses will have on a wide range of businesses, from commercial fishing to wholesale, retail and restaurants.

Ocean acidification and the declining carbonate ion concentration in sea water could directly damage corals and mollusks, which all depend on sufficient carbonate levels to form shells successfully. Subsequent losses of prey such as plankton and shellfish would also alter food webs and intensify competition among predators for nourishment. As harvesting levels drop, job losses are likely to follow.

The seafood industry is big business, bringing in large revenues and employing thousands. Seafood sales at New York restaurants supported around 70,000 full-time jobs in 1999 alone, while US domestic fisheries provided a primary sale value of $5.1 billion in 2007. In 2007, there were almost 13,000 fishermen in the UK who harvested £645 million of marine products, 43% of which was shellfish.

As the team of researchers from Massachusetts points out, "The worldwide political, ethical, social and economic ramifications of ocean acidification, plus its capacity to switch ecosystems to a different state following relatively small perturbations, make it a policy-relevant 'tipping element' of the earth system."

"Preparing for ocean acidification's effects on marine resources will certainly be complex, because it requires making decade-to-century plans for fisheries, which are normally managed over years to decades, to respond to shorter-term economic and environmental factors."

In order to combat the likely future decline in ocean species, regional solutions such as flexible fishery management plans, studies of seawater chemistry and support for fishing communities must be implemented now to absorb inevitable changes in the future.

Lena Weber | EurekAlert!
<urn:uuid:ab67a43d-97d3-4371-9a7d-340529471b69>
3.546875
1,107
Content Listing
Science & Tech.
30.312447
95,552,392
Pathways & interactions DNA mismatch repair MutH/Restriction endonuclease, type II (IPR011337) Short name: DNA_rep_MutH/RE_typeII Overlapping homologous superfamilies There are four classes of restriction endonucleases: types I, II,III and IV. All types of enzymes recognise specific short DNA sequences and carry out the endonucleolytic cleavage of DNA to give specific double-stranded fragments with terminal 5'-phosphates. They differ in their recognition sequence, subunit composition, cleavage position, and cofactor requirements [PMID: 15121719, PMID: 12665693], as summarised below: - Type I enzymes (EC:188.8.131.52) cleave at sites remote from recognition site; require both ATP and S-adenosyl-L-methionine to function; multifunctional protein with both restriction and methylase (EC:184.108.40.206) activities. - Type II enzymes (EC:220.127.116.11) cleave within or at short specific distances from recognition site; most require magnesium; single function (restriction) enzymes independent of methylase. - Type III enzymes (EC:18.104.22.168) cleave at sites a short distance from recognition site; require ATP (but doesn't hydrolyse it); S-adenosyl-L-methionine stimulates reaction but is not required; exists as part of a complex with a modification methylase methylase (EC:22.214.171.124). - Type IV enzymes target methylated DNA. Type II restriction endonucleases (EC:126.96.36.199) are components of prokaryotic DNA restriction-modification mechanisms that protect the organism against invading foreign DNA. These site-specific deoxyribonucleases catalyse the endonucleolytic cleavage of DNA to give specific double-stranded fragments with terminal 5'-phosphates. Of the 3000 restriction endonucleases that have been characterised, most are homodimeric or tetrameric enzymes that cleave target DNA at sequence-specific sites close to the recognition site. For homodimeric enzymes, the recognition site is usually a palindromic sequence 4-8 bp in length. Most enzymes require magnesium ions as a cofactor for catalysis. Although they can vary in their mode of recognition, many restriction endonucleases share a similar structural core comprising four beta-strands and one alpha-helix, as well as a similar mechanism of cleavage, suggesting a common ancestral origin [PMID: 15770420]. However, there is still considerable diversity amongst restriction endonucleases [PMID: 14576294, PMID: 11827971]. The target site recognition process triggers large conformational changes of the enzyme and the target DNA, leading to the activation of the catalytic centres. Like other DNA binding proteins, restriction enzymes are capable of non-specific DNA binding as well, which is the prerequisite for efficient target site location by facilitated diffusion. Non-specific binding usually does not involve interactions with the bases but only with the DNA backbone [PMID: 11557805]. This entry represents restriction endonucleases EcoRV, NaeI, HincII, and Sau3AI, as well as the DNA mismatch repair protein MutH, which are closely related in sequence and structure. EcoRV recognises the DNA sequence GATATC and cleaves after T-1 [PMID: 15170321], NaeI recognises GCCGCC and cleaves after C-2 [PMID: 10856254], HincII recognises GTYRAC and cleaves after the pyrimidine Y [PMID: 15476804], and Sau3AI recognises GATC and cleaves prior to G-1 [PMID: 11316811]. MutH, along with MutS and MutL, is essential for initiation of methyl-directed DNA mismatch repair to correct mistakes made during DNA replication in Escherichia coli. 
MutH cleaves a newly synthesized and unmethylated daughter strand 5' to the sequence d(GATC) in a hemi-methylated duplex. Activation of MutH requires the recognition of a DNA mismatch by MutS and MutL. MutH shows sequence homology with Sau3AI and structural similarity to the PvuII endonuclease, indicating a strong relationship with these enzymes through divergent evolution and suggesting that type II restriction endonucleases evolved from a common ancestor [PMID: 9482749].
<urn:uuid:2b462d1f-8c46-453e-9147-32a63763e2f1>
2.96875
992
Knowledge Article
Science & Tech.
30.588
95,552,418
Please assist with the following problem. I am totally lost with this, so if you could also include statements telling me what you did, I would appreciate it. The entire chapter talks about array indices out of bounds, which has completely confused me, and there are no examples in the book to help. If you could complete this program, it would help me to better understand how this works and what it is used for.

Design and implement the class myArray that solves the array index out-of-bounds problem and also allows the user to begin the array index at any integer, positive or negative. Every object of type myArray is an array of type int. During execution, when accessing an array component, if the index is out of bounds, the program must terminate with an appropriate error message. Consider the following statements:

    myArray<int> list(5);          // Line 1
    myArray<int> myList(2, 13);    // Line 2
    myArray<int> yourList(-5, 9);  // Line 3

The statement in Line 1 declares list to be an array of 5 components; the component type is int, and the components are list[0], list[1], ..., list[4]. The statement in Line 2 declares myList to be an array of 11 components; the component type is int, and the components are myList[2], myList[3], ..., myList[12]. The statement in Line 3 declares yourList to be an array of 14 components; the component type is int, and the components are yourList[-5], yourList[-4], ..., yourList[0], ..., yourList[8].

Write a program to test the class myArray.
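One way to approach it — a minimal sketch, not a complete graded solution. Following the component counts stated in the problem (list(5) has 5 components, myList(2,13) has 11, yourList(-5,9) has 14), the two-argument constructor is taken to mean indices from the first argument up to but not including the second; the member names below are illustrative choices, not requirements.

    #include <cstdlib>
    #include <iostream>

    // Sketch of myArray<T>: an array with a user-chosen starting index
    // and bounds checking on every access.
    template <class T>
    class myArray {
    public:
        explicit myArray(int size) : myArray(0, size) {}  // indices 0 .. size-1

        myArray(int low, int high)                        // indices low .. high-1
            : lowerBound(low), upperBound(high - 1),
              elements(new T[high - low]()) {}

        ~myArray() { delete[] elements; }

        // Bounds-checked access: terminate with a message if out of range.
        T& operator[](int index) {
            if (index < lowerBound || index > upperBound) {
                std::cerr << "Error: index " << index << " is out of bounds ["
                          << lowerBound << ", " << upperBound << "]\n";
                std::exit(EXIT_FAILURE);
            }
            return elements[index - lowerBound];          // shift to 0-based storage
        }

    private:
        int lowerBound, upperBound;
        T*  elements;
    };

    int main() {
        myArray<int> list(5);         // Line 1: list[0] .. list[4]
        myArray<int> myList(2, 13);   // Line 2: myList[2] .. myList[12]
        myArray<int> yourList(-5, 9); // Line 3: yourList[-5] .. yourList[8]

        yourList[-5] = 42;            // valid: the lowest index
        std::cout << yourList[-5] << '\n';
        list[5] = 1;                  // out of bounds: program terminates here
        return 0;
    }

The key idea is that operator[] subtracts lowerBound before touching the underlying storage, so any starting index works, and the range check happens on every access before the program can read or write outside the array.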
<urn:uuid:9bdc3593-17b7-45fe-89d6-c13a103b605d>
3.25
382
Q&A Forum
Software Dev.
65.689883
95,552,424
World Ostracoda Database

What is an ostracod?

Ostracods are amazing small crustaceans which inhabit virtually all aquatic environments on Earth. The group is characterized by a body completely enclosed between two valves, which in many species occur as calcified "shells". Thus, they have a "seed"-like appearance, and are therefore also known by the terms "seed shrimps" or "mussel shrimps". Ostracods range from the warm waters of the tropics to very cold environments such as polar seas, and are found from intertidal zones to depths of many thousands of metres in the deep sea. They are also adapted to freshwater niches such as rivers, lakes and even temporary ponds. Most species reproduce sexually, but some of them reproduce asexually by parthenogenesis.

Ostracods are generally tiny animals, with a body length mostly ranging from 0.2 mm to 2.0 mm. The planktonic genus Gigantocypris can, however, reach 32 mm in length. Characteristically, ostracods have eight pairs of appendages (though they can be fewer in number), which function mostly in feeding, locomotion and sensing.

Because ostracods have a calcified carapace, they have a high preservational capacity. They are by far the most abundant arthropods in the fossil record, occurring in countless numbers in more or less all stratigraphic records from at least the Ordovician onwards. Their soft parts have even been found in 450-million-year-old rocks. Ostracods have utility: they are used to date and correlate rock sequences world-wide, and are good palaeoenvironmental indicators, revealing information on, for example, palaeobathymetry, palaeosalinity and palaeoclimatic changes of our planet through time. Interestingly, ostracods have survived the 5 'big extinctions' of life that have occurred over the last 540 million years, and have also survived in zero gravity for 4 months in the Russian Mir space station! For other interesting facts about ostracods see Williams et al. (2015).

The class encloses over 33,000 described species and subspecies (see the Kempf Ostracoda Database for details), and many more species remain unknown to science. There are two subclasses with living representatives: Myodocopa and Podocopa. The first subclass is exclusive to marine environments but occupies the benthos as well as the plankton, while podocopans occur in marine, brackish and freshwater environments and occupy almost exclusively the benthos (but also the benthopelagic zone).

This World Ostracoda Database is part of the World Register of Marine Species (WoRMS), a global initiative to provide a register of all names of marine organisms living today (or extinct since a geologically short time). But the World Ostracoda Database also includes freshwater and fossil taxa. The present database has the following objectives:

- to provide an authoritative list of the world's Ostracoda taxa (species, genera, families…), with a focus on recent taxa, but also with information on fossils,
- to provide a list of papers published on ostracods, and
- to provide a base link to other online databases.

Editors:
- Angel, Martin: Halocyprida, Myodocopida
- Brandão, Simone-N.: Myodocopa, Ostracoda
- Drapun, Inna: Halocypridoidea
- Karanovic, Ivana: Cypridoidea, Darwinulocopina, Limnocytheridae, Myodocopa, Terrestricytheroidea
- Meidla, Tõnu: Eridostraca, Leiocopa, Leperditelloidea, Leperditicopa, Ostracod incertae sedis, Palaeocopa, Podocopa
- Perrier, Vincent: Myodocopa, Ostracod incertae sedis

New editors are very welcome!
Currently, there are 10,168 accepted species in the database, among 14,541 species names and 20,332 taxon names. Acceptance is an editorial decision, but we acknowledge that such decisions need to be re-examined frequently in the light of new information. If you disagree with synonymies, generic assignments or higher-level systematics, or if you find any omission, mistake, or merely a typo, please send us your argument-supported opinion. We intend to update this database frequently, so that your corrections will be incorporated.

By downloading or consulting data from this website, the visitor acknowledges that he/she agrees to the following:

- If data are extracted from this website for secondary analysis resulting in a publication, the website should be cited as follows: Brandão, S. N.; Angel, M. V.; Karanovic, I.; Perrier, V. & Meidla, T. (2018). World Ostracoda Database. Accessed at http://www.marinespecies.org/ostracoda on 2018-07-17
- If any data constitute a substantial proportion of the records used in secondary analyses (i.e. more than 25% of the data are derived from this source, or the data are essential to arrive at the conclusion of the analysis), the authors/managers of the database should be contacted. It may be useful to contact us directly in case there are additional data that may strengthen the analysis, or if there are features of the data that are important to consider but may not be apparent from the metadata.

Related resources:
The International Research Group on Ostracoda
An Atlas of Southern Ocean Planktonic Ostracods
Atlas of Atlantic Planktonic Ostracods
Ostracod Research at the Lake Biwa Museum, Japan
<urn:uuid:15e4189a-cc52-4fa3-b02c-4ef0c1f2968f>
4.09375
1,238
Knowledge Article
Science & Tech.
23.946282
95,552,434
The International Panel on Climate Change (IPCC) and other prominent researchers have predicted that stronger and more frequent storms may occur as a result of global warming trends. Tiny tremors, or microseisms, offer a new way to discover whether these predictions are already coming true, said Richard Aster, a geophysics professor at the New Mexico Institute of Mining and Technology.

Unceasing as the ocean waves that trigger them, the microseisms show up as five- to 30-second oscillations of Earth's surface at seismographic stations around the world. Even seismic monitoring stations "in the middle of a continent are sensitive to the waves crashing all around the continent," Aster said. As storm winds drive ocean waves higher, the microseism signals increase in amplitude as well, offering a unique way to track storm intensities across seasons, over time, and at different geographical locations.

For instance, Aster and colleagues Daniel McNamara from the U.S. Geological Survey and Peter Bromirski of the Scripps Institution of Oceanography recently published an analysis in the Seismological Society of America journal Seismological Research Letters showing that microseism data collected around the Pacific Basin and throughout the world could be used to detect and quantify wave activity from multi-year events such as the El Niño and La Niña ocean disruptions.

The findings spurred them to look for a microseism signal that would reveal whether extreme storms were becoming more common in a warming world. In fact, they saw "a remarkable thing" among the worldwide microseism data collected from 1972 to 2008, Aster recalled: at 22 of the 22 stations included in the study, the number of extreme storm events had increased over time.

While the work on evaluating changes in extreme storms is "still very much in its early stages," Aster is "hoping that the study will offer a much more global look" at the effects of climate change on extreme storms and the wind-driven waves that they produce. At the moment, most of the evidence linking the two comes from studies of hurricane intensity and shoreline erosion in specific regions such as the Pacific Northwest and the Gulf of Mexico, he noted.

The researchers are also working on recovering and digitizing older microseism records, potentially creating a data set that stretches back to the 1930s. Aster praised the work of the long-term observatories that have collected the records, calling them a good example of the "Cinderella science"—unloved and overlooked—that often supports significant discoveries. "It's absolutely great data on the state of the planet. We took a prosaic time series, and found something very interesting in it," he said.

Nan Broadbent | EurekAlert!
<urn:uuid:557dbee6-b144-4ad7-9046-185741654466>
3.84375
1,202
Content Listing
Science & Tech.
38.174494
95,552,457
Local, Federal Groups Fund $3 Million For Arizona Grasslands Conservation

Arizona's grasslands lie northwest of Flagstaff, and two-thirds of them have been impacted by drought, invasive species and wildfire. Now, $3 million is going directly to preserving the habitat for native species like mule deer and golden eagles.

The U.S. Department of Agriculture contributed half of that money, which will help the local game and fish department remove shrubs and juniper trees as well as preserve water supplies. The Arizona Game and Fish Department is one partner that has pledged to match the USDA money, bringing the total to $3 million for on-the-ground conservation projects on up to 30,000 acres.

Game and Fish Conservation Manager Al Eiden says Arizona's grasslands are a critical habitat that is degrading at a fast rate. "This is going to provide food for wildlife and food for cattle, and improve the landscape to the point where we're hopefully making a positive impact on water," he said.

He put together the proposal for the USDA conservation money because grasslands are critical for recharging the groundwater supply. "When you have a lot of trees out there sucking up the moisture coming down, you end up with bare ground rather than heavy grass out there," Eiden said. "And then when rain comes down, it hits the ground and runs off and doesn't infiltrate into the groundwater."

Eiden says the goal is to preserve reliable water sources for wildlife on up to 100,000 acres.
<urn:uuid:56b06822-a970-4964-9062-76a010761b36>
3.09375
335
News Article
Science & Tech.
48.869144
95,552,461
Protein interactions direct cellular functions and their responses to pathogens, and they are important therapeutic targets. Scientists from the GSF Research Centre for Environment and Health have recently developed a method enabling simultaneous visualization of individual proteins and their interactions in living cells. This is achieved by engineering the proteins to constantly emit red or blue fluorescent signals and to produce an additional yellow fluorescent signal upon interaction (see image below).

Dr. Ruth Brack-Werner, Director of the GSF Institute of Molecular Virology (IMV), explains the decisive advantage of the new approach: "In previous assays, signals were generated only by interacting proteins, whereas the individual partners remained undetected. However, the absence of signals could not be used to rule out protein interactions, since the absence of one or both interaction partners would have the same effect." To overcome this problem, Brack-Werner and her team developed the so-called extended bimolecular fluorescence complementation (exBiFC), which allows simultaneous monitoring of individual proteins and their interactions.

[Photo caption: Dr. Ruth Brack-Werner, Institute of Molecular Virology of the GSF. Photo: private.]

Brack-Werner and her colleagues' groundbreaking research work focuses on mechanisms that control replication of the human immunodeficiency virus (HIV), which causes AIDS. "HIV replication is based on the interaction of cellular proteins with viral proteins. Interactions involving viral regulatory factors have a direct impact on the amount of virus produced by the HIV host cell," Brack-Werner explains. "Preventing HIV proteins from interacting with their crucial partners is a promising approach to developing novel therapies." Therefore, the GSF scientists developed and validated exBiFC with the HIV Rev protein, which is an accelerator of HIV production. Various assays investigating Rev interactions in artificial settings indicate that the activity of Rev depends on the interaction of Rev molecules with each other and with cellular proteins. The latter include Exportin 1, which transports proteins from the nucleus to the cytoplasm, and RISP, a modulator of HIV gene expression discovered by the Brack-Werner team in previous studies.

Brack-Werner and her team demonstrated that exBiFC allows visualization of interactions of Rev with itself and with Exportin 1 and RISP in living cells. In addition, they were able to compare the strengths of the interactions of Rev with its partners by analysing the intensities of the signals in cell images.

exBiFC has a wide range of potential applications and represents an important tool for the elucidation of protein interaction networks and the discovery of novel antiviral factors. Thus exBiFC has enormous potential in the battle against leading global health problems such as infectious diseases and cancers.

GSF - Forschungszentrum für Umwelt und Gesundheit, Germany
Heinz Joerg Haury | EurekAlert!
<urn:uuid:dec684c0-f049-45fa-ac71-a6828e57aca3>
3.25
1,248
Content Listing
Science & Tech.
32.978146
95,552,469
The division of labor is more efficient than a struggle through life without help from others – this is also true for microorganisms. Bacteria that divide their metabolic labor (left colony) grow faster than bacterial cells that produce all amino acids on their own (right colony). S. Pande / MPI for Chemical Ecology

Researchers from the Research Group Experimental Ecology and Evolution at the Max Planck Institute for Chemical Ecology and their colleagues at the Friedrich Schiller University in Jena, Germany, came to this conclusion when they performed experiments with microbes. The scientists worked with bacteria that were deficient in the production of a certain amino acid and therefore depended on a partner to provide the missing nutrient. Bacterial strains that complemented each other's needs by providing the required amino acid showed a fitness increase of about 20% relative to a non-deficient strain without a partner. This result helps to explain why cooperation is such a widespread model of success in nature. (The ISME Journal, 28 November 2013, DOI: 10.1038/ismej.2013.211)

Ecology and evolution: close relatives

Each life form on our planet has to adapt to its environment as well as it can. Apart from getting used to climate conditions and food supply, each species must get along with the other organisms in its habitat. In the course of evolution, species adapt continuously to each other and to the environment by changing their genetic features. This is why cold-resistant species live at the poles and heat-resistant species in the deserts. Nutritional needs and metabolic regulation also underlie the principle of evolution. So let's take a look at the world of microbes in this context!

"No matter where you look: microbial communities can be found in almost every habitat you can think of," says Christian Kost, leader of the research group "Experimental Ecology and Evolution" at the Max Planck Institute for Chemical Ecology in Jena, Germany. Microbes often live in symbiosis with higher organisms, but they also cooperate with each other in order to optimally utilize the resources that are available to them. Interestingly, a look at the genomes of cooperating bacterial strains shows that some of them are unable to perform all vital metabolic functions on their own. Instead, they rely on their cooperative partner: their environment, that is to say other organisms, provides the nutrients they can no longer produce themselves. However, the result of the cooperation is a risky dependency: if one partner is lost, the other dies as well.

Can such a dependency in fact be a trait that is selected for and maintained for a longer period in a bacterial population? Is this assumption compatible with Darwin's theory of the "survival of the fittest"? If so, cooperating partners should perform as well as or even better than microbes without a partner in terms of fitness.

Synthetic Ecology: simulating ecological parameters in a test tube

Bringing a naturally evolved symbiotic community from the real world into the lab to study such cooperation is often very difficult. Therefore, the scientists used a synthetic model: Escherichia coli bacteria were genetically modified in such a way that one bacterial strain could no longer produce a certain amino acid, such as tryptophan, but produced all other amino acids in high concentrations. If this strain grows in a culture with another strain unable to produce arginine, another amino acid, both strains are able to feed each other.
Amazingly, such co-culture experiments showed that the growth of these bacterial cells increased by 20% in comparison to the unmodified wild-type strain, which was able to produce all essential amino acids by itself. The inability of the deficient strain to produce an essential amino acid had a positive effect on its growth when a partner was present that compensated for this loss. This can be explained by the considerably reduced energy costs both strains had to invest in producing the exchanged amino acids. Specializing in the production of certain, but not all, necessary amino acids made the bacterial cells more efficient and thus resulted in faster growth. Interestingly, the two cooperating, amino acid-exchanging strains even outcompeted a self-sustaining wild-type strain.

The research results from Christian Kost's lab illustrate why symbiotic relationships with bacteria are so prevalent. In the course of evolution, an association may become so close that the mutualistic partners merge into a new, multicellular organism.

The research project was funded by the Volkswagen Foundation, the Jena School for Microbial Communication, the Fundação Calouste Gulbenkian and the Fundação para a Ciência e a Tecnologia, as well as Siemens SA Portugal. [JWK/AO]

Download of high-resolution pictures: http://www.ice.mpg.de/ext/735.html

Angela Overmeyer | Max-Planck-Institut
<urn:uuid:474910bb-9c85-4047-996f-cb924925b515>
3.625
1,584
Content Listing
Science & Tech.
35.318706
95,552,505
The findings suggest that as open farmland replaces forests and "agroforests" – where crops are grown under trees – reduced numbers of bird species and shifts in the populations of various types of birds may hurt "ecosystem services" that birds provide to people, such as eating insect pests, spreading seeds and pollinating crops.

"We found that agroforests are better overall for bird biodiversity in the tropics than open farms," says study author Çağan H. Şekercioğlu (pronounced Cha-awn Shay-care-gee-oh-loo), an assistant professor of biology at the University of Utah. "This doesn't mean people should farm in intact forests," the ornithologist adds. "But if you have the option of having agroforest versus open farmland, that is better for biodiversity, with shade coffee and shade cacao [the source of cocoa and chocolate] being the prime examples."

Şekercioğlu's new study, funded by the University of Utah, is being published this month in the Journal of Ornithology. He will present the findings Thursday, Aug. 9, at the Ecological Society of America's annual meeting in Portland, Ore.

If consumers wish to support bird diversity and agroforests, "a good way is by choosing certified, bird-friendly, shade coffee or shade chocolate," he says. While such coffee or chocolate often cost more because they are more labor-intensive to produce, the certification "is usually better for the farmers' income as well." Other crops grown in shade include cardamom, which is a spice, and yerba mate, which is steeped in hot water to make a beverage popular in South America.

Study Focuses on Birds of Forests, Farms or Both

An agroforest "is a type of farm where the crops are grown under trees at a reasonable density," Şekercioğlu says. "Often, it's not like forest-forest – it feels more like an open park," although in Ethiopia "commercial coffee is grown under full-on forests in its original native habitat."

Şekercioğlu conducted the study in two steps. First, "I used my world bird database that has information on all the 10,000-plus bird species of the world," he says. "I sorted birds based on habitat choices and compared species that prefer forests to those that prefer agricultural areas and others that prefer both forests and agricultural areas."

Next, he reviewed about 40 previously published studies that examined bird communities in forests, agroforests and open agricultural areas. "As you go to more and more open agriculture, you lose some bird groups that provide important ecosystem services like insect control [insect eaters], seed dispersal [fruit eaters] and pollination [nectar eaters], while you get higher numbers of granivores [seed and grain eaters] that actually can be crop pests," Şekercioğlu says. Specifically:

-- Insectivores or insect-eating birds do best in forests – especially those that live near the ground in the understory, the layer of plants below the tree canopy and above the ground cover. But small and medium insect-eating birds, especially migrant and canopy species, do well in agroforests. The number of insect-eating species declines on open farms, where they help control pests.

-- Frugivores or fruit-eating birds, especially larger ones, "do best in forest because they have more habitat and more food, and the large ones often are hunted outside forests in agricultural settings. Overall, frugivores – especially smaller ones – do OK in agroforests, but the number of fruit-eating species decline significantly on open farms." Frugivores help spread the seeds of the fruits they eat.
-- Nectarivores or nectar-eating birds help pollinate many plants. They "tend to increase in agroforests compared with forests. A lot of nectar-eating birds obviously like flowers, and many plants flower when there's some light. When you have extensive forest it's often pretty shady, so not many things are in flower at any given time." The nectar eaters are less common on open farms.

-- Omnivores, which are birds that eat many things, "tend to do better in agroforests and especially on open farms" than in forests, because their diet is generalized rather than specialized in certain foods.

-- Granivores, or grain- and seed-eating birds, are "the only group that significantly increases in open agricultural areas. A lot of the seeds they eat are grass seeds, but also from crops. Some of these seed-eating bird species are major agricultural pests, and that's another reason for encouraging agroforests. In completely open agricultural systems, you have more seed-eating birds that can cause significant crop losses."

While the study found fewer species on farms than in agroforests, and fewer in agroforests than in forests, Şekercioğlu says it doesn't answer a key question: "Does the decline in the number of species translate into a decline in individuals providing a given ecosystem service?" If so, farms and agroforests have lost birds that provide important insect-control, pollination and seed-dispersal services.

"It is possible you may lose a lot of species, but some of the remaining species increase in number and compensate for the decline in ecosystem services by the lost species," he adds. "It's one of the biggest questions in ecology."

The Trend toward Sun Coffee

Noting that the study found forests have more tropical bird species than agroforests, which in turn have more bird species than open farms, Şekercioğlu says: "A lot of threatened species globally are found only in forests, and most of them disappear from agroforests and open agricultural areas."

Lee Siegel | Newswise Science News
<urn:uuid:1c0c0c9c-c2a4-49bf-8bad-1c6377cd04c5>
3.375
2,050
Content Listing
Science & Tech.
37.568269
95,552,515
The so-called Platonic Solids are regular polyhedra. "Polyhedra" is a Greek word meaning "many faces." There are five of these, and they are characterized by the fact that each face is a regular polygon, that is, a straight-sided figure with equal sides and equal angles:

The tetrahedron: four triangular faces, four vertices, and six edges.
The cube: six square faces, eight vertices, and twelve edges.
The octahedron: eight triangular faces, six vertices, and twelve edges.
The dodecahedron: twelve pentagonal faces, twenty vertices, and thirty edges.
The icosahedron: twenty triangular faces, twelve vertices, and thirty edges.

It is natural to wonder why there should be exactly five Platonic solids, and whether there might conceivably be one that simply hasn't been discovered yet. However, it is not difficult to show that there must be five—and that there cannot be more than five.

First, consider that at each vertex (point) at least three faces must come together, for if only two came together they would collapse against one another and we would not get a solid. Second, observe that the sum of the interior angles of the faces meeting at each vertex must be less than 360°, for otherwise they would not all fit together.

Now, each interior angle of an equilateral triangle is 60°, hence we could fit together three, four, or five of them at a vertex, and these correspond to the tetrahedron, the octahedron, and the icosahedron. Each interior angle of a square is 90°, so we can fit only three of them together at each vertex, giving us a cube. (We could fit four squares together, but then they would lie flat, giving us a tessellation instead of a solid.) The interior angles of the regular pentagon are 108°, so again we can fit only three together at a vertex, giving us the dodecahedron. And that makes five regular polyhedra.

What about the regular hexagon, that is, the six-sided figure? Well, its interior angles are 120°, so if we fit three of them together at a vertex the angles sum to precisely 360°, and therefore they lie flat, just like four squares (or six equilateral triangles) would do. For this reason we can use hexagons to make a tessellation of the plane, but we cannot use them to make a Platonic solid. And, obviously, no polygon with more than six sides can be used either, because the interior angles just keep getting larger.

The Greeks, who were inclined to see in mathematics something of the nature of religious truth, found this business of there being exactly five Platonic solids very compelling. The philosopher Plato concluded that they must be the fundamental building blocks—the atoms—of nature, and assigned to them what he believed to be the essential elements of the universe. He followed the earlier philosopher Empedocles in assigning fire to the tetrahedron, earth to the cube, air to the octahedron, and water to the icosahedron. To the dodecahedron Plato assigned the element cosmos, reasoning that, since it was so different from the others in virtue of its pentagonal faces, it must be what the stars and planets are made of.

[Figure: Kepler's Platonic Solids Model of the Cosmos]

Although this might seem naive to us, we should be careful not to smile at it too much: these were powerful ideas, and they led to real knowledge. As late as the 16th century, for instance, Johannes Kepler was applying a similar intuition to attempt to explain the motion of the planets. Early in his life he concluded that the distances of the orbits, which he assumed were circular, were related to the Platonic solids in their proportions.
This model is represented in this woodcut from his treatise Mysterium Cosmographicum. Only later in his life, after his friend the great astronomer Tycho Brahe bequeathed to him an enormous collection of astronomical observations, did Kepler finally reason to the conclusion that this model of planetary motion was mistaken, and that in fact planets moved around the sun in ellipses, not circles. It was this discovery that led Isaac Newton, less than a century later, to formulate his law of gravity—which governs planetary motion—and which ultimately gave us our modern conception of the universe.

The beauty and interest of the Platonic solids continue to inspire all sorts of people, and not just mathematicians. For a look at how one artist used these shapes, you may wish to study the M.C. Escher Minitext.
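For readers who want the counting argument above in symbols, it can be stated compactly. With $p$ the number of sides of each face and $q$ the number of faces meeting at each vertex, the vertex-angle requirement becomes a single inequality whose integer solutions are exactly the five solids:

    % Interior angle of a regular p-gon: (1 - 2/p) * 180 degrees.
    % Requiring q of them (q >= 3) to total strictly less than 360 degrees:
    \[
      q\left(1 - \frac{2}{p}\right)180^\circ < 360^\circ
      \quad\Longleftrightarrow\quad
      \frac{1}{p} + \frac{1}{q} > \frac{1}{2}, \qquad p,\, q \ge 3.
    \]
    % The only integer solutions are
    % (p,q) = (3,3), (4,3), (3,4), (5,3), (3,5):
    % tetrahedron, cube, octahedron, dodecahedron, icosahedron.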
<urn:uuid:14e780d5-eed9-4b4a-8d09-b7c5f6a83bb4>
4.375
1,196
Knowledge Article
Science & Tech.
45.837051
95,552,524
Scientists have assembled the world's smallest house by using a combination of robotics and nanotechnology. The micro-house even has a door that a house mite can fit through.

The house has been devised, according to Engadget, as a proof-of-concept study by a nanorobotics team based at the Femto-ST Institute in Besançon, France. The researchers successfully assembled a new microrobotics system termed the μRobotex nanofactory. By deploying tiny robots, the researchers can construct microstructures within a large vacuum chamber. Within this chamber they can fix components onto optical fiber tips with nanometer accuracy.

The idea behind the micro-house construction was to demonstrate that the latest advances in optical sensing technologies can be used to manipulate ion guns (via gas injection), electron beams and finely controlled robotic piloting, so that a variety of different constructs can be rendered. As an example of the complexity and tiny scale of the operations, the ion gun focuses on an area only 300 micrometers by 300 micrometers so that it can fire ions onto the fiber tip and silica membrane.

This work forms part of the area of lab-on-fiber technologies. In the early stages of this technology there were no robotic actuators available for nanoassembly, which limited what engineers could achieve in terms of creating microstructures at the nanoscale. A recent advance in miniaturized sensing elements has addressed this: these sensing elements can be fitted onto fiber tips, allowing scientists to manipulate different components. The technology enables scientists to insert optical fibers as thin as a strand of human hair into previously inaccessible locations, such as jet engines, to detect radiation levels, or into human blood vessels to detect viral particles.

Image Credit: FEMTO-ST Institute
<urn:uuid:3d565233-705d-419d-8301-958d597258ae>
3.609375
672
Content Listing
Science & Tech.
39.439044
95,552,526
Authors: George Rajna

Researchers at the University of Vienna and the Austrian Academy of Sciences develop a new theoretical framework to describe how causal structures in quantum mechanics transform.

JILA scientists have invented a new imaging technique that produces rapid, precise measurements of quantum behavior in an atomic clock in the form of near-instant visual art. The unique platform, which is referred to as a 4-D microscope, combines the sensitivity and high time resolution of phase imaging with the specificity and high spatial resolution of fluorescence microscopy.

The experiment relied on a soliton frequency comb generated in a chip-based optical microresonator made from silicon nitride. This scientific achievement toward more precise control and monitoring of light is highly interesting for miniaturizing optical devices for sensing and signal processing.

It may seem like such optical behavior would require bending the rules of physics, but in fact, scientists at MIT, Harvard University, and elsewhere have now demonstrated that photons can indeed be made to interact, an accomplishment that could open a path toward using photons in quantum computing, if not in light sabers.

Optical highways for light are at the heart of modern communications. But when it comes to guiding individual blips of light called photons, reliable transit is far less common.

Theoretical physicists propose to use negative interference to control heat flow in quantum devices. Particle physicists are studying ways to harness the power of the quantum realm to further their research.

Comments: 60 Pages. [v1] 2018-03-28 09:32:11
Unique-IP document downloads: 14 times
<urn:uuid:21f6a092-0941-4718-ad74-ebe1fbce369a>
2.90625
465
Knowledge Article
Science & Tech.
29.923684
95,552,540