n = ["Michael", "Lieberman"]

# Add your function here
def join_strings(words):
    result = ""
    for i in range(len(words)):
        words.append(result)
    return result

print join_strings(n)

"Hey guys, I've already completed the course. I just wanted to figure out why the .append() method doesn't seem to work. Please help."
Your result is an empty string, and you are appending that empty string. Maybe you want to append i?
Thanks! I have just tried your method, but now it's giving me this error:
Oops, try again. join_strings(['x', 'y', 'z', 'a']) returned '' instead of 'xyza'.
Tried it on IDLE too
append to result and not to words:
You need to append your i to result to get the string.
Oops, try again. join_strings(['x', 'y', 'z', 'a']) resulted in an error: 'str' object has no attribute 'append'
I'm also trying to run this other code in IDLE; please tell me what I'm doing wrong. I want to create a single list out of the sublists.
n = [[1, 2, 3], [4, 5, 6, 7, 8, 9]]
results = []
for lists in n:
    for numbers in lists:
        result += lists[numbers]
Your variable is called result, not results.
Haha, yeah, that was a typo. I wrote result but it still gave the same error.
Oh, your return should be outside of the loop, and you can't append to a string; that only works for lists. You can try something like this:
result += words[i]
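Pulling the thread's advice together, here is a working version of both snippets. This is a sketch in Python 3 (the original used Python 2's print statement), not the course's official solution:

```python
def join_strings(words):
    result = ""
    for i in range(len(words)):
        result += words[i]  # build up result; don't append to the input list
    return result

def flatten(list_of_lists):
    result = []
    for sublist in list_of_lists:
        for number in sublist:
            result.append(number)  # append the element itself, not an index lookup
    return result

n = ["Michael", "Lieberman"]
print(join_strings(n))                            # MichaelLieberman
print(flatten([[1, 2, 3], [4, 5, 6, 7, 8, 9]]))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The original errors came from calling .append() on a string (strings have no append method) and from using list elements as indices into the sublist.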
February 23, 1989
In a counterattack on the effects of acid rain, American scientists and wildlife experts are increasingly bringing dead lakes and streams back to life by using the simple technique of adding limestone to the water. The technique, long in use in Sweden, is not a permanent solution to the problem of acidification, experts say. That will come only when industrial emissions that cause acid rain are sufficiently reduced, even proponents of the technique say. But they believe "liming," as it...
Ch. 4: Groundwater Recharge in Arid Regions. Evaporation is significant, which can produce strong enrichment in isotopic compositions: evaporative enrichment in alluvial groundwater.
Fig. 4-8 The isotopic composition of wadi runoff for three rainfall events in northern Oman. The regression lines for the summer rains (slopes indicated) show strong evaporation trends at humidities less than 50%. The local water line for northern Oman (NOMWL) is defined as δ²H = 7.5 δ¹⁸O + 16.1.
Fig. 4-9 Deep groundwaters from fractured carbonate aquifers and shallow alluvial groundwaters in northern Oman. Alluvial groundwaters have experienced greater evaporative enrichment. Also shown is the average evaporation slope (s = 4.5) for the region, with h = 0.5.
The kinetic fractionation factors follow from Gonfiantini's equations in Chapter 2, giving Δε¹⁸Ov–bl = –7.1‰ and Δε²Hv–bl = –6.3‰.
The total enrichments (εtotal = εv–l + Δεv–bl) for evaporation under these conditions, at the mean annual temperature of 30°C (with εv–l from Table 1-4), are then –16.0‰ for ¹⁸O and –78‰ for ²H.
δ¹⁸Ogw – δ¹⁸Oprec = ε¹⁸Ototal · ln f = 4‰
Solving gives f = 0.78, i.e. about 22% of the water has evaporated.
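The back-calculation above can be sketched numerically. This assumes the simple Rayleigh-type relation δgw – δprec = εtotal · ln f, with the values quoted above:

```python
import math

# Values quoted above: total 18O enrichment at 30 C, and the observed
# shift between groundwater and precipitation (both in permil).
eps_total_18O = -16.0
shift_18O = 4.0

# Rayleigh-type relation: shift = eps_total * ln(f)
f = math.exp(shift_18O / eps_total_18O)   # residual (non-evaporated) water fraction
print(round(f, 2), round(1.0 - f, 2))     # 0.78 0.22 -> about 22% evaporated
```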
Fig. 4-19 Storm hydrograph separation for a two-component system using δ¹⁸O.
Fig. 4-20 Storm hydrograph separation for a three-component system using δ¹⁸O and dissolved silica (Si).
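A two-component separation like that in Fig. 4-19 reduces to a simple isotope mass balance. The sketch below uses hypothetical δ¹⁸O values, not data from the figure:

```python
def pre_event_fraction(d_stream, d_pre_event, d_event):
    """Two-component hydrograph separation by d18O mass balance:
    Q_t * d_t = Q_p * d_p + Q_e * d_e, with Q_t = Q_p + Q_e."""
    return (d_stream - d_event) / (d_pre_event - d_event)

# Hypothetical d18O values (permil): stream, pre-event groundwater, event rain.
print(round(pre_event_fraction(-12.0, -10.0, -16.0), 2))  # 0.67
```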
Table 4-3 Summary of pre-event contribution to streamflow discharge for rainfall and snowmelt events (n) for various landuse drainage basins (from Buttle, 1994)
Fig. 4-24 The fractional mixing of two groundwaters quantified on the basis of their stable isotope contents, shown as the fraction of groundwater "A" in the "A-B" mixture.
Fig. 4-25 Ternary mixing diagram for groundwaters from crystalline rocks of the Canadian Shield. The glacial meltwater end-member was identified by the isotopic depletion observed in many of the intermediate depth groundwaters, and its ¹⁸O–Cl⁻ composition determined by extrapolation to a ³H-free water (Douglas, 1997).
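A three-end-member separation like that in Fig. 4-25 needs two tracers plus mass balance, which gives a 3×3 linear system. The sketch below solves it with hypothetical end-member compositions (not the Canadian Shield data):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def mix_fractions(end_a, end_b, end_c, mixture):
    """Fractions of three end-members in a mixture, from two tracers
    (e.g. d18O and Cl-) plus the mass balance xA + xB + xC = 1.
    Each argument is a (tracer1, tracer2) pair."""
    a = [[1.0, 1.0, 1.0],
         [end_a[0], end_b[0], end_c[0]],
         [end_a[1], end_b[1], end_c[1]]]
    rhs = [1.0, mixture[0], mixture[1]]
    d = det3(a)
    out = []
    for col in range(3):               # Cramer's rule, column by column
        ai = [row[:] for row in a]
        for r in range(3):
            ai[r][col] = rhs[r]
        out.append(det3(ai) / d)
    return tuple(out)

# Hypothetical end-members (d18O in permil, Cl- in mg/L):
fr = mix_fractions((-12.0, 10.0), (-8.0, 200.0), (-20.0, 50.0), (-12.4, 75.0))
print([round(x, 2) for x in fr])  # [0.5, 0.3, 0.2]
```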
Diamond Helps Develop New Way of Studying the Tiniest Microcrystals
News Aug 11, 2015
Unpicking the mysteries of microcrystals can be a huge challenge for scientists. But a European team led by scientists from DESY, a German national research centre that operates a series of particle accelerators, has now used Diamond to develop a new type of sample holder in which several thousand microcrystals can be positioned on a single silicon chip at the same time and then be examined by crystallographic methods.
Scientists crystallize biological molecules such as proteins or viruses, and then use X-ray crystallography to visualize their atomic structure allowing us to understand how they function. Many of these structures are essential to understanding biological processes and developing new types of drugs. However, obtaining crystals can be difficult and time consuming.
At Diamond, crystals are exposed to intense light in the form of X-rays; this results in the diffraction patterns from which the atomic structure is determined. Recent developments in X-ray sources with increasing brilliance, such as Diamond, DESY’s PETRA III, or the new generation of free electron lasers (FELs), have opened the door to examining ever smaller crystals. These microcrystals are considerably easier to “grow”, but because of their small size they call for new approaches in preparing specimens.
Researchers from DESY in Germany, the Paul Scherrer Institute in Switzerland and Diamond Light Source in the UK have developed a new type of sample holder for ‘serial protein crystallography’. The holder consists of a single crystal of silicon with a regular array of pores.
As the tiny crystals fall into the pores, the holder allows them to be positioned with great precision, while at the same time producing virtually no disruptive signal of its own during the X-ray diffraction experiment, something that is unavoidable using conventional methods. About 20,000 microcrystals fit on the two square millimetre silicon chip, all of which could be scanned in less than three minutes at a source such as the LCLS in California, making it an extremely effective mechanism for crystallography experiments.
The team used Diamond’s I24 beamline to test the effectiveness of the new chip. Armin Wagner, one of the paper’s authors and principal beamline scientist for Diamond’s Long-wavelength MX beamline, comments: “Chips provide an exciting new way for sample mounting, both at sources such as Diamond and at free electron lasers, and could help address many of the challenges associated with sample mounting in microcrystallography. This is a great example of how synchrotrons are assisting with technical advances for the new FELs, where experimental time is extremely precious. We’ll be refining the chip technique further with experiments at LCLS in July using crystals supplied by the Division of Structural Biology (STRUBI) at the University of Oxford.”
The chip can be used at microfocus beamlines of synchrotron light sources such as Diamond, as well as with X-ray free electron lasers such as the LCLS in Stanford and the forthcoming European XFEL in Hamburg.
In contrast to the methods typically used so far, such as liquid jets, in which microcrystals are surrounded by a liquid or a gel and then analyzed using X-rays, the new sample holder positions the crystals in small holes in a membrane made from a single silicon crystal, just ten micrometres thick, which can then be scanned by an X-ray beam. This technique ensures that the crystals can be accurately located.
“Our new sample holder allows us to characterize tiny microcrystals with a unique level of efficiency,” explains Alke Meents, the scientist at DESY who is in charge of the work. Philip Roedig, a scientist at DESY and the principal author of the study, continues: “You can think of the chip as being like a sieve. The silicon membrane consists of a matrix of many tiny holes which are slightly smaller than the crystals themselves. In order to prepare the specimen, a drop of the mother solution containing the microcrystals is placed on top of the chip, and then the solution is drawn off from below. The crystals are left sticking in the holes, like in a sieve, and can be scanned by the X-ray beam, crystal by crystal.”
Species Detail - Variegated Horsetail (Equisetum variegatum) - Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
Equisetum hyemale subsp. variegatum, Equisetum hyemale subsp. wilsonii, Equisetum hyemale var. variegatum, Equisetum variegatum var. wilsonii, Equisetum variegatum var. wilsonii, Equisetum wilsonii, Hippochaete variegata
Schleich. ex F. Weber & D. Mohr
11 April (recorded in 1998)
4 December (recorded in 2015)
National Biodiversity Data Centre, Ireland, Variegated Horsetail (Equisetum variegatum), accessed 19 July 2018, <https://maps.biodiversityireland.ie/Species/180381>
Title text: The good news is that according to the latest IPCC report, if we enact aggressive emissions limits now, we could hold the warming to 2°C. That's only HALF an ice age unit, which is probably no big deal.
This comic represents the impacts due to climate change by demonstrating the changes in climate that should be expected with a given change in global temperature. This is done by detailing the world's climate in geologic periods where the global average temperature has changed by one or more "Ice Age Units," or IAU. The comic defines an IAU as the difference in global temperature between today and the last ice age, about 4.5 °C. An IAU of 0 represents modern global temperature. It was later followed with a similar but much more elaborate chart in 1732: Earth Temperature Timeline.
One IAU also happens to be the expected increase in global temperature that the world will see by the end of the year 2100. A prediction of 4-5 degrees Celsius of warming may not sound significant, but comparing today with the last ice age makes clear how substantial such a difference is.
- An IAU of -4 is associated with Snowball Earth. Snowball earth is a near-total freezing of the entire surface around 650 million years ago, in the Cryogenian. This may have been the greatest ice age known to have occurred on Earth.
- An IAU of -1 is associated with the last ice age. During this time Randall's neighborhood was buried under an ice sheet.
- An IAU of +1 is the predicted global temperature by the end of year 2100. While it makes sense to assume it's just as drastic a difference as -1 IAU, we still don't know the actual nature of what it would be, which is why it is represented by a question mark in the comic.
- An IAU of +2 is associated with the "Hothouse Earth" of the early Cretaceous period. At this time there were "palm trees at the poles" as there were polar forests during Cretaceous summers. (Average temperature of North Pole during the summer is 0 °C or 32 °F. 0+2*4.5 = 9 °C = 48.2 °F, hot enough for trees to grow at the North Pole under hypothetical 2 IAU scenarios)
An increase of 4.5 °C (+1 IAU) seems like a small change in temperature, but the changes it would cause are likely very large as it can also be described as halfway to palm trees at the poles.
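The unit conversion the comic relies on is trivial but worth making explicit. A minimal sketch using the comic's definition of 4.5 °C per IAU:

```python
IAU_IN_C = 4.5  # one Ice Age Unit, in degrees Celsius (the comic's definition)

def to_iau(delta_c):
    """Convert a global average temperature change in C to Ice Age Units."""
    return delta_c / IAU_IN_C

print(round(to_iau(4.5), 2))  # 1.0  -> projected warming by 2100
print(round(to_iau(2.0), 2))  # 0.44 -> the IPCC 2 C target, about half an IAU
```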
The topic of ice coverage over various cities has previously been covered in 1225: Ice Sheets. The image of Boston from that comic is reused at the top of the huge chart in 1732: Earth Temperature Timeline.
This comic shows the extreme extent to which global warming can (and will) change our environment. Randall presented this view earlier in 164: Playing Devil's Advocate to Win. Climate change, especially global warming, is a recurring theme in xkcd, and Randall is clearly convinced that we are causing it.
The title text expands on this, demonstrating that the potential impact of an increase of even 2 °C, the IPCC report's best-case scenario and about half an Ice Age Unit, makes controlling climate change seem more urgent. The figure of 2 °C was the most commonly agreed temperature target at the time the comic was published, assuming the creation of aggressive emissions limits.
The 4.5 degree increase is predicted by the bern2.5cc simulation (a moderate simulation) of the A1FI scenario. In the A1FI scenario the world has a high dependence on fossil fuels, experiences "very rapid economic growth", a declining world population by 2050, as well as a high rate of increase in energy efficiency after 2050.
The oldest known animal fossils (sponges) are from the Snowball Earth, while flowering plants became the dominant plant species during the Cretaceous period. It is believed that the entire Earth was frozen for the first time about 2,400 to 2,100 million years ago, which could have been a result of the Great Oxygenation Event.
The 200m sea level rise given in the last panel for a "Cretaceous Hothouse" (i.e. if all ice on Earth melted, including the Antarctic ice cap) cannot be explained by this melt-off alone. If all the ice melted, the water level would only increase by about 60-80m, according to the IPCC Third Assessment Report (section 11.2.3 on Greenland and Antarctic Ice Sheets) and "Sea Level and Climate" from the USGS Water Science School. Additional sea level rise can be expected from thermal expansion of seawater; indeed, the main reason for rising sea level at the moment is this expansion of the sea due to increasing temperature. But the high-end 500-year projection for a 4x increase in CO2 is for only an additional 2m due to thermal expansion, with a decreasing rate of growth over time. (Some of the sea level changes in the Cretaceous are due to changes in bathymetry.)
The 5th and most recent Intergovernmental Panel on Climate Change (IPCC AR5) presents four alternative trajectories for future concentrations of greenhouse gasses, termed Representative Concentration Pathways (RCPs): RCP2.6, RCP4.5, RCP6, and RCP8.5. They are named after possible ranges of radiative forcing values in the year 2100 relative to pre-industrial values (+2.6, +4.5, +6.0, and +8.5 W/m2, respectively). The hottest of these, RCP8.5, is predicted to result in a warming of 2.6 °C to 4.8 °C for the 2081−2100 period, and between 3 and 5.5 by the year 2100 (Working Group I Summary for Policymakers).
The lack of internationally binding agreements makes breaching an increase of 2 °C more and more likely.
- Without prompt, aggressive limits on CO2 emissions, the Earth will likely warm by an average of 4°-5°C by the century's end.
- HOW BIG A CHANGE IS THAT?
- [A ruler chart is drawn inside a frame.]
- In the coldest part of the last ice age, Earth's average temperature was 4.5°C below the 20th century norm.
- Let's call a 4.5°C difference one "Ice Age Unit."
- [A ruler with five main divisions — each again with 3 smaller quarter division markers. Above it the five main divisions are marked as follows with 0 in the middle:]
- -2 IAU -1 IAU 0 +1 IAU +2 IAU
- [Next to the 0 marking a black arrow points toward 0.25 on the scale and above it is written:]
- Where we are today
- [The rest of the text is below the ruler.]
- [To the far left below -2 IAU a curved arrow points to the left. Below it is written:]
- Snowball earth (-4 IAU)
- [Below -1 IAU a black arrow points toward this division. Below the arrow is written:]
- 20,000 years ago
- [Below this an image of a glacier. At the top of the image is written:]
- My neighborhood:
- [At the bottom of the image is an arrow pointing to the glacier:]
- Half a mile of ice
- [Below 0 IAU a black arrow points toward this division. Below the arrow is written:]
- Average during modern times
- [Below this an image of Cueball standing on a green field with a city skyline in the background. At the top of the image is written:]
- My neighborhood:
- Cueball: Hi!
- [Below +1 IAU a black arrow points toward this division. Below the arrow is written:]
- Where we'll be in 86 years
- [Below this a white image. At the top of the image is written:]
- My neighborhood:
- [Below this is a very large "?".]
- [Below +2 IAU a black arrow points toward this division. Below the arrow is written:]
- Cretaceous hothouse
- +200m sea level rise
- No glaciers
- Palm trees at the poles
Scary thoughts there... Kynde (talk) 05:11, 9 June 2014 (UTC)
I imagine the Earth's axial tilt wouldn't change even if the temperature changed by +2 IAU. So, would palm trees survive the extreme day/night lengths at the poles? 126.96.36.199 05:31, 9 June 2014 (UTC) P.S. Also, wouldn't the North Pole be underwater, so incapable of supporting palm trees?
Also, regarding the IAU, is it a reference to the IAU that named an asteroid after Randall?
"While it says it's "probably no big deal," this is probably a joke, because even half of an Ice Age would be a lot of ice." The article has it wrong. It's a 2 degree increase, not decrease. Ice would melt. 188.8.131.52 07:33, 9 June 2014 (UTC)
- -- Fixed 184.108.40.206 (talk) (please sign your comments with ~~~~)
To prevent global warming, act yesterday! ... or, well, since we already failed to do it, maybe ... just maybe ... we should invest some resources in ADAPTING to the change. Because the USSR communist party wanted to command "wind and rain", and how did that work out?
... of course, we SHOULD be trying to lower the CO2 emissions ... not like Germany, which replaced its nuclear power plants with coal ones ... -- Hkmaly (talk) 10:03, 9 June 2014 (UTC)
- While it is true that we have build more coal plants, the majority part that replace the nuclear power is from renewable energy, see diagram on wikipedia. --220.127.116.11 15:51, 9 June 2014 (UTC)
- ... note that burning biomass, while renewable, also adds CO2. Not to speak of oil. You shouldn't be closing nuclear plants, you should be closing coal ones if you have excess energy. -- Hkmaly (talk) 10:02, 10 June 2014 (UTC)
- While burning biomass adds CO2, the whole point of "burning a biologically-sourced fuel" like biodiesel is that you are merely returning to the atmosphere CO2 that was sucked out of the atmosphere by the biological material in the first place. So you grow an acre of plant material, and that acre of plant material sucks a certain amount of CO2 out of the atmosphere. When you then burn that plant material, you are releasing that CO2 back in to the atmosphere. Thus it is a "net zero" operation. While yes, it would be better to do a "net negative" operation (plant more plants while NOT releasing ANY CO2,) a net zero operation is still better than what we're doing now - releasing massive amounts of CO2 that have been locked up for geological-scale lengths of time, all in a VERY short timeframe. If you were to replace all work-generation power sources with "net zero" sources like biodiesel production and biomass generation, the levels of CO2 in the atmosphere would stop rising immediately. (Well, once they have reached equilibrium from other sources, anyway.) But of course, the difficulty is growing sufficient biological fuel material fast enough to create enough fuel for our needs. (The famous "it would take more farmland than currently exists on all of planet Earth, all of it dedicated to growing corn, to grow enough corn to make enough corn-derived ethanol to fuel every vehicle on the planet" problem.) So obviously energy efficiency and non-bio-fuel renewable energy methods are also needed. But biofuel (burning biomass, ethanol, biodiesel, etc,) is still a SIGNIFICANT improvement over oil/natural gas/coal. 18.104.22.168 07:31, 20 June 2014 (UTC)
Well, this seems like a topic that could generate heated comments. 22.214.171.124 10:09, 9 June 2014 (UTC)
Would anyone care to comment on the +200 meter sea rise? I googled "how much would sea level rise" a bit, and I seem to bump into 60 to 70 meters repeatedly for all glaciers melting. I found nothing direct from IPCC. I wonder if Randall really has another view on this. 126.96.36.199 (talk) (please sign your comments with ~~~~)
- Cretaceous sea levels are generally accepted to have been 200m above the present level - you have large shallow seas (with geological evidence showing depths of 200m) over many of the continents - e.g. the Eromunga Sea in Australia. This is not from the IPCC, it predates that considerably. 188.8.131.52 11:35, 15 June 2014 (UTC)
- I hope the explanation isn't that he made a meter/feet mistake. 184.108.40.206 13:04, 9 June 2014 (UTC)
- I would assert that he rounded for a clean read for a relative scale. Also, the '+' denotes the likelihood of a larger actual amount. 220.127.116.11 (talk) (please sign your comments with ~~~~)
- 60 meters is indeed the amount the sea would rise if all the glacial ice melted. However, that figure presumably does not take into account how much the sea would rise by expansion due to the increased heat. That is, after all, the main reason for rising sea levels today. So I would guess that the +200 figure is the 60 meters of added water from glacial ice plus the amount it would rise due to warming and expanding. Calebxy (talk)
- While that's possible, and desalination of water can also cause it to expand (sea water is more dense than fresh), we shouldn't try to justify the numbers if they are incorrect. If we can find some reliable data to suggest the rise would be 200 ft instead of 200m, we should include that. Or at least include a range of estimates from reliable sources. 18.104.22.168 15:42, 9 June 2014 (UTC)
- Having just re-read the explanation after posting my comment, I can see that the article attempts to do just that. But the link provided says 110 to 770 mm. Isn't that millimeters? 22.214.171.124 15:44, 9 June 2014 (UTC)
- But the sea level would rise more than 60m if the expansion of the sea is taken into account. If the earth became as hot as the graph indicates, then logically the seas would expand considerably. Calebxy (talk) 16:04, 9 June 2014 (UTC)
- Sometimes metres and centimetres and millimetres might get confounded, I'm thinking the 200m figure from the cretaceous is a joke, because the 2 ice age units increase isn't really part of the serious discussion, right? cretaceous 200m higher sea level rise prediction chart and also the other wiki's section on future sea level . I hope I did this correctly, as don't have an account and haven't done this before. I realize it's a pretty dead thread.-- 126.96.36.199 23:48, 20 November 2017 (UTC)
- Cretaceous sea levels seem to have been that high, but this tends to be attributed to the shape of the ocean basins, in particular the mid-ocean ridges, rather than to the temperature. 188.8.131.52 17:01, 9 June 2014 (UTC)
So sad that Randall is pushing the carbon tax agenda long after the AGW myth has been debunked. IGnatius T Foobar (talk) 16:00, 9 June 2014 (UTC)
- Waitwhat? a) I saw no mention of tax. b) AGW==Anthropogenic Global Warming==debunked? This may not be the place for this whole discussion (despite the relevance), but it's far from debunked. And even if "there was going to be some Global Warming anyway", you can't dismiss the probability that we're adding something to this effect and making it more extreme. If not pushing it over the edge in some way. (I'm actually more optimistic than that, but I do find "it's a myth!" to be annoyingly naive, so excuse me if I try to balance that out. It's really not worth tying this discussion box up in this debate, however.) 184.108.40.206 18:36, 9 June 2014 (UTC)
- I'm not as sure that it isn't worth it. GCC is fact. GW, might be. AGW, that's where we get into the mythical and unproven range, because it's *really hard* to tell the difference between correlation and causation, and because of other problems I wrote below.Seebert (talk) 19:28, 10 June 2014 (UTC)
- Randall is a scientist. He follows scientific consensus. 220.127.116.11 20:03, 9 June 2014 (UTC)
- Randall is a comic artist. While he's a really smart guy, he popularizes science, he doesn't do the experiments himself.Seebert (talk) 19:28, 10 June 2014 (UTC)
- No snark intended here, and I am a non-scientist, so I do not speak from a position of authority. However, I thought (one of the) the point(s) of science was that you don't have to do the science yourself in order to understand and interpret the results. In fact, you can read the reports and conclusions of others in order to draw your own. In law, for example, we follow the cases that have been established in similar situations so that we can advise our clients on the best course (and by best, I mean the course that won't land you in court paying outrageous fees) of action. We don't have to experience it ourselves in order to reach the desired outcome. We can draw analogies from similar fact patterns. Right? Orazor (talk) 09:09, 9 October 2014 (UTC)
- Wrong. Meta analysis, while useful, is not original scientific research. It is the first order derivative of science. Law is art, not science, and is not related to the truth at all. Analogies are not facts, analogies are designed to hide the facts, and therefore, hide the truth.Seebert (talk) 14:32, 22 April 2015 (UTC)
- There is nothing scientific about following consensus. 18.104.22.168 (talk) (please sign your comments with ~~~~)
- Of course there is... When 99% of climatologists are reasonably certain (which means "very very sure" for non-scientists) that there is Global Warning and that the primary cause is us (humanity greenhouse gas emissions), I wouldn't say that AGW has been "debunked" and that there is nothing scientific in following this consensus (after having made sure of its existence by reading diverse peer-reviewed studies of the field) ! You may have an agenda to defend but could you at least try to make some sense, please. Note that this doesn't mean that the current political propositions are the right way to go about it and that this comic doesn't say anything about that. Jedaï (talk) 21:47, 9 June 2014 (UTC)
- And this is why climatologists playing with models instead of actually examining data from the real world, aren't scientists. It's possible to get so addicted to your models, that you fail to realize that you've fallen into confirmation bias. And consensus, also known as mob-based peer pressure, is only as smart as the lowest IQ in the mob. Which is why climatologists, attempting to top each other's predictions, have a tendency to fall for worst case scenarios, such as Randall's scenario above.Seebert (talk) 02:42, 10 June 2014 (UTC)
- There really ISN'T anything scientific about following consensus. Correlation is not causation. The 99% figure will be scientifically relevant if it is produced by every scientist independently proving it, not by consensus. And even then ... 100% of scientists once thought time was the same everywhere ... then Einstein came with a theory and models ... and THEN the models were verified. By Sir Arthur Eddington four years later. THAT made Einstein famous. We don't really have the same kind of proof for AGW. We have a lot of data which has been tampered with or cherry-picked; even the scientists can't be sure what to believe. What we DO have proof for is that climate is changing (although some of those changes are a LOWERING of temperature).
- And about the political propositions ... most of them fail to reduce the greenhouse gas emissions itself, not speaking about global temperature - but their economic effect would be huge. -- Hkmaly (talk) 10:02, 10 June 2014 (UTC)
- Where is he speaking about carbon tax? "Acting now" does not equal one possible instrument. There are plenty of ways for climate change mitigation.--Ojdo (talk) 07:55, 11 June 2014 (UTC)
I *think* (haven't confirmed) that the 200 m figure is the difference between the peak of the last ice age (sea level low—"-1 IAU" in the strip) and if everything melted. We've already come up 140 m, so we can't go up 200 m from here. 22.214.171.124 20:16, 9 June 2014 (UTC)
There are several troubling things with this comic (including the sea level figure), but the most basic is the opening statement: "Without prompt, aggressive limits on CO2 emissions, the Earth will likely warm by an average of 4°-5°C by the century’s end." This is probably from the latest IPCC report, but it takes the worst of several proposed scenarios, and claims it to be the likely one. RCP8.5 projects 2.6C-4.8C, and I suppose that's what getting averaged *up* to "4.5C" for the temperature line in the comic. The second most troubling thing is that mouse-over text, regarding the 2C lid if we "enact aggressive emissions limits now"—this is an entirely arbitrary (unscientific) number based on largely unspecified changes to what the world is doing now. It gives me the sense that Randall didn't look too deep... 126.96.36.199 20:43, 9 June 2014 (UTC)
According to Wikipedia, the polar forests during the Cretaceous period were temperate, not tropical. Thus firs in the north and evergreens in Antarctica, not palm trees. http://en.wikipedia.org/wiki/Polar_forests_of_the_Cretaceous Seebert (talk) 21:17, 9 June 2014 (UTC)
Oh wait, did he really say "Palm trees at the poles"? The north pole is already 4,261 meters under water. The nearest land is 700 km away. 188.8.131.52 05:14, 10 June 2014 (UTC)
- It's hyperbole. 184.108.40.206 05:46, 10 June 2014 (UTC)
- Not completely. It's referring to a specific time, the Cretaceous period, when there were forests above 85 degrees latitude at both poles. The forests were temperate though, so palm trees are hyperbole. 220.127.116.11 12:18, 10 June 2014 (UTC)
- No, it's not hyperbole at all, actually there were tropical-climate trees in polar latitudes in the northern hemisphere during parts of the Cretaceous. 18.104.22.168 11:26, 15 June 2014 (UTC)
- Citation please- everything I could find was Temperate Rain Forests (kind of like still exist in Washington State and British Columbia).Seebert (talk) 12:28, 16 June 2014 (UTC)
Independent of everything else, I'm having a tough time reconciling the fact that sea level was apparently 6m or more higher during the Roman era, e.g. the Roman settlements and their harbors in places like Caister and Burgh Castle in Norfolk, England. I'm not aware that England has risen 6m. Seems to me that if sea levels were to rise as much as 6m we'd just be back to where things were 1600-1700 years ago. 22.214.171.124 (talk) (please sign your comments with ~~~~)
- I'd like to research that, so [needs citation]Seebert (talk) 17:22, 11 June 2014 (UTC)
- Things can be complicated by the likes of 'rebound' of the local area of the Earth's crust after the removal of the weight of glacial ice from various landmasses (although I'm not sure whether that was still producing such measureable effects to those particular locations in Roman times) and other effects. 126.96.36.199 11:07, 12 June 2014 (UTC)
- 1600-1700 years ago there were 6+ billion fewer people (a large proportion with dwellings near shorelines, or economically dependent on them somehow) on the planet! 188.8.131.52 11:38, 15 June 2014 (UTC)
According to the Scientific Forecasts from 1986, this should have had already happened by the year 2000: http://www.nytimes.com/1986/06/24/movies/earth-s-climatic-crisis-examined-by-nova.html 184.108.40.206 01:18, 28 June 2014 (UTC)
- That link is basically a TV Guide listing for a rerun of a NOVA program which was filmed in 1983. The listing was written by a movie critic who presumably watched the program but may not have quoted it correctly. Anyway, that's popular media, not real science. If you want real science, look at peer-reviewed scientific journals. In the 30+ years since that program was filmed, we have gathered a LOT more data. It's not surprising that our understanding of what's going on is more complete now than it was in 1983. That's how science works. The more data you gather, the more accurate your predictions become (hence older predictions were generally less accurate).220.127.116.11 18:53, 23 February 2015 (UTC)
Since I used to live next to Burgh Castle, can I point out that the castle is indeed now about 6m higher than the current estuary level. The nearby town of Great Yarmouth is built on land that first appeared above the waves around 1100AD. In Roman times it was possible to sail from Burgh Castle to the castle at Caistor - that's why they were built, to defend the mouth of the estuary between them. If you look at a map, very roughly all the green was under water circa 300AD --18.104.22.168 19:04, 1 November 2014 (UTC)
- All the angry people who like to shout "AGW has been debunked! The models aren't exact! I have a fantasy that because I'm the smartest person in the world I will be rich by 30 and therefore I hate anything related to taxes! I've been trained to growl at liberals!" What do they want to do? Even if they're right that the changes would have happened even without humanity, and that the effects will be more chaotic and less straightforward, and that the 25-year projection will really take 45 years, and so on... Does that mean we should just gleefully accept all the changes? I realize that San Francisco, New York, Brussels, Amsterdam, and Stockholm all being underwater sounds like fun to a right-wing partisan--no more hippies, no more "liberal media", no more UN and EU, no more wildly successful social democracies that disprove all of their economic theories, etc.--but don't they care that most of the world's financial and knowledge industries, every conservative think-tank, and most of Rupert Murdoch's houses will also be gone? 22.214.171.124 22:01, 24 September 2015 (UTC)
This image is fun to cite!
Munroe, Randall. The Good News Is That According to the Latest IPCC Report, If We Enact Aggressive Emissions Limits Now, We Could Hold the Warming to 2°C. That's Only HALF an Ice Age Unit, Which Is Probably No Big Deal. Digital image. Xkcd.com. Xkcd, 9 June 2014. Web. 8 Dec. 2015. <http://xkcd.com/1379/>.
Not too sure what any teachers will think of that.
(126.96.36.199 00:06, 9 December 2015 (UTC))
"Sea level was higher during most of the Cretaceous than at any other time in Earth history, and it was a major factor influencing the paleogeography of the period. In general, world oceans were about 100 to 200 metres (330 to 660 feet) higher in the Early Cretaceous and roughly 200 to 250 metres (660 to 820 feet) higher in the Late Cretaceous than at present. The high Cretaceous sea level is thought to have been primarily the result of water in the ocean basins being displaced by the enlargement of midoceanic ridges." https://www.britannica.com/science/Cretaceous-Period Jubal Harshaw (talk) 03:35, 14 August 2016 (UTC) | <urn:uuid:9c9b990f-10b5-425c-b043-5dcfbdc843da> | 4.125 | 6,289 | Comment Section | Science & Tech. | 73.277801 | 95,563,164 |
An app or online product can be built in a huge variety of ways, using different languages, methodologies and technologies.
However, there are three features which you’ll need to incorporate into almost any digital product – a database, some software and a front end. The different parts of an application are often referred to as the 'development stack' or just 'stack'.
We’ve outlined these below to give you some areas to research if you’re starting out learning to build apps yourself or some topics and questions to discuss with any technical staff you work with.
If your app needs to store information then you’ll need some kind of database. Database design is very important as it can have a large impact on how scalable your product is (which roughly means how many users can use your app at the same time).
You’ll need to decide on the type of database, what database management system (DBMS) you use and where and how it is hosted.
Typically databases fall into two categories: relational and non-relational.
Relational databases resemble tables with rows and columns and are the most commonly used type. They are usually accessed with a SQL-based DBMS like MySQL or SQL Server.
Non-relational, usually called ‘NoSQL’, databases store data in a variety of different ways. Often data is stored on documents with their own internal structure, allowing them to be more flexible regarding the needs of your app. Popular examples of NoSQL DBMSs include MongoDB and CouchDB but it is a growing area and there are many options to look into.
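To make the distinction concrete, here is a minimal sketch (not from the article) contrasting the two styles, using Python's built-in sqlite3 module to stand in for a relational DBMS and plain dictionaries to stand in for documents:

```python
import sqlite3

# Relational style: rows in a fixed-schema table, queried with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Ada", "ada@example.com"))
row = conn.execute("SELECT name, email FROM users WHERE id = 1").fetchone()

# Document style: each record is a self-describing document whose fields
# can vary from record to record (the idea behind MongoDB/CouchDB).
documents = [
    {"name": "Ada", "email": "ada@example.com"},
    {"name": "Grace", "email": "grace@example.com", "roles": ["admin"]},
]
admins = [d["name"] for d in documents if "admin" in d.get("roles", [])]

print(row)     # ('Ada', 'ada@example.com')
print(admins)  # ['Grace']
```

Note how the second document carries an extra field the first lacks; a relational table would need a schema change (or a join table) to hold it.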
In terms of hosting, the main decision is whether to store all your data on one server or to have a distributed (‘cloud’) database which allows data to be broken up and stored wherever is most convenient. Issues like security and reliability are things to consider when choosing a host.
There are pros and cons to any database setup so spend some time researching or discussing what might be the best option for you.
This comprises the core logic of your product. The software of your application (which along with the database is sometimes referred to as the ‘back end’) is the bridge between your users and the database, responding to their queries by selecting, manipulating or storing data.
In a web-based application software runs on a server rather than on a user’s (‘client’s’) computer. This means that, like a database, your software code will also need to be hosted if you are creating an application that uses the web.
Software can be written in any of a number of programming languages. Common choices include Ruby, PHP, Python, Java and C#.
Each language will usually have its own choice of application frameworks. A framework generates the basic structure of an application and provides you with a series of readymade commands to do common functions like accessing data and generating web pages. The Rails framework for Ruby and .NET for C# are good examples of this. There’s no obligation to use a framework but they can make it a lot easier to get an application up and running quickly whilst also reducing the amount of repetitive (‘boilerplate’) code that you need to write.
For most languages, there is a choice of Integrated Development Environments (IDEs) to download. An IDE is essentially a very advanced text editor which is designed specifically to make it easy to write and manage software in a specific language or group of languages. Some of them, like Eclipse for Java-based languages, can build your software and can be integrated with a server and testing packages so you can do all your development, testing and previewing in one place.
However like with your choice of database, there is absolutely no right or wrong option; it’s important to choose something that works for both the developer and the product.
The front end (sometimes called the ‘view’ or ‘user interface’) is the part of your product that your users interact with. For a web based application the front end is a web site accessed via a browser.
Typically a web page is made up of:
- an HTML (Hypertext Markup Language) document which gives it content and underlying structure
- one or more CSS (Cascading Style Sheet) files which provide the style features like colours, fonts, sizes and layout
- one or more JavaScript files which add behaviour and interactivity
These three types of file are combined and then displayed as a web page by a browser. Most browsers also allow you to view the raw files which can be a good way of learning how the page was put together.
For other types of application (phone apps and desktop software) the front end is often written in the same language as the software itself.
It is this part of an application where you need to consider exactly how your users will be accessing your product. If it is available on the internet and they might be using a number of different screen sizes then you might want to think about the front end concepts of progressive enhancement and responsive design. If it is an app exclusively for touch screen devices (an iOS app for example) then it would be worth putting more effort into designing for touch and thinking about different gestures and phone and tablet only features like tilting.
It is typical to create prototypes of your user interface in order to test how well they function before or alongside creating the application itself.
It is worth putting a lot of effort into the front end. Both the visual design and the usability as a whole are important to focus on. When making the front end, keep in mind the concepts of user experience (UX), user interface design and accessibility. | <urn:uuid:4ef7663c-af40-49ef-8b14-0dd5b042a571> | 3.109375 | 1,163 | Tutorial | Software Dev. | 40.694269 | 95,563,176 |
Sunshine duration or sunshine hours is a climatological indicator, measuring the duration of sunshine in a given period (usually, a day or a year) for a given location on Earth, typically expressed as an averaged value over several years. It is a general indicator of cloudiness of a location, and thus differs from insolation, which measures the total energy delivered by sunlight over a given period.
Sunshine duration is usually expressed in hours per year, or in (average) hours per day. The first measure indicates the general sunniness of a location compared with other places, while the latter allows for comparison of sunshine in various seasons in the same location. Another often-used measure is the percentage ratio of recorded bright sunshine duration to daylight duration in the observed period.
An important use of sunshine duration data is to characterize the climate of sites, especially of health resorts. This also takes into account the psychological effect of strong solar light on human well-being. It is often used to promote tourist destinations.
If the Sun were to be above the horizon 50% of the time for a standard year consisting of 8,760 hours, apparent maximal daytime duration would be 4,380 hours for any point on Earth. However, there are physical and astronomical effects that change that picture. Namely, atmospheric refraction allows the Sun to be still visible even when it physically sets below the horizon. For that reason, average daytime (disregarding cloud effects) is longest in polar areas, where the apparent Sun spends the most time around the horizon. Places on the Arctic Circle have the longest total annual daytime, 4,647 hours, while the North Pole receives 4,575. Because of the elliptical nature of the Earth's orbit, the Southern Hemisphere is not symmetrical with the Northern: the Antarctic Circle, with 4,530 hours of daylight, receives five days less of sunshine than its antipodes. The Equator has a total daytime of 4,422 hours per year.
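The daytime figures above follow from the geometry of the Sun's position. As a rough illustration, the standard sunrise equation gives the daylight duration for a given latitude and solar declination; this sketch (not from the article) ignores the atmospheric refraction discussed above, which slightly lengthens the apparent day:

```python
import math

def day_length_hours(latitude_deg, solar_declination_deg):
    """Approximate daylight duration from the standard sunrise equation,
    ignoring atmospheric refraction and the Sun's angular size."""
    phi = math.radians(latitude_deg)
    delta = math.radians(solar_declination_deg)
    # cos(hour angle at sunset) = -tan(latitude) * tan(declination)
    x = -math.tan(phi) * math.tan(delta)
    if x <= -1.0:
        return 24.0   # polar day: the Sun never sets
    if x >= 1.0:
        return 0.0    # polar night: the Sun never rises
    omega0 = math.degrees(math.acos(x))  # sunset hour angle in degrees
    return 2.0 * omega0 / 15.0           # 15 degrees of hour angle per hour

print(day_length_hours(0.0, 23.44))   # equator at June solstice -> 12.0
print(day_length_hours(70.0, 23.44))  # above the Arctic Circle in June -> 24.0
```

At the equator the day is always close to 12 hours regardless of season, consistent with the 4,422-hour annual total quoted above.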
Given the theoretical maximum of daytime duration for a given location, there is also a practical consideration at which point the amount of daylight is sufficient to be treated as a "sunshine hour". "Bright" sunshine hours represent the total hours when the sunlight is stronger than a specified threshold, as opposed to just "visible" hours. "Visible" sunshine, for example, occurs around sunrise and sunset, but is not strong enough to excite the sensor. Measurement is performed by instruments called sunshine recorders. For the specific purpose of sunshine duration recording, Campbell–Stokes recorders are used, which use a spherical glass lens to focus the sun rays on a specially designed tape. When the intensity exceeds a pre-determined threshold, the tape burns. The total length of the burn trace is proportional to the number of bright hours. Another type of recorder is the Jordan sunshine recorder. Newer, electronic recorders have more stable sensitivity than that of the paper tape.
In order to harmonize the data measured worldwide, in 1962 the World Meteorological Organization (WMO) defined a standardized design of the Campbell–Stokes recorder, called an Interim Reference Sunshine Recorder (IRSR). In 2003, the sunshine duration was finally defined as the period during which direct solar irradiance exceeds a threshold value of 120 W/m².
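Given that WMO definition, an electronic recorder's output can be reduced to bright-sunshine hours by simple thresholding. A hypothetical sketch (the sampling interval and the irradiance record are invented for illustration):

```python
def sunshine_hours(irradiance_w_m2, sample_interval_minutes=1):
    """Total 'bright sunshine' duration: time during which direct solar
    irradiance exceeds the 120 W/m^2 threshold of the 2003 WMO definition."""
    threshold = 120.0  # W/m^2
    bright = sum(1 for s in irradiance_w_m2 if s > threshold)
    return bright * sample_interval_minutes / 60.0

# Hypothetical one-hour record sampled once per minute: 45 bright minutes.
record = [500.0] * 45 + [80.0] * 15
print(sunshine_hours(record))  # 0.75
```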
Sunshine duration follows a general geographic pattern: subtropical latitudes (about 25° to 40° north/south) have the highest sunshine values, because these are the locations of the eastern sides of the subtropical high pressure systems, associated with the large-scale descent of air from the upper-level tropopause. Many of the world's driest climates are found adjacent to the eastern sides of the subtropical highs, which create stable atmospheric conditions, little convective overturning, and little moisture and cloud cover. Desert regions, with nearly constant high pressure aloft and rare condensation—like North Africa, the Southwestern United States, Western Australia, and the Middle East—are examples of hot, sunny, dry climates where sunshine duration values are very high.
The two major areas with the highest sunshine duration, measured as annual average, are the central and the eastern Sahara Desert—covering vast, mainly desert countries such as Egypt, Sudan, Libya, Chad, and Niger—and the Southwestern United States (Arizona, California, Nevada). The city claiming the official title of the sunniest in the world is Yuma, Arizona, with over 4,000 hours (about 91% of daylight time) of bright sunshine annually, but many climatological books suggest there may be sunnier areas in North Africa. In the belt encompassing northern Chad and the Tibesti Mountains, northern Sudan, southern Libya, and Upper Egypt, annual sunshine duration is estimated at over 4,000 hours. There is also a smaller, isolated area of sunshine maximum in the heart of the western section of the Sahara Desert around the Eglab Massif and the Erg Chech, along the borders of Algeria, Mauritania, and Mali where the 4,000-hour mark is exceeded, too. Some places in the interior of the Arabian Peninsula receive 3,600–3,800 hours of bright sunshine annually. The largest sun-baked region in the world (over 3,000 hours of yearly sunshine) is North Africa. The sunniest month in the world is December in Eastern Antarctica, with almost 23 hours of bright sun daily.
Conversely, higher latitudes (above 50° north/south) lying in stormy westerlies have much cloudier, more unstable and rainier weather, and often have the lowest values of sunshine duration annually. Temperate oceanic climates like those in northwestern Europe, the western coast of Canada, and areas of New Zealand's South Island are examples of cool, cloudy, wet, humid climates where sunshine duration values are very low. The areas with the lowest sunshine duration annually lie mostly over the polar oceans, as well as parts of northern Europe, southern Alaska, northern Russia, and areas near the Sea of Okhotsk. The cloudiest place in the United States is Cold Bay, Alaska, with an average of 304 days of heavy overcast (covering over 3/4 of the sky). In addition to these polar oceanic climates, certain low-latitude basins enclosed by mountains, like the Sichuan and Taipei Basins, can have sunshine duration as low as 1,000 hours per year, as cool air consistently sinks to form fogs that winds cannot dissipate. Tórshavn in the Faroe Islands is among the cloudiest places in the world, with only 840 sunshine hours per year.
The surface of our planet moves in all sorts of ways. Erosion is one cause; landslides, as we’ve recently experienced, are another. Something most of us don’t think about all too often—except for those in seismically active areas—is the movement of the tectonic plates.
The Great Rift Valley runs some 3,700 miles, from Lebanon in the north all the way down to Mozambique, roughly following the path of the Red Sea. The part commonly known today as the East African Rift extends more than 1,800 miles from the Gulf of Aden down through Ethiopia, Kenya, and Tanzania. As you might have seen in various articles and photos in the popular press, a huge crack is appearing in southwestern Kenya, part of it just 30 miles from Nairobi, so far causing part of a highway to collapse and fueling lots of speculation about the future shape of the continent. It was originally thought to be a dramatic expansion of the existing rift.
The movement of the tectonic plates created the Red Sea millions of years ago, as the African and Arabian plates moved apart. (They’re still moving about a centimeter each year.) Scientists say these same phenomena—the movement and rupture of plates—will, in a few million years, create a new sea in eastern Africa, along with a large new island offshore.
Although it’s generally accepted that this scenario will play out as predicted, some scientists think the sudden appearance of the crack, and several smaller ones nearby, is due at least in part to erosion rather than to tectonic movement. The main crack, which in most places is several yards wide and about as deep, is not continuous and the sides do not fit together like puzzle pieces—evidence, according to some, that it’s more likely caused by erosion of soft volcanic ash and sediments being transported by recent heavy rains. Several smaller cracks have also appeared in the vicinity of the main one.
You can see more photos of the fissure and an artist’s rendering of what the new island might look like here. | <urn:uuid:f6fc9cdf-7c0f-420f-af08-4b4a31c4e142> | 3.59375 | 507 | Knowledge Article | Science & Tech. | 48.009989 | 95,563,190 |
KISSIMMEE, Fla. — A black hole's epic "burp" may help solve one of the deep mysteries of the galactic core.
The dust-filled expanses of spiral galaxies like the Milky Way are bursting with star formation — the dustier the area is, the more likely it is for new stars to form there. But astronomers have found that stars rarely form in the center of a galaxy, where a supermassive black hole often rests, and researchers don't know why. Smaller, elliptical galaxies also show little star formation.
A black hole relatively close to the Milky Way — a mere 26 million light-years from Earth — has shown evidence of a huge X-ray blast outward that may have "snowplowed," or swept away, nearby star-forming dust.
"This is the best example of snowplowed material I've ever seen," Eric Schlegel, lead author on the new study and researcher at the University of Texas at San Antonio, said in a press conference on Jan. 5 at the American Astronomical Society's winter meeting in Kissimmee, Florida. "This is clearly a way of ejecting gas from a galaxy."
"For an analogy, astronomers often refer to black holes as 'eating' stars and gas," Schlegel added in a statement. "Apparently, black holes can also burp after their meal."
Schlegel's team analyzed data from the orbiting Chandra X-ray Observatory to investigate the dwarf galaxy NGC 5195, which is in the process of merging with the flashier Whirlpool galaxy. The researchers observed two arcs of X-rays near the center of the dwarf galaxy, which appear to be the remnants of two huge blasts outward from the black hole. Plus, the view from an optical telescope revealed a region of cooler hydrogen gas just past the X-ray arcs, which suggests the blasts pushed dust outward.
Schlegel said that dust and gas from the galaxy's collision with the Whirlpool galaxy could have slingshotted around the black hole, but that he finds that unlikely. More probably, the black hole actually reacted to the large additional quantities of dust pushed into its path, resulting in the "burp," he said. The inner X-ray arc may have taken about 1 million to 3 million years to expand to its current position, and the outer one 3 million to 6 million years, officials said in the statement.
There's much more to explore about this system, Schlegel noted. "How that reaction goes is very unclear, but I think it's clearly something that's worthy of study at other wavelengths," he said at the conference, "in addition to using simulations to try to [replicate] it."
The researchers said they suspect this type of reaction might have been much more common in the early universe, when galaxies were more densely packed together, but this "nearby" galaxy exhibiting the behavior is an exciting opportunity to see it happening up close, with less distortion. The effect can reach further out than the dramatic winds that push material away from a black hole, and it is an interesting new example of how a supermassive black hole's activity can shape a galaxy, a process called feedback, the researchers said.
The scientists also found something strange along the border of the blast: Although the "burp" may have swept dust-forming gas away from the center of the galaxy, enough was pushed up together outside the outer arc to form new stars, as well.
"We think that feedback keeps galaxies from becoming too large," co-author Marie Machacek, a researcher at the Harvard-Smithsonian Center for Astrophysics, in Massachusetts, said in the statement. "But at the same time, it can be responsible for how some stars form. This shows that black holes can create, not just destroy."
These results have been submitted to The Astrophysical Journal.
This article originally published at Space.com here | <urn:uuid:4f605a03-4d9d-4af6-8387-638664d8c172> | 3.453125 | 865 | Truncated | Science & Tech. | 46.281122 | 95,563,204 |
Excitation in the Early Solar Nebula — New Experimental Findings
Inferences about the formation of primordial matter in our solar system rest on analysis of the earliest preserved materials in meteorites, of the structure of the solar system today, and of matter in evolving stellar systems elsewhere.
The isotope distribution in meteorites suggests that molecular excitation processes similar to those observed today in circumstellar regions and dark interstellar clouds were operating in the early solar nebula. Laboratory model experiments together with these observations give evidence on the thermal state of the source medium from which refractory meteoritic dust formed. They indicate that resonance excitation of the broad isotopic bands of molecules such as 12C16O, MgO, O2, AlO and OH by strong UV line sources such as H Lyα, Mg II, Hβ and Ca II may induce selective reactions resulting in the anomalous isotopic composition of oxygen and possibly other elements in refractory oxide condensates in meteorites.
The temperature of the grains condensing from this medium can be determined from the interdiffusion of elements between metal grains in contact with each other; the results of such analyses illustrate the large temperature differential between condensing dust and the surrounding source plasma. The metal diffusion couples mostly consist of platinum or platinum metal alloys in contact with nickel iron, encased in refractory oxide grains. These consist of minerals such as magnesium aluminate (spinel) and calcium aluminum silicates (melilite and pyroxene). The metal interdiffusion shows that they have formed at temperatures ≤ 1000 K; this is less than or about one half of the temperature surmised from consideration of thermodynamic rather than thermal radiation equilibrium.
Keywords: Isotope Fractionation, Interdiffusion Coefficient, Source Medium, Nickel Iron, Space Medium
Gamma-ray spectroscopy is a useful tool for analysing the isotopic composition of a radioactive material. Photon energies in gamma-ray spectroscopy are typically on the order of 10-1000 keV, introducing a collection of interactions not observed in traditional optical spectroscopy. These interactions include the degree of penetration through a medium of a specific composition (lead, for example) and the energy dependence of such interactions.
Gamma-ray sources in most cases are the result of a parent nucleus decaying via alpha particle emission, resulting in a daughter nucleus which is in an excited state. The nucleus transitions to the ground-state configuration and releases the energy difference as a photon. The shell model of the nucleus best explains this effect, whereby a nucleon is found in a higher shell upon decay and transitions to the lowest unoccupied vacancy (Front Matter). Figure 1 outlines this process.
As the gamma-rays are not charged particles, they cannot be detected directly, although interactions with matter can produce measurable effects such as the Photoelectric Effect and Compton Scattering. Measuring the intensity of gamma-ray emissions from a source can be completed using the apparatus shown in Figure 2. This setup involves placing a source under a photomultiplier tube (PMT) with a scintillating crystal. Gamma-rays from the source enter the scintillator, where an interaction causes an electron to be ejected into the PMT. The electron is accelerated through several dynodes to a collector plate, with each successive dynode producing greater numbers of electrons of a specific energy. The resultant voltage/current pulse recorded at the detector is proportional to the energy of the incident gamma-ray.
The pulses are shaped through an amplifier, where the height of each peak correlates to the energy of the gamma-ray. This requires the amplifier output to be analysed in steps, or small windows, of energy. A discriminator circuit can be used to set the range of observed energy. Each step is referred to as a channel, in which pulses of a given energy (pulse height) are recorded. The range of energy 'viewed' in a channel, defined as ΔE, can be varied by changing the PMT voltage or the gain of the amplifier. Therefore, sweeping over a range of channels will detect a range of gamma-ray energies from the sample. This can be accomplished using a multi-channel analyser (MCA). Single-photon counting is obtained using this method, hence values are reported as numbers of counts per channel. This obtains a spectrum of energy values which must be calibrated using known values of spectrum features.
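As a sketch of that final calibration step, a simple two-point linear fit maps channel number to energy. The channel positions below are hypothetical; only the two Cs-137 line energies (the 31 keV X-ray region and the 662 keV photopeak) are taken as known:

```python
def energy_calibration(c1, e1, c2, e2):
    """Two-point linear channel->energy calibration: E = gain*channel + offset."""
    gain = (e2 - e1) / (c2 - c1)
    offset = e1 - gain * c1
    return lambda channel: gain * channel + offset

# Hypothetical peak positions: the 31 keV and 662 keV Cs-137 lines
# observed at channels 48 and 1024 of the MCA.
to_energy = energy_calibration(48, 31.0, 1024, 662.0)
print(round(to_energy(536), 1))  # energy (keV) of a mid-spectrum channel -> 346.5
```

In practice more than two peaks would be fitted (and the response may be slightly non-linear), but the idea is the same.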
Using the above-described method, the resolution of the setup depends on several factors, such as the voltage of the PMT, the number of channels used, the amplifier gain and the type of scintillator crystal used. Herein, the resolution is assessed using the variables which give the greatest control: the PMT voltage and the amplifier gain.
The resolution of the scintillator was assessed by varying the voltage of the PMT and adjusting the amplifier gain setting. A sodium iodide crystal was used for the scintillator, whilst the MCA was set to 1024 channels and kept constant for all measurements. When assessing the resolution, the 31 keV peak from Cs137 was chosen as its shift was observable over a wider range of parameter combinations. The full-width at half-maximum (FWHM) was recorded for each peak shift, allowing the resolution to be calculated as FWHM (keV) / 31 keV.
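This figure of merit can be computed directly. In the example below the 4 keV FWHM is an invented value, and the Gaussian FWHM-sigma relation is a standard result rather than something taken from this report:

```python
import math

def resolution_percent(fwhm_kev, peak_energy_kev=31.0):
    """Fractional energy resolution R = FWHM / E_peak, as a percentage."""
    return 100.0 * fwhm_kev / peak_energy_kev

# For a Gaussian peak, FWHM = 2*sqrt(2*ln 2)*sigma ~= 2.355*sigma.
def fwhm_from_sigma(sigma_kev):
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_kev

print(round(resolution_percent(4.0), 1))  # hypothetical 4 keV FWHM -> 12.9 (%)
print(round(fwhm_from_sigma(1.0), 3))     # 2.355
```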
The coarse gain setting was varied from 1 to 200, with the voltage held constant. Table 1 shows the combination of parameters used for this section.
The voltage was set as low as possible to observe distinguishable peaks in the Cs137 spectrum, with a gain of 60. The voltage was then increased in 100V steps up to the PMT's limit of 1200V. Table 2 shows the combination of parameters used for this section.
The spectrum of available sources was recorded using the highest gain setting with a mid-range voltage (700-900V). Each sample was recorded for 120 seconds using 1024 channels. The results from this section provided a set of peaks over a range of energies that were used for attenuation measurements.
Measurement of the attenuation coefficient for a series of materials of differing density was performed over a range of gamma-ray energies. The materials analysed were lead, polyethylene (PE), aluminium and steel. Lead and PE discs of differing thicknesses were used, whilst sheets of steel and aluminium were stacked to record the attenuation at a variety of thicknesses. This process was repeated for each source. Table 3 shows the corresponding energies and sources that were used in this study. The mass attenuation coefficient was also calculated using density values provided by NIST. Each sample was recorded for 60 seconds using the parameters specific to the source; the resultant parameters are outlined in the results section. Samples were also subject to the Grubbs test prior to processing (at g < 0.05).
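The attenuation coefficient itself follows from the Beer-Lambert law, I(x) = I0·exp(-μx). A sketch of the straight-line fit of ln(I) against thickness, checked here on synthetic data (the μ value is invented; the report's actual data are in Figures 4-7):

```python
import math

def attenuation_coefficient(thicknesses_cm, counts):
    """Least-squares estimate of the linear attenuation coefficient mu from
    I(x) = I0 * exp(-mu * x), i.e. a straight-line fit of ln(I) against x."""
    n = len(thicknesses_cm)
    ys = [math.log(c) for c in counts]
    mean_x = sum(thicknesses_cm) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(thicknesses_cm, ys))
    den = sum((x - mean_x) ** 2 for x in thicknesses_cm)
    return -num / den  # mu in cm^-1; dividing by density gives mu/rho

# Synthetic check: counts generated with mu = 1.2 cm^-1 are recovered.
mu_true, i0 = 1.2, 10000.0
x = [0.0, 0.5, 1.0, 1.5, 2.0]
i = [i0 * math.exp(-mu_true * t) for t in x]
print(round(attenuation_coefficient(x, i), 3))  # 1.2
```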
Increasing the gain resulted in a linear increase of peak position. The resolution is observed to increase quadratically with gain. These relationships are outlined in Figure 2.
Voltage caused an exponential shift in peak position, whilst the resolution was recorded to increase in a linear manner with respect to PMT voltage. Figure 3 shows this trend.
The complete spectrum for each isotope can be found in Appendix A of this report. The interpretation of each spectrum is as follows.
The attenuation coefficients of each material with respect to the gamma-ray energy is shown in Figures 4-7 for lead, steel, aluminium and polyethylene. Reference data for each material was also plotted (Kumar 1997). Some measurements did not provide reliable values as the source was either completely attenuated or not affected by the thickness of material. These cases were disregarded to maintain a result which coincided with recordings of attenuated gamma-rays from the source. | <urn:uuid:5241121e-c0dd-469a-9924-c82b4636277e> | 3.71875 | 1,229 | Academic Writing | Science & Tech. | 39.717054 | 95,563,237 |
While it is known that Euclid’s five axioms include the proposition that a line consists of at least two points, modern geometry consistently avoids any discussion of the precise definition of point, line, etc. It is our aim to clarify one of the notorious questions in Euclidean geometry, namely: how many points are there in a line segment? We approach this from a discrete-cellular space (DCS) viewpoint. In retrospect, it may offer an alternative route to quantum gravity, i.e. by exploring discrete gravitational theories. To elucidate our propositions, in the last section we discuss some implications of the discrete cellular-space model in several areas of interest: (a) cell biology, (b) cellular computing, (c) Maxwell equations, (d) low-energy fusion, and (e) cosmology modelling.
Comments: 15 Pages. This paper has been submitted to JOURNAL OF PURE AND APPLIED MATHEMATICS. Your comments are welcome
Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website.
Authors: Frank H. Makinson
Cycle One is a concept that identifies a universal method for specifying the frequency of electromagnetic waves without creating a starting point of 1 Hz. The Cycle One concept starts at the point where the velocity of light has the same numeric value as a frequency. Interest in the Cycle One concept began when a geometric relationship was identified between wavelength and frequency. The frequency value is not arbitrary: it was mathematically predicted in 1944, and in 1951 it was detected coming from space. The triangle-pair wavelength-frequency relationship would not have been recognized until after the 1951 radio-astronomy discovery. The triangle-pair relationship establishes a universal unit of length and a time duration, and allows a unit of energy to be defined.
Comments: 7 Pages.
Species traits and environmental conditions govern the relationship between biodiversity effects across trophic levels
Changing environments can have divergent effects on biodiversity–ecosystem function relationships at alternating trophic levels. Freshwater mussels fertilize stream foodwebs through nutrient excretion, and mussel species-specific excretion rates depend on environmental conditions. We asked how differences in mussel diversity in varying environments influence the dynamics between primary producers and consumers. We conducted field experiments manipulating mussel richness under summer (low flow, high temperature) and fall (moderate flow and temperature) conditions, measured nutrient limitation, algal biomass and grazing chironomid abundance, and analyzed the data with non-transgressive overyielding and tripartite biodiversity partitioning analyses. Algal biomass and chironomid abundance were best explained by trait-independent complementarity among mussel species, but the relationship between biodiversity effects across trophic levels (algae and grazers) depended on seasonal differences in mussel species’ trait expression (nutrient excretion and activity level). Both species identity and overall diversity effects were related to the magnitude of nutrient limitation. Our results demonstrate that biodiversity of a resource-provisioning (nutrients and habitat) group of species influences foodweb dynamics and that understanding species traits and environmental context are important for interpreting biodiversity experiments.
Keywords: Biodiversity partitioning · Complementarity · Ecosystem function · Environmental context · Freshwater · Mollusk · Nutrient limitation · Species traits · Trophic level
We thank T. Garrett for allowing access to the field site, R. Deaton, S. and B. Dengler, D. Fenolio, S. Frazier, P. Jeyasingh, M. Jones, S. Jones, F. March, K. Reagan, R. Remington, and E. Webber for field and/or laboratory assistance, and D. Allen for comments on the manuscript. This study was funded by the National Science Foundation (DEB-0211010) and is a contribution to the program of the Oklahoma Biological Survey.
Ben’s class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see?
Got It game for an adult and child. How can you play so that you know you will always win?
In this problem we are looking at sets of parallel sticks that cross each other. What is the least number of crossings you can make? And the greatest?
This task follows on from Build it Up and takes the ideas into three dimensions!
Try adding together the dates of all the days in one week. Now multiply the first date by 7 and add 21. Can you explain what happens?
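A quick check of the pattern behind this puzzle, based on the fact that a week of consecutive dates d, d+1, ..., d+6 sums to 7d + 21:

```python
# Why the two calculations agree: d + (d+1) + ... + (d+6) = 7d + 21.
for d in range(1, 23):                 # any first date leaving room for a full week
    week = [d + k for k in range(7)]
    assert sum(week) == 7 * d + 21
```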
Watch this animation. What do you notice? What happens when you try more or fewer cubes in a bundle?
Investigate the sum of the numbers on the top and bottom faces of a line of three dice. What do you notice?
Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens?
We can arrange dots in a similar way to the 5 on a dice and they usually sit quite well into a rectangular shape. How many altogether in this 3 by 5? What happens for other sizes?
In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square?
This challenge focuses on finding the sum and difference of pairs of two-digit numbers.
Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions.
Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total?
Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this?
How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six ...?
Put the numbers 1, 2, 3, 4, 5, 6 into the squares so that the numbers on each circle add up to the same amount. Can you find the rule for giving another set of six numbers?
Can you explain the strategy for winning this game with any target?
An investigation that gives you the opportunity to make and justify predictions.
This challenge, written for the Young Mathematicians' Award, invites you to explore 'centred squares'.
Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be?
This challenge encourages you to explore dividing a three-digit number by a single-digit number.
Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores.
Are these statements relating to odd and even numbers always true, sometimes true or never true?
Strike it Out game for an adult and child. Can you stop your partner from being able to go?
In this game for two players, the idea is to take it in turns to choose 1, 3, 5 or 7. The winner is the first to make the total 37.
Find the sum of all three-digit numbers each of whose digits is odd.
Can you find all the ways to get 15 at the top of this triangle of numbers? Many opportunities to work in different ways.
Are these statements always true, sometimes true or never true?
Use your addition and subtraction skills, combined with some strategic thinking, to beat your partner at this game.
Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread?
Polygonal numbers are those that are arranged in shapes as they enlarge. Explore the polygonal numbers drawn here.
What happens if you join every second point on this circle? How about every third point? Try with different steps and see if you can predict what will happen.
What can you say about these shapes? This problem challenges you to create shapes with different areas and perimeters.
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
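The claim can be verified exhaustively; it rests on the identity abcabc = abc × 1001 together with the factorisation 1001 = 7 × 11 × 13:

```python
# Repeating a 3-digit block multiplies it by 1001, and 1001 = 7 * 11 * 13,
# so the 6-digit number inherits all three prime factors.
assert 7 * 11 * 13 == 1001
for abc in range(1000):
    repeated = abc * 1000 + abc        # e.g. 594 -> 594594
    assert repeated == abc * 1001
    for p in (7, 11, 13):
        assert repeated % p == 0
```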
While we were sorting some papers we found 3 strange sheets which seemed to come from small books but there were page numbers at the foot of each page. Did the pages come from the same book?
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
Can you continue this pattern of triangles and begin to predict how many sticks are used for each new "layer"?
In how many different ways can you break up a stick of 7 interlocking cubes? Now try with a stick of 8 cubes and a stick of 6 cubes.
Find a route from the outside to the inside of this square, stepping on as many tiles as possible.
Find some examples of pairs of numbers such that their sum is a factor of their product, e.g. 4 + 12 = 16 and 4 × 12 = 48, and 16 is a factor of 48.
Can you make dice stairs using the rules stated? How do you know you have all the possible stairs?
A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why?
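A short sketch of why: 100 ≡ 2 and 10 ≡ 3 (mod 7), so 100a + 10b + c and 2a + 3b + c always leave the same remainder on division by 7. An exhaustive check over the three-digit numbers:

```python
# 100a + 10b + c minus (2a + 3b + c) equals 98a + 7b = 7(14a + b),
# so the two expressions share a remainder mod 7.
for a in range(1, 10):
    for b in range(10):
        for c in range(10):
            assert (100*a + 10*b + c) % 7 == (2*a + 3*b + c) % 7
```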
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
This challenge asks you to imagine a snake coiling on itself.
Use two dice to generate two numbers with one decimal place. What happens when you round these numbers to the nearest whole number?
List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it?
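One way to see this is a pigeonhole argument on prefix sums: the four values S0 = 0, S1, S2, S3 cannot all be distinct mod 3, and the difference of two equal ones is a sum of adjacent entries divisible by 3. A brute-force confirmation (only residues mod 3 matter, so 27 cases are exhaustive):

```python
from itertools import product

def has_adjacent_multiple_of_3(nums):
    """True if some run of consecutive entries sums to a multiple of 3."""
    n = len(nums)
    return any(sum(nums[i:j]) % 3 == 0
               for i in range(n) for j in range(i + 1, n + 1))

# Exhaustive proof-by-cases over residue triples mod 3
assert all(has_adjacent_multiple_of_3(list(t)) for t in product(range(3), repeat=3))
```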
What happens when you round these three-digit numbers to the nearest 100?
This activity involves rounding four-digit numbers to the nearest thousand.
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
Think of a number, square it and subtract your starting number. Is the number you’re left with odd or even? How do the images help to explain this? | <urn:uuid:5dd342e5-db26-446d-a70d-ab83861c5e02> | 3.78125 | 1,344 | Content Listing | Science & Tech. | 75.593037 | 95,563,262 |
Changing Ocean Chemistry Due To Human Activity
News Sep 07, 2016
Oceanographers from MIT and Woods Hole Oceanographic Institution report that the northeast Pacific Ocean has absorbed an increasing amount of anthropogenic carbon dioxide over the last decade, at a rate that mirrors the increase of carbon dioxide emissions pumped into the atmosphere.
The scientists, led by graduate student Sophie Chu, in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, found that most of the anthropogenic carbon (carbon arising from human activity) in the northeast Pacific has lingered in the upper layers, changing the chemistry of the ocean as a result. In the past 10 years, the region’s average pH has dropped by 0.002 pH units per year, leading to more acidic waters. The increased uptake in carbon dioxide has also decreased the availability of aragonite — an essential mineral for many marine species’ shells.
Overall, the researchers found that the northeast Pacific has a similar capacity to store carbon, compared to the rest of the Pacific. However, this carbon capacity is significantly lower than at similar latitudes in the Atlantic.
“The ocean has been the only true sink for anthropogenic emissions since the industrial revolution,” Chu says. “Right now, it stores about 1/4 to 1/3 of the anthropogenic emissions from the atmosphere. We’re expecting at some point the storage will slow down. When it does, more carbon dioxide will stay in the atmosphere, which means more warming. So it’s really important that we continue to monitor this.”
Chu and her colleagues have published their results in the Journal of Geophysical Research: Oceans.
Tipping the scales
The northeast Pacific, consisting of waters that flow from Alaska’s Aleutian Islands to the tip of southern California, is considered somewhat of a climate canary — sensitive to changes in ocean chemistry, and carbon dioxide in particular. The region sits at the end of the world’s ocean circulation system, where it has collected some of the oldest waters on Earth and accumulated with them a large amount of dissolved inorganic carbon, which is naturally occurring carbon that has been respired by marine organisms over thousands of years.
“This puts the Pacific at this already heightened state of high carbon and low pH,” Chu says.
Add enough atmospheric carbon dioxide into the mix, and the scales could tip toward an increasingly acidic ocean, which could have an effect first in sea snails called pteropods, which depend on aragonite (a form of calcium carbonate) to make their protective shells. More acidic waters can make carbonate less available to pteropods.
“These species are really sensitive to ocean acidification,” Chu says. “It’s harder for them to get enough carbonate to build their shells, and they end up with weaker shells, and have reduced growth rates.”
Chu and her colleagues originally set out to study the effects of ocean acidification on pteropods, rather than the ocean’s capacity to store carbon. In 2012, the team embarked on a scientific cruise to the northeast Pacific, where they followed the same route as a similar cruise in 2001. During the month-long journey, the scientists collected samples of pteropods, as well as seawater, which they measured for temperature, salinity, and pH.
Upon their return, Chu realized that the data they collected could also be used to gauge changes in the ocean’s anthropogenic carbon storage. Ordinarily, it’s extremely difficult to tease out anthropogenic carbon in the ocean from carbon that naturally arises from breathing marine organisms. Both types of carbon are classified as dissolved inorganic carbon, and anthropogenic carbon in the ocean is miniscule compared to the vast amount of carbon that has accumulated naturally over millions of years.
To isolate anthropogenic carbon in the ocean and observe how it has changed through time, Chu used a modeling technique known as extended multiple linear regression — a statistical method that models the relationships between given variables, based on observed data. The data she collected came from both the 2012 cruise and the previous 2001 cruise in the same region.
She ran a model for each year, plugging in water temperature, salinity, apparent oxygen utilization, and silicate. The models then estimated the natural variability in dissolved inorganic carbon for each year. That is, the models calculated the amount of carbon that should vary from 2001 to 2012, only based on natural processes such as organic respiration. Chu then subtracted the 2001 estimate from the 2012 estimate — a difference that accounts for sources of carbon that are not naturally occurring, and are instead anthropogenic.
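The regression-differencing step can be sketched in miniature. The toy model below assumes a single temperature predictor and invented numbers (real eMLR fits dissolved inorganic carbon against temperature, salinity, apparent oxygen utilization and silicate, using cruise data); the point is only that subtracting the two fitted estimates cancels the natural relationship and leaves the anthropogenic offset:

```python
def linfit(xs, ys):
    """Ordinary least-squares line y = a + b*x; returns (a, b)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

temps = [4.0, 6.0, 8.0, 10.0, 12.0]            # predictor (degrees C), illustrative
dic_2001 = [2250 - 10 * t for t in temps]      # natural DIC-temperature relation
dic_2012 = [2250 - 10 * t + 11 for t in temps] # same relation + 11 umol/kg added carbon

a1, b1 = linfit(temps, dic_2001)
a2, b2 = linfit(temps, dic_2012)

t_obs = 7.0
delta_dic = (a2 + b2 * t_obs) - (a1 + b1 * t_obs)   # anthropogenic signal, ~11 umol/kg
```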
The researchers found that since 2001, the northeast Pacific has stored 11 micromoles per kilogram of anthropogenic carbon, which is comparable to the rate at which carbon dioxide has been emitted into the atmosphere. Most of this carbon is stored in surface waters. In the northern part of the region in particular, anthropogenic carbon tends to linger in shallower waters, within the upper 300 meters of the ocean. The southern region of the northeast Pacific stores carbon a bit deeper, within the top 600 meters.
Chu says this shallow storage is likely due to a subpolar gyre, or rotating current, that pushes water up from the deep, preventing surface waters from sinking. In contrast, others have observed that similar latitudes in the Atlantic have stored carbon much deeper, due to evaporation and mixing, leading to increased salinity and density, which causes carbon to sink.
The team calculated that the increase in anthropogenic carbon in the upper ocean caused a decrease in the region’s average pH, making the ocean more acidic as a result. This acidification also had an effect on the region’s aragonite, decreasing its saturation state over the last decade.
Richard Feely, a senior scientist at the National Oceanic and Atmospheric Administration, says that the group’s results show that this particular part of the ocean is “highly sensitive to ocean acidification.”
“Our own work with pteropods, and that of others, indicates that some marine organisms are already being impacted by ocean acidification processes in this region,” says Feely, who did not contribute to the study. “Laboratory studies indicate that many species of corals, shellfish, and some fish species will be impacted in the near future. As this study, and others, has shown, the region will soon become undersaturated with respect to aragonite later this century.”
While the total amount of anthropogenic carbon appears to be increasing with each year, Chu says the rate at which the northeast Pacific has been storing carbon has remained relatively the same since 2001. That means that the region could still have a good amount of “room” to store carbon, at least for the foreseeable future. But already, her team and others are seeing in the acidification trends the ocean’s negative response to the current rate of carbon storage.
“It would take hundreds of thousands of years for the ocean to absorb the majority of CO2 that humans have released into the atmosphere,” Chu says. “But at the rate we’re going, it’s just way faster than anything can keep up with.”
1 Calculate the divergence and curl of the vector field F(x,y,z) = 2xi+3yj+4zk.
2 Find the potential of the function for the conservative vector field:
F(x,y,z) = (y+z)i + (x+z)j + (x+y)k.
3 Calculate, by evaluating a line integral, the work done when a particle is moved along the helix x = cos t, y = sin t, z = 2t from (1,0,0) to (1,0,4pi) against the force
F(x,y,z) = -yi +xj +zk.
4 Calculate the outward flux of the vector field F = xi + yj + zk across the closed surface S, the boundary of the solid bounded by the xy-plane and the paraboloid z = 9 - x^2 - y^2.
The solution covers several basic properties of vector fields: divergence, curl, flux, and the potential function. It also explains what a conservative vector field is. The detailed, step-by-step explanations give students a clear perspective on the properties of vector fields.
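As a numerical sanity check on Problems 1, 2 and 4 (a sketch using central differences; the step size, the test point, and the candidate potential are choices made here, not given in the problems):

```python
import math

def partial(f, p, i, h=1e-6):
    """Central-difference partial derivative of f at point p w.r.t. coordinate i."""
    lo, hi = list(p), list(p)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

p = (1.0, 2.0, 3.0)

# Problem 1: F = (2x, 3y, 4z)  ->  div F = 2 + 3 + 4 = 9 and curl F = 0
F = (lambda x, y, z: 2 * x, lambda x, y, z: 3 * y, lambda x, y, z: 4 * z)
div_F = sum(partial(F[i], p, i) for i in range(3))
curl_F = (partial(F[2], p, 1) - partial(F[1], p, 2),
          partial(F[0], p, 2) - partial(F[2], p, 0),
          partial(F[1], p, 0) - partial(F[0], p, 1))

# Problem 2: f = xy + yz + zx works as a potential, since grad f = (y+z, x+z, x+y)
f = lambda x, y, z: x * y + y * z + z * x
grad_f = tuple(partial(f, p, i) for i in range(3))   # (5, 4, 3) at p = (1, 2, 3)

# Problem 4 via the divergence theorem: div F = 3 everywhere, and the volume under
# z = 9 - x^2 - y^2 above the xy-plane is 81*pi/2, so the flux is 3 * 81*pi/2.
flux = 3 * (81 * math.pi / 2)
```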
Heating of a ruthenium surface on which carbon monoxide and atomic oxygen are coadsorbed leads exclusively to desorption of carbon monoxide. In contrast, excitation with femtosecond infrared laser pulses enables also the formation of carbon dioxide. The desorption is caused by coupling of the adsorbate to the phonon bath of the ruthenium substrate, whereas the oxidation reaction is initiated by hot substrate electrons, as evidenced by the observed subpicosecond reaction dynamics and density functional calculations. The presence of this laser-induced reaction pathway allows elucidation of the microscopic mechanism and the dynamics of the carbon monoxide oxidation reaction.
Even in the age of sun-observing satellites, astronomers like Jay Pasachoff still seek out total solar eclipses for the tales they can tell about our sun.
The astrophysicist Andrea Ghez spent two decades proving that a supermassive black hole anchors the center of the Milky Way galaxy. Her new plan? Test what happens when things get too close.
Kaisa Matomäki has proved that properties of prime numbers over long intervals hold over short intervals as well. The techniques she uses have transformed the study of these elusive numbers.
The evolutionary biologist Jessica Flack seeks the computational rules that groups of organisms use to solve problems.
The computational immunologist Purvesh Khatri embraces messy data as a way to capture the messiness of disease. As a result, he’s making elusive genomic discoveries.
Time isn’t just another dimension, argues Tim Maudlin. To make his case, he’s had to reinvent geometry.
Angela Olinto’s new balloon experiment takes her one step closer to the unknown source of the most energetic particles in the universe.
The computational biologist John Novembre uses our genetic code to rewrite the history of humanity.
Computational physicist Sharon Glotzer is uncovering the rules by which complex collective phenomena emerge from simple building blocks.
A new solar power plant being built in Chile has some exciting outcomes on the horizon. It's expected to provide enough energy, both day and night, to power up to 13,000 homes annually. Cox Energia of Spain has won a bid to supply 140 gigawatt-hours of generation at a rate of $34.40 per megawatt-hour. What does this mean for the green energy revolution across the planet? It's a move that could make Chile one of the top renewable energy spots in the world.
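Back-of-envelope arithmetic on those figures (the generation volume and price come from the bid as reported; the contract value and per-home consumption are derived here, not stated in the article):

```python
annual_generation_mwh = 140 * 1000        # 140 GWh contracted per year
price_usd_per_mwh = 34.40                 # Cox Energia's winning bid
homes = 13_000                            # homes expected to be powered annually

contract_value = annual_generation_mwh * price_usd_per_mwh   # ~4.8 million USD/year
mwh_per_home = annual_generation_mwh / homes                 # ~10.8 MWh per home per year
```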
One unique aspect of Cox Energia’s bid is that it includes supplying power at night. The company hasn’t announced how that will exactly happen, and it raises questions as the bid only covers solar photovoltaic cells. It’s anticipated that the company will be turning toward lithium-ion battery backup, which is already being used in a number of large-scale renewable energy solutions.
At the end of November, Tesla met their aggressive goal of creating a Powerpack facility in South Australia. 100 megawatts of capacity now back up the Hornsdale Wind Farm so it can provide more efficient energy to consumers. Batteries have become the preferred option due to how cheap that technology is becoming -- similar to solar.
“I’m assuming they are going to combine their awards with some kind of storage solution, because there is no way they can generate solar overnight,” said Manan Parikh, Solar Analyst for Greentech Media Research Americas. “There’s got to be some other kind of technology in there.”
Cox Energia’s extremely low bid suggests that the company expects the costs of solar and backup technology to fall further in the future. The Enel Group won a bid at $21.48 per megawatt-hour for 593 megawatts of renewable energy from wind and solar sources. When both projects go online in 2024, Chile expects electricity prices for residents to be cut in half.
Enel Green Power is a subsidiary of the original energy firm, Enel. Established in Italy in 2008, the company has a complete focus on the renewable energy industry in many parts of the world. In addition to Chile, the company has projects either operating or in development in 23 United States and two Canadian provinces.
Parikh also notes that this could give energy companies a chance to adopt a different technology that could crop up between now and then: "There’s an entire technology that could be created that we don’t even know about. It gives developers a lot of leeway in figuring out how they are going to go about doing all this.”
Lithium-ion batteries certainly have some caveats, such as losing their charge over extended periods of time. Of course, other alternatives for energy storage, such as thermal energy and supercapacitors, are still being developed, and do hold promise for the future. However, it's possible that battery costs will likely be down enough to purchase them in abundance before these other methods become cheaper. No matter what, it's certain that green energy is ready to boom.
Simulating a dual beam combiner at SUSI for narrow-angle astrometry
The Sydney University Stellar Interferometer (SUSI) has two beam combiners, i.e. the Precision Astronomical Visible Observations (PAVO) and the Microarcsecond University of Sydney Companion Astrometry (MUSCA). The primary beam combiner, PAVO, can be operated independently and is typically used to measure properties of binary stars of less than 50 milliarcsec (mas) separation and the angular diameters of single stars. On the other hand, MUSCA was recently installed and must be used in tandem...
Collections: ANU Research Publications
File: 01_Kok_Simulating_a_dual_beam_2013.pdf (1.78 MB, Adobe PDF)
Draconid Meteor Shower
The annual Draconid meteor shower occurs around October 8th each year, when Earth passes through a minefield of dusty debris from Comet Giacobini-Zinner.
The Draconid Meteor Shower radiates from the fiery mouth of the northern constellation Draco the Dragon and peaks on the nights of 7th - 8th of October. This shower had an unusually rich peak in 2011, but meteor rates this year are expected to be back to normal, meaning only a handful of meteors each hour.
Unlike most meteor showers, the Draconids are best seen in the evening, instead of before dawn.
Last updated on: Saturday 27th May 2017
A research ship has recovered at least two bits of molten rock from the bottom of the Pacific Ocean that scientists say could have come from a meteorite.
The preliminary findings are the result of an unprecedented survey conducted this week by the Exploration Vessel Nautilus in the National Oceanic and Atmospheric Administration’s Olympic Coast National Marine Sanctuary, about 15 miles (25 kilometers) off the coast of Washington state.
If scientists are correct, the two flecks of rock identified today could be the first pieces of a meteorite recovered from the ocean after its descent was observed.
The meteorite in question was spotted on March 7 as it flared through the sky and into the Pacific. Marc Fries, cosmic dust curator at NASA’s Johnson Space Center in Texas, analyzed the eyewitness reports as well as radar readings and other data to zero in on a roughly half-mile-wide (about one square kilometer) area where fragments of the meteorite should have ended up.
Fries’ analysis suggested that about 2 tons’ worth of meteoritic material fell into the ocean, in pieces ranging from mere flecks to 5-inch-wide fragments.
Over the course of seven hours on Monday, two remotely operated vehicles surveyed the site, about 300 feet beneath the ocean surface.
Fries and his colleagues used a sediment scoop and a suction hose sampler to vacuum up promising samples of silt from the seafloor. Magnets and other instruments helped them sift through the muck.
Today, the team highlighted the discovery of two small fragments showing the signs of fusion crust — bits of a meteorite’s exterior that melted and flowed as they blazed through the atmosphere. The fragments look like blobs of pottery glaze and measure about a tenth of an inch (2 millimeters) across.
“We now have samples, and we couldn’t be happier,” Fries said in a KCPQ report.
During the weeks to come, the fragments will be analyzed further to confirm whether they came from March’s meteorite, and Fries will look for still more bits in the samples of silt.
Confirmed meteorite samples will be housed at the Smithsonian Institution. Eventually, it’ll be up to the Meteoritical Society to determine whether there’s enough material in the bits to qualify as a formally named meteorite.
The meteorite hunt was just one of the activities planned as part of the E/V Nautilus’ expedition, which is led by Nicole Raineault of the Ocean Exploration Trust.
Support for this week’s expedition is being provided by NOAA’s Office of Ocean Exploration and Research, Ocean Exploration Trust and the National Geographic Society. The meteorite-hunting team also includes scientists from the Olympic Coast National Marine Sanctuary, NASA and the University of Washington.
NautilusLive.org is providing streaming video from the E/V Nautilus, and updates are also available via the @EVNautilus Twitter account and National Geographic’s Open Explorer mission log.
On Tuesday night, a meteor flew over southeast Michigan, shaking the ground as if in an earthquake and leaving many residents in shock. However, researchers say the meteor might not have actually caused a “textbook” earthquake.
“In fact, meteors do not cause earthquakes to rupture along a fault,” William Yeck, a research geophysicist at the United States Geological Survey’s National Earthquake Information Center in Golden, Colorado, told ABC News.
The real cause of Tuesday night’s phenomenon was the meteor itself, measuring approximately two yards in diameter and traveling around 28,000 mph, exploding in the sky above the Michigan area.
“That explosion generated shock waves that traveled down to the ground northeast of Detroit, where residents heard a loud boom and felt the ground beneath them tremble,” Bill Cooke, the lead of NASA’s Meteoroid Environment Office at the Marshall Space Flight Center in Huntsville, Alabama, reported.
Although startling, such events are not particularly dangerous; there is not a single confirmed death from a falling meteorite in recorded history.
Werner Heisenberg arrived at quantum mechanics by a quite different route. For him, the essence of the theory was in the fact that the process of measurement of one dynamical variable of a system (the position of a particle, for instance) caused a disturbance of the system that rendered the determination of a complementary variable (in this case momentum) uncertain. Heisenberg thus started with the concept of a point particle and considered the resulting “uncertainty” to be a consequence of the measurement process. In this way, he rejected the classical goal of describing the behavior of physical systems and made the theory into one in which the central goal was to determine the results of experiments. It was through this approach that the human observer came to be a central actor in the theory. Whereas Schrödinger’s waves suggested a physical reality, Heisenberg’s theory was concerned not with the physical world itself but with how we came to know it.
Keywords: Quantum Mechanics; Physical Reality; Human Observer; Point Particle; Discrete Frequency
Other Names and/or Listed subspecies:
Bizant River Shark, Queensland River Shark
Status/Date Listed as Endangered:
Area(s) Where Listed As Endangered:
Australia, Brunei Darussalam, Indonesia, Malaysia, Papua New Guinea
The speartooth shark is a medium-sized whaler shark found in the tropical waters of the western Pacific Ocean. It is a requiem shark (a live-bearing shark) with a broadly rounded snout and small eyes. Its upper surface is a uniform slate-gray and its underparts are white. Younger sharks have noticeable markings, which fade as they age. Speartooth sharks have fewer teeth than other requiem sharks, with a total of 54 rows in both jaws. The teeth differ markedly between the jaws: the upper teeth are tall, broad, flat, triangular and blade-like, as the shark's name suggests, while the lower teeth are narrow and tall, with erect, straight to slightly hooked cusps. It is estimated that adults can reach up to 9.84 feet in length.
The biology of this species is nearly unknown. It is known to occur in tidal rivers and estuaries, indicating that large tropical river systems are its primary habitat. Speartooth sharks are viviparous, meaning they give birth to fully developed live young. The diet consists primarily of bony fishes and crustaceans, including prawns, gobies, gudgeons, jewfish, and bream.
The main threats to the speartooth shark are hunting and habitat degradation. Some are accidentally caught by fishermen and are then eaten or left to die. The speartooth shark is protected under Australia's Environment Protection and Biodiversity Conservation Act of 1999. Some speartooth sharks also occur within Kakadu National Park, a protected area that appears to offer the species a safe habitat.
Copyright Notice: This article is licensed under the GNU Free
Documentation License. It uses material from the Wikipedia article "Speartooth shark".
Speartooth Shark Facts Last Updated:
April 30, 2017
To Cite This Page:
Glenn, C. R. 2006. "Earth's Endangered Creatures - Speartooth Shark Facts" (Online).
Accessed 7/22/2018 at http://earthsendangered.com/profile.asp?sp=1652&ID=4.
Predictions of Greenland ice loss and its impact on rising sea levels may have been greatly underestimated, according to scientists at the University of Leeds.
The finding follows a new study, which is published today in Nature Climate Change, in which the future distribution of lakes that form on the ice sheet surface from melted snow and ice – called supraglacial lakes – have been simulated for the first time.
Previously, the impact of supraglacial lakes on Greenland ice loss had been assumed to be small, but the new research has shown that they will migrate farther inland over the next half century, potentially altering the ice sheet flow in dramatic ways.
Dr Amber Leeson from the School of Earth and Environment and a member of the Centre for Polar Observation and Modelling (CPOM) team, who led the study, said: “Supraglacial lakes can increase the speed at which the ice sheet melts and flows, and our research shows that by 2060 the area of Greenland covered by them will double.”
Supraglacial lakes are darker than ice, so they absorb more of the Sun’s heat, which leads to increased melting. When the lakes reach a critical size, they drain through ice fractures, allowing water to reach the ice sheet base which causes it to slide more quickly into the oceans. These changes can also trigger further melting.
Dr Leeson explained: “When you pour pancake batter into a pan, if it rushes quickly to the edges of the pan, you end up with a thin pancake. It’s similar to what happens with ice sheets: the faster it flows, the thinner it will be.
“When the ice sheet is thinner, it is at a slightly lower elevation and at the mercy of warmer air temperatures than it would have been if it were thicker, increasing the size of the melt zone around the edge of the ice sheet.”
Until now, supraglacial lakes have formed at low elevations around the coastline of Greenland, in a band that is roughly 100 km wide. At higher elevations, today’s climate is just too cold for lakes to form.
In the study, the scientists used observations of the ice sheet from the Environmental Remote Sensing satellites operated by the European Space Agency and estimates of future ice melting drawn from a climate model to drive simulations of how meltwater will flow and pool on the ice surface to form supraglacial lakes.
Since the 1970s, the band in which supraglacial lakes can form on Greenland has crept 56km further inland. From the results of the new study, the researchers predict that, as Arctic temperatures rise, supraglacial lakes will spread much farther inland – up to 110 km by 2060 – doubling the area of Greenland that they cover today.
Dr Leeson said: “The location of these new lakes is important; they will be far enough inland so that water leaking from them will not drain into the oceans as effectively as it does from today’s lakes that are near to the coastline and connected to a network of drainage channels.”
“In contrast, water draining from lakes farther inland could lubricate the ice more effectively, causing it to speed up.”
Ice losses from Greenland had been expected to contribute 22cm to global sea-level rise by 2100. However, the models used to make this projection did not account for changes in the distribution of supraglacial lakes, which Dr Leeson’s study reveals will be considerable.
If new lakes trigger further increases in ice melting and flow, then Greenland’s future ice losses and its contribution to global sea-level rise have been underestimated.
The Director of CPOM, Professor Andrew Shepherd, who is also from the School of Earth and Environment at the University of Leeds and is a co-author of the study, said: “Because ice losses from Greenland are a key signal of global climate change, it’s important that we consider all factors that could affect the rate at which it will lose ice as climate warms.
“Our findings will help to improve the next generation of ice sheet models, so that we can have greater confidence in projections of future sea-level rise. In the meantime, we will continue to monitor changes in the ice sheet losses using satellite measurements.”
The study was funded by the Natural Environment Research Council (NERC) through their support of the Centre for Polar Observation and Modelling and the National Centre for Earth Observation.
The research paper, Supraglacial lakes on the Greenland ice sheet advance inland under warming climate, is published in Nature Climate Change on 15 December 2014.
Dr Amber Leeson and Professor Andrew Shepherd are available for interview. Please contact the University of Leeds Press Office on 0113 343 4031 or email firstname.lastname@example.org
For journalists going to the AGU Fall Meeting, please note that Professor Shepherd will also be in attendance, if this helps to facilitate an interview.
Researchers at the University of Warwick, together with QuantIC colleagues at Heriot-Watt University and the University of Glasgow, have carried out a study in optical sensing that could considerably enhance the precision with which nanoscopic structures are measured.
QuantIC is part of the UK National Quantum Technologies Program and is the UK Quantum Technology Hub in Quantum Enhanced Imaging.
The researchers used pairs of photons, the elementary quanta that make up light, to develop a method that determines the thickness of objects less than a 100,000th of the width of a human hair.
In the latest technique, two near-identical photons are fired onto a component called a beamsplitter and their subsequent behavior is monitored – with some 30,000 photons detected every second, and 500bn in use during an entire experiment.
Identical photons tend to ‘buddy up’ and exit the beamsplitter together, an outcome of a subtle two-photon quantum interference effect (the Hong–Ou–Mandel effect). As a result of this, the team’s newly developed setup provides the same stability and precision as current one-photon methods that, owing to the equipment they require, are more expensive.
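The ‘buddying up’ can be sketched with a little quantum optics arithmetic. For two perfectly indistinguishable photons entering a lossless 50:50 beamsplitter from opposite ports, the two paths that would put one photon at each output interfere destructively, so the coincidence rate drops to zero; partially distinguishable photons interpolate between the classical value of 1/2 and 0. The sketch below is a standard textbook model of this effect, not a reconstruction of the Warwick team's actual analysis:

```python
# Coincidence probability for two photons meeting at a lossless 50:50
# beamsplitter. For wavepacket overlap v (0 = fully distinguishable,
# 1 = identical photons), the textbook Hong-Ou-Mandel result is
#     P_coinc = (1 - v**2) / 2
# Toy model for illustration only.

def coincidence_probability(overlap):
    """Probability of detecting one photon at each output port.

    overlap -- modulus of the single-photon wavefunction overlap, in [0, 1].
    """
    if not 0.0 <= overlap <= 1.0:
        raise ValueError("overlap must lie in [0, 1]")
    return (1.0 - overlap**2) / 2.0

if __name__ == "__main__":
    for v in (0.0, 0.5, 1.0):
        print(f"overlap {v:.1f} -> coincidence probability "
              f"{coincidence_probability(v):.3f}")
```

With some 30,000 photons detected per second, even a small residual distinguishability shows up quickly as excess coincidences, which is what makes the interference dip such a sensitive probe.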
Providing a host of promising applications, such as research to better understand DNA, cell membranes, and even quality control for nanoscopic 2D materials of one atom’s thickness, for example, graphene, the latest study represents a major improvement on current two-photon techniques with up to 100 times better resolution.
Image Credit: University of Warwick
Join Barron Stone for an in-depth discussion in this video Finding solutions with itertools, part of Code Clinic: Python (2014).
In this video, we'll take a look at the section of code that calculates solutions to the n queens problem. It's based on an example that Raymond Hettinger posted on activestate.com which demonstrates the power of the Python itertools module. The itertools module provides a number of iterator functions which can be used as efficient building blocks for algorithms. For this program, we'll specifically be using the permutations function, and I'm going to demonstrate that here using IDLE. You can import the permutations function from the itertools module by typing from itertools import permutations.

Now permutations takes as an input an object such as a list: zero, one, two. Using that list, it'll generate an iterable object which returns every possible ordering of the elements in that list. So you can see here we just get an object back from our call to permutations. Let me demonstrate how permutations works on that list by copying it and putting it inside of a for loop, printing out every possible result returned.
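The IDLE session described in the transcript can be reproduced in a few lines (the list values follow the narration; the n-queens solver itself is not shown in this excerpt):

```python
from itertools import permutations

# permutations() returns a lazy iterator object, not a list:
perms = permutations([0, 1, 2])
print(perms)  # e.g. <itertools.permutations object at 0x...>

# Looping over it yields every possible ordering of the elements,
# in lexicographic order: (0, 1, 2), (0, 2, 1), ..., (2, 1, 0).
for ordering in permutations([0, 1, 2]):
    print(ordering)
```

In Hettinger's n-queens recipe, each permutation of column indices is read as a candidate board on which no two queens can share a row or column, so only the diagonal conflicts remain to be checked.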
Barron introduces challenges and provides an overview of his solutions in Python. Challenges include topics such as statistical analysis, searching directories for images, and accessing peripheral devices.
Visit other courses in the series to see how to solve the exact same challenges in languages like C#, C++, Java, PHP, and Ruby.
Skill Level Intermediate
Q: Why can't I access the Lake Pend Oreille site (http://lpo.dt.navy.mil)?
A: The Lake Pend Oreille site is not accessible in some geographical areas. We have contacted the owner of the server to try to resolve this issue.
Q: I am unable to access the Lake Pend Oreille data from outside the U.S.
A: A static copy of this data is provided here for lynda.com members outside of the U.S.
How to Help a Planet
We had a wonderful year together in 2014 looking at all the fascinating aspects of nature in the South East of Crete and sometimes beyond. Identifying plants, birds and insects and finding out a little bit about them. There will be more of the same this year but we’ll be looking deeper into what happens and why in the natural world. So be prepared to get your hands and knees dirty as we get down close and personal with nature.
But first, some phenology. There are a few birds that I recorded in January between 2004 and 2010 that I haven’t seen in January since. These are: Chaffinch, Dunlin, Pied Flycatcher and Kestrel. So let’s go down to the olive groves near Long Beach where the undergrowth is not too dense and see if we can find some Chaffinches and maybe a Flycatcher. Here we are and what a pleasant greeting – a pair of Painted Lady Butterflies pirouetting in the sunlight. You toddle off over that waste ground where there are plenty of posts and things which make good vantage points for Flycatchers to ambush unwary insects and I’ll scour the olive grove for Chaffinches. We’ll meet back here in an hour and compare notes. What have you found? Plenty of White Wagtails and a nice Stonechat but not a hint of a Flycatcher, Pied or otherwise. No matter, it’s all useful data. I’ve had a little more luck. I’ve found some skylarks running around but also three or four Chaffinches.
While we’re here I noticed a damp hollow with a couple of red dragonflies darting about. Both the Common Darter and the Scarlet Darter are on our “to find list” for January so let’s go and see if we can identify them. That’s a bit of a surprise, a Red-veined Darter. She’s rather an elderly lady which is only to be expected as I generally see them no later than November. See the yellow patch at the base of the wing? That’s one of the identification keys. The Scarlet Darter has a bright orange patch and the Common and Southern Darters have none at all.
Back in Victorian England in the nineteenth century there was a passing fad for creating Mosseries, either in the garden or in glass jars called terrariums, and I thought we’d have a go at reviving the fad by starting our own this winter (you can all join in with this one. I’ve put some simple instructions up in the Naturalists Group on Facebook). Mosses are plants of course and along with Hornworts and Liverworts make up the phylum Bryophyta. Mosses are totally distinct from flowering plants in a number of ways, one of which is that they start life as spores rather than seeds.
These spores develop into thread-like structures called protonemata, which grow into the sexually mature gametophytes. The gametophytes are responsible for producing sperm and eggs and form the familiar carpet of moss. The long stalks which poke up from this carpet are called sporophytes, and the capsules on their ends (called sporangia) contain new spores, which start the whole cycle again.
It looks like our mossery may become an insectarium, as I've just discovered some little yellow eggs in amongst our moss scrapings. I haven't a clue what they are, although I'd hazard a guess at some sort of fly. We'll just have to wait and see if they hatch.
This is a brief taste of what’s to come this year. We’ll be out and about together trying to fill in the gaps in the phenology record and making a lot of new observations (and maybe some new discoveries). I’ll be starting a few projects to get your teeth into (and they’re not just for children – there’s no reason why an erudite professor of physics or the CEO of an international company shouldn’t have a mossery on their desk); and I dare say we’ll have more lunches and a few courtyard chats like we did last year.
As a species we need to know a lot more about the planet we live on than we do at present. If we are to continue to evolve and survive the sixth mass extinction into which we have put ourselves and all other life on Earth with reckless abandon then every scrap of information counts. So get photographing and get involved with data collectors such as iNaturalist or Project Noah and add your observations. One to add to your New Year’s Resolutions this year: make a contribution to world knowledge. Until next week – happy hunting.
Naturalists (the facebook page that accompanies this blog)
Many species travel in highly organized groups [1,2,3]. The most quoted function of these configurations is to reduce energy expenditure and enhance locomotor performance of individuals in the assemblage [4–11]. The distinctive V formation of bird flocks has long intrigued researchers and continues to attract both scientific and popular attention [4,7,9–14]. The well-held belief is that such aggregations give an energetic benefit for those birds that are flying behind and to one side of another bird through using the regions of upwash generated by the wings of the preceding bird [4,7,9,10,11], although a definitive account of the aerodynamic implications of these formations has remained elusive. Here we show that individuals of northern bald ibises (Geronticus eremita) flying in a V flock position themselves in aerodynamically optimum positions, in that they agree with theoretical aerodynamic predictions. Furthermore, we demonstrate that birds show wingtip path coherence when flying in V positions, flapping spatially in phase and thus enabling upwash capture to be maximized throughout the entire flap cycle. In contrast, when birds fly immediately behind another bird—in a streamwise position—there is no wingtip path coherence; the wing-beats are in spatial anti-phase. This could potentially reduce the adverse effects of downwash for the following bird. These aerodynamic accomplishments were previously not thought possible for birds because of the complex flight dynamics and sensory feedback that would be required to perform such a feat [12,14]. We conclude that the intricate mechanisms involved in V formation flight indicate awareness of the spatial wake structures of nearby flock-mates, and remarkable ability either to sense or predict it. We suggest that birds in V formation have phasing strategies to cope with the dynamic wakes produced by flapping wings.
The Waldrappteam assisted with data collection and provided logistical support (J.F., B.V.). We thank members of the Structure & Motion Laboratory for discussions and assistance, particularly J. Lowe, K. Roskilly, A. Spence and S. Amos, and C. White and R. Bomphrey for reading an earlier draft of the paper. Funding was provided by an Engineering and Physical Sciences Research Council grant to A.M.W., J.R.U. and S.Ha. (EP/H013016/1), a Biotechnology and Biological Sciences Research Council grant to A.M.W. (BB/J018007/1) and a Wellcome Trust Fellowship (095061/Z/10/Z) to J.R.U.
An animated movie showing a section of the ibis flight, taken from the 5 Hz GPS logger data. Each individual bird is identified by a number displayed on the tip of the left wing.
A short video clip of the ibis flying behind the paraplane during a training flight.
Dark matter is a hypothetical form of matter that is thought to account for approximately 80% of the matter in the universe, and about a quarter of its total energy density. The majority of dark matter is thought to be non-baryonic in nature, possibly being composed of some as-yet undiscovered subatomic particles. Its presence is implied in a variety of astrophysical observations, including gravitational effects that cannot be explained unless more matter is present than can be seen. For this reason, most experts think dark matter to be ubiquitous in the universe and to have had a strong influence on its structure and evolution. The name dark matter refers to the fact that it does not appear to interact with observable electromagnetic radiation, such as light, and is thus invisible (or 'dark') to the entire electromagnetic spectrum, making it extremely difficult to detect using usual astronomical equipment.
The primary evidence for dark matter is that calculations show that many galaxies would fly apart instead of rotating, or would not have formed or move as they do, if they did not contain a large amount of unseen matter. Other lines of evidence include observations in gravitational lensing, from the cosmic microwave background, from astronomical observations of the observable universe's current structure, from the formation and evolution of galaxies, from mass location during galactic collisions, and from the motion of galaxies within galaxy clusters. In the standard Lambda-CDM model of cosmology, the total mass–energy of the universe contains 4.9% ordinary matter and energy, 26.8% dark matter and 68.3% of an unknown form of energy known as dark energy. Thus, dark matter constitutes 84.5% of total mass, while dark energy plus dark matter constitute 95.1% of total mass–energy content.
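The 84.5% figure follows directly from the Lambda-CDM percentages quoted above: dark matter's share of total mass counts only the matter components (dark plus ordinary), with dark energy excluded, while the 95.1% figure is the combined dark share of total mass–energy. A quick arithmetic check:

```python
# Shares of the universe's total mass-energy in the Lambda-CDM model,
# as quoted in the text above (percent).
ordinary_matter = 4.9
dark_matter = 26.8
dark_energy = 68.3

# Dark matter as a fraction of all *matter* (dark energy excluded):
dark_matter_share = dark_matter / (dark_matter + ordinary_matter)
print(f"{dark_matter_share:.1%}")  # 84.5%

# Dark matter plus dark energy as a share of total mass-energy:
dark_total = dark_matter + dark_energy
print(f"{dark_total:.1f}%")  # 95.1%
```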
Because dark matter has not yet been observed directly, it must barely interact with ordinary baryonic matter and radiation. The primary candidate for dark matter is some new kind of elementary particle that has not yet been discovered, in particular, weakly-interacting massive particles (WIMPs), or gravitationally-interacting massive particles (GIMPs). Many experiments to directly detect and study dark matter particles are being actively undertaken, but none has yet succeeded. Dark matter is classified as cold, warm, or hot according to its velocity (more precisely, its free streaming length). Current models favor a cold dark matter scenario, in which structures emerge by gradual accumulation of particles.
Although the existence of dark matter is generally accepted by the scientific community, some astrophysicists, intrigued by certain observations that do not fit the dark matter theory, argue for various modifications of the standard laws of general relativity, such as MOND, TeVeS, or entropic gravity. These models attempt to account for all observations without invoking supplemental non-baryonic matter.
The hypothesis of dark matter has an elaborate history. In a talk given in 1884, Lord Kelvin estimated the number of dark bodies in the Milky Way from the observed velocity dispersion of the stars orbiting around the center of the galaxy. By using these measurements, he estimated the mass of the galaxy, which he determined is different from the mass of visible stars. Lord Kelvin thus concluded that "many of our stars, perhaps a great majority of them, may be dark bodies". In 1906 Henri Poincaré in "The Milky Way and Theory of Gases" used "dark matter", or "matière obscure" in French, in discussing Kelvin's work.
The first to suggest the existence of dark matter, using stellar velocities, was Dutch astronomer Jacobus Kapteyn in 1922. Fellow Dutchman and radio astronomy pioneer Jan Oort also hypothesized the existence of dark matter in 1932. Oort was studying stellar motions in the local galactic neighborhood and found that the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be erroneous.
In 1933, Swiss astrophysicist Fritz Zwicky, who studied galaxy clusters while working at the California Institute of Technology, made a similar inference. Zwicky applied the virial theorem to the Coma Cluster and obtained evidence of unseen mass that he called dunkle Materie ('dark matter'). Zwicky estimated the cluster's mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies. He found that the cluster had about 400 times more mass than was visually observable. The gravitational effect of the visible galaxies was far too small to account for such fast orbits, so mass must be hidden from view. Based on these conclusions, Zwicky inferred that some unseen matter provided the mass and the associated gravitational attraction that holds the cluster together. This was the first formal inference about the existence of dark matter. Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant; the same calculation today, using larger values for the luminous mass, yields a smaller dark fraction. However, Zwicky did correctly infer that the bulk of the matter was dark.
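Zwicky's virial argument can be sketched numerically. The virial theorem relates a cluster's velocity dispersion sigma and radius R to its dynamical mass, roughly M ≈ 5·sigma²·R/G, where the prefactor depends on the assumed mass profile. The inputs below are round, Coma-like illustration values, not Zwicky's actual numbers:

```python
# Order-of-magnitude virial mass estimate for a galaxy cluster.
# M_virial ~ 5 * sigma^2 * R / G (prefactor depends on the mass profile).
# Input values are illustrative, Coma-like round numbers.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # one solar mass, kg
MPC = 3.086e22      # one megaparsec, m

sigma = 1.0e6       # line-of-sight velocity dispersion, m/s (~1000 km/s)
radius = 1.0 * MPC  # characteristic cluster radius

m_virial = 5 * sigma**2 * radius / G
print(f"virial mass ~ {m_virial / M_SUN:.1e} solar masses")
```

The result is of order 10^15 solar masses, vastly more than the luminous mass one infers from the cluster's galaxies alone, which is the core of Zwicky's discrepancy.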
The first robust indications that the mass-to-light ratio was anything other than unity came from measurements of galaxy rotation curves. In 1939, Horace W. Babcock reported the rotation curve for the Andromeda nebula (known now as the Andromeda Galaxy), which suggested that the mass-to-luminosity ratio increases radially. He attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral and not to missing matter.
Vera Rubin and Kent Ford in the 1960s and 1970s provided further strong evidence, also using galaxy rotation curves. Rubin worked with a new spectrograph to measure the velocity curve of edge-on spiral galaxies with greater accuracy. This result was confirmed in 1978. An influential paper presented Rubin's results in 1980. Rubin found that most galaxies must contain about six times as much dark as visible mass; thus, by around 1980 the apparent need for dark matter was widely recognized as a major unsolved problem in astronomy.
At the same time that Rubin and Ford were exploring optical rotation curves, radio astronomers were making use of new radio telescopes to map the 21 cm line of atomic hydrogen in nearby galaxies. The radial distribution of interstellar atomic hydrogen (HI) often extends to much larger galactic radii than those accessible by optical studies, allowing the sampling of rotation curves – and thus of the total mass distribution – to a new dynamical regime. Early mapping of Andromeda with the 300-foot telescope at Green Bank and the 250-foot dish at Jodrell Bank already showed that the HI rotation curve did not trace the expected Keplerian decline. As more sensitive receivers became available, Morton Roberts and Robert Whitehurst were able to trace the rotational velocity of Andromeda to 30 kpc, much beyond the optical measurements. Illustrating the advantage of tracing the gas disk at large radii, Figure 16 of that paper combines the optical data (the cluster of points at radii of less than 15 kpc with a single point further out) with the HI data between 20 and 30 kpc, exhibiting the flatness of the outer galaxy rotation curve; the solid curve peaking at the center is the optical surface density, while the other curve shows the cumulative mass, still rising linearly at the outermost measurement. In parallel, the use of interferometric arrays for extragalactic HI spectroscopy was being developed. In 1972, David Rogstad and Seth Shostak published HI rotation curves of five spirals mapped with the Owens Valley interferometer; the rotation curves of all five were very flat, suggesting very large values of mass-to-light ratio in the outer parts of their extended HI disks.
A stream of observations in the 1980s supported the presence of dark matter, including gravitational lensing of background objects by galaxy clusters, the temperature distribution of hot gas in galaxies and clusters, and the pattern of anisotropies in the cosmic microwave background. According to consensus among cosmologists, dark matter is composed primarily of a not yet characterized type of subatomic particle. The search for this particle, by a variety of means, is one of the major efforts in particle physics.
In standard cosmology, matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a⁻³. This is in contrast to radiation, which scales as the inverse fourth power of the scale factor, ρ ∝ a⁻⁴, and a cosmological constant, which is independent of a. These scalings can be understood intuitively: for an ordinary particle in a cubical box, doubling the length of the sides of the box decreases the density (and hence energy density) by a factor of eight (2³). For radiation, the decrease in energy density is larger because an increase in scale factor causes a proportional redshift. A cosmological constant, as an intrinsic property of space, has a constant energy density regardless of the volume under consideration.
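These three scaling behaviours can be written out directly. A minimal sketch (densities normalized to 1 at a = 1):

```python
# How energy density dilutes with the scale factor a for each component.
# Doubling a: matter drops 8x (volume), radiation 16x (volume + redshift),
# a cosmological constant is unchanged.

def rho_matter(a, rho0=1.0):
    return rho0 * a**-3       # volume dilution only

def rho_radiation(a, rho0=1.0):
    return rho0 * a**-4       # volume dilution plus one redshift factor

def rho_lambda(a, rho0=1.0):
    return rho0               # constant energy density of space itself

a = 2.0
print(rho_matter(a), rho_radiation(a), rho_lambda(a))  # 0.125 0.0625 1.0
```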
In principle, "dark matter" means all components of the universe that are not visible but still obey ρ ∝ a⁻³. In practice, the term "dark matter" is often used to mean only the non-baryonic component of dark matter, i.e., excluding "missing baryons." Context will usually indicate which meaning is intended.
Galaxy rotation curves
The arms of spiral galaxies rotate around the galactic center. The luminous mass density of a spiral galaxy decreases as one goes from the center to the outskirts. If luminous mass were all the matter, the galaxy could be modelled as a point mass at the center with test masses orbiting around it, similar to the Solar System. From Newtonian dynamics (equivalently, Kepler's third law), the rotation velocities would then be expected to decrease with distance from the center, as in the Solar System. This is not observed. Instead, the galaxy rotation curve remains flat as distance from the center increases.
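The expected Keplerian decline can be made concrete with a short sketch. The central mass and radii below are illustrative assumptions, not measured values for any particular galaxy:

```python
# Expected Keplerian rotation speed if all mass M sat at the centre: v = sqrt(G M / r).
import math

G = 6.674e-11
M_SUN = 1.989e30
KPC = 3.086e19           # one kiloparsec in metres

M = 1.0e11 * M_SUN       # illustrative "luminous" mass concentrated at the centre

def v_kepler(r_kpc):
    """Circular speed (km/s) at radius r_kpc for a central point mass."""
    return math.sqrt(G * M / (r_kpc * KPC)) / 1e3

for r in (5, 10, 20, 40):
    print(f"r = {r:2d} kpc: v_Kepler = {v_kepler(r):5.0f} km/s")
# The prediction falls as 1 / sqrt(r); observed rotation curves stay
# near-constant out to large radii instead, implying extra unseen mass.
```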
If Kepler's laws are correct, then the obvious way to resolve this discrepancy is to conclude that the mass distribution in spiral galaxies is not similar to that of the Solar System. In particular, there is a lot of non-luminous matter (dark matter) in the outskirts of the galaxy.
Stars in bound systems must obey the virial theorem. The theorem, together with the measured velocity distribution, can be used to measure the mass distribution in a bound system, such as elliptical galaxies or globular clusters. With some exceptions, velocity dispersion estimates of elliptical galaxies do not match the predicted velocity dispersion from the observed mass distribution, even assuming complicated distributions of stellar orbits.
As with galaxy rotation curves, the obvious way to resolve the discrepancy is to postulate the existence of non-luminous matter.
Galaxy clusters are particularly important for dark matter studies since their masses can be estimated in three independent ways:
- From the scatter in radial velocities of the galaxies within clusters
- From X-rays emitted by hot gas in the clusters. From the X-ray energy spectrum and flux, the gas temperature and density can be estimated, hence giving the pressure; assuming pressure and gravity balance determines the cluster's mass profile.
- Gravitational lensing (usually of more distant galaxies) can measure cluster masses without relying on observations of dynamics (e.g., velocity).
Generally, these three methods are in reasonable agreement that dark matter outweighs visible matter by approximately 5 to 1.
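The second method above (pressure-gravity balance of the X-ray gas) can be sketched as follows. The gas temperature, density slope, and radius are illustrative assumptions for a rich cluster, and the isothermal power-law profile is a simplification:

```python
# Hydrostatic-equilibrium cluster mass from X-ray gas, a minimal sketch.
# Assumes isothermal gas with density n proportional to r^-2,
# so dln(n)/dln(r) = -2 and dln(T)/dln(r) = 0.

G = 6.674e-11
K_B = 1.381e-23          # Boltzmann constant, J/K (not used directly; kT given in keV)
M_P = 1.673e-27          # proton mass, kg
M_SUN = 1.989e30
MPC = 3.086e22

kT = 7.0 * 1.602e-16     # gas temperature ~7 keV, converted to joules (assumed)
mu = 0.6                 # mean molecular weight of a fully ionised plasma
r = 1.0 * MPC

dln_n = -2.0             # assumed density slope
dln_T = 0.0              # isothermal

# M(<r) = -(kT r / (G mu m_p)) * (dln n/dln r + dln T/dln r)
mass = -(kT * r / (G * mu * M_P)) * (dln_n + dln_T)
print(f"M(<1 Mpc) ~ {mass / M_SUN:.1e} solar masses")
```

The result, several times 10^14 solar masses within 1 Mpc, is consistent in order of magnitude with the virial and lensing estimates, and again far exceeds the luminous mass.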
One of the consequences of general relativity is that massive objects (such as a cluster of galaxies) lying between a more distant source (such as a quasar) and an observer should act as a lens to bend the light from this source. The more massive an object, the more lensing is observed.
Strong lensing is the observed distortion of background galaxies into arcs when their light passes through such a gravitational lens. It has been observed around many distant clusters including Abell 1689. By measuring the distortion geometry, the mass of the intervening cluster can be obtained. In the dozens of cases where this has been done, the mass-to-light ratios obtained correspond to the dynamical dark matter measurements of clusters. Lensing can lead to multiple copies of an image. By analyzing the distribution of multiple image copies, scientists have been able to deduce and map the distribution of dark matter around the MACS J0416.1-2403 galaxy cluster.
Weak gravitational lensing investigates minute distortions of galaxies, using statistical analyses from vast galaxy surveys. By examining the apparent shear deformation of the adjacent background galaxies, the mean distribution of dark matter can be characterized. The mass-to-light ratios correspond to dark matter densities predicted by other large-scale structure measurements. Dark matter does not bend light itself; mass (in this case the mass of the dark matter) bends spacetime. Light follows the curvature of spacetime, resulting in the lensing effect.
Cosmic microwave background
Although both dark matter and ordinary matter are matter, they do not behave in the same way. In particular, in the early universe, ordinary matter was ionized and interacted strongly with radiation via Thomson scattering. Dark matter does not interact directly with radiation, but it does affect the CMB by its gravitational potential (mainly on large scales), and by its effects on the density and velocity of ordinary matter. Ordinary and dark matter perturbations, therefore, evolve differently with time and leave different imprints on the cosmic microwave background (CMB).
The cosmic microwave background is very close to a perfect blackbody but contains very small temperature anisotropies of a few parts in 100,000. A sky map of anisotropies can be decomposed into an angular power spectrum, which is observed to contain a series of acoustic peaks at near-equal spacing but different heights. The series of peaks can be predicted for any assumed set of cosmological parameters by modern computer codes such as CMBFast and CAMB, and matching theory to data, therefore, constrains cosmological parameters. The first peak mostly constrains the density of baryonic matter, while the third peak relates mostly to the density of dark matter; together they measure the density of atoms and the total density of matter.
The CMB anisotropy was first discovered by COBE in 1992, though its resolution was too coarse to detect the acoustic peaks. After the discovery of the first acoustic peak by the balloon-borne BOOMERanG experiment in 2000, the power spectrum was precisely observed by WMAP in 2003–2012, and even more precisely by the Planck spacecraft in 2013–2015. The results support the Lambda-CDM model.
The observed CMB angular power spectrum provides powerful evidence in support of dark matter, as its precise structure is well fitted by the Lambda-CDM model but difficult to reproduce with any competing model such as MOND.
Structure formation refers to the period after the Big Bang when density perturbations collapsed to form stars, galaxies, and clusters. Prior to structure formation, the Friedmann solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures. Ordinary matter is affected by radiation, which is the dominant element of the universe at very early times. As a result, its density perturbations are washed out and unable to condense into structure. If there were only ordinary matter in the universe, there would not have been enough time for density perturbations to grow into the galaxies and clusters currently seen.
Dark matter provides a solution to this problem because it is unaffected by radiation. Therefore, its density perturbations can grow first. The resulting gravitational potential acts as an attractive potential well for ordinary matter collapsing later, speeding up the structure formation process.
If dark matter does not exist, then the next most likely explanation is that general relativity – the prevailing theory of gravity – is incorrect. The Bullet Cluster, the result of a recent collision of two galaxy clusters, provides a challenge for modified gravity theories because its apparent center of mass is far displaced from the baryonic center of mass. Standard dark matter theory can easily explain this observation, but modified gravity has a much harder time, especially since the observational evidence is model-independent.
Type Ia supernova distance measurements
Type Ia supernovae can be used as standard candles to measure extragalactic distances, which can in turn be used to measure how fast the universe has expanded in the past. The data indicates that the universe is expanding at an accelerating rate, the cause of which is usually ascribed to dark energy. Since observations indicate the universe is almost flat, the total energy density of everything in the universe is expected to sum to 1 (Ωtot ~ 1). The measured dark energy density is ΩΛ = ~0.690; the observed ordinary (baryonic) matter energy density is Ωb = ~0.0482 and the energy density of radiation is negligible. This leaves a missing Ωdm = ~0.258 that nonetheless behaves like matter (see technical definition section above) – dark matter.
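The budget arithmetic here is simply the closure condition for a flat universe, using the rounded values quoted above:

```python
# Closure of the energy budget for a flat universe (Omega_tot ~ 1).

omega_lambda = 0.690     # dark energy
omega_baryon = 0.0482    # ordinary (baryonic) matter
omega_radiation = 0.0    # negligible today

# Whatever is left over must behave like matter but emit no light: dark matter.
omega_dm = 1.0 - omega_lambda - omega_baryon - omega_radiation
print(f"Omega_dm ~ {omega_dm:.3f}")   # ~0.262, close to the quoted ~0.258
```

The small difference from the quoted ~0.258 reflects rounding of the input densities, not a physical discrepancy.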
Sky surveys and baryon acoustic oscillations
Baryon acoustic oscillations (BAO) are regular, periodic fluctuations in the density of the visible baryonic matter (normal matter) of the universe. These are predicted to arise in the Lambda-CDM model due to the early universe's acoustic oscillations in the photon-baryon fluid and can be observed in the cosmic microwave background angular power spectrum. BAOs set up a preferred length scale for baryons. As the dark matter and baryons clumped together after recombination, the effect is much weaker in the galaxy distribution in the nearby universe, but is detectable as a subtle (~ 1 percent) preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130 or 160 Mpc. This feature was predicted theoretically in the 1990s and then discovered in 2005, in two large galaxy redshift surveys, the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Combining the CMB observations with BAO measurements from galaxy redshift surveys provides a precise estimate of the Hubble constant and the average matter density in the Universe. The results support the Lambda-CDM model.
Large galaxy redshift surveys may be used to make a three-dimensional map of the galaxy distribution. These maps are slightly distorted because distances are estimated from observed redshifts; the redshift contains a contribution from the galaxy's so-called peculiar velocity in addition to the dominant Hubble expansion term. On average, superclusters are expanding but more slowly than the cosmic mean due to their gravity, while voids are expanding faster than average. In a redshift map, galaxies in front of a supercluster have excess radial velocities towards it and have redshifts slightly higher than their distance would imply, while galaxies behind the supercluster have redshifts slightly low for their distance. This effect causes superclusters to appear squashed in the radial direction, and likewise voids are stretched. Their angular positions are unaffected. The effect is not detectable for any one structure since the true shape is not known, but can be measured by averaging over many structures assuming Earth is not at a special location in the Universe.
In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data.
Composition of dark matter: baryonic vs. nonbaryonic
Dark matter can refer to any substance that interacts predominantly via gravity with visible matter (e.g., stars and planets). Hence in principle it need not be composed of a new type of fundamental particle but could, at least in part, be made up of standard baryonic matter, such as protons or neutrons. However, for the reasons outlined below, most scientists think the dark matter is dominated by a non-baryonic component, which is likely composed of a currently unknown fundamental particle (or similar exotic state).
Baryons (protons and neutrons) make up ordinary stars and planets. However, baryonic matter also encompasses less common black holes, neutron stars, faint old white dwarfs and brown dwarfs, collectively known as massive compact halo objects (MACHOs), which can be hard to detect.
However multiple lines of evidence suggest the majority of dark matter is not made of baryons:
- Sufficient diffuse, baryonic gas or dust would be visible when backlit by stars.
- The theory of Big Bang nucleosynthesis predicts the observed abundance of the chemical elements. If there are more baryons, then there should also be more helium, lithium and heavier elements synthesized during the Big Bang. Agreement with observed abundances requires that baryonic matter makes up between 4–5% of the universe's critical density. In contrast, large-scale structure and other observations indicate that the total matter density is about 30% of the critical density.
- Astronomical searches for gravitational microlensing in the Milky Way found that at most a small fraction of the dark matter may be in dark, compact, conventional objects (MACHOs, etc.); the excluded range of object masses is from half the Earth's mass up to 30 solar masses, which covers nearly all the plausible candidates.
- Detailed analysis of the small irregularities (anisotropies) in the cosmic microwave background. Observations by WMAP and Planck indicate that around five-sixths of the total matter is in a form that interacts with ordinary matter or photons only through gravitational effects.
Candidates for non-baryonic dark matter are hypothetical particles such as axions, sterile neutrinos, weakly-interacting massive particles (WIMPs), gravitationally-interacting massive particles (GIMPs), or supersymmetric particles. The three neutrino types already observed are indeed abundant, dark, and matter, but because their individual masses – however uncertain they may be – are almost certainly tiny, they can only supply a small fraction of dark matter, due to limits derived from large-scale structure and high-redshift galaxies.
Unlike baryonic matter, nonbaryonic matter did not contribute to the formation of the elements in the early universe (Big Bang nucleosynthesis) and so its presence is revealed only via its gravitational effects, such as weak lensing. In addition, if the particles of which it is composed are supersymmetric, they can undergo annihilation interactions with themselves, possibly resulting in observable by-products such as gamma rays and neutrinos (indirect detection).
Dark matter aggregation and dense dark matter objects
If dark matter is as common as observations suggest, an obvious question is whether it can form objects equivalent to planets, stars, or black holes. The answer has historically been that it cannot, because of two factors:
- It lacks an efficient means to lose energy: Ordinary matter forms dense objects because it has numerous ways to lose energy. Losing energy is essential for object formation, because a particle that gains energy during compaction or falling "inward" under gravity, and cannot lose it any other way, will heat up and increase its velocity and momentum. Dark matter appears to lack means to lose energy, simply because it is not capable of interacting strongly except through gravity. The virial theorem suggests that such a particle would not stay bound to the gradually forming object: as the object began to form and compact, the dark matter particles within it would speed up and tend to escape.
- It lacks a range of interactions needed to form structures: Ordinary matter interacts in many different ways, which allows it to form more complex structures. For example, stars form through gravity, but the particles within them interact and can emit energy in the form of neutrinos and electromagnetic radiation through fusion when they become energetic enough. Protons and neutrons can bind via the strong interaction and then form atoms with electrons largely through the electromagnetic interaction. There is, however, no evidence that dark matter is capable of such a wide variety of interactions, since it only seems to interact through gravity and through some means no stronger than the weak interaction (although this is speculative until dark matter is better understood).
This question has been debated heavily in recent years. In 2016–2017 the idea of dense dark matter, or of dark matter being black holes, including primordial black holes, made a comeback following the detection of gravitational waves. These candidates were again ruled out in December 2017, but research and theories based on them still continued as of 2018, including approaches to dark matter cooling, and the question is by no means settled.
Classification of dark matter: cold, warm or hot
Dark matter can be divided into cold, warm, and hot categories. These categories refer to velocity rather than an actual temperature, indicating how far corresponding objects moved due to random motions in the early universe, before they slowed due to cosmic expansion – this is an important distance called the free streaming length (FSL). Primordial density fluctuations smaller than this length get washed out as particles spread from overdense to underdense regions, while larger fluctuations are unaffected; therefore this length sets a minimum scale for later structure formation. The categories are set with respect to the size of a protogalaxy (an object that later evolves into a dwarf galaxy): dark matter particles are classified as cold, warm, or hot according to whether their FSL is much smaller than (cold), comparable to (warm), or much larger than (hot) a protogalaxy.
Cold dark matter leads to a bottom-up formation of structure while hot dark matter would result in a top-down formation scenario; the latter is excluded by high-redshift galaxy observations.
Candidate particles can be grouped into three categories on the basis of their effect on the fluctuation spectrum (Bond et al. 1983). If the dark matter is composed of abundant light particles which remain relativistic until shortly before recombination, then it may be termed "hot". The best candidate for hot dark matter is a neutrino ... A second possibility is for the dark matter particles to interact more weakly than neutrinos, to be less abundant, and to have a mass of order 1 keV. Such particles are termed "warm dark matter", because they have lower thermal velocities than massive neutrinos ... there are at present few candidate particles which fit this description. Gravitinos and photinos have been suggested (Pagels and Primack 1982; Bond, Szalay and Turner 1982) ... Any particles which became nonrelativistic very early, and so were able to diffuse a negligible distance, are termed "cold" dark matter (CDM). There are many candidates for CDM including supersymmetric particles.— M. Davis, G. Efstathiou, C. S. Frenk, and S. D. M. White, The evolution of large-scale structure in a universe dominated by cold dark matter
Another approximate dividing line is that warm dark matter became non-relativistic when the universe was approximately 1 year old and 1 millionth of its present size and in the radiation-dominated era (photons and neutrinos), with a photon temperature 2.7 million K. Standard physical cosmology gives the particle horizon size as 2ct (speed of light multiplied by time) in the radiation-dominated era, thus 2 light-years. A region of this size would expand to 2 million light years today (absent structure formation). The actual FSL is approximately 5 times the above length, since it continues to grow slowly as particle velocities decrease inversely with the scale factor after they become non-relativistic. In this example the FSL would correspond to 10 million light-years or 3 Mpc today, around the size containing an average large galaxy.
The 2.7 million K photon temperature gives a typical photon energy of 250 electron-volts, thereby setting a typical mass scale for warm dark matter: particles much more massive than this, such as GeV – TeV mass WIMPs, would become non-relativistic much earlier than 1 year after the Big Bang and thus have FSLs much smaller than a protogalaxy, making them cold. Conversely, much lighter particles, such as neutrinos with masses of only a few eV, have FSLs much larger than a protogalaxy, thus qualifying them as hot.
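This mass-scale argument can be expressed as a rough classifier. The factor-of-100 cutoffs below are arbitrary illustration of "much more" and "much less" massive, not physical boundaries:

```python
# Rough hot/warm/cold classification from when a particle of mass m becomes
# non-relativistic: roughly when the photon temperature satisfies k_B * T ~ m c^2.
# The ~250 eV scale quoted above marks the warm regime; heavier -> cold, lighter -> hot.

def classify(mass_eV, warm_scale_eV=250.0):
    """Heuristic only: compares rest energy with the quoted warm-dark-matter scale."""
    if mass_eV > 100 * warm_scale_eV:
        return "cold"        # non-relativistic long before t ~ 1 yr; FSL << protogalaxy
    if mass_eV < warm_scale_eV / 100:
        return "hot"         # still relativistic much later; FSL >> protogalaxy
    return "warm"

print(classify(100e9))   # 100 GeV WIMP -> cold
print(classify(1000))    # keV-scale sterile neutrino -> warm
print(classify(0.1))     # sub-eV neutrino -> hot
```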
Cold dark matter
Cold dark matter offers the simplest explanation for most cosmological observations. It is dark matter composed of constituents with an FSL much smaller than a protogalaxy. This is the focus for dark matter research, as hot dark matter does not seem capable of supporting galaxy or galaxy-cluster formation, and most proposed particle candidates became non-relativistic very early.
The constituents of cold dark matter are unknown. Possibilities range from large objects like MACHOs (such as black holes) or RAMBOs (such as clusters of brown dwarfs), to new particles such as WIMPs and axions.
Studies of Big Bang nucleosynthesis and gravitational lensing convinced most cosmologists that MACHOs cannot make up more than a small fraction of dark matter. According to A. Peter: "... the only really plausible dark-matter candidates are new particles."
The 1997 DAMA/NaI experiment and its successor DAMA/LIBRA in 2013 claimed to directly detect dark matter particles passing through the Earth, but many researchers remain skeptical, as negative results from similar experiments seem incompatible with the DAMA results.
Many supersymmetric models offer dark matter candidates in the form of the WIMPy Lightest Supersymmetric Particle (LSP). Separately, heavy sterile neutrinos exist in non-supersymmetric extensions to the standard model that explain the small neutrino mass through the seesaw mechanism.
Warm dark matter
Warm dark matter comprises particles with an FSL comparable to the size of a protogalaxy. Predictions based on warm dark matter are similar to those for cold dark matter on large scales, but with less small-scale density perturbations. This reduces the predicted abundance of dwarf galaxies and may lead to lower density of dark matter in the central parts of large galaxies. Some researchers consider this a better fit to observations. A challenge for this model is the lack of particle candidates with the required mass ~ 300 eV to 3000 eV.
No known particles can be categorized as warm dark matter. A postulated candidate is the sterile neutrino: a heavier, slower form of neutrino that does not interact through the weak force, unlike other neutrinos. Some modified gravity theories, such as scalar-tensor-vector gravity, require "warm" dark matter to make their equations work.
Hot dark matter
Hot dark matter consists of particles whose FSL is much larger than the size of a protogalaxy. The neutrino qualifies as such a particle. Neutrinos were discovered independently, long before the hunt for dark matter: they were postulated in 1930 and detected in 1956. A neutrino's mass is less than 10⁻⁶ that of an electron. Neutrinos interact with normal matter only via gravity and the weak force, making them difficult to detect (the weak force only works over a small distance, thus a neutrino triggers a weak-force event only if it hits a nucleus head-on). This makes them 'weakly interacting light particles' (WILPs), as opposed to WIMPs.
The three known flavours of neutrinos are the electron, muon, and tau. Their masses are slightly different. Neutrinos oscillate among the flavours as they move. It is hard to determine an exact upper bound on the collective average mass of the three neutrinos (or for any of the three individually). For example, if the average neutrino mass were over 50 eV/c² (less than 10⁻⁵ of the mass of an electron), the universe would collapse. CMB data and other methods indicate that their average mass probably does not exceed 0.3 eV/c². Thus, observed neutrinos cannot explain dark matter.
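The shortfall can be made explicit with the standard relic-abundance relation for light neutrinos, Ω_ν h² = Σm_ν / 93.14 eV. The Hubble parameter value below is an assumed illustrative figure:

```python
# Why light neutrinos cannot be the dark matter: their cosmological density
# follows Omega_nu * h^2 = (sum of neutrino masses) / 93.14 eV.

RELIC_EV = 93.14         # eV; standard conversion factor for relic neutrinos
h = 0.674                # dimensionless Hubble parameter (assumed value)

def omega_nu(sum_masses_eV):
    """Neutrino density parameter for a given summed mass in eV."""
    return sum_masses_eV / RELIC_EV / h**2

# Even at the ~0.3 eV/c^2 average bound quoted above (0.9 eV summed over 3 flavours):
print(f"Omega_nu ~ {omega_nu(3 * 0.3):.4f}")  # ~2 percent, far below the ~0.26 needed
```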
Because galaxy-size density fluctuations get washed out by free-streaming, hot dark matter implies that the first objects that can form are huge supercluster-size pancakes, which then fragment into galaxies. Deep-field observations show instead that galaxies formed first, followed by clusters and superclusters as galaxies clump together.
Detection of dark matter particles
If dark matter is made up of sub-atomic particles, then millions, possibly billions, of such particles must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis. Although WIMPs are popular search candidates, the Axion Dark Matter eXperiment (ADMX) searches for axions. Another candidate is heavy hidden sector particles that only interact with ordinary matter via gravity.
These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector; and indirect detection experiments, which look for the products of dark matter particle annihilations or decays.
Direct detection experiments aim to observe low-energy recoils (typically a few keVs) of nuclei induced by interactions with particles of dark matter, which (in theory) are passing through the Earth. After such a recoil the nucleus will emit energy as, e.g., scintillation light or phonons, which is then detected by sensitive apparatus. To do this effectively it is crucial to maintain a low background, and so such experiments operate deep underground to reduce the interference from cosmic rays. Examples of underground laboratories with direct detection experiments include the Stawell mine, the Soudan mine, the SNOLAB underground laboratory at Sudbury, the Gran Sasso National Laboratory, the Canfranc Underground Laboratory, the Boulby Underground Laboratory, the Deep Underground Science and Engineering Laboratory and the China Jinping Underground Laboratory.
These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors, operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include: CDMS, CRESST, EDELWEISS, EURECA. Noble liquid experiments include ZEPLIN, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (which scatter off nuclei). Other experiments include SIMPLE and PICASSO.
To date there has been no well-established claim of dark matter detection from a direct detection experiment, leading instead to strong upper limits on the mass and interaction cross section with nucleons of such dark matter particles. The DAMA/NaI and more recent DAMA/LIBRA experimental collaborations claim to have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matter. This results from the expectation that as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount. This claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX and SuperCDMS.
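The annual modulation follows from simple kinematics: the Earth's orbital velocity alternately adds to and subtracts from the Sun's motion through the halo. The velocities, inclination, and peak day below are rough illustrative values, not the collaboration's fitted parameters:

```python
# Expected annual modulation of the detector's speed through the dark matter halo,
# the effect underlying the DAMA claim. Simple cosine model.
import math

V_SUN = 232.0            # Sun's speed through the halo, km/s (assumed)
V_EARTH = 29.8           # Earth's orbital speed, km/s
INCLINATION = math.radians(60)  # angle between Earth's orbit and solar motion (approx.)

def detector_speed(day_of_year):
    """Speed of the detector through the halo on a given day (km/s)."""
    phase = 2 * math.pi * (day_of_year - 152) / 365.25   # peak taken near June 1
    return V_SUN + V_EARTH * math.cos(INCLINATION) * math.cos(phase)

print(f"June:     {detector_speed(152):.1f} km/s")
print(f"December: {detector_speed(335):.1f} km/s")
# A few-percent velocity swing translates into a small seasonal variation
# in the expected event rate, which is the signature DAMA reports.
```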
A special case of direct detection experiments covers those with directional sensitivity. This is a search strategy based on the motion of the Solar System around the Galactic Center. A low pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun travels (approximately towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.
Indirect detection experiments search for the products of the self-annihilation or decay of dark matter particles in outer space. For example, in regions of high dark matter density (e.g., the centre of our galaxy) two dark matter particles could annihilate to produce gamma rays or Standard Model particle-antiparticle pairs. Alternatively if the dark matter particle is unstable, it could decay into standard model (or other) particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from high density regions in our galaxy or others. A major difficulty inherent in such searches is that various astrophysical sources can mimic the signal expected from dark matter, and so multiple signals are likely required for a conclusive discovery.
A few of the dark matter particles passing through the Sun or Earth may scatter off atoms and lose energy. Dark matter may thus accumulate at the center of these bodies, increasing the chance of collision/annihilation. This could produce a distinctive signal in the form of high-energy neutrinos, which would be strong indirect evidence of WIMP dark matter. High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal. The detection of gravitational waves by LIGO in September 2015 opens the possibility of observing dark matter in a new way, particularly if it is in the form of primordial black holes.
Many experimental searches have been undertaken to look for such emission from dark matter annihilation or decay, examples of which follow. The Energetic Gamma Ray Experiment Telescope observed an excess of gamma rays from the Milky Way, but in 2008 scientists concluded that this was most likely due to incorrect estimation of the telescope's sensitivity.
The Fermi Gamma-ray Space Telescope is searching for similar gamma rays. In April 2012, an analysis of previously available data from its Large Area Telescope instrument produced statistical evidence of a 130 GeV signal in the gamma radiation coming from the center of the Milky Way. WIMP annihilation was seen as the most probable explanation.
In 2013 results from the Alpha Magnetic Spectrometer on the International Space Station indicated excess high-energy cosmic rays that could be due to dark matter annihilation.
Collider searches for dark matter
An alternative approach to the detection of dark matter particles in nature is to produce them in a laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect dark matter particles produced in collisions of the LHC proton beams. Because a dark matter particle should have negligible interactions with normal visible matter, it may be detected indirectly as (large amounts of) missing energy and momentum that escape the detectors, provided other (non-negligible) collision products are detected. Constraints on dark matter also exist from the LEP experiment using a similar principle, but probing the interaction of dark matter particles with electrons rather than quarks. Any discovery from collider searches must, however, be corroborated by discoveries in the indirect or direct detection sectors to prove that the particle discovered is, in fact, the dark matter of our Universe.
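The missing-momentum idea can be illustrated with a toy calculation (a sketch, not actual detector software): since the momentum transverse to the beam must balance, the negative vector sum of the transverse momenta of all detected particles estimates what escaped unseen.

```python
import math

def missing_transverse_momentum(detected):
    """Toy missing-pT estimate from a list of (px, py) pairs in GeV.

    In a real collider event the transverse momentum of the initial state is
    ~zero, so any imbalance among the detected products points to invisible
    particles -- neutrinos, or hypothetically dark matter.
    """
    px = -sum(p[0] for p in detected)
    py = -sum(p[1] for p in detected)
    return math.hypot(px, py)

# Hypothetical event: two jets recoiling against something invisible.
event = [(120.0, 35.0), (80.0, -10.0)]
met = missing_transverse_momentum(event)
print(f"missing pT ~ {met:.1f} GeV")
```

A balanced event (all momentum accounted for) gives zero missing pT; large missing pT plus visible recoil products is the classic collider signature searched for.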
Because dark matter remains to be conclusively identified, many other hypotheses have emerged aiming to explain the observational phenomena that dark matter was conceived to explain. The most common method is to modify general relativity. General relativity is well-tested on solar system scales, but its validity on galactic or cosmological scales has not been well proven. A suitable modification to general relativity can conceivably eliminate the need for dark matter. The most well-known theories of this class are MOND and its relativistic generalization TeVeS, f(R) gravity and entropic gravity. Alternative theories abound.
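The best-known of these alternatives, MOND, can be illustrated with a back-of-the-envelope sketch. The numbers below, including the acceleration scale a0 ≈ 1.2×10⁻¹⁰ m/s², are commonly quoted values assumed for illustration, not taken from this article: in its deep low-acceleration limit MOND predicts v⁴ = G·M·a0, a rotation speed independent of radius, which is how it reproduces flat rotation curves from visible matter alone.

```python
G = 6.674e-11        # m^3 kg^-1 s^-2, Newton's constant
A0 = 1.2e-10         # m/s^2, MOND acceleration scale (commonly quoted value)
M_SUN = 1.989e30     # kg

def mond_flat_velocity(mass_kg):
    """Asymptotic rotation speed (m/s) in MOND's deep low-acceleration limit,
    where v**4 = G * M * a0, independent of radius -- hence flat curves."""
    return (G * mass_kg * A0) ** 0.25

# A galaxy with ~5e10 solar masses of visible matter:
v = mond_flat_velocity(5e10 * M_SUN)
print(f"predicted flat rotation speed ~ {v / 1000:.0f} km/s")
```

The v ∝ M^(1/4) scaling this formula implies is the Tully–Fisher-like relation that MOND advocates cite as a success of the theory.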
A problem with alternative hypotheses is that the observational evidence for dark matter comes from so many independent approaches (see the "observational evidence" section above). Explaining any individual observation is possible but explaining all of them is very difficult. Nonetheless, there have been some scattered successes for alternative hypotheses, such as a 2016 test of gravitational lensing in entropic gravity.
The prevailing opinion among most astrophysicists is that while modifications to general relativity can conceivably explain part of the observational evidence, there is probably enough data to conclude there must be some form of dark matter.
In philosophy of science
In philosophy of science, dark matter is an example of an auxiliary hypothesis, an ad hoc postulate that is added to a theory in response to observations that falsify it. It has been argued that the dark matter hypothesis is a conventionalist hypothesis, that is, a hypothesis that adds no empirical content and hence is unfalsifiable in the sense defined by Karl Popper.
In popular culture
Mention of dark matter is made in works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties. Such descriptions are often inconsistent with the hypothesized properties of dark matter in physics and cosmology.
Related theories
- Dark energy
- Conformal gravity
- Entropic gravity
- Dark electromagnetism
- Massive gravity
- Unparticle physics
- A small portion of dark matter could be baryonic. See Baryonic dark matter.
- Since dark energy, by convention, does not count as matter, this is 26.8/(4.9 + 26.8)=0.845
- This is a consequence of the shell theorem and the observation that spiral galaxies are spherically symmetric to a large extent (in 2D).
- Astronomers define the term baryonic matter to refer to ordinary matter made of protons, neutrons and electrons, including neutron stars and black holes from the collapse of ordinary matter. Strictly speaking, electrons are leptons not baryons; but since their number is equal to the protons while their mass is far smaller, electrons give a negligible contribution to the average density of baryonic matter. Baryonic matter excludes other known particles such as photons and neutrinos. Hypothetical primordial black holes are also generally defined as non-baryonic, since they would have formed from radiation, not matter.
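The fraction quoted in the notes above follows directly from the rounded Planck percentages; a one-line check:

```python
dark_matter = 26.8   # percent of total mass-energy (Planck figure, as quoted above)
ordinary = 4.9       # percent of total mass-energy in ordinary (baryonic) matter

# Dark energy does not count as matter, so divide by matter only:
fraction_of_matter = dark_matter / (dark_matter + ordinary)
print(f"dark matter is ~{fraction_of_matter:.3f} of all matter")  # ~0.845
```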
- Brouwer, Margot M.; et al. (11 December 2016). "First test of Verlinde's theory of Emergent Gravity using Weak Gravitational Lensing measurements". Monthly Notices of the Royal Astronomical Society. 466 (to appear): 2547–2559. arXiv: . Bibcode:2017MNRAS.466.2547B. doi:10.1093/mnras/stw3192.
- "First test of rival to Einstein's gravity kills off dark matter". 15 December 2016. Retrieved 20 February 2017.
- Sean Carroll (9 May 2012). "Dark Matter vs. Modified Gravity: a Trialogue". Retrieved 14 February 2017.
- Merritt, David "Cosmology and Convention", Studies In History and Philosophy of Science Part B: Studies In History and Philosophy of Modern Physics, 57(1):41–52, February 2017.
Researchers from the Max Planck Institute for Marine Microbiology in Bremen and their colleagues from the Helmholtz-Zentrum für Umweltforschung (UFZ) in Leipzig discovered microbial communities thriving on the hydrocarbon butane without the help of molecular oxygen. The microbial consortia, obtained from hydrothermally heated sediments in Guaymas Basin, Gulf of California, use unprecedented biochemistry to feed on butane.
Gaseous hydrocarbons in the seafloor
Cell-specific visualization of the butane-oxidizing consortia: in red, the archaea Candidatus Syntrophoarchaeum butanivorans; in green, the bacteria HotSeep-1
Rafael Laso-Pérez, Max Planck Institute for Marine Microbiology, and Victoria Orphan, Caltech, USA
Natural gas is often released at the seafloor surface. In the upper sediment layers methane is formed by methanogenic archaea (see box). Economically more valuable gas reservoirs are however situated in deeper layers of the seafloor.
Here gas forms in purely chemical reactions from geothermally heated biomass of plant, animal and microbial origin. Along with methane, the thermogenic (thermally generated) gas contains other hydrocarbons such as propane and butane.
As anyone who has lit a gas stove, used a lighter or started a gas-powered car knows, burning gas requires oxygen. Many microorganisms likewise consume hydrocarbon gases using oxygen, for instance in well-oxygenated top sediment layers. When the oxygen is used up, other microorganisms take over; however, most of their metabolic strategies for activating and fully oxidizing their substrate remain unknown.
How microorganisms consume natural gas without molecular oxygen
Distinct microorganisms specialized on the anaerobic oxidation of different hydrocarbons. Even though methane is the simplest hydrocarbon, its anaerobic oxidation (AOM) coupled to sulfate reduction involves a team of two specific partners, archaeal and bacterial. The methane-oxidizing archaea (ANME) use similar enzymes as their methane-producing (methanogenic) relatives, however in reverse direction.
To activate the methane molecule these archaea produce large quantities of the enzyme Methyl-Coenzyme M Reductase (MCR). MCR binds the methane as methyl group to the sulfur compound coenzyme M, thereby starting the oxidation process which eventually leads to carbon dioxide (CO2).
The ANME however do not possess enzymes for sulfate reduction: this part of the AOM process is covered by the sulfate-reducing partner. On the other hand, the anaerobic oxidation of the larger gases, propane and butane, was found in bacteria that couple hydrocarbon oxidation to sulfate reduction in one cell.
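The sulfate-coupled oxidation of methane described above is commonly summarized by a standard textbook net reaction (a general formulation, not a result reported in this study):

```latex
\mathrm{CH_4 + SO_4^{2-} \longrightarrow HCO_3^{-} + HS^{-} + H_2O}
```

In the AOM consortia, the two halves of this reaction are split between the partners: the archaea handle the methane-oxidation side, while the sulfate-reducing bacteria complete the electron-accepting side.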
The novel pathway for butane is based on the degradation of the anaerobic methane oxidation
In their latest research, published in Nature, microbiologists from Bremen, Leipzig, Göttingen and Bielefeld describe a new type of microbial consortium thriving on a butane diet. Similar to AOM consortia, these also consist of archaeal and bacterial species. Surprisingly, the genomes of both partner organisms lack the typical genes for anaerobic butane activation.
“Instead we found genes distantly related to the MCR of methanogenic and methanotrophic archaea. Do the encoded enzymes have a role in butane oxidation?” wondered PhD student Rafael Laso-Pérez, first author of the research article. “And if that proves to be true, what other reactions would lead to a complete oxidation of butane to carbon dioxide?” Only painstaking lab work, and many different analyses, could solve this puzzle.
To validate this theory, the microbiologist Dr. Florin Musat and his colleagues at ProVIS at the UFZ in Leipzig used ultra-high-resolution mass spectrometry to look at intracellular metabolites. His team succeeded in identifying the predicted intermediate – butyl-coenzyme M. “To date, methyl-coenzyme M reductases were known to be highly specific to the methane metabolism.
By proving the formation of butyl-coenzyme M in these archaea we demonstrate that this enzyme family can indeed handle larger hydrocarbons”, explains Dr. Musat. The research teams completed the pathway by finding other genes of the methanogenesis pathway and key elements of the butyrate and acetate metabolism.
“Combining these pathways in the right manner allows complete oxidation of butane in archaea. Evolution is ingenious: some old tricks from other species were copied and adapted. This is known as horizontal gene transfer, meaning that DNA sequences are taken up from other species,” explains Dr. Gunter Wegener, initiator of this study. “It was quite a long way to solve this puzzle, and it involved investigators from many different fields.”
Like their methane-consuming relatives, the butane-oxidizing archaea are not able to do the job on their own. As in AOM consortia, the butane oxidizers have partner bacteria. “Electron microscopy shows tiny protein compounds connecting bacterial and archaeal cells. Electrons travel along these nanowires from the archaea to the bacteria, which use them to reduce sulfate. We were thus able to show syntrophic electron exchange via protein nanowires for a second example,” says Dr. Gunter Wegener, adding: “Studying the archaea is like a journey into the past, and the coenzyme M reactions are among the earliest in the history of life.” Every discovered species needs a name – and so their new lab pet was baptized Candidatus Syntrophoarchaeum butanivorans: a butane-eating archaeon that needs a little help (syntrophy) from a partner for life.
Open questions remain
Where else on the planet do these archaea exist? Why and how did evolution lead to team work of microorganisms in consortia vs. microorganisms able to catalyze both oxidation and reduction reactions? Are there other variants of methyl-coenzyme M reductases able to activate even higher alkanes than butane? Such questions will keep researchers busy at the Max Planck Institute for Marine Microbiology and the Helmholtz Zentrum für Umweltforschung (UFZ).
Please direct your questions to
Rafael Laso-Perez, Max Planck Institute for Marine Microbiology,
D-28359 Bremen, Phone: 0421 2028 867, email@example.com
Dr. Gunter Wegener, Max Planck Institute for Marine Microbiology,
D-28359 Bremen, Phone: 0421 2028 867, firstname.lastname@example.org
Dr. Florin Musat, Helmholtz-Zentrum für Umweltforschung (UFZ) Leipzig,
D-04318 Leipzig, Phone: 0341 235 1005, email@example.com
Or the press office
Dr. Manfred Schloesser and Dr. Fanni Aspetsberger
firstname.lastname@example.org Phone: 0421 2028 704
Thermophilic archaea activate butane via alkyl-CoM formation. Rafael Laso-Pérez, Gunter Wegener, Katrin Knittel, Friedrich Widdel, Katie J. Harding , Viola Krukenberg, Dimitri V. Meier, Michael Richter, Halina E. Tegetmeyer, Dietmar Riedel, Hans-Hermann Richnow, Lorenz Adrian, Thorsten Reemtsma, Oliver Lechtenfeld, Florin Musat. Nature, 2016 doi: 10.1038/nature20152
Max Planck Institute for Marine Microbiology, Bremen
MARUM, Zentrum für Marine Umweltwissenschaften, Universität Bremen
Max-Planck-Institut für Biophysikalische Chemie, Göttingen, Germany.
Alfred-Wegener-Institut, Helmholtz-Zentrum für Polar- und Meeresforschung, Bremerhaven
Centrum für Biotechnologie, Universität Bielefeld
Helmholtz Centre for Environmental Research – UFZ, Leipzig
http://www.mpi-bremen.de Web site of the Max Planck Institute for Marine Microbiology
Dr. Manfred Schloesser | Max-Planck-Institut für marine Mikrobiologie
New research calculates capacity of North American forests to sequester carbon
16.07.2018 | University of California - Santa Cruz
Scientists discover Earth's youngest banded iron formation in western China
12.07.2018 | University of Alberta
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
17.07.2018 | Information Technology
17.07.2018 | Materials Sciences
17.07.2018 | Power and Electrical Engineering | <urn:uuid:cdda289e-c7ca-4330-8428-41765cb60f1d> | 3.765625 | 2,490 | Content Listing | Science & Tech. | 30.008257 | 95,563,453 |
New professor part of study outlining potential extinction of ‘Darwin’s Finches’
New UMass Dartmouth Biology Assistant Professor Jennifer Koop is a co-author of a new University of Utah-led study, which uses mathematical simulations to show that parasitic flies may spell extinction for Darwin’s finches in the Galapagos Islands. Pest-control efforts, according to the study recently published in the Journal of Applied Ecology, might save the birds that helped inspire Charles Darwin’s theory of evolution by natural selection.
The new study “shows that the fly has the potential to drive populations of the most common species of Darwin’s finch to extinction in several decades,” said University of Utah Biology Professor Dale Clayton, senior author of the study. But the research “is not all doom and gloom,” he adds. “Our mathematical model also shows that a modest reduction in the prevalence of the fly – through human intervention and management – would alleviate the extinction risk.”
“Darwin’s finches are one of the best examples we have of speciation,” said Professor Koop, who served as first author of the study and conducted the research as a University of Utah doctoral student before joining UMass Dartmouth. “They were important to Darwin because they helped him develop his theory of evolution by natural selection.”
The new study is based on five years of data collected by Professor Koop, Dr. Clayton and colleagues documenting fly damage to finch reproduction, and on mathematical modeling or simulation using that and other data. The study was performed on Santa Cruz Island in the Galapagos. An estimated 270,000 medium ground finches live on that island and perhaps 500,000 live throughout the Galapagos Islands.
Darwin’s finches live only in the Galapagos Islands, off the coast of mainland Ecuador. The finches began as one species and started evolving into separate species an estimated three to five million years ago. The new study dealt with medium ground finches, among the most common of at least 14 species of Darwin’s finches. One of them, the mangrove finch, already is facing potential total extinction because it is present in only two populations on a single island.
Several approaches may be needed in pest-control efforts, such as introducing fly-parasitizing wasps, removing chicks from nests for hand-rearing, raising sterile male flies to mate with females so they can’t lay eggs in finch nests, and using insecticides, including placing pesticide-treated cotton balls where birds can collect them to self-fumigate their nests.
The case of the flies and finches exemplifies how “introduced pathogens and other parasites pose a major threat to global diversity,” according to the researchers’ writings, especially on islands, which tend to have smaller habitat sizes and lower genetic diversity. In addition to the medium ground finch, other abundant species of Darwin’s finches are the small ground finch, cactus finch and small tree finch.
To simulate such highly variable conditions and how they affect the probability of finches fledging from a fly-infested nest and thus population growth, the researchers used data from five years – 2008, 2009, 2010, 2012 and 2013. They ran three simulations: one weighted toward bad years for breeding and survival, one weighted toward good years and one equally weighted.
The researchers concluded that in two of the three scenarios their model predicted that medium ground finch populations on the island of Santa Cruz were declining and at risk of extinction within the next century. The significant role of nest infestation in extinction risk, however, has an upside for the medium ground finch.
“Even though these guys may be going locally extinct, the model also shows that if you can reduce the probability of infestation, then you significantly alleviate the risk of extinction,” Dr. Koop said.
Professor Clayton added, “If we can reduce the number of nests with the flies, then it will reduce the risk of extinction substantially.”
The simulations showed that a 40 percent reduction in fly infestation of nests would extend the predicted time to extinction by 60 years, which would mean more than 100 years to extinctions in the two gloomy scenarios. Predicted extinction times more than 100 years in the future are considered too uncertain and thus aren’t considered as valid predictions of extinction.
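The qualitative logic of such projections can be sketched with a toy deterministic model. All of the rates below are invented for illustration; they are not the study's fitted parameters, and the published model is stochastic and far richer. The sketch only shows how lowering the nest-infestation probability stretches out the projected time to extinction:

```python
def years_to_extinction(n0, infest_prob, adult_survival=0.6,
                        brood_per_bird=1.0, brood_loss=0.8,
                        juvenile_survival=0.2, max_years=500):
    """Project a bird population until it falls below one individual.

    Each year a fraction `infest_prob` of nests is infested by the fly and
    loses a fraction `brood_loss` of its brood; surviving chicks recruit
    into the adult population at rate `juvenile_survival`. All parameter
    values are illustrative, NOT the published estimates.
    """
    n, year = float(n0), 0
    while n >= 1.0 and year < max_years:
        # brood that survives fly infestation this year
        recruits = n * brood_per_bird * (1.0 - infest_prob * brood_loss)
        # next year's adults: surviving adults plus recruited juveniles
        n = n * adult_survival + recruits * juvenile_survival
        year += 1
    return year

# Lowering infestation from 100% of nests to 60% (a 40% reduction)
# delays the projected extinction of a Santa Cruz-sized population:
baseline = years_to_extinction(270_000, infest_prob=1.0)
reduced = years_to_extinction(270_000, infest_prob=0.6)
```

With these made-up rates the 40 percent reduction buys only a handful of extra years; the roughly 60-year gain reported for the published model depends on its empirically fitted survival and fledging rates.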
The researchers reason that a rapid evolutionary response by the birds and their immune systems could yet produce the ability to combat the fly.
“That happens in other animals. The question is, will these finches have enough time to develop effective defenses before they are driven to extinction by the fly? It’s an arms race,” said Dr. Clayton.
The study was funded by the National Science Foundation, Sigma Xi, the Scientific Research Society, the National Institutes of Health, the Australian Research Council, the University of Utah Global Change and Sustainability Center, and a Frank Chapman grant from the American Museum of Natural History.
Koop and Clayton conducted the study with Fred Adler, a University of Utah professor of mathematics and biology; former Utah biology doctoral student Sarah Knutie, now at the University of South Florida; and former Utah mathematics postdoctoral fellow Peter Kim, now at the University of Sydney, Australia.
Jennifer Koop is an Assistant Professor in the Biology Department. Her research focuses on understanding the establishment and transmission-virulence dynamics of host-parasite interactions in which one or both species are considered to be invasive. She received her Ph.D. in Biology from the University of Utah where she worked in the Galapagos on Darwin’s finches. Most recently, she completed an NIH-postdoctoral fellowship at the University of Arizona working on systems in the Galapagos and desert southwest.
Editor's Note: Photos used in article courtesy of the University of Utah. | <urn:uuid:efc19f1c-f963-4175-aad5-2af8b83a3c28> | 3.59375 | 1,230 | News (Org.) | Science & Tech. | 32.68448 | 95,563,471 |
Ant colonies that are highly specialized have lower chances of survival when sudden changes occur
A characteristic of insect societies such as ants is the way tasks are distributed among group members. Not only queens and worker ants have clearly defined responsibilities but the workers themselves also have particular jobs to do when, for example, it comes to the care of the young, defense, and nest building activities. It is widely assumed that this division of labor is an essential factor that determines the success of such social groups. According to this view, a high degree of specialization provides advantages because the individual tasks are performed better and more effectively. It seems, however, that this advantage may actually be a great disadvantage in special circumstances. As researchers at Johannes Gutenberg University Mainz (JGU) have discovered, highly specialized ants lack the flexibility to adapt to new situations—with serious consequences for the entire colony.
"Our findings provide an explanation of why all-rounders tend to be ubiquitous and show how important they are for the flexibility and robustness of entire societies," said evolutionary biologist Professor Susanne Foitzik, commenting on the latest findings of her research group at Mainz University. Foitzik and Evelien Jongepier investigated what happens when ant colonies with different degrees of specialization are exposed to an external threat. They tested around 3,800 ants of the species Temnothorax longispinosus and, based on their behavior, formed separate ant colonies consisting of either all-rounders or specialists. These colonies were then subjected to attacks by the slave-making ants Temnothorax americanus and as a consequence were forced to either flee or defend themselves. The social parasite T. americanus carries out raids on neighboring host colonies in order to steal their young, often killing adult worker ants and the queen in the process.
"In contrast with the widely held assumption that an individual specialization provides social advantages, we have found that specialization can in fact have a negative impact on the propagation and the chances of survival and growth of a colony," summarized Jongepier, first author of the study. Ant colonies which were specialized in defense and the care of offspring lost almost 80 percent of their brood when attacked by slave-maker ants. All-rounders, in contrast, were able to save over half of their offspring. For a species such as T. longispinosus, which only reproduces once a year, such heavy brood losses can mean the virtual destruction of the colony or at least represent a serious threat to its future chances of survival. Such fitness costs associated with strict specialization mean that the all-rounder strategy probably has the better prospects of success in a natural environment with slave-makers. Ant communities living near to slave-makers are less highly specialized than colonies in areas which are free of these social parasites. "In the field, slave-makers represent an evolutionary factor that counteracts the development of higher levels of specialization," clarified Foitzik.
Flexibility in behavior pays off
Discussing their findings, Foitzik and Jongepier postulate that specialized communities probably have difficulty in changing their activities and so they are less flexible. Even if the findings cannot be applied directly to other spheres, they nevertheless provide insights into the evolution of social groups and provide indications of their potential viability. "Our results show how important it is to take ecological aspects into account when organizing work and individual patterns of behavior if benefits for the whole society are to be obtained," the researchers concluded. Thus, flexibility in behavior would be advantageous not only for ant colonies but also in connection with quite different organizational structures as well, especially when frequent disruptions are possible. These could range from metabolic cycles with all-rounder enzymes to financial systems that do not rely on just a few specialized institutions and are thus more stable.
Part of a colony of Temnothorax longispinosus with two queens, workers, and offspring in various stages of development
photo/©: Susanne Foitzik
Workers of the host species Temnothorax longispinosus fighting a slightly larger worker of the T. americanus slave-maker species that has invaded their colony
photo/©: Susanne Foitzik
A T. americanus slave-maker worker ant (left) interacting with a slave worker of the species T. longispinosus (right)
photo/©: Susanne Foitzik
Evelien Jongepier, Susanne Foitzik
Fitness costs of worker specialization for ant societies
Proceedings of the Royal Society B, 13 January 2016
Professor Dr. Susanne Foitzik
Institute of Zoology – Evolutionary Biology
Johannes Gutenberg University Mainz (JGU)
55099 Mainz, GERMANY
phone +49 6131 39-27840
fax +49 6131 39-27850
http://rspb.royalsocietypublishing.org/content/283/1822/20152572 (Article)
http://www.uni-mainz.de/presse/19919_ENG_HTML.php — press release "Parasitic tapeworm influences the behavior and lifespan of uninfected members of ant colonies", 1 Dec. 2015
Petra Giegerich | idw - Informationsdienst Wissenschaft
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
Astronomers have discovered that our Galaxy wobbles. An international team of astronomers led by Mary Williams from the Leibniz Institute for Astrophysics Potsdam (AIP) detected and examined this phenomenon with the RAdial Velocity Experiment (RAVE), a survey of almost half a million stars around the Sun. In addition to the regular Galactic rotation, the scientists found the Milky Way moving perpendicular to the Galactic plane.
It is common knowledge that our Galaxy is permanently in motion. Being a barred spiral galaxy, it rotates around the Galactic centre. It has now been discovered that our Galaxy, the Milky Way, also makes small wobbling or squishing movements. It acts like a Galactic mosh pit, or a huge flag fluttering in the wind, rippling north to south about the Galactic plane, with forces coming from multiple directions creating a chaotic wave pattern. The source of the forces is still not understood, however: possible causes include spiral arms stirring things up or ripples caused by the passage of a smaller galaxy through our own.
In this study, RAVE stars were used to examine the kinematics (velocities) of stars in a large, 3D region around the Sun. The region extends 6500 light years above and below the Sun's position, as well as inwards and outwards from the Galactic centre, reaching a quarter of the way to the centre. Using a special class of stars – red clump stars, which all have about the same intrinsic brightness – mean distances to the stars could be determined. This was important because the velocities measured with RAVE, combined with other survey data, could then be used to determine the full 3D velocities (up-down, in-out and rotational). The RAVE red clump giants gave an unprecedented number of stars with which to study 3D velocities in a large region around the Sun.
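The standard-candle distance estimate behind the red clump method boils down to the classical distance modulus. As a sketch (the absolute magnitude below is an illustrative near-infrared value for red clump giants, not the calibration actually used by the survey):

```python
def red_clump_distance_pc(apparent_mag, absolute_mag=-1.6):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d) - 5.

    absolute_mag = -1.6 is an illustrative value for red clump giants in
    the near-infrared, not the paper's calibration.
    """
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A red clump star observed at apparent magnitude 8.4 would then lie at:
distance = red_clump_distance_pc(8.4)  # 1000 pc, about 3260 light years
```

Because every red clump star is assumed to share (nearly) the same absolute magnitude, a single apparent-magnitude measurement fixes its distance, which is what makes these stars such convenient tracers for mapping velocities in 3D.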
The 3D movement patterns obtained showed highly complex structures. The aim was then to untangle these structures, concentrating on differences between the north and south of the Galactic plane. From these velocities it was seen that our Galaxy has a lot more going on than previously thought. The velocities going upwards and downwards show that there is a wave-like behaviour, with stars sloshing in and out. The novel element in our approach was true 3D observation, showing how complex the velocity landscape of the Galaxy really is. Modellers now have the challenge of understanding this behaviour, be it from ripples from an eaten galaxy or the wake from spiral arms. These new findings will make it possible to make 3D models of our Galaxy much more precise.
The publication can be found online at http://arxiv.org/abs/1302.2468 and was published this month in Monthly Notices of the Royal Astronomical Society (MNRAS).
RAVE is a multinational project with participation of scientists from Australia, Germany, France, UK, Italy, Canada, the Netherlands, Slovenia and the USA, coordinated by the Leibniz Institute for Astrophysics Potsdam (AIP), Germany. Funding of RAVE which guarantees extensive data, telescope and instrument access is provided by the participating institutions and the national research foundations.
NASA has detected “building blocks of life” in a 3-billion-year-old lakebed on Mars in a major breakthrough.
The space agency confirmed it had uncovered quite a few “organic compounds” – renewing hope for Curiosity’s hunt for life on Mars.
The discovery of molecules preserved in ancient bedrock suggests conditions on the Red Planet could once have been conducive to life.
It leaves open the possibility that micro-organisms once populated Mars – and still may.
However, experts say they can’t prove it either way.
The car-sized Curiosity rover was launched from Florida in November 2011 and landed on Mars in August 2012.
NASA has tasked the rover with investigating the Martian climate and geology, and with establishing whether the right conditions ever existed for microbial life – ahead of an eventual human landing.
Researchers have now been able to analyse drill samples of soil collected by Curiosity – and they have turned up the “best evidence yet”, according to experts.
NASA had already discovered “limited organic compounds” on Mars, but scientists lacked data on ancient organic matter in Martian sediments.
Now they have discovered “quite a few different organic compounds” and confirmed seasonal swings of methane in the Martian atmosphere.
Curiosity collected samples from two different sites within the Gale crater: Mojave and Confidence Hills.
These areas harbour mudstones that date back roughly three billion years.
Using the rover, NASA was able to extract fresh samples and heat them up, releasing molecules for analysis.
These are the eight scientific goals of Curiosity during its time on Mars…

- 1. Determine the nature and inventory of organic carbon compounds
- 2. Inventory the chemical building blocks of life (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur)
- 3. Identify features that may represent the effects of biological processes

Geological and geochemical goals:

- 4. Investigate the chemical, isotopic, and mineralogical composition of the Martian surface and near-surface geological materials
- 5. Interpret the processes that have formed and modified rocks and soils

Planetary process goals:

- 6. Assess long-timescale (i.e., 4-billion-year) atmospheric evolution processes
- 7. Determine the present state, distribution, and cycling of water and carbon dioxide

Surface radiation goal:

- 8. Characterise the broad spectrum of surface radiation, including galactic cosmic radiation, solar proton events, and secondary neutrons
The discovery is a significant boost in the hunt for evidence of life on Mars – organic materials are carbon-based compounds that make up the building blocks of life itself.
NASA also used Curiosity to measure methane levels on Mars.
We have known about the existence of methane on Mars for some time, but its origins have been a significant source of debate.
On Earth, the vast majority of methane is produced by biological sources – living creatures.
Researchers have now analysed three Martian years’ (55 Earth months) worth of atmospheric measurements.
And it turns out methane levels on Mars rise and fall very clearly in line with the seasons.
Methane levels vary between 0.24 and 0.65 parts per billion, peaking towards the end of summer (in the northern hemisphere).
Sadly, NASA scientists seem convinced that it isn’t life on Mars pumping methane into the atmosphere.
Instead, the study suggests that huge quantities of the gas are stored in water-based crystals “within the cold Martian subsurface”.
It is most likely that seasonal changes in temperature trigger this fluctuating release of methane.
Curiosity now continues its mission to explore the Gale crater, which spans 96 miles across.
The crater is estimated to be around 3.5-3.8 billion years old, and was first observed in the late 19th century by Australian landowner and amateur astronomer Walter Frederick Gale.
In the centre of Gale is Aeolis Mons, a huge mountain that rises 18,000 feet high – around two-thirds the height of Mount Everest.
Do you think scientists will ever find life on Mars, or anywhere in the universe? Let us know in the comments!
Observing Oxygen Evolution During Photosynthesis. See attached file for full problem description.
The main design of this experiment is based on the requirement to have CO2 present for photosynthesis. Remember, there are two parts of photosynthesis, the light reaction and the dark reaction. However, many students believe (incorrectly) that the dark reactions can occur in the dark. No, not so! The dark reactions are better known as "light-independent" reactions. In other words, they don't DIRECTLY use light, but indirectly, they of course, require light. So, the light reactions convert light energy into ATP and NADPH so that these two molecules can be used to convert CO2 into glucose in the dark reactions. But remember, without light, the ATP and NADPH will run out, and the dark reactions won't take place either.
So, this experiment is a nice ... | <urn:uuid:05937c3e-58e6-4c72-a9f5-0f05fe978a3a> | 3.828125 | 208 | Truncated | Science & Tech. | 59.912698 | 95,563,524 |
Environment Control (Prolog flags)
The predicates current_prolog_flag/2 and set_prolog_flag/2 allow the user to examine and modify the execution environment. It provides access to whether optional features are available on this version, operating system, foreign code environment, command line arguments, version, as well as runtime flags to control the runtime behaviour of certain predicates to achieve compatibility with other Prolog environments.
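A quick interactive illustration (the values shown are typical for a recent SWI-Prolog build; the exact output depends on the installation):

```prolog
% Inspect a flag, then change a writable one.
?- current_prolog_flag(bounded, B).
B = false.

?- set_prolog_flag(double_quotes, atom).
true.
```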
Flags marked rw can be modified by the user using set_prolog_flag/2.

Flag values are typed. Flags marked as bool can have the values
true and
false. The predicate create_prolog_flag/3
may be used to create flags that describe or control behaviour of
libraries and applications. The library
library(settings) provides an alternative interface for
managing notably application parameters.
Some Prolog flags are not defined in all versions, which is normally
indicated in the documentation below as ``if present and true''.
A boolean Prolog flag is true iff the Prolog flag is present
and its value is the atom
true. Tests for
such flags should be written as below:
( current_prolog_flag(windows, true) -> <Do MS-Windows things> ; <Do normal things> )
Some Prolog flags are scoped to a source file. This implies that if they are set using a directive inside a file, the flag value encountered when loading of the file started is restored when loading of the file is completed. Currently, the following flags are scoped to the source file: generate_debug_info and optimise.
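For example, a directive such as the one below inside a source file affects only the remainder of that file; the flag value in effect before loading is restored once loading completes (optimise is one of the file-scoped flags listed above; the predicate is illustrative):

```prolog
% myfile.pl -- compile the rest of this file in optimised mode only.
:- set_prolog_flag(optimise, true).

square(X, Y) :-
    Y is X * X.
```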
A new thread (see section 10) copies all flags from the thread that created the new thread (its parent).17This is implemented using the copy-on-write tecnhnique. As a consequence, modifying a flag inside a thread does not affect other threads.
user, default) or a `system' view. In system view all system code is fully accessible as if it was normal user code. In user view, certain operations are not permitted and some details are kept invisible. We leave the exact consequences undefined, but, for example, system code can be traced using system access and system predicates can be redefined.
true, the operating system is MacOSX. Defined if the C compiler used to compile this version of SWI-Prolog defines
__APPLE__. Note that the unix is also defined for MacOSX.
false), dots may be embedded into atoms that are not quoted and start with a letter. The embedded dot must be followed by an identifier continuation character (i.e., letter, digit or underscore). The dot is allowed in identifiers in many languages, which can make this a useful flag for defining DSLs. Note that this conflicts with cascading functional notation. For example,
Post.meta.authoris read as
.(Post, 'meta.author'if this flag is set to
Functor(arg)is read as if it were written
'Functor'(arg). Some applications use the Prolog read/1 predicate for reading an application-defined script language. In these cases, it is often difficult to explain to non-Prolog users of the application that constants and functions can only start with a lowercase letter. Variables can be turned into atoms starting with an uppercase atom by calling read_term/2 using the option
variable_namesand binding the variables to their name. Using this feature, F(x) can be turned into valid syntax for such script languages. Suggested by Robert van Engelen. SWI-Prolog specific.
--or the first non-option argument. See also os_argv.19Prior to version 6.5.2, argv was defined as os_argv is now. The change was made for compatibility reasons and because the current definition is more practical.
true(default) autoloading of library functions is enabled.
codes. If --traditional is given, the default is
symbol_char, which allows using
`in operators composed of symbols.20Older versions had a boolean flag
backquoted_strings, which toggled between
codes and
symbol_char. See also section 5.2.
true(default), print a backtrace on an uncaught exception.
true(default), try to reconstruct the line number at which the exception happened.
true, integer representation is bound by min_integer and max_integer. If
falseintegers can be arbitrarily large and the min_integer and max_integer are not present. See section 126.96.36.199.
-lswiplif the SWI-Prolog kernel is a shared (DLL). If the SWI-Prolog kernel is in a static library, this flag also contains the dependencies.
-lswiplon COFF-based systems. See section 12.5.
true(default), read/1 interprets
\escape sequences in quoted atoms and strings. May be changed. This flag is local to the module in which it is changed. See section 188.8.131.52.
library(ansi_term), which is loaded at startup if the two conditions below are both true. Note that this implies that setting this flag to
falsefrom the system or personal initialization file (see section 2.2 disables colored output. The predicate message_property/2 can be used to control the actual color scheme depending in the message type passed to print_message/2.
\+ current_prolog_flag(color_term, false)
' (see meta_predicate/1). Supported values are:
truein swipl-win.exe to indicate that the console supports menus. See also section 4.35.3.
library(thread). This flag is not available on systems where we do not know how to get the number of CPUs. This flag is not included in a saved state (see qsave_program/1).
trueif this instance of Prolog supports DDE as described in section 4.43.
Disabling these optimisations can cause the system to run out of memory on programs that behave correctly if debug mode is off.
true, start the tracer after an error is detected. Otherwise just continue execution. The goal that raised the error will normally fail. See also the Prolog flag report_error. Default is
[quoted(true), portray(true), max_depth(10), attributes(portray)].
true, show the context module while printing a stack-frame in the tracer. Normally controlled using the `C' option of the tracer.
swi. The code below is a reliable and portable way to detect SWI-Prolog.
is_dialect(swi) :- catch(current_prolog_flag(dialect, swi), _, fail).
string, which produces a string as described in section 5.2. If --traditional is given, the default is
codes, which produces a list of character codes: integers that represent a Unicode code-point. The value
chars produces a list of one-character atoms, and the value
atom makes double quotes the same as single quotes, creating an atom. See also section 5.
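The effect can be seen at the toplevel. Note that a flag change only affects terms read afterwards, so the test goal must be entered as a separate query:

```prolog
?- set_prolog_flag(double_quotes, codes).
true.

?- X = "ab".
X = [97, 98].

?- set_prolog_flag(double_quotes, chars).
true.

?- X = "ab".
X = [a, b].
```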
textmode. The initial value is deduced from the environment. See section 2.19.1 for details.
in arguments of built-in predicates that accept a file name (open/3, exists_file/1, access_file/2, etc.). The predicate expand_file_name/2 can be used to expand environment variables and wildcard patterns. This Prolog flag is intended for backward compatibility with older versions of SWI-Prolog.
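For example, environment variables and wildcards can still be expanded explicitly with expand_file_name/2 (the matched files shown here are, of course, hypothetical and depend on the machine):

```prolog
% Expand $HOME and the * wildcard into a list of matching files.
?- expand_file_name('$HOME/*.pl', Files).
Files = ['/home/alice/init.pl', '/home/alice/utils.pl'].
```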
true (default if threading is enabled), atom and clause garbage collection are executed in a separate thread with the alias
gc. Otherwise the thread that detected sufficient garbage executes the garbage collector. As running these global collectors may take relatively long, using a separate thread improves real time behaviour. The
gc thread can be controlled using set_prolog_gc_thread/1.
true (default), generate code that can be debugged using trace/0, spy/1, etc. Can be set to
false using the command line option -nodebug. This flag is scoped within a source file. Many of the libraries have
:- set_prolog_flag(generate_debug_info, false) to hide their details from a normal trace.21In the current implementation this only causes a flag to be set on the predicate that causes children to be hidden from the debugger. The name anticipates further changes to the compiler.
trueif XPCE is around and can be used for graphics.
<home>/boot32.prc(32-bit machines) or
<home>/boot64.prc(64-bit machines) and to find its library as
remarithmetic functions. Value depends on the C compiler used.
/ (float division) always returns a float, even if applied to integers that can be divided.
f(',',a). Unquoted commas can only be used to separate arguments in functional notation and list notation, and as a conjunction operator. Unquoted bars can only appear within lists to separate head and tail, like
[Head|Tail], and as infix operator for alternation in grammar rules, like
a --> b | c.
[a :- b, c].must now be disambiguated to mean
[(a :- b), c].or
[(a :- b, c)].
X == -, true.write
X == (-), true.Currently, this is not entirely enforced.
true, SWI-Prolog has been compiled with large file support (LFS) and is capable of accessing files larger than 2GB on 32-bit hardware. Large file support is default on installations built using configure that support it and may be switched off using the configure option
warning. The list may contain the elements
thread, to add the thread that generates the message to the message, and
time(Format), to add a time stamp. The default time format is
%T.%3f. The default is
[thread]. See also format_time/3 and print_message/2.
false), enforce mitigation against the Spectre timing-based security vulnerability. Spectre based attacks can extract information from memory owned by the process that should remain invisible, such as passwords or the private key of a web server. The attacks work by causing speculative access to sensitive data, and leaking the data via side-channels such as differences in the duration of successive instructions. An example of a potentially vulnerable application is SWISH. SWISH allows users to run Prolog code while the swish server must protect the privacy of other users as well as its HTTPS private keys, cookies and passwords.
WARNING: Although a coarser timer makes a successful attack of this type harder, it does not reliably prevent such attacks in general. Full mitigation may require compiler support to disable speculative access to sensitive data.
false(default), unification succeeds, creating an infinite tree. Using
true, unification behaves as unify_with_occurs_check/2, failing silently. Using
error, an attempt to create a cyclic term results in an
occurs_checkexception. The latter is intended for debugging unintentional creations of cyclic terms. Note that this flag is a global flag modifying fundamental behaviour of Prolog. Changing the flag from its default may cause libraries to stop functioning properly.
.sofiles) or dynamic link libraries (
true, compile in optimised mode. The initial value is
trueif Prolog was started with the -O command line option. The optimise flag is scoped to a source file.
Later versions might imply various other optimisations such as integrating small predicates into their callers, eliminating constant expressions and other predictable constructs. Source code optimisation is never applied to predicates that are declared dynamic (see dynamic/1).
open(pipe(command), mode, Stream), etc. are supported. Can be changed to disable the use of pipes in applications testing this feature. Not recommended.
determinism, which implies the system prompts for alternatives if the goal succeeded while leaving choice points. Many classical Prolog systems behave as
groundness: they prompt for alternatives if and only if the query contains variables.
false), clause/2 does not operate on static code, providing some basic protection from hackers that wish to list the static code of your Prolog program. Once the flag is
true, it cannot be changed back to
false. Protection is default in ISO mode (see Prolog flag iso). Note that many parts of the development environment require clause/2 to work on static code, and enabling this flag should thus only be used for production code.
qcompile(+Atom)option of load_files/2.
editline. This causes the toplevel not to load a command line editor (
false) or load the specified one. If loading fails the flag is set to
library(readline)is loaded, providing line editing based on the GNU readline library.
library(editline)is loaded, providing line editing based on the BSD libedit. This is the default if
library(editline)is available and can be loaded.
boot32.prc, the file specified with -x or the running executable. See also resource/3.
true, print error messages; otherwise suppress them. May be changed. See also the debug_on_error Prolog flag. Default is
true, except for the runtime version.
true, SWI-Prolog is compiled with -DO_RUNTIME, disabling various useful development features (currently the tracer and profiler).
false), load_files/2 calls hooks to allow library(sandbox) to verify the safety of directives.
true, Prolog has been started from a state saved with qsave_program/[1,2].
.sofor most Unix systems and
.dllfor Windows. Used for locating files using the
executable. See also absolute_file_name/3.
falseif the hosting OS does not support signal handling or the command line option -nosignals is active. See section 184.108.40.206 for details.
true(full checking) and
loose. Using checking mode
loose(default), the system accepts byte I/O from text stream that use ISO Latin-1 encoding and accepts writing text to binary streams.
resource_error(table_space)exception is raised.
timezonevariable associated with the POSIX tzset() function. See also format_time/3.
default, starting a normal interactive session. This value may be changed using the command line option -t. The explicit value
prolog is equivalent to
initialization(Goal,main)is used and the toplevel is
default, the toplevel is set to
backtracking(default), the toplevel backtracks after completing a query. If
recursive, the toplevel is implemented as a recursive loop. This implies that global variables set using b_setval/2 are maintained between queries. In recursive mode, answers to toplevel variables (see section 2.8) are kept in backtrackable global variables and thus not copied. In backtracking mode answers to toplevel variables are kept in the recorded database (see section 4.14.2).
The recursive mode has been added for interactive usage of CHR (see section 9),22Suggested by Falco Nogatz which maintains the global constraint store in backtrackable global variables.
true, top-level variables starting with an underscore (
_) are printed normally. If
falsethey are hidden. This may be used to hide bindings in complex queries from the top level.
false) show the internal sharing of subterms in the answer substitution. The example below reveals internal sharing of leaf nodes in red-black trees as implemented by the
library(rbtrees)predicate rb_new/1 :
?- set_prolog_flag(toplevel_print_factorized, true). ?- rb_new(X). X = t(_S1, _S1), % where _S1 = black('', _G387, _G388, '').
If this flag is
% where notation
is still used to indicate cycles as illustrated below. This example also
shows that the implementation reveals the internal cycle length, and not
the minimal cycle length. Cycles of different length are
indistinguishable in Prolog (as illustrated by
S == R).
?- S = s(S), R = s(s(R)), S == R. S = s(S), R = s(s(R)).
[quoted(true), portray(true), max_depth(10), attributes(portray)].
~ (tilde) sequences are replaced:
~m: Type in module if not user
~l: Break level if not 0 (see break/0)
~d: Debugging state if not normal execution (see debug/0, trace/0)
~!: History event if history is enabled (see flag history)
variable reference. See section 2.8.
false), garbage collections and stack-shifts will be reported on the terminal. May be changed. Values are reported in bytes as G+T, where G is the global stack value and T the trail stack value. `Gained' describes the number of bytes reclaimed. `used' the number of bytes on the stack after GC and `free' the number of bytes allocated, but not in use. Below is an example output.
% GC: gained 236,416+163,424 in 0.00 sec; used 13,448+5,808; free 72,568+47,440
true, `traditional' mode has been selected using --traditional. Notice that some SWI7 features, like the functional notation on dicts, do not work in this mode. See also section 5.
falseat startup, command line editing is disabled. See also the +/-tty command line option.
true, the operating system is some version of Unix. Defined if the C compiler used to compile this version of SWI-Prolog either defines
unix. On other systems this flag is not available. See also apple and windows.
fail, the predicate fails silently. If
warn, a warning is printed, and execution continues as if the predicate was not defined, and if
existence_errorexception is raised. This flag is local to each module and inherited from the module's import-module. Using default setup, this implies that normal modules inherit the flag from
user, which in turn inherit the value
system. The user may change the flag for module
userto change the default for all application modules or for a specific module. It is strongly advised to keep the
errordefault and use dynamic/1 and/or multifile/1 to specify possible non-existence of a predicate.
false), unload all loaded foreign libraries. Default is
falsebecause modern OSes reclaim the resources anyway and unloading the foreign code may cause registered hooks to point to no longer existing data or code.
error. The first two create the flag on-the-fly, where
warningprints a message. The value
erroris consistent with ISO: it raises an existence error and does not create the flag. See also create_prolog_flag/3. The default is
silent, but future versions may change that. Developers are encouraged to use another value and ensure proper use of create_prolog_flag/3 to create flags for their library.
false), variables must start with an underscore (
_). May be changed. This flag is local to the module in which it is changed. See section 220.127.116.11.
silent, messages of type
bannerare suppressed. The -q switches the value from the initial
truethe normal consult message will be printed if a library is autoloaded. By default this message is suppressed. Intended to be used for debugging purposes.
full(print a message at the start and end of each file loaded),
normal(print a message at the end of each file loaded),
brief(print a message at end of loading the toplevel file), and
silent(no messages are printed, default). The value of this flag is normally controlled by the option
silent(Bool)provided by load_files/2.
false), print messages indicating the progress of absolute_file_name/[2,3] in locating files. Intended for debugging complicated file-search paths. See also file_search_path/2.
10000 × Major + 100 × Minor + Patch
true(default), a warning is printed if an implicitly imported predicate is clobbered by a local definition. See use_module/1 for details.
true, the operating system is an implementation of Microsoft Windows. This flag is only available on MS-Windows based versions. See also unix.
attributesoption of write_term/3. Default is
trueit prints bold and underlined text using overstrike.
trueif the XPCE graphics system is loaded.
true, source code is being read for analysis purposes such as cross-referencing. Otherwise (default) it is being read to be compiled. This flag is used at several places by term_expansion/2 and goal_expansion/2 hooks, notably if these hooks use side effects. See also the libraries
permission_error. If the provided Value does not match the type of the flag, a
Some flags (e.g., unknown) are maintained on a per-module basis. The addressed module is determined by the Key argument.
In addition to ISO, SWI-Prolog allows for user-defined Prolog flags.
The type of the flag is determined from the initial value and cannot be
changed afterwards. Defined types are
boolean (if the initial value is one of
true or
false),
atom if the initial value is any other atom, and integer
if the value is an integer that can be expressed as a 64-bit signed
value. Any other initial value results in an untyped flag that can
represent any valid Prolog term.
access(Access): one of read_write (default) or
read_only.
type(Type): type of the flag. The default is determined from the initial value. Note that
term restricts the value to be ground.
keep(Bool): if
true, do not modify the flag if it already exists. Otherwise (default), this predicate behaves as set_prolog_flag/2 if the flag already exists.
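A typical pattern for an application-defined flag (the flag name myapp_verbose and the helper predicate are illustrative, not part of any library):

```prolog
% Give the application a boolean flag with a default, but keep any
% value that was already set earlier, e.g., from an init file.
:- create_prolog_flag(myapp_verbose, false, [keep(true)]).

% Print a message only when verbose mode is enabled.
verbose_report(Msg) :-
    (   current_prolog_flag(myapp_verbose, true)
    ->  print_message(informational, format("~w", [Msg]))
    ;   true
    ).
```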
Fauna is all of the animal life of any particular region or time. The corresponding term for plants is flora. Flora, fauna and other forms of life such as fungi are collectively referred to as biota. Zoologists and paleontologists use fauna to refer to a typical collection of animals found in a specific time or place, e.g. the "Sonoran Desert fauna" or the "Burgess Shale fauna". Paleontologists sometimes refer to a sequence of faunal stages, which is a series of rocks all containing similar fossils. The study of animals of a particular region is called faunistics.
Fauna comes from the name Fauna, a Roman goddess of earth and fertility, the Roman god Faunus, and the related forest spirits called Fauns. All three words are cognates of the name of the Greek god Pan, and panis is the Greek equivalent of fauna. Fauna is also the word for a book that catalogues the animals in such a manner. The term was first used by Carl Linnaeus from Sweden in the title of his 1745 work Fauna Suecica.
Cryofauna refers to the animals that live in, or very close to, ice.
Infauna are benthic organisms that live within the bottom substratum of a water body, especially within the bottom-most oceanic sediments, rather than on its surface. Bacteria and microalgae may also live in the interstices of bottom sediments. In general, infaunal animals become progressively smaller and less abundant with increasing water depth and distance from shore, whereas bacteria show more constancy in abundance, tending toward one million cells per milliliter of interstitial seawater.
Epifauna, also called epibenthos, are aquatic animals that live on the bottom substratum as opposed to within it, that is, the benthic fauna that live on top of the sediment surface at the seafloor...
Macrofauna are benthic or soil organisms which are retained on a 0.5 mm sieve. Studies in the deep sea define macrofauna as animals retained on a 0.3 mm sieve to account for the small size of many of the taxa.
Megafauna are large animals of any particular region or time. For example, Australian megafauna.
Meiofauna are small benthic invertebrates that live in both marine and fresh water environments. The term Meiofauna loosely defines a group of organisms by their size, larger than microfauna but smaller than macrofauna, rather than a taxonomic grouping. One environment for meiofauna is between grains of damp sand (see Mystacocarida).
In practice these are metazoan animals that can pass unharmed through a 0.5-1 mm mesh but will be retained by a 30-45 µm mesh, but the exact dimensions will vary from researcher to researcher. Whether an organism passes through a 1 mm mesh also depends upon whether it is alive or dead at the time of sorting.
Mesofauna are macroscopic soil animals such as arthropods or nematodes. Mesofauna are extremely diverse; considering just the springtails (Collembola), as of 1998, approximately 6,500 species had been identified. | <urn:uuid:ded9c2b5-634c-4b37-883b-7e2e70856cf2> | 3.90625 | 691 | Knowledge Article | Science & Tech. | 41.690036 | 95,563,557 |
Abstract: Pd/NHC Catalytic system, developed in the Ananikov laboratory, targeted on alternative technology of chemical utilization of organic sulfur species from crude oil (DOI: 10.1021/acscatal.5b01815).
Mercaptans or thiols are a special class of organic compounds that contains sulfur functional group, RSH. Various sulfur compounds are highly demanded in the formation of new materials in photonics, optics, pharmaceutical industry, organic chemistry, and nanotechnology.
Sulfur derivatives are, by far, the richest fossil source of functional molecules available in nature. Indeed, a diversity of sulfur species is present as contaminants in crude oil. Unfortunately, there are still no efficient technological tools to separate sulfur compounds from crude oil and utilize them in materials production. The petroleum industry wastes billions of tonnes of valuable compounds, which are annually destroyed to elemental sulfur.
It is a well-known fact that humans are very sensitive to thiols. Small molecular thiols have an extremely unpleasant smell, which even in trace-level concentration (1-5 parts per billion) can be easily detected by human’s nose.
A unique palladium catalyst was developed in the laboratory of Prof. Ananikov at the Zelinsky Institute of Organic Chemistry, Russian Academy of Sciences. The Pd complex with an NHC ligand furnished chemical transformations of thiols into vinyl monomers, a useful component of a new generation of polymeric materials. Even the challenging EtSH and PrSH thiols were involved in the reaction and gave an excellent outcome.
The chemical transformation was performed using an atom-economic approach, which assures high yield and complete selectivity. This means that a pure product can be obtained directly after completion of the reaction and isolation of the catalyst.
Mechanistic studies have revealed the key role of the nuclearity of transition metal complexes (Figure 2) in the catalytic cycle. The monometallic Pd complex mediated a quick reaction, whereas the bimetallic Pd complex reacted much more slowly. The mechanistic findings are connected to the catalyst evolution problem and to the role of nucleation to nanoparticles revealed by this group earlier (doi: 10.1021/jo402038p).
Upon addition to alkynes, thiols were efficiently converted to vinyl thioethers – stable monomenrs, which are easy to handle and do not have an unpleasant odour.
Here comes the logical solution to many chemical dilemmas: a right catalyst may turn even unpleasant chemicals into valuable and friendly products.
The article «Pd-NHC Catalytic System for the Efficient Atom-Economic Synthesis of Vinyl Sulfides from Tertiary, Secondary, or Primary Thiols» by Evgeniya Degtyareva, Julia Burykina, Artem Fakhrutdinov, Evgeniy Gordeev, Victor Khrustalev, and Valentine Ananikov was published in ACS Catalysis journal published by American Chemical Society.
Reference: ACS Catal. 2015, 5, 7208−7213; DOI: 10.1021/acscatal.5b01815
On-line link: http://dx.doi.org/10.1021/acscatal.5b01815
Anna Mikhailova | Ananikov Laboratory, Prof. Valentine P. Ananikov
PLEASE NOTE THAT THIS COURSE IS CURRENTLY BEING UPDATED TO OFFER YOU THE OPPORTUNITY TO EARN A FREE DIGITAL BADGE ON COMPLETION OF THE COURSE. THE NEW VERSION OF THE COURSE WILL REPLACE THIS EXISTING COURSE AND WILL BE AVAILABLE VIA THE SAME LINK FROM MONDAY 30 JULY 2018. ONCE THIS HAS HAPPENED YOU WILL NO LONGER BE ABLE TO VIEW THE CURRENT CONTENT, AND THE COURSE WILL NO LONGER APPEAR IN YOUR MY OPENLEARN PROFILE. THIS ALSO MEANS THAT YOU WILL NO LONGER BE ABLE TO DOWNLOAD YOUR FREE STATEMENT OF PARTICIPATION. IF YOU HAVE ALREADY COMPLETED THE COURSE AND EARNED YOUR STATEMENT, WE RECOMMEND THAT YOU DOWNLOAD IT NOW SO THAT YOU HAVE A COPY FOR YOUR RECORDS. IF YOU HAVE NOT YET COMPLETED THE COURSE AND WISH TO DO SO, YOU NEED TO COMPLETE IT BEFORE FRIDAY 27 JULY 2018.
Explore the many moons of our Solar System. Find out what makes them special. Should we send humans to our Moon again?
There are lots of moons in our Solar System. The Earth is the only planet with just a single moon. Some moons are bigger than ours; many are much smaller. There are even tiny moons orbiting some of the asteroids. Some have ongoing volcanic eruptions; others are dead, heavily cratered lumps. One has rivers and lakes of liquid methane. Our own Moon has resources that could help open the Solar System for future exploration. A small handful of moons have conditions below their surfaces where primitive life might exist. | <urn:uuid:46df16e5-6334-4c79-98d0-4de6fc68cd10> | 3.640625 | 359 | News (Org.) | Science & Tech. | 57.442147 | 95,563,576 |
whatstyle finds a code format style that fits given source files.
Code formatters like clang-format or uncrustify usually need a specific style definition describing how to reformat the code. This program looks at your source code and generates a style definition so that the reformatted source fits its original formatting as closely as possible.
It should help programmers to begin using a formatting tool right away without the need to invest hours of reading the formatting tool documentation.
First you choose one or more of your source files whose style you find representative of the style you want to keep. The source files should cover a wide range of language constructs and to keep runtime down not exceed a few thousand lines if possible.
You specify the formatter and source files as follows:
$ whatstyle.py -f clang-format tests/examples/gumbo-parser/utf8.c
whatstyle will then try different options while reporting intermediate results. After a while you get back a result like this:
### This style was chosen for your .clang-format - it perfectly matches your sources. BasedOnStyle: Google AlignAfterOpenBracket: DontAlign SpaceAfterCStyleCast: true
Adding the option --mode resilient will usually add more options to your style so that a heavily out-of-shape version of your sources can be better retransformed into your original style.
Reading the documentation of the individual options of a formatter takes time and does not necessarily make clear how an option influences the formatting. You can try something like this instead:
$ whatstyle.py --variants tests/examples/xv6/printf.c
First the best matching style is chosen and then every option is replaced or augmented by every possible value. All combinations that actually made a difference are grouped and displayed side by side. The variant on the left is the original from the best style; on the right is another option setting that usually makes things worse.

Below the option values, differing code fragments are shown; an option lets you show more diff hunks per variation. Use --ansi to display the variants table in an ANSI terminal, --html to open it in a browser, or --ansihtml for a darker look in a browser.
For information about some useful scenarios run:
$ whatstyle.py --usage-examples
or read the text at the beginning of whatstyle.py.
whatstyle needs at least Python 2.7 and it works as well with Python 3.2, 3.3, 3.4 and 3.5. Jython and pypy are supported.
Also whatstyle needs at least one code formatter in the current search path. The presence of either diff or git is optional but useful, because the diff quality of both of them may be better than Python's difflib, and this results in a different and usually better matching style.
This program should work on OS X, Windows, Linux, FreeBSD and OpenBSD.
The program basically works by reformatting the source with many combinations of options and running a diff between the original and the reformatted source code.
First the major standard styles (e.g. WebKit, GNU, LLVM) are evaluated and the closest one chosen as a baseline.
Successively every choice of every style option is added to test if the additional option further reduces the differences. When no more option settings can improve the result the most appropriate style has been found.
Among a number of candidate styles with the same diff quality the one with the least number of explicit options is chosen to keep the style definition brief.
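The search described above can be pictured as a greedy hill-climb over option settings, scored by diff distance. The sketch below is a simplification of the idea, not whatstyle's actual implementation: real runs invoke the formatter binary and may use an external diff tool, so `format_with` here is a hypothetical stand-in for that call.

```python
import difflib

def diff_distance(original, reformatted):
    """Count lines that differ between two versions of a source file."""
    a, b = original.splitlines(), reformatted.splitlines()
    matched = sum(block.size for block in
                  difflib.SequenceMatcher(None, a, b).get_matching_blocks())
    return len(a) + len(b) - 2 * matched

def best_style(source, base_style, candidate_options, format_with):
    """Greedily add option settings while they reduce the diff to the original.

    Ties keep the current style, so the result stays as brief as possible.
    """
    style = dict(base_style)
    best = diff_distance(source, format_with(source, style))
    improved = True
    while improved:
        improved = False
        for option, values in candidate_options.items():
            for value in values:
                trial = {**style, option: value}
                score = diff_distance(source, format_with(source, trial))
                if score < best:  # strictly better only
                    style, best, improved = trial, score, True
    return style, best

# Toy stand-in formatter: re-indents every line to style["indent"] spaces.
def toy_formatter(src, style):
    indent = " " * style.get("indent", 4)
    return "\n".join(indent + line.strip() for line in src.splitlines())

style, score = best_style("  a\n  b", {}, {"indent": [2, 8]}, toy_formatter)
print(style, score)  # {'indent': 2} 0
```

With the toy formatter, the search keeps exactly the one option that makes the reformatted output match the source, mirroring the tie-break toward the briefest style definition.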
whatstyle was written by Michael Krause.
whatstyle is available under the MIT license. See the LICENSE file for more info.
The project specific Open Source licenses of the source codes in tests/examples are present in their respective directories. | <urn:uuid:f5b77a4e-3d7b-4c6a-b1de-e02d0539ea9c> | 2.96875 | 803 | Documentation | Software Dev. | 46.938156 | 95,563,603 |
Ecologists have long sought to understand global patterns of biological diversity (1, 2). Most work on this topic has focused on visible aboveground organisms that can easily be counted, such as birds, butterflies, reptiles, and plants. In contrast, knowledge of the global ecology of most belowground organism groups is limited because of their microscopic size and hidden existence. Rapid advances in molecular techniques for analyzing soil communities are now offering unprecedented opportunities for understanding soil biodiversity (3, 4). On page 1078 of this issue, Tedersoo et al. (5) use pyrosequencing of soil samples to provide a comprehensive global study of a major group of soil organisms: soil fungi.
This has been established by Ingemar Jönsson, a researcher at Kristianstad University in Sweden.
It has been nearly a year since the ecologist Ingemar Jönsson had some 3,000 microscopic water bears sent up on a twelve-day space trip. The aim of the research project, which was supported by the European Space Agency, was to find out more about the basic physiology of tardigrades by seeing if they can survive in a space environment.
Now Ingemar Jönsson and his colleagues in Stockholm, Stuttgart, and Cologne are publishing their research findings, including an article in the international journal Current Biology.
"Our principal finding is that the space vacuum, which entails extreme dehydration, and cosmic radiation were not a problem for water bears. On the other hand, the ultraviolet radiation in space is harmful to water bears, although a few individual can even survive that," says Ingemar Jönsson.
The next challenge facing Ingemar Jönsson is to try to understand the mechanisms behind this exceptional tolerance in water bears. He suspects that even the water bears that got through the space trip without any trouble may in fact have incurred DNA damage, but that the animals managed to repair this damage.
"All knowledge involving the repair of genetic damage is central to the field of medicine," says Ingemar.
"One problem with radiation therapy in treating cancer today is that healthy cells are also harmed. If we can document and show that there are special molecules involved in DNA repair in multicellular animals like tardigrades, we might be able to further the development of radiation therapy."Tardigrades survive exposure to space in low Earth orbit
Current Biology, Vol 18, R729-R731, 09 September 2008
Ingemar Jönsson can be reached by phone at +46-(0)70 2666 541 or by e-mail at firstname.lastname@example.org. Press officer: Lisa Nordenhem, email@example.com; +46-703 176578
Ingemar Björklund | idw
Carbon storage: caught between a rock and climate change
Presenter: Herbert Huppert
Published: December 2012
Age: 14-19 and upwards
Views: 1010 views
Source/institution: Royal Society
Bakerian Prize Lecture by Professor Herbert Huppert FRS, Institute of Theoretical Geophysics at the University of Cambridge. Since the formation of the Earth, the global mean surface temperature, carbon dioxide (CO2) and methane content of the atmosphere have varied considerably. But over the past 150 years there have been dramatic increases in all three values. Professor Herbert Huppert, the Director of the Institute of Theoretical Geophysics at the University of Cambridge, will explore this rise and its probable significance, as well as one option that can potentially halt the rise in CO2: carbon storage. This technology may combat the rise in greenhouse gases by storing CO2 in vast porous geological formations. For the last fifteen years there has been considerable effort devoted to storing, or sequestering, some of the millions of tons of CO2 resulting from the burning of fossil fuels which otherwise would have been emitted into the atmosphere. The first project, the Sleipner gas field off Norway's coast, has successfully stored approximately 15 million tons of CO2 since 1996. This lecture will explore some of the physical, chemical and fluid dynamical processes in the storage of CO2, as well as evaluate the risk of leakage back into the atmosphere.
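The Sleipner figure quoted in the lecture description implies a rough average injection rate, which is easy to check. This is a back-of-the-envelope estimate only; the actual rate varies from year to year:

```python
# Average CO2 injection rate at Sleipner implied by the lecture's figures:
# roughly 15 million tonnes stored between 1996 and this 2012 lecture.
stored_mt = 15.0      # million tonnes of CO2
years = 2012 - 1996   # 16 years of operation
rate = stored_mt / years
print(f"~{rate:.2f} Mt CO2 per year")  # ~0.94 Mt/yr
```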
Spring and summer extreme temperatures in Iberia during last century in relation to circulation types
- Fernández-Montes Sonia, Rodrigo Fernando S, Seubert Stefanie, Sousa Pedro M
- University of Almeria, Applied Physics, Almería, Spain, University of Augsburg, Institute of Geography, Augsburg, Germany, University of Lisbon, CGUL, IDL, Lisbon, Portugal
- Atmospheric Research SCI(E) SCOPUS
- Elsevier in 2013
In the Iberian Peninsula the rise in temperatures has been notable from the mid-1970s to the mid-2000s, especially in spring and summer. This study analyses spatial and temporal relationships between extreme temperatures and atmospheric circulation types (CTs) defined over the Iberian Peninsula (IP) in these seasons. Station series (29) of maximum and minimum temperature are considered, starting from 1905 until 2006. The CTs (9 for spring and 8 for summer) are derived by a cluster method applied to daily mean SLP grids covering the period 1850–2003. Changes in the seasonal frequency of extreme temperatures and of CTs are analysed. Subsequently, the CTs are examined for their effectiveness in leading to moderately extreme temperatures (at each location) using an index that measures the contribution to extreme days with respect to the contribution to non-extreme days. Correlation between regional extreme series and CT frequency is also tested.
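The effectiveness index described in the abstract — a circulation type's contribution to extreme days relative to its contribution to non-extreme days — can be sketched as a simple frequency ratio. This is a hedged reconstruction from the abstract's wording, not the authors' exact formula:

```python
from collections import Counter

def ct_effectiveness(daily_cts, extreme_flags):
    """For each circulation type, relative frequency on extreme days divided
    by relative frequency on non-extreme days (values > 1 favour extremes)."""
    extreme = Counter(ct for ct, e in zip(daily_cts, extreme_flags) if e)
    normal  = Counter(ct for ct, e in zip(daily_cts, extreme_flags) if not e)
    n_ext, n_norm = sum(extreme.values()), sum(normal.values())
    return {
        ct: (extreme[ct] / n_ext) / (normal[ct] / n_norm)
        for ct in set(daily_cts)
        if normal[ct] > 0  # undefined if a type never occurs on normal days
    }

# Toy example: type "A" dominates the extreme days.
cts   = ["A", "A", "B", "A", "A", "B", "B", "B"]
flags = [ 1,   1,   0,   0,   1,   0,   0,   1 ]
eff = ct_effectiveness(cts, flags)
# "A" is ~3x overrepresented on extreme days; "B" is underrepresented.
```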
Acacias across Africa have enormous ecological and economic importance, yet their population genetics are poorly studied. We used seven microsatellite loci to investigate spatial genetic structure and to identify potential ecological and geographic barriers to dispersal in the widespread acacia, Senegalia (Acacia) mellifera. We quantified variation among 791 individuals from 28 sampling locations, examining patterns at two spatial scales: (i) across Kenya including the Rift Valley, and (ii) for a local subset of 11 neighbouring locations on Mpala Ranch in the Laikipia plateau. Our analyses recognize that siblings can often be included in samples used to measure population genetic structure, violating fundamental assumptions made by these analyses. To address this potential problem, we maximized genetic independence of samples by creating a sibship-controlled data set that included only one member of each sibship and compared the results obtained with the full data set. Patterns of genetic structure and barriers to gene flow were essentially similar when the two data sets were analysed. Five well-defined geographic regions were identified across Kenya within which gene flow was localized, with the two strongest barriers to dispersal splitting the Laikipia Plateau of central Kenya from the Western and Eastern Rift Valley. At a smaller scale, in the absence of geographic features, regional habitat gradients appear to restrict gene flow significantly. We discuss the implications of our results for the management of this highly exploited species.
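The sibship control described — keeping only one member of each inferred sibling group — is a simple filter once sibships have been assigned (typically by a separate inference step that is not shown here). The function and field names below are illustrative, not the authors' code:

```python
def one_per_sibship(samples, sibship_of):
    """Keep the first-encountered member of each sibship group.

    samples:    iterable of sample IDs
    sibship_of: dict mapping sample ID -> sibship group ID
    """
    kept, seen = [], set()
    for sample in samples:
        group = sibship_of[sample]
        if group not in seen:  # drop later siblings from the same group
            seen.add(group)
            kept.append(sample)
    return kept

samples = ["ind1", "ind2", "ind3", "ind4", "ind5"]
sibships = {"ind1": "fam_a", "ind2": "fam_a", "ind3": "fam_b",
            "ind4": "fam_b", "ind5": "fam_c"}
print(one_per_sibship(samples, sibships))  # ['ind1', 'ind3', 'ind5']
```

Running the same structure analyses on this reduced set and on the full set, as the authors did, checks how much sibling non-independence biases the results.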
World Bulletin / News Desk
Iceland could be the first country to generate electricity from magma. If plans to go ahead next year succeed, three percent of its energy needs will be generated this way, says Gudmundur Omar Fridleifsson, chief geologist of the Iceland Deep Drilling Project (IDDP).
Iceland created the first magma-based geothermal energy system after accidentally drilling approximately two kilometers into a chamber of molten lava in a caldera called Krafla in the north of the island five years ago.
Following this incident, scientists from the IDDP decided to use the magma to generate 36 megawatts of electricity in 2012.
However, the team's plans were put on hold when a valve failed during the process and the well had to be closed down.
"The power company then considered either reconditioning the well or drilling a new well to the magma chamber for steam production" said Fridleifsson.
"The IDDP program is now ready to drill the next well, IDDP-2, but this time not in Krafla, in the Reykjanes geothermal field in south west Iceland, which has seawater salinity and in many respects resembles black smoker systems on the ocean floor," he said.
Some modification and improvements are included in the design of the well and the flow line structure. An official decision on drilling well IDDP-2 remains to be taken but scientists are aiming to drill in 2015, according to Fridleifsson.
This improvement is very important for the world and it’s all about the world’s energy, said Mustafa Kumral, associate professor of geological engineering at Istanbul Technical University.
"New Zealand and Iceland are experienced countries with geothermal works because of their geological locations and geothermal sources. In these countries, geothermal power and thermal actions are very common due to volcanism. Besides, they have more opportunities compared to other countries" he said.
In the world and Turkey, generating energy from magma started to take attention, for now only by thoughts and not with the actions, he added.
Turkey has around 14 inactive volcanoes - the most recent eruption being 150 years ago at Mount Tendürek, a volcano in the southeast of Turkey.
IDDP pumps water down during drilling, which hydrofractures the hot rock next to the magma body, then reverses the process and draws the fluid back into the well. This creates a geothermal system, forming an EGS-magma system. IDDP claims that by this drilling they “unintentionally” created the world’s first magma-EGS system.
This new method of generating electricity could be important for Iceland, where geothermal energy and hydroelectricity make up almost 95 percent of the energy production and 85 percent of homes are heated by geothermal, according to figures from International Energy Agency.
Drilling into magma has only occurred once before; in Hawaii in 2007, however generating electricity was not successful.
If drilling into magma succeeds in Iceland it will be a revolution for the energy world, experts say.
The IDDP is partnership of three energy companies, National Power Company, HS Energy Ltd. and Reykjavik Energy. The government agency, National Energy Authority of Iceland is also a partner of this collaboration.
Interactions Between Boreal Forests and Climate Change
The increasing concentrations of CO2 and other greenhouse gases (Chapter 1) change the behaviour of radiation energy in the atmosphere. Several processes (Section 6.2) respond to this redistribution of radiation. The conservation equations of mass, energy and momentum described in Section 7.2 can be solved numerically to study the resulting effects on climate. Changes in climate occur both as a result of natural variability and as a response to anthropogenic forcing. Some part of the natural variability is forced, that is, caused by external factors such as solar variability and volcanic eruptions. Another part is unforced, associated with the nonlinear internal dynamics of the climate system such as interactions between the atmosphere and the oceans. Because of these natural mechanisms, climate has always varied, on time scales ranging from years to millions of years (Jansen et al., 2007), and it would continue to vary in the future regardless of what mankind is doing. Nevertheless, when we focus on this and the following centuries, the effects of natural variability will likely be secondary when compared with anthropogenic changes in the global climate (Meehl et al., 2007).
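The flavour of "solving conservation equations numerically" can be conveyed with the simplest possible case: a zero-dimensional energy-balance model stepping global mean temperature toward equilibrium under an added radiative forcing. This is a standard textbook toy, far simpler than the coupled models the chapter refers to, and the parameter values are common illustrative choices rather than figures from the chapter:

```python
# Zero-dimensional energy balance for the temperature anomaly T:
#   C dT/dt = F - lambda * T
# T: temperature anomaly (K), F: radiative forcing (W/m^2),
# lambda: climate feedback parameter (W/m^2/K),
# C: effective heat capacity of the surface layer (J/m^2/K).

def step_temperature(T, forcing, feedback=1.2, heat_capacity=8.0e8, dt=3.15e7):
    """One explicit Euler step of the anomaly equation (dt in seconds ~ 1 yr)."""
    return T + dt * (forcing - feedback * T) / heat_capacity

T = 0.0
forcing = 3.7  # roughly the forcing of doubled CO2, W/m^2
for year in range(200):
    T = step_temperature(T, forcing)
print(f"Warming after 200 years: {T:.2f} K")  # approaches F/lambda = 3.08 K
```

The equilibrium warming is F/lambda regardless of the heat capacity; C only sets how quickly the system gets there, which is why transient and equilibrium responses differ in the full models.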
Keywords: Climate Change, Soil Organic Matter, Boreal Forest, Leaf Area Index, Surface Albedo
Human exposure to the environmental toxin aluminium has been linked, if tentatively, to autism spectrum disorder. The presence of aluminium in inflammatory cells in the meninges, vasculature, grey and white matter is a standout observation and could implicate aluminium in the aetiology of ASD.
Consanguinity and reproductive health among Arabs
No single birthplace of mankind, say scientists
New discovery about HIV.
A bad marriage can seriously damage your health, say scientists
Research identifies new breast cancer therapeutic target
Massive diamond cache detected beneath Earth's surface
We Need to Capture Carbon to Fight Climate Change
Amber encased snake
Painkillers crafted with part of the drug Botox provided long-term pain relief in mice. Researchers added the modified Botox to molecules targeting pain-messaging nerve cells. Such painkillers could potentially one day be developed for humans as alternatives to more addictive drugs, such as opioids.
Malaria Vaccines: Recent Advances and New Horizons
Protecting data privacy is key to a smart energy future
“Because of global climate change, huge amounts of permafrost are rapidly warming. To microbes, they’re like freezers full of juicy chicken dinners that are thawing out,” -- researcher Virginia Rich. Researchers found 1,500 new microbial genomes, doubled known types of viruses in the world.
A bad mood may help your brain with everyday tasks - New research found that being in a bad mood can help some people’s executive functioning, such as their ability to focus attention, manage time and prioritize tasks. The same study found that a good mood has a negative effect on it in some cases.
10 new moons discovered around Jupiter
UT Scientists Identify genesis of Alzheimer's
Adult rats that had been exposed before birth and during nursing to a mixture of plastic chemicals called phthalates, found in a wide range of consumer products, at human-level doses have fewer brain cells and perform worse on an attention-switching task than rats not exposed to the plastic chemicals.
A new study by epidemiologists supports the viability of a potential way to reduce the risk of Alzheimer's disease. The authors looked at subjects who suffered severe herpes infection and were treated aggressively with antiviral drugs; their relative risk of dementia was reduced by a factor of 10.
CNIDARIA : CONICA : Plumulariidae | SEA ANEMONES AND HYDROIDS
Description: A fan-like hydroid with irregularly branching main stems and regular side-branches arising in opposite pairs from the main stem. The hydrothecae are arranged on the upper edge of the side branch. They are tubular in shape with a smooth outer margin, and are surrounded by 4 smaller, defensive polyps, one on either side, and one above and below. The gonothecae are arranged on short pedicels. They are roughly oval shaped, and have a wide aperture. The capsule tapers towards the base, and is surrounded by four small, defensive polyps, located in the basal region. Typically 70mm with side-branches 10mm in length.
Habitat: A deep water species rarely found in less than 50m and thus not often encountered by divers. Found at sites with moderate to strong tidal streams and scour from mobile gravel.
Ecology: Grows in company of Diphasia spp. especially Diphasia alata, in sites with a high diversity of hydroids.
Distribution: The east coast of Rathlin below 40m; the Maidens, Co Antrim; North Channel 45m; east side of Ushant 50m. Also reported from Shetland, western Scotland, North Sea, Plymouth, Scillies and Roscoff.
Similar Species: This is a distinctive species, unlikely to be confused with any other apart from Polyplumaria frutescens.
Key Identification Features:
Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record : World Register of Marine Species.
Picton, B.E. & Morrow, C.C. (2016). Polyplumaria flabellata G O Sars, 1874. [In] Encyclopedia of Marine Life of Britain and Ireland.
http://www.habitas.org.uk/marinelife/species.asp?item=D6100 Accessed on 2018-07-19
Copyright © National Museums of Northern Ireland, 2002-2015
Published on Apr 27, 2014
In the framework of the EU project "Solarjet", scientists demonstrate for the first time the entire production path to liquid hydrocarbon fuels from water, CO2 and solar energy. The key technological component is a solar reactor developed at ETH Zurich. | <urn:uuid:a86f1351-167a-4af3-94f2-963128dad4ca> | 3.09375 | 80 | Truncated | Science & Tech. | 46.566154 | 95,563,814 |
Western Glacier Stonefly (Latin name: Zapada glacier)
Conservation status: vulnerable (population is decreasing)
One of 3,500 species of Stoneflies, this rare insect breeds in only a few cold water streams immediately below melting glaciers or next to permanent snowfields in Glacier National Park, USA.
Since 1960, the average summer temperature in Glacier National Park has increased by around 1 °C and glaciers have declined by 35%. By counting Stoneflies, scientists can determine how quickly glaciers are melting and the temperature of streams. In a two year search begun in 2011, scientists found the Stonefly in only one of the six streams it had previously occupied and discovered that it had retreated to two different streams at higher altitudes. Satellite data confirm that the world’s glaciers are declining, affecting the availability of fresh water for humans, animals and plants, and contributing to sea level rise.
Other animals at risk
The Narwhal lives mainly in the Atlantic Arctic. Because of specialized habitat, narrow range and limited diet (Arctic cod and halibut), it is one of the Arctic species most vulnerable to climate change. The Narwhal breeds in bays and fjords, moving offshore during winter to areas of heavy ice pack, breathing through the few cracks. Sudden or extreme temperature change can cause these cracks to freeze shut, trapping the whales. Other threats are illegal hunting, industrial activities, and risks from oil development, exploration and shipping in the Arctic.
The Rusty Patched is the first bee to be listed as endangered in the US. Populations have declined as much as 87% from habitat loss, disease and pesticides. Climate threats include: warming and precipitation, early snow, late frost and drought. Bees and butterflies are important agricultural pollinators. In 2016, 40% of invertebrate pollinators (bees and butterflies) were listed as threatened with extinction.
In the last 30 years the Staghorn Coral population has decreased by 80% from disease, pollution, development and damage. Climate change is increasing the risk of extinction. Corals live in symbiotic (mutually beneficial) relation with algae. The coral receives nutrients and oxygen from algae, and the algae receive nutrients and carbon dioxide from the coral. Rising sea temperature increases algae growth so oxygen levels become too high for the coral, causing "bleaching"—the coral expels the algae and dies. Higher ocean acidity contributes to bleaching and also reduces the ability of corals and other marine animals to build hard shells. Other threats from climate change are sea level rise, changes in currents and storm damage.
Ivory Gulls are almost entirely dependent on sea ice and glaciers for nesting and food foraging. They feed on fish and shellfish that thrive near the edge of the ice, and on the remains of seals left by Polar Bears. Seal blubber is a source of heavy contaminants—Ivory Gull eggs show a higher concentration of mercury and pesticides than any Arctic sea bird. Other threats are illegal hunting and disturbance from diamond mining in the Canadian Arctic. | <urn:uuid:17708b9a-a84e-4448-b088-e8475b7ee2ab> | 4.0625 | 620 | Knowledge Article | Science & Tech. | 35.279296 | 95,563,830 |
Author: Song Y. Yan
Publisher: World Scientific
Release Date: 1996
This book is about perfect, amicable and sociable numbers, with an emphasis on amicable numbers, from both a mathematical and particularly a computational point of view. Perfect and amicable numbers have been studied since antiquity, nevertheless, many problems still remain. The book introduces the basic concepts and results of perfect, amicable and sociable numbers and reviews the long history of the search for these numbers. It examines various methods, both numerical and algebraic, of generating these numbers, and also includes a set of important and interesting open problems in the area. The book is self-contained, and accessible to researchers, students, and even amateurs in mathematics and computing science. The only prerequisites are some familiarity with high-school algebra and basic computing techniques.
Author: Song Y. Yan
Publisher: Springer Science & Business Media
Release Date: 2013-11-11
This book provides a good introduction to the classical elementary number theory and the modern algorithmic number theory, and their applications in computing and information technology, including computer systems design, cryptography and network security. In this second edition proofs of many theorems have been provided, further additions and corrections were made.
Author: Lines M E
Publisher: CRC Press
Release Date: 1986-01-01
Why do we count the way we do? What is a prime number or a friendly, perfect, or weird one? How many are there and who has found the largest yet known? What is the Baffling Law of Benford and can you really believe it? Do most numbers you meet in every day life really begin with a 1, 2, or 3? What is so special about 6174? Can cubes, as well as squares, be magic? What secrets lie hidden in decimals? How do we count the infinite, and is one infinity really larger than another? These and many other fascinating questions about the familiar 1, 2, and 3 are collected in this adventure into the world of numbers. Both entertaining and informative, A Number for Your Thoughts: Facts and Speculations about Numbers from Euclid to the Latest Computers contains a collection of the most interesting facts and speculations about numbers from the time of Euclid to the most recent computer research. Requiring little or no prior knowledge of mathematics, the book takes the reader from the origins of counting to number problems that have baffled the world's greatest experts for centuries, and from the simplest notions of elementary number properties all the way to counting the infinite.
Chapters: Perfect Number, Amicable Number, Table of Divisors, Hyperperfect Number, Harmonic Divisor Number, Friendly Number, Aliquot Sequence, Abundant Number, Superabundant Number, Highly Abundant Number, Weird Number, Sociable Number, Superperfect Number, Deficient Number, Untouchable Number, Colossally Abundant Number, Almost Perfect Number, Quasiperfect Number, Sublime Number. Source: Wikipedia. Pages: 95. Excerpt: The tables below list all of the divisors of the numbers 1 to 1000. A divisor of an integer n is an integer m, say, for which n/m is again an integer (which is necessarily also a divisor of n). For example, 3 is a divisor of 21, since 21/3 = 7 (and 7 is also a divisor of 21). If m is a divisor of n then so is −m; the tables below only list positive divisors.
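The divisor definition in the excerpt above, together with the perfect and amicable numbers named in the chapter list, can be checked directly in code. A brute-force sketch using the classical definitions (nothing here is taken from the book itself):

```python
def proper_divisors(n):
    """All positive divisors of n except n itself."""
    return [m for m in range(1, n) if n % m == 0]

def is_perfect(n):
    # A perfect number equals the sum of its proper divisors.
    return sum(proper_divisors(n)) == n

def are_amicable(a, b):
    # An amicable pair: each number equals the sum of the
    # other's proper divisors (and the two numbers differ).
    return (a != b
            and sum(proper_divisors(a)) == b
            and sum(proper_divisors(b)) == a)

print([n for n in range(2, 500) if is_perfect(n)])  # [6, 28, 496]
print(are_amicable(220, 284))                       # True
```

The pair (220, 284), known since antiquity, is the smallest amicable pair; the trial-division search is quadratic and only suitable for small ranges like this one.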
Author: Tony Crilly
Publisher: Hachette UK
Release Date: 2008-03-03
Just the mention of mathematics is enough to strike fear into the hearts of many, yet without it, the human race couldn't be where it is today. By exploring the subject through its 50 key insights - from the simple (the number one) and the subtle (the invention of zero) to the sophisticated (proving Fermat's last theorem) - this book shows how mathematics has changed the way we look at the world around us.
Author: David Wells
Publisher: Penguin UK
Release Date: 1997-09-04
Why was the number of Hardy's taxi significant? Why does Graham's number need its own notation? How many grains of sand would fill the universe? What is the connection between the Golden Ratio and sunflowers? Why is 999 more than a distress call? All these questions and a host more are answered in this fascinating book, which has now been newly revised, with nearly 200 extra entries and some 250 additions to the original entries. From minus one and its square root, via cyclic, weird, amicable, perfect, untouchable and lucky numbers, aliquot sequences, the Cattle problem, Pascal's triangle and the Syracuse algorithm, music, magic and maps, pancakes, polyhedra and palindromes, to numbers so large that they boggle the imagination, all you ever wanted to know about numbers is here. There is even a comprehensive index for those annoying occasions when you remember the name but can't recall the number.
Author: Diane Thiessen
Publisher: National Council of Teachers of
Release Date: 1998
Children's literature in mathematics has been a valuable tool for developing positive attitudes toward mathematics as well as for exploring mathematics. This book provides annotated bibliographies of children's literature books emphasizing mathematics education. Each review describes the book's content and accuracy, its illustrations and their appropriateness, the author's writing style, and indicates whether activities for the reader are included. Chapters in this book include: (1) "Early Number Concepts"; (2) "Number-Extensions and Connections"; (3) "Measurement"; (4) "Geometry and Spatial Sense"; and (5) "Series and Other Resources". (ASK) | <urn:uuid:24767581-0a94-417f-9fb1-3424bfac3e03> | 2.65625 | 1,260 | Content Listing | Science & Tech. | 42.592047 | 95,563,839 |
In 1970, a part-sterile plant in Uniform Test I, entry W6-4108 (from Wisconsin), was observed at Ames, Iowa. Seven seeds from this part-sterile plant gave rise to seven plants in 1971; six were fertile and one was sterile and set no seeds. In 1972, five plant progeny rows gave all fertile plants, i.e., they did not segregate fertile and sterile plants.
Palmer, Reid G. "Research notes: Genetics of the meiotic mutant st5," Soybean Genetics Newsletter: Vol. 6, Article 30. Available at: https://lib.dr.iastate.edu/soybeangenetics/vol6/iss1/30
THE Earth is risking a major ecological breakdown that could eventually render it largely uninhabitable.
This is one of the warnings contained in ‘Surviving the 21st Century’, a powerful new book released recently by global science publisher Springer International.
Our combined actions may be leading to “. . . gross ecological breakdown that will strike humanity harder than anything in our experience”, the book cautions.
Author and science writer Julian Cribb says, “In the past week alone has come news that global populations of fish, birds, mammals, amphibians and reptiles declined by 58 per cent between 1970 and 2012. From 20-30 per cent of known species now appear at risk of extinction.”
“This is an extermination of life on Earth without precedent. The human impact is on track to exceed the catastrophe that took out the dinosaurs.
“Many people don’t realise it, but our own fate is completely bound up with these other creatures, plants and organisms we heedlessly destroy. They provide the clean air and water, the food, the nutrient recycling, the de-toxing, the medications, the clothing and timber that we ourselves need for survival.
“Humans are now engaged in demolishing our own home, brick by brick. Every dollar we spend on food or material goods sends a tiny, almost-imperceptible signal down long industrial and market chains to accelerate the devastation.
“Together those signals are causing the very systems we ourselves need for survival to break down, as forests fall, deserts spread and oceans acidify.”
A recent study by Princeton University found oxygen levels in the Earth’s atmosphere have fallen by 0.1 per cent in the past 100 years, probably due to land clearing, ocean acidification and burning of fossil fuels. “Though it is still a small signal, it is another indicator of our ability to disrupt the Earth’s life-support system,” Cribb says.
- Related post: You eat 30 kilos of soil every day (Julian Cribb)
On February 3 at 15:53 UTC/10:53 a.m. EST, NASA's Aqua satellite passed over Queensland, Australia and the AIRS or Atmospheric Infrared Sounder instrument captured infrared data on both storms. System 94P/Fletcher was in the Gulf of Carpentaria and over the Northwest region of Queensland, while newborn Edna formed in the South Pacific Ocean east of Queensland.
This infrared image of Tropical Storm Edna was taken by NOAA's polar orbiting satellite, NOAA-19 on Feb. 4 at 1443 UTC/9:43 a.m.
Image Credit: NRL/NOAA
Tropical Storm Edna Moving Toward New Caledonia
System 93P strengthened between February 3 and 4 into Tropical Depression 12P and then Tropical Storm Edna, northwest New Caledonia. By 1500 UTC/10 a.m. EST Edna was about 392 nautical miles northwest of New Caledonia, near 17.2 south latitude and 161.5 east longitude. Edna had maximum sustained winds near 35 knots/40 mph/62 kph. It was moving to the southeast at 19 knots/21.8 mph/35.1 kph.
NASA's AIRS data showed very cold cloud top temperatures in powerful thunderstorms within Edna that have the potential for heavy rainfall. Infrared data also showed that Edna's circulation has consolidated and convection has deepened/strengthened with bands of thunderstorms, mostly north of the center, were wrapping more tightly into the low-level center of circulation.
AIRS data also showed that sea surface temperatures were around 28C/82.4F, warm enough to contribute to strengthening the system. Sea surface temperatures need to be at least 26.6C/80F in order for a tropical cyclone to maintain intensity. Warmer temperatures than that can help in increased evaporation with the formation of thunderstorms that make up a tropical cyclone. However, as Edna continues tracking southward, the storm will run into cooler sea surface temperatures that will squelch any significant intensification.
Text credit: Rob Gutro
Rob Gutro | EurekAlert!
This story is available online: http://bit.ly/1h69JAA
CORVALLIS, Ore. – Exposure to iron pipes and steel rebar, such as the materials found in most hatcheries, affects the navigation ability of young steelhead trout by altering the important magnetic “map sense” they need for migration, according to new research from Oregon State University.
The exposure to iron and steel distorts the magnetic field around the fish, affecting their ability to navigate, said Nathan Putman, who led the study while working as a postdoctoral researcher in the Department of Fisheries and Wildlife, part of OSU’s College of Agricultural Sciences.
Just last year Putman and other researchers presented evidence of a correlation between the oceanic migration patterns of salmon and drift of the Earth’s magnetic field. Earlier this year they confirmed the ability of salmon to navigate using the magnetic field in experiments at the Oregon Hatchery Research Center. Scientists for decades have studied how salmon find their way across vast stretches of ocean.
“The better fish navigate, the higher their survival rate,” said Putman, who conducted the research at the Oregon Hatchery Research Center in the Alsea River basin last year. “When their magnetic field is altered, the fish get confused.”
Subtle differences in the magnetic environment within hatcheries could help explain why some hatchery fish do better than others when they are released into the wild, Putman said. Stabilizing the magnetic field by using alternative forms of hatchery construction may be one way to produce a better yield of fish, he said.
“It’s not a hopeless problem,” he said. “You can fix these kinds of things. Retrofitting hatcheries with non-magnetic materials might be worth doing if it leads to making better fish.”
Putman’s findings were published this week in the journal Biology Letters. The research was funded by Oregon Sea Grant and the Oregon Department of Fish and Wildlife, with support from Oregon State University. Co-authors of the study are OSU’s David Noakes, senior scientist at the Oregon Hatchery Research Center, and Amanda Meinke of the Oregon Hatchery Research Center.
The new findings follow earlier research by Putman and others that confirmed the connection between salmon and the Earth’s magnetic field. Researchers exposed hundreds of juvenile Chinook salmon to different magnetic fields that exist at the latitudinal extremes of their oceanic range.
Fish responded to these “simulated magnetic displacements” by swimming in the direction that would bring them toward the center of their marine feeding grounds. In essence, the research confirmed that fish possess a map sense, determining where they are and which way to swim based on the magnetic fields they encounter.
Putman repeated that experiment with the steelhead trout and achieved similar results. He then expanded the research to determine if changes to the magnetic field in which fish were reared would affect their map sense. One group of fish was maintained in a fiberglass tank, while the other group was raised in a similar tank but in the vicinity of iron pipes and a concrete floor with steel rebar, which produced a sharp gradient of magnetic field intensity within the tank. Iron pipes and steel reinforced concrete are common in fish hatcheries.
The scientists monitored and photographed the juvenile steelhead, called parr, and tracked the direction in which they were swimming during simulated magnetic displacement experiments. The steelhead reared in a natural magnetic field adjusted their map sense and tended to swim in the same direction. But fish that were exposed to the iron pipes and steel-reinforced concrete failed to show the appropriate orientation and swam in random directions.
More research is needed to determine exactly what that means for the fish. The loss of their map sense could be temporary and they could recalibrate their magnetic sense after a period of time, Putman said. Alternatively, if there is a critical window in which the steelhead’s map sense is imprinted, and it is exposed to an altered magnetic field then, the fish could remain confused forever, he said.
“There is evidence in other animals, especially in birds, that either is possible,” said Putman, who now works for the National Oceanic and Atmospheric Administration. “We don’t know enough about fish yet to know which is which. We should be able to figure that out with some simple experiments.”
Does Irrigation Mask Climate Change?
by Scott James, October 13, 2010

We know irrigation works: we can turn a desert into farmland. But how much do we know about the side effects? A new report from two New York researchers entitled Irrigation and 20th Century Climate suggests that the worldwide increase in irrigation over the 20th century is actually changing weather patterns and masking some local effects of global warming. The report also raises the question: if current irrigation and the resulting weather changes are hiding some localized effects of climate change, what will happen if the water supplies used for that irrigation run out?

Image credit: Puma and Cook, Irrigation and 20th Century Climate (figure: the expansion of irrigation during the 20th century)

The report concludes that irrigated regions "generally have cooler temperatures and increased precipitation in the presence of irrigation," which on one hand eerily echoes the U.S. manifest destiny slogan of "Water Follows the Plough," but on the other hand makes perfect common sense. If agriculture pulls large amounts of water from underground sources, more water evaporates, leading to cooler temperatures and increased precipitation. The report goes on to look at how that increased use of water results in more evaporation, higher cloud cover, a reduction of temperatures in warmer seasons, and even a reduction in traditionally heavy seasons of rainfall, like India's monsoon. The authors estimate that the worldwide cooling due to irrigation averages just two tenths of a degree Fahrenheit, but they claim it could be much higher in concentrated localities: one degree in parts of North America or Europe, for example, or as much as 5 degrees in parts of India.
They point to the question of what will happen in places that are in danger of running out of water because of overuse for current irrigation projects. For example, if agriculture drains more than the Ogallala aquifer can afford to lose, how much more will temperatures rise when the irrigation stops? What about a country like Yemen, where the danger of running out of water is immediate and real? To read the report in full, visit Irrigation and 20th Century Climate.
In this information report you are going to hear questions and answers about sperm whales, for example: What do sperm whales eat?
How did sperm whales get their name?
The sperm whale’s huge head is mostly filled with the spermaceti organ, and this is why they are called sperm whales. Spermaceti is valuable because it makes an exceptionally fine lubricant; this was especially true during the 18th and 19th centuries.
What is the life span of a sperm whale?
The life span of a sperm whale is 70 years. This is a long time for a whale to live. The minimum mass of an adult sperm whale is 35,000 kilograms and the maximum mass is 57,000 kilograms.
How old do sperm whales have to be to mate?
Female sperm whales need to be at least 7-13 years old and 8.3-9.2 meters long. They only mate every 3-6 years. Female sperm whales also try to give birth in tropical waters because it is warmer for them and their calves. Male sperm whales aren’t sexually active until they are 18, and they should be 11-12 meters long.
How do sperm whales feed?
Females dive to at least 3,280 feet (999.7 meters) just to get their food. Giant squid makes up 80% of a female’s diet; the remaining 20% is made up of octopus, fish, crab, shrimp and even small bottom-living sharks. Males eat the same diet but dive to at least 3,936 feet (1,199.7 meters).
What do Sperm Whales eat?
Sperm whales eat giant squid, fish, octopus, crab, small bottom-living sharks and shrimp. Both males and females eat the same food. Sperm whales can hold their breath for at least ninety minutes so that they can dive to the bottom of the ocean to get their food, especially giant squid. Imagine holding your breath for ninety minutes!
Are male and female sperm whales different sizes?
Males and females are the same size, but the sizes for the different ages are all different. An old adult is usually 18 metres long, a normal adult is 9 metres, an immature whale is 12 metres and a calf is 4.5 metres.
Where do sperm whales live?
Sperm whales can be found in any ocean, both above and below the equator that divides the world. The females and the baby whales spend more time in the temperate regions than the males. The females and the baby whales migrate to the north too, but they leave a while after the males because they usually stay in the temperate regions for longer. The males like to move around and spend less time in the temperate regions; instead they range from the equator to the polar regions. These places are colder than the temperate regions because the word polar means the north or south poles.
Five facts about sperm whales:
- This is the largest of the toothed whales.
- They are also the largest toothed predator of any animal.
- Sperm whales were used for food when NZ (New Zealand) was first discovered by the Maori people. Scientists have estimated that there may be two hundred thousand or even one million sperm whales left in the wild.
- It may seem like there are heaps, but heaps are being killed each year.
- Soon there won’t be any sperm whales left because they are getting killed for their meat and oil. | <urn:uuid:3e034947-b6ec-4cbc-a9d8-7024a1c7d1eb> | 3.5625 | 768 | Knowledge Article | Science & Tech. | 71.04344 | 95,563,951 |
The Universe is Only Spacetime
by John A. Macken
Publisher: onlyspacetime.com 2012
Number of pages: 361
This book makes the case that everything in the universe can be formed from the single building block of 4 dimensional spacetime. It shows how all fundamental particles, forces and cosmology can be derived from this energetic 4 dimensional spacetime.
Home page url
Download or read it online for free here:
by Keiji Oenoki, et al. - easyphysics.net
An online physics tutorial. Contents: Velocity; Acceleration; Forces and Newton's Laws; Motion in Two Dimensions; Projectile and Periodic Motion; Waves; Sound; Light; Electric Forces; Electric Field; The Current; Basic Circuit; Advanced Circuit.
by Mark Clemente, at al. - CK-12 Foundation
Contents: Gravitation; Nuclear Energy; The Standard Model of Particle Physics; The Standard Model and Beyond; A Brief Synopsis of Modern Physics; Nanoscience; Biophysics; Kinematics: Motion, Work, and Energy; Laboratory Activities; etc.
by Belal E. Baaquie - National University of Singapore
Contents: Science and the Scientific Method; Physical Laws; Energy; Probability; Waves; Electromagnetic radiation; Electric and Magnetic Fields; Entropy; Second Law of Thermodynamics; Statistical Mechanics; Quantum Theory; Atoms; etc.
by D.C. Cassidy, G. Holton, J. Rutherford - Springer
Provides a thorough grounding in contemporary physics while placing physics into its social context. A course designed for students preparing to enter fields outside of science or engineering, including students planning to teach in K-12 classrooms. | <urn:uuid:b72b8631-bfde-4863-95f9-d0af0c5aac7e> | 2.984375 | 354 | Content Listing | Science & Tech. | 28.416268 | 95,563,952 |
Andrew Mansfield, Head of Flow Chemistry, Syrris
This blog post is the first in a series of “flow chemistry learning” posts – simply subscribe to stay updated with the latest!
Famous chemistry professors are doing it. Magazines are writing about it. Students are focusing on it. Research and development chemists are perfecting reactions with it, and scale-up chemists are producing products with it.
You’ve undoubtedly heard about flow chemistry, but unless you’ve used our R&D100 Award-winning Asia Flow Chemistry System, you might still be wondering – what, exactly, is flow chemistry?
The basics of flow chemistry
Though it goes by a number of names – “plug flow chemistry”, “microchemistry”, and “continuous flow chemistry” – the principles of flow chemistry are the same.
Flow chemistry is the process of performing chemical reactions in a tube or pipe
What this means is that reactive components are flowed down temperature-controlled tubes or pipes and mixed together at a mixing junction; a radically different approach from the traditional chemistry method of performing reactions in glass flasks or jacketed reactors.
The differences between plug flow and continuous flow chemistry
Though often used interchangeably, there is a small difference between “plug flow chemistry” and “continuous flow chemistry”.
Continuous flow chemistry is just that – continuous. The reactive materials are continuously pumped with no breaks, resulting in a continuous stream of chemicals, and therefore a continuous stream of end product.
Plug flow chemistry is where alternating “plugs” of reactive materials and solvent are pumped, where each plug is considered as a separate entity. These plugs never meet so the conditions in which they go through the flow chemistry system (i.e. temperature and residence time) can be changed to observe how the reaction changes.
Intelligent systems, such as the Asia Flow Chemistry System, can automatically collect the individual plugs, sending the product into one collector and the solvent into another.
What is a mixing junction?
So what do we mean by a “mixing junction”? Essentially, it’s the equivalent of a round-bottomed flask or a jacketed reactor – it’s where the mixing occurs in a flow chemistry system.
The two (or more) separate tubes of reactive compounds are brought together and flowed through a single, temperature-controlled channel in order to mix them.
What types of mixing junctions are available?
Glass microreactor chips
Glass microreactor chips are the most commonly known type of reactor used in a flow chemistry system. A piece of glass is “etched” with a particular design (depending on the application); the design determines how wide the mixing channel is and how the mixing occurs. A longer channel enables a longer residence time than a shorter channel (assuming the pump flow rate is the same).
Glass microreactor chips are inserted into chip climate controllers which maintain a set temperature throughout the entire chip and are the perfect system for chemists just starting out in flow chemistry.
Tube reactors are effectively long tubes wrapped around a heated or cooled coil. The great length of the coil offers far longer residence times than glass microreactor chips (or permits a much faster pump flow rate) if the application requires it.
Column reactors are glass tubes and allow the use of solid phase chemistry such as catalysts, solid-supported reagents, or scavengers.
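The trade-off between chip and tube reactors comes down to simple arithmetic: residence time equals reactor volume divided by the total flow rate through it. A minimal sketch (the volumes and flow rates below are illustrative, not Syrris product specifications):

```python
def residence_time_s(reactor_volume_ul: float, total_flow_ul_per_min: float) -> float:
    """Residence time (seconds) = reactor volume / total flow rate through it."""
    return reactor_volume_ul / total_flow_ul_per_min * 60.0

# Two reagent streams of 50 uL/min each meet at the mixing junction.
total_flow = 2 * 50.0  # uL/min

chip_rt = residence_time_s(250.0, total_flow)     # a small 250 uL glass chip channel
tube_rt = residence_time_s(16_000.0, total_flow)  # a 16 mL coiled tube reactor

print(f"chip: {chip_rt:.0f} s, tube: {tube_rt:.0f} s")  # chip: 150 s, tube: 9600 s
```

The same arithmetic explains the note above: at a fixed pump flow rate, a longer channel (larger volume) means a longer residence time, and a tube reactor's much larger volume buys either longer residence times or room for much faster pumping.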
So why are chemists adopting flow chemistry into their reactions?
There are a number of reasons chemists across all industries are introducing, or switching to, continuous flow chemistry.
In short, the main benefits are:
- Faster reactions
- Safer reactions
- Faster reaction optimization
- Fast serial library synthesis
- Reaction conditions not possible in batch
- Reactions are usually more selective
- Scale up is easier in flow than batch
- Easy integration of reaction analysis
- Reactions are easier to work-up in flow
When is continuous flow chemistry not the answer?
The importance of smooth flow
About Dr. Andrew Mansfield
Andrew was formerly a Research Chemist at Pfizer and spent much of his career focusing on introducing flow chemistry technologies, meaning Andrew is well placed to lead Syrris’ flow chemistry offering. Read Andrew’s bio here.
What is catalysis? What is a catalyst? How does catalysis work? And why would you want to perform catalysis in continuous flow? Flow Chemistry Applications Specialist, Neal, explains why chemists like to incorporate catalysts into their chemistry and the benefits they bring…
So why should your lab consider performing your chemistry using continuous flow chemistry techniques? Discover several reasons, including faster and safer reactions, and access to novel chemistries not possible in batch.
With modern technology, you can automate your entire lab if you wanted to, from automated liquid handling and motorized pipettes through to robots labeling your samples. But the easiest place to start is the source of your reactions – your jacketed reactor.
When you break it down, flow chemistry is not as scary a prospect as it might seem. Photos in your favorite chemistry magazine may make it look complex, but all you really need is a pump, some tubes, and a mixing junction.
Solve quadratic equations and use continued fractions to find rational approximations to irrational numbers.
A voyage of discovery through a sequence of challenges exploring properties of the Golden Ratio and Fibonacci numbers.
Find the link between a sequence of continued fractions and the ratio of successive Fibonacci numbers.
An article introducing continued fractions with some simple puzzles for the reader.
An iterative method for finding the value of the Golden Ratio with explanations of how this involves the ratios of Fibonacci numbers and continued fractions.
This article sets some puzzles and describes how Euclid's algorithm and continued fractions are related.
Which rational numbers cannot be written in the form x + 1/(y + 1/z) where x, y and z are integers?
A personal investigation of Conway's Rational Tangles. What were the interesting questions that needed to be asked, and where did they lead?
In this article we show that every whole number can be written as a continued fraction of the form k/(1+k/(1+k/...)).
Use Euclid's algorithm to get a rational approximation to the number of major thirds in an octave.
In this article we are going to look at infinite continued fractions - continued fractions that do not terminate.
Which of these continued fractions is bigger and why?
The tangles created by the twists and turns of the Conway rope trick are surprisingly symmetrical. Here's why!
Explore the continued fraction: 2+3/(2+3/(2+3/2+...)) What do you notice when successive terms are taken? What happens to the terms if the fraction goes on indefinitely?
Find the equation from which to calculate the resistance of an infinite network of resistances.
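Several of the items above can be checked numerically. Iterating x → 2 + 3/x evaluates the continued fraction 2 + 3/(2 + 3/(2 + ...)), whose limit satisfies x² = 2x + 3 and is therefore 3; iterating x → 1 + 1/x gives the golden ratio, whose convergents are the ratios of successive Fibonacci numbers. A short sketch:

```python
def iterate(f, x0: float, n: int = 60) -> float:
    """Apply f repeatedly, as in evaluating a continued fraction from the inside out."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

# 2 + 3/(2 + 3/(2 + ...)) satisfies x = 2 + 3/x, i.e. x^2 - 2x - 3 = 0, so x = 3.
x = iterate(lambda t: 2 + 3 / t, x0=2.0)

# 1 + 1/(1 + 1/(1 + ...)) gives the golden ratio (1 + sqrt(5))/2;
# its successive values are ratios of consecutive Fibonacci numbers.
phi = iterate(lambda t: 1 + 1 / t, x0=1.0)

print(round(x, 6), round(phi, 6))  # 3.0 1.618034
```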
The earliest reprogramming efforts relied on four separate viruses to transfer genes into the cells' DNA--one virus for each reprogramming gene (Oct4, Sox2, c-Myc and Klf4). Once activated, these genes convert the cells from their adult, differentiated status to an embryonic-like state.
However, this method poses significant risks for potential use in humans. The viruses used in reprogramming are associated with cancer because they may insert DNA anywhere in a cell's genome, thereby potentially triggering the expression of cancer-causing genes, or oncogenes. For iPS cells to be employed to treat human diseases, researchers must find safe alternatives to reprogramming with such viruses. This latest technique represents a significant advance in the quest to eliminate the potentially harmful viruses.
Bryce Carey, an MIT graduate student working in the lab of Whitehead Member Rudolf Jaenisch, spearheaded the effort by joining in tandem the four reprogramming genes through the use of bits of DNA that code for polymers known as 2A peptides. Working with others in the lab, he then manufactured a so-called polycistronic virus capable of expressing all four reprogramming genes once it is inserted into the genomes of mature mouse and human cells.
When the cells' protein-creating machinery reads the tandem genes' DNA, it begins making a protein. However, when it tries to read the 2A peptide DNA that resides between the genes, the machinery momentarily stops, allowing the first gene's protein to be released. The machinery then moves on to the second gene, creates that gene's protein, stalls when reaching another piece of 2A peptide DNA, and releases the second gene's protein. The process continues until the machinery has made the proteins for all four genes.
Using the tandem genes, Carey created iPS cells containing just a single copy of the polycistronic vector instead of multiple integrations of the viruses. This significant advancement indicates that the approach can become even safer if combined with technologies such as gene targeting, which allows a single transgene to be inserted at defined locations.
Interestingly, while Carey's single-virus method integrates all four genes into the same location, it has proven to be roughly 100 times less efficient than older approaches to reprogramming. This phenomenon remains under investigation.
"We were surprised by the lower efficiency," Carey says. "We're not sure why, but we need to look what's going on with expression levels of the polycistronic virus's proteins compared to separate viruses' proteins."
Although the one virus method is less efficient, Jaenisch maintains it represents an important advance in the field.
"This is an extremely useful tool for studying the mechanisms of reprogramming," says Jaenisch, who is also a professor of biology at MIT. "Using this one virus creates a single integration in the cells' DNA, which makes things much easier to handle."
Nicole Giese | EurekAlert!
This article is part of the sequence The Basics You Won’t Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.
Somewhere in the first lectures of a programming basics course, we are shown how to take input and show output on the terminal. That’s called standard input/output or just Standard IO for short.
So, in C# we have Console.WriteLine and Console.ReadLine.
In C++, we have cin and cout.
All these things are associated with the topic of Standard IO. And what they tell us is that the standard input is the keyboard and the standard output is the screen. And for the most part, that is the case.
But what we don’t get told is that the Standard IO can be changed. There is a way to accept input from a file and redirect output to another file. No, I’m not talking about writing code to read/write files. I am talking about using the Standard IO for the job, via the terminal.
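As a hedged illustration in Python (the same idea applies to C#'s Console or C++'s cin/cout): the program below reads whatever arrives on standard input and writes to standard output, so identical code serves the keyboard, a file, or a pipe; the shell, not the program, decides.

```python
import sys

def shout(line: str) -> str:
    """Transform one line; the code neither knows nor cares where the line came from."""
    return line.upper()

if __name__ == "__main__" and not sys.stdin.isatty():
    # stdin is not a keyboard here, i.e. the shell redirected it from a file or a pipe:
    #   python shout.py < input.txt > output.txt
    #   cat input.txt | python shout.py
    # Read until EOF and write the transformed lines to standard output.
    for line in sys.stdin:
        sys.stdout.write(shout(line))
```

The file name `shout.py` is only for the example; the point is that the redirection operators `<` and `>` rewire Standard IO without any file-handling code in the program itself.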
Like a dirty filter, the Earth's oceans are growing less efficient at absorbing vast amounts of carbon dioxide, the major greenhouse gas produced by fossil-fuel burning, reports a study co-authored by Francois Primeau, UC Irvine Earth system science associate professor.
Electronic devices could create significant environmental and health problems after they are thrown away. UC Irvine researchers are working with engineers, manufacturers and public health officials to find solutions.
More than 1 billion people worldwide have unreliable access to clean water. To raise awareness of this and other water issues, UC Irvine is hosting a two-day public event featuring free movies and a panel discussion with local water experts.
While previous studies have shown that marine noise can affect animal movement and communication, with unknown ecological consequences, scientists from the Universities of Bristol and Exeter and the École Pratique des Hautes Études (EPHE) CRIOBE in France have demonstrated that boat noise stops embryonic development and increases larval mortality in sea hares.
Sea hares, (specifically the sea slug Stylocheilus striatus used in this study) usually hatch from their eggs to swim away and later feed on toxic alga but this study, conducted in a coral reef lagoon in French Polynesia, found that when exposed to playback of boat noise, more eggs failed to develop and those that hatched were more likely to die.
Lead author Sophie Nedelec, a PhD researcher at the University of Bristol and EPHE said: "Traffic noise is now one of the most widespread global pollutants. If the reproductive output of vulnerable species is reduced, we could be changing communities and losing vital ecological functions. This species is particularly important because it eats a toxic alga that affects recruitment of fish to coral reefs."
Anthropogenic (man-made) noise is now recognised as a global pollutant, appearing in national and international legislation (for example, the US National Environment Policy Act and European Commission Marine Strategy Framework Directive).
Boats are found around all coastal environments where people live and the noise they make spreads far and wide. Increasingly, recent research has indicated that noise from human activities can affect the behaviour and physiology of animals, but this is the first study to show impacts on development and larval survival.
Co-author, Dr Steve Simpson, a marine biologist and senior lecturer at the University of Exeter, said: "Boat noise may cause stress or physically disrupt cells during development, affecting chances of survival. Since one in five people in the world rely on marine animals as a major source of protein, regulating traffic noise in important fisheries areas could help marine communities and the people that depend on them."
Co-author, Dr Suzanne Mills, an evolutionary biologist from CRIOBE, Perpignan said: "Our study used controlled field experiments and a split-brood, counterbalanced design to account for any possible site or genetic effects. Nearly 30,000 eggs were placed in plastic tubes. Half the eggs from each mother were near speakers playing boat noise while the other half were near speakers playing coral-reef ambient noise. Both success of embryonic development and post-hatching survival decreased by more than 20% as a consequence of exposure to boat-noise playback."
Co-author, Dr Andy Radford, a reader in behavioural ecology at the University of Bristol, said: "This is the first indication that noise pollution can affect development and survival during critical early life stages. Growing evidence for the impact of noise on animals suggests that consideration should be given to the regulation of human activities in protected areas."
The research is published today in Scientific Reports.
Hannah Johnson | EurekAlert!
By M.K. Sundaresan
The Handbook of Particle Physics fills that void. This unique work contains, in encyclopedic form, terms of interest in particle physics, including its special jargon. It covers the experimental and theoretical techniques of particle physics along with terms from the closely related fields of astrophysics and cosmology. Designed primarily for non-specialists with a basic knowledge of quantum mechanics and relativity, the entries maintain a degree of rigor by providing the relevant technical and mathematical details.
Clear and engaging prose, numerous figures, and historical overviews complement the handbook's convenience both as a reference and as an invitation into the exciting world of particle physics.
Read Online or Download Handbook of Particle Physics (Pure and Applied Physics) PDF
Similar nuclear physics books
This book is on inertial confinement fusion, an alternative approach to producing electrical energy from hydrogen fuel by using powerful lasers or particle beams. It involves the compression of tiny quantities (micrograms) of fuel to a thousand times solid density and to pressures otherwise present only in the centres of stars.
Twenty-five years ago, Michael Green, John Schwarz, and Edward Witten wrote two volumes on string theory. Published during a period of rapid progress in the subject, these volumes were highly influential for a generation of students and researchers. Despite the immense progress that has been made in the field since then, the systematic exposition of the foundations of superstring theory presented in these volumes is just as relevant today as when first published.
Volume 7 is a direct continuation of Volume 6, which documented the birth of the complementarity argument and its earliest elaborations. It covers the extension and refinement of the complementarity argument from 1933 until Bohr's death in 1962. All of Bohr's publications on the subject, together with selected manuscripts and extracts of his correspondence with friends and fellow pioneers such as Werner Heisenberg and Wolfgang Pauli, are included.
This thesis contains new research in both experimental and theoretical particle physics, making important contributions in each. Analyses of collision data from the ATLAS experiment at the LHC are presented, along with phenomenological studies of heavy colored resonances that may be produced at the LHC.
Extra resources for Handbook of Particle Physics (Pure and Applied Physics)
Handbook of Particle Physics (Pure and Applied Physics) by M.K. Sundaresan
The filtration medium combines moringa oleifera seeds with sand to create a low-cost filtration system
Researchers from Carnegie Mellon University have further refined a process that could help provide clean water to regions that face water scarcity by using proteins from the moringa oleifera plant–commonly known as the drumstick tree–and sand. The water filtration medium is being termed “f-sand.”
The process uses proteins from the moringa oleifera plant, commonly found in India and other subtropical climates, and combines it with sand filtration methods. The seed proteins are extracted and adsorbed to the surface of silica particles to create f-sand. According to Carnegie Mellon University College of Engineering, f-sand kills microorganisms and reduces turbidity.
The study, titled “Moringa oleifera Seed Protein Adsorption to Silica: Effects of Water Hardness, Fractionation, and Fatty Acid Extraction,” was published in the ACS journal Langmuir by Bob Tilton and Todd Przybycien, along with students Brittany Nordmark and Toni Bechtel, and Carnegie Mellon alumnus John Riley.
“It’s an area where complexity could lead to failure–the more complex it is, the more ways something could go wrong,” said Tilton. “I think the bottom line is that this supports the idea that the simpler technology might be the better one.”
ECHINODERMATA : DENDROCHIROTIDA : Cucumariidae (starfish, sea urchins, etc.)
Description: A long tapering body and small white sparsely-branched tentacles are the distinctive features of this sea cucumber. The thin body has a long pointed tail and often lies curved in the sediment in a U-shape. The brown body bears five distinct rows of tube-feet. There are spicules in three layers: large irregular flat plates with numerous holes, smaller round plates with holes, and small basket-shaped spicules nearest the surface. Length about 10cm, diameter 1cm or less.
Habitat: Lives buried in muddy sand or mud with only the tentacles visible. Can be common in shallow water with Virgularia mirabilis.
Distribution: Found all round the British Isles.
Similar Species: No other species have the characteristically shaped body.
Key Identification Features:
Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record : World Register of Marine Species.
Picton, B.E. & Morrow, C.C. (2016). Leptopentacta elongata (Duben & Koren, 1845). [In] Encyclopedia of Marine Life of Britain and Ireland.
http://www.habitas.org.uk/marinelife/species.asp?item=ZB4640 Accessed on 2018-07-18
Copyright © National Museums of Northern Ireland, 2002-2015
A lack of mathematical rigour has to do with significant gaps in arguments. Either the mathematician is being careless or is relying on intuitions that cannot be easily translated into deductive reasoning. Prior to the nineteenth century, many arguments in calculus lacked rigour. Words like 'small', 'infinity', 'approaches', and 'limit' were used without ever being precisely defined. Infinite series were treated by methods analogous to those used for finite series, and no justification for doing this was offered.
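A classic illustration of the danger: treating the divergent series 1 − 1 + 1 − 1 + ⋯ "by methods analogous to those used for finite series" yields contradictory "sums", depending on how its terms are regrouped:

```latex
S &= 1 - 1 + 1 - 1 + 1 - \cdots \\
S &= (1-1) + (1-1) + \cdots = 0 \\
S &= 1 - (1-1) - (1-1) - \cdots = 1 \\
S &= 1 - S \;\Longrightarrow\; S = \tfrac{1}{2}
```

Only after 'limit' was defined precisely could such manipulations be ruled out: a series converges exactly when its sequence of partial sums does, and here the partial sums alternate 1, 0, 1, 0, ... without settling on a limit.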
Keywords: Infinite Series; Deductive Reasoning; High School Teacher; Mathematical Rigour; Finite Series
“Ice-tsunamis” is an informal term describing tsunamis generated by ice masses. Ice masses split or are shed from a glacier or an ice sheet through the mechanism known as “calving”. The frequency and size of calving ice masses are relevant for the ice sheet mass balance of ice-covered areas such as Greenland or the Antarctic, and for understanding sea level rise. If the ice mass calves into a water body, tsunamis of tens of metres or even larger may be generated, which poses a hazard to coastal communities (Greenland), tourists, and the fishing, shipping and Oil & Gas industries.
Ice-tsunamis are often simply called “landslide-tsunamis”, and past ice-tsunamis were successfully recreated with the landslide-tsunami hazard assessment method introduced by Dr Heller and collaborators (Heller et al. 2009; Heller and Hager 2010, 2011).
However, in many cases the characteristics of ice-tsunamis is different from typical landslide-tsunamis because ice is lighter than water and because ice calving mechanisms are very diverse (fall, overturning, upwards movement of submerged masses, etc.).
Dr Heller and his team are interested in the characterisation of individual extreme ice calving events and in the associated ice-tsunami risk. Ongoing work includes both laboratory experiments and computer simulations. A test campaign is funded by the EU HYDRALAB+ consortium where Dr Heller leads a team consisting of scientists based in 5 EU countries to conduct large scale laboratory ice-tsunami tests in a 50 m × 50 m large water basin.
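To give a feel for the kind of empirical relation such hazard assessments use, the sketch below follows the general form of the Heller and Hager (2010) impulse product parameter for landslide-generated waves. The exponents and coefficients here are quoted only for illustration (and were calibrated largely for granular slides, not for the lighter, diversely calving ice masses discussed above); consult the cited papers for the definitive formulation.

```python
from math import cos, radians

def impulse_product_parameter(F: float, S: float, M: float, alpha_deg: float) -> float:
    """P = F * S^(1/2) * M^(1/4) * (cos[(6/7) * alpha])^(1/2), where
    F is the slide Froude number, S the relative slide thickness,
    M the relative slide mass and alpha the slide impact angle in degrees."""
    return F * S ** 0.5 * M ** 0.25 * cos(radians(6.0 / 7.0 * alpha_deg)) ** 0.5

def max_relative_amplitude(P: float) -> float:
    """Maximum wave amplitude over still-water depth, a_M/h ~ (4/9) * P^(4/5)."""
    return (4.0 / 9.0) * P ** 0.8

# Illustrative input values only, not taken from any measured calving event:
P = impulse_product_parameter(F=2.0, S=0.3, M=0.5, alpha_deg=45.0)
print(f"P = {P:.2f}, a_M/h = {max_relative_amplitude(P):.2f}")
```

The appeal of this one-parameter form is that a single dimensionless number, combining slide speed, geometry, mass and impact angle, drives the wave-height estimate.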
- Tsunamis due to ice masses: Different calving mechanisms and linkage to landslide-tsunamis. Funded by HYDRALAB+
- Modelling of tsunamis generated by ice calving. Nottingham Summer Engineering Research Placement 2016. Funded by the Faculty of Engineering.
- Experimental and numerical investigation of tsunamis caused by ice calving. PhD study. Funded by the China Scholarship Council and the University of Nottingham.
- Heller, V. and Hager, W.H. (2011). Wave types of landslide generated impulse waves. Ocean Engineering 38(4), 630-640.
- Heller, V. and Hager, W.H. (2010). Impulse product parameter in landslide generated impulse waves. Journal of Waterway, Port, Coastal, and Ocean Engineering 136(3), 145-155.
- Heller, V., Hager, W.H. and Minor, H.-E. (2009). Landslide generated impulse waves in reservoirs - Basics and computation. VAW, ETH Zurich.
The Santa Ana winds are strong, extremely dry down-slope winds that originate inland and affect coastal Southern California and northern Baja California. They originate from cool, dry high-pressure air masses in the Great Basin.
Santa Ana winds are known for the hot, dry weather that they bring in autumn (often the hottest of the year), but they can also arise at other times of the year. They often bring the lowest relative humidities of the year to coastal Southern California. These low humidities, combined with the warm, compressionally-heated air mass, plus high wind speeds, create critical fire weather conditions. Also known as "devil winds", the Santa Anas are infamous for fanning regional wildfires.
The National Weather Service defines Santa Ana winds as "Strong down slope winds that blow through the mountain passes in southern California. These winds, which can easily exceed 40 miles per hour (18 m/s), are warm and dry and can severely exacerbate brush or forest fires, especially under drought conditions."
The Santa Anas are katabatic winds--Greek for "flowing downhill", arising in higher altitudes and blowing down towards sea level. Santa Ana winds originate from high-pressure airmasses over the Great Basin and upper Mojave Desert. Any low-pressure area over the Pacific Ocean, off the coast of California, can change the stability of the Great Basin High, causing a pressure gradient that turns the synoptic scale winds southward down the eastern side of the Sierra Nevada and into the Southern California region. Cool, dry air flows outward in a clockwise spiral from the high pressure center. This cool, dry airmass sweeps across the deserts of eastern California toward the coast, and encounters the towering Transverse Ranges, which separate coastal Southern California from the deserts. The airmass, flowing from high pressure in the Great Basin to a low pressure center off the coast, takes the path of least resistance by channeling through the mountain passes to the lower coastal elevations, as the low pressure area off the coast pulls the airmass offshore.
These passes include the Soledad Pass, the Cajon Pass, and the San Gorgonio Pass, all well known for exaggerating Santa Anas as they are funneled through. As the wind narrows and is compressed into the passes its velocity increases dramatically, often to near-gale force or above. At the same time, as the air descends from higher elevation to lower, it is heated adiabatically, warming about 5 °F for each 1,000 feet it descends (10 °C for each 1,000 m). As it warms, its capacity to hold moisture increases, so its relative humidity decreases. The air has already been dried by orographic lift before reaching the Great Basin, as well as by subsidence from the upper atmosphere, so this additional warming often causes relative humidity to fall below 10 percent. The end result is a strong, warm, and very dry wind blowing out of the bottom of mountain passes into the valleys and coastal plain.
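The compressional warming quoted above (about 5 °F per 1,000 feet of descent) makes for a one-line estimate; the ~4,000 ft pass height below is purely illustrative:

```python
def compressional_warming_f(descent_ft: float, lapse_f_per_1000ft: float = 5.0) -> float:
    """Temperature gain (deg F) of dry air descending `descent_ft` feet,
    using the ~5 F per 1,000 ft adiabatic figure quoted in the text."""
    return descent_ft / 1000.0 * lapse_f_per_1000ft

# Air dropping from a ~4,000 ft mountain pass to the coastal plain at sea level:
warming = compressional_warming_f(4000.0)
print(f"about {warming:.0f} F warmer at sea level")  # about 20 F warmer at sea level
```

This is why an airmass that starts out cool over the Great Basin can still arrive hot at the coast: the warming comes from descent alone, before any humidity effects are considered.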
During Santa Ana conditions it is typically hotter along the coast than in the deserts, with the Southern California coastal region reaching some of its highest annual temperatures in autumn rather than summer.
While the Santa Anas are katabatic, they are not Föhn winds. These result from precipitation on the windward side of a mountain range which releases latent heat into the atmosphere which is then warmer on the leeward side (e.g., the Chinook or the original Föhn).
If the Santa Anas are strong, the usual daytime sea breeze may not arise, or may develop only weakly later in the day, because the strong offshore desert winds oppose the onshore sea breeze. At night, the Santa Ana winds merge with the land breeze blowing from land to sea and strengthen, because the inland desert cools more than the ocean due to differences in heat capacity and because there is no competing sea breeze.
Santa Ana winds often bring the lowest relative humidities of the year to coastal Southern California. These low humidities, combined with the warm, compressionally-heated airmass, plus the high wind speeds, create critical fire weather conditions. The combination of wind, heat, and dryness accompanying the Santa Ana winds turns the chaparral into explosive fuel feeding the infamous wildfires for which the region is known. Wildfires fanned by Santa Ana winds burned 721,791 acres (2,920.98 km2) in two weeks during October 2003, and another 500,000 acres (2,000 km2) in the October 2007 California wildfires.
Although the winds often have a destructive nature, they have some benefits as well. They cause cold water to rise from below the surface layer of the ocean, bringing with it many nutrients that ultimately benefit local fisheries. As the winds blow over the ocean, sea surface temperatures drop about 4°C (7°F), indicating the upwelling. Chlorophyll concentrations in the surface water go from negligible, in the absence of winds, to very active at more than 1.5 milligrams per cubic meter in the presence of the winds.
During the Santa Ana winds, large ocean waves can develop. These waves come from a northeasterly direction; toward the normally sheltered side of Catalina Island. Protected harbors such as Avalon and Two Harbors are normally sheltered and the waters within the harbors are very calm. In strong Santa Ana conditions, these harbors develop high surf and strong winds that can tear boats from their moorings and crash them onto the shore. During a Santa Ana, it is advised that boaters moor on the back side of the island to avoid the dangerous conditions of the front side.
A Santa Ana fog is a derivative phenomenon in which a ground fog settles in coastal Southern California at the end of a Santa Ana wind episode. When Santa Ana conditions prevail, with winds in the lower two to three kilometers (1.25-1.8 miles) of the atmosphere from the north through east, the air over the coastal basin is extremely dry, and this dry air extends out over offshore waters of the Pacific Ocean. When the Santa Ana winds cease, the cool and moist marine layer may re-form rapidly over the ocean if conditions are right. The air in the marine layer becomes very moist and very low clouds or fog occurs. If wind gradients turn on-shore with enough strength, this sea fog is blown onto the coastal areas. This marks a sudden and surprising transition from the hot, dry Santa Ana conditions to cool, moist, and gray marine weather, as the Santa Ana fog can blow onshore and envelop cities in as quickly as fifteen minutes. However, a true Santa Ana fog is rare, because it requires conditions conducive to rapid re-forming of the marine layer, plus a rapid and strong reversal in wind gradients from off-shore to on-shore winds. More often, the high pressure system over the Great Basin, which caused the Santa Ana conditions in the first place, is slow to weaken or move east across the United States. In this more usual case, the Santa Ana winds cease, but warm, dry conditions under a stationary air mass continue for days or even weeks after the Santa Ana wind event ends.
A related phenomenon occurs when the Santa Ana condition is present but weak, allowing hot dry air to accumulate in the inland valleys that may not push all the way to sea level. Under these conditions auto commuters can drive from the San Fernando Valley where conditions are sunny and warm, over the low Santa Monica Mountains, to plunge into the cool cloudy air, low clouds, and fog characteristic of the marine air mass. This and the "Santa Ana fog" above constitute examples of an air inversion.
The similar winds in the Santa Barbara area occur most frequently in the late spring to early summer, and are strongest at sunset, or "sundown"; hence their name: sundowner. Because high pressure areas usually migrate east, changing the pressure gradient in southern California to the northeast, it is common for "sundowner" wind events to precede Santa Ana events by a day or two.
Winds blowing off the elevated glaciated plateaus of Greenland and Antarctica experience the most extreme form of katabatic wind, of which the Santa Ana is a milder example. The winds start at a high elevation and flow outward and downslope, attaining hurricane-force gusts in valleys, along the shore, and even out to sea. Like the Santa Ana, these winds also heat up by compression and lose humidity, but because they start out so extraordinarily cold and dry and blow over snow and ice all the way to the sea, the perceived similarity is negligible.
The Santa Ana winds and the accompanying raging wildfires have been a part of the ecosystem of the Los Angeles Basin for over 5,000 years, dating back to the earliest habitation of the region by the Tongva and Tataviam peoples.
The Santa Ana winds have been recognized and reported in English-language records as a weather phenomenon in Southern California since at least the mid-nineteenth century. Various episodes of hot, dry winds have been described over this history as dust storms, hurricane-force winds, and violent north-easters, damaging houses and destroying fruit orchards. Newspaper archives have many photographs of regional damage dating back to the beginnings of news reporting in Los Angeles. When the Los Angeles Basin was primarily an agricultural region, the winds were feared particularly by farmers for their potential to destroy crops.
The winds are also associated with some of the area's largest and deadliest wildfires, including the state's largest fire on record, the Cedar Fire, as well as the Laguna Fire, Old Fire, Esperanza Fire, Santiago Canyon Fire of 1889 and the Witch Creek Fire.
In October 2007, the winds fueled major wild fires and house burnings in Escondido, Malibu, Rainbow, San Marcos, Carlsbad, Rancho Bernardo, Poway, Ramona, and in the major cities of San Bernardino, San Diego and Los Angeles. The Santa Ana winds were also a factor in the November 2008 California wildfires.
In early December 2011, the Santa Ana winds were the strongest yet recorded. An atmospheric set-up occurred that allowed the towns of Pasadena and Altadena in the San Gabriel Valley to get whipped by sustained winds at 97 mph (156 km/h), and gusts up to 167 mph (269 km/h). The winds toppled thousands of trees, knocking out power for over a week. Schools were closed, and a "state of emergency" was declared. The winds grounded planes at LAX, destroyed homes, and were even strong enough to snap a concrete stop light from its foundation. The winds also ripped through Mammoth Mountain and parts of Utah. Mammoth Mountain experienced a near-record wind gust of 175 mph (282 km/h), on December 1, 2011.
In December 2017 a complex of twenty-five Southern California wildfires were exacerbated by long-lasting and strong Santa Ana winds.
Especially hot, dry, and dusty Santa Ana winds are widely believed (in Southern California, at least) to affect people's moods and behavior negatively. This has not been proven in studies, though limited evidence may point to this conclusion. Despite the lack of definitive evidence, it is a part of local lore.
The winds carry Coccidioides immitis and Coccidioides posadasii spores into nonendemic areas, a pathogenic fungus that causes Coccidioidomycosis ("Valley Fever"). Symptomatic infection (40 percent of cases) usually presents as an influenza-like illness with fever, cough, headaches, rash, and myalgia (muscle pain). Serious complications include severe pneumonia, lung nodules, and disseminated disease, where the fungus spreads throughout the body. The disseminated form of Coccidioidomycosis can devastate the body, causing skin ulcers, abscesses, bone lesions, severe joint pain, heart inflammation, urinary tract problems, meningitis, and often death.
The most well-accepted explanation for the name Santa Ana winds is that it is derived from the Santa Ana Canyon in Orange County, one of the many locations the winds blow intensely. Newspaper references to the name Santa Ana winds date as far back as 1886. By 1893, controversy had broken out over whether this name was a corruption of the Spanish term Santana (a running together of the words Santa Ana), or the different term Satanás, meaning Satan. However, newspaper mention of the term "Satanás" in reference to the winds did not begin appearing until more than 60 years later. A possible explanation is that the spoken Spanish language merges two identical vowels in elision, when one ends a word and the other begins the next word. Thus the Spanish pronunciation of the phrase "Santa Ana" sounds like "Santana."
Another attempt at explanation of the name claims that it derives from a Native American term for "devil wind" that was altered by the Spanish into the form "Satanás" (meaning Satan), and then later corrupted into "Santa Ana." However, an authority on Native American language claims this term "Santana" never existed in that tongue.
A third explanation places the origin of the term Santa Ana winds with an Associated Press correspondent stationed in Santa Ana in 1902, who documented the name "Santa Ana winds," or possibly mistook the term "Santana" or "Satanás" for "Santa Ana."
Another derivation favored by the late well-known KABC television meteorologist, Dr. George Fischbeck, cited the etymology of the Santana winds as coming from the early Mexicano/Angeleno: "Caliente aliento de Satanás" or "hot breath of Satan." This is likely a false etymology or folk etymology, though.
The Santa Ana winds are commonly portrayed in fiction as being responsible for a tense, uneasy, wrathful mood among Angelenos. Some of the more well-known literary references include the Philip Marlowe story "Red Wind" by Raymond Chandler, and Joan Didion's Slouching Towards Bethlehem.
"There was a desert wind blowing that night. It was one of those hot dry Santa Anas that come down through the mountain passes and curl your hair and make your nerves jump and your skin itch. On nights like that every booze party ends in a fight. Meek little wives feel the edge of the carving knife and study their husbands' necks. Anything can happen. You can even get a full glass of beer at a cocktail lounge."
-- Raymond Chandler, "Red Wind"
"The baby frets. The maid sulks. I rekindle a waning argument with the telephone company, then cut my losses and lie down, given over to whatever is in the air. To live with the Santa Ana is to accept, consciously or unconsciously, a deeply mechanistic view of human behavior.
...[T]he violence and the unpredictability of the Santa Ana affect the entire quality of life in Los Angeles, accentuate its impermanence, its unreliability. The wind shows us how close to the edge we are."
-- Joan Didion, Slouching Towards Bethlehem
A cylindrical specimen of steel with a cross section of 5 cm^2 is subjected to a tensile force of 800 N. What is the stress on the material?
© BrainMass Inc., brainmass.com, July 20, 2018
This solution calculates stress on steel given its cross section and tensile force. The stress on the material is determined. | <urn:uuid:56ce6428-f0ac-45cd-b5f5-5010625f1cf2> | 2.953125 | 101 | Truncated | Science & Tech. | 76.898923 | 95,564,071 |
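The stress asked for here is engineering (normal) stress, defined as force divided by cross-sectional area, σ = F/A. A minimal check of the numbers in the problem, assuming uniform axial loading:

```python
force_n = 800.0          # tensile force, N
area_m2 = 5.0 / 10_000   # 5 cm^2 converted to m^2 (1 m^2 = 10,000 cm^2)

# Engineering stress: sigma = F / A
stress_pa = force_n / area_m2
print(f"{stress_pa / 1e6:.1f} MPa")  # 1.6 MPa
```

The main pitfall is the unit conversion: leaving the area in cm^2 would overstate the stress by a factor of 10,000.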
Cambridge University Press, Jun 20, 2002 - Mathematics - 132 pages
This book provides a comprehensive look at the Schwarz-Christoffel transformation, including its history and foundations, practical computation, common and less common variations, and many applications in fields such as electromagnetism, fluid flow, design and inverse problems, and the solution of linear systems of equations. It is an accessible resource for engineers, scientists, and applied mathematicians who seek more experience with theoretical or computational conformal mapping techniques. The most important theoretical results are stated and proved, but the emphasis throughout remains on concrete understanding and implementation, as evidenced by the 76 figures based on quantitatively correct illustrative examples. There are over 150 classical and modern reference works cited for readers needing more details. There is also a brief appendix illustrating the use of the Schwarz-Christoffel Toolbox for MATLAB, a package for computation of these maps.
Researchers are mapping global patterns of marine plastic pollution as alarm grows over floating rubbish. A team led by marine scientist Carlos Duarte from KAUST shows that the level of plastic debris in the Red Sea is relatively low.
Samples of floating plastic rubbish were collected by the team from 120 sites along 1500km of shoreline on the eastern margin of the Red Sea during voyages in 2016-2017. The debris was captured in plankton nets dragged slowly just below the sea surface and the fragments were then painstakingly sorted into material type and size.
Three-quarters of the collected rubbish was rigid fragments of broken objects. Plastic film, such as bags or wrapping, made up 17 percent, but there were only small amounts of fishing lines or nets (6%) and foam (4%).
The relatively low levels of floating plastic in the Red Sea may either be due to there being fewer sources of rubbish or its faster removal, explains doctoral student, Cecilia Martin. Not much plastic comes from the land because this coastline has few of the usual polluting contributors.
"Usually the main source of plastic in the sea tends to be litter and mismanaged waste," says Martin. "But on this coastline, the only large human settlement is Jeddah, with a population of 2.8 million people, and little tourism, so there are few people with the opportunity to litter." Similarly, rivers globally provide 10-50% of discarded oceanic plastic, but because the Red Sea catchment has no permanent rivers, their contribution is negligible.
"Instead, the winds and a few storms are most probably the main sources of plastic," says Martin. "This is reflected in our findings of proportionally higher amounts of plastic films compared to global trends."
There is a concern, however, says Martin, about the "missing" plastic. The low levels of debris can be partially attributed to its "removal" by the extensive mangrove and coral reef systems of the area. Capture of plastics is problematic for these ecosystems.
"Mangroves are perfect traps for macrolitter," says Martin. "At high tide, floating items reach the forest and then, as the tide drops, get stuck in seedlings and mangrove aerial roots (pneumatophores) which act as a mesh to trap them."
Coral ecosystems can also consume plastic. "The small size of microplastic items makes it available to a wide range of organisms and many marine groups, such as corals, mollusks, crabs and plankton are found to ingest plastic."
Duarte says the problem of plastic pollution in the ocean reflects our consumer habits and the solution is to reduce plastic use in our daily lives.
Carolyn Unck | EurekAlert!
To help you make the transition as thoroughly and as completely as possible, other authors and contributors and I are writing a lot about threading, reflection, assemblies, COM interop, and delegates. But, reviewing programming subjects with a friend recently, I was reminded that there are developers at all levels, not just the advanced level. To be as thorough as possible, then, I am exploring non-advanced topics as well as advanced ones. (I welcome queries from readers too, and sometimes write an article based on several queries.)
This article is to the point. In this article we will examine interfaces: how to define them and how to implement them.
Visual Basic 6 does not support inheritance or classes in the object-oriented sense of the construct. VB6 does, however, support COM interfaces. VB .NET, on the other hand, supports both classes and interfaces, so a distinction had to be made between the two idioms.
Defining Classes and Interfaces in VB .NET
The class and interface idioms use a very similar syntax when you define them. The following example defines an empty class in VB .NET, followed by an empty interface.
Public Class AClass
End Class

Public Interface AnInterface
End Interface
Classes can contain fields, properties, events, and methods. These elements of a class, called members, can have modifiers indicating that they are public, private, protected, or friend. All members of an interface declaration are public and as a result do not need nor can they have access modifiers.
Classes contain code; interfaces do not. However, classes that implement an interface do contain code. Keep in mind that there are no instances of interfaces in VB .NET. Every instance is a type that implements an interface, but is itself not an instance of the interface. (From this point we will leave the discussion of classes for another time and focus only on interfaces.)
Assuming we have an interface named AnInterface, we can only add method declarations to that interface. Extending the interface from the previous section, we can add a method named WhoAmI. The result is shown next.
Public Interface AnInterface
    Function WhoAmI() As String
End Interface
All types that implement the AnInterface interface must implement every declared method in that interface. In this example we only need to implement the function WhoAmI. Suppose AClass implements AnInterface; we would need to implement WhoAmI. The result of implement AnInterface in AClass would yield the following code.
Public Class AClass
    Implements AnInterface

    Public Function WhoAmI() As String Implements AnInterface.WhoAmI
        Return "AClass"
    End Function
End Class
The first thing we have to do is indicate that we want to implement the interface by name. Implements AnInterface tells consumers that AClass will implement all of the methods described in AnInterface. (The Visual Studio .NET IDE reminds us that we have to do so too.)
The difference between VB6 and VB .NET is that we have to add the Implements clause to the function body as shown in the listing. The function is declared as normal, but the clause Implements AnInterface.WhoAmI completes the contract between the class and the interface.
Structures can implement interfaces in VB.NET too. Whether a class or a structure is implementing an interface, you will need the Implements statement as demonstrated, and you will need to implement every method defined in the interface using the Implements clause at the end of the procedure header to indicate that a particular method satisfies a particular interface method.
Interfaces can be very short or very long. Methods described by an interface can be subroutines or functions, and they can be as elaborate or as simple as you need them to be. One method can implement more than one interface's method. Finally, keep in mind that interface methods can be called with a reference to the object or with a reference to the interface.
About the Author
Paul Kimmel is a freelance writer for Developer.com and CodeGuru.com. Look for cool Visual Basic .Net topics in his upcoming book Visual Basic .Net Unleashed available in January of 2002. Paul founded Software Conceptions, Inc. in 1990. Contact Paul Kimmel at email@example.com for help building VB.NET applications or migrating VB6 applications to .NET. | <urn:uuid:58b82065-bd14-4f46-87c9-149259a417b5> | 3.203125 | 913 | Documentation | Software Dev. | 41.973814 | 95,564,131 |
SPIE CEO Dr Eugene Arthurs has paid tribute to the late laser pioneer Charles Townes during the Lase plenary session at Photonics West, which took place 10-12 February in San Francisco. Arthurs referred to Townes as an inspiration throughout his life and career, but also as a 'giant of a friend'.
A recipient of the Nobel Prize in 1964 for the development of the maser, Townes passed away on the 27 January at the age of 99.
Along with creating the first maser − microwave amplification by stimulation emission of radiation − Townes pioneered the use of lasers in astronomy, detecting the first complex molecules in space and being the first to measure the mass of the black hole in the centre of our galaxy.
‘My career, my life were undoubtedly influenced by this genius,’ said Arthurs, who began his career as a laser physicist. ‘I was actually brought into the field by the beauty of an argon-ion laser.’
A professor at the University of California, Berkeley, Townes visited the campus daily up until last year, working in both the physics department and the Space Sciences Laboratory, where he taught experimental astrophysics to pupils.
His dedication to students was inspiring, Arthurs said: ‘Nobel laureates can be hard to pin down sometimes, but if he was asked to speak to students, he was immediately there. He loved doing that, loved giving them advice.’
Townes began working on creating a pure beam of short-wavelength, high-frequency light in 1951.
Starting from the theory of stimulated emission − introduced by Albert Einstein in 1917 − which states that the right wavelength of light can stimulate an excited atom to emit light of the same wavelength, Townes worked on how to corral a gas of excited atoms without them flying apart.
Townes developed a solution which allowed him to separate excited and non-excited molecules and store them in a resonant cavity, so that when a microwave travelled through the gas, the molecules were stimulated to emit microwaves in step with one another - a coherent burst. He and his students built such a device using ammonia gas in 1954 and dubbed it a maser.
In 1958, he and his brother-in-law and future Nobelist, Arthur Schawlow, conceived the idea of doing the same thing with optical light, but using mirrors at the ends of a gas tube to amplify the light to get an ‘optical maser'.
It was Theodore Maiman who eventually demonstrated the first laser in 1960. However, in 1964, Townes was jointly awarded with the Nobel Prize in Physics with two Russians, Aleksandr Prokhorov and Nicolai Basov, who independently came up with the idea for a maser.
But developing the first maser was not Townes’ only notable scientific achievement. In the years after winning the Nobel Prize, he studied radio and infrared astronomy, where he, along with colleagues at Berkeley, detected ammonia and water molecules in space, and demonstrated, for the first time, the presence of dense molecular clouds.
In 1985, his team discovered the black hole that lives at the centre of our Milky Way, through mid- and far-infrared spectroscopy.
In addition to winning the Nobel Prize, Townes also won the Templeton Award in 2005, which is given to a person who has made an exceptional contribution to affirming life’s spiritual dimension, Arthurs told the audience at the plenary session.
‘It is a $1.5 million award, bigger than the Nobel Prize, and anyone who had the honour to know Charles would not be surprised to know that he gave away all of the money: to Furman University, to a homeless shelter in Berkeley, and to a church,’ Arthurs said.
‘He really was a remarkable person to know. His vision, his openness to everything, his interest in all areas of science was quite inspiring,’ Arthurs added. | <urn:uuid:7d11860c-c0af-421b-9cf9-6283653348d9> | 2.859375 | 831 | News Article | Science & Tech. | 37.257218 | 95,564,132 |
Berlin: Pigeons are capable of switching between two tasks as quickly as humans, and even faster in certain situations, despite their small brains, a study suggests.
Researchers performed behavioural experiments to test birds and humans and found that the cause of the slight multitasking advantage in birds is their higher neuronal density.
"For a long time, scientists used to believe the mammalian cerebral cortex to be the anatomical cause of cognitive ability; it is made up of six cortical layers," said Sara Letzner from Ruhr-University Bochum in Germany.
In birds, however, such a structure does not exist.
"That means the structure of the mammalian cortex cannot be decisive for complex cognitive functions such as multitasking," said Letzner, a researcher on the study, which was published in the journal Current Biology.
The pallium of birds does not have any layers comparable to those in the human cortex, but its neurons are more densely packed than in the cerebral cortex in humans.
Pigeons, for example, have six times as many nerve cells as humans per cubic millimetre of brain, researchers said.
The average distance between two neurons in pigeons is fifty per cent shorter than in humans, they said.
As the speed at which nerve cell signals are transmitted is the same in both birds and mammals, researchers had assumed that information is processed more quickly in avian brains than in mammalian brains.
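The density and spacing figures quoted here are mutually consistent under a simple assumption. If neurons are packed roughly uniformly, the typical spacing between neighbors scales as the inverse cube root of the density, so six times the density implies spacing of about 55 percent of the human value, close to the "fifty per cent shorter" reported. A back-of-envelope check (the uniform-packing model is an assumption, not part of the study):

```python
density_ratio = 6.0  # pigeons: ~6x as many neurons per mm^3 as humans

# Uniform packing: each neuron occupies a volume ~ 1/density, so the
# typical nearest-neighbor spacing scales as density ** (-1/3).
spacing_ratio = density_ratio ** (-1.0 / 3.0)
print(f"pigeon/human spacing: {spacing_ratio:.2f}")  # ~0.55, i.e. ~45-50% shorter
```

With signal speed fixed, shorter average distances translate directly into shorter transmission delays between groups of neurons.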
They tested this hypothesis using a multitasking exercise that was performed by 15 humans and 12 pigeons.
In the experiment, both the human and the avian participants had to stop a task in progress and switch over to an alternative task as quickly as possible.
The switchover to the alternative task was performed either at the same time the first task was stopped, or it was delayed by 300 milliseconds.
In the first case, real multitasking takes place, which means that two processes are running simultaneously in the brain, those being the stopping of the first task and switching over to the alternative task.
Pigeons and humans both slow down by the same amount under double stress.
In the second case - switching over to the alternative task after a short delay - the processes in the brain undergo a change.
The two processes, namely stopping the first task and switching over to the second task, alternate like in a ping- pong game.
For this purpose, the groups of nerve cells that control both processes have to continuously send signals back and forth.
The researchers had assumed that pigeons must have an advantage over humans because of their greater nerve cell density. They were, in fact, 250 milliseconds faster than humans.
"Researchers in the field of cognitive neuroscience have been wondering for a long time how it was possible that some birds, such as crows or parrots, are smart enough to rival chimpanzees in terms of cognitive abilities, despite their small brains and their lack of a cortex," said Letzner.
The results of the study provide a partial answer to this mystery. It is precisely because of their small brain that is densely packed with nerve cells that birds are able to reduce the processing time in tasks that require rapid interaction between different groups of neurons, researchers said. | <urn:uuid:e6621d25-c520-4a4b-8691-fd7948b547dd> | 3.59375 | 650 | News Article | Science & Tech. | 38.696644 | 95,564,143 |
Most fish rely primarily on their vision to find prey to feed upon, but a University of Rhode Island biologist and her colleagues have demonstrated that a group of African cichlids feeds by using its lateral line sensory system to detect minute vibrations made by prey hidden in the sediments.
The lateral line system is composed of a canal embedded in the scales along the side of the body of a fish, around its eyes and on its lower jaw, which contain small groups of sensory hair cells that respond to water flow. The lateral line system aids some fish in swimming upstream, navigation around obstacles, and the detection of predators and prey.
According to Jacqueline Webb, a URI professor of biology, cichlids in the genus Aulonocara, which only live in Lake Malawi, have widened lateral line canals that are highly sensitive to vibrations and water flows. They feed by gliding through the water with their chin close to the sand like a metal detector, seeking out twitching arthropods and other unseen prey items.
There are about 16 species of Aulonocara cichlids in Lake Malawi, all of which feed in the sand.
"These cichlids join a short list of fish that have been demonstrated to use their lateral line system to feed," said Webb. "Since most of the fish with widened lateral line canals are found in the deep sea, it's difficult to study them. These cichlids can now be used as a model system for studying widened canals, and we can apply what we learn from them to the fish in the deep sea."
Webb analyzed video of the swimming behavior of the fish in response to live and dead brine shrimp located on the surface of the sandy substrate in a tank. She compared the fishes' ability to detect prey under light and dark conditions, and looked at their ability to detect prey when the lateral line system was chemically "deactivated."
She found that the fish were able to find live prey easily, even in darkness, but not without a healthy lateral line system.
Her discovery opens the door to the study of the convergent evolution of wide canals and raises the question of whether fish that feed non-visually have an ecological advantage over visual-only feeders. Webb was recently awarded a $334,000 grant from the National Science Foundation to study the development and behavioral role of wide lateral lines.
"We also hope that this work will allow us to determine whether the sensory biology of a species can be used to predict its ecological success," she said, "especially in environments where the water quality is poor or where there is increased turbidity. Do these fish have an advantage in water where it is difficult to see well?"
To examine these questions, Webb will use microCT imaging to create a three-dimensional reconstruction of the skulls of cichlids, while also developing what she calls a "chin tickler" – an artificial stimulation delivery system – to standardize the stimulation provided from beneath the sand to the cichlid test subjects.
Todd McLeish | EurekAlert!
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
Pollen taxi for bacteria
18.07.2018 | Technische Universität München
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
18.07.2018 | Materials Sciences
18.07.2018 | Life Sciences
18.07.2018 | Health and Medicine | <urn:uuid:e7e3d3f4-a913-4621-8c8e-76901bb50150> | 4.125 | 1,262 | Content Listing | Science & Tech. | 41.417457 | 95,564,169 |
The particle -- shown at higher magnification than anything ever seen from another world -- is a rounded particle about one micrometer, or one millionth of a meter, across. It is a speck of the dust that cloaks Mars. Such dust particles color the Martian sky pink, feed storms that regularly envelop the planet and produce Mars' distinctive red soil.
"This is the first picture of a clay-sized particle on Mars, and the size agrees with predictions from the colors seen in sunsets on the Red Planet," said Phoenix co-investigator Urs Staufer of the University of Neuchatel, Switzerland, who leads a Swiss consortium that made the microscope.
"Taking this image required the highest resolution microscope operated off Earth and a specially designed substrate to hold the Martian dust," said Tom Pike, Phoenix science team member from Imperial College London. "We always knew it was going to be technically very challenging to image particles this small."
It took a very long time, roughly a dozen years, to develop the device that is operating in a polar region on a planet now about 350 million kilometers or 220 million miles away.
The atomic force microscope maps the shape of particles in three dimensions by scanning them with a sharp tip at the end of a spring. During the scan, invisibly fine particles are held by a series of pits etched into a substrate microfabricated from a silicon wafer. Pike's group at Imperial College produced these silicon microdiscs.The atomic force microscope can detail the shapes of particles as small as about 100 nanometers, about one one-thousandth the width of a human hair. That is about 100 times greater magnification than seen with Phoenix's optical microscope, which made its first images of Martian soil about two months ago.
Until now, Phoenix's optical microscope held the record for producing the most highly magnified images to come from another planet.
"I'm delighted that this microscope is producing images that will help us understand Mars at the highest detail ever," Staufer said. "This is proof of the microscope's potential. We are now ready to start doing scientific experiments that will add a new dimension to measurements being made by other Phoenix lander instruments."
"After this first success, we're now working on building up a portrait gallery of the dust on Mars," Pike added.
Mars' ultra-fine dust is the medium that actively links gases in the Martian atmosphere to processes in Martian soil, so it is critically important to understanding Mars' environment, the researchers said.The particle seen in the atomic force microscope image was part of a sample scooped by the robotic arm from the "Snow White" trench and delivered to Phoenix's microscope station in early July. The microscope station includes the optical microscope, the atomic force microscope and the sample delivery wheel.
It is part of a suite of tools called Phoenix's Microscopy, Electrochemistry and Conductivity Analyzer.
The Phoenix mission is led by Peter Smith from the University of Arizona with project management at NASA's Jet Propulsion Laboratory, Pasadena, Calif., and development partnership at Lockheed Martin, Denver. International contributions come from the Canadian Space Agency; the University of Neuchatel; the universities of Copenhagen and Aarhus in Denmark; the Max Planck Institute in Germany; and the Finnish Meteorological Institute. The California Institute of Technology in Pasadena manages JPL for NASA.
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
Subaru Telescope helps pinpoint origin of ultra-high energy neutrino
16.07.2018 | National Institutes of Natural Sciences
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
18.07.2018 | Materials Sciences
18.07.2018 | Life Sciences
18.07.2018 | Health and Medicine | <urn:uuid:1468906f-4a3f-45be-b514-ba864f4f2a7a> | 4.0625 | 1,322 | Content Listing | Science & Tech. | 38.396977 | 95,564,170 |
Share this article:
Weather and climate threats are among the top risks that will have the biggest global impact in the next 10 years, according to a report by the World Economic Forum.
The report is an assessment by 1,000 experts and decision-makers on the likelihood and impact of global risks over a 10-year period.
Following a devastating year for weather and natural disasters in 2017, the Global Risk Report highlighted the environment as an area of particular concern this year.
Behind weapons of mass destruction in the top spot are extreme weather events, natural disasters, failure of climate change mitigation and adaptation and water crises.
"Extreme weather events have ranked in the top two future risks since 2014. It should be alarming to the general public," AccuWeather Meteorologist Brett Anderson said.
Meanwhile, extreme weather events and natural disasters claimed spots one and two for risks most likely to occur in the next 10 years.
“This is not surprising: September 2017 was the most intense month on record for extreme weather events, as well as the most expensive U.S. hurricane season since 2005 with economic losses in excess of $300 billion,” Group Chief Risk Officer Alison Martin said.
During 2017, catastrophic weather resulted in a record-setting cost to the United States, in particular.
Sixteen billion-dollar weather and climate events occurred, costing a total of $306 billion in damage. It shattered the previous U.S. annual record of $214.8 billion (CPI-adjusted) in 2005.
“And the U.S. was not alone in experiencing extreme weather: Ireland, for example, had its worst tropical storm in more than 50 years,” she said.
Record number of billion dollar weather disasters strikes US in early 2017
15 photos recall the magnitude, devastation of natural disasters around the world in 2017
2018 Winter Olympics: Cold winds may leave spectators shivering at PyeongChang opening ceremony
2017 also became the third hottest year on record and the hottest year on record without an El Niño.
"The world is clearly warming and a warming world will make these rare, extreme events much more commonplace," Anderson said.
According to Martin, too little has been done to mitigate climate change and that’s not likely to change.
“Our own analysis shows that the likelihood of missing the Paris Agreement’s target of limiting global warming to 2 degrees Celsius or below is greater than the likelihood of achieving it. This is likely to exacerbate the impact of global environmental risks,” she said.
“I fear we may squander the opportunity to move towards a more sustainable, equitable and inclusive future. As a business leader, a member of our society and a parent, this makes me deeply concerned about the future we may leave for the generations to come.”
Comments that don't add to the conversation may be automatically or manually removed by Facebook or AccuWeather. Profanity, personal attacks, and spam will not be tolerated.
The southeastern United States is facing the risk for damaging thunderstorms this weekend.
A pattern of persistent downpours, beginning with a rainstorm this weekend is likely to disrupt travel, hinder outdoor plans and projects and put summer heat on hold in the Northeast into early August.
Gusty winds caused blowing dust to sweep across the Las Vegas area on Saturday, creating dangerous conditions for travelers.
Near-record heat will set the stage for a heightened risk of wildfires in the southwestern United States, including Southern California, next week.
The intense record heat baking the south-central United States is expected to get trimmed back early next week, but a sweep of refreshing air is not on the horizon.
A deadly heat wave is expected to continue into early week across Japan as Ampil bypasses the region to the south.
An uptick in monsoon rainfall is expected to heighten the flood threat across eastern and northern India this week. | <urn:uuid:fe482430-820e-404f-92cb-1fe47475f9ca> | 3.296875 | 802 | Content Listing | Science & Tech. | 44.562444 | 95,564,203 |
X-ray astronomy, study of celestial objects by means of the X rays they emit, in the wavelength range from 0.01 to 10 nanometers. X-ray astronomy dates to 1949 with the discovery that the sun emits X rays. Since X rays could not be observed from ground-based telescopes, V-2 rockets launched from White Sands, N.Mex., occasionally carried telescopes to study solar X-ray emissions. In 1962 a group led by R. Giacconi launched a small rocket from White Sands to search for celestial sources of X rays with instruments similar to Geiger counters. During the 5-min flight the experiment discovered an X-ray source now called Scorpius X-1, a close binary star in which one star expels gas onto a very dense neighbor, which may be a white dwarf, a neutron star, or a black hole. This mission also found that the earth is bathed in diffuse X rays coming from all directions. Soon afterward X-ray emissions were found coming from the Crab Nebula and the radio galaxies (galaxies whose radio emissions constitute an extraordinarily large amount of their total energy output) Centaurus A and Virgo A. Other types of galaxies, particularly Seyfert galaxies (galaxies with extremely bright cores that are strong emitters of radio waves, X rays, and gamma rays), also emit X rays. The center of our galaxy is a strong X-ray source, which is an indicator of the violent activity taking place there.
In 1970 the Uhuru satellite, one of NASA's small astronomy satellites, began to look specifically for X-ray sources. Uhuru used detectors filled with argon, in which incoming X radiation gives off electrons in amounts proportional to its strength. Uhuru mapped more than 400 sources and discovered a series of X-ray binary stars in which ordinary stars orbit neutron stars that emit X rays. One of these sources, Cygnus X-1, is an object with ten times the mass of the sun. Too massive to be a neutron star, it is possibly a black hole.
Much of the data in X-ray astronomy is now gathered by orbiting satellites. In addition to the United States, Germany and Japan are among the countries having X-ray satellites. In the 1970s the Skylab space station and Orbiting Solar Observatory satellites continued the study, as did the Solar Maximum Mission the following decade. A series of High Energy Astrophysical Observatories (HEAO) were launched during the late 1970s to study X rays, cosmic rays, and gamma rays. HEAO-1, launched in 1977, increased the number of known X-ray sources from 350 to 1,500. HEAO-2—also known as the Einstein Observatory—carried the largest X-ray telescope ever built. It detected several thousand new X-ray sources in our galaxy and beyond, discovered that cataclysmic variable stars in our own galaxy emit X rays when they are in outburst, achieved the first unambiguous detection of X rays from ordinary stars other than the sun, and obtained the first X-ray images of supernova remnants, pulsars, and star clusters. As a result, supernova remnants mapped in X-ray wavelengths can be compared with visible light and radio images. In an example of cooperation between amateur and professional astronomers, the Einstein Observatory was turned toward SS Cygni (see variable star) whenever amateur astronomers with backyard telescopes reported it in outburst. The few days' duration of these outbursts allowed enough time to change the satellite's observing schedule so that it could examine the star, and it discovered the source of the star's X-ray emissions.
During the 1980s the European, Russian, and Japanese space agencies continued to launch successful X-ray astronomy missions, such as the European X-ray Observatory Satellite (EXOSAT), Granat, the Kvant module (of the Mirspace station), Tenma, and Ginga. These missions were more modest in scale than the HEAO program in the 1970s and were directed toward in-depth studies of known phenomena.
In 1990, ROSAT [Roentgen Satellite], a joint project of Germany, the United States, and Great Britain, was launched. Operational until 1999, it was instrumental in the discovery of X-ray emissions from comets and conducted an all-sky survey in the X-ray region of the spectrum. Five other satellites launched in the 1990s are still operational. ALEXIS [Array of Low Energy X-ray Imaging Sensors] was launched in 1993; a minisatellite containing six coffee-can-sized wide-angle, ultrasoft-X-ray telescopes, it provided the data for a unique sky map for studying celestial flashes of soft X rays. Also launched in 1993, the Advanced Satellite for Cosmology and Astrophysics is a joint Japanese-American project; containing four X-ray telescopes, its primary purpose is the X-ray spectroscopy of such astrophysical entities as quasars and cosmic background X radiation. In 1995, NASA orbited the Rossi X-ray Timing Explorer (RXTE) to study the variations in the emission of such X-ray sources as black-hole candidates, active galactic nuclei, white dwarf stars, neutron stars, and other high-energy sources. The RXTE played a key role in the discovery in 1996 of a "pulsing burster" located near the center of the Milky Way. Unlike other X-ray sources, this one burst, oscillated, and flickered simultaneously, with bursts lasting from 6 to 100 seconds. Before it burned out, the unexplained object was the brightest source of X rays and gamma rays in the sky, radiating more energy in 10 seconds than the sun does in 24 hours. BeppoSAX, a joint Italian-Dutch satellite, was launched in 1996. When on Dec. 14, 1997, for 1 or 2 seconds the most energetic burst of gamma radiation ever detected was recorded by the Compton Gamma Ray Observatory,BeppoSAX recorded the X-ray afterglow of the burst, thereby providing a relatively accurate location for the source. 
The Chandra X-ray Observatory was deployed from a shuttle and boosted into a high earth orbit in 1999; it focuses on such objects as black holes, quasars, and high-temperature gases throughout the X-ray portion of the electromagnetic spectrum. Also launched in 1999 was X-ray Multimirror Mission, an ESA satellite that carries an optical-ultraviolet telescope together with three parallel mounted X-ray telescopes, allowing it to simultaneously observe phenomena in two regions of the spectrum.
"X-ray astronomy." The Columbia Encyclopedia, 6th ed.. . Encyclopedia.com. (July 21, 2018). http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/x-ray-astronomy
"X-ray astronomy." The Columbia Encyclopedia, 6th ed.. . Retrieved July 21, 2018 from Encyclopedia.com: http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/x-ray-astronomy
Encyclopedia.com gives you the ability to cite reference entries and articles according to common styles from the Modern Language Association (MLA), The Chicago Manual of Style, and the American Psychological Association (APA).
Within the “Cite this article” tool, pick a style to see how all available information looks when formatted according to that style. Then, copy and paste the text into your bibliography or works cited list.
Because each style has its own formatting nuances that evolve over time and not all information is available for every reference entry or article, Encyclopedia.com cannot guarantee each citation it generates. Therefore, it’s best to use Encyclopedia.com citations as a starting point before checking the style against your school or publication’s requirements and the most-recent information available at these sites:
Modern Language Association
The Chicago Manual of Style
American Psychological Association
- Most online reference entries and articles do not have page numbers. Therefore, that information is unavailable for most Encyclopedia.com content. However, the date of retrieval is often important. Refer to each style’s convention regarding the best way to format page numbers and retrieval dates.
- In addition to the MLA, Chicago, and APA styles, your school, university, publication, or institution may have its own requirements for citations. Therefore, be sure to refer to those guidelines when editing your bibliography or works cited list.
Stars and other celestial objects radiate energy in many wavelengths other than visible light, which is only one small part of the electromagnetic spectrum. At the low end (with wavelengths longer than visible light) are low-energy infrared radiation and radio waves. At the high end of the spectrum (wavelengths shorter than visible light) are high-energy ultraviolet radiation, X rays, and gamma rays.
X-ray astronomy is a relatively new scientific field focusing on celestial objects that emit X rays. Such objects include stars, galaxies, quasars, pulsars, and black holes.
Earth's atmosphere filters out most X rays. This is fortunate for humans and other life on Earth since a large dose of X rays would be deadly. On the other hand, this fact makes it difficult for scientists to observe the X-ray sky. Radiation from the shortest-wavelength end of the X-ray range, called hard X rays, can be detected at high altitudes. The only way to view longer X rays, called soft X rays, is through special telescopes placed on artificial satellites orbiting outside Earth's atmosphere.
First interstellar X rays detected
In 1962, an X-ray telescope was launched into space by the National Aeronautics and Space Administration (NASA) aboard an Aerobee rocket. The rocket contained an X-ray telescope devised by physicist Ricardo Giacconi (1931– ) and his colleagues from a company called American Science and Engineering, Inc. (ASEI). During its six-minute flight, the telescope detected the first X rays from interstellar space, coming particularly from the constellation Scorpius.
Later flights detected X rays from the Crab Nebula (where a pulsar was later discovered) and from the constellation Cygnus. X rays in this latter site are believed to be coming from a black hole. By the late 1960s, astronomers had become convinced that while some galaxies are sources of strong X rays, all galaxies (including our own Milky Way) emit weak X rays.
Words to Know
Black holes: Remains of a massive star that has burned out its nuclear fuel and collapsed under tremendous gravitational force into a single point of infinite mass and gravity.
Electromagnetic radiation: Radiation that transmits energy through the interaction of electricity and magnetism.
Electromagnetic spectrum: The complete array of electromagnetic radiation, including radio waves (at the longest-wavelength end), microwaves, infrared radiation, visible light, ultraviolet radiation, X rays, and gamma rays (at the shortest-wavelength end).
Gamma rays: Short-wavelength, high-energy radiation formed either by the decay of radioactive elements or by nuclear reactions.
Infrared radiation: Electromagnetic radiation of a wavelength shorter than radio waves but longer than visible light that takes the form of heat.
Pulsars: Rapidly spinning, blinking neutron stars.
Quasars: Extremely bright, starlike sources of radio waves that are the oldest known objects in the universe.
Radiation: Energy transmitted in the form of subatomic particles or waves.
Ultraviolet radiation: Electromagnetic radiation of a wavelength just shorter than the violet (shortest wavelength) end of the visible light spectrum.
Wavelength: The distance between two troughs or two peaks in any wave.
X rays: Electromagnetic radiation of a wavelength just shorter than ultraviolet radiation but longer than gamma rays that can penetrate solids and produce an electrical charge in gases.
In 1970, NASA launched Uhuru, the first satellite designed specifically for X-ray research. It produced an extensive map of the X-ray sky. In 1977, the first of three High Energy Astrophysical Observatories (HEAO) was launched. During its year and a half of operation, it provided constant monitoring of X-ray sources, such as individual stars, entire galaxies, and pulsars. The second HEAO, known as the Einstein Observatory, operated from November 1978 to April 1981. It contained a high resolution X-ray telescope that discovered that X rays are coming from nearly every star.
In July 1999, NASA launched the Chandra X-ray Observatory (CXO), named after the Nobel Prize-winning, Indian-born American astrophysicist Subrahmanyan Chandrasekhar (1910–1995). About one billion times more powerful than the first X-ray telescope, the CXO has a resolving power equal to the ability to read the letters of a stop sign at a distance of 12 miles (19 kilometers). This will allow it to detect sources more than twenty times fainter than any previous X-ray telescope. The CXO orbits at an altitude 200 times higher than the Hubble Space Telescope. During each orbit around Earth, it travels one-third of the way to the Moon.
The purpose of the CXO is to obtain X-images and spectra of violent, high-temperature celestial events and objects to help astronomers better understand the structure and evolution of the universe. It will observe galaxies, black holes, quasars, and supernovae (among other objects) billions of light-years in the distance, giving astronomers a glimpse of regions of the universe as they existed eons ago. In early 2001, the
CXO found the most distant X-ray cluster of galaxies astronomers have ever observed, located about 10 billion light-years away from Earth. Less than a month later, it detected an X-ray quasar 12 billion light-years away. These are both important discoveries that may help astronomers understand how the universe evolved.
[See also Telescope; X rays ]
"X-ray Astronomy." UXL Encyclopedia of Science. . Encyclopedia.com. (July 21, 2018). http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/x-ray-astronomy-0
"X-ray Astronomy." UXL Encyclopedia of Science. . Retrieved July 21, 2018 from Encyclopedia.com: http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/x-ray-astronomy-0
Modern Language Association
The Chicago Manual of Style
American Psychological Association
"X-ray astronomy." World Encyclopedia. . Encyclopedia.com. (July 21, 2018). http://www.encyclopedia.com/environment/encyclopedias-almanacs-transcripts-and-maps/x-ray-astronomy
"X-ray astronomy." World Encyclopedia. . Retrieved July 21, 2018 from Encyclopedia.com: http://www.encyclopedia.com/environment/encyclopedias-almanacs-transcripts-and-maps/x-ray-astronomy | <urn:uuid:25098f4d-e188-48bf-a7d8-4c37330d1baa> | 4.15625 | 3,165 | Knowledge Article | Science & Tech. | 42.236157 | 95,564,204 |
Choose a symbol to put into the number sentence.
Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be?
If you have only four weights, where could you place them in order to balance this equaliser?
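One way to explore the equaliser is by computer search. The sketch below assumes a balance with hooks numbered 1-10 on each arm, identical unit weights, and that several weights may share a hook — all assumptions, since the task itself doesn't fix these details:

```python
from itertools import combinations_with_replacement

HOOKS = range(1, 11)  # assumed: hooks numbered 1-10 on each arm

def balanced_placements(total_weights=4):
    """Return (left, right) hook tuples whose turning moments balance."""
    found = []
    for n_left in range(1, total_weights):
        n_right = total_weights - n_left
        for left in combinations_with_replacement(HOOKS, n_left):
            for right in combinations_with_replacement(HOOKS, n_right):
                if sum(left) == sum(right):  # unit weights: moment = hook number
                    found.append((left, right))
    return found
```

For example, one weight on hook 10 balances three weights on hooks 2, 3 and 5; mirror images (left and right swapped) are counted separately.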
Start by putting one million (1 000 000) into the display of your calculator. Can you reduce this to 7 using just the 7 key and add, subtract, multiply, divide and equals as many times as you like?
Can you put the numbers 1 to 8 into the circles so that the four calculations are correct?
This task, written for the National Young Mathematicians' Award 2016, involves open-topped boxes made with interlocking cubes. Explore the number of units of paint that are needed to cover the boxes...
Place the numbers 1 to 10 in the circles so that each number is the difference between the two numbers just below it.
Place six toy ladybirds into the box so that there are two ladybirds in every column and every row.
Starting with the number 180, take away 9 again and again, joining up the dots as you go. Watch out - don't join all the dots!
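The subtraction pattern can be generated directly; this brief sketch simply lists the numbers visited (the joining-up of dots is left to paper):

```python
def countdown(start=180, step=9):
    """List the numbers visited when `step` is taken away again and again."""
    seq = []
    n = start
    while n >= 0:
        seq.append(n)
        n -= step
    return seq

# countdown() gives 180, 171, 162, ... down to 0 - twenty-one numbers in all
```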
Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this?
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
There are 78 prisoners in a square cell block of twelve cells. The clever prison warder arranged them so there were 25 along each wall of the prison block. How did he do it?
Winifred Wytsh bought a box each of jelly babies, milk jelly bears, yellow jelly bees and jelly belly beans. In how many different ways could she make a jolly jelly feast with 32 legs?
There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places.
Can you each work out the number on your card? What do you notice? How could you sort the cards?
You have two egg timers. One takes 4 minutes exactly to empty and the other takes 7 minutes. What times in whole minutes can you measure and how?
How have the numbers been placed in this Carroll diagram? Which labels would you put on each row and column?
Using the statements, can you work out how many of each type of rabbit there are in these pens?
Zumf makes spectacles for the residents of the planet Zargon, who have either 3 eyes or 4 eyes. How many lenses will Zumf need to make all the different orders for 9 families?
This is an adding game for two players.
Arrange eight of the numbers between 1 and 9 in the Polo Square below so that each side adds to the same total.
Got It game for an adult and child. How can you play so that you know you will always win?
This task, written for the National Young Mathematicians' Award 2016, invites you to explore the different combinations of scores that you might get on these dart boards.
Strike it Out game for an adult and child. Can you stop your partner from being able to go?
How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?
You have 5 darts and your target score is 44. How many different ways could you score 44?
Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores.
Cherri, Saxon, Mel and Paul are friends. They are all different ages. Can you find out the age of each friend using the information?
Can you arrange 5 different digits (from 0 - 9) in the cross in the way described?
This problem is based on a code using two different prime numbers less than 10. You'll need to multiply them together and shift the alphabet forwards by the result. Can you decipher the code?
This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
Can you put plus signs in so this is true? 1 2 3 4 5 6 7 8 9 = 99 How many ways can you do it?
There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the jugs.
Some Games That May Be Nice or Nasty for an adult and child. Use your knowledge of place value to beat your opponent.
Place this "worm" on the 100 square and find the total of the four squares it covers. Keeping its head in the same place, what other totals can you make?
Use your logical-thinking skills to deduce how much Dan's crisps and ice-cream cost altogether.
Ben has five coins in his pocket. How much money might he have?
Add the sum of the squares of four numbers between 10 and 20 to the sum of the squares of three numbers less than 6 to make the square of another, larger, number.
Look carefully at the numbers. What do you notice? Can you make another square using the numbers 1 to 16, that displays the same properties?
Can you substitute numbers for the letters in these sums?
This task, written for the National Young Mathematicians' Award 2016, focuses on 'open squares'. What would the next five open squares look like?
Three children are going to buy some plants for their birthdays. They will plant them within circular paths. How could they do this?
A game for 2 players. Practises subtraction and other maths operations.
This task follows on from Build it Up and takes the ideas into three dimensions!
A game for 2 people using a pack of cards. Turn over 2 cards and try to make an odd number or a multiple of 3.
There were chews for 2p, mini eggs for 3p, Chocko bars for 5p and lollypops for 7p in the sweet shop. What could each of the children buy with their money?
Exactly 195 digits have been used to number the pages in a book. How many pages does the book have?
These two group activities use mathematical reasoning - one is numerical, one geometric.
Here you see the front and back views of a dodecahedron. Each vertex has been numbered so that the numbers around each pentagonal face add up to 65. Can you find all the missing numbers?
A game for 2 or more players with a pack of cards. Practise your skills of addition, subtraction, multiplication and division to hit the target score. | <urn:uuid:89fb8022-29be-408c-bf7c-6048bc3b8c33> | 4.21875 | 1,454 | Tutorial | Science & Tech. | 76.463412 | 95,564,218 |
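The page-numbering puzzle above ("Exactly 195 digits have been used to number the pages in a book") can be checked with a short brute-force count. The sketch below (Python) simply searches for the page count that uses exactly 195 digits:

```python
# Brute-force check: find the number of pages n such that numbering
# pages 1..n uses exactly 195 digits in total.
def digits_used(pages):
    return sum(len(str(p)) for p in range(1, pages + 1))

book = next(n for n in range(1, 1000) if digits_used(n) == 195)
print(book)  # → 101 (pages 1-9 use 9 digits, 10-99 use 180, leaving 6 digits for pages 100 and 101)
```

Because the digit count grows monotonically with the page count, the answer is unique.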
It might take a millisecond, or it might take a century. But if you have a large enough sample, a pattern begins to emerge.
It takes a certain amount of time for half the atoms in a sample to decay.
Scientists look at half-life decay rates of radioactive isotopes to estimate when a particular atom might decay.
This technique is widely used on recent artifacts, but educators and students alike should note that this technique will not work on older fossils (like those of the dinosaurs alleged to be millions of years old).
Organisms at the base of the food chain that photosynthesize – for example, plants and algae – use the carbon in Earth’s atmosphere.
They have the same ratio of carbon-14 to carbon-12 as the atmosphere, and this same ratio is then carried up the food chain all the way to apex predators, like sharks.
Levels of carbon-14 become difficult to measure and compare after about 50,000 years (between 8 and 9 half-lives, by which point less than 1% of the original carbon-14 remains undecayed).
The question should be whether carbon-14 can be used to date any artifacts at all. There are a few categories of artifacts that can be dated using carbon-14; however, they cannot be more than 50,000 years old.
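The decay arithmetic behind these limits can be sketched directly. The 5,730-year half-life used below is the standard figure for carbon-14 (it is not stated in the text above):

```python
import math

# Exponential decay: fraction of carbon-14 remaining after t years.
HALF_LIFE = 5730.0  # years, the commonly quoted carbon-14 half-life

def fraction_remaining(years):
    return 0.5 ** (years / HALF_LIFE)

def age_for_fraction(fraction):
    # Invert the decay law: t = T * log2(1 / fraction)
    return HALF_LIFE * math.log2(1.0 / fraction)

print(round(fraction_remaining(50_000) * 100, 2))  # → 0.24 (% left after 50,000 years)
print(round(age_for_fraction(0.01)))               # → 38069 (years until only 1% remains)
```

This is why the practical limit is usually quoted as roughly 50,000 years: well before that point, under 1% of the original carbon-14 is left to measure.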
Oct. 24, 2017 -- A rash of earthquakes in southern Colorado and northern New Mexico recorded between 2008 and 2010 was likely due to fluids pumped deep underground during oil and gas wastewater disposal, says a new University of Colorado Boulder study.
The study, which took place in the 2,200-square-mile Raton Basin along the central Colorado-northern New Mexico border, found more than 1,800 earthquakes up to magnitude 4.3 during that period, linking most to wastewater injection well activity. Such wells are used to pump water back in the ground after it has been extracted during the collection of methane gas from subterranean coal beds.
One key piece of the new study was the use of hydrogeological modeling of pore pressure in what is called the "basement rock" of the Raton Basin - rock several miles deep that underlies the oldest stratified layers. Pore pressure is the fluid pressure within rock fractures and rock pores.
While two previous studies have linked earthquakes in the Raton Basin to wastewater injection wells, this is the first to show that elevated pore pressures deep underground are well above earthquake-triggering thresholds, said CU Boulder doctoral student Jenny Nakai, lead study author. The northern edges of the Raton Basin border Trinidad, Colorado, and Raton, New Mexico.
"We have shown for the first time a plausible causative mechanism for these earthquakes," said Nakai of the Department of Geological Sciences. "The spatial patterns of seismicity we observed are reflected in the distribution of wastewater injection and our modeled pore pressure change."
A paper on the study was published in the Journal of Geophysical Research: Solid Earth. Co-authors on the study include CU Boulder Professors Anne Sheehan and Shemin Ge of geological sciences, former CU Boulder doctoral student Matthew Weingarten, now a postdoctoral fellow at Stanford University, and Professor Susan Bilek of the New Mexico Institute of Mining and Technology in Socorro.
The Raton Basin earthquakes between 2008 and 2010 were measured by the seismometers from the EarthScope USArray Transportable Array, a program funded by the National Science Foundation (NSF) to measure earthquakes and map Earth's interior across the country. The team also used seismic data from the Colorado Rockies Experiment and Seismic Transects (CREST), also funded by NSF.
As part of the research, the team simulated in 3-D a 12-mile long fault gleaned from seismicity data in the Vermejo Park region in the Raton Basin. The seismicity patterns also suggest a second, smaller fault in the Raton Basin that was active from 2008-2010.
Nakai said the research team did not look at the relationship between the Raton Basin earthquakes and hydraulic fracturing, or fracking.
The new study also showed the number of earthquakes in the Raton Basin correlates with the cumulative volume of wastewater injected in wells up to about 9 miles away from the individual earthquakes. There are 28 "Class II" wastewater disposal wells - wells that are used to dispose of waste fluids associated with oil and natural gas production - in the Raton Basin, and at least 200 million barrels of wastewater have been injected underground there by the oil and gas industry since 1994.
"Basement rock is typically more brittle and fractured than the rock layers above it," said Sheehan, also a fellow at CU's Cooperative Institute for Research in Environmental Sciences. "When pore pressure increases in basement rock, it can cause earthquakes."
There is still a lot to learn about the Raton Basin earthquakes, said the CU Boulder researchers. While the oil and gas industry has monitored seismic activity with seismometers in the Raton Basin for years and mapped some sub-surface faults, such data are not made available to researchers or the public.
The earthquake patterns in the Raton Basin are similar to other U.S. regions that have shown "induced seismicity" likely caused by wastewater injection wells, said Nakai. Previous studies involving CU Boulder showed that injection wells likely caused earthquakes near Greeley, Colorado, in Oklahoma and in the mid-continent region of the United States in recent years.
For more information contact Jim Scott in CU Boulder media relations at firstname.lastname@example.org or 303-492-3114. | <urn:uuid:67dd2e9d-12e4-487b-9206-f092f2a2cb8f> | 2.953125 | 877 | News (Org.) | Science & Tech. | 34.841131 | 95,564,249 |
Planetary Protection and a Mars Exploration Debate
On the subject of planetary protection, even the scientific experts don't agree all the time. Planetary protection is concerned with biological cross-contamination of planets during space missions. The Outer Space Treaty of 1967, signed by more than 90 countries, requires that space exploration be done in ways that avoid bringing hitchhiker microorganisms from Earth on outward-bound spacecraft and bringing something back that could be biohazardous to Earth.
Now there is a debate about whether planetary protection policies for unmanned missions searching for life on Mars should be relaxed in advance of human missions to Mars. A former SETI Institute scientist, Alberto G. Fairén, now with the Centro de Astrobiologia, has written an article suggesting that human missions to Mars will increase the likelihood of planetary contamination and that time is running out to discover the existence of life on Mars. John Rummel, of the SETI Institute's Scientific Advisory Board, has rebutted that assertion.
- EurekAlert: Debate Over Mars Exploration Strategy Heats up in Astrobiology Journal
- SpaceRef: Debate Over Mars Exploration Strategy Heats up in Astrobiology Journal
- Astrobiology: Searching for Life on Mars Before it’s too Late
- Astrobiology: Four Fallacies and an Oversight: Searching for Martian Life
Deep Mine Research and Education at Boulby Mine
SETI Institute scientist Rosalba Bonnacorsi is part of the expedition team when NASA Spaceward Bound and the U.K. Centre for Astrobiology conduct a planetary analog expedition in the Boulby Mine. Boulby is the site of the astrobiology analog research with the Mine Analog Research Program (MINAR).
The team includes scientists, teachers, engineers, biologists, geologists and astronauts who will work on a variety of science and technology projects to address specific scientific questions and test a variety of potential technologies and planetary exploration protocols.
Spaceward Bound is an educational program and will use the lab and mine environment to carry out science and technology in support of the subsurface exploration.
- SETI.org: Catch Up with SETI Institute Scientist Rosalba Bonnacorsi on her NASA Spaceward Bound Expedition to the Center of the Earth (Almost!)
- The University of Edinburgh: Deep Mine Research to Aid Future Mars Missions
- Tech 2: Scientists from Around the World are Gathering in a Deep Mine to Research Technologies for Deep Space Missions
- The National: Scottish Scientists Help Solve Dark Energy Mystery Which has Baffled Researchers for Years
Cassini and Saturn’s Rings
SETI Institute scientist Matt Tiscareno, who has been working on research from Cassini since 2004, shared some highlights from Cassini’s Ring-Grazing orbits (December 2016 to March 2017) – weekly plunges through the ring plane just off the outer edge of the main rings of Saturn – and Grand Finale (April 2017 to September 2017) – weekly plunges between the rings and Saturn’s cloud-tops. The science goals of these orbits included sampling of particles from Saturn’s rings and atmosphere, detailed measurements of Saturn’s gravity and magnetic field to probe the planet’s interior, and close-range imaging of Saturn and its rings.
- American Scientist: Cassini and the Rings of Saturn
Rings Found Around Dwarf Planet Haumea
Haumea is the first dwarf planet and Kuiper Belt object found to have a ring system. Scientists hope this discovery will help them understand how and why rings form. Until the discovery of rings around the asteroid 10199 Chariklo in 2013, it was assumed that only large planets could host ring systems.
SETI Institute scientist Mark Showalter who is leading the hazard planning team for New Horizon’s next flyby target, an object in the Kuiper Belt called MU69, commented:
“I’m sort of torn. Scientifically, this is fascinating. But as someone with MU69 on his mind, I did meet the news with some trepidation. We hadn’t not assumed there was a ring, but it drives home the point that there are generally things out there that we might not know about. We’ll be doing a great deal of studying and preparation.”
- National Geographic: First Rings Found Circling Weird World Near Pluto
- Science News: Oddball Dwarf Planet Haumea Has a Ring
The Unistellar eVscope Making it Easier than Ever to See Objects in the Night Sky
The new Unistellar eVscope uses optics and electronics to increase the brightness of celestial objects in the eyepiece in real time. This makes it easier and more exciting for casual observers to view colorful nebulas, asteroids, meteor showers, faraway galaxies and more in the night sky, even in areas with heavy light pollution such as New York City. Further, when the telescope becomes available to the public in late 2018 (and through a crowdfunding campaign coming up later this month), it will also be wi-fi enabled, making it possible for researchers to request observations of objects from different parts of the world and for users to share their observations.
- Scientific American: New Telescope “Gives Back the Sky” to City-Dwellers
- SETI.org: SETI Institute-Unistellar Partnership Promises to Revolutionize Amateur Astronomy
- SETI.org: Unistellar Telescope Successfully Finds, Images Asteroid Florence
- SETI.org: Seeing Pluto with Your Own Eyes from Your Backyard with Unistellar’s eVscope
SETI Institute Elects Dava Newman to Board of Trustees
The Board of Trustees of the SETI Institute has unanimously elected Dava Newman as the newest member of its Board of Trustees. Dava is the Apollo Program Professor of Astronautics at the Massachusetts Institute of Technology (MIT) and a Harvard-MIT Health, Sciences and Technology faculty member. She previously served as Deputy Administrator of NASA under the Obama administration, from May 2015 until January 2017.
- SETI.org: SETI Institute Elects Dava Newman to Board of Trustees
- Space Ref: SETI Institute Elects Dava Newman to Board of Trustees
Big Picture Science
Last week’s Facebook Live went behind the scenes of the SETI Institute’s radio show and podcast, Big Picture Science. This week, Franck Marchis was live from the Division of Planetary Sciences meeting with planetary scientists Cynthia Philips, Jim Bell and Hal Levison, talking about future NASA missions. There will also be a bonus Facebook Live from the SOFIA hangar in Palmdale, CA with participants in the Airborne Astronomy Ambassadors program.
All past Facebook Live videos can be seen on the SETI Institute’s Facebook page at https://www.facebook.com/SETIInstitute/.
Division of Planetary Sciences, October 15-20, Provo, UT. SETI Institute scientist Matt Tiscareno is a featured speaker who will present research on the planetary rings of Saturn and the Cassini mission. Other SETI Institute scientists whose work will be featured at the conference include Christina Dalle Ore, Melissa McGrath, Mark Showalter, Franck Marchis, Driss Takir, Robert Morris, Matija Cuk, Ross Beyer, David Hinson, Erin Ryan, and Peter Tenenbaum.
Sidewalk Astronomy at Pier 17, October 24 in San Francisco. See the universe from Pier 17 in San Francisco with Unistellar eVscope! SETI Institute astronomer Franck Marchis will be there to demo the prototype.
SETI Talks: October 26, Menlo Park, CA Featuring David Grinspoon and a discussion of the Anthropocene.
SETI Talks: November 29, Menlo Park, CA Featuring Jeff Coughlin and Geert Berentsen and a discussion of the Kepler and K2 missions.
American Geophysical Union: December 11-15, New Orleans, LA. SETI Institute scientist Matt Tiscareno will present research on the planetary rings of Saturn and the Cassini mission.
Other Names and/or Listed subspecies:
Status/Date Listed as Endangered:
Area(s) Where Listed As Endangered:
Cameroon, Cote d'Ivoire, Democratic Republic of Congo (Zaire), Ghana, Nigeria, Republic of Congo
The African Teak (Pericopsis elata) is a species of concern belonging in the species group "plants" and found in the following area(s): Cameroon, Cote d'Ivoire, Democratic Republic of Congo (Zaire), Ghana, Nigeria, Republic of Congo. This species is also known by the following name(s): Afromosia.
African Teak Facts Last Updated:
January 1, 2016
To Cite This Page:
Glenn, C. R. 2006. "Earth's Endangered Creatures - African Teak Facts" (Online).
Accessed 7/17/2018 at http://earthsendangered.com/profile.asp?sp=5539&ID=1.
Fibrations and Homotopy Groups
The theory of fibrations is a very useful tool in the determination of homotopy groups. For a fibration (E,B,F,p), there is a relationship between the homotopy groups of E, B and F, and this often allows us to determine the homotopy of one of these spaces, given that of the others. In this chapter we will formulate particular cases of this relationship, and show how they can be used in the computation of homotopy groups of specific spaces. Later we will see how to derive these particular cases from a single general theorem—the theorem on the exact homotopy sequence of fibrations, which we prove in Section 11.2.
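The relationship in question is made precise by the long exact sequence of a fibration, stated here in standard notation as a reference point (the theorem itself is proved in Section 11.2):

```latex
\cdots \to \pi_n(F) \to \pi_n(E) \xrightarrow{p_*} \pi_n(B)
\xrightarrow{\partial} \pi_{n-1}(F) \to \pi_{n-1}(E) \to \cdots
\to \pi_1(B) \to \pi_0(F) \to \pi_0(E)
```

For example, applied to the Hopf fibration $S^1 \to S^3 \to S^2$, the vanishing of $\pi_n(S^1)$ for $n \ge 2$ gives $\pi_n(S^3) \cong \pi_n(S^2)$ for $n \ge 3$, and in particular $\pi_3(S^2) \cong \mathbb{Z}$.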
Keywords: Unitary Transformation, Homotopy Class, Total Space, Orbit Space, Homotopy Group
Global Tipping Points
Host Bernice Notenboom along with scientists and climate experts will delve into the elements destabilizing our climate system and how changes in a particular area can dramatically impact those thousands of miles away.
The phenomena of Tipping Points chronicles the idea that, at a particular moment in time, a minute change can have large, long-term consequences on a fragile climate system already in a state of flux. Localized ecological systems are known to shift abruptly and irreversibly from one state to another when they are forced across critical thresholds. Further, when the situation is pushed past the Tipping Point it will potentially lead to a chain reaction, putting other ecosystems around the globe in peril. | <urn:uuid:d5677b1e-0c88-4d49-8d47-2d09654c523e> | 2.671875 | 143 | Truncated | Science & Tech. | 16.680767 | 95,564,301 |
why do sea turtles come out of the water
The sea turtle life cycle starts when a female lays its eggs on a nesting beach, usually in the tropics. From six weeks to two months later (depending on the species), a tiny hatchling makes its way to the surface of the sand and heads to the water, dodging every predator imaginable. From the time the take their first swim until they return to coastal waters to forage as may be as long as a decade. This period of time is often referred to as the "lost years" since following sea turtles movements during this phase is difficult and their whereabouts are often unknown.
Following the "lost years", when they have grown to approximately the size of a dinner plate, their pelagic (open ocean) phase comes to an end and they return to coastal waters where they forage and continue to mature. During this time, these reptiles are highly mobile, foraging over large areas of ocean.
From leatherbacks to loggerheads, six of the seven species of sea turtles are threatened or endangered at the hand of humans.
Sadly, the fact is that they face many dangers as they travel the seas, including accidental capture and entanglement in fishing gear (also known as bycatch), the loss of nesting and feeding sites to coastal development, poaching, and ocean pollution including plastic. These creatures are well-adapted to the ocean though they require air to survive. Their size varies greatly, depending upon species, from the small Kemp's ridley, which weighs between 80 and 100 pounds, to the enormous leatherback, which can weigh more than 1,000 pounds.
Sea turtles live in almost every ocean basin throughout the world, nesting on tropical and subtropical beaches. They migrate long distances to feed, often crossing entire oceans. Some loggerheads nest in Japan and migrate to Baja California Sur, Mexico to forage before returning home again. Leatherbacks are capable of withstanding the coldest water temperatures (often below 40°F) and are found as far south as Chile and as far north as Alaska.
HTML file upload lets a user choose a file from disk and send it with a form: when the user clicks the browse button, a file chooser opens. A form in an HTML web page contains an input element with type="file", which includes one or more files in the form submission. The files are then sent to the web server, which typically stores them on its hard disk; that is why this input is called a "file upload".
Understand with Example
This tutorial illustrates an example of HTML file upload. In this code, we create an HTML page which shows you a file-upload form.
<input type="text"> :In HTML,the type of control used in form is defined by the Type attribute. The default value of Type is text, which enables a a single-line text input field. The size attribute define the number of characters in text field.
<input type="file">:The <input type="file> is used to create a upload file with a text box and the browse button. The method attribute used in form must be set to post.
When a user enters the name of a file in the text box, or clicks the Browse button to pick a file from a folder, and then clicks the submit button, the file is submitted with the form.
<form action="fileinsert.html" method="post">
Enter Your Text (Optional):<br>
<input type="text" name="textwrite" size="15">
Specify your File:<br>
<input type="file" name="datasize" size="30">
<input type="submit" value="Send">
Output on the browser is displayed as | <urn:uuid:84ec78ea-b91d-4a68-8722-da35166370a3> | 3.671875 | 322 | Tutorial | Software Dev. | 53.103157 | 95,564,320 |
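Behind the scenes, a form with enctype="multipart/form-data" makes the browser package each field and file into a multipart request body. The sketch below (Python; the field names are taken from the form above, while the file name and contents are made up for illustration) builds such a body by hand:

```python
import uuid

def encode_multipart(fields, files):
    """Build a multipart/form-data body like a browser does when a form
    has enctype="multipart/form-data"."""
    boundary = uuid.uuid4().hex
    lines = []
    for name, value in fields.items():      # ordinary text fields
        lines += [f"--{boundary}",
                  f'Content-Disposition: form-data; name="{name}"',
                  "", value]
    for name, (filename, data) in files.items():   # uploaded files
        lines += [f"--{boundary}",
                  f'Content-Disposition: form-data; name="{name}"; filename="{filename}"',
                  "Content-Type: application/octet-stream",
                  "", data]
    lines += [f"--{boundary}--", ""]        # closing boundary
    body = "\r\n".join(lines)
    return f"multipart/form-data; boundary={boundary}", body

ctype, body = encode_multipart({"textwrite": "hello"},
                               {"datasize": ("notes.txt", "file contents")})
print(ctype.startswith("multipart/form-data"), 'filename="notes.txt"' in body)
```

Without the multipart encoding, only the file's name (not its contents) would reach the server, which is why the enctype attribute matters for this form.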
A webhook in web development is a method of augmenting or altering the behaviour of a web page, or web application, with custom callbacks. These callbacks may be maintained, modified, and managed by third-party users and developers who may not necessarily be affiliated with the originating website or application. The term "webhook" was coined by Jeff Lindsay in 2007 from the computer programming term hook.
Webhooks are "user-defined HTTP callbacks". They are usually triggered by some event, such as pushing code to a repository or a comment being posted to a blog. When that event occurs, the source site makes an HTTP request to the URL configured for the webhook. Users can configure them to cause events on one site to invoke behaviour on another. The action taken may be anything. Common uses are to trigger builds with continuous integration systems or to notify bug tracking systems. Since they use HTTP, they can be integrated into web services without adding new infrastructure. However, there are also ways to build a message queuing service on top of HTTP--some RESTful examples include IronMQ and RestMS.
Cornell researchers say double knocks may be soundprints of ivory-bills
The public is invited to join in, listen and help decipher the sounds of the ivory-billed woodpecker. By Simeon Moss
Now hear this: After analyzing more than 18,000 hours of recordings from the swampy forests of eastern Arkansas, researchers at the Cornell Laboratory of Ornithology at Cornell University have released recordings offering further evidence -- including the legendary bird's distinctive double knock -- for the existence of the ivory-billed woodpecker, once thought extinct. These sounds were recorded in the same area of Arkansas where the species was rediscovered in 2004.
The Cornell researchers announced the results at the annual meeting of the American Ornithologists Union in Santa Barbara, Calif., Aug. 24, and they have invited the public to listen to the calls and knocks on the Web at http://www.birds.cornell.edu.
Imagine a mask that could allow a person to breathe the oxygen in the air without the risk of inhaling a toxic gas, bacterium or even a virus. Effectively filtering different kinds of molecules has always been difficult, but a new process by researchers at the University of Rochester may have paved the way to creating a new kind of membrane with pores so fine they can separate a mixture of gases. Industries could use these types of membranes for extracting hydrogen from other gases for fuel cells that will power the next generation of automobiles.
Mathew Yates, assistant professor of chemical engineering, is developing a new way to make molecular sieves: crystals with holes so small that they can discriminate between large and small molecules. Many such crystals exist and are used regularly in industry and laboratories, but Yates's crystals may be able to be properly aligned and brought together into a sheet, which would dramatically expand their possible uses.
Yates has "grown" the new kind of crystals in a solution of water and oil, where droplets of water only a few billionths of an inch wide are dispersed within the oil with the aid of soap-like compounds. Molecular sieve crystals are normally produced in a simple container of water, which is filled with the right ingredients and heated to form crystals, but this produces crystals in a wide variety of sizes that are short and thick and hard to align. Gathering the crystals together with all their pores pointing in the same direction was all but impossible. Yates found that confining the reaction within the small droplets of water dispersed in oil altered the way the crystals grew-long fibers were created with tunnel-like pores.
Jonathan Sherwood | EurekAlert!
This week, astrobiologists are discussing what ESA's Huygens space probe might discover when it parachutes to the surface of Saturn's mysterious moon, Titan, in 2005. Titan possesses a rich atmosphere of organic molecules, which Huygens will analyse. Recently, some scientists have begun to think that, if life is redefined in broader terms, what we find on Titan may be life. If this is the case, it certainly will not be life as we know it...
Titan is an astrobiologist's dream laboratory. Its atmosphere is composed of nitrogen and methane gas. Ultraviolet light from the Sun can break the methane molecules apart, leading to the formation of complex organic molecules (by which scientists mean molecules containing carbon). Carbon compounds are the first step towards life as we know it on Earth. Life itself is based on extremely complicated carbon molecules such as DNA. Some scientists believe the composition of Titan's atmosphere closely resembles that of the early Earth, before life began on our planet.
Huygens's investigations may reveal how life began on Earth. Jean-Pierre Lebreton, ESA's Project Scientist for Huygens, says: "One of the key questions we hope to address is how complex the organic molecules have grown in Titan's atmosphere."
However, organic molecules are still a long way from life itself. So, what defines life? What is the difference between the living and the non-living? Scientists are still unsure. No satisfactory definition has been found so far. Any attempt to define life's characteristics either excludes some types of life or includes some inanimate objects. When looking for an appropriate definition of life, there is one property all scientists seem to agree on: all life needs energy to sustain its metabolism.
Jean-Pierre Lebreton | alfa
The most common method for air separation is fractional distillation. Cryogenic air separation units (ASUs) are built to provide nitrogen or oxygen and often co-produce argon. Other methods such as membrane, pressure swing adsorption (PSA) and vacuum pressure swing adsorption (VPSA) are commercially used to separate a single component from ordinary air. High purity oxygen, nitrogen, and argon used for semiconductor device fabrication requires cryogenic distillation. Similarly, the only viable source of the rare gases neon, krypton, and xenon is the distillation of air using at least two distillation columns.
Cryogenic distillation process
Pure gases can be separated from air by first cooling it until it liquefies, then selectively distilling the components at their various boiling temperatures. The process can produce high purity gases but is energy-intensive. This process was pioneered by Carl von Linde in the early 20th century and is still used today to produce high purity gases.
The cryogenic separation process requires very tight integration of heat exchangers and separation columns to achieve good efficiency, and all of the energy for refrigeration is provided by the compression of the air at the inlet of the unit.
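To get a rough sense of the scale of that inlet compression energy, the ideal-gas isothermal work W = nRT ln(p2/p1) gives a lower bound. The 300 K intake temperature and the 6 bar delivery pressure below are illustrative assumptions within the typical 5-10 bar range, not figures from the text:

```python
import math

R = 8.314        # J/(mol*K), universal gas constant
T = 300.0        # K, assumed ambient intake temperature
M_AIR = 0.02897  # kg/mol, mean molar mass of dry air

def isothermal_work(p_ratio, temperature=T):
    """Minimum (reversible, isothermal) compression work per mole of gas, in J/mol."""
    return R * temperature * math.log(p_ratio)

w_mol = isothermal_work(6.0)        # compressing from 1 bar to 6 bar
w_kg = w_mol / M_AIR / 1000.0       # same figure per kilogram of air, in kJ/kg

print(f"{w_mol:.0f} J/mol  ~  {w_kg:.0f} kJ/kg of air")
```

Real intercooled multistage compressors only approach this reversible figure, so the actual specific work of an ASU is correspondingly higher.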
To achieve the low distillation temperatures an air separation unit requires a refrigeration cycle that operates by means of the Joule–Thomson effect, and the cold equipment has to be kept within an insulated enclosure (commonly called a "cold box"). The cooling of the gases requires a large amount of energy to make this refrigeration cycle work and is delivered by an air compressor. Modern ASUs use expansion turbines for cooling; the output of the expander helps drive the air compressor, for improved efficiency. The process consists of the following main steps:
- Before compression the air is pre-filtered of dust.
- Air is compressed; the final delivery pressure is determined by the recoveries and the fluid state (gas or liquid) of the products. Typical pressures range between 5 and 10 bar gauge. The air stream may also be compressed to different pressures to enhance the efficiency of the ASU. During compression, water is condensed out in inter-stage coolers.
- The process air is generally passed through a molecular sieve bed, which removes any remaining water vapour, as well as carbon dioxide, which would freeze and plug the cryogenic equipment. Molecular sieves are often also designed to remove gaseous hydrocarbons from the air, since these can accumulate during the subsequent air distillation and lead to explosions. The molecular sieve beds must be regenerated; this is done by installing multiple units operating in alternating mode and using the dry co-produced waste gas to desorb the water.
- Process air is passed through an integrated heat exchanger (usually a plate-fin heat exchanger) and cooled against product (and waste) cryogenic streams. Part of the air liquefies to form a liquid that is enriched in oxygen. The remaining gas is richer in nitrogen and is distilled to almost pure nitrogen (typically < 1 ppm impurities) in a high-pressure (HP) distillation column. The condenser of this column requires refrigeration, which is obtained by expanding the more oxygen-rich stream further across a valve or through an expander (a reverse compressor).
- Alternatively, the condenser may be cooled by interchanging heat with a reboiler in a low-pressure (LP) distillation column (operating at 1.2-1.3 bar abs.) when the ASU is producing pure oxygen. To minimize the compression cost, the combined condenser/reboiler of the HP/LP columns must operate with a temperature difference of only 1-2 K, requiring plate-fin brazed aluminium heat exchangers. Typical oxygen purities range from 97.5% to 99.5%; the chosen purity influences the maximum achievable recovery of oxygen. The refrigeration required for producing liquid products is obtained using the Joule–Thomson effect in an expander which feeds compressed air directly to the low-pressure column. Hence, a certain part of the air is not separated and must leave the low-pressure column as a waste stream from its upper section.
- Because the boiling point of argon (87.3 K at standard conditions) lies between that of oxygen (90.2 K) and nitrogen (77.4 K), argon builds up in the lower section of the low-pressure column. When argon is produced, a vapor side draw is taken from the low-pressure column where the argon concentration is highest. It is sent to another column rectifying the argon to the desired purity, from which liquid is returned to the same location in the LP column. Use of modern structured packings, which have very low pressure drops, enables argon with less than 1 ppm impurities. Though argon is present at less than 1% of the incoming air, the argon column requires a significant amount of energy due to the high reflux ratio required (about 30). Cooling of the argon column can be supplied from cold expanded rich liquid or by liquid nitrogen.
- Finally the products produced in gas form are warmed against the incoming air to ambient temperatures. This requires a carefully crafted heat integration that must allow for robustness against disturbances (due to switch over of the molecular sieve beds). It may also require additional external refrigeration during start-up.
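A back-of-the-envelope sketch of why the recuperative "cold box" arrangement in the steps above is needed: near ambient temperature the Joule–Thomson coefficient of air is small. The 0.2 K/bar value below is an assumed round figure for illustration, but it shows that a single throttling expansion alone cannot come close to the roughly 80 K needed for liquefaction:

```python
MU_JT_AIR = 0.2  # K/bar, assumed order-of-magnitude Joule-Thomson coefficient
                 # for air near ambient temperature (it grows as the gas cools)

def jt_temperature_drop(delta_p_bar, mu=MU_JT_AIR):
    """Approximate cooling, in K, from throttling across delta_p_bar of pressure."""
    return mu * delta_p_bar

# Even a large 200 bar -> 1 bar expansion cools the air by only a few tens of K.
single_pass = jt_temperature_drop(199)
print(f"single throttling pass: ~{single_pass:.0f} K of cooling")
```

In practice the coefficient rises as the gas cools, and the Hampson–Linde cycle exploits this by pre-cooling the feed against the returning cold product streams before the final expansion.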
The separated products are sometimes supplied by pipeline to large industrial users near the production plant. Long distance transportation of products is by shipping liquid product for large quantities or as dewar flasks or gas cylinders for small quantities.
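The reflux ratio of about 30 quoted above for the argon column can be turned into a rough condenser-duty figure. The 6.4 kJ/mol enthalpy of vaporization of argon used below is an assumed textbook-order value, not taken from the text:

```python
REFLUX_RATIO = 30     # ~30, as quoted for the argon column
H_VAP_ARGON = 6.4e3   # J/mol, approximate enthalpy of vaporization of argon

# The condenser must liquefy the product plus REFLUX_RATIO moles of reflux
# for every mole of argon drawn off, so the duty per mole of product is:
duty_per_mol = (REFLUX_RATIO + 1) * H_VAP_ARGON   # J per mol of argon product

print(f"~{duty_per_mol / 1e3:.0f} kJ of condenser duty per mol of argon product")
```

That is on the order of 200 kJ per mole of argon, versus a few kJ/mol to condense the product itself, which is why the argon column is energy-hungry despite argon's small share of the feed.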
Pressure swing adsorption provides separation of oxygen or nitrogen from air without liquefaction. The process operates around ambient temperature; a zeolite (molecular sponge) is exposed to high pressure air, then the air is released and an adsorbed film of the desired gas is released. The size of compressor is much reduced over a liquefaction plant, and portable oxygen concentrators are made in this manner to provide oxygen-enriched air for medical purposes. Vacuum swing adsorption is a similar process; the product gas is evolved from the zeolite at sub-atmospheric pressure.
Membrane technologies can provide alternate, lower-energy approaches to air separation. For example, a number of approaches are being explored for oxygen generation. Polymeric membranes operating at ambient or warm temperatures may be able to produce oxygen-enriched air (25-50% oxygen). Ceramic membranes can provide high-purity oxygen (90% or more) but require higher temperatures (800-900 °C) to operate. These ceramic membranes include ion transport membranes (ITM) and oxygen transport membranes (OTM). Air Products and Chemicals Inc and Praxair are developing flat ITM and tubular OTM systems.
Membrane gas separation is used to provide oxygen poor and nitrogen rich gases instead of air to fill the fuel tanks of jet liners, thus greatly reducing the chances of accidental fires and explosions. Conversely, membrane gas separation is currently used to provide oxygen enriched air to pilots flying at great altitudes in aircraft without pressurized cabins.
Oxygen-enriched air can be obtained exploiting the different solubility of oxygen and nitrogen. Oxygen is more soluble than nitrogen in water, so if air is degassed from water, a stream of 35% oxygen can be obtained.
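The roughly 35% figure can be reproduced with a hedged Henry's-law estimate. The Henry constants below are commonly tabulated values for about 25 °C and are assumptions, not taken from the text:

```python
# Henry's law: dissolved concentration c = p / kH for each gas.
K_H = {"O2": 769.0, "N2": 1639.0}   # L*atm/mol at ~25 C (assumed textbook values)
P_AIR = {"O2": 0.21, "N2": 0.78}    # approximate partial pressures in 1 atm of air

dissolved = {gas: P_AIR[gas] / K_H[gas] for gas in K_H}   # mol/L in saturated water

# Oxygen's lower Henry constant means it dissolves preferentially, so the gas
# recovered by degassing the water is markedly enriched in oxygen.
o2_fraction = dissolved["O2"] / sum(dissolved.values())
print(f"O2 fraction of gas degassed from water: {o2_fraction:.0%}")
```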
Large amounts of oxygen are required for coal gasification projects; cryogenic plants producing 3000 tons/day are found in some projects. In steelmaking oxygen is required for the basic oxygen steelmaking. Large amounts of nitrogen with low oxygen impurities are used for inerting storage tanks of ships and tanks for petroleum products, or for protecting edible oil products from oxidation.
- Louis Paul Cailletet
- Cryogenic nitrogen plant
- Cryogenic oxygen plant
- Gas separation
- Gas to liquids
- Hampson–Linde cycle
- Industrial gases
- Liquefaction of gases
- Liquid air
- Oxygen concentrator
- Siemens cycle
TatSu takes a grammar in a variation of EBNF as input, and outputs a memoizing PEG/Packrat parser in Python.
At least for the people who send me mail about a new language that they’re designing, the general advice is: do it to learn about how to write a compiler. Don’t have any expectations that anyone will use it, unless you hook up with some sort of organization in a position to push it hard. It’s a lottery, and some can buy a lot of the tickets. There are plenty of beautiful languages (more beautiful than C) that didn’t catch on. But someone does win the lottery, and doing a language at least teaches you something.
竜 TatSu can compile a grammar stored in a string into a tatsu.grammars.Grammar object that can be used to parse any given input, much like the re module does with regular expressions, or it can generate a Python module that implements the parser.
$ pip install TatSu
Using the Tool
tatsu.compile(grammar, name=None, **kwargs)
Compiles the grammar and generates a model that can subsequently be used to parse input.
tatsu.parse(grammar, input, **kwargs)
Compiles the grammar and parses the given input producing an AST as result. The result is equivalent to calling:
model = compile(grammar)
ast = model.parse(input)
Compiled grammars are cached for efficiency.
tatsu.to_python_sourcecode(grammar, name=None, filename=None, **kwargs)
Compiles the grammar to the Python sourcecode that implements the parser.
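As a minimal sketch of code generation (the one-rule grammar below is a made-up illustration, not from the TatSu documentation):

```python
import tatsu

GRAMMAR = r'''
    @@grammar::DIGITS
    start = /\d+/ $ ;
'''

# to_python_sourcecode() returns the text of a standalone Python module
# that implements the parser for the grammar.
source = tatsu.to_python_sourcecode(GRAMMAR, name='DIGITS')
print(len(source.splitlines()), 'lines of generated parser code')
```

The returned string can be written to a file and then imported like any hand-written parser module.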
This is an example of how to use 竜 TatSu as a library:
GRAMMAR = '''
    @@grammar::CALC

    start = expression $ ;

    expression
        =
        | expression '+' term
        | expression '-' term
        | term
        ;

    term
        =
        | term '*' factor
        | term '/' factor
        | factor
        ;

    factor
        =
        | '(' expression ')'
        | number
        ;

    number = /\d+/ ;
'''


if __name__ == '__main__':
    import pprint
    import json
    from tatsu import parse
    from tatsu.util import asjson

    ast = parse(GRAMMAR, '3 + 5 * ( 10 - 20 )')

    print('# PPRINT')
    pprint.pprint(ast, indent=2, width=20)
    print()

    print('# JSON')
    print(json.dumps(asjson(ast), indent=2))
    print()
竜 TatSu will use the first rule defined in the grammar as the start rule.
This is the output:
# PPRINT
[ '3',
  '+',
  [ '5',
    '*',
    [ '10',
      '-',
      '20']]]

# JSON
[
  "3",
  "+",
  [
    "5",
    "*",
    [
      "10",
      "-",
      "20"
    ]
  ]
]
For a detailed explanation of what 竜 TatSu is capable of, please see the documentation.
See the CHANGELOG for details.
|Filename, size|File type|Python version|Upload date|
|TatSu-4.2.6-py2.py3-none-any.whl (79.5 kB)|Wheel|py2.py3|May 7, 2018|
|TatSu-4.2.6.zip (126.0 kB)|Source|None|May 7, 2018|