An international team of scientists, led by the University of Oxford and working alongside researchers at the Science and Technology Facilities Council’s (STFC) Central Laser Facility, has gained a deeper insight into the hot, dense matter found at the centre of planets and, as a result, has furthered our understanding of controlled thermonuclear fusion. The full paper on this research was published on 19 October in the scientific journal Nature Physics.
This deeper insight into planetary interiors could extend our comprehension of fusion energy – the same energy that powers the Sun – and of laser-driven fusion as a future energy source. Fusion is widely considered an attractive, environmentally clean power source: it uses sea water as its principal source of fuel and produces no greenhouse gases or long-lived radioactive waste.
Using STFC’s Vulcan laser, the team produced an intense beam of X-rays to identify and reproduce the conditions found inside the cores of planets, where solid matter reaches temperatures in excess of 50,000 degrees. Understanding the complex state of matter under these extreme conditions represents one of the grand challenges of contemporary physics. The results from the Vulcan experiments are intended to improve our models of Jupiter and Saturn and to obtain better constraints on their composition and the age of the Solar System.
Using inelastic X-ray scattering measurements on a compressed lithium sample, the team showed how hot, dense matter states can be diagnosed and their structural properties obtained. The thermodynamic properties – temperature, density and ionisation state – were all measured using a combination of non-invasive, high-accuracy X-ray diagnostics and advanced numerical simulations. The experiment revealed that the matter at the centre of planets is in a state intermediate between a solid and a gas over lengths larger than 0.3 nanometres. To put this into context, 1 nanometre is less than 1/10,000th of the width of a human hair. Over such lengths the extreme matter behaves as a charged liquid, but at smaller distances it acts more like a gas.
Dr Gianluca Gregori, of the University of Oxford and STFC’s Central Laser Facility said: “The study of warm dense matter states, in this experiment on lithium, shows practical applications for controlled thermonuclear fusion, and it also represents significant understanding relating to astrophysical environments found in the core of planets and the crusts of old stars. This research therefore makes it not only possible to formulate more accurate models of planetary dynamics, but also to extend our comprehension of controlled thermonuclear fusion where such states of matter, that is liquid and gas, must be crossed to initiate fusion reactions. This work expands our knowledge of complex systems of particles where the laws that regulate their motion are both classical and quantum mechanical. ”
Professor Mike Dunne, Director of the Central Laser Facility at STFC said: “Using high power lasers to find solutions to astrophysical issues is an area that has been highly active at STFC for some time. We are very excited that the Vulcan laser has contributed to such a significant piece of research. The use of extremely powerful lasers is proving to be a particularly effective approach to delivering long-term solutions for carbon-free energy.”
3D films for better supercapacitors
Published online 15 June 2016
Three-dimensional graphene films can store charge and even hydrogen.
Materials scientists have created a new kind of three-dimensional, porous graphene film that is potentially useful for making high-performance supercapacitors [1].
This film can also be used in gas absorption, batteries and hydrogen storage.
Current techniques produce graphene films that have lost much of their original surface area and lack ion-transporting pores.
But Maher El-Kady and his colleagues from Cairo University, Egypt, and University of California, USA, have overcome this by dispersing and freezing aqueous graphene in liquid nitrogen. The ice crystals formed in liquid nitrogen prevent the graphene films from sticking together.
Evaporating the ice crystals leaves behind porous, three-dimensional honeycomb-like graphene films.
Unlike graphene paper, both sides of each graphene film are accessible to electrolytes and can thus store a large amount of charge. The films also offer low resistance to charge transport, reducing charge losses during long-term use.
All-solid-state supercapacitors made of the graphene films retain their ability to store energy even after 500 cycles of repeated bending and unbending, the researchers say.
“Besides supercapacitors, the graphene films could potentially be used as catalysts and electrode materials for batteries,” says Yuanlong Shao, the lead author of the study.
Shao, Y. et al. 3D freeze-casting of cellular graphene films for ultrahigh-power-density supercapacitors. Adv. Mater. http://dx.doi.org/10.1002/adma.201506157 (2016).
There have been significant, severe rains in Japan. Scores have lost their lives and some 2 million people have been told to evacuate. All a tragedy of course, most especially for the dead and those they leave behind. However, it does give us an opportunity to put something into perspective – the damage done by that nuclear power plant at Fukushima blowing up. The financial damage done was of course vast. That to humans and human life, not so much. And very much less than these rains, of course:
Shinzo Abe, Japan’s Prime Minister, has warned of a “race against time” to rescue flood victims as authorities issued new alerts over record rains that have killed at least 48 people and left dozens missing.
The torrential downpours have caused flash flooding and landslides across central and western parts of the country, prompting evacuation orders for more than two million people.
“Rescues, saving lives and evacuations are a race against time,” Mr Abe said as he met with a government crisis cell set up to respond to the disaster.
Yes indeed, a tragedy. But do you recall what we were told about that reactor failure at Fukushima? Something so dangerous that it was going to poison our entire world? Certainly, so dangerous that it led to Germany entirely abandoning nuclear power. Something that was going to kill many more people than the nuclear industry ever has. For the levels of radiation released were large in total, sure, but compared to other sources they're trivial.
Most of us haven't a clue what that means of course. We don't instinctively understand what a becquerel is in the same way that we do pounds, pints or gallons, and certainly trillions of anything sounds hideous. But don't forget that trillions of picogrammes of dihydrogen monoxide is also the major ingredient in a glass of beer. So what we really want to know is whether 20 trillion becquerels of radiation is actually an important number. To which the answer is no, it isn't. This is actually around and about (perhaps a little over) the amount of radiation the plant was allowed to dump into the environment before the disaster. Now there are indeed those who insist that any amount of radiation kills us all stone dead while we sleep in our beds, but I'm afraid that this is incorrect. We're all exposed to radiation all the time and we all seem to survive long enough to be killed by something else, so radiation isn't as dangerous as all that.
At which point we can offer a comparison. Something to try and give us a sense of perspective about whether 20 trillion nasties of radiation is something to get all concerned about or not. That comparison being that the radiation leakage from Fukushima appears to be about the same as that from 76 million bananas. Which is a lot of bananas I agree, but again we can put that into some sort of perspective.
If just the water falling from the skies is going to kill scores then an entire nuclear power plant blowing up without any deaths at all is small beer, right? And no one has died as a result of radiation from Fukushima – while tens of thousands died from the tsunami itself – and it’s most, most, unlikely that anyone ever will die from that radiation release.
Oh, and as to Germany? By abandoning nuclear in a panic they've turned to using more lignite, brown coal. Yep, despite the trillion and more they've spent on all those renewables, CO2 emissions from the country have risen. And we are all on board with all the climate science, aren't we, that CO2 emissions are the very devil which will murder us all in our beds?
It’s necessary, despite Douglas Adams, to have a sense of proportion about these things.
Excitation of helicons by current antennas
Depending on the angle θ between the wave vector and the magnetic field, helicons are conventionally divided into two branches: proper helicons (H mode), propagating at small θ, and Trivelpiece–Gould waves (TG mode), propagating at large θ. The latter are close to potential waves and have a significant electric-field component along the external magnetic field. It is believed that it is these waves that provide electron heating in helicon discharges. It is also commonly believed that current antennas, widely used to ignite helicon discharges, excite essentially nonpotential H modes, which then transform into TG modes due to plasma inhomogeneity. In this work, it is demonstrated that electromagnetic energy can also be efficiently introduced into plasma by means of TG modes.
posted by Lacey
Can someone please give me an example of a chemical reaction of scandium with oxygen?
This site may have the information you need.
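(Not part of the original thread: the linked page is not reproduced here, but the reaction the question is presumably after is the direct oxidation of scandium metal, which, like most similar metals, burns in oxygen to form the sesquioxide:)

4Sc + 3O2 → 2Sc2O3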
that's awesome, thank you! :)
Sorry, I've got another question for you... can you give me a site showing some uses of Sc2O3? I've been searching but I don't think I've found what I'm looking for. Would the main commercial use be for antireflective UV coatings on semiconductors, or is it used more for something else?
Yeah, that's the one I found too - I was just wondering if that is the most common commercial use
U.S. biologists say the world's fungus-farming ants cultivate essentially the same fungus and aren't as critical to fungi reproduction as had been thought.
The University of Texas-Austin scientists say fungus-farming ants are dependent on cultivating fungus gardens for food, and it has been widely believed the fungi also evolved dependence on the ants for their dispersal and reproduction. When young ant queens establish new colonies, they take a start-up crop of fungi with them from their parental garden.
UT graduate student Alexander Mikheyev and Biology Professor Ulrich Mueller say the fungi reproduce sexually and disperse widely without the aid of their ant farmers. That finding provides a new perspective on co-evolutionary processes -- such as that between honeybees and the flowers they pollinate -- when two or more species influence each other's evolution over time.
"This shows co-evolution can proceed without specificity at the species level," said Mikheyev. "It has been believed mutualistic interactions, as well as parasitic ones, are very specific and one-to-one. We are beginning to realize this is not necessary for long-term co-evolutionary stability ..."
The research appears in the current issue of the Proceedings of the National Academy of Sciences.
Copyright 2006 by United Press International
In mathematics, the sophomore's dream is the pair of identities (especially the first)

$$\int_0^1 x^{-x}\,dx = \sum_{n=1}^{\infty} n^{-n}$$

$$\int_0^1 x^{x}\,dx = \sum_{n=1}^{\infty} (-1)^{n+1} n^{-n} = -\sum_{n=1}^{\infty} (-n)^{-n}$$

discovered in 1697 by Johann Bernoulli.
The numerical values of these constants are approximately 1.291285997... and 0.7834305107..., respectively.
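A quick numerical check (not part of the original article; it assumes SciPy is available, though any quadrature routine would do) makes the too-good-to-be-true identities easy to believe:

```python
# Numerical verification of the sophomore's dream identities (illustrative sketch).
from scipy.integrate import quad

# Left-hand sides: the two integrals over (0, 1).
int_neg, _ = quad(lambda x: x**(-x), 0.0, 1.0)  # integral of x^(-x)
int_pos, _ = quad(lambda x: x**x, 0.0, 1.0)     # integral of x^x

# Right-hand sides: the rapidly converging series, truncated after 20 terms.
sum_neg = sum(n**(-n) for n in range(1, 21))                  # sum of n^(-n)
sum_pos = sum((-1)**(n + 1) * n**(-n) for n in range(1, 21))  # alternating sum

print(f"integral of x^-x = {int_neg:.9f}, series = {sum_neg:.9f}")  # both ~1.291285997
print(f"integral of x^x  = {int_pos:.9f}, series = {sum_pos:.9f}")  # both ~0.783430510
```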
The name "sophomore's dream", which appears in (Borwein, Bailey & Girgensohn 2004), is in contrast to the name "freshman's dream" which is given to the incorrect[note 1] identity (x + y)n = xn + yn. The sophomore's dream has a similar too-good-to-be-true feel, but is true.
The proofs of the two identities are completely analogous, so only the proof of the first is presented here. The key ingredients of the proof are:
- to write x^x = exp(x log x) (using the notation exp(t) for the exponential function e^t to base e);
- to expand exp(x log x) using the power series for exp; and
- to integrate termwise, using integration by substitution.
In detail, one expands x^x as

$$x^x = \exp(x \ln x) = \sum_{n=0}^{\infty} \frac{x^n (\ln x)^n}{n!}.$$

By uniform convergence of the power series, one may interchange summation and integration to yield

$$\int_0^1 x^x\,dx = \sum_{n=0}^{\infty} \int_0^1 \frac{x^n (\ln x)^n}{n!}\,dx.$$

To evaluate the above integrals, one may change the variable in the integral via the substitution x = exp(−u/(n + 1)). With this substitution, the bounds of integration are transformed to 0 < u < ∞, giving the identity

$$\int_0^1 x^n (\ln x)^n\,dx = (-1)^n (n+1)^{-(n+1)} \int_0^\infty u^n e^{-u}\,du = (-1)^n (n+1)^{-(n+1)}\, n!.$$
Summing these (and changing indexing so it starts at n = 1 instead of n = 0) yields the formula.
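A short spot check of the term-by-term identity used above (again an illustrative sketch assuming SciPy, not part of the original article):

```python
# Check that the integral of x^n (ln x)^n over (0, 1) equals (-1)^n n!/(n+1)^(n+1).
from math import factorial, log
from scipy.integrate import quad

for n in range(6):
    numeric, _ = quad(lambda x: x**n * log(x)**n, 0.0, 1.0)
    closed = (-1)**n * factorial(n) / (n + 1)**(n + 1)
    print(n, f"{numeric:.12f}", f"{closed:.12f}")  # the two columns agree
```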
The original proof, given in Bernoulli (1697), and presented in modernized form in Dunham (2005), differs from the one above in how the termwise integral is computed, but is otherwise the same, omitting technical details to justify steps (such as termwise integration). Rather than integrating by substitution, yielding the Gamma function (which was not yet known), Bernoulli used integration by parts to iteratively compute these terms.
The integration by parts proceeds as follows, varying the two exponents independently to obtain a recursion. An indefinite integral is computed initially, omitting the constant of integration both because this was done historically, and because it drops out when computing the definite integral. One may integrate by taking u = (ln x)^n and dv = x^m dx, which yields:

$$\int x^m (\ln x)^n\,dx = \frac{x^{m+1}}{m+1}(\ln x)^n - \frac{n}{m+1}\int x^m (\ln x)^{n-1}\,dx = x^{m+1} \sum_{i=0}^{n} (-1)^i \frac{(n)_i}{(m+1)^{i+1}} (\ln x)^{n-i}$$

where (n)_i denotes the falling factorial; there is a finite sum because the induction stops at 0, since n is an integer.
In this case m = n, and they are integers, so

$$\int x^n (\ln x)^n\,dx = x^{n+1} \sum_{i=0}^{n} (-1)^i \frac{(n)_i}{(n+1)^{i+1}} (\ln x)^{n-i}.$$

Integrating from 0 to 1, all the terms vanish except the last term at 1,[note 2] which yields:

$$\int_0^1 x^n (\ln x)^n\,dx = (-1)^n \frac{(n)_n}{(n+1)^{n+1}} = (-1)^n \frac{n!}{(n+1)^{n+1}}.$$
From a modern point of view, this is (up to a scale factor) equivalent to computing Euler's integral identity for the Gamma function on a different domain (corresponding to changing variables by substitution), as Euler's identity itself can also be computed via an analogous integration by parts.
- Incorrect unless one is working over a field or unital commutative ring of prime characteristic p, with n a power of p. The correct result in general is given by the binomial theorem.
- All the terms vanish at 0 because x^{n+1}(ln x)^{n−i} → 0 as x → 0+ by l'Hôpital's rule (Bernoulli omitted this technicality), and all but the last term vanish at 1 since ln(1) = 0.
- Johann Bernoulli, 1697, collected in Johannis Bernoulli, Opera omnia, vol. 3, pp. 376–381
- Borwein, Jonathan; Bailey, David H.; Girgensohn, Roland (2004), Experimentation in Mathematics: Computational Paths to Discovery, pp. 4, 44, ISBN 978-1-56881-136-9
- Dunham, William (2005), "3: The Bernoullis (Johann and x^x)", The Calculus Gallery, Masterpieces from Newton to Lebesgue, Princeton, NJ: Princeton University Press, pp. 46–51, ISBN 978-0-691-09565-3
- OEIS, (sequence A083648 in the OEIS) and (sequence A073009 in the OEIS)
- Pólya, George; Szegő, Gábor (1998), "part I, problem 160", Problems and Theorems in Analysis, p. 36, ISBN 978-3-54063640-3
- Weisstein, Eric W. "Sophomore's Dream". MathWorld.
- Max R. P. Grossmann (2017): Sophomore's dream. 1,000,000 digits of the first constant
- Literature for x^x and Sophomore's Dream, Tetration Forum, 03/02/2010
- The Coupled Exponential, Jay A. Fantini, Gilbert C. Kloepfer, 1998
- Sophomore's Dream Function, Jean Jacquelin, 2010, 13 pp.
- Lehmer, D. H. (1985). "Numbers associated with Stirling numbers and x^x". Rocky Mountain Journal of Mathematics. 15: 461. doi:10.1216/RMJ-1985-15-2-461.
- Gould, H. W. (1996). "A Set of Polynomials Associated with the Higher Derivatives of y = x^x". Rocky Mountain Journal of Mathematics. 26: 615. doi:10.1216/rmjm/1181072076.
12 October 2017, 02:11 | Dale Webster
Massive Hole Has Opened Up in Antarctica (Report)
A massive hole called a polynya opened in Antarctica's Weddell Sea last month, an odd occurrence, as polynyas typically don't develop deep in the ice pack, Motherboard reports. This isn't the first time one has been spotted here: it appeared a year ago for a brief period as well, and long before that it was detected back in the 1970s.
Known as a polynya, this year's hole was about 30,000 square miles at its largest, making it the biggest polynya observed in Antarctica's Weddell Sea since the 1970s.
"At that time, the scientific community had just launched the first satellites that provided images of the sea-ice cover from space", said Torge Martin, a meteorologist and climate modeler, as quoted by Phys.org.
This type of phenomenon is termed a polynya – an area of open water completely enclosed by sea ice. "In the depths of winter, for more than a month, we've had this area of open water", Kent Moore, an atmospheric scientist at the University of Toronto, told National Geographic.
"This is now the second year in a row it's opened after 40 years of not being there," Moore explained.
It's not clear at this point if the ice hole is influenced in any way by climate change. As per the report, the largest estimates of the hole's current size put it around 80,000 square kilometers.
Scientists weren't expecting the polynya to re-appear, and aren't sure why it has resurfaced twice in the past two years. "This is like opening a pressure relief valve-the ocean then releases a surplus of heat to the atmosphere for several consecutive winters until the heat reservoir is exhausted", Latif said.
The polynya went away for forty years and reopened in September 2016 for a few weeks. A team comprised of scientists from the University of Toronto and the Southern Ocean Carbon and Climate Observations and Modeling (SOCCOM) project found the hole during one of the monitoring exercises with the help of satellite technology.
"Global warming is not a linear process and happens on top of internal variability inherent to the climate system. We don't really understand the long-term impacts this polynya will have".
Experts say it's too early to know how climate change has affected the formation of the huge polynya, if it's to blame at all.
Stressor footprints and dynamics
We are defining the footprint of materials that stress marine ecosystems, such as contaminants, nutrients and sediment. We are using the latest in marine observational technology to collect data – including drifters and ocean gliders – and building better mathematical models to understand water flows within the region.
Project leader: Craig Stevens, NIWA/University of Auckland
Investigating the 'footprint' of materials that flow into the oceans
This project aims to define the ‘footprint’ of materials that stress marine ecosystems, such as contaminants, nutrients and sediment. A footprint describes where these materials end up and how they flow through the surrounding waters.
We are investigating three aspects:
- Near-field effects – around the source of the stressor material
- Regional effects – from wider coastal processes
- Far-field effects – due to factors like climate change
We are using the latest in marine technology – including drifters, ocean gliders and wire-walking moorings – to provide critical observational data. Our ocean glider has made multiple passes through the focal region, providing valuable data on the vertical structure of the water column (the different layers between the surface and seabed). We have released surface drifters in Tasman and Golden Bays to track where material is transported.
We are using this information to develop better mathematical models of currents and water flows within the focal region. These will show how stressor footprints from local marine activities affect the wider marine environment, and indicate what the critical factors are and how they interlink. This is complex because of the number and varied nature of marine activities, but extremely important because these models provide critical information for stakeholders and resource managers.
Our data is also used by other Challenge projects, to improve our understanding of how ocean currents transport stressor materials and their connectivity – the way they interact with each other and the marine ecosystem. This is important for determining how much, and what type, of marine activity is viable in a particular region.
Latest news and updates
Improving marine management is critical to New Zealand's future health and wealth, but research in isolation is not enough. Excellent engagement with, and participation from, all users and sectors of society is essential.
We therefore invite comment on our draft strategy for Phase II (2019–2024). This strategy has been co-developed with Māori and stakeholders.
During Seaweek, more than 4,600 school pupils joined 6 Sustainable Seas researchers for 3 days of marine science fieldwork in Tasman Bay, as part of the LEARNZ virtual field trip Sustainable seas – essential for New Zealand’s health and wealth.
Open-pool Australian lightwater reactor
The Open-pool Australian lightwater reactor (OPAL) is a 20 megawatt (MW) pool-type nuclear research reactor. Officially opened in April 2007, it replaced the High Flux Australian Reactor as Australia's only nuclear reactor, and is located at the Australian Nuclear Science and Technology Organisation (ANSTO) Research Establishment in Lucas Heights, New South Wales, a suburb of Sydney. Both OPAL and its predecessor have been commonly known as simply the Lucas Heights reactor, after their location.
The main reactor uses are:
- Irradiation of target materials to produce radioisotopes for medical and industrial applications
- Research in the fields of materials science and structural biology using neutron beams and its sophisticated suite of experimental equipment
- Analysis of minerals and samples using the neutron activation technique and the delay neutron activation technique
- Irradiation of silicon ingots in order to dope them with phosphorus and produce the basic material used in the manufacturing of semiconductor devices
The reactor runs on an operation cycle of 30 days non-stop at full power, followed by a stop of 5 days to reshuffle the fuel.
During year 2014 OPAL ran a total of 290 days at power, which represents a world-leading level of availability.
The Argentine company INVAP was fully responsible through a turnkey contract, signed in June 2000, for the delivery of the reactor, performing the design, construction and commissioning. Local civil construction was performed by INVAP's partner, John Holland-Evans Deakin Industries. The facility features a large (20-litre) liquid-deuterium cold neutron source, modern supermirror guides, and a 35 m × 65 m guide hall. The cold source was designed by the Petersburg Nuclear Physics Institute, the cryogenic system designed and supplied by Air Liquide and the initial set of four supermirror guides supplied by Mirrotron.
OPAL was opened on 20 April 2007 by then Australian Prime Minister John Howard and is the replacement for the HIFAR reactor. ANSTO received an operating licence from the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) in July 2006, allowing commencement of hot commissioning, where fuel is first loaded into the reactor core. OPAL went critical for the first time on the evening of the 12th of August 2006 and reached full power for the first time on the morning of the 3rd of November 2006.
The reactor core consists of 16 low-enriched plate-type fuel assemblies and is located under 13 metres of water in an open pool. Light water (normal H2O) is used as the coolant and moderator while heavy water (D2O) is used as the neutron reflector. The purpose of the neutron reflector is to improve neutron economy in the reactor, and hence to increase the maximum neutron flux.
OPAL is the centrepiece of the facilities at ANSTO, providing efficient and rapid radiopharmaceutical and radioisotope production, irradiation services (including neutron transmutation doping of silicon), neutron activation analysis and neutron beam research. OPAL is able to produce four times as many radioisotopes for nuclear medicine treatments as the old HIFAR reactor, and a wider array of radioisotopes for the treatment of disease. The modern design includes a cold neutron source (CNS).
The OPAL reactor has already received seven awards in Australia.
Neutron scattering at OPAL
The Bragg Institute at ANSTO hosts OPAL's neutron scattering facility. It is now running as a user facility serving the scientific community in Australia and around the world. New funding was received in 2009 to install further competitive instruments and beamlines. The facility currently comprises the following instruments:
ECHIDNA is the name of the high-resolution neutron powder diffractometer. The instrument serves to determine the crystalline structures of materials using neutron radiation, analogous to X-ray techniques. It is named after the Australian monotreme echidna, as the instrument's array of spiny peaks looks like an echidna.
It operates with thermal neutrons. One of the main features is the array of 128 collimators and position-sensitive detectors for rapid data acquisition. ECHIDNA allows for structure determination, texture measurements and reciprocal-space mapping of single crystals in a wide variety of sample environments, serving the physics, chemistry, materials, minerals and earth-science communities. ECHIDNA is part of the Bragg Institute's suite of neutron scattering instruments.
- Neutron guide
- Primary collimator
- There are Söller collimators prior to the monochromator in order to reduce the divergence of the beam and to increase the angular resolution of the instrument. Since this is an intensity compromise, two items of 5' and 10', respectively, can be interchanged or fully removed by an automated mechanism. The collimators cover the full size of the beam delivered by the neutron guide.
- Secondary collimator
- Optionally a secondary collimator with 10' angular acceptance and 200 mm × 20 mm can be placed in the monochromatic beam between the monochromator and the sample, which again influences the resolution function of the instrument.
- Slit system
- Two automated sets of horizontal and vertical pairs of absorbing plates allow to cut down the size of the monochromatic beam prior to the secondary collimator and sample size. They remove unwanted neutrons and reduce the background near the detector. In addition, they allow selection of the sample position to be studied.
- Beam monitor
- Sample stage
- The sample is supported by a heavy load goniometer consisting of a 360° vertical omega rotation axis, x-y translation tables and a chi-phi cross tilt stage of ±20° range. It can hold a few hundred kilograms in order to support heavier sample environments, such as cryostats, furnaces, magnets, load frames, reaction chambers and others. A typical powder sample is filled into vanadium cans which give little unstructured background. The mentioned sample environment allows measurement of changes in the sample as a function of external parameters, like temperature, pressure, magnetic field, etc. The goniometer stage is redundant for most powder diffraction measurements, but will be important for single crystal and texture measurements, where the orientation of the sample plays a role.
- Detector collimators
- A set of 128 detectors, each equipped with a 5' collimator in front, is arranged in a 160° sector focused on the sample. The collimators sort the scattered radiation into the well-defined ranges of 128 angular positions. The whole collimator and detector setup is mounted on a common table which is scanned in finer steps around the sample; the measurements are then combined into a continuous diffraction pattern.
- Detector tubes
- The 128 linear position-sensitive 3He gas detector tubes cover the full opening height of 300 mm behind the collimators. They determine the position of a neutron event by charge division over the resistive anode towards each end of the detector (a simplified sketch of this position estimate is given below). Overall and local count rates lie in the range of several tens of kilohertz.
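To illustrate the charge-division readout mentioned above, here is a minimal sketch (an illustration only, not ANSTO's actual data-acquisition code): the charge collected at each end of the resistive anode divides in proportion to the distance between the neutron event and that end, so the ratio of the two charges gives the event position.

```python
# Illustrative charge-division position estimate for a position-sensitive tube.
TUBE_LENGTH_MM = 300.0  # active height quoted for the ECHIDNA detector tubes

def event_position(charge_top: float, charge_bottom: float) -> float:
    """Estimate the event height (mm from the bottom of the tube)."""
    total = charge_top + charge_bottom
    if total <= 0.0:
        raise ValueError("no charge collected")
    # The nearer the event is to the top contact, the larger the share of the
    # charge collected there, so the fraction maps directly to a position.
    return TUBE_LENGTH_MM * charge_top / total

# Example: 70% of the charge reaches the top contact -> event ~210 mm up the tube.
print(event_position(0.7, 0.3))
```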
PLATYPUS is a time-of-flight reflectometer built on the cold neutron source. The instrument serves to determine the structure of interfaces using highly collimated neutron beams. These beams are shone on to the surface at low angles (typically less than 2 degrees) and the intensity of the reflected radiation is measured as a function of angle of incidence.
It operates using cold neutrons with a wavelength band of 0.2–2.0 nm. Although up to three different angles of incidence are required for each reflectivity curve, the time-of-flight nature means that timescales of kinetic processes are accessible. By analysing the reflected signal one builds a picture of the chemical structure of the interface. This instrument can be used for examining biomembranes, lipid bilayers, magnetism, adsorbed surfactant layers, etc.
It is named after Ornithorhynchus anatinus, the platypus, a semi-aquatic mammal native to Australia.
WOMBAT is a high-intensity neutron powder diffractometer. The instrument serves to determine the crystalline structures of materials using neutron radiation analogous to X-ray techniques. It is named after the wombat, a marsupial indigenous to Australia.
It will operate with thermal neutrons. It has been designed for the highest flux and data-acquisition speed in order to deliver time-resolved diffraction patterns in a fraction of a second. WOMBAT will concentrate on in-situ studies and time-critical investigations, such as structure determination, texture measurements and reciprocal-space mapping of single crystals in a wide variety of sample environments, serving the physics, chemistry, materials, minerals and earth-science communities.
KOWARI is a neutron residual-stress diffractometer. Strain scanning using thermal neutrons is a powder diffraction technique that probes the change of atomic spacing in a polycrystalline block of material due to internal or external stress. It is named after the kowari, an Australian marsupial.
It provides a non-destructive diagnostic tool to optimise, for example, post-weld heat treatment (PWHT, similar to tempering) of welded structures. Tensile stresses, for example, drive crack growth in engineering components, while compressive stresses inhibit crack growth (for example in cold-expanded holes subject to fatigue cycling). Life-extension strategies have high economic impact, and strain scanning provides the stresses needed to calculate remaining life as well as a non-destructive means to monitor the condition of components. One of the main features is the sample table that will allow large engineering components to be examined while orienting and positioning them very accurately.
- TAIPAN - Thermal 3-Axis Spectrometer
- KOALA - Laue Diffractometer
- QUOKKA - Small-Angle Neutron Scattering
- PELICAN - Cold-Neutron Time-of-Flight Spectrometer
- SIKA - Cold 3-Axis Spectrometer
- KOOKABURRA - Ultra-Small-Angle Neutron Scattering (USANS)
- DINGO - Neutron Radiography, Tomography and Imaging
July 2007 shutdown
Following the discovery of loose fuel plates during a routine inspection, ANSTO announced on 27 July 2007 that the reactor would be shut down for 8 weeks to fix the fuel plates and a minor fault causing ordinary (light) water to seep into the reactor's heavy water. In the end, the shutdown lasted 10 months. The supply of radiopharmaceuticals was rationed, causing the postponement of some treatments for patients. OPAL returned to full operational power on 23 May 2008, following approval by the nuclear regulator, ARPANSA to use a modified fuel design.
- High Flux Australian Reactor
- Research reactors
- Spallation Neutron Source
- Neutron scattering
- Nuclear medicine
- "PM Opens Australia’s New Nuclear Reactor" (PDF) (Press release). ANSTO. 20 April 2007. Retrieved 2009-07-03.
- "Sydney Opal reactor at full power" (Press release). INVAP. 10 November 2006. Retrieved 2009-07-03.
- "The OPAL reactor already has received seven awards in Australia" (Press release). INVAP. 14 November 2006. Retrieved 2009-07-03.
- Liss, L.; Hunter, B.; Hagen, M.; Noakes, T.; Kennedy, S. (2006). "Echidna—the new high-resolution powder diffractometer being built at OPAL" (PDF). Physica B. 385-386: 1010. Bibcode:2006PhyB..385.1010L. doi:10.1016/j.physb.2006.05.322.
- "Sydney nuclear reactor to shut down". ABC News. 27 July 2007. Retrieved 2009-07-03.
- "Reactor to shut down for about eight weeks" (PDF) (Press release). ANSTO. 27 July 2007. Retrieved 2007-10-25.
- Richard Macey (22 February 2008). "Idle reactor keeps sick waiting for treatment". Sydney Morning Herald. Retrieved 2009-07-03.
- Bragg Institute
- INVAP Nuclear Division designs
- Announcement of the new reactor's name
- Nuclear reactor to reopen after six-month shutdown
- Govt expects nuclear reactor to restart this month
- Reactor ready for second try
posted by mel
A piece of solid carbon dioxide, with a mass of 6.2 g, is placed in a 4.0 L otherwise empty container at 21°C.
a) What is the pressure in the container after all the carbon dioxide vaporizes?
atm (0.86 atm)
(b) If 6.2 g solid carbon dioxide were placed in the same container but it already contained air at 740 torr, what would be the partial pressure of carbon dioxide?
(c) What would be the total pressure in the container after the carbon dioxide vaporizes?
a) I found 0.86 atm
b) I'm assuming the CO2 vaporizes completely. Won't the partial pressure of the CO2 be the same as in (a)? Dalton's Law tells us that the total pressure is the sum of the partial pressures of each gas, and each gas will exert a pressure as if the other gas were not present. Check my thinking. OR, try PV = nRT to calculate pCO2 (done in part a), then calculate the pressure of air (740/760) and add them together. Then determine mols CO2 and mols air, add them together and use PV = nRT to calculate total pressure. That should be the same total as you obtained when you added the partial pressure of each.
c) explained in (b)
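A worked numerical version of the reasoning above (not part of the original thread; a quick sketch using the ideal gas law and Dalton's law):

```python
# Ideal gas law PV = nRT plus Dalton's law for the CO2-in-a-container problem.
R = 0.08206            # L·atm/(mol·K)
T = 21 + 273.15        # K
V = 4.0                # L
n_co2 = 6.2 / 44.01    # mol of CO2 (molar mass ~44.01 g/mol)

p_co2 = n_co2 * R * T / V   # parts (a) and (b): partial pressure of the CO2
p_air = 740 / 760           # air already present, converted from torr to atm
p_total = p_co2 + p_air     # part (c): total pressure by Dalton's law

print(f"P(CO2)   = {p_co2:.2f} atm")    # ~0.85 atm, in line with the ~0.86 atm above
print(f"P(air)   = {p_air:.2f} atm")    # ~0.97 atm
print(f"P(total) = {p_total:.2f} atm")  # ~1.82 atm
```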
By Darrin Qualman, Darrin Qualman blog
Humans and our livestock now make up 97 percent of all animals on land. Wild animals (mammals and birds) have been reduced to a mere remnant: just 3 percent. This is based on mass. Humans and our domesticated animals outweigh all terrestrial wild mammals and birds 32-to-1.
Genetics of Host Plant Adaptation in Delphacid Planthoppers
Delphacid planthoppers, by any measure, are very successful plant-feeding insects. Indications of their success include a worldwide distribution (O’Brien and Wilson 1985), an extensive range of habitats occupied (Denno and Roderick 1990), and status as major agricultural pests (Wilson and O’Brien 1987). Characteristic of most planthoppers is a monophagous feeding habit and a close association with their host plants for feeding, mating, oviposition, and substrate-borne acoustic communication (Wilson et al. Chapter 1). Thus, understanding the factors which promote or constrain adaptation to a particular host plant species or variety is central to explaining patterns of host plant use (e.g., specialization) and deterring their continuing status as major agricultural pests.
Keywords: Host Plant; Rice Variety; Host Plant Species; Host Type; Offspring Performance
Editor's Note: This article was originally published on 28 September 2016. We are republishing it now that Elon Musk has presented a study on making human life interplanetary in the journal New Space.
Elon Musk has unveiled plans that will let anyone who wants to go to Mars buy a ticket for about the cost of a home. The schedule is tight, there are few launch windows, and things have to proceed like clockwork if this is to happen. Humans going to Mars is not about a symbolic display of technological progress, or a mission of scientific research. This is a question of the survival of the species. Humans cannot stay on Earth indefinitely, as that would eventually lead to an extinction event.
The SpaceX plan charts a course by which the cost of going to Mars is reduced considerably. Right now, there is no real way for humans to go to Mars. Using traditional methods, the cost of a ticket could at best be brought down to about $10 billion per person, which is still far too much for most people. The idea is to use innovative methods to significantly reduce the cost of the mission. That means reducing the cost of travelling to Mars by five million per cent.
The major portion of the cost will be saved by reusing the spaceship that takes people to Mars. The capacity of the colonial ship has to be 100 people or more for the plan to be feasible. There were even plans to pack 200 people onto a ship, so it could get crowded on the journey. There are solar panels mounted on retractable fins on the ships, which will be used for power on the journey to Mars. The colonial ship will be equipped with the new Raptor engines and pushed into orbit on a booster rocket. The ship along with the booster is one of the largest launch vehicles to date, with a gross lift-off mass of 10,500 tons.
The only comparable rocket is the Saturn V with a gross lift-off mass of 3,039 tons. The vehicle is 122 meters in height with a diameter of 12 meters. Musk wants to build even bigger vehicles with more capacity. The main reason for this is that there are very few launch windows, and the more people and cargo that can be loaded on board in each mission to mars, the cheaper it becomes per ton to ship it.
The booster rocket is also equipped with raptor engines, and is meant to be reusable. The booster will take the spaceship to Earth orbit, where it will be parked while the booster returns to the ground with a vertical landing. Then, a fuelling tank is loaded onto the booster, and it takes off again. The fuelling tank fuels the colonial ship in Earth orbit. This will reduce the cost of repeatedly launching the ship into space. The Ship then travels to Mars, lands, takes off and comes back to Earth.
The first ship will carry a propellant plant, which will be expanded further over subsequent ships. This plant is meant to produce the propellant needed for spaceflight on Mars itself. Mars has a large amount of water ice on the surface, and carbon dioxide in the atmosphere. These natural resources will be harvested and chemically transformed into a methane- and oxygen-based propellant. This propellant has many desirable properties, including how easily it can be transferred between craft, how cheap it is, the feasibility of producing it on Mars and support for large vehicle sizes.
The design of the whole system means that the cost of taking a person to Mars will be less than $200,000, and eventually less than $100,000. Elon Musk plans to reroute the earnings from satellite launches, and from missions that take astronauts to the International Space Station, into the Mars colonisation mission. Musk said that the main reason he is accumulating assets is to do everything he can possibly do to make human life multi-planetary, a statement that received an ovation. As people start believing this is possible, Musk believes that support in the form of funding will snowball.
There will be Red Dragon missions to Mars. These are pilot spacecraft that will survey Mars for the colonisation mission. This will include finding locations where carbon dioxide can be collected and water can be mined. Potential landing sites for the colonial ships will be identified, along with any hazards that may exist on the surface. A Dragon 2 mission will be sent in 2018, and another one in 2020. Musk wants to send a flight on every launch window to Mars, similar to a scheduled train timetable, where there is always a train waiting. The Dragon 2 is designed to be a propulsive lander, the kind of spaceship that can hop between planets and land on any surface.
A permanent human settlement on Mars would start with a sustainable city, an ecosystem within itself. One of the key problem areas is providing power to the city, which could be done through an array of solar panels. A healthy, self-sustaining population is a minimum of 1,000,000 people. Getting 1,000,000 people to Mars would require between 5,000 and 10,000 missions, with at least 100 people on board each ship.
SpaceX's ambitions do not stop at Mars, though. The Mars Colonial Transporter was recently renamed the Interplanetary Transport System. Attractive targets are the moons of the gas giants Saturn and Jupiter, which are known to have subsurface oceans. A day before Musk gave his presentation on plans to colonise Mars, NASA announced that the Hubble Space Telescope had found evidence of plumes of water on Europa. Establishing permanent human settlements on these resource-rich moons could provide a base for commercial and scientific activities in the outer solar system, including asteroid mining.
History and classification
The first illustration of an amoeboid, from Roesel von Rosenhof's Insecten-Belustigung
The earliest record of an organism resembling Amoeba was produced in 1755 by August Johann Rösel von Rosenhof, who named his discovery "der kleine Proteus" ("the little Proteus"), after Proteus, the shape-shifting sea-god of Greek mythology. While Rösel's illustrations show a creature similar in appearance to the one now known as Amoeba proteus, his "little Proteus" cannot be identified confidently with any modern species.
The term "Proteus animalcule" remained in use throughout the 18th and 19th centuries, as an informal name for any large, free-living amoeboid.
In 1758, apparently without seeing Rösel's "Proteus" for himself, Carl Linnaeus included the organism in his own system of classification, under the name Volvox chaos. However, because the name Volvox had already been applied to a genus of flagellate algae, he later changed the name to Chaos chaos. In 1786, the Danish Naturalist Otto Müller described and illustrated a species he called Proteus diffluens, which was probably the organism known today as Amoeba proteus.
The genus Amiba, from the Greek amoibè (ἀμοιβή), meaning "change", was erected in 1822 by Bory de Saint-Vincent. In 1830, the German naturalist C. G. Ehrenberg adopted this genus in his own classification of microscopic creatures, but changed the spelling to "Amoeba."
“On September 21, 2014, NASA’s Mars Atmosphere and Volatile EvolutioN mission, or MAVEN, went into orbit around the Red Planet. Its goal: to understand how a changing atmosphere transformed Mars from a warm, wet environment in its youth to the desert world that we see today. Building such a mission and sending it to Mars is a hugely complex task, requiring the close coordination of hundreds of individuals around the country. In this video, several of the team members who made the mission possible share their experiences of working on MAVEN.
NASA’s Goddard Space Flight Center is home to the nation’s largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study Earth, the sun, our solar system and the universe. Named for American rocketry pioneer Dr. Robert H. Goddard, the center was established in 1959 as NASA’s first space flight complex. Goddard and its several facilities are critical in carrying out NASA’s missions of space exploration and scientific discovery. Watch for the latest in NASA’s research into planetary science, astrophysics, Earth observing, and solar science.”
Edited By: Manfred Gottwald and Heinrich Bovensmann
250 pages, 50 col illus
SCIAMACHY, the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY, is a passive sensor for exploring the Earth's atmosphere. It observes absorption spectra of molecules from the UV (214 nm) to the short-wave infrared wavelength range (2386 nm) and derives the atmospheric composition from these measurements.
This book is a comprehensive summary describing the entire SCIAMACHY mission - from the very first ideas to the current results. It illustrates how the measurements are performed, how the trace gas concentrations are derived from the measured spectra and how the unique data sets are used to improve our understanding of the changing Earth's atmosphere.
1. SCIAMACHY - The Need for Atmospheric Research from Space;
2. ENVISAT - SCIAMACHY's Host;
3. The Instrument;
4. Instrument Operations;
5. Calibration and Monitoring;
6. SCIAMACHY In-Orbit Operations and Performance;
7. From Radiation Fields to Atmospheric Concentrations - Retrieval of Geophysical parameters;
8. Data Processing and Products;
10. SCIAMACHY's View of the Changing Earth's Environment;
Essential reading to understand patterns for parallel programming
Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers.
Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managing and exploiting existing design knowledge for designing parallel programs. Moreover, such approaches enhance not only build-time properties of parallel systems, but also, and particularly, their run-time properties.
Features known solutions in concurrent and distributed programming, applied to the development of parallel programs
Provides architectural patterns that describe how to divide an algorithm and/or data to find a suitable partition and link it with a programming structure that allows for such a division
Presents an architectural point of view and explains the development of parallel software
Patterns for Parallel Software Design will give you the skills you need to develop parallel software.
Attempted to create a Philosopher’s Stone. He heated residues from boiled urine, and a liquid dropped out and burst into flames.
Hydrogen was observed and collected long before it was recognised as a unique gas; Robert Boyle produced it in 1671 by dissolving iron in diluted hydrochloric acid.
It was first recognized as an element in the second half of the 18th century.
The first modern textbook about chemistry. It contained a list of "simple substances" that Lavoisier believed could not be broken down further, which included oxygen, nitrogen, hydrogen, phosphorus, mercury, zinc and sulfur, and which formed the basis for the modern list of elements.
Scientists began to see patterns in the characteristics of the elements.
Formulated one of the earliest attempts to classify the elements.
'Vis tellurique' (telluric screw), a three-dimensional arrangement of the elements constituting an early form of periodic classification.
Arranged 56 elements into 11 groups, based on their characteristics.
Produced several periodic tables. His first table contained just 28 elements, organised by how many other atoms they can combine with. Unfortunately for Meyer, his work wasn’t published until 1870, a year after Mendeleev’s periodic table had been published. He was the first person to recognise the periodic trends in the properties of elements, which he saw by plotting the atomic volume of an element against its atomic weight.
Discovery started the development of the periodic table, arranging chemical elements by atomic mass. He did so by writing the properties of the elements on pieces of card and arranging and rearranging them until he realised that, by putting them in order of increasing atomic weight, certain types of element regularly occurred. He predicted the discovery of other elements, and left spaces open in his periodic table for them.
Three types of radiation were identified: alpha, beta and gamma rays.
Noble gases added to the periodic table as group 0.
Electrons orbit the nucleus of an atom.
Electrons move around a nucleus in discrete energy levels called orbitals. Radiation is emitted during movement from one orbital to another.
Identified protons in the atomic nucleus.
Provided atomic numbers, based on the number of protons in the nucleus, rather than on atomic mass.
Neutrons were discovered and isotopes identified; this completed the basis for the modern periodic table.
First split the atom by bombarding lithium in a particle accelerator, changing it into two helium nuclei.
Identified the lanthanides and actinides (including the transuranium elements with atomic number >92), which are usually placed below the periodic table.
Jules Verne mentioned OTEC in Twenty Thousand Leagues Under the Sea
Jacques Arsene d'Arsonval, a French physicist, proposed tapping the thermal energy of the ocean.
D'Arsonval's student, Georges Claude, built the first OTEC plant, in Matanzas, Cuba in 1930
J. Hilbert Anderson and James H. Anderson, Jr. patented their new "closed cycle" design
Tokyo Electric Power Company's operational 120 kW closed-cycle OTEC plant on the island of Nauru.
The U.S. established the Natural Energy Laboratory of Hawaii Authority (NELHA) at Hawaii, a leading test facility for OTEC technology
In 2002, India tested a 1 MW floating OTEC pilot plant near Tamil Nadu.
In 2011, Makai Ocean Engineering completed a heat exchanger test facility at NELHA, Hawaii.
OTEC plant built by Makai Ocean Engineering went operational in Hawaii in August 2015
Saga University with various Japanese industries completed the installation of a new OTEC plant.
Research for open-cycle started
For Immediate Release, April 13, 2016
Contact: Abel Valdivia, (510) 844-7103, email@example.com
West Coast Water Quality Standards Not Strong Enough to Fight Ocean Acidification
OAKLAND, Calif. — A new scientific paper published today in the journal Ocean & Coastal Management concludes that current water quality criteria are inadequate to address ocean acidification on the West Coast. This paper and a related report published last week call for major changes in how California, Oregon and Washington deal with ocean acidification triggered by the high carbon emissions that are also causing climate change.
Water quality standards are the management foundation of the Clean Water Act, and give water quality managers the tools to maintain a water body in an ecologically functional condition. Every two years, each state water quality regulatory agency must analyze and list water bodies that are impaired by pollution and failing to meet their water quality standards. However, these determinations are challenging when addressing acidified waters because harmful biological damage is known to occur well within current pH standards.
“The West Coast is on the front line in the fight against climate change and ocean acidification. We need water quality standards that match the magnitude and urgency of the problem, not outdated versions designed more than 40 years ago,” said Dr. Abel Valdivia, a marine scientist with the Center for Biological Diversity who was not involved in the study. “Ocean acidification caused by increasing human carbon dioxide emissions is a major issue, and we see substantial negative effects on marine calcifying species even within pH criteria that are currently considered normal. The bottom line is we need new, better and stronger standards and we need them now.”
The new study describes scientific difficulties in assessing water impairment associated with ocean acidification using existing data. Current coastal and even estuarine pH fluctuations fall well within an allowable criteria range that is considered “normal,” even though they are known to cause substantial negative biological effects.
There is strong scientific evidence that many biological communities are declining due to ocean acidification. Today’s study identifies two marine species, oysters and pelagic sea snails called pteropods, that can be used to design new criteria because of their vulnerability to corrosive waters. Over the past decade, both groups have already shown negative effects due to ocean acidification.
Last week the West Coast Ocean Acidification and Hypoxia Panel, a bi-national team of 20 leading scientists in ocean acidification from California, Oregon, Washington, and British Columbia, published a report providing recommendations and actions that West Coast states can take now to address ocean acidification locally. Among the main recommendations was to revise and adopt water quality criteria relevant to ocean acidification.
Recently California Assembly member Das Williams (D-Santa Barbara) proposed a bill that would require the Ocean Protection Council to conduct research and make recommendations for further legislative and executive action on ocean acidification. However, in light of the conclusions of the paper released today, and the recommendations of the West Coast Ocean Acidification and Hypoxia Panel, the Center and other organizations are advocating that the bill be strengthened to require adoption of water quality standards more relevant to ocean acidification. The bill has a hearing before the Natural Resources Committee on Monday, April 18 at 1:30 p.m.
“If we want to preserve California’s amazing coast, we need to take action now. Fortunately, we can do it with the tools we have and within the current legal framework but only if we adopt standards sufficient to protect marine communities from the devastating effects of ocean acidification. The cost of inaction will be tremendous and will only increase over time,” Valdivia said. “West Coast states can take the lead in fighting ocean acidification now, we just need political will.” | <urn:uuid:a8cbff6b-e0da-411c-9a34-6a0541f61325> | 2.984375 | 779 | News (Org.) | Science & Tech. | 17.968408 | 95,626,683 |
The Cavendish experiment, performed in 1797–1798 by British scientist Henry Cavendish, was the first experiment to measure the force of gravity between masses in the laboratory and the first to yield accurate values for the gravitational constant. Because of the unit conventions then in use, the gravitational constant does not appear explicitly in Cavendish's work. Instead, the result was originally expressed as the specific gravity of the Earth, or equivalently the mass of the Earth. His experiment gave the first accurate values for these geophysical constants.
The experiment was devised sometime before 1783 by geologist John Michell, who constructed a torsion balance apparatus for it. However, Michell died in 1793 without completing the work. After his death the apparatus passed to Francis John Hyde Wollaston and then to Henry Cavendish, who rebuilt the apparatus but kept close to Michell's original plan. Cavendish then carried out a series of measurements with the equipment and reported his results in the Philosophical Transactions of the Royal Society in 1798.
The apparatus constructed by Cavendish was a torsion balance made of a six-foot (1.8 m) wooden rod suspended from a wire, with a 2-inch (51 mm) diameter 1.61-pound (0.73 kg) lead sphere attached to each end. Two 12-inch (300 mm) 348-pound (158 kg) lead balls were located near the smaller balls, about 9 inches (230 mm) away, and held in place with a separate suspension system. The experiment measured the faint gravitational attraction between the small balls and the larger ones.
The two large balls were positioned on alternate sides of the horizontal wooden arm of the balance. Their mutual attraction to the small balls caused the arm to rotate, twisting the wire supporting the arm. The arm stopped rotating when it reached an angle where the twisting force of the wire balanced the combined gravitational force of attraction between the large and small lead spheres. By measuring the angle of the rod and knowing the twisting force (torque) of the wire for a given angle, Cavendish was able to determine the force between the pairs of masses. Since the gravitational force of the Earth on the small ball could be measured directly by weighing it, the ratio of the two forces allowed the density of the Earth to be calculated, using Newton's law of gravitation.
Cavendish found that the Earth's density was 5.448 ± 0.033 times that of water (due to a simple arithmetic error, found in 1821 by Francis Baily, the erroneous value 5.480 ± 0.038 appears in his paper).
To find the wire's torsion coefficient, the torque exerted by the wire for a given angle of twist, Cavendish timed the natural oscillation period of the balance rod as it rotated slowly clockwise and counterclockwise against the twisting of the wire. The period was about 20 minutes. The torsion coefficient could be calculated from this and the mass and dimensions of the balance. Actually, the rod was never at rest; Cavendish had to measure the deflection angle of the rod while it was oscillating.
Cavendish's equipment was remarkably sensitive for its time. The force involved in twisting the torsion balance was very small, 1.74×10−7 N, about 1⁄50,000,000 of the weight of the small balls. To prevent air currents and temperature changes from interfering with the measurements, Cavendish placed the entire apparatus in a wooden box about 2 feet (0.61 m) thick, 10 feet (3.0 m) tall, and 10 feet (3.0 m) wide, all in a closed shed on his estate. Through two holes in the walls of the shed, Cavendish used telescopes to observe the movement of the torsion balance's horizontal rod. The motion of the rod was only about 0.16 inches (4.1 mm). Cavendish was able to measure this small deflection to an accuracy of better than one hundredth of an inch using vernier scales on the ends of the rod. Cavendish's accuracy was not exceeded until C. V. Boys's experiment in 1895. In time, Michell's torsion balance became the dominant technique for measuring the gravitational constant (G) and most contemporary measurements still use variations of it.
Cavendish's result was also the first evidence for a planetary core made of metal. The result of 5.4 g·cm−3 is close to 80% of the density of liquid iron, and 80% higher than the density of the Earth's outer crust, suggesting the existence of a dense iron core.
Whether Cavendish determined G
The formulation of Newtonian gravity in terms of a gravitational constant did not become standard until long after Cavendish's time. Indeed, one of the first references to G is in 1873, 75 years after Cavendish's work.
Cavendish expressed his result in terms of the density of the Earth; he referred to his experiment in correspondence as 'weighing the world'. Later authors reformulated his results in modern terms.
After converting to SI units, Cavendish's value for the Earth's density, 5.448 g cm−3, gives
- G = 6.74×10−11 m3 kg−1 s−2
Physicists, however, often use units where the gravitational constant takes a different form. The Gaussian gravitational constant used in space dynamics is a defined constant and the Cavendish experiment can be considered as a measurement of this constant. In Cavendish's time, physicists used the same units for mass and weight, in effect taking g as a standard acceleration. Then, since Rearth was known, ρearth played the role of an inverse gravitational constant. The density of the Earth was hence a much sought-after quantity at the time, and there had been earlier attempts to measure it, such as the Schiehallion experiment in 1774.
Derivation of G and the Earth's mass
The following is not the method Cavendish used, but shows how modern physicists would calculate the results from his experiment. From Hooke's law, the torque on the torsion wire is proportional to the deflection angle θ of the balance. The torque is κθ, where κ is the torsion coefficient of the wire. However, the torque can also be written as a product of the attractive forces between the balls and the distance to the suspension wire. Since there are two pairs of balls, each experiencing force F at a distance L/2 from the axis of the balance, the torque is LF. Equating the two formulas for torque gives the following:
κθ = LF
Substituting Newton's law of gravitation for the force between each pair of balls, F = GmM/r², into the first equation above gives
κθ = GLmM/r²     (1)
To find the torsion coefficient κ, the measured oscillation period T of the balance is used. Assuming the mass of the torsion beam itself is negligible, the moment of inertia of the balance is just due to the small balls:
I = 2m(L/2)² = mL²/2,     so that     T = 2π√(I/κ) = 2π√(mL²/2κ)
Solving this for κ, substituting into (1), and rearranging for G, the result is:
G = 2π²Lr²θ / (MT²)
Once G has been found, the attraction of an object at the Earth's surface to the Earth itself, mg = GmMearth/Rearth², can be used to calculate the Earth's mass and density:
Mearth = gRearth²/G     and     ρearth = Mearth / (4⁄3 πRearth³)
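As an illustration of these last relations, the short calculation below converts Cavendish's density figure into a value of G and then recovers the Earth's mass and mean density from a modern value of G. The figures assumed for g, the Earth's radius and the modern G are standard reference values, not numbers quoted above.

```python
import math

# Assumed modern reference values (not given in the article):
g = 9.81            # surface gravity, m s^-2
R_earth = 6.371e6   # mean radius of the Earth, m

# Cavendish's result: mean density of the Earth, converted to SI units.
rho_earth = 5448.0  # kg m^-3  (5.448 g cm^-3)

# From m*g = G*m*M_earth/R_earth^2 and M_earth = rho * (4/3) * pi * R^3,
# the measured density fixes G directly:  G = 3g / (4 * pi * R_earth * rho_earth)
G = 3 * g / (4 * math.pi * R_earth * rho_earth)
print(f"G inferred from Cavendish's density: {G:.3e} m^3 kg^-1 s^-2")  # ~6.7e-11

# Going the other way with a modern value of G gives the Earth's mass and density.
G_modern = 6.674e-11
M_earth = g * R_earth**2 / G_modern
rho = M_earth / ((4.0 / 3.0) * math.pi * R_earth**3)
print(f"Earth mass:    {M_earth:.3e} kg")   # ~5.97e24 kg
print(f"Earth density: {rho:.0f} kg m^-3")  # ~5500 kg m^-3, i.e. about 5.5 times water
```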
Definitions of terms
| Symbol | Unit | Definition |
|---|---|---|
| θ | radians | Deflection of torsion balance beam from its rest position |
| F | N | Gravitational force between masses M and m |
| G | m3 kg−1 s−2 | Gravitational constant |
| m | kg | Mass of small lead ball |
| M | kg | Mass of large lead ball |
| r | m | Distance between centers of large and small balls when balance is deflected |
| L | m | Length of torsion balance beam between centers of small balls |
| κ | N m rad−1 | Torsion coefficient of suspending wire |
| I | kg m2 | Moment of inertia of torsion balance beam |
| T | s | Period of oscillation of torsion balance |
| g | m s−2 | Acceleration of gravity at the surface of the Earth |
| Mearth | kg | Mass of the Earth |
| Rearth | m | Radius of the Earth |
| ρearth | kg m−3 | Density of the Earth |
- Boys 1894 p. 355
- Poynting, John Henry (1911). "Gravitation". In Chisholm, Hugh. Encyclopædia Britannica. 12 (11th ed.). Cambridge University Press. p. 385. 'The aim [of experiments like Cavendish's] may be regarded either as the determination of the mass of the Earth,...conveniently expressed...as its "mean density", or as the determination of the "gravitation constant", G'. Cavendish's experiment is generally described today as a measurement of G.' (Clotfelter 1987 p. 210).
- Many sources incorrectly state that this was the first measurement of G (or the Earth's density); for instance: Feynman, Richard P. (1963). "7. The Theory of Gravitation". mainly mechanics, radiation and heat. The Feynman lectures on physics. Volume I. Pasadena, California: California Institute of Technology (published 2013). 7–6 Cavendish’s experiment. ISBN 9780465025626. Retrieved December 9, 2013. There were previous measurements, chiefly by Bouguer (1740) and Maskelyne (1774), but they were very inaccurate (Poynting 1894)(Encyclopædia Britannica 1910).
- Clotfelter 1987, p. 210
- McCormmach & Jungnickel 1996, p.336: A 1783 letter from Cavendish to Michell contains '...the earliest mention of weighing the world'. Not clear whether 'earliest mention' refers to Cavendish or Michell.
- Cavendish 1798, p. 59 Cavendish gives full credit to Michell for devising the experiment
- Cavendish, H. 'Experiments to determine the Density of the Earth', Philosophical Transactions of the Royal Society of London, (part II) 88 p.469-526 (21 June 1798), reprinted in Cavendish 1798
- Cavendish 1798, p.59
- Poynting 1894, p.45
- Chisholm, Hugh, ed. (1911). "Cavendish, Henry". Encyclopædia Britannica. 5 (11th ed.). Cambridge University Press. pp. 580–581.
- Cavendish 1798, p.64
- Boys 1894 p.357
- Cavendish 1798 p. 60
- Cavendish 1798, p. 99, Result table, (scale graduations = 1⁄20 in ≈ 1.3 mm) The total deflection shown in most trials was twice this since he compared the deflection with large balls on opposite sides of the balance beam.
- Cavendish 1798, p.63
- McCormmach & Jungnickel 1996, p.341
- see e.g. Hrvoje Tkalčić, The Earth's Inner Core, Cambridge University Press (2017), p. 2.
- Cornu, A.; Baille, J. B. (1873). "Détermination nouvelle de la constante de l'attraction et de la densité moyenne de la Terre" [New Determination of the Constant of Attraction and the Average Density of Earth]. C. R. Acad. Sci. (in French). Paris. 76: 954–958.
- Boys 1894, p.330 In this lecture before the Royal Society, Boys introduces G and argues for its acceptance
- Poynting 1894, p.4
- MacKenzie 1900, p.vi
- Lee, Jennifer Lauren (November 16, 2016). "Big G Redux: Solving the Mystery of a Perplexing Result". NIST.
- Clotfelter 1987
- McCormmach & Jungnickel 1996, p.337
- Hodges 1999
- Lally 1999
- Halliday, David; Resnick, Robert (1993). Fundamentals of Physics. John Wiley & Sons. p. 418. ISBN 978-0-471-14731-2. Retrieved 2013-12-30. 'The apparatus used in 1798 by Henry Cavendish to measure the gravitational constant'
- Feynman, Richard P. (1963). "Lectures on Physics, Vol.1". Addison-Wesley: 6–7. ISBN 0-201-02116-1. 'Cavendish claimed he was weighing the Earth, but what he was measuring was the coefficient G...'
- Feynman, Richard P. (1967). "The Character of Physical Law". MIT Press: 28. ISBN 0-262-56003-8. 'Cavendish was able to measure the force, the two masses, and the distance, and thus determine the gravitational constant G.'
- "Cavendish Experiment, Harvard Lecture Demonstrations, Harvard Univ". Retrieved 2013-12-30.. '[the torsion balance was]...modified by Cavendish to measure G.'
- Shectman, Jonathan (2003). Groundbreaking Experiments, Inventions, and Discoveries of the 18th Century. Greenwood. pp. xlvii. ISBN 978-0-313-32015-6. Retrieved 2013-12-30. 'Cavendish calculates the gravitational constant, which in turn gives him the mass of the earth...'
- Poynting 1894, p.41
- Clotfelter 1987 p.212 explains Cavendish's original method of calculation
- Boys, C. Vernon (1894). "On the Newtonian constant of gravitation". Nature. 50 (1292): 330–4. Bibcode:1894Natur..50..330.. doi:10.1038/050330a0. Retrieved 2013-12-30.
- Cavendish, Henry (1798). "Experiments to Determine the Density of the Earth". In MacKenzie, A. S. Scientific Memoirs Vol.9: The Laws of Gravitation. American Book Co. (published 1900). pp. 59–105. Retrieved 2013-12-30. Online copy of Cavendish's 1798 paper, and other early measurements of gravitational constant.
- Clotfelter, B. E. (1987). "The Cavendish experiment as Cavendish knew it". American Journal of Physics. 55 (3): 210–213. Bibcode:1987AmJPh..55..210C. doi:10.1119/1.15214. Establishes that Cavendish didn't determine G.
- Falconer, Isobel (1999). "Henry Cavendish: the man and the measurement". Measurement Science and Technology. 10 (6): 470–477. Bibcode:1999MeScT..10..470F. doi:10.1088/0957-0233/10/6/310.
- "Gravitation Constant and Mean Density of the Earth". Encyclopædia Britannica, 11th Ed. 12. The Encyclopædia Britannica Co. 1910. pp. 385–389. Retrieved 2013-12-30.
- Hodges, Laurent (1999). "The Michell-Cavendish Experiment, faculty website, Iowa State Univ". Retrieved 2013-12-30. Discusses Michell's contributions, and whether Cavendish determined G.
- Lally, Sean P. (1999). "Henry Cavendish and the Density of the Earth". The Physics Teacher. 37 (1): 34–37. Bibcode:1999PhTea..37...34L. doi:10.1119/1.880145.
- McCormmach, Russell; Jungnickel, Christa (1996). Cavendish. Philadelphia, Pennsylvania: American Philosophical Society. ISBN 0-87169-220-1. Retrieved 2013-12-30.
- Poynting, John H. (1894). The Mean Density of the Earth: An essay to which the Adams prize was adjudged in 1893. London: C. Griffin & Co. Retrieved 2013-12-30. Review of gravity measurements since 1740.
- Cavendish’s experiment in the Feynman Lectures on Physics
- Sideways Gravity in the Basement, The Citizen Scientist, July 1, 2005. Homebrew Cavendish experiment, showing calculation of results and precautions necessary to eliminate wind and electrostatic errors.
- "Big 'G'", Physics Central, retrieved Dec. 8, 2013. Experiment at Univ. of Washington to measure the gravitational constant using variation of Cavendish method.
- Eöt-Wash Group, Univ. of Washington. "The Controversy over Newton's Gravitational Constant". Archived from the original on 2016-03-04. Retrieved December 8, 2013.. Discusses current state of measurements of G.
- Model of Cavendish's torsion balance, retrieved Aug. 28, 2007, at Science Museum, London.
- Weighing the Earth - background and experiment | <urn:uuid:ec6de465-d979-4381-bd1d-3545e6d06a9f> | 3.84375 | 3,610 | Knowledge Article | Science & Tech. | 70.439298 | 95,626,704 |
The escalation in dust emissions — which may be due to the interplay of several factors, including increased windstorm frequency, drought cycles and changing land-use patterns — has implications both for the areas where the dust is first picked up by the winds and for the places where the dust is put back down.
This image shows a dust storm in Canyonlands National Park.
Credit: Jason Neff
"Dust storms cause a large-scale reorganization of nutrients on the surface of the Earth," said Janice Brahney, who led the study as a CU-Boulder doctoral student. "And we don't routinely monitor dust in most places, which means we don't have a good handle on how the material is moving, when it's moving and where it's going."
Based on anecdotal evidence, such as incidents of dust coating the snowpack in the southern Rockies and a seemingly greater number of dust storms noticed by Western residents, scientists have suspected that dust emissions were increasing. But because dust has not been routinely measured over long periods of time, it was difficult to say for sure.
"What we know is that there are a lot of dust storms, and if you ask people on the Western Slope of Colorado, or in Utah or Arizona, you'll often hear them say, 'Yeah, I grew up in this area, and I don't remember it ever being like this before,'" said CU-Boulder geological sciences Associate Professor Jason Neff, Brahney's adviser and a co-author of the paper. "So there is anecdotal evidence out there that things are changing, but no scientific data that can tell us whether or not that's true, at least for the recent past."
For the new study, recently published online in the journal Aeolian Research, the research team set out to determine if they could use calcium deposition as a proxy for dust measurements. Calcium can make its way into the atmosphere — before falling back to earth along with precipitation — through a number of avenues, including coal-fired power plants, forest fires, ocean spray and, key to this study, wind erosion of soils.
The amount of calcium dissolved in precipitation has long been measured by the National Atmospheric Deposition Program, or NADP, which first began recording the chemicals dissolved in precipitation in the late 1970s to better understand the phenomena of acid rain.
Brahney and her colleagues reviewed calcium deposition data from 175 NADP sites across the United States between 1994 and 2010, and they found that calcium deposition had increased at 116 of them. The sites with the greatest increases were clustered in the Northwest, the Midwest and the Intermountain West, with Colorado, Wyoming and Utah seeing especially large increases.
The scientists were able to determine that the increase was linked to dust erosion because none of the other possible sources of atmospheric calcium — including industrial emissions, forest fires or ocean spray — had increased during the 17-year period studied.
It's also likely that the calcium deposition record underrepresents the amount of dust that's being blown around, said Brahney, who is now a postdoctoral researcher at the University of British Columbia in Canada. That's because the NADP network only measures dust that has collided with water in the atmosphere before precipitating to earth — not dust that is simply moved by the wind. And not all dust contains the same amount of calcium.
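A minimal sketch of this kind of site-by-site trend screening is shown below; the data layout (a mapping from site ID to a series of annual calcium deposition values) and the simple least-squares slope test are illustrative assumptions, not the study's actual statistical method.

```python
import numpy as np

def count_increasing_sites(deposition_by_site):
    """deposition_by_site: dict mapping site ID -> list of annual calcium
    deposition values (e.g. kg/ha/yr). Returns the number of sites whose
    least-squares linear trend over the record is positive."""
    increasing = 0
    for site, series in deposition_by_site.items():
        years = np.arange(len(series))
        slope, _ = np.polyfit(years, series, 1)   # linear fit: slope per year
        if slope > 0:
            increasing += 1
    return increasing

# Tiny made-up example with three sites:
example = {
    "CO22": [0.8, 0.9, 1.1, 1.3, 1.6],   # rising deposition
    "WA19": [0.5, 0.5, 0.6, 0.7, 0.9],   # rising deposition
    "FL11": [1.2, 1.1, 1.0, 1.0, 0.9],   # falling deposition
}
print(count_increasing_sites(example))   # -> 2
```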
The increase in dust erosion matters, the researchers said, because it can impoverish the soil in the areas where dust is being lost. Wind tends to pick up the finer particles in the soils, and those are the same particles that have the most nutrients and can hold onto the most soil moisture, Brahney said.
Increasing amounts of dust in the atmosphere also can cause people living in the rural West a variety of problems, including poor air quality and low visibility. In extreme cases, dust storms have shut down freeways, creating problems for travelers.
The areas where the dust travels to are also affected, though the impacts are more mixed. When dust is blown onto an existing snowpack, as is often the case in the Rockies, the dark particles better absorb the sun's energy and cause the snowpack to melt more quickly. But the dust that's blown in also brings nutrients to alpine areas, and the calcium in dust can buffer the effects of acid rain.
In the future, researchers working in Neff's lab hope to get a more precise picture of dust movement by measuring the dust itself. In the last five years, large vacuum-like measuring instruments designed specifically to suck in dust emissions have been installed at sites between the canyon lands of Utah and the Front Range of the Rockies. Once scientists have enough data collected, they'll be able to look for trends in dust emissions without relying on proxies.
The study was funded by the National Science Foundation.
Jason Neff | EurekAlert!
Together with the site at Koeln, the DLR site at Oberpfaffenhofen is one of Germany's largest research centres. Located near the A96 motorway between Munich and Lindau, the site is home to eight scientific institutes and currently employs approximately 1700 people. The research centre's main fields of activity include participating in space missions, climate research, research and development in the field of Earth observation, developing navigation systems and advanced robotics development.
Air pollution is one of the biggest threats to health worldwide. Around seven million people die as a result of pollutants every year, as the World Health Organization (WHO) has recently established in a global study.
On 6 July 2018 at 03:15 CEST (01:15 UTC), it was time. The team at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) MASCOT Control Center in Cologne received the first signals from the German-French asteroid lander MASCOT upon its arrival at the near-Earth asteroid Ryugu.
A new 'cyber colleague' is on its way to the International Space Station (ISS) to join German ESA astronaut Alexander Gerst. CIMON and six other experiments for the 'horizons' mission lifted off from Cape Canaveral Air Force Station in Florida on Friday, 29 June 2018 at 11:42 CEST (05:42 local time) on board a US Dragon capsule with a Falcon 9 launcher.
The Japanese Hayabusa2 spacecraft has made a 3200-million-kilometre journey with the German-French Mobile Asteroid Surface Scout (MASCOT) lander on board. The two spacecraft have been travelling through the Solar System since December 2014, culminating in an approach manoeuvre to the near-Earth asteroid that has lasted several weeks and was completed on 27 June 2018. | <urn:uuid:b20fa1fb-ecfd-475d-a1f2-6fc47fa0d2ae> | 2.96875 | 385 | Content Listing | Science & Tech. | 38.239405 | 95,626,751 |
22 April 2005
Researchers at New Jersey Institute of Technology have discovered a novel method of changing the chemical characteristics of carbon nanotubes by heating them in a closed vessel microwave oven.
Somenath Mitra, PhD, professor of chemistry and environmental sciences, and Zafar Iqbal, PhD, also a professor of chemistry and environmental sciences, discussed their findings last week at the 229th national meeting of the American Chemical Society (ACS) at the Hyatt Regency Hotel, San Diego.
The pair, aided by doctoral student Yubing Wang, have written “Microwave-Induced, Green and Rapid Chemical Functionalization of Single-Walled Carbon Nanotubes” to be published in a forthcoming issue of a technology journal.
“We understand ourselves to be the first in the world to have discovered this method,” said Mitra. “The beauty is that our method is green and clean. We use no toxic material and reduce the reaction times from hours—on occasion even days—to three minutes.”
Iqbal noted that the method costs much less than others currently used. “Plus, the solubility of our carbon nanotubes is several times higher than any other researcher has yet reported in this short amount of time.” Solubility is the most essential characteristic of carbon nanotubes since researchers must be able to dissolve them to see them work their magic.
With a microwave oven hitting temperatures of 250 degrees Celsius, the researchers can chemically modify the tubes. Such a temperature is closer to radiation treatment than the output of a kitchen microwave oven. Since the reactions are fast, the nanotubes are not damaged or structurally modified.
""A carbon nanotube is just carbon,"" said Mitra. ""The surprise for us is that it's difficult to make nanotubes react with anything. They are like diamonds—very, very inert. They don’t react and they don’t dissolve in water. But, if you can change their chemical characteristics as we have done using our method, we see them transform right before our eyes.”
Once the tiny, microscopic tubes are chemically altered, they become soluble in common solvents like water and alcohol, and new kinds of films or coatings can be produced. The tubes can also be formulated into paints and plastic nanocomposites. The functionalized nanotubes become more useful than the pristine ones because the functionalized groups can be tailored for specific applications.
“Nanotubes are opening new vistas for products and design,” added Mitra. “For example, the space shuttle includes components of lightweight carbon or carbon-polymer composites. The military especially likes these materials because ultimately they will allow for the development of lightweight equipment.”
Learn Ruby on Rails, PDF Tutorial
Learn Web Developments with Rails
Are you enthusiastic about Ruby and do you want to build web applications with it? Then get started with Ruby on Rails. Many emblematic sites of recent years were built on this technology, like Twitter or AirBnB.
Whether you are a young developer who wants to discover this framework, a seasoned developer wishing to add a string to your bow, or an entrepreneur wishing to prototype your product yourself, this course is for you!
Table of contents
- 1. What is Ruby on Rails?
- 2. Install Ruby on Rails
- 3. Create your first page
- 4. Guided Tour: Manage Controllers and Variables
- 5. Put conditions and loops in views
- 6. Guided Tour: Use the Roads and Controllers
- 7. Use the layout
- 8. Putting it into practice
Rbenv is a program that allows installing any version of Ruby easily;
Gem is a program that comes with Ruby. It allows you to install other programs, such as Rails. 4.2.6 is the version of Rails we use today: the most recent version at the time I write these lines.
- File Size:
- 1,125.03 Kb
- Submitted On:
Take advantage of this course called Learn Ruby on Rails, PDF Tutorial to improve your Web development skills and better understand Ruby.
This course is adapted to your level, as are all the Ruby PDF courses, to better enrich your knowledge.
All you need to do is download the training document, open it and start learning Ruby for free.
PHP Symfony framework course
Download a complete guide in PDF about Symfony2 plateforme.
Download free PHP course
With this PDF tutorial you will learn the basics of PHP ,understand the working model of PHP to begin coding your own projects and scripts.Free courses under 95 pages designated to beginners.
Getting Started with Ruby programming language
A complete tutorial about the Ruby programming language in 594 pages for advanced-level students, a free training document in PDF by David Flanagan and Yukihiro Matsumoto.
Ruby on Rails PDF Tutorial
Learn the basics of Ruby on rails programming language, free training document in 250 pages for all level users. | <urn:uuid:8b5839ec-f156-4aa8-9f6f-b38f6a1a54af> | 2.953125 | 471 | Product Page | Software Dev. | 61.942587 | 95,626,766 |
Type: Event
Revision: 2018.3332
Keywords: orientation
See also: system.orientation, resize
Orientation events occur when the device orientation changes. This means that the orientation event will be triggered even for orientations that the app does not support, with a caveat for Android (see below).
Apps with a fixed orientation, for example
"portrait" only, may use the orientation event to rotate the UI manually. However, for apps with multiple supported orientations, the orientation event should not be used to re-layout the UI — instead, the resize event should be used.
This event is also helpful if you're using accelerometer or gyroscope data. This data is relative to portrait orientation, so you can use orientation events to handle the data based on the device's current orientation.
On Android, if your app only supports one orientation in
build.settings, the orientation event will still be triggered for all device orientations. However, if your app supports two or more orientations, the orientation event will only be triggered for the app's supported orientations.
There is a limitation in the Android OS where it will never report an orientation event when flipping the device directly between
"landscapeLeft" and "landscapeRight", nor will it be reported between "portrait" and "portraitUpsideDown".
local function onOrientationChange( event )
    -- event.type holds the new orientation, e.g. "portrait" or "landscapeLeft"
    local currentOrientation = event.type
    print( "Current orientation: " .. currentOrientation )
end

-- Register the listener for Runtime "orientation" events
Runtime:addEventListener( "orientation", onOrientationChange )
Sunday, 4 December 2011
Astronomy - An Ancient 'Metropolis' Orbiting the Milky Way --One of the Oldest Objects in the Universe
One of 150 globular clusters that orbit the Milky Way, M107, also known as NGC 6171, a compact and ancient family of stars that lies about 21 000 light-years away, is a bustling metropolis: thousands of stars in globular clusters like this one are concentrated into a space that is only about twenty times the distance between our Sun and its nearest stellar neighbour, Alpha Centauri. A significant number of these stars have already evolved into red giants, one of the last stages of a star’s life.
Globular clusters are among the oldest objects in the Universe. And since the stars within a globular cluster formed from the same cloud of interstellar matter at roughly the same time — typically over 10 billion years ago — they are all low-mass stars, as lightweights burn their hydrogen fuel supply much more slowly than stellar behemoths. Globular clusters formed during the earliest stages in the formation of their host galaxies and therefore studying these objects can give significant insights into how galaxies, and their component stars, evolve.
M107 is not visible to the naked eye, but, with an apparent magnitude of about eight, it can easily be observed from a dark site with binoculars or a small telescope. The globular cluster is about 13 arcminutes across, which corresponds to about 80 light-years at its distance, and it is found in the constellation of Ophiuchus, north of the pincers of Scorpius. Roughly half of the Milky Way’s known globular clusters are actually found in the constellations of Sagittarius, Scorpius and Ophiuchus, in the general direction of the centre of the Milky Way. This is because they are all in elongated orbits around the central region and are on average most likely to be seen in this direction.
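As a quick check of those numbers, the small-angle relation (physical size ≈ distance × angular size in radians) reproduces the quoted diameter; this is just an illustrative calculation using the figures in the text.

```python
import math

distance_ly = 21_000          # distance to M107 quoted in the text, light-years
angular_size_arcmin = 13      # apparent diameter quoted in the text

# Convert the angular size to radians and apply the small-angle approximation.
angular_size_rad = angular_size_arcmin * (1 / 60) * (math.pi / 180)
diameter_ly = distance_ly * angular_size_rad

print(f"{diameter_ly:.0f} light-years")   # ~79 light-years, i.e. "about 80"
```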
This stunning new image of Messier 107 at top of page was captured by the Wide Field Imager on the 2.2-metre telescope at ESO’s La Silla Observatory in Chile.
Source: The Daily Galaxy - eso.org
Posted by Karla Segura Chavarría at 15:21
Credit: NASA/CXC and NASA/JPL-Caltech
Shocking Processes from Eta Carinae
The superluminous, doomed star known as Eta Carinae is one of the most intriguing stars within 10,000 light-years from Earth. It's best known for its extreme brightening and subsequent dramatic fading in the middle of the 19th century, an event as energetic as a small supernova, which, somehow, the star survived. We know from X-ray and other observations that Eta Carinae is actually a binary system, in which the brighter, more massive star is orbited by a mysterious, unseen companion star in an unusually elliptical orbit. X-ray emission is produced by the collision of the wind from the brighter star with the companion's wind. The collision of the winds creates a "bow-shock" around the companion star as it plows through the thick slow wind of the brighter star. The energy of the collision is converted to heat, raising temperatures in the bow shock to tens of millions of degrees, producing X-rays. This bow shock moves around the orbit with the motion of the companion star, and periodically lights up different regions of the ejected gas and dust which surround the binary. There's also a mysterious source of very high-energy gamma-ray emission, detected by both NASA's Fermi Gamma-ray Space Telescope and by ESA's INTEGRAL space observatory. Now observations with the NuSTAR observatory have conclusively identified Eta Carinae as the source of this unusual high energy X-ray emission. The image above shows a NuSTAR high energy X-ray image of Eta Carinae as green contours superimposed on a Chandra X-ray Observatory X-ray image of Eta Carinae. The NuSTAR contours pinpoint the high energy X-ray source as originating from very near the binary system (which is shown as the blue-white X-ray source near the center of the Chandra image). Variations in the hard source seen by NuSTAR show that the source varies with the binary orbit, and confirm that the emission originates from Eta Carinae. In addition, the high energy emission seen by NuSTAR smoothly connects with the gamma-ray emission seen by Fermi. These observations indicate that the high energy X-ray and gamma-ray emission is produced by electrons which are accelerated to near the speed of light by the tremendous power of the colliding wind bow shock. These electrons in turn bounce off optical light, increasing the energy of the optical light up to the gamma-ray region, producing the gamma rays seen by Fermi.
Published: July 9, 2018
Each week the HEASARC
brings you new, exciting and beautiful images from X-ray and Gamma ray
astronomy. Check back each week and be sure to check out the HEAPOW archive!
Page Author: Dr. Michael F. Corcoran
Last modified Monday, 09-Jul-2018 10:03:33 EDT | <urn:uuid:8257da50-8c9b-4735-96eb-488f9e7fb935> | 3.578125 | 656 | Truncated | Science & Tech. | 42.13111 | 95,626,804 |
On Nov. 20 at 1200 UTC/7 a.m. EST, Tropical Cyclone Helen had maximum sustained winds near 50 knots/57.5 mph/92.6 kph. It was centered near 15.5 north and 83.9 east, about 499 nautical miles/574.2 miles/924.1 km south-southwest of Calcutta, India. Helen was crawling to the northwest at 1 knot/1.1 mph/1.8 kph. A mid-level subtropical ridge (elongated area) of high pressure is expected to slowly build east of Helen and steer the storm on a more western track in the next day.
Visible/short wave infrared data from ESA's METEO-7 satellite and rainfall data from NASA's TRMM satellite was combined to create this image of Tropical Cyclone Helen on Nov. 20.
Current warnings are in effect for fishermen along the coasts of Andhra Pradesh, who are advised to return to shore.
Animated multispectral satellite showed a resurgence of deep convection over the low-level center of circulation. Satellite data also showed that the band of thunderstorms that appeared strong to the north has weakened and become fragmented. Visible/short wave infrared data from ESA's METEO-7 satellite and rainfall data from NASA's Tropical Rainfall Measuring Mission or TRMM satellite was combined at the Naval Research Laboratory to create a composite image of the storm on Nov. 20. The image showed the clouds associated with Helen were mostly still over the open waters of the Arabian Sea, and that south of the center, light rainfall was occurring.
Helen is expected to intensify to 60 knots/69.0 mph/111.1 kph over the next two days and weaken before landfall. Helen is forecast to pass just south of the Yelichetladibba Palem and Nachugunta Reserved Forests in Andhra Pradesh, located in the coastal plain of Krishna Delta. Helen is expected to make landfall in the vicinity of Chinnaganjam in southeastern India.
Rob Gutro | EurekAlert!
The South African grassland biome is one of the most threatened biomes in South Africa. Approximately 45% of the grassland biome area is transformed, degraded or severely invaded by alien plants and the remaining natural areas are highly fragmented. In this fragmented landscape, the connectivity between habitat patches is very important to maintain viable populations. In this study we aimed to quantify connectivity of the grassland biome in Mpumalanga using graph theory in order to identify conservation priorities and to direct conservation efforts. Graph theory-based connectivity indices have the ability to combine spatially explicit habitat data with species specific dispersal data and can quantify structural and functional connectivity over large landscapes. We used these indices to quantify the overall connectivity of the study area, to determine the influence of abandoned croplands on overall connectivity, and to identify the habitat patches and vegetation types most in need of maintaining overall connectivity. Natural areas were identified using 2008 land cover data for Mpumalanga. Connectivity within the grassland biome of Mpumalanga was analysed for grassland species with dispersal distances ranging from 50 to 1000 m. The grassland habitat patches were mostly well connected, with 99.6% of the total habitat area connected in a single component at a threshold distance of 1000 m. The inclusion of abandoned croplands resulted in a 33% increase in connectivity at a threshold distance of 500 m. The habitat patches most important for maintaining overall connectivity were the large patches of continuous habitat in the upper and lower centres of the study area and the most important vegetation types were the Wakkerstroom Montane Grassland and the Eastern Temperate Freshwater Wetlands. These results can be used to inform management decisions and reserve design to improve and maintain connectivity in this biome.
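The component-based core of such an analysis can be sketched with a small graph example: patches become nodes, an edge joins any two patches closer than the dispersal threshold, and connectivity is summarized by the habitat area held in the largest component and by the loss when a patch is removed. The snippet below is only an illustration of that general approach, with invented patch coordinates and areas; it does not reproduce the specific connectivity indices used in the study.

```python
import math
import networkx as nx

# Hypothetical habitat patches: id -> ((centroid x, y) in metres, area in hectares).
patches = {
    "A": ((0, 0), 120.0),
    "B": ((450, 0), 80.0),
    "C": ((900, 0), 200.0),
    "D": ((2500, 0), 60.0),   # too far from the others at a 500 m threshold
}

def build_graph(patches, threshold_m):
    """Patches become nodes; an edge joins any two patches whose centroids
    lie within the species' dispersal threshold."""
    G = nx.Graph()
    for pid, (xy, area) in patches.items():
        G.add_node(pid, area=area)
    ids = list(patches)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if math.dist(patches[a][0], patches[b][0]) <= threshold_m:
                G.add_edge(a, b)
    return G

def largest_connected_area(G):
    """Habitat area (ha) contained in the largest connected component."""
    return max(
        (sum(G.nodes[n]["area"] for n in comp) for comp in nx.connected_components(G)),
        default=0.0,
    )

G = build_graph(patches, threshold_m=500)
baseline = largest_connected_area(G)
total = sum(area for _, area in patches.values())
print(f"Largest component holds {baseline / total:.1%} of the habitat area")

# Importance of each patch = drop in connected area when that patch is removed,
# which captures both its own area and its role as a stepping stone.
for pid in patches:
    H = G.copy()
    H.remove_node(pid)
    print(pid, "loss:", baseline - largest_connected_area(H), "ha")
```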
- Research news
- Open Access
Silencing paradox resolved
© BioMed Central Ltd 2005
Published: 10 February 2005
The paradoxical involvement of RNA-mediated gene silencing in the maintenance of some DNA silencing is bridged in Arabidopsis plants by an RNA polymerase that acts as a liaison between both pathways, UK researchers report in the February 3 issue of Science.
Alan Herr, from the John Innes Centre, Norwich, and colleagues from there and elsewhere show that an RNA polymerase connects RNA and DNA silencing pathways. They found that mutants in RNA polymerase IV (Pol IV, also called RPD1), part of a new clade of polymerases in plants, were defective in both pathways.
"The finding of a new silencing-specific RNA polymerase is a surprising twist in the evolution of RNA polymerases," Herr wrote The Scientist in an E-mail. "Even though Pol IV is plant specific, the function of Pol IV may be performed by another RNA polymerase in other programs. Silencing of a locus does not mean that it is not transcribed."
RNA silencing occurs through the multiprotein RNA-induced silencing complex that cleaves double stranded RNA, producing short interfering RNAs (siRNA), which then amplify the cycle. Conversely, DNA silencing occurs through chromatin-mediated mechanisms that can include DNA methylation and histone modifications to form transcriptionally inactive heterochromatic regions. In the Pol IV mutant, both siRNA formation and DNA methylation are decreased at heterochromatic regions, Herr and colleagues found.
"Pol IV works together with a different type of RNA polymerase previously implicated in gene-silencing mechanisms called RNA-dependent RNA polymerase to produce double-stranded RNA that is then processed into small RNAs by a dicer enzyme," Herr wrote in his E-mail. "These small RNAs then act as the specificity determinant for the establishment and maintenance of the silenced state."
The new report is consistent with a general silencing model that Shiv Grewal of the National Institutes of Health calls "a self-enforcing loop." According to the model, siRNA is targeted to heterochromatin, and "heterochromatic regions recruit the RNAi machinery through the interactions of chromodomains," Grewal, who was not involved in the study, told The Scientist. This in turn reinforces the complex by producing more siRNA transcripts. Polymerase IV may allow just enough transcription in heterochromatic regions to kick-start the loop, Grewal said.
Steve Jacobsen, of the University of California, Los Angeles, described the research as a case in which plants "use transcription to keep a locus silent. Methylation shuts genes off, but too strong of a shutoff is not good for maintaining siRNA-mediated silencing. Shut off all transcription, and siRNA can't work."
Jerzy Paszkowski at the University of Geneva, Switzerland, said the model raises "the chicken and egg problem" about which part of the loop comes first. Grewal suggested that "bidirectional transcription may be the initial trigger," like that found in transposable elements and other repetitive sequences.
In the battle between transposable elements overtaking a genome and a genome completely quiescing these parasites, Pol IV may be playing both sides, allowing transcription and silencing. According to Grewal, "Transposable elements have evolved to transcribe in the presence of heterochromatin, an adaptive response to overcome the heterochromatic machinery to silence them." While heterochromatin bodyguards this incomplete silencing, Pol IV allows transposable elements to "sneak past the door," says Grewal.
Herr and colleagues found that Pol IV is a plant-specific polymerase that groups outside of the usual polymerases I, II, and III. "The phylogenetic restriction of Pol IV suggests that it has an evolutionarily derived function rather than an evolutionary basal one," according to Jim Birchler at the University of Missouri, Columbia. Consistently, a predicted subunit of the new polymerase IV machine, RPD2, controls silencing in Arabidopsis.
Though Pol IV has a genetic function in silencing, Birchler noted that "Pol IV could have a role in silencing in the plant kingdom that is not understood at all. Determining the conditions under which Pol IV performs transcription is an important next step."
- A.J. Herr et al., "RNA polymerase IV directs silencing of endogenous DNA," Science, February 3, 2005., [http://www.sciencemag.org/cgi/content/abstract/1106910v1]
- D.C. Baulcombe, "RNA silencing in plants," Nature, 431:356-63, September 16, 2004, [http://www.nature.com/doifinder/10.1038/nature02874]
- Shiv Grewal, [http://www.cshl.org/public/SCIENCE/grewal.html]
- Steve Jacobsen, [http://www.mcdb.ucla.edu/Research/Jacobsen/]
- Jerzy Paszkowski, [http://www.unige.ch/sciences/biologie/plantsciences/grpaszkowski/] | <urn:uuid:67f25106-1833-4651-84c8-04731df9f26d> | 2.671875 | 1,122 | Academic Writing | Science & Tech. | 36.599779 | 95,626,832 |
As part of ongoing research to understand how miniaturization affects brain size and behavior, researchers measured the central nervous systems of nine species of spiders, from rainforest giants to spiders smaller than the head of a pin. As the spiders get smaller, their brains get proportionally bigger, filling up more and more of their body cavities.
Nephila clavipes, a big tropical spider, has plenty of room in its body for its brain. Credit: Pamela Belding, STRI
"The smaller the animal, the more it has to invest in its brain, which means even very tiny spiders are able to weave a web and perform other fairly complex behaviors," said William Wcislo, staff scientist at the Smithsonian Tropical Research Institute in Panama. "We discovered that the central nervous systems of the smallest spiders fill up almost 80 percent of their total body cavity, including about 25 percent of their legs."
Some of the tiniest, immature spiderlings even have deformed, bulging bodies. The bulge contains excess brain. Adults of the same species do not bulge. Brain cells can only be so small because most cells have a nucleus that contains all of the spider's genes, and that takes up space. The diameter of the nerve fibers or axons also cannot be made smaller because if they are too thin, the flow of ions that carry nerve signals is disrupted, and the signals are not transferred properly. One option is to devote more space to the nervous system.
The enormous biodiversity of spiders in Panama and Costa Rica made it possible for researchers to measure brain extension in spiders with a huge range of body sizes. Nephila clavipes, a rainforest giant, weighs 400,000 times more than the smallest spiders in the study, nymphs of spiders in the genus Mysmena.
The Smithsonian Tropical Research Institute, headquartered in Panama City, Panama, is a unit of the Smithsonian Institution. The Institute furthers the understanding of tropical nature and its importance to human welfare, trains students to conduct research in the tropics and promotes conservation by increasing public awareness of the beauty and importance of tropical ecosystems. Website: www.stri.org.
Quesada, Rosanette, Triana, Emilia, Vargas, Gloria, Douglass, John K., Seid, Marc A., Niven, Jeremy E., Eberhard, William G., Wcislo, William T. 2011. "The allometry of CNS size and consequences of miniaturization in orb-weaving and cleptoparasitic spiders." Arthropod Structure and Development 521-529, doi:10.1016/j.asd.2011.07.002
Beth King | EurekAlert!
In behavioural research, internal states are measured through novelty responses. The way an animal responds to novel stimuli in the environment depends on its internal state (e.g., motivational drives), as determined by interacting neural and neurotransmitter systems (Horstick, Mueller, and Burgess, 2018). Animals that have recently had a stressful experience, for example, are more likely to be wary of novel stimuli.
Larval zebrafish (Danio rerio) are highly sensitive to light (Burgess and Granato, 2007). They display regular rates of discontinuous motion while swimming under even illumination, and can react to a sudden change in light with stereotyped locomotion alterations. This makes them ideal for studies of internal states (De Marco et al., 2016) and behavioural screens. The Zantiks MWP unit allows measuring the activity of multiple larvae and control light of varying power and wavelength.
Experimental setup
Larval zebrafish are placed individually in each well of a 6-, 12-, 24-, 48- or 96-well plate.
The multi-well plate with larvae is inserted into the chamber of the Zantiks MWP unit. The script can be written to control single or multiple square pulses of light of varying length and known power and wavelength, as well as to control the temperature of the medium inside the wells. Locomotor activity of each larva (in arbitrary units or as distance swum) is measured and written to a data file at the frequency required by the observer.
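The Zantiks unit is programmed in its own scripting language, so purely to illustrate the logic such a protocol encodes (square light pulses separated by dark inter-trial intervals, with per-well activity logged at a fixed rate), here is a minimal Python sketch. The function names, plate size and timings are placeholder assumptions, not the actual Zantiks API.

import csv
import time

# Placeholder hardware hooks: on a real system these would call the unit's API;
# here they only stand in for it so the protocol structure can be shown.
def set_light(on):
    print("light", "ON" if on else "OFF")

def read_activity(n_wells=24):
    return [0.0] * n_wells  # per-well locomotor activity (arbitrary units)

def run_protocol(n_pulses=2, pulse_s=120, interval_s=120, log_every_s=1):
    """Dark acclimation, then alternating light pulses and dark intervals."""
    with open("activity.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_s", "light_on"] + ["well_%d" % i for i in range(24)])
        t0 = time.time()

        def log_block(duration_s, light_on):
            end = time.time() + duration_s
            while time.time() < end:
                writer.writerow([round(time.time() - t0, 1), int(light_on)] + read_activity())
                time.sleep(log_every_s)

        set_light(False)
        log_block(interval_s, False)      # acclimation in the dark
        for _ in range(n_pulses):
            set_light(True)
            log_block(pulse_s, True)      # square light pulse
            set_light(False)
            log_block(interval_s, False)  # inter-trial interval

# Runs in real time, so the default two-pulse schedule takes about ten minutes.
run_protocol()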
Example video (8x speed) of zebrafish larvae in a 24-well plate during two consecutive 120 s square pulses of light (inter-trial interval, 120 s).
Burgess, H. A., and Granato, M. (2007). Modulation of locomotor activity in larval zebrafish during light adaptation. J. Exp. Biol. 210, 2526–2539.
De Marco, R.J., Thiemann, T., Groneberg, A.H., Herget, U., and Ryu, S (2016). Optogenetically enhanced activity of pituitary corticotroph cells post stress onset causes rapid organizing effects on behaviour. Nat. Commun. 7, 12620 DOI: 10.1038/ncomms12620 | <urn:uuid:0baf6757-62cf-4a63-acff-9bbdb4aee51c> | 2.5625 | 470 | Tutorial | Science & Tech. | 50.892031 | 95,626,852 |
Estimation of Soil Properties Using Hyperspectral VIS/IR Sensors
- John Wiley & Sons, Ltd
- Publication Type:
- Encyclopedia of Hydrological Sciences, 2005, 1st, pp. 887 - 902
- Issue Date:
Knowledge of soil properties and processes is crucial to the understanding of the terrestrial hydrologic cycle and the functioning of terrestrial ecosystems. In this paper, we present the current state and potential of hyperspectral remote sensing techniques for quantitative retrieval of soil properties. Remote sensing is used to detect chemical and physical soil properties either (i) directly from the bare soil pixels, (ii) through advanced spectroscopy methods in mixed soil-vegetation-litter pixels, and (iii) by measurements of the overlying vegetated canopy to infer soil properties and moisture status. Optical-geometric properties of soil surfaces reveal information on soil physical features, such as soil structure, crusting, and erosion. We also investigate the use of vegetation water indices to infer soil drying and wetting in the soil root zone. We conclude with a discussion on future needs and directions for remote sensing of soil properties.
Please use this identifier to cite or link to this item: | <urn:uuid:600d3933-f309-4869-94db-742be69f3547> | 3.15625 | 249 | Academic Writing | Science & Tech. | 18.40105 | 95,626,856 |
Adding solar technology to electric vehicles is an ideal evolution in theory, but for many in the industry, practicality remains an issue. This skepticism hasn’t stopped a Dutch startup from developing Lightyear One, a vehicle that finally has the ability to be powered by just sunlight. Their revolutionary development earned them the new Climate Change Innovator Award at CES 2018.
What does this mean? To begin with, Lightyear will receive a plaque for the award and will be recognized during the CES 2018 Sustainability Day on Thursday, January 11th. The Consumer Technology Association announced in October that they would be featuring this new award to highlight multiple startups that are using technology to eliminate greenhouse gas emissions. Gary Shapiro, CEO of CTA, believes the tech industry needs to be a major player in climate change.
"Consumer tech already provides many solutions to address climate challenges well beyond the electronics industry. Home automation systems cut unnecessary energy use, tech-enabled telecommuting reduces car travel and emissions, and newer tech products use less electricity. And there's nowhere better than CES, the global stage for innovation, to scout the next startup that can deliver meaningful emission reductions."
Many in the EV industry have been skeptical of using solar panels, including one of the biggest innovators, Elon Musk. Last July, he noted that it would take a “transformer-like” system that popped out extra solar panels to retrieve a charge, and that would be good for up to 30 miles per day. The Toyota Prius, which featured a solar roof, backed that claim: the roof was useful only for helping to power electronics in the vehicle and slightly extending battery life.
Fully-fledged solar-powered cars have some considerable cons if they don't have reliable sunlight, ranging from less power to lower performance. This obstacle has led to the perspective that solar-assisted EVs are the next evolution. Not only could this approach add more miles to a vehicle’s range, but the vehicle could also double as a mobile power station with bidirectional charging.
All of this hasn’t stopped this Dutch startup from thinking ahead to create the Lightyear One. The new electric car can charge itself hassle-free as it sits in a driveway, and the battery inside holds a range of up to 500 miles. This would give it ample room to hold a significant charge when there’s no sunlight, and it would still feature all the traditional methods of EV charging, if needed.
Of course, it’ll still be a while before the Lightyear One is released. In terms of cost, this technology is predictably not going to be cheap. More specifically, the first 10 cars will be given out in 2019 with the next 100 the following year, and the starting price will be at just over $140,000 USD.
There are also a lot of questions that remain as it goes into development, such as how efficiently it can charge through cloud cover. The Lightyear One may not be feasible for those who will keep it stored away from sunlight, but for some it certainly provides an easier, more efficient way to use an electric vehicle. At the very least, it’ll be interesting to see how this vehicle performs and how it changes the solar aspect of the EV industry.
| <urn:uuid:f16df512-6361-4bbc-b1cd-8d0a520f8aac> | 2.890625 | 787 | News Article | Science & Tech. | 40.873649 | 95,626,860
Authors: Antonio Puccini
Analyzing the neutron decay, or beta-decay (Bd), our calculations and evaluations show that the 3rd particle emitted with the Bd (required by Pauli and Fermi to compensate for a noticeable energy gap) can be identified as an electron free of electric charge, that is, a neutral electron: e° (instead of a neutrino). In the various Supersymmetric Models there exists a particle with a limited mass which can never decay into a lighter particle: the so-called Lightest Supersymmetric Particle (LSP). To date, this LSP has never been detected in any experiment. Examining the potential properties attributed to that particle in the various Supersymmetric Models, a close analogy appears with the features likely to be related to e°. Indeed, from a more in-depth examination, it appears that the properties of the two particles considered are completely superimposable, as if the two particles could be interchangeable, that is, identifiable with one another. It seems interesting to note that in our model we give particular attention to the fundamental property attributable both to the LSP and to the e°, i.e. the symmetry (represented by C, or charge conjugation), expressed by: ē° = C(e°) = e°
Comments: 10 Pages.
[v1] 2017-07-18 05:55:50
Unique-IP document downloads: 54 times
Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website.
Add your own feedback and questions here:
You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful. | <urn:uuid:e80c18c9-4430-483e-85ae-cca53a967584> | 2.609375 | 450 | Academic Writing | Science & Tech. | 34.818661 | 95,626,867 |
Scientists hope that understanding the mechanisms which determine the diversity and productivity of ecosystems will help ecologists and conservationists to develop strategies to ensure that conservation areas are highly productive and rich in biodiversity.
The study used a lab-based artificial ecosystem of communities of bacteria to examine what happens when the bacteria move around and evolve to live in different parts of the ecosystem over the course of hundreds of generations. The scientists measured the effect this dispersal of species has on the productivity and biodiversity of the ecosystem over all.
'Productive' ecosystems are defined as those that support a large total amount of living matter, from tiny microbes up to plants and animals. Scientists refer to this measurement of the amount of life present as an ecosystem's 'biomass'. A number of studies in the last decade have shown that ecosystems that have a high biodiversity - meaning they are rich in variety of species - are also highly productive over short time scales, but until now the underlying processes creating this link between high levels of biodiversity and productivity over evolutionary time scales have not been understood.
The scientific team behind this new research found that both the biodiversity and productivity of an ecosystem are at a peak when there is an intermediate rate of dispersal of species - not too little and not too much - between different parts of the ecosystem.
When there is little or no dispersal, populations of species that remain in harsh areas of an ecosystem are unable to adapt to their environment due to a low population size and lack of genetic variation. Conversely, when there is too much dispersal in an ecosystem, species evolve to be 'generalists' that can survive in many habitats, but fail to thrive in any given one.
Dr Craig Maclean, one of the authors of the study at the NERC Centre for Population Biology at Imperial College London, explains that an intermediate rate of dispersal creates a 'happy medium' wherein species move around enough to ensure that harsh environments are adapted to, but not so much that they become generalists.
He says: "Dispersal constantly brings new individuals and new genes into harsh environments, which is essential for evolutionary adaptation to difficult environments. When species adapt to new environments it increases the productivity of the ecosystem and it can increase the biodiversity, as movement between different parts of an ecosystem provides more 'niches' for species to exploit."
To carry out the study, the research team created an artificial ecosystem for the bacterium Pseudomonas fluorescens. The ecosystem consisted of 95 different areas, each one containing a different food source. The scientists introduced the bacteria - which could eat approximately half of the 95 food sources - to the ecosystem, and then began to manipulate the rate at which the bacteria dispersed between the 95 different areas.
Every day during the experiment, the team measured the biomass in the ecosystem as an indicator of the ecosystem's productivity, and found that the levels of biomass were highest when there was an intermediate dispersal rate.
After 400 generations, the team isolated bacteria from the ecosystem and measured the ability of the bacteria to grow on each of the food sources. Using this data, the team were able to measure the diversity of the ecosystem, as it indicated how many different species had evolved from the bacteria which were originally introduced to the experiment, which could only eat half of the food sources available.
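Purely as an illustration (this is not the authors' analysis, and the numbers below are made up), testing for an intermediate optimum often comes down to fitting a curve with an interior peak to biomass measured across dispersal treatments; a quadratic in the log of the dispersal rate is the simplest such sketch in Python.

import numpy as np

# Made-up example data: mean biomass (arbitrary units) at five dispersal rates.
dispersal = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])
biomass = np.array([0.8, 1.4, 2.1, 1.6, 0.9])

# Fit a quadratic in log10(dispersal); a negative leading coefficient with an
# interior vertex indicates a hump-shaped, intermediate-optimum relationship.
x = np.log10(dispersal)
a, b, c = np.polyfit(x, biomass, 2)
peak = -b / (2 * a)
if a < 0 and x.min() < peak < x.max():
    print("biomass peaks near a dispersal rate of %.4g" % (10 ** peak))
else:
    print("no interior maximum detected")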
The research was carried out by an international team, led by Centre National de la Recherche Scientifique scientists at Montpellier 2 University in France, in collaboration with Imperial College London and the University of Liverpool.
Danielle Reeves | alfa
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:378c40e5-6356-4281-b5ad-33b0dd8a018e> | 3.96875 | 1,314 | Content Listing | Science & Tech. | 29.206253 | 95,626,876 |
Earth-size planets circling nearby stars come in two flavors, either rocky or gassy, astronomers reported on Monday. And more than three-quarters of stars likely host at least one of these alien Earths.
“We are talking about worlds barely larger than our own,” says astronomer Geoff Marcy of the University of California, Berkeley, speaking at the American Astronomical Society meeting in Washington, D.C. “That’s how far we have come.”
When astronomers began reporting the discovery of planets orbiting nearby stars in 1995, the few worlds they detected were as large or larger than Jupiter. Now measurements from NASA’s Kepler space telescope—which has discovered 237 of the more than 1,000 planets detected, according to Michele Johnson of NASA’s Ames Research Center in Mountain View, California—are helping us learn what worlds in Earth’s weight class are made of.
The findings narrow the range of planets on which we might expect to see alien life, Marcy says, to the smaller ones nearest to Earth in size. However, none of the planets reported in the new Kepler data orbited well enough inside the “habitable zone” of their stars to be amenable to oceans and life, he noted. (Related: “Newfound Earth-Size Exoplanet Doomed.”)
On the plus side, that still leaves a lot of planets for future alien hunters to investigate, as roughly three-quarters of the 3,538 still-unconfirmed candidate planets detected by Kepler appear to be Earth-size, Marcy reported at the meeting. And roughly one in five stars are orbited in their habitable zone by a planet one to two times as wide as Earth.
Planetary Dividing Line
“We are finding a dividing line between two classes of Earth-size planets,” says astronomer Yoram Lithwick of Northwestern University in Chicago, who presented a study of 60 “Super-Earth” worlds, ones roughly one to four times as wide as Earth. “Many are even fluffier than Neptune and Uranus.”
In the study presented by Lithwick and a separate study of 42 planets presented by Marcy, the cutoff is between planets more or less than two times as wide as Earth.
Those less than two times as wide as Earth are either rocky or are draped with an outermost layer of cloudy hydrogen and helium gas haze, while those more than two times as wide as our planet all have densities that suggest they are gassy worlds. These “mini-Neptunes” likely look more like Uranus and Neptune in our solar system.
What causes the dividing line between rocky Super-Earths and mini-Neptunes? Basically, rocky planets can’t get much wider than twice the size of Earth, says Marcy. Once they reach that size, added rock is just compressed further, making the planet more dense but leaving it at the same width.
Gassy worlds, in contrast, grow wider as more gas is added, because the thin gas allows them to balloon out.
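A rough way to see what this dividing line means is to compare bulk densities, ignoring the self-compression Marcy describes. The masses used below are illustrative assumptions, not values from the Kepler studies; the point is only that a planet twice Earth's width needs roughly eight Earth masses to keep Earth-like rock density, and anything much lighter at that size must be mostly gas.

import math

M_EARTH = 5.97e24  # kg
R_EARTH = 6.371e6  # m

def bulk_density(mass_kg, radius_m):
    """Mean density in g/cm^3 of a sphere with the given mass and radius."""
    volume = 4.0 / 3.0 * math.pi * radius_m ** 3
    return mass_kg / volume / 1000.0  # kg/m^3 -> g/cm^3

print(bulk_density(M_EARTH, R_EARTH))          # ~5.5 g/cm^3: Earth itself
print(bulk_density(8 * M_EARTH, 2 * R_EARTH))  # ~5.5 g/cm^3: rocky at twice Earth's width
print(bulk_density(3 * M_EARTH, 2 * R_EARTH))  # ~2.1 g/cm^3: gas-rich "mini-Neptune"

Real rocky super-Earths are somewhat denser than this simple sphere estimate because of the compression mentioned above, which only strengthens the contrast between the two classes.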
“It’s a reasonable conclusion, supported by theory, and the observations seem solid,” says astronomer Stephen Maran, author of Astronomy for Dummies. “The thing that really stands out is that we live in an oddball solar system that doesn’t have one of these mini-Neptunes.”
This story originally appeared at nationalgeographic
Are Antibiotics Leading To An Increased Risk Of Miscarriage?
According to a new study published in the CMAJ (Canadian Medical Association Journal), many classes of antibiotics are associated with an...May 1, 2017
Could a Carbon Tax Work?
Over the past couple of years, several suggestions for limiting the amount of greenhouse gases that are produced by the burning...May 1, 2017
Genes Might Be Helping the Tasmanian Devil Fight Off Face Cancer
Getty Images The Tasmanian devil is famous for two things. One, it’s ornery as all hell. And two, it’s the unfortunate...August 30, 2016
Six Scientists Lived in a Tiny Pod for a Year Pretending They Were on Mars
Arguably one of the most Mars-like environments on Earth, the north side of Mauna Loa has been home sweet home to...August 29, 2016
Forget the Pool. This Guy Chased Tornadoes All Summer
This May, a massive supercell storm ripped through the countryside just outside of Dodge City, Kansas. It produced more than a...August 29, 2016
This Aquanaut Is Defining the Next Era of Spaceflight
NASA Megan McArthur has spent her life messing with microgravity. She was on the team that got the first commercial cargo...August 29, 2016
What Gives With Insects Pretending to Be Sticks and Leaves?
Imagine that you had one outfit and one outfit only: a jumpsuit that made you look like a leaf. You’d blend...August 29, 2016
How to Use Physics to Paddle Board Like a Pro
Getty Images Question: How do you make a stand up paddle board go straight if you only paddle on one side?...August 29, 2016
Cluster of Big Earthquakes Rattles Iceland’s Katla Volcano
Alamy Last night, a brief earthquake swarm rattled the caldera at Katla in southern Iceland. The largest earthquakes were over M4,...August 29, 2016 | <urn:uuid:369e22b1-8314-4d27-8c05-ae26ed162fc4> | 3.203125 | 1,139 | Content Listing | Science & Tech. | 54.858115 | 95,626,900 |
What is Laravel ?
According to Wikipedia, Laravel is a free, open-source PHP web framework, created by Taylor Otwell and intended for the development of web applications following the model–view–controller (MVC) architectural pattern. The source code of Laravel is hosted on GitHub and licensed under the terms of the MIT License. Below we explain this in more detail.
What Is Laravel ?
Laravel is a web application framework with expressive, elegant syntax. Development must be an enjoyable, creative experience to be truly fulfilling. Laravel aims to make the development process a pleasing one for the developer without sacrificing application functionality.
Sometimes web developers combine Laravel with ideas from other frameworks, including frameworks implemented in other languages such as Ruby on Rails, ASP.NET MVC and Sinatra. Laravel is accessible yet powerful, providing the tools needed for large, robust applications.
How to Start Use Laravel ?
1. Install and Configuration
There are several steps that must be completed when installing Laravel:
Laravel utilizes Composer to manage its dependencies. Download a copy of composer.phar. You can either keep the PHAR archive in your local project directory or move it to /usr/local/bin to use it globally on your system.
Next, download the Laravel installer using Composer:
composer global require "laravel/installer=~1.1"
Make sure to place the ~/.composer/vendor/bin directory in your PATH so the Laravel executable is found when you run the laravel command in your terminal.
Once installed, the simple “laravel new” command will create a fresh Laravel installation in the directory you specify. For instance, laravel new blog would create a directory named blog containing a fresh Laravel installation with all dependencies installed. This method of installation is much faster than installing via Composer.
Alternatively, you can install Laravel via download. First, download the 4.2 version of the Laravel framework and extract its contents into a directory on your server. Next, in the root of your Laravel application, run the “php composer.phar install” (or “composer install”) command to install all of the framework’s dependencies. This process requires Git to be installed on the server to successfully complete the installation. If you want to update the Laravel framework, you may issue the “php composer.phar update” command.
2. Server Requirements
The Laravel framework has a few system requirements:
MCrypt PHP Extension
As of PHP 5.5, some OS distributions may require you to manually install the PHP JSON extension. When using Ubuntu, this can be done via apt-get install php5-json.
The first thing you should do after installing Laravel is set your application key to a random string. If you installed via Composer, this has probably already been done for you by the “php artisan key:generate” command. Typically, this string should be 32 characters long. The key can be set in the app/config/app.php configuration file.
You may also wish to review the app/config/app.php file and its documentation. It contains several options, such as timezone and locale, that you may wish to change according to your application. You should also configure your local environment.
4. Pretty URL
The framework ships with a public/.htaccess file that is used to allow URLs without index.php. If you use Apache to serve your Laravel application, be sure to enable the mod_rewrite module.
That was a brief explanation of Laravel and how to install it. To learn more, you can read on at 41s.io/intro3357, and for more on Software Engineering, Problem Solving and Tutorials you can visit www.41studio.com/blog. | <urn:uuid:81f1736a-1199-49f5-9b44-8857c4d8a248> | 2.734375 | 811 | Tutorial | Software Dev. | 45.653837 | 95,626,902
'Beams' from space that could power cities: First tests on solar satellites offer hope of green energy that might actually WORK
- Floating solar panels 'beam' energy to Earth using lasers or microwaves
- Equipment tested in space to deploy 'swarm' of solar panels
- Initially will supply power to disaster areas or outlying regions
- Eventually 'swarm' of tiny satellites could power cities
On Earth, solar power has had a slow start, thanks to high prices and inefficient panels - but the first tests on 'solar satellites' offer hope of 'green energy' that actually works.
Researchers at Strathclyde University have already tested equipment in space, a first step for solar panels to collect energy and transfer it back to Earth through microwaves or lasers.
The researchers aim to produce a 'swarm' of satellites that could one day power whole cities.
Initially the tiny satellites wouldn't replace ordinary power grids - instead, they could swiftly resupply power to disaster areas or outlying districts that are difficult to reach.
A 'receiver' on Earth would turn the precisely targeted microwave or laser beams into usable electricity.
The idea of solar panels in space has been much discussed - but the new research proves that at least a small-scale version IS possible.
Dr Massimiliano Vasile, of the University of Strathclyde’s Department of Mechanical and Aerospace Engineering, who is leading the space based solar power research, said: ‘Space provides a fantastic source for collecting solar power and we have the advantage of being able to gather it regardless of the time of the day or indeed the weather conditions.
‘In areas like the Sahara desert where quality solar power can be captured, it becomes very difficult to transport this energy to areas where it can be used.
'However, our research is focusing on how we can remove this obstacle and use space based solar power to target difficult to reach areas.
‘By using either microwaves or lasers we would be able to beam the energy back down to earth, directly to specific areas.
'This would provide a reliable, quality source of energy and would remove the need for storing energy coming from renewable sources on ground as it would provide a constant delivery of solar energy.
‘Initially, smaller satellites will be able to generate enough energy for a small village but we have the aim, and indeed the technology available, to one day put a large enough structure in space that could gather energy that would be capable of powering a large city.’
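To put rough numbers on that claim: above the atmosphere a collector sees about 1.36 kW per square metre essentially continuously, while a panel at a typical temperate site averages only a couple of hundred watts per square metre once night and weather are counted. The efficiencies and the village demand in the Python sketch below are illustrative assumptions, not figures from the Strathclyde project.

SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere, near-continuous in a suitable orbit
GROUND_AVERAGE = 200.0    # W/m^2, assumed 24-hour average at a temperate ground site

PANEL_EFF = 0.30          # assumed photovoltaic conversion efficiency
LINK_EFF = 0.50           # assumed end-to-end microwave/laser transmission efficiency
VILLAGE_DEMAND_W = 100e3  # assumed average demand of a small village (100 kW)

space_delivered = SOLAR_CONSTANT * PANEL_EFF * LINK_EFF  # ~204 W delivered per m^2 of orbital collector
ground_delivered = GROUND_AVERAGE * PANEL_EFF            # ~60 W delivered per m^2 of ground panel

print("orbital collector area: %.0f m^2" % (VILLAGE_DEMAND_W / space_delivered))
print("ground collector area:  %.0f m^2" % (VILLAGE_DEMAND_W / ground_delivered))

Even with these generous loss assumptions the orbital collector comes out a few times smaller than the ground array and, unlike the ground array, delivers power around the clock.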
Last month, a team of science and engineering students at Strathclyde developed an innovative ‘space web’ experiment which was carried on a rocket from the Arctic Circle to the edge of space.
The researchers already deployed a network of satellites which could be used to 'beam' power back to Earth
The experiment, known as Suaineadh – or ‘twisting’ in Scots Gaelic, was an important step forward in space construction design and demonstrated that larger structures could be built on top of a light-weight spinning web, paving the way for the next stage in the solar power project.
Dr Vasile added: ‘The success of Suaineadh allows us to move forward with the next stage of our project which involves looking at the reflectors needed to collect the solar power.
‘The current project, called SAM (Self-inflating Adaptable Membrane) will test the deployment of an ultra light cellular structure that can change shape once deployed. The structure is made of cells that are self-inflating in vacuum and can change their volume independently through nanopumps.
‘The structure replicates the natural cellular structure that exists in all living things. The independent control of the cells would allow us to morph the structure into a solar concentrator to collect the sunlight and project it on solar arrays. The same structure can be used to build large space systems by assembling thousands of small individual units.’
The project is part of a NASA Institute for Advanced Concepts (NIAC) study led by Dr John Mankins of Artemis Innovation. The University of Strathclyde represents the European section of an international consortium involving American researchers, and a Japanese team, led by Professor Nobuyuki Kaya of the University of Kobe, a world leader in wireless power transmission.
The NIAC study is demonstrating a new conceptual design for large scale solar power satellites. The role of the team at the University of Strathclyde is to develop innovative solutions for the structural elements and new solutions for orbit and orbit control.
| <urn:uuid:678ebe2b-c462-47d3-947e-18b6cbe02421> | 3.0625 | 1,222 | News Article | Science & Tech. | 16.555218 | 95,626,944
I have to discuss cell and tissue types found in typical animals, but because it is so vast I am having a hard time finding one concept to elaborate on.
The first level of cellular organization is a cell, followed immediately by a tissue. Tissues are a group of cells working together to perform a specific function. In fact, the body contains four classes of tissues:
- Connective tissue
- Epithelial tissue (epithelium)
- Muscle tissue
- Nerve tissue
If you are looking ...
This solution provides the types of cells and tissues found in typical animals, as well as ideas for elaborating on a concept related to them. | <urn:uuid:4c9a6596-a2e8-4e80-bd93-1b9fbfeae5e3> | 3 | 161 | Q&A Forum | Science & Tech. | 50.413382 | 95,626,971 |
Online Dictionary: translate word or phrase from Indonesian to English or vice versa, and also from english to english on-line.
Search results for the word or phrase: Diazo reactions (0.00811 seconds)
Found 1 items, similar to Diazo reactions.
English → English (gcide)
Definition: Diazo reactions
Diazo- \Di*az"o-\ [Pref. di- + azo-] (Chem.)
A combining form (also used adjectively), meaning pertaining
to, or derived from, a series of compounds containing a
radical of two nitrogen atoms, united usually to an aromatic
radical; as, diazo-benzene, C6H5.N2.OH.
Note: Diazo compounds are in general unstable, but are of
great importance in recent organic chemistry. They are
obtained by a partial reduction of the salts of certain
Diazo reactions (Chem.), a series of reactions whereby
diazo compounds are employed in substitution. These
reactions are of great importance in organic chemistry. | <urn:uuid:8f7ec21a-d6ab-4650-90f2-01b09a71b35a> | 3.21875 | 232 | Structured Data | Science & Tech. | 38.659212 | 95,626,979 |
It's very, very confusing.
Can someone explain it to me?
Difference between primary, secondary and tertiary structures of halogenoalkanes?
- Thread Starter
- 16-02-2018 20:17
- 16-02-2018 20:38
A primary halogenoalkane is one where there are two H atoms bonded to the C atom that is bonded to the halogen (Cl, Br, I), for example 1-bromobutane
Secondary is one H atom, for example 2-bromobutane
If you draw those examples out it should help
Tertiary is no H atoms
- 17-02-2018 16:09
If the carbon bearing the halogen is attached to at least two hydrogens, it's primary.
Exactly one hydrogen attached? You've got secondary.
No hydrogens attached? You've got tertiary.
It can be applied even for alcohols, not just halogenoalkanes. | <urn:uuid:1c91ea93-2630-4f10-9b1b-ccea5eb800f9> | 2.71875 | 208 | Comment Section | Science & Tech. | 58.701136 | 95,626,998 |
An object database is a database management system in which information is represented in the form of objects as used in object-oriented programming. Object databases are different from relational databases which are table-oriented. Object-relational databases are a hybrid of both approaches.
Object databases have been considered since the early 1980s.
Object-oriented database management systems (OODBMSs), also called ODBMSs (Object Database Management Systems), combine database capabilities with object-oriented programming language capabilities. OODBMSs allow object-oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the OODBMS. Because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the OODBMS and the programming language will use the same model of representation. Relational DBMS projects, by way of contrast, maintain a clearer division between the database model and the application.
As the usage of web-based technology increases with the implementation of Intranets and extranets, companies have a vested interest in OODBMSs to display their complex data. Using a DBMS that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer-aided design (CAD).
Object database management systems grew out of research during the early to mid-1970s into having intrinsic database management support for graph-structured objects. The term "object-oriented database system" first appeared around 1985. Notable research projects included Encore-Ob/Server (Brown University), EXODUS (University of Wisconsin-Madison), IRIS (Hewlett-Packard), ODE (Bell Labs), ORION (Microelectronics and Computer Technology Corporation or MCC), Vodak (GMD-IPSI), and Zeitgeist (Texas Instruments). The ORION project had more published papers than any of the other efforts. Won Kim of MCC compiled the best of those papers in a book published by The MIT Press.
Early commercial products included Gemstone (Servio Logic, name changed to GemStone Systems), Gbase (Graphael), and Vbase (Ontologic). The early to mid-1990s saw additional commercial products enter the market. These included ITASCA (Itasca Systems), Jasmine (Fujitsu, marketed by Computer Associates), Matisse (Matisse Software), Objectivity/DB (Objectivity, Inc.), ObjectStore (Progress Software, acquired from eXcelon which was originally Object Design), ONTOS (Ontos, Inc., name changed from Ontologic), O2 (O2 Technology, merged with several companies, acquired by Informix, which was in turn acquired by IBM), POET (now FastObjects from Versant which acquired Poet Software), Versant Object Database (Versant Corporation), VOSS (Logic Arts) and JADE (Jade Software Corporation). Some of these products remain on the market and have been joined by new open source and commercial products such as InterSystems Caché.
Object database management systems added the concept of persistence to object programming languages. The early commercial products were integrated with various languages: GemStone (Smalltalk), Gbase (LISP), Vbase (COP) and VOSS (Virtual Object Storage System for Smalltalk). For much of the 1990s, C++ dominated the commercial object database management market. Vendors added Java in the late 1990s and more recently, C#.
Starting in 2004, object databases have seen a second growth period when open source object databases emerged that were widely affordable and easy to use, because they are entirely written in OOP languages like Smalltalk, Java, or C#, such as Versant's db4o (db4objects), DTS/S1 from Obsidian Dynamics and Perst (McObject), available under dual open source and commercial licensing.
Object databases based on persistent programming acquired a niche in application areas such as engineering and spatial databases, telecommunications, and scientific areas such as high energy physics and molecular biology.
Another group of object databases focuses on embedded use in devices, packaged software, and real-time systems.
Most object databases also offer some kind of query language, allowing objects to be found using a declarative programming approach. It is in the area of object query languages, and the integration of the query and navigational interfaces, that the biggest differences between products are found. An attempt at standardization was made by the ODMG with the Object Query Language, OQL.
Access to data can be faster because an object can be retrieved directly without a search, by following pointers.
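As a language-neutral illustration of that point (plain Python objects here, not any particular OODBMS API), "following pointers" means a relationship is stored as a direct object reference, so traversal needs no key lookup or join; an object database essentially makes such an object graph persistent.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Transaction:
    amount: float
    memo: str

@dataclass
class Account:
    owner: str
    transactions: List[Transaction] = field(default_factory=list)

acct = Account("Alice")
acct.transactions.append(Transaction(-42.50, "groceries"))
acct.transactions.append(Transaction(1500.00, "salary"))

# Navigational access: follow the reference held in the object itself.
# In a relational store the transactions would sit in a separate table keyed
# by an account id and be re-assembled with a join at query time.
print(sum(t.amount for t in acct.transactions))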
Another area of variation between products is in the way that the schema of a database is defined. A general characteristic, however, is that the programming language and the database schema use the same type definitions.
Multimedia applications are facilitated because the class methods associated with the data are responsible for its correct interpretation.
Many object databases, for example Gemstone or VOSS, offer support for versioning. An object can be viewed as the set of all its versions. Also, object versions can be treated as objects in their own right. Some object databases also provide systematic support for triggers and constraints which are the basis of active databases.
The efficiency of such a database is also greatly improved in areas which demand massive amounts of data about one item. For example, a banking institution could get the user's account information and provide them efficiently with extensive information such as transactions, account information entries etc.
The Object Data Management Group was a consortium of object database and object-relational mapping vendors, members of the academic community, and interested parties. Its goal was to create a set of specifications that would allow for portable applications that store objects in database management systems. It published several versions of its specification. The last release was ODMG 3.0. By 2001, most of the major object database and object-relational mapping vendors claimed conformance to the ODMG Java Language Binding. Compliance to the other components of the specification was mixed. In 2001, the ODMG Java Language Binding was submitted to the Java Community Process as a basis for the Java Data Objects specification. The ODMG member companies then decided to concentrate their efforts on the Java Data Objects specification. As a result, the ODMG disbanded in 2001.
In 2005 Cook, Rai, and Rosenberger proposed to drop all standardization efforts to introduce additional object-oriented query APIs but rather use the OO programming language itself, i.e., Java and .NET, to express queries. As a result, Native Queries emerged. Similarly, Microsoft announced Language Integrated Query (LINQ) and DLINQ, an implementation of LINQ, in September 2005, to provide close, language-integrated database query capabilities with its programming languages C# and VB.NET 9.
In February 2006, the Object Management Group (OMG) announced that they had been granted the right to develop new specifications based on the ODMG 3.0 specification and the formation of the Object Database Technology Working Group (ODBT WG). The ODBT WG planned to create a set of standards that would incorporate advances in object database technology (e.g., replication), data management (e.g., spatial indexing), and data formats (e.g., XML) and to include new features into these standards that support domains where object databases are being adopted (e.g., real-time systems). The work of the ODBT WG was suspended in March 2009 when, subsequent to the economic turmoil in late 2008, the ODB vendors involved in this effort decided to focus their resources elsewhere.
In January 2007 the World Wide Web Consortium gave final recommendation status to the XQuery language. XQuery uses XML as its data model. Some of the ideas developed originally for object databases found their way into XQuery, but XQuery is not intrinsically object-oriented. Because of the popularity of XML, XQuery engines compete with object databases as a vehicle for storage of data that is too complex or variable to hold conveniently in a relational database. XQuery also allows modules to be written to provide encapsulation features that have been provided by Object-Oriented systems.
XQuery v1 and XPath v2 are far more complex than XPath v1 and XSLT v1, and no FOSS software implemented these standards even ten years after their publication; XML also did not meet all community demands as an open format. Since the early 2000s JSON has gained community support and applications, overtaking XML in the 2010s. JSONiq, a query analogue of XQuery for JSON (sharing the same XQuery core expressions and operations), demonstrated the functional equivalence of the JSON and XML formats. In this context, the main strategy of OODBMS maintainers was to retrofit JSON by using it as an internal data type.
In January 2016, with the 9.5 release, PostgreSQL became the first FOSS OODBMS to offer an efficient internal JSON data type (JSONB), with a complete set of functions and operations for all basic relational and non-relational manipulations.
An object database stores complex data and relationships between data directly, without mapping to relational rows and columns, and this makes it suitable for applications dealing with very complex data. Objects can have many-to-many relationships and are accessed by the use of pointers. Pointers are linked to objects to establish relationships. Another benefit of an OODBMS is that it can be programmed with small procedural differences without affecting the entire system. | <urn:uuid:d760b4c0-7c62-4126-861e-8d03a9f80762> | 3.203125 | 1,974 | Knowledge Article | Software Dev. | 26.282959 | 95,627,004
The physical removal of invasive plants can generate a significant volume of materials. Much of this material is bagged and sent to landfills, but is there another way to manage this material? Joe Van Rossum of the University of Wisconsin-Extension will present results from his research on the fate of garlic mustard and common buckthorn seeds placed into compost piles typical at large-scale compost facilities. The results also provide insight into the fate of invasive plant materials that may be inadvertently delivered to municipal yard waste sites.
Wild rice (Manoomin) is a cereal grain that is harvested and enjoyed throughout the Upper Great Lakes Region by people of varied cultural backgrounds. It has been a central component of the culture of the Anishinaabe people in the region for thousands of years and continues to be of great importance to many tribal communities. Its importance is noted by the fact that the Menominee tribe was named for this plant. Wild rice is also a key element of Great Lakes coastal and interior wetlands that provides food, cover, and spawning habitat for a variety of wildlife species. Unfortunately, wild rice populations have declined throughout much of the plant’s historic range, due in large part to human impacts. Given the strong cross-cultural importance of this grain, sustaining regional populations of wild rice requires a commitment to multicultural approaches that recognize, respect, and weave together ways of knowing that are influenced by both traditional knowledge and western science.
Deer play important roles in the ecology of Michigan, and in the culture of the people who live in and visit the Great Lakes State. How are decisions about the management of the state’s deer herds made? What goals are we working to achieve through that management? How can individuals or groups of individuals become involved in those decisions or work together to achieve similar goals at a local scale? Brent Rudolph, the Deer and Elk Program Leader for the Michigan Department of Natural Resources, will discuss and engage us in dialogue on these points in our next webcast.
Stewardship, including invasive species management, is a year-round endeavor. Each season brings different challenges and new opportunities. Knowing the correct timing for conducting invasive species work can greatly increase efficiency and reduce costs. When you think of invasive species work for fall, you typically think of treating woody invasive plants. However, there are a lot of other tasks that need to be completed in preparation for spring work. In this webcast, we’ll cover some of the basic tasks that managers and stewards can do in the fall to prepare for next year’s invasive species work, including scouting, mapping, firebreak installation, equipment maintenance, and planning.
Michigan has amazing inland lakes that overall are very healthy. However, research shows that the greatest threat to the overall health of inland lakes is the loss of near-shore habitat. The Michigan Natural Shoreline Partnership (MNSP), formed in 2008, is trying to change this. It was created to promote the importance of natural shorelines and soft-engineering techniques for restoring and protecting Michigan’s inland lakes. Our guest speaker in this webcast is Julia Kirkwood, the current Chair of the MNSP. She will discuss why natural shorelines are important and what happens when vegetation is removed. She will also discuss the MNSP resources and programs that are designed to train and educate professionals and property owners on the different options for creating a natural shoreline.
As the temperature rises and we head into summer, many of us will be heading towards the water. Michigan truly is a “water wonderland”, surrounded by the Great Lakes and home to thousands of inland lakes, rivers, and streams. Unfortunately, many of these aquatic systems have been significantly changed by invasive, non-native plants and animals – and it seems like there is a new invader discovered every day! During this webcast, we will take a look at some of the aquatic invasive species now found in Michigan, and some that are knocking on our door. We’ll also discuss a variety of ways to help stop the spread and protect our lakes and streams.
In 1998 and again in 2008 wildfires surged through much of Florida, destroying property and homes. Many Florida fire managers believe these fires were the result of a build-up of hazardous fuels due to public resistance to prescribed fire. The FIFE Program (Fire in Florida's Ecosystems) was designed to teach Florida educators about fire ecology, prescribed fire and wildfire prevention. This educator training program gives teachers the information and tools to teach students in grades 3-12 about fire’s natural role in nature, while also tying back to core concepts in math, science, social studies and language arts. The program is targeted to Florida’s schools but can be modified for any region in the country. Tune in to learn more about this dynamic program, which has taught over 3,000 teachers, camp counselors, 4-H and Scout leaders, and park staff since 1999.
Spring finally feels like it's on its way - the weather is warming up, the songbirds are back, and the garlic mustard is showing its green little leaves. On April 10th, we'll be kicking off our 2013 Garlic Mustard Challenge with this webcast. We'll cover a bit of the science behind the plant, and then we'll be joined by people leading the charge against this invasive in different places around Michigan! We'll hear about their work and success protecting the special places where they live. Tune in, and get pumped up for the 2013 Challenge! | <urn:uuid:3719e0ab-28f2-4747-9265-cd82b17209bd> | 3.1875 | 1,118 | Content Listing | Science & Tech. | 39.8424 | 95,627,006 |
Since human colonisation in the 17th century, the island has lost most of its unique animals. The litany includes the famous flightless dodo, giant tortoises, parrots, pigeons, fruitbats, and giant lizards. It is comparatively easy to notice the loss of a species, but much more difficult to realise how many interactions have been lost as a result.
Recent work has highlighted how it is not species diversity per se, which breathes life into ecosystems, but rather the networks of interactions between organisms. Thus, the real ghosts in Mauritius are not as much the extinct animals themselves, but more importantly the extinct networks of interactions between the species.
Reporting in this week’s PLoS ONE, Dennis Hansen, Christopher Kaiser and Christine Müller from the University of Zurich investigate how the loss of seed dispersal interactions in Mauritius may affect the regeneration of endemic plants. Why is it important for seeds to be dispersed away from maternal plants? One possible answer is given by the Janzen-Connell model, one of the most studied ecological patterns in tropical mainland forests – but which so far has not been experimentally investigated on oceanic islands. In essence, the model suggests that for successful seedling establishment, seeds need to be dispersed away from adult trees of the same species, to escape natural enemies that are associated with the adult trees (seed predators, pathogens, herbivores). The recent loss of most frugivores in Mauritius has left many fleshy-fruited plant species stranded without crucial seed dispersal interactions, leaving the natural regeneration dynamics of the forests at a virtual standstill.
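A toy calculation makes the Janzen-Connell logic concrete (the functional form and numbers are assumptions chosen for illustration, not measurements from the study): if natural-enemy pressure decays with distance from the maternal tree, expected seedling survival rises with dispersal distance.

import math

def survival_probability(distance_m, pressure_at_tree=3.0, decay_scale_m=10.0):
    """Toy Janzen-Connell curve: enemy pressure decays exponentially with
    distance from the maternal tree, and survival falls off with pressure."""
    pressure = pressure_at_tree * math.exp(-distance_m / decay_scale_m)
    return math.exp(-pressure)

for d in (0, 5, 10, 20, 40):
    print("%3d m from the maternal tree -> survival ~ %.2f" % (d, survival_probability(d)))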
Within the framework of the Janzen-Connell model, the ecologists investigated seed germination and seedling survival patterns of one of the many critically endangered endemic trees, Syzygium mamillatum (Myrtaceae), in relation to distance from maternal trees. The results showed strong negative effects of proximity to maternal trees on growth and survival of seedlings, suggesting that dispersal is crucial for successful seedling establishment of this species. However, no extant frugivores eat the fruits of S. mamillatum, and most fruits are left to rot on the forest floor. In pristine Mauritius, the fruits would likely have been eaten and the seeds dispersed by ground-dwellers such as the dodo, the giant tortoises or giant lizards.
It may seem an impossible task to resurrect these lost interactions – simply because the Mauritian dodo is, well, dead as a dodo. However, recent studies have suggested rejuvenating lost interactions in currently dysfunctional ecosystems by using analogue species to replace extinct species – so-called ‘rewilding’. In one of the first experimental assessments of the use of ecological analogue seed dispersers, the Zurich group of ecologists successfully used giant Aldabran tortoises as stand-ins for the two extinct Mauritian tortoises in feeding experiments. Seedlings from gut-passed seeds grew taller, had more leaves, and suffered less damage from natural enemies than any of the other seedlings. The results thus show that Aldabran giant tortoises can be efficient analogues that can replace extinct endemic seed dispersers of S. mamillatum.
Overall, while it is acknowledged that oceanic islands harbour a disproportionally large fraction of the most critically endangered plant species in the world, the study highlights how little we know about how the predictions of the Janzen-Connell model affects the regeneration and longer-term survival of endangered plants on islands. The results potentially have serious implications for the conservation management of rare plants on oceanic islands. Here, plants are often crammed into very small nature reserves, in which seedlings may be unable to disperse far enough to escape high natural enemy pressures around adult trees.
Lastly, in contrast to recent controversy about rewilding projects in North America and elsewhere, this study also illustrates how Mauritius and other oceanic islands are ideal study systems in which to empirically explore the use of ecological analogue species in restoration ecology.
Dennis Hansen | EurekAlert!
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
23.07.2018 | Science Education
23.07.2018 | Health and Medicine
23.07.2018 | Life Sciences | <urn:uuid:ea2ae79b-3ede-425a-a37a-d7b508bab937> | 3.96875 | 1,433 | Content Listing | Science & Tech. | 31.099106 | 95,627,011 |
Aerodynamics and Hydrodynamics of the Human Body, Birds, and Boeing
The aerodynamics of the human body are very interesting indeed. This may sound somewhat funny, because human beings can't fly; however, our desire to fly has enabled us to adapt and innovate to achieve the same purpose. Man has always dreamed of being able to fly like the birds. The aerodynamics of the human body are quite serious in many sports. To confirm this, just look at Lance Armstrong in the Tour de France.
Bicycle racing aerodynamics against the relative wind are quite serious. In most bicycle races the riders are doing in excess of 60 mph for a large part of the race, and the aerodynamics of the human being are as serious as they are in modern-day automobile performance, fuel economy and directional control. Wind tunnel testing of bicycle racing gear such as helmets, racing frames and racing attire is commonplace. We know that NASA material science is also used in modern sports, in everything from skis to golf clubs, Jamaican bobsleds to swimming suits, and marathon running shoes to those bicycle components.
Aerodynamics, material sciences and human geometry (biometrics, ergonomics) are as common in the Olympics as they are in auto racing, Dick Rutan and the X-Prize, the Reno Air Races, spaceflight and modern military equipment operation. In the Wright Brothers' first aircraft the pilot lay prone on the wing, so he was fully part of the aerodynamics from the first flight.
Now we have parachutes, parasailing, ultra-lights, Gyro-Copters, Jet packs, etc, where the aerodynamics of the human being is a huge factor. Having had the chance to race competitively street motorcycles in my day, I can tell you it is a huge component to performance. The human body is what it is, the bike is already quite aerodynamically designed, how the body is placed when you accelerate the motorcycle to 185 plus mph makes a huge difference. Whether you are shooting a man out of a cannon or jumping off the pier into the Annual Human Powered Flight Contest into the Hudson Bay, this is no joking matter, aerodynamics of the human body is just as important in racing, sport as it is for the birds in the sky or the fish which fly.
The aerodynamics and fluid dynamics of many species, especially species of prey, will ultimately decide their survival; if they lack adequate speed, they will not be able to eat. If a species which is hunted cannot dodge or move fast enough, it has no option other than to reproduce massively to avoid extinction, or to maintain tight formations, swarms, herds or social order to exploit the safety-in-numbers principle. The fastest bird, the peregrine falcon, was clocked at 217 mph in Germany while in a dive. Most falcons catch their prey in midair at speeds of around 100 mph, although usually much less. No wonder the military named the F-16 the Falcon.
The spine-tailed swift has a maximum speed as high as 106 mph in level flight. Thus the falcon might have a tough time extending its wings at the speed needed to catch it, so the swift can live near falcons without being eaten, and the falcon will go after lesser prey with better odds of eating. If you look at the F-14, it has the ability to bring its wings out for slow flight and keep them swept for accelerated and sustained cruising speed, very similar to the bird. The first movable-wing jet aircraft was the well-known X-5, which had variable in-flight wing configurations, as did the F-111, B-1 and several others. Many aircraft have been designed to change various other configurations for many reasons: the F-8 Crusader changed its wing's angle of attack, and the SST and Concorde drooped their noses on takeoff.
Most modern fighters have speed brakes to slow them down. All of these are techniques borrowed from nature, as birds adjust their heads in flight for visibility, adjust their angle of attack into the relative wind for a faster climb, adjust their wings for diving and stick out their feet to slow down. Well, yes, these techniques were taken from nature all right, that is pretty much the case, yet we have obviously improved on nature's designs in this dimension. After all, we are now building aircraft capable of Mach 5, and others which can carry many hundreds of tons in payload. In skydiving you quickly learn how to maneuver your body to achieve your intended path. A bird would do much the same, only be a hundred times better at it, since it practices all day long, every day.
Most ordnance that is delivered, such as bombs, needs to be dropped well under the speed of sound so that it does not create its own new trajectory and fly away from where it is pointed and needs to be delivered. Having been employed washing cars in my day, I can tell you we may in fact have borrowed that idea too. Aircraft, like birds, do lots of adjusting and playing around with configurations to take advantage of various situations as needed; thus, aerodynamically speaking, man has copied what he has observed in birds since his first flight. How about another example, the bald eagle, the United States of America's official mascot? It has a soaring level-flight speed of around 50 mph, which is quite fast in bird terms. While soaring, the adult eagle's wingspan is between 6 and 7 feet.
The largest discovered was 7.9 feet, but folding the wings back allows the eagle to dive at very fast speeds of around 75 mph, as it would be most difficult to attain significant speed with such large wings extended. Different configurations and methodologies can also be applied to human-body aerodynamics with a little modification. All the while the eagle has incredible accuracy in its vision, which would make military intelligence proud indeed; the F-15 Eagle relies on enhanced equipment plus the human component, which is three to four times less capable than the eagle's eyes, yet with the newest technology we have again adapted to better nature. If we look at the aerodynamics of nature and the process of evolution, we see that the most adapted species in the air, the eagle and the falcon, are truly marvels of hundreds of millions of years, and we begin to appreciate our daunting task of re-engineering. As we look to build aircraft, MAVs and UAVs to serve mankind's needs, we should take note of this. As we develop smaller technologies and demand versatility, we will definitely be looking at the best nature has to offer in the way of suggestions.
A human parachutist in a dive has also been clocked at 217 miles per hour, the maximum speed recorded for the falcon. We might ask ourselves: is the organic aerodynamic speed limit for evolution on this planet 217 mph? This reflects our present knowledge of the flight speeds of the most adapted species on the planet. Is this figure correct for previous periods? What was the speed of the pterodactyl? Was the air thinner or thicker below 10,000 ft back then? Would it have needed to go faster? Maybe, but if so, to escape what? Once you are the fastest and have no higher food-chain component to go after, why would you evolve into a higher-performing animal? Well, if you played, had contests and displays of agility for procreation and pecking order, and competed for territorial rights with your fellow species, then you might evolve to be better and have greater performance, develop higher cognition, hunting skills and defense skills, and evolve to fly faster too. This would be in line with current animal and human behavior and with the writings of the past 10,000-plus years of recorded history and observational study of species on Earth.
We know from the study of aerodynamics, hydrodynamics and racing that there are also issues of return on investment, or diminishing returns. For instance, if a pterodactyl were to fly faster, it would need to develop more muscle, lose weight and spend more time developing flight skills.
However, this takes time away from hunting. It would compromise its ability to fight off other pterodactyls and would mean more food intake was needed. So a happy medium would eventually be reached for continuation of the species, social order and so on. So then, is that compromise or happy medium 217 mph? A man in freefall from an aircraft, fully tucked and using the BMPs for rapid descent, maxed out at 217 mph, like the falcon. It is highly interesting that the diving speed of the highly evolved falcon is so similar to the diving speed of a human being. We can learn a lot about how the human body interacts with the elements, and the study of aerodynamics still has much to learn from nature.
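As a rough illustration of why a figure in that range is plausible (this calculation is not from the original article; the mass, drag coefficient, frontal area and air density used here are assumed, purely illustrative values), the drag-limited terminal velocity of a falling body is v = sqrt(2mg / (rho * Cd * A)):
import math
def terminal_velocity_ms(mass_kg, drag_coefficient, frontal_area_m2, air_density_kg_m3):
    # Drag-limited terminal velocity: v_t = sqrt(2 * m * g / (rho * Cd * A))
    g = 9.81  # gravitational acceleration, m/s^2
    return math.sqrt(2.0 * mass_kg * g / (air_density_kg_m3 * drag_coefficient * frontal_area_m2))
# Assumed numbers for a tucked skydiver at altitude:
v = terminal_velocity_ms(mass_kg=85.0, drag_coefficient=0.7, frontal_area_m2=0.25, air_density_kg_m3=1.0)
print(f"~{v * 2.237:.0f} mph")  # roughly 218 mph with these assumed inputs
Different assumed postures, weights and air densities shift the estimate substantially, which is why a tucked skydiver and a diving falcon can end up in a similar speed range.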
"Lance Winslow" - If you have innovative thoughts and unique perspectives, come think with Lance; www.WorldThinkTank.net/wttbbs
Should We Allow The Genetic Modification of Insects?
Some scientists argue over creation, intelligent design and evolution. Others argue did man create god or did god create man.
Re-Designing the ICBM With The Latest and Greatest Technology
We propose a Stealth Aircraft or non-stealth composite aircraft to have a honey comb structure with an external shell or skin for the diffraction of incoming laser weaponry, which we believe based on the results of the THEL Anti-Missile Defense System success, will be what future enemies will use to attempt to down our US Aircraft. Israel is selling information and weapons to China, Pakistan and other nations.
Mars Surface Exploration and AFF
As we study more and more about Mars we know there is life. Unfortunately in many regions of the planet it is not so evident.
Hyper Sound Wave Emissions to Quiet Helicopters
It is possible to disrupt the sound waves coming from a flying helicopter and re-direct those sound waves up. This would mean on a day with no clouds you would not be able to hear the helicopter from the ground.
Whispering Windows For Observation Decks of ISS and Moon Colonies
Talking glass, which was featured recently in the famous Tom Cruise Movie "Minority Report" where advertising would spring to life and communicate with the actor; is not new science. In fact it has been around since the 1940's and some believe that the ghost of Lincoln, which was discussed in the biographies of Richard Nixon as being in the White House when looking at his picture on the wall was Whispering Windows Technology.
Gulf of Mexico Formed by a Rotating Hurricane Trapped in the Region
Recently in a coffee shop I met a gentleman who had dismissed himself from University Level Study and had developed some interesting theories on land formations of the Americas. His theory included a concept that the Gulf of Mexico some 2.
Acoustic Transducers To Detect And Eliminate Incoming Mortar Rounds
There maybe a way to use acoustic transducers to pin-point incoming enemy ordinance such as mortar rounds in order to shoot them down. Directional sound waves from acoustic transducers set at specific locations around friendly locations can create artificial barriers, which the incoming ordinance will have to pass to reach its target and thus be detected and triangulated or quadrangulated for interception.
'Kenyanthropus platyops': - Perhaps the 6,000,000 year old men found by a maverick who went behind the authorities back at the Olduvai Gorge will be proven to actually not be outside the australopithecine lineage. But the Leakey family has found a 3.
Why Condition Your Boiler Water?
A boiler is used for generating steam. It does this by heating water to its boiling point, after which steam will evaporate from it.
The Harmonic
In a good history book by a leading light in the field of history, I recall Michael Grant saying Pythagoras was 'weird'. This book is The Rise of the Greeks and he does almost admit he is not qualified to judge the great sage, which is more than many academics will allow.
A Unique History of the Light Bulb
Most people assume that Thomas Edison invented the light bulb. This is only partially true however.
DNA Testing Breaks Down Barriers in the Court Room
DNA testing has three major applications for forensic studies: identification of missing persons; identification of victims of wars, accidents, and natural disasters; and crime investigation. Annually, more than 20,000 forensic DNA tests are performed in the UK.
Free Energy from Space
Tesla was always looking for a way to harvest electromagnetic energy and deliver it to the world wireless, after reading a biography about Tesla; I had come up with this concept. Harvesting and Wireless delivery of Electromagnetic Energy From Space.
Hibernating Humans for Space Flight
Can we hibernate humans using hydrogen sulfide gas for long-term space flight? The answer is most likely; "YES". Scientists have successfully hibernated mice spontaneously using hydrogen sulfide gas.
Theoretically is it Possible to Defy Gravity?
Many believe it is possible to build an anti-gravity machines and there are many small version which can do this by interfering with the gravity waves. Other say why build an anti-gravity wave machine when you can use the gravity to pull you the other way.
MP Apprehension of High Strung or Drunken Soldiers
Recently scientists have discovered the hydrogen sulfide gas caused mice to go into spontaneous hibernation. The genetic similarities to the mammalian class humans belong to includes these rodents as well.
Science Fiction by Arthur C Clarke
It is difficult to have a discussion with someone about science fiction if they are not familiar with the works of Arthur C Clarke. The concepts are not too awfully difficult to understand and not nearly as complex as reading Issac Asimov for the science fiction novice and anyone can enjoy Mr.
Dream Therapy and Learning thru Human Hibernation
Want to learn a new language? Would you like to earn a PhD in Physics; perfect your backstroke tennis swing, golf finesse or fly-fishing techniques, while you sleep? While, your immune system catches your body back up on your lack of exercise, fitness and proper diet? How about if I told you, that you most likely will shed about 20-25 lbs of extra weight while all this is going on? Well, if we further develop our sleep research, human hibernation studies and mind or brain advances we will be able to do all this and more in the near future, perhaps less than 5-years.Let me tell you about a few of the latest new rapidly approaching and potentially converging technologies, which will make all this possible.
New Energy Bill: Reducing Our Dependence on Foreign Oil
The U. S.
Issues with Aerial Fire Fighting
A few years ago I visited the Wyoming Contractor, which used WWII aircraft to fight such fires. I was amazed that such old aircraft were not in museums but rather in flying condition and used for dropping phoschek on fires.
| <urn:uuid:eca820fc-f408-4a57-8ffa-0afe09b19685> | 2.75 | 3,124 | Content Listing | Science & Tech. | 47.25768 | 95,627,018 |
Trophic state index
Trophic State Index (TSI) is a classification system designed to rate bodies of water based on the amount of biological activity they sustain. The TSI of a body of water is rated on a scale from zero to one hundred. Under the TSI scale, bodies of water may be defined as oligotrophic (TSI 0-40, having the least amount of biological productivity, "good" water quality); mesotrophic (TSI 40-60, having a moderate level of biological activity, "fair" water quality); or eutrophic to hypereutrophic (TSI 60-100, having the highest amount of biological activity, "poor" water quality). The quantities of nitrogen, phosphorus, and other biologically useful nutrients are the primary determinants of a body of water's TSI. Nutrients such as nitrogen and phosphorus tend to be limiting resources in standing water bodies, so increased concentrations tend to result in increased plant growth, followed by corollary increases in subsequent trophic levels.[a] Consequently, a body of water's trophic index may sometimes be used to make a rough estimate of its biological condition. Although the term "trophic index" is commonly applied to lakes, any surface water body may be indexed.
Carlson's Trophic State Index
Carlson's index was proposed by Robert Carlson in his 1977 seminal paper, "A trophic state index for lakes". It is one of the more commonly used trophic indices and is the trophic index used by the United States Environmental Protection Agency. The trophic state is defined as the total weight of biomass in a given water body at the time of measurement. Because they are of public concern, the Carlson index uses the algal biomass as an objective classifier of a lake or other water body's trophic status. According to the US EPA, the Carlson Index should only be used with lakes that have relatively few rooted plants and non-algal turbidity sources.
Because they tend to correlate, three independent variables can be used to calculate the Carlson Index: chlorophyll pigments, total phosphorus and Secchi depth. Of these three, chlorophyll will probably yield the most accurate measures, as it is the most accurate predictor of biomass. Phosphorus may be a more accurate estimation of a water body's summer trophic status than chlorophyll if the measurements are made during the winter. Finally, the Secchi depth is probably the least accurate measure, but also the most affordable and expedient one. Consequently, citizen monitoring programs and other volunteer or large-scale surveys will often use the Secchi depth. By translating the Secchi transparency values to a log base 2 scale, each successive doubling of biomass is represented as a whole integer index number. The Secchi depth, which measures water transparency, indicates the concentration of dissolved and particulate material in the water, which in turn can be used to derive the biomass. This relationship is expressed in the following equation:
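In its standard light-attenuation form (reconstructed here as an assumption, since the equation itself is not preserved in the text), the relation is $I_z = I_0 \, e^{-(k_w + \alpha C)\, z}$, which rearranges for the Secchi depth as $z = (\ln I_0 - \ln I_z)/(k_w + \alpha C) \approx \ln 10 / (k_w + \alpha C)$, using the approximation for $I_z$ given in the definitions below,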
- where z = the depth at which the disk disappears,
- I0 is the intensity of light striking the water's surface,
- Iz is about 10% of I0 and is considered a constant,
- kw is a coefficient for the attenuation of light by water and dissolved substances,
- α is treated as a constant with the units of square meters per milligram and
- C is the concentration of particulate matter in units for milligrams per cubic meter.
A lake is usually classified as being in one of three possible classes: oligotrophic, mesotrophic or eutrophic. Lakes with extreme trophic indices may also be considered hyperoligotrophic or hypereutrophic. The table below demonstrates how the index values translate into trophic classes.
|TSI|Chlorophyll (µg/L)|Total phosphorus (µg/L)|Secchi depth (m)|Trophic class|
|< 30—40|0—2.6|0—12|> 8—4|Oligotrophic|
|70—100+|56—155+|96—384+|0.5— < 0.25|Hypereutrophic|
Oligotrophic lakes generally host very little or no aquatic vegetation and are relatively clear, while eutrophic lakes tend to host large quantities of organisms, including algal blooms. Each trophic class supports different types of fish and other organisms, as well. If the algal biomass in a lake or other water body reaches too high a concentration (say >80 TSI), massive fish die-offs may occur as decomposing biomass deoxygenates the water.
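To make the scale concrete, here is a minimal sketch using the commonly quoted form of Carlson's Secchi-depth formula, TSI = 60 - 14.41 ln(SD in metres), together with the class boundaries given above; the constants are an assumption here and should be checked against Carlson (1977) rather than taken from this article.
import math
def tsi_from_secchi(secchi_depth_m):
    # Carlson-style index from Secchi disk depth (metres); each halving of
    # transparency adds about 10 units (14.41 * ln 2), matching the
    # log-base-2 doubling-of-biomass scale described above.
    return 60.0 - 14.41 * math.log(secchi_depth_m)
def trophic_class(tsi):
    # Class boundaries as given earlier in the article.
    if tsi < 40:
        return "oligotrophic"
    if tsi < 60:
        return "mesotrophic"
    return "eutrophic to hypereutrophic"
for depth_m in (8.0, 4.0, 2.0, 1.0, 0.5, 0.25):
    index = tsi_from_secchi(depth_m)
    print(f"Secchi {depth_m:>4} m -> TSI {index:5.1f} ({trophic_class(index)})")
With these constants, a Secchi depth of 8 m maps to a TSI of about 30 and 0.25 m to about 80, consistent with the endpoints of the table above.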
An oligotrophic lake is a lake with low primary productivity, as a result of low nutrient content. These lakes have low algal production, and consequently, often have very clear waters, with high drinking-water quality. The bottom waters of such lakes typically have ample oxygen; thus, such lakes often support many fish species such as lake trout, which require cold, well-oxygenated waters. The oxygen content is likely to be higher in deep lakes, owing to their larger hypolimnetic volume.
Ecologists use the term oligotrophic to distinguish unproductive lakes, characterised by nutrient deficiency, from productive, eutrophic lakes, with an ample or excessive nutrient supply. Oligotrophic lakes are most common in cold regions underlain by resistant igneous rocks (especially granitic bedrock).
Mesotrophic lakes are lakes with an intermediate level of productivity. These lakes are commonly clear water lakes and ponds with beds of submerged aquatic plants and medium levels of nutrients.
The term mesotrophic is also applied to terrestrial habitats. Mesotrophic soils have moderate nutrient levels.
A eutrophic body of water, commonly a lake or pond, has high biological productivity. Due to excessive nutrients, especially nitrogen and phosphorus, these water bodies are able to support an abundance of aquatic plants. Usually, the water body will be dominated either by aquatic plants or algae. When aquatic plants dominate, the water tends to be clear. When algae dominate, the water tends to be darker. The algae engage in photosynthesis which supplies oxygen to the fish and biota which inhabit these waters. Occasionally, an excessive algal bloom will occur and can ultimately result in fish death, due to respiration by algae and bottom-living bacteria. The process of eutrophication can occur naturally and by human impact on the environment.
Hypereutrophic lakes are very nutrient-rich lakes characterized by frequent and severe nuisance algal blooms and low transparency. Hypereutrophic lakes have a visibility depth of less than 3 feet, greater than 40 micrograms/litre total chlorophyll and greater than 100 micrograms/litre phosphorus.
The excessive algal blooms can also significantly reduce oxygen levels and prevent life from functioning at lower depths creating dead zones beneath the surface.
Likewise, large algal blooms can cause biodilution to occur, which is a decrease in the concentration of a pollutant with an increase in trophic level. This is opposed to biomagnification and is due to a decreased concentration from increased algal uptake.
Trophic index drivers
Both natural and anthropogenic factors can influence a lake or other water body's trophic index. A water body situated in a nutrient-rich region with high net primary productivity may be naturally eutrophic. Nutrients carried into water bodies from non-point sources such as agricultural runoff, residential fertilisers, and sewage will all increase the algal biomass, and can easily cause an oligotrophic lake to become hypereutrophic.
Often, the desired trophic index differs between stakeholders. Water-fowl enthusiasts (e.g. duck hunters) may want a lake to be eutrophic so that it will support a large population of waterfowl. Residents, though, may want the same lake to be oligotrophic, as this is more pleasant for swimming and boating. Natural resource agencies are generally responsible for reconciling these conflicting uses and determining what a water body's trophic index should be.
- Biomass (ecology)
- Nonpoint source pollution
- Secchi disk
- Surface runoff
- Trophic level
- Trophic level index, a similar measure used in New Zealand
- Water quality
- List of biological development disorders
- Note that this use of trophic levels refers to feeding dynamics, and has a much different meaning than a body of water's trophic index.
- University of Southern Florida Water Institute. "Trophic State Index (TSI)". Learn More About Trophic State Index (TSI) - Lake.WaterAtlas.org. University of Southern Florida. Retrieved 6 June 2018.
- United States Environmental Protection Agency (2007) Carlson's Trophic State Index. Aquatic Biodiversity. http://www.epa.gov/bioindicators/aquatic/carlson.html accessed 17 February 2008.
- Carlson, R.E. (1977) A trophic state index for lakes. Limnology and Oceanography. 22:2 361--369.
- Carlson R.E. and J. Simpson (1996) A Coordinator's Guide to Volunteer Lake Monitoring Methods. North American Lake Management Society. 96 pp.
- Definition of eutrophic at dictionary.com. | <urn:uuid:08a42bcf-24ec-4483-a5a5-b7fa9ec7753e> | 3.515625 | 1,924 | Knowledge Article | Science & Tech. | 32.520033 | 95,627,026 |
The remote forests of Africa's Congo Basin have long been a blind spot for scientists working to understand how Earth's natural cycles respond to the environmentally distinct characteristics of different regions.
Now, two Florida State University researchers are part of an international team of scientists revealing the unexpected role that large-scale fires and high nitrogen deposition play in the ecology and biogeochemistry of these lush Central African forests.
Their findings, published in the journal Proceedings of the National Academy of Sciences, may signal a fundamentally new understanding of these forests' structure, functioning and biodiversity.
“We have been working in the Congo Basin for a decade, and discoveries like this provide novel insights into how our planet works and remind us how much we still have to learn about the world around us,” said Rob Spencer, associate professor in the Department of Earth, Ocean and Atmospheric Science.
In collaboration with their Belgian and Congolese colleagues, FSU scientists conducted extensive field research throughout the densely forested Congo Basin — a region whose inaccessibility and political turmoil have rendered it critically understudied and data poor.
Samples collected during the fieldwork were processed using an ultrahigh-resolution mass spectrometer housed at the FSU-headquartered National High Magnetic Field Laboratory. This sophisticated analytical tool provides detailed molecular signatures of the organic material in a given sample.
Researchers were particularly interested in sifting through the samples for a group of fire-derived compounds known as condensed aromatics, which indicate the role of fire as a source of organic material.
“Sure enough, we found that the fire-derived condensed aromatics were associated with the high levels of nitrogen in the samples,” said FSU doctoral candidate Travis Drake, a co-author of the study. “The atmospheric modeling had already suggested that these elevated depositions of nitrogen were linked to fire, but now we had some molecular evidence to back it up.”
The forests of the Congo Basin are bordered on their northern and southern sides by vast mosaics of dry savannas and grasslands. When fires ignite in these drier areas because of slash-and-burn agriculture or natural causes like lightning, vast tracts of biomass go up in smoke. Much of the organic nitrogen from these fires, researchers have now found, is swept up into the atmosphere and deposited on the forests.
In tropical ecosystems like the Congolese forests, nitrogen can often act as a limiting nutrient — a naturally occurring element whose scarcity can curb biological growth. When surpluses of a limiting nutrient are pumped into an ecosystem, they can stimulate and accelerate growth in a handful of enterprising species.
On its face, this process may seem harmless. But, Drake said, nutrient saturation can also have the effect of curbing biodiversity.
“Each organism in an ecosystem specializes and tries to find its small place in the cascade of nutrients,” Drake said. “But if the forest is being flooded with nutrients, certain plants and organisms will benefit far more than others, and that can lead to less biodiversity.”
Drake said these findings raise a major question about the ecology of the Congo forests: If these high rates of nitrogen deposition have been occurring for hundreds, thousands or tens of thousands of years, how might that have affected the forests' long-term growth and development?
“There are some remarkable ecological differences between the Congo forest and other rainforests like the Amazon,” he said. “The Amazon doesn't have the expansive, arid savannas or the significant fire inputs that are found in the Congo, and there is far less biodiversity in the Congo than in the Amazon. If fires have been pumping nitrogen into the atmosphere for years, it is possible the Congo may be a particularly over-fertilized forest.”
Until now, little research had been conducted on the ecology and biogeochemistry of the Congo forests. In fact, in many cases, models of the region relied on decades-old data, speculation or rates crudely grafted from other rainforests around the globe.
Now, scientists are working with a renewed appreciation of Central Africa's distinct ecological characteristics. Drake said these latest findings help signal a new age of research in the forests of the Congo Basin.
“People are now seeing the Congo as an important hotbed for research,” he said. “It is an encouraging time to be a scientist working in the Congo.” | <urn:uuid:1fd0f84c-7729-4e8e-b89f-9a3a2a1b8d38> | 3.375 | 962 | News Article | Science & Tech. | 19.940263 | 95,627,053 |
Australian researchers have been left surprised by a rare bird of prey that travelled more than 3,000 km from its natural habitat.
Michael Mulvaney, a senior ecologist at the Australian Capital Territory (ACT) Environment and Planning Directorate, said a tracker was attached to the rare little eagle to assess the impact of local developments on the native species.
He told Australian Broadcasting Corporation (ABC) radio on Friday that the bird was recorded travelling 3,300 km in less than three weeks to a small town in the Northern Territory (NT).
“We were surprised … we weren’t expecting our Canberra bird to go all the way to the Northern Territory, particularly as it’s a breeding male which seems to have been breeding in the Canberra area since 2001,” Mulvaney said.
“For it to just one day decide to head up north, that it’s sick of the cold and to go that far was really surprising.”
The eagle flew up to 500 km in a single day, reaching a top speed of 55 km per hour (kph).
Previous efforts to track little eagles outside the ACT had shown the bird traditionally flies much shorter distances.
Mulvaney said his team was astounded by the journey of the little eagle, which is one quarter the size of a wedge-tailed eagle and one of the smallest eagle species in the world.
The little eagle is considered a threatened species in both the ACT and New South Wales (NSW).
“In 1998, there were 13 pairs of the little eagles in the ACT, by 2011 we were down to one pair, so they’re a bird we’re trying to look after,” Mulvaney said.
“Two of those breeding pairs are in areas that are close to new developments … so we were trying to examine what development was going to do to the breeding pairs” reports Xinhua, Canberra. | <urn:uuid:2f3fee04-0ef8-4734-bd87-1f558448e5c6> | 3.21875 | 401 | News Article | Science & Tech. | 52.039071 | 95,627,054 |
Anchor tag is an one of the main element in the HTML which is used to create Hyperlinks.
Hyperlink (also called link) is generally a text that appears as active text, hovering mouse changes mouse icon to hand and clicking this text executes an event. By default this text color is blue with underline style.
<a href="http://www.dotnetfunda.com" title="DotNetFunda.com">DotNetFunda.com</a>
In the above code snippet, "href" stands for 'Hypertext refernce' and "http" stands for 'Hypertext transfer protocol'.
Notice the hyperlink in the above output image. Clicking on the hyperlink sends user to http://www.dotnetfunda.com website in the same tab. If we want to open the target in the new tab we use "target" attribute as below.
<a href="http://www.dotnetfunda.com" target="_blank" title="DotNetFunda.com">DotNetFunda.com</a>Views: 3868 | Post Order: 2 | <urn:uuid:536c1861-f5ca-4fb1-b778-f7e40c0da47e> | 3.265625 | 241 | Tutorial | Software Dev. | 62.524853 | 95,627,081 |
WASHINGTON (Reuters) – People living on the Solomon Islands in the Pacific Ocean long had spoken of a big, tree-dwelling rat called vika that inhabited the rainforest, but the remarkable rodent managed to elude scientists — until now.
After searching for it for years with cameras mounted in trees and traps, scientists said they finally caught up with the rat on Vangunu Island, part of the Solomon Islands, spotting one as it emerged from a tree felled by loggers.
It instantly joined the list of the biggest rats in the world, weighing about four times more than an ordinary rat and measuring about 1-1/2 feet (about half a meter) long.
“Vika lives in a very thick, complex forest, and it is up in the canopy so it is difficult to find. It is also a rare species. It is likely there are not many of these rats left,” mammalogist Tyrone Lavery of the Field Museum in Chicago, who led the research, said on Thursday.
The orange-brown rat dines on nuts and fruit, has short ears, a smooth tail with very fine scales and wide feet that allow it to move through the forest canopy.
The rat is reputed to chew holes in coconuts to eat the inside. “I haven’t found proof of this yet, but I have found that they can eat a very thick-shelled nut called a ngali nut,” Lavery said.
A small number of rat species around the world rival vika’s size. Lavery said a vika relative also inhabiting the Solomon Islands, called Poncelet’s giant rat, is twice the size.
The world’s largest rodent is not a rat, but rather South America’s barrel-shaped capybara.
A phenomenon called the “island effect” may help account for the size of Vika and other big rat species in the Solomon Islands.
“The island effect, or island syndrome, relates to the effects living on an island has on the evolution of body size. On islands, small species such as rats, evolve to have larger body size, they attain higher population densities and they produce fewer offspring,” Lavery said.
“Vika also probably arrived on an island where there were no other large mammals living in the canopy eating fruits and nuts so the species evolved to fill this niche,” Lavery said.
Lavery said vika should be considered critically endangered, with logging threatening its habitat.
The research was published this week in the Journal of Mammalogy.
Reporting by Will Dunham; Editing by Sandra Maler | <urn:uuid:7fc8f5f4-0991-48af-ac1b-3c9851ad8886> | 3.46875 | 558 | News Article | Science & Tech. | 47.256789 | 95,627,082 |
Analysis of Monoterpene Hydrocarbons in the Atmosphere
Monoterpenes as well as isoprene are important biogenic organics in the atmosphere. Monoterpenes are mainly emitted from coniferous trees, and are often responsible for the aroma of forest air, while isoprene is dominantly emitted from deciduous trees. Since they are chemically reactive, the concern about their role in atmospheric chemistry has prompted several investigations to determine their ambient air concentrations (Rasmussen and Went 1965; Holdren et al. 1979; Yokouchi et al. 1981b, 1983; Roberts et al. 1983b, 1985; Riba et al. 1987; Yokouchi and Ambe 1988).
Keywords: Monoterpene Hydrocarbon, Niwot Ridge, Adsorption Tube, Monoterpene Concentration, Subambient Temperature
- Roberts JM, Fehsenfeld DL, Albritton DL, Sievers RE (1983a) Sampling and analysis of monoterpene hydrocarbons in the atmosphere with Tenax gas chromatographic porous polymer. In: Keith LH (ed) Identification and analysis of organic pollutants in air. Butterworth, Boston, MA, pp 371–387
- Stern AC (ed) (1976) Air pollution. Measuring, monitoring and surveillance of air pollution. 3rd edn, vol 3. Academic Press, New York | <urn:uuid:d2b9e838-50a4-493d-8480-b552e2992668> | 2.984375 | 299 | Truncated | Science & Tech. | 23.980643 | 95,627,085 |
London, Nov 28 (PTI) Scientists have developed a new technology that uses nuclear waste to generate clean electricity in a nuclear-powered battery.
Researchers from the University of Bristol in the UK have grown a man-made diamond that, when placed in a radioactive field, is able to generate a small electrical current.
The development could solve some of the problems of nuclear waste, clean electricity generation and battery life, researchers said.
Unlike the majority of electricity-generation technologies, which use energy to move a magnet through a coil of wire to generate a current, the man-made diamond is able to produce a charge simply by being placed in close proximity to a radioactive source.
"There are no moving parts involved, no emissions generated and no maintenance required, just direct electricity generation," said Tom Scott, Professor in the universitys Interface Analysis Centre.
"By encapsulating radioactive material inside diamonds, we turn a long-term problem of nuclear waste into a nuclear-powered battery and a long-term supply of clean energy," said Scott.
The team has demonstrated a prototype diamond battery using Nickel-63 as the radiation source.
However, they are now working to significantly improve efficiency by utilising carbon-14, a radioactive version of carbon, which is generated in graphite blocks used to moderate the reaction in nuclear power plants.
Research by academics at Bristol has shown that the radioactive carbon-14 is concentrated at the surface of these blocks, making it possible to process it to remove the majority of the radioactive material.
The extracted carbon-14 is then incorporated into a diamond to produce a nuclear-powered battery.
"Carbon-14 was chosen as a source material because it emits a short-range radiation, which is quickly absorbed by any solid material," said Neil Fox from the School of Chemistry.
"This would make it dangerous to ingest or touch with your naked skin, but safely held within diamond, no short-range radiation can escape. In fact, diamond is the hardest substance known to man, there is literally nothing we could use that could offer more protection," said Fox.
Despite their low-power, relative to current battery technologies, the life-time of these diamond batteries could revolutionise the powering of devices over long timescales.
Using carbon-14 the battery would take 5,730 years to reach 50 per cent power, which is about as long as human civilisation has existed.
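As a quick check of that figure (a back-of-the-envelope sketch, not part of the original report), one can assume the battery's output scales with the amount of carbon-14 remaining and use the isotope's roughly 5,730-year half-life:
def remaining_power_fraction(years, half_life_years=5730.0):
    # Assumes output scales with the carbon-14 left: P(t) / P0 = 0.5 ** (t / half_life)
    return 0.5 ** (years / half_life_years)
print(remaining_power_fraction(5730))    # 0.5  -> 50 per cent output after one half-life
print(remaining_power_fraction(11460))   # 0.25 -> 25 per cent after two half-lives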
"We envision these batteries to be used in situations where it is not feasible to charge or replace conventional batteries.
"Obvious applications would be in low-power electrical devices where long life of the energy source is needed, such as pacemakers, satellites, high-altitude drones or even spacecraft," Scott said. PTI SAR SAR | <urn:uuid:23fabf11-a35f-45ff-bf44-a0b2ada751ea> | 3.625 | 560 | News Article | Science & Tech. | 22.268177 | 95,627,087 |
Torrential rainfall on early Mars eroded the network of deep valleys still visible on the surface of the planet today, new research claims
- Experts found river networks on Mars have parallels with waterways on Earth
- Heavy rainfall over a prolonged period may have run off quickly over the surface
- This is how river valleys develop in arid regions on Earth, such as in Arizona
- One hypothesis is the northern third of Mars was once covered by an ocean
- Water evaporated, condensed around volcanoes and led to heavy rainfall
Torrential downpours of rain on the surface of Mars formed a network of deep valleys still visible on its surface today, experts say.
Mars bears the imprint of canyons and ravines – similar to rivers on Earth – that formed billions of years ago and crisscross the surface of the planet.
Exactly how they were created has long been debated, but scientists assume there must once have been enough water to feed streams that cut channels into the soil.
Now, a new study claims to prove that rainwater must have been behind the erosion of the largely dry Martian surface.
A giant ocean that once covered a third of the surface of Mars may have been responsible for the waters evaporating and condensing in the atmosphere, researchers claim.
Scroll down for video
Torrential downpours of rain on the surface of Mars formed a network of deep valleys still visible on its surface today. This image shows the central portion of Osuga Valles, which has a total length of 100 miles. In some places, it is 12 miles wide and plunges to a depth of 3,000 ft
Experts from the Swiss Federal Institute of Technology (ETH) in Zurich determined the branching structures of the former river networks on Mars have parallels with waterways that run through arid regions on Earth.
The distribution of the branching angles of the river valleys on Mars is strikingly similar to those in places like Arizona.
Researchers observed the same valley network patterns in a landscape in the US state where astronauts are currently training for future Mars missions.
Sporadic heavy rainfall on Mars over a prolonged period of time may have run off quickly over the surface, shaping the valley networks.
This is how river valleys develop in arid regions on Earth.
Using statistics from all mapped river valleys on Mars, the team concluded that the contours still visible on Mars today must have been created by similar surface run-off of rainwater.
'Recent research shows that there must have been much more water on Mars than previously assumed,' said physicist Hansjörg Seybold from ETH.
'It’s likely that most of it evaporated into space. Traces of it might still remain in the vicinity of Mars, but this is a question for a future space mission.'
Mars bears the imprint of canyons and ravines, similar to rivers on Earth, that formed billions of years ago and crisscross its surface. The angles of valley branches on Mars - here a section of the Warrego Valles region - are narrow and correspond to those of arid regions on Earth
The angle of a river branch is determined, among other things, by how dry an area is and whether groundwater emerges from the ground. Experts observed the same valley network patterns in Arizona, where astronauts are training for future Mars missions
WAS MARS EVER HOME TO LIQUID WATER?
Evidence of water on Mars dates back to the Mariner 9 mission, which arrived in 1971. It revealed clues of water erosion in river beds and canyons as well as weather fronts and fogs.
Viking orbiters that followed caused a revolution in our ideas about water on Mars by showing how floods broke through dams and carved deep valleys.
Mars is currently in the middle of an ice age, and before this study, scientists believed liquid water could not exist on its surface.
In June 2013, Curiosity found powerful evidence that water good enough to drink once flowed on Mars.
In September of the same year, the first scoop of soil analysed by Curiosity revealed that fine materials on the surface of the planet contain two per cent water by weight.
In 2017, scientists provided the best estimates for water on Mars, claiming it once had more liquid H2O than the Arctic Ocean - and the planet kept these oceans for more than 1.5 billion years.
The findings suggest there was ample time and water for life on Mars to thrive, but over the last 3.7 billion years the red planet has lost 87 per cent of its water - leaving it barren and dry.
The branching angles on Mars are comparatively low, so the team ruled out the influence of groundwater seepage on Mars.
River networks that are strongly affected by re-emerging groundwater - as found, for example, in Florida - tend to have wider branching angles between the two tributaries and do not match the narrow angles of streams in arid areas.
Conditions such as those found in arid landscapes on Earth today likely prevailed on Mars for only a relatively short period, between 3.6 and 3.8 billion years ago.
In that period, the atmosphere on Mars may have been much denser than it is today.
One hypothesis suggests the northern third of Mars was covered by an ocean at that time.
Water evaporated, condensed around the volcanoes of the highlands to the south of the ocean and led to heavy rainfall there.
As a result, rivers formed, which left the traces that can be observed on Mars today.
The full findings of the study were published in the journal Science Advances.
Exactly how the valleys visible on the surface of Mars (pictured) were created has long been debated, but scientists assume there must once have been enough water to feed streams that cut channels into the soil (stock)
One hypothesis suggests that the northern third of Mars was covered by an ocean at that time. This image taken by the Mars Pathfinder mission in 1999 shows the surface of the red planet as it appears today (stock)
DO SCIENTISTS BELIEVE WE COULD EVER FIND LIFE ON MARS?
Over the years, scientists have found a number of promising signs that life may have been present on Mars, including evidence of water, chemical reactions, and expansive ice lakes beneath the surface.
Life on Mars is unlikely to have flourished on the surface, given the harsh conditions – including radiation, solar winds, and frigid temperatures.
As a result, many scientists believe organisms evolved to live beneath the surface of the Red Planet.
In November 2016, Dr Christian Schröder, an environmental science and planetary exploration lecturer at Stirling University, said: ‘For life to exist in the areas we investigated, it would need to find pockets far beneath the surface, located away from the dryness and radiation present on the ground.’
This is supported by evidence of water beneath the surface.
Researchers have identified mudstones and sedimentary bands on Mars, which only form when there is water present for thousands of years.
Vast oceans of ice have also been uncovered, lying just below the surface of the planet.
The presence of ice and water beneath the Red Planet greatly increases the chances that there was once at least microscopic life on Mars and that some form of the organism could be living there today.
'Any place on Earth we find liquid water we find life,’ Jim Crocker, vice president of Lockheed Martin's Space Systems said in August 2016.
‘It's very exciting to understand the possibility that life could possibly have started on Mars before it lost its atmosphere, and perhaps even in the deeper surfaces, where water is still liquid because of the heat of the planet, perhaps there's bacterial life.'
Having water just below the surface also means that human colonies could survive and even thrive on the planet and indicates that fuel for manned spaceflight could be manufactured there.
In 2017, Nasa's Curiosity rover also found evidence of boron on the red planet's surface.
This is another key ingredient for life, and scientists say the find is a huge boost in the hunt for life.
Boron was unearthed in the Gale Cater, which is 3.8 billion years old, younger than the likely formation of life on Earth.
That means the conditions from which life could have potentially grown may have existed on ancient Mars, long before organisms began to develop on Earth.
A controversial 2001 study into a 4.5 billion-year-old meteorite, dubbed ALH84001, which was found in Antarctica's Allan Hills ice field in 1984, claimed it had definitive prove of life on Mars.
Meteorite ALH84001 was blasted off the surface of Mars by a comet or asteroid 15 million years ago, and Nasa researchers said it contains proof the Red Planet was once teeming with bugs which lived at the bottom of shallow pools and lakes.
They also suggested there would have been plants or organisms capable of photosynthesis and complex ecosystems on Mars.
However British experts said at the time that the evidence, though exciting, had to be treated with caution and could not be taken as conclusive, since many non-biological chemical processes could also explain what was found.
| <urn:uuid:2930e294-bdaf-4cda-bc1d-b9b867809e26> | 3.515625 | 2,046 | News Article | Science & Tech. | 40.591507 | 95,627,094 |
- Open Access
Amplification of tsunami heights by delayed rupture of great earthquakes along the Nankai trough
© The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences; TERRAPUB 2010
Received: 16 March 2009
Accepted: 18 December 2009
Published: 17 June 2010
We investigated the effect of delayed rupture of great earthquakes along the Nankai trough on tsunami heights on the Japanese coast. As the tsunami source, we used a model of the 1707 Hoei earthquake, which consists of four segments: Tokai, Tonankai, and two Nankai segments. We first searched for the worst case, in terms of coastal tsunami heights, of rupture delay time on each segment, on the basis of superposition principle for the linear long wave theory. When the rupture starts on the Tonankai segment, followed by rupture on the Tokai segment 21 min later, as well as the eastern and western Nankai segments 15 and 28 min later, respectively, the average coastal tsunami height becomes the largest. To quantify the tsunami amplification, we compared the coastal tsunami heights from the delayed rupture with those from the simultaneous rupture model. Along the coasts of the sea of Hyu’uga and in the Bungo Channel, the tsunami heights become significantly amplified (>1.4 times larger) relative to the simultaneous rupture. Along the coasts of Tosa Bay and in the Kii Channel, the tsunami heights become amplified about 1.2 times. Along the coasts of the sea of Kumano and Ise Bay, and the western Enshu coast, the tsunami heights become slightly smaller for the delayed rupture. Along the eastern Enshu coast, the coast of Suruga Bay, and the west coast of Sagami Bay, the tsunami heights become amplified about 1.1 times.
The 1707 Hoei earthquake was one of the largest earthquakes that occurred in the Edo period, when historical documents had become available throughout Japan. This earthquake had an Mw over 8.4 and produced tsunami runup heights of more than 5 m on the Pacific coast. For this event, the rupture is considered to have started from the N2 segment, and all the segments (N1 to N4) were broken with short time delays. However, based on historical documents, some studies claimed that time delays of less than several tens of minutes might have existed between fault rupture on each segment during the 1707 Hoei earthquake (Iida, 1985; Usami, 2003). Such delayed rupture between segments would produce positive interference of tsunami amplitude and duration compared with simultaneous rupture, and is hence important for mitigating tsunami disasters associated with Nankai trough earthquakes.
The effect of such rupture delay on coastal tsunami heights was studied by Kawata et al. (2003), and also by the Central Disaster Mitigation Council (http://www.bousai.go.jp/jishin/chubou/nanka/16/sankousiryou2_9.pdf). They evaluated, based on the superposition principle for the linear long wave theory, the tsunami heights at 10 local points such as Shizuoka, Nagoya and Wakayama. They simply considered the worst case for each of the evaluation points without considering seismological or geophysical rupture scenarios. Evaluation of tsunami heights based on the superposition principle is simple and efficient; however, there is a possibility of overestimating the tsunami heights by using the linear long wave equations without considering the effects of bottom friction or advection terms.
In addition to the historical records of past earthquakes, geophysical modeling supports the nucleation of large earthquakes from the Tonankai (N2) segment off Kii Peninsula. Hori et al. (2004) made numerical simulation of cyclic occurrence of large earthquakes along the Nankai trough and showed that large earthquakes nucleate off Kii Peninsula, where both dip angle and the convergence rate of the subducting slab are larger than the other segments.
In this paper, we investigate the effects of delayed rupture of the Nankai and Tokai segments based on a seismological model, assuming that the rupture initiates at the Tonankai (N2) segment off the Kii Peninsula. We first search for the worst case of rupture delay in terms of coastal tsunami heights based on the superposition principle for the linear long wave theory. We then perform a tsunami simulation using nonlinear long wave theory for the worst-case scenario, to examine the amplification of tsunami heights relative to the simultaneous rupture on the coasts from Sagami Bay through Kyushu.
2. Numerical Computation
Fault parameter of the 1707 Hoei earthquake (Annaka et al., 2003).
3. Search for the Worst Case of Rupture Delay
The rupture delay time for each segment is searched under three conditions based on seismological and geophysical information: 1) the rupture starts on the N2 segment, 2) the delay time on neighboring segments is less than 60 min, and 3) rupture on the N4 segment is followed by that on the N3 segment. The tsunami waveform at each local point is evaluated based on the superposition principle. The maximum tsunami height is searched at 1 min intervals over 120 min from the initial rupture of the N2 segment.
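As a rough sketch of this superposition step (illustrative only, not code from the paper; the function names, array layout and one-minute sampling are assumptions), the synthetic waveform at a coastal point can be built by shifting each segment's precomputed waveform by its rupture delay and summing:
import numpy as np
def synthesize_waveform(segment_waveforms, delays_min, duration_min=120, dt_min=1):
    # segment_waveforms: dict mapping segment name ("N1".."N4") to a 1-D array of
    #   sea-surface height at one coastal point, sampled every dt_min minutes from
    #   a run in which that segment ruptures at t = 0.
    # delays_min: dict mapping segment name to its rupture delay in minutes.
    n = duration_min // dt_min
    synthetic = np.zeros(n)
    for name, wave in segment_waveforms.items():
        shift = delays_min[name] // dt_min
        m = min(n - shift, len(wave))
        if m > 0:
            synthetic[shift:shift + m] += wave[:m]
    return synthetic
def max_height(segment_waveforms, delays_min):
    # Maximum tsunami height over the 120-minute window after the N2 rupture.
    return float(synthesize_waveform(segment_waveforms, delays_min).max())
Because the linear long wave equations are used here, the per-segment waveforms only need to be computed once and can then be recombined cheaply for any delay combination.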
At Susaki on the coast of Tosa Bay, the synthetic waveform is dominated by waveforms from the N4 and N3 segments, and the contribution from the N1 and N2 segments is small. Because Susaki is located far from the N1 and N2 segments, the delayed rupture on the N1 segment has little effect on the synthetic waveform. The same feature is seen on the coasts of the sea of Hyu’uga, in the Bungo Channel, of Tosa Bay and in the Kii Channel.
Rupture delay time of the worst case scenario.
|Segment|Rupture delay time (min)|
|N1 (Tokai)|21|
|N2 (Tonankai)|0|
|N3|15|
|N4|28|
At Shimoda in Izu Peninsula, the waveform from the N1 segment has the largest contribution, followed by that from the N2 segment, to the synthetic waveform. By delaying the N1 waveform, the synthetic waveform has larger amplitude than the simultaneous rupture. The same feature is seen on the eastern Enshu coast, on the coast of Suruga Bay, and on the west coast of Sagami Bay.
To quantify the amplification of coastal tsunami heights with respect to the simultaneous rupture, we introduce the geometric average A of the ratios of the computed tsunami heights from the delayed rupture to those from the simultaneous rupture at 36 points. Computed coastal heights of less than 0.5 m are excluded from the calculation of the A value.
We search for a combination of rupture delay times on the N1, N3 and N4 segments that makes the amplitude of the synthetic tsunami waveforms maximum. We vary the rupture delay times up to 60 min at 1 min intervals; hence we examined 60 × 60 × 60 cases to find the maximum value.
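A sketch of how this exhaustive search and the A metric might be combined (again illustrative: coastal_max_heights(delays) is a hypothetical stand-in that would return the maximum height at each of the 36 coastal points for a given delay combination, for example via the superposition above; the exact delay grid, the handling of the 0.5 m cutoff and the omission of the N3/N4 ordering condition are simplifications):
import itertools
import numpy as np
def amplification_a(delayed_heights, simultaneous_heights, cutoff_m=0.5):
    # Geometric average of the height ratios over the coastal points,
    # excluding points whose computed heights fall below the 0.5 m cutoff.
    delayed = np.asarray(delayed_heights, dtype=float)
    simultaneous = np.asarray(simultaneous_heights, dtype=float)
    keep = (delayed >= cutoff_m) & (simultaneous >= cutoff_m)
    ratios = delayed[keep] / simultaneous[keep]
    return float(np.exp(np.log(ratios).mean()))
def worst_case_delays(coastal_max_heights, max_delay_min=60):
    # Brute-force search over N1, N3 and N4 delays, with N2 fixed at 0 min.
    baseline = coastal_max_heights({"N1": 0, "N2": 0, "N3": 0, "N4": 0})
    best_a, best_delays = float("-inf"), None
    for d1, d3, d4 in itertools.product(range(1, max_delay_min + 1), repeat=3):
        delays = {"N1": d1, "N2": 0, "N3": d3, "N4": d4}
        a = amplification_a(coastal_max_heights(delays), baseline)
        if a > best_a:
            best_a, best_delays = a, delays
    return best_delays, best_a
Brute force is feasible here only because the superposition makes each delay combination cheap to evaluate compared with rerunning the full hydrodynamic model.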
The combination that gives the maximum A value is shown in Table 2. The A value is 1.26 for this worst case. Unlike the previous scenario (Kawata et al., 2003), this scenario is based on seismological studies and considered to be realistic rather than a purely worst case scenario.
4. Nonlinear Computation for the Rupture Scenario
The search made in the last section indicates that the rupture scenario shown in Table 2 produces the largest tsunami heights on average. Because we assumed the superposition principle for linear long waves, there is a possibility that we have overestimated the tsunami heights by neglecting the effects of bottom friction and advection terms. We therefore calculate the coastal tsunami heights for the above scenario using nonlinear long wave theory.
5. Tsunami Heights Along the Japanese Coast
Along Coast 1, the east coast of Kyushu (1-1) and the coast in the Bungo Channel (1-2, 3), tsunami heights in most of the coastline are significantly amplified (A > 1.4) for the delayed rupture scenario. As we have seen in Fig. 5, the tsunami from the delayed ruptures of the N3 and N4 segments interferes with the westward propagating tsunami from the N2 segment to cause larger amplitudes. But the delayed rupture on the N1 segment has little effect on the tsunami height.
Along Coast 2, the coast of Tosa Bay (2-1) and the west (2-2) and east (2-3, 4) coasts in the Kii Channel, tsunami heights in most of the coastline are amplified for the delayed rupture scenario (1 < A < 1.2). At some locations around Susaki, Kaifu and Tanabe, the tsunami heights for the delayed rupture become very large.
Along Coast 3, the coast of the sea of Kumano (3-1), and close to Ise Bay (3-2, 3) and the western Enshu coast (34), the tsunami heights for the delayed rupture scenario are similar to (3-1, 2, 3, A ∼ 1) or smaller than (3-4, A < 1) the simultaneous rupture. Maximum tsunami heights on Coast 3-1, 3-2 and 3-3 are controlled by the N2 segment, hence the delayed rupture on the other segments has little effect on the maximum tsunami heights. Tsunami heights on Coast 3-4 are controlled by both N1 and N2 segments; tsunamis from these segments arrive almost the same time to amplify the coastal heights.
Along Coast 4, the eastern Enshu coast and the coast of Suruga Bay (4-1), and the west coast of Sagami Bay (42), tsunami heights for the delayed rupture scenario become larger than the simultaneous rupture (1 < A < 1.2), particularly around Omaesaki and Irouzaki. These amplifications of tsunami heights are due to delayed rupture on the N1 segment as we have seen in Fig. 5. On the coast east of Ito, the heights are almost the same for the simultaneous and delayed ruptures.
We investigated the effects of delayed rupture of the Tokai and Nankai segments on coastal tsunami heights. The parameter search based on the superposition principle of linear long wave theory indicates that the worst case scenario is the rupture starts on the Tonankai (N2) segment followed by the rupture on the Tokai (N1) segment 21 min later, and the Nankai (N3 and N4) segments 15 min and 28 min later. In this scenario, the tsunami heights become significantly amplified (A > 1.4) with respect to the simultaneous rupture along the coasts of the sea of Hyu’uga and in the Bungo Channel, amplified (1 < A < 1.2) along the coasts of Tosa Bay and in the Kii Channel. The tsunami heights are similar (A ∼ 1) along the coasts of the sea of Kumano and of Ise Bay, and smaller (A < 1) along the western Enshu coast. Along the coast of Suruga Bay and along the west coast of Sagami Bay, the tsunami heights become amplified (1 < A < 1.2).
This study was supported by the Research Project "Improvements of strong ground motion and tsunami simulation accuracy for application of realistic disaster prevention of the Nankai-Trough mega-thrust earthquake" of the Ministry of Education, Culture, Sports, Science and Technology. Bathymetry data used in this study were provided by the Cabinet Office. We are grateful to Drs. Tatsuhiko Saito of ERI, Univ. of Tokyo, and Yuichi Namegaya of AIST for their advice during the course of this study.
- Aida, I., Numerical experiments for the tsunamis generated off the coast of the Nankaido district, Bull. Earthq. Res. Inst., 56, 713–730, 1981 (in Japanese).
- Ando, M., Source mechanisms and tectonic significance of historical earthquakes along the Nankai trough, Japan, Tectonophysics, 27, 204–215, 1975.
- Annaka, T., K. Inagaki, H. Tanaka, and K. Yanagisawa, Characteristics of great earthquakes along the Nankai trough based on numerical tsunami simulation, J. Earthq. Eng., JSCE, CD-ROM, 2003 (in Japanese).
- Goto, C. and Y. Ogawa (IUGG/IOC TIME PROJECT), Numerical method of tsunami simulation with the leap-frog scheme, Part 1: Shallow water theory and its difference scheme, Intergovernmental Oceanographic Commission, Manuals and Guides, No. 35, 43 p., 1997.
- Hori, T., N. Kato, K. Hirahara, T. Baba, and Y. Kaneda, A numerical simulation of earthquake cycles along the Nankai trough, southwest Japan: Lateral variation in frictional property due to the slab geometry controls the nucleation position, Earth Planet. Sci. Lett., 228, 215–226, 2004.
- Iida, K., Investigation of historical earthquakes (5): Earthquake and tsunami damages by the Hoei earthquake of October 28, 1707, Bull. Aichi Inst. Technol., Part B, 17, 143–157, 1985 (in Japanese).
- Ishibashi, K., Status of historical seismology in Japan, Ann. Geophys., 47, 339–368, 2004.
- Kawata, Y., S. Suzuki, and T. Takahashi, An effect of giant earthquake scenarios at the Nankai trough on a tsunami hazard, Proc. Coastal Eng., JSCE, 50, 326–330, 2003 (in Japanese).
- Mansinha, L. and D. E. Smylie, The displacement fields of inclined faults, Bull. Seismol. Soc. Am., 61(5), 1433–1440, 1971.
- Usami, T., Materials for Comprehensive List of Destructive Earthquakes in Japan, University of Tokyo Press, 2003 (in Japanese).
Kinematics, Dynamics, and the Structure of Physical Theory
Every physical theory has (at least) two different forms of mathematical equations to represent its target systems: the dynamical (equations of motion) and the kinematical (kinematical constraints). Kinematical constraints are differentiated from equations of motion by the fact that their particular form is fixed once and for all, irrespective of the interactions the system enters into. By contrast, the particular form of a system's equations of motion depends essentially on the particular interaction the system enters into. All contemporary accounts of the structure and semantics of physical theory treat dynamics, i.e., the equations of motion, as the most important feature of a theory for the purposes of its philosophical analysis. I argue to the contrary that it is the kinematical constraints that determine the structure and empirical content of a physical theory in the most important ways: they function as necessary preconditions for the appropriate application of the theory; they differentiate types of physical systems; they are necessary for the equations of motion to be well posed or even just cogent; and they guide the experimentalist in the design of tools for measurement and observation. It is thus satisfaction of the kinematical constraints that renders meaning to those terms representing a system's physical quantities in the first place, even before one can ask whether or not the system satisfies the theory's equations of motion. | <urn:uuid:07072abb-8ef2-4580-8f77-70be408934d9> | 2.59375 | 317 | Truncated | Science & Tech. | 13.560921 | 95,627,108 |
Plastic waste has become such a significant issue that even a junior high school student can see what an overwhelming task it is. Anna Du, a sixth-grade student from Massachusetts, is in the early stages of building a robot that can help clean up plastic litter. It follows some of the other impressive entrepreneurial efforts to eliminate plastic waste, like Boyan Slat's Ocean Cleanup project.
Du's inspiration to create a cleanup solution of her own came when she wasn't able to pick up all the pollution at Boston Harbor. She wanted an easier way to retrieve the trash, so she is creating a robot that uses underwater infrared light to track down microplastics. As development continues, she hopes the machine will also gain the ability to pick this trash up.
There have been collaborative efforts like Litterati to collect all kinds of pollution in our environment and to provide unique data, such as how much trash can be found in an area and which companies are responsible for it. Du's project, at least in its initial stages, looks to give us further detail on just how badly our oceans are polluted as it identifies plastic waste.
"One day when I was at Boston Harbor, I noticed there was a lot of plastics on the sand," Du told WSB Radio. "I tried picking some up, but there seemed to be so many more, and it just seemed impossible to clean it all up."
Plastic pollution in our waters continues to be a serious problem. Only a portion of it is caused by ships -- 80 percent of it comes from urban runoff, such as when it blows away from our trash cans, garbage trucks, or landfills. The Great Pacific Garbage Patch continues to collect enough pollution to be bigger than countries like France. Overall, there are over five trillion pieces of plastic in the ocean.
The project was entered in a Young Scientist Lab challenge, created by Discovery Education and 3M. Du ended up being one of the 10 finalists and is mentored by one of the 3M scientists, Dr. Ann Fornof. Fornof is an advanced research specialist with a Ph.D. in macromolecular science who helps commercialize the company's products.
Du hopes to create a cleaning system that has a similar impact to Boyan Slat's Ocean Cleanup machine, which is expected to clean up half of the Great Pacific Garbage Patch over the next five years. This summer, Slat is planning to launch the first cleanup system and most recently has successfully conducted towing tests.
“[In 15 years, I hope to be an] engineer because I love the ocean and marine animals, and I want to do something to help,” Du told the Young Scientist Lab in a Q&A. “In the future, with my engineering, I hope to be able to save people with all of my inventions.”
A number of other unique projects made the cut at the Young Scientist Lab challenge. Leo Wylonis from Pennsylvania is working to limit aircraft carbon emissions through the use of pneumatic artificial muscles. Theodore Jiang from California is creating a new smartphone case that’s able to generate electricity from finger taps on the screen, essentially charging while you text, or as he calls it, “textricity.”
From AMS Glossary
(Also called surface map, sea level chart, sea level pressure chart.) An analyzed chart of surface weather observations.
Essentially, a surface chart shows the distribution of sea level pressure, including the positions of highs, lows, ridges, and troughs and the location and character of fronts and various boundaries such as drylines, outflow boundaries, sea-breeze fronts, and convergence lines. Often added to this are symbols of occurring weather phenomena, analysis of pressure tendency (isallobars), indications of the movement of pressure systems and fronts, and perhaps others, depending upon the intended use of the chart. Although the pressure is referred to mean sea level, all other elements on this chart are presented as they occur at the surface point of observation. A chart in this general form is the one commonly referred to as the weather map. When the surface chart is used in conjunction with constant-pressure charts of the upper atmosphere (e.g., in differential analysis), sea level pressure is usually converted to the height of the 1000-mb surface. The chart is then usually called the 1000-mb chart. | <urn:uuid:e0184650-b026-4c6b-b804-8dc3983f6ab6> | 3.984375 | 241 | Structured Data | Science & Tech. | 39.041976 | 95,627,112 |
OpenSees - The Open System for Earthquake Engineering Simulation
Earthquake engineering has long been hampered by limitations of available computational procedures. The most familiar and commonly used computer tools were developed in the 1970s and 1980s, and are limited by simplifications and approximations that were necessary at the time. Furthermore, different computer codes are required for different parts of the earthquake engineering problem, from seismic hazard analysis, through geotechnical site response, through soil-structure interaction, to structural response. New computer simulation tools are necessary to integrate the best knowledge from all the disciplines so that more effective engineering can be practiced.
A centerpiece of PEER’s program is new research on simulation models and computational methods to assess the performance of structural and geotechnical systems. Breaking the barriers of traditional methods and software development protocols, PEER has embarked on a completely new approach in the earthquake engineering community by developing an open-source, object-oriented software framework. OpenSees is a collection of modules to facilitate the implementation of models and simulation procedures for structural and geotechnical earthquake engineering. By shared development using well-designed software interfaces, the open-source approach has effected collaboration among a substantial community of developers and users within and outside of PEER. Unique among software for earthquake engineering, OpenSees allows integration of models of structures and soils to investigate challenging problems in soil-structure-foundation interaction. In addition to improved models for reinforced concrete structures, shallow and deep foundations, and liquefiable soils, OpenSees is designed to take advantage of the latest developments in databases, reliability methods, scientific visualization, and high-end computing.
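To give a flavour of how models and analyses are scripted against this modular framework, here is a minimal static analysis of a single elastic truss bar. It is written against the OpenSeesPy Python bindings (an assumption made for illustration; OpenSees is classically driven through a Tcl interpreter with equivalent commands), and all numerical values are arbitrary.

```python
import openseespy.opensees as ops  # assumes the OpenSeesPy bindings are installed

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)      # 2-D model, 2 DOFs per node

# one horizontal truss bar, fixed at the left end
ops.node(1, 0.0, 0.0)
ops.node(2, 144.0, 0.0)
ops.fix(1, 1, 1)
ops.fix(2, 0, 1)                              # node 2 free only in x

ops.uniaxialMaterial('Elastic', 1, 3000.0)    # elastic modulus E
ops.element('Truss', 1, 1, 2, 10.0, 1)        # area A = 10, material tag 1

# static point load at the free node
ops.timeSeries('Linear', 1)
ops.pattern('Plain', 1, 1)
ops.load(2, 100.0, 0.0)

# analysis objects: the interchangeable modules the framework is built around
ops.constraints('Plain')
ops.numberer('RCM')
ops.system('BandSPD')
ops.integrator('LoadControl', 1.0)
ops.algorithm('Linear')
ops.analysis('Static')
ops.analyze(1)

print('axial displacement at node 2:', ops.nodeDisp(2)[0])   # PL/EA = 0.48
```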
PEER has provided substantial support to the community by sponsoring three workshops on OpenSees, attended by more than 100 researchers and engineers. Over 300 developers and users share insights and are kept apprised of the latest developments through on-line collaboration tools. The OpenSees website (http://opensees.berkeley.edu) provides the source code, documentation, examples, user group links, and information about the development roadmap for the software.
Potts, BM and Sandhu, KS and Wardlaw, T and Freeman, J and Li, H and Tilyard, P and Park, RF, Evolutionary history shapes the susceptibility of an island tree flora to an exotic pathogen, Forest Ecology and Management, 368, pp. 183–193. ISSN 0378-1127 (2016) [Refereed Article]
Copyright 2016 Published by Elsevier B.V.
With globalisation, the world’s native biotas are increasingly exposed to disease, parasitism, herbivory and competition from exotic organisms. The vulnerability of native biota to these exotic invasions is exacerbated by human disturbance and global climate change. Rust pathogens are some of the most important plant pathogens, including Puccinia psidii (myrtle rust or guava rust) that is now spreading worldwide at an alarming rate. P. psidii is native to South America and affects species in the family Myrtaceae, including the economically and ecologically important eucalypts. The Australian continent has a rich myrtaceous flora and is the centre of origin of most eucalypt species. P. psidii was first detected in Australia in 2010 and has since rapidly spread along its east coast. We assess the risk this exotic pathogen poses to the eucalypt flora of the southern Australian island of Tasmania, where the first incursion of P. psidii was detected in early 2015. Specifically, we tested the relative importance of phylogenetic history, habitat, endemism, and range size in predicting host susceptibility.
Rust screening of seedlings from one to four populations of each of the 30 eucalypt species which are native to Tasmania, revealed significant genetic-based variation in response among host species and populations within the species. Significant population differences in susceptibility were detected in threatened, rare and endemic eucalypt species, as well as Australia’s main plantation eucalypt (Eucalyptus globulus) and the world’s tallest angiosperm species (Eucalyptus regnans). A significant proportion of the variation in host species susceptibility to this exotic pathogen was explained by phylogenetic history, while factors such as habitat, endemism and range size had no detectable effect. Species from subgenus Eucalyptus (13 species) were more susceptible than those from subgenus Symphyomyrtus (17 species) due to differences between the subgenera in the proportion of plants showing a symptomless response. These subgenera are here shown to differ in their leaf oil and wax chemistry. The potential contribution of these differences and other possible mechanisms causing these subgeneric differences in susceptibility are discussed. This study demonstrates the power of a phylogenetic approach to risk assessment for biosecurity and highlights the need for broader resistance screening within and between populations of species of high conservation or economic value.
Item Type: Refereed Article
Keywords: Eucalyptus, Puccinia psidii, myrtle rust, guava rust, biosecurity, phylogeny
Research Division: Biological Sciences
Research Field: Population, Ecological and Evolutionary Genetics
Objective Division: Plant Production and Plant Primary Products
Objective Field: Native Forests
Authors: Potts, BM (Professor Brad Potts); Freeman, J (Dr Jules Freeman); Li, H (Dr Hai Li); Tilyard, P (Mr Paul Tilyard)
Web of Science® Times Cited: 5
Deposited By: Plant Science
Articles filed under Impact on Bats
The 152-megawatt Spring Valley Wind Energy project about 260 miles northeast of Las Vegas killed an estimated 566 bats in 2013, so its operator agreed to change when the windmills kick on in hopes of reducing the number of deaths.
Heartland Community College will be doing a more complete study later this year to determine if its wind turbine is killing too many birds and bats.
Disease and heedless management of wind turbines are killing North America’s bats, with potentially devastating consequences for agriculture and human health. We have yet to find a cure for the disease known as white-nose syndrome, which has decimated populations of hibernating, cave-dwelling bats in the Northeast. But we can reduce the turbine threat significantly without dismantling them or shutting them down.
USFWS’s National Fish and Wildlife Forensics Laboratory studied three solar farms in Southern California: Desert Sunlight, Genesis Solar and Ivanpah Solar Electric Generating System (ISEGS). Two-hundred and thirty-three different birds from 71 species were found over the course of a two-year study.
Wildlife-smart wind power may be as close as it gets to "green energy." But over vast swaths of America, the "smart" part is still more hot air than reality--especially when it comes to raptors. Essayist Ted Williams provides an important review of wind energy and its impact on birds.
Developing and implementing a habitat conservation plan is a requirement for obtaining an incidental take permit under the Endangered Species Act. Without such a permit, it is illegal to harm or kill federally threatened and endangered species. The plan and permit allow for projects that potentially impact threatened or endangered species to continue while the company takes actions to avoid, minimize and mitigate for the impacts.
Exelon Generation, which owns and operates the 28-turbine Criterion wind project built in 2010 in Garrett County, has pledged to "feather" or reduce the rotation speed of its turbines' blades during nighttime from late summer to early fall, peak bat migration time.
A business park near the Camp Perry site already has put up a wind turbine, but it isn't operating yet. Kim Kauffman, director of the Black Swamp Bird Observatory, said they will be monitoring it. "If we were to learn it killed migrating birds or eagles, we would pursue legal action," she said. There are about 60 bald eagle nests within 10 miles of the wind turbine, she said.
But these are hard days for these peculiar animals, because they face mass extinction from a disease called White Nose Syndrome, and every night thousands are killed by energy-producing wind turbines that conservationists, economists and politicians hope will reduce this nation's need for foreign oil. A new study from the University of Colorado, Denver, estimates that 600,000 bats were killed by wind turbines last year alone.
The review of a proposed 62-turbine wind farm project in this Somerset County town has been put on hold in part because of concerns about the danger the turbines might pose to bats being threatened by white-nose syndrome, a rapidly spreading fungal disease.
Bat mortality is a typical concern at wind energy farms and it is standard practice to evaluate bat habitats and mortality rates during project reviews. The Department of Environmental Protection, however, may be revising its recommendations on the turning speed of wind turbines, which can be a threat to birds and bats that fly into them.
More than a half-million bats were killed by flying into high-speed wind energy turbines last year, according to new research scheduled for publication next week in the journal BioScience. Previous estimates had said that the clean-energy producing mechanisms were responsible from between 33,000 to 880,000, but a new analysis of dead bats found at wind turbine sites conducted by University of Colorado-Denver researchers places that figure at over 600,000.
Little information is available on bat deaths at wind turbine facilities in the Rocky Mountain West or the Sierra Nevada, according to Mark Hayes, a University of Colorado, Boulder researcher who authored a new study, set to be published in the journal BioScience. “The development and expansion of wind energy facilities is a key threat to bat populations in North America,” Hayes said.
Over 600,000 bats were killed by wind energy turbines across the United States last year, with the highest concentration of kills in the Appalachian Mountains, according to new research. In a paper published Friday in the journal BioScience, University of Colorado biologist Mark Hayes used records of dead bats found beneath wind generators, and statistical analysis, to estimate how many bats were struck and killed by generator propellers each year.
One looming threat is the growing presence of wind farms — a threat that wasn’t realized until the first turbine went up in northeastern B.C. and killed two Eastern red bats, a species biologists weren’t even aware existed in the province. “It was a real red flag for us that we don’t know enough about our bats and we better figure it out fast.”
“The Sierra Club position is that we support wind energy ‘in appropriate sites,’ and that has to include siting considerations and engineering and operating conditions to minimize bird impacts,” said Jim Kotcon, conservation chair of the West Virginia chapter of the Sierra Club. “We should not be providing a blank check to wind farms, and they need to operate in an environmentally conscious way in order to retain their claim as ‘green energy’.”
GMP will also continue to follow its certificate of public good which requires voluntary curtailment of turbine operation during calm or nearly calm summer evenings when bats are out hunting. The agreement gave GMP a permit allowing a handful of bats to be killed at the wind project each year, with the understanding that more bats would be saved through the mitigation funding than lost at the wind project.
The operator of a southern West Virginia wind farm estimates that several dozen endangered bats could be killed by flying into turbine blades during a 25-year period, according to a federal review of the risks to the flying mammals. The estimated death toll comes as Beech Ridge Energy requests a permit under the federal Endangered Species Act.
The technology lets researchers track all of the tagged birds on one frequency but identify them separately, including 600 birds and bats tagged by other researchers in the Gulf of Maine. ...The Nantucket Sound pilot project is designed to help researchers figure out what marine and coastal birds are doing and where they are doing it offshore, said Caleb Spiegel, a biologist with the wildlife service, which is supporting the work.
On Thursday the Vermont Agency of Natural Resources will hold a hearing in Lowell on Green Mountain Power's request to kill up to four of the bats a year at the Kingdom Community Wind site in Lowell. The request comes as bat populations in the Northeast have been decimated by a fungal disease called white nose syndrome. | <urn:uuid:1a20092b-657f-4fed-bb0c-25690be0c746> | 2.890625 | 1,433 | Content Listing | Science & Tech. | 35.975374 | 95,627,134 |
The UN Framework Convention on Climate Change (UNFCCC) aims to avoid what is called “dangerous anthropogenic interference with the climate system”.
However, there is no guarantee that the level of climate change – how much the temperature increases in the future – is the only thing we should be worried about. How quickly the changes take place can also mean a lot for how serious the consequences will be. This was already acknowledged when the UNFCCC was signed in 1992. It says that we must stabilize the concentrations of greenhouse gases in the atmosphere within a time period that allows ecosystems to adapt and economic development to continue, and that ensures that food production will not be threatened. This focus on rate of change has, however, not been reflected to any noticeable degree among either scientists or politicians.
There are a few studies that focus on the consequences of the rate of climate change. Most of these are ecological studies. They leave no doubt that the expected rate of change during this century will exceed the ability of many animals and plants to migrate or adapt. Leemans and Eickhout (2004) found that adaptive capacity decreases rapidly with an increasing rate of climate change. Their study finds that even at a sustained rate of 0.1 °C per decade, five percent of all ecosystems cannot adapt quickly enough. Forests will be among the ecosystems to experience problems first, because their ability to migrate to stay within the climate zone they are adapted to is limited. If the rate is 0.3 °C per decade, 15 percent of ecosystems will not be able to adapt. If the rate should exceed 0.4 °C per decade, all ecosystems will be quickly destroyed, opportunistic species will dominate, and the breakdown of biological material will lead to even greater emissions of CO2. This will in turn increase the rate of warming.
According to the Intergovernmental Panel on Climate Change (IPCC), the global average temperature today is increasing by 0.2 °C per decade.
There is also a risk that rapid climate change will increase the likelihood of large and irreversible changes, such as a weakening of the Gulf Stream and melting of the Greenland ice sheets. Rapid change also increases the risk of triggering positive feedback mechanisms that will increase the rate and level of temperature change still more.
We know far less about the consequences of rate of temperature increase than we do about the level. Nevertheless, we know enough to say that if we are to avoid dangerous climate change, then we should also be concerned about how quickly it occurs. This can have important implications for which climate measures we should implement. If we set a long-term climate goal – such as 2 °C – there will be many different emissions paths we could take to reach this goal. But these emissions paths can differ to a relatively large degree with respect to how quickly the changes will take place – especially over the next few decades.
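The point can be made with a back-of-envelope calculation: two temperature pathways that reach the same 2 °C level by 2100 can have very different peak decadal rates along the way. The numbers below are invented purely for illustration.

```python
import numpy as np

years = np.arange(2020, 2101)
frac = (years - 2020) / 80.0
# two illustrative pathways, both ending at 2.0 degC above pre-industrial in 2100
steady = 1.2 + 0.8 * frac                  # constant rate of warming
front_loaded = 1.2 + 0.8 * np.sqrt(frac)   # most of the warming happens early

def peak_decadal_rate(temps):
    """Largest warming over any 10-year window, in degC per decade."""
    return float(np.max(temps[10:] - temps[:-10]))

for name, path in [('steady', steady), ('front-loaded', front_loaded)]:
    print(f"{name:12s} ends at {path[-1]:.2f} degC, "
          f"peak rate {peak_decadal_rate(path):.2f} degC/decade")
```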
Focusing on the rate of climate change can imply that we should concentrate more on the short-lived greenhouse gases – such as methane and tropospheric ozone – and particles with a warming effect, such as soot (black carbon). It can also imply a greater focus on the medium-term (the next few decades), since the fastest changes could occur around that time.
Petter Haugneland | alfa
Global methane emissions from oil production between 1980 and 2012 were far higher than previously thought – in some cases, as much as double the amount previously estimated, according to a new scientific study.
The reason for the discrepancy is simple. The author of the study − which also includes emissions of another gas, ethane − says it is the first to take into account different production management systems and geological conditions around the world.
Lena Höglund-Isaksson, senior research scholar at the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria, describes the old figures, which were based on arguing that what happened in North American oilfields applied equally to the rest of the world, as “rather simplistic”.
The IIASA study, published in Environmental Research Letters journal, is another reminder that climate science – like all science – is only as dependable as the data on which it relies.
In an oil reservoir, there is a layer of gas above the oil that has a methane content of 50 per cent to 85 per cent. When you pump the oil to the surface, this associated gas will also escape.
Lena Höglund-Isaksson, senior research scholar, International Institute for Applied Systems Analysis (IIASA)
In a system as complex as the atmosphere, faulty data can have far-reaching consequences.
Potent greenhouse gas
Methane is a potent greenhouse gas − the most important contributor to climate change after carbon dioxide. There is now international agreement that methane is 34 times more potent than CO2 over a century, but 84 times more over a much shorter timespan – just 20 years.
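In practice these potency factors are used to express methane emissions in CO2-equivalents, and the chosen time horizon changes the answer considerably. A trivial sketch using the values quoted above (the one-megatonne emission is hypothetical):

```python
GWP_100 = 34    # CO2-equivalence of methane over a 100-year horizon
GWP_20 = 84     # ... over a 20-year horizon

ch4_mt = 1.0    # hypothetical emission: 1 Mt of methane
print("CO2e over 100 years:", ch4_mt * GWP_100, "Mt")
print("CO2e over 20 years: ", ch4_mt * GWP_20, "Mt")
```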
Yet while methane concentrations in the atmosphere can easily be measured, it is much harder to establish how much the different sources, whether human or natural, contribute to the total. This information is needed to work out how to reduce emissions.
Dr Höglund-Isaksson explains: “In an oil reservoir, there is a layer of gas above the oil that has a methane content of 50 per cent to 85 per cent. When you pump the oil to the surface, this associated gas will also escape.”
In oil production in North America, she says, almost all of this gas is recovered, and most of the small amount that is not will be flared to prevent leakage − and possible explosions. A very small amount is simply vented.
In other parts of the world, where gas recovery rates are lower, much larger quantities of methane emissions are released into the atmosphere.
“Existing global bottom-up emission inventories of methane used rather simplistic approaches for estimating methane from oil production, merely taking the few direct measurements that exist from North American oil fields and scaling them with oil production worldwide,” says Dr Höglund-Isaksson.
(Bottom-up, in this context, involves multiplying the production of oil by the amount of methane released per unit of oil produced).
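A toy version of such a bottom-up estimate, extended with the country-specific handling of associated gas described above, might look like the following. All values are invented; real inventories rely on far more detailed activity data.

```python
def vented_methane_m3(oil_bbl, gas_per_bbl_m3, ch4_fraction, recovered, flared):
    """Methane released to the atmosphere from associated gas, in m3.

    oil_bbl         oil produced (barrels)
    gas_per_bbl_m3  associated gas released per barrel (m3)
    ch4_fraction    methane content of the associated gas (0.5-0.85 is typical)
    recovered       fraction of associated gas captured and used
    flared          fraction of associated gas burned off
    """
    vented_fraction = 1.0 - recovered - flared
    return oil_bbl * gas_per_bbl_m3 * ch4_fraction * vented_fraction

# the same oil production gives very different emissions depending on practices
good_practice = vented_methane_m3(1e9, 30.0, 0.7, recovered=0.95, flared=0.04)
poor_practice = vented_methane_m3(1e9, 30.0, 0.7, recovered=0.40, flared=0.20)
print(f"good practice: {good_practice:.2e} m3, poor practice: {poor_practice:.2e} m3")
print(f"ratio: {poor_practice / good_practice:.0f}x")
```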
This approach left ample room for error, so she decided to find a new method to provide a better explanation for the global variations.
In the new study, Dr Höglund-Isaksson estimated global methane emissions from oil and gas systems in over 100 countries over a 32-year period, using country-specific data ranging from reported volumes of associated gas to satellite imagery that can show flaring.
She also used atmospheric measurements of ethane, a gas that is released along with methane and is easier to link more directly to oil and gas activities.
Dr Höglund-Isaksson found that global methane emissions, particularly in the 1980s, were as much as double previous estimates.
Russia’s methane emissions
The study also found that the Russian oil industry contributes a large amount to the methane emissions.
A decline in the Russian oil industry in the 1990s contributed to a global decline in emissions, which continued until the early 2000s. That was when methane recovery systems were becoming more common and also helping to reduce emissions.
But since 2005, emissions from oil and gas systems have remained fairly constant, which Dr Höglund-Isaksson says is probably linked to increasing shale gas production, which largely offsets emission reductions achieved through increased gas recovery.
She says that there is still uncertainty in the numbers, and that improving the data requires close collaboration between the scientific measurement community and the oil and gas industry to make more direct measurements available from different parts of the world.
The good news is that her research promises more accurate measurements of how much methane is in the atmosphere.
The less good news is that just how much is there appears to be increasing rapidly – faster than at any time this century.
This story was published with permission from Climate News Network.
Water Levels in the Great Lakes
by Heather Carr, January 2, 2012
Water levels in the Great Lakes are lower than their long-term averages. Despite a wet season, experts believe the gains could evaporate. After the unusually warm temperatures this summer, lake levels are lower than many would like. Unseasonably warm temperatures have kept ice from forming on many parts of the lakes. Ice cover slows evaporation because the ice must first melt before the water can evaporate. Water levels make a difference in shoreline habitat, fish spawning, recreational boating, and the amount of cargo carried in commercial ships.
A single sand grain harbours up to 100,000 microorganisms from thousands of species.
Just imagine, you are sitting on a sunny beach, contentedly letting the warm sand trickle through your fingers. Millions of sand grains. What you probably can't imagine: at the same time, billions upon billions of bacteria are also trickling through your fingers.
View of a sand grain under a fluorescence microscope: The green spots are stained bacteria, which have mainly colonized depressions on the grain.
Max Planck Institute for Marine Microbiology/CC-SA BY 4.0
Between 10,000 and 100,000 microorganisms live on each single grain of sand, as revealed in a study by researchers from the Max Planck Institute for Marine Microbiology in Bremen. This means that an individual grain of sand can have twice as many residents as, say, the city of Fairbanks, Alaska!
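A rough feel for these numbers (the grain count for a handful is only an order-of-magnitude guess):

```python
bacteria_per_grain = 50_000        # mid-range of the 10,000-100,000 figure
grains_per_handful = 1_000_000     # order-of-magnitude guess for a handful of beach sand
print(f"~{bacteria_per_grain * grains_per_handful:.0e} bacteria per handful")
# ~5e10, i.e. tens of billions of cells trickling through your fingers
```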
Even bacteria hide
It has long been known that sand is a densely populated and active habitat. Now David Probandt and his colleagues have described the microbial community on a single grain of sand using modern molecular methods. To do this, they used samples taken from the southern North Sea, near the island of Heligoland, off the German coast.
The bacteria do not colonize the sand grains uniformly. While exposed areas are practically uncolonized, the bacteria bustle in cracks and depressions. “They are well protected there”, explains Probandt. “When water flows around the grains of sand and they are swirled around, rubbing against each other, the bacteria are safe within these depressions.” These sites may also act as hiding grounds from predators, who comb the surface of the sand grains in search of food.
However, the diversity of the bacteria, and not just their numbers, is impressive. “We found thousands of different species of bacteria on each individual grain of sand”, says Probandt.
Some bacteria species and groups can be found on all investigated sand grains, others only here and there. “More than half of the inhabitants on all grains are the same. We assume that this core community on all sand grains displays a similar function”, explains Probandt. “In principle, each grain has the same fundamental population and infrastructure.” We can therefore really discover a great deal about the bacterial diversity of sand in general from investigating a single grain of sand.
Sandy coasts are enormous filters
Sand-dwelling bacteria play an important role in the marine ecosystem and global material cycles. Because these bacteria process, for example, carbon and nitrogen compounds from seawater and fluvial inflows, the sand acts as an enormous purifying filter. Much of what is flushed into the seabed by seawater doesn't come back out.
“Every grain of sand functions like a small bacterial pantry”, explains Probandt. They deliver the necessary supplies to keep the carbon, nitrogen and sulphur cycles running. “Whatever the conditions may be that the bacterial community on a grain of sand is exposed to – thanks to the great diversity of the core community there is always someone to process the substances from the surrounding water.”
Probandt, D., Eickhorst, T., Ellrott, A., Amann, R. and Knittel, K. (2017): Microbial life on a sand grain: from bulk sediment to single grains. The ISME Journal.
Published under CC-SA BY 4.0
Probandt, D., Knittel, K., Tegetmeyer, H. E., Ahmerkamp, S., Holtappels, M. and Amann, R. (2017): Permeability shapes bacterial communities in sublittoral surface sediments. Environ Microbiol, 19: 1584–1599. doi:10.1111/1462-2920.13676
Dr. Fanni Aspetsberger | Max-Planck-Institut für marine Mikrobiologie
Viscous Compressible Flow Simulations Using Supercomputers
The appearance of supercomputers accelerated the development of sophisticated Computational Fluid Dynamics for practical use. Even "Reynolds-averaged" Navier-Stokes computations have reached a mature stage with the help of supercomputers. For instance, the two-dimensional Navier-Stokes code 'ARC2D', developed by T. H. Pulliam and J. L. Steger at NASA Ames Research Center, requires only 5 minutes to obtain a converged solution with 27,000 grid points on a CRAY X-MP. The two-dimensional code 'LANS2D', developed by Obayashi and the present author, currently requires less than 1 minute with 6,400 grid points and is becoming faster on the Japanese supercomputer Fujitsu VP-400. It now seems that two-dimensional Navier-Stokes codes such as these can be used as design tools. In fact, Mitsubishi Heavy Industries is using the code 'NSFOIL', developed by the National Aerospace Laboratory, Japan, for the design of a transonic airfoil; the details were presented, together with experimental data, at the AIAA 3rd Applied Aerodynamics Conference.
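The implicit approximate-factorization idea behind codes such as ARC2D can be illustrated on a much simpler model problem: a two-dimensional implicit operator (I − A_x)(I − A_y) is factored into two sequences of one-dimensional tridiagonal solves. The sketch below applies this to the 2-D heat equation in Python; it is only an analogy for the structure of such schemes, not the compressible Navier-Stokes algorithm of the codes named above.

```python
import numpy as np

def thomas(sub, dia, sup, rhs):
    """Solve a tridiagonal system (Thomas algorithm)."""
    n = len(rhs)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = sup[0] / dia[0]
    dp[0] = rhs[0] / dia[0]
    for i in range(1, n):
        m = dia[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# model problem: u_t = nu * (u_xx + u_yy), u = 0 on the boundary (illustrative values)
n = 64
dx = 1.0 / (n - 1)
nu, dt = 0.01, 1.0e-3
r = nu * dt / dx**2

u = np.zeros((n, n))
u[n // 4: 3 * n // 4, n // 4: 3 * n // 4] = 1.0   # initial hot square

sub = np.full(n - 2, -r); dia = np.full(n - 2, 1.0 + 2.0 * r); sup = np.full(n - 2, -r)
for _ in range(100):
    v = u.copy()
    for j in range(1, n - 1):                     # x-sweep: (I - r*d_xx) v = u
        v[1:-1, j] = thomas(sub, dia, sup, u[1:-1, j])
    for i in range(1, n - 1):                     # y-sweep: (I - r*d_yy) u = v
        u[i, 1:-1] = thomas(sub, dia, sup, v[i, 1:-1])

print("mean temperature after 100 factored implicit steps:", u.mean())
```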
Keywords: Shock Wave, Strong Shock Wave, Spanwise Location, NASA Ames Research, National Aerospace Laboratory
- 1) Pulliam, T. H. and Steger, J. L., "Recent Improvements in Efficiency, Accuracy, and Convergence for Implicit Approximate Factorization Algorithms," AIAA Paper 85-0360, Reno, Nevada, January 1985.
- 2) Obayashi, S., Matsushima, K., Fujii, K., and Kuwahara, K., "Improvements in Efficiency and Reliability for Navier-Stokes Computations using the LU-ADI Factorization Algorithm," to appear as AIAA Paper 86-0338, Reno, Nevada, January 1986.
- 3) Miyakawa, J., Hirose, N., and Kawai, N., "Comparison of Aerodynamic Characteristics of Transonic Airfoil by Navier-Stokes Computation and by Wind Tunnel Test at High Reynolds Number," AIAA Paper 85-5025, Colorado Springs, Colorado, September 1985.
- 4) Obayashi, S. and Kuwahara, K., "LU Factorization of an Implicit Scheme for the Compressible Navier-Stokes Equations," AIAA Paper 84-1670, Snowmass, Colorado, June 1984.
- 6) Obayashi, S., Kuwahara, K., and Yoshizawa, Y., "A New LU Factored Method for the Compressible Navier-Stokes Equations," Proc. 9th ICNMFD, Saclay, France, June 1984.
- 7) Obayashi, S. and Fujii, K., "Computation of Three-Dimensional Viscous Transonic Flows with the LU Factored Scheme," AIAA Paper 85-1510, Cincinnati, Ohio, July 1985.
- 8) Lombard, C. K., Bardina, J., Venkatapathy, E., and Oliger, J., "Multi-Dimensional Formulation of CSCM-An Upwind Flux Difference Eigenvector Split Method for the Compressible Navier-Stokes Equations," AIAA Paper 83-1895, Danvers, Massachusetts, July 1983.
- 9) Baldwin, B. S. and Lomax, H., "Thin Layer Approximation and Algebraic Model for Separated Turbulent Flows," AIAA Paper 78-257, January 1978.
- 10) Takashima, K., to appear in Technical Memorandum of National Aerospace Laboratory, Japan.
- 11) Fujii, K. and Obayashi, S., "Practical Applications of Improved LU-ADI Scheme for the Three-Dimensional Navier-Stokes Computation of Transonic Viscous Flows," to appear as AIAA Paper 86-0513, Reno, Nevada, January 1986.
- 12) Fujii, K. and Obayashi, S., "Navier-Stokes Simulation of Transonic Flows over Wing-Fuselage Combinations," submitted for presentation at the AIAA 4th Applied Aerodynamics Conference to be held at San Diego, California, June 1986.
New findings have implications for recent carbon dioxide rise and melting glaciers
A fresh look at some old rocks has solved a crucial mystery of the last Ice Age, yielding an important new finding that connects to the global retreat of glaciers caused by climate change today, according to a new study by a team of climate scientists.
Improved dating methods reveal that the rise in carbon dioxide levels was the primary cause of the simultaneous melting of glaciers around the globe during the last Ice Age. The new finding has implications for rising levels of man-made greenhouse gases and retreating glaciers today.
Courtesy: National Science Foundation
For decades, researchers examining the glacial meltdown that ended 11,000 years ago took into account a number of contributing factors, particularly regional influences such as solar radiation, ice sheets and ocean currents.
But a reexamination of more than 1,000 previously studied glacial boulders has produced a more accurate timetable for the pre-historic meltdown and pinpoints the rise in carbon dioxide - then naturally occurring - as the primary driving factor in the simultaneous global retreat of glaciers at the close of the last Ice Age, the researchers report in the journal Nature Communications.
"Glaciers are very sensitive to temperature. When you get the world's glaciers retreating all at the same time, you need a broad, global reason for why the world's thermostat is going up," said Boston College Assistant Professor of Earth and Environmental Sciences Jeremy Shakun. "The only factor that explains glaciers melting all around the world in unison during the end of the Ice Age is the rise in greenhouse gases."
The researchers found that regional factors caused differences in the precise timing and pace of glacier retreat from one place to another, but carbon dioxide was the major driver of the overall global meltdown, said Shakun, a co-author of the report "Regional and global forcing of glacier retreat during the last deglaciation."
"This is a lot like today," said Shakun. "In any given decade you can always find some areas where glaciers are holding steady or even advancing, but the big picture across the world and over the long run is clear - carbon dioxide is making the ice melt."
While 11,000 years ago may seem far too distant for a point of comparison, it was only a moment ago in geological time. The team's findings fix even greater certainty on scientific conclusions that the dramatic increase in manmade greenhouse gases will eradicate many of the world's glaciers by the end of this century.
"This has relevance to today since we've already raised CO2 by more than it increased at the end of the Ice Age, and we're on track to go up much higher this century -- which adds credence to the view that most of the world's glaciers will be largely gone within the next few centuries, with negative consequences such as rising sea level and depleted water resources," said Shakun.
The team reexamined samples taken from boulders that were left by the retreating glaciers, said Shakun, who was joined in the research by experts from Oregon State University, University of Wisconsin-Madison, Purdue University and the National Center for Atmospheric Research in Boulder, Colo.
Each boulder has been exposed to cosmic radiation since the glaciers melted, an exposure that produces the isotope Beryllium-10 in the boulder. Measuring the levels of the isotope in boulder samples allows scientists to determine when glaciers melted and first uncovered the boulders.
Scientists have been using this process called surface exposure dating for more than two decades to determine when glaciers retreated, Shakun said. His team examined samples collected by multiple research teams over the years and applied an improved methodology that increased the accuracy of the boulder ages.
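The basic arithmetic of surface exposure dating is straightforward: the isotope accumulates at a roughly known production rate while slowly decaying, so the measured concentration fixes the exposure time. The sketch below uses typical textbook numbers, not the values from this study, and ignores the site-specific corrections (elevation, latitude, shielding, erosion) that real work requires.

```python
import numpy as np

half_life_yr = 1.387e6                 # Be-10 half-life, years
lam = np.log(2) / half_life_yr         # decay constant, 1/yr
P = 4.0                                # assumed local production rate, atoms/(g quartz * yr)
N = 4.5e4                              # hypothetical measured concentration, atoms/g

# N(t) = (P/lam) * (1 - exp(-lam*t))  =>  t = -ln(1 - N*lam/P) / lam
age_yr = -np.log(1.0 - N * lam / P) / lam
print(f"apparent exposure age: {age_yr:,.0f} years")   # ~11,000 years for these inputs
```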
The team then compared their new exposure ages to the timing of the rise of carbon dioxide concentration in the atmosphere, a development recorded in air bubbles taken from ice cores. Combined with computer models, the analysis eliminated regional factors as the primary explanations for glacial melting across the globe at the end of the Ice Age. The single leading global factor that did explain the global retreat of glaciers was rising carbon dioxide levels in the air.
"Our study really removes any doubt as to the leading cause of the decline of the glaciers by 11,000 years ago - it was the rising levels of carbon dioxide in the Earth's atmosphere," said Shakun.
Carbon dioxide levels rose from approximately 180 parts per million to 280 parts per million over a span of nearly 7,000 years at the end of the last Ice Age. Following more than a century of industrialization, carbon dioxide levels have now risen to approximately 400 parts per million.
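The contrast in pace is easy to quantify from these figures (taking roughly 150 years for the industrial-era rise is an assumption made here for illustration):

```python
deglacial_rise_ppm, deglacial_years = 280 - 180, 7000
industrial_rise_ppm, industrial_years = 400 - 280, 150   # assumed duration

print(f"end of Ice Age : {deglacial_rise_ppm / deglacial_years * 100:.1f} ppm per century")
print(f"industrial era : {industrial_rise_ppm / industrial_years * 100:.1f} ppm per century")
# roughly a factor of 50 difference in pace
```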
"This tells us we are orchestrating something akin to the end of an Ice Age, but much faster. As the amount of carbon dioxide continues to increase, glaciers around the world will retreat," said Shakun.
Ed Hayward | EurekAlert!
Relativity theories (special relativity and general relativity) are based on the notion that position, orientation, movement and acceleration cannot be defined in an absolute way, but only relative to a system of reference. The scale relativity theory proposes to extend the concept of relativity to physical scales (time, length, energy, or momentum scales), by introducing an explicit "state of scale" in coordinate systems.
This extension of the relativity principle was originally introduced by Laurent Nottale, based on the idea of a fractal space-time theory first introduced by Garnet Ord, and by Nottale and Jean Schneider.
Describing scale transformations requires the use of fractal geometries, which are concerned precisely with changes of scale. Scale relativity is thus an extension of relativity theory to the concept of scale, using fractal geometries to study scale transformations.
The construction of the theory is similar to previous relativity theories, with three different levels: Galilean, special and general. The development of a full general scale relativity is not finished yet.
Feynman's paths in quantum mechanics
Richard Feynman developed the path integral formulation of quantum mechanics in the 1940s. In examining the paths that matter most for quantum particles, Feynman noticed that such paths are highly irregular at small scales, i.e. of infinite length and non-differentiable. This means that between two points a particle does not follow a single path, but an infinity of potential paths.
This can be illustrated with a concrete example. Imagine that you are hiking in the mountains, and that you are free to walk wherever you like. To go from point A to point B, there is not just one path, but an infinity of possible paths, each going through different valleys and hills.
Scale relativity hypothesizes that quantum behavior comes from the fractal nature of spacetime. Indeed, fractal geometries make it possible to study such non-differentiable paths. This fractal interpretation of quantum mechanics has been further specified by Abbott and Wise, who showed that such paths have fractal dimension 2. Scale relativity goes one step further by asserting that the fractality of these paths is a consequence of the fractality of space-time.
Other pioneers also saw the fractal nature of quantum mechanical paths. Just as the development of general relativity required the mathematical tools of non-Euclidean (Riemannian) geometries, the development of a fractal space-time theory would not have been possible without the concept of fractal geometries developed and popularized by Benoit Mandelbrot. Fractals are usually associated with the self-similar case of a fractal curve, but other, more complicated fractals are possible, e.g. considering not only curves but also fractal surfaces or fractal volumes, as well as investigating fractal dimensions which take values other than 2 and which may also vary with scale.
Garnet Ord and Laurent Nottale both connected fractal space-time with quantum mechanics. Nottale coined the term "scale relativity" in 1992. He developed the theory and its applications with more than one hundred scientific papers, two technical books in English, and three popular books in French.
Principle of scale relativity
The principle of relativity says that physical laws should be valid in all coordinate systems. This principle has been applied to states of position (the origin and orientation of axes), as well as to the states of movement of coordinate systems (speed, acceleration). Such states are never defined in an absolute manner, but relatively to one another. For example, there is no absolute movement, in the sense that it can only be defined in a relative way between one body and another. Scale relativity proposes in a similar manner to define a scale relative to another one, and not in an absolute way. Only scale ratios have a physical meaning, never an absolute scale, in the same way as there exists no absolute position or velocity, but only position or velocity differences.
The concept of resolution is re-interpreted as the "state of scale" of the system, in the same way as velocity characterizes the state of movement. The principle of scale relativity can thus be formulated as:
the laws of physics must be such that they apply to coordinate systems whatever their state of scale.
The main goal of scale relativity is to find laws which mathematically respect this new principle of relativity. Mathematically, this can be expressed through the principle of covariance applied to scales, that is, the invariance of the form of physics equations under transformations of resolutions (dilations and contractions).
Including resolutions in coordinate systems
Galileo explicitly introduced velocity parameters into the observational reference frame. Einstein then explicitly introduced acceleration parameters. In a similar way, Nottale introduces scale parameters explicitly into the observational reference frame. The core idea of scale relativity is thus to include resolutions explicitly in coordinate systems, thereby incorporating the resolution of measurements into the formulation of physical laws.
An important consequence is that coordinates are not numbers anymore, but functions, which depend on the resolution. For example, the length of the Brittany coast is explicitly dependent on the resolution at which one measures it.
If we measure a pen with a ruler graduated at a millimetric scale, we should write that it is 15 ± 0.1 cm. The error bar indicates the resolution of our measure. If we had measured the pen at another resolution, for example with a ruler graduated at the centimeter scale, we would have found another result, 15 ± 1 cm. In scale relativity, this resolution defines the "state of scale". In the relativity of movement, this is similar to the concept of speed, which defines the "state of movement".
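The resolution dependence of a measured length can be made concrete with a toy numerical experiment. The sketch below is purely illustrative and not taken from Nottale's papers: the Koch curve stands in for a coastline, and the names and values are placeholders. The point is that the "length" returned is not a number but a function of the ruler used.

```python
import math

def koch_measurement(level: int) -> tuple[float, float]:
    """Measure the unit Koch curve with a 'ruler' of size (1/3)**level.
    Returns (ruler_size, measured_length)."""
    ruler = (1.0 / 3.0) ** level     # resolution of the measurement
    length = (4.0 / 3.0) ** level    # total length seen at that resolution
    return ruler, length

# The finer the ruler, the longer the measured "coastline":
for level in range(6):
    ruler, length = koch_measurement(level)
    print(f"ruler = {ruler:.5f}  ->  length = {length:.4f}")

# The log-log slope of length versus ruler recovers 1 - D,
# with D = ln 4 / ln 3 the fractal dimension of the Koch curve.
r1, l1 = koch_measurement(1)
r2, l2 = koch_measurement(5)
slope = (math.log(l2) - math.log(l1)) / (math.log(r2) - math.log(r1))
print(f"estimated fractal dimension D = {1 - slope:.4f}  (exact: {math.log(4) / math.log(3):.4f})")
```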
Knowing the relative state of scale is essential for any physical description. For example, to describe the motion and properties of a sphere, we may use either classical or quantum mechanics depending on the size of the sphere in question.
In particular, information on resolution is essential to understand quantum mechanical systems, and in scale relativity, resolutions are included in coordinate systems, so it seems a logical and promising approach to account for quantum phenomena.
Dropping the hypothesis of differentiability
Scientific theories usually do not improve by adding complexity, but rather by starting from a more and more simple basis. This fact can be observed throughout the history of science. The reason is that starting from a less constrained basis provides more freedom and therefore allows richer phenomena to be included in the scope of the theory. Therefore, new theories usually do not contradict the old ones, but widen their domain of validity and include previous knowledge as special cases. For example, releasing the constraint of rigidity of space led Einstein to derive his theory of general relativity and to understand gravitation. As expected, this theory naturally includes Newton's theory, which is recovered as a linear approximation under weak fields.
The same type of approach has been followed by Nottale to build the theory of scale relativity. The basis of current theories is a continuous and twice-differentiable space. Space is by definition a continuum, but the assumption of differentiability is not supported by any fundamental reason. It is usually assumed only because it is observed that the first two derivatives of position with respect to time are needed to describe motion. Scale relativity theory is rooted in the idea that the constraint of differentiability can be relaxed and that this allows quantum laws to be derived.
In terms of geometry, differentiability means that a curve is sufficiently smooth and can be approximated by a tangent. Mathematically, two points are placed on this curve and one observes the slope of the straight line joining them as they become closer and closer. If the curve is smooth enough, this process converges (almost) everywhere and the curve is said to be differentiable. It is often believed that this property is common in nature. However, most natural objects have instead a very rough surface, or contour. For example, the bark of trees and snowflakes have a detailed structure that does not become smoother when the scale is refined. For such curves, the slope of the tangent fluctuates endlessly or diverges. The derivative is then undefined (almost) everywhere and the curve is said to be nondifferentiable.

Therefore, when the assumption of space differentiability is abandoned, there is an additional degree of freedom that allows the geometry of space to be extremely rough. The difficulty in this approach is that new mathematical tools are needed to model this geometry because the classical derivative cannot be used. Nottale found a solution to this problem by using the fact that nondifferentiability implies scale dependence and therefore the use of fractal geometry. Scale dependence means that the distances on a nondifferentiable curve depend on the scale of observation. It is therefore possible to maintain differential calculus provided that the scale at which derivatives are calculated is given, and that their definition includes no limit. It amounts to saying that nondifferentiable curves have a whole set of tangents at one point instead of one, and that there is a specific tangent at each scale.
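This endless fluctuation of the finite-difference slope can be checked numerically. The sketch below uses a truncated Weierstrass function as a stand-in for a nondifferentiable curve; the choice of function and of every parameter is purely illustrative and is not part of the theory.

```python
import math

def weierstrass(x: float, a: float = 0.5, b: float = 13.0, n_terms: int = 15) -> float:
    """Truncated Weierstrass series: continuous everywhere and, in the full
    infinite sum, differentiable nowhere.  Fifteen terms keep the curve
    'rough' down to point spacings of roughly 1e-8."""
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(n_terms))

x0 = 0.3
for k in range(1, 9):
    h = 10.0 ** (-k)
    slope = (weierstrass(x0 + h) - weierstrass(x0)) / h   # finite-difference "tangent"
    print(f"h = 1e-{k}:  slope = {slope:+.3e}")
# The slopes neither settle down nor stay bounded as h shrinks: each
# resolution h sees its own tangent, i.e. the derivative is scale dependent.
```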
To abandon the hypothesis of differentiability does not mean abandoning differentiability altogether. Instead, this leads to a more general framework in which both the differentiable and the non-differentiable cases are included. Combined with the relativity of motion, scale relativity thus by construction extends and contains general relativity. Just as general relativity becomes possible when we drop the hypothesis of a flat (Euclidean) space-time, allowing the possibility of a curved space-time, scale relativity becomes possible when we abandon the hypothesis of differentiability, allowing the possibility of a fractal space-time. The objective is then to describe a continuous space-time which is not everywhere differentiable, just as general relativity describes a continuous space-time which is not everywhere flat.
Abandoning differentiability does not mean abandoning differential equations. The concept of fractals makes it possible to handle the nondifferentiable case with differential equations. In differential calculus, the concept of limit can be seen as a zoom; in this generalization of differential calculus, one does not look only at the limiting zooms (zero and infinity) but at everything in between, that is, at all possible zooms.
In sum, we can drop the hypothesis of the differentiability of space-time, keeping differential equations, provided that fractal geometries are used. With them, we can still deal with the nondifferentiable case with the tools of differential equations. This leads to a double differential equation treatment: in space-time and in scale space.
Where Einstein showed that space-time is curved, Nottale argues that it is not only curved but also fractal. Nottale has proven a key theorem which shows that a space which is continuous and non-differentiable is necessarily fractal, meaning that such a space explicitly depends on scale.
Importantly, the theory does not merely describe fractal objects in a given space. Instead, it is space itself which is fractal. Understanding what a fractal space means requires studying not just fractal curves, but also fractal surfaces, fractal volumes, and so on.
Mathematically, a fractal space-time is defined as a nondifferentiable generalization of Riemannian geometry. Such a fractal space-time geometry is the natural choice to develop this new principle of relativity, in the same way that curved geometries were needed to develop Einstein's theory of general relativity.
In the same way that general relativistic effects are not felt in a typical human life, the most radical effects of the fractality of spacetime appear only at the extreme limits of scales: micro scales or at cosmological scales. This approach therefore proposes to bridge not only the quantum and the classical, but also the classical and the cosmological, with fractal to non-fractal transitions (see Fig. 1). More plots of this transition can be seen in the literature.
Minimum and maximum invariant scales
A fundamental and elegant result of scale relativity is to propose a minimum and a maximum scale in physics, invariant under dilations, in much the same way as the speed of light is an upper limit for speed.
Minimum invariant scale
In special relativity, there is an unreachable speed, the speed of light. We can keep composing speeds, but the result always remains less than the speed of light: the composition of two velocities is less than their ordinary sum.
In special scale relativity, similarly unreachable observational scales are proposed: the Planck length scale (lP) and the Planck time scale (tP). Dilations are bounded by lP and tP, which means that we can keep dividing spatial or temporal intervals, but the result always remains larger than the Planck length and time scales. This is a result of special scale relativity (see section 2.7 below). Similarly, the composition of two scale changes is smaller than the ordinary product of the two dilation factors.
Maximum invariant scale
The choice of the maximum scale (noted L) is less easy to explain, but it essentially consists in identifying it with the cosmological constant through Λ = 1/L², i.e. L = Λ^(−1/2). This is motivated in part by dimensional analysis, which shows that the cosmological constant is the inverse of the square of a length, i.e. a curvature.
Galilean scale relativity
The theory of scale relativity follows a construction similar to that of the relativity of motion, which took place in three steps: Galilean, special and general relativity.
This is not surprising, as in both cases the goal is to find laws satisfying transformation laws including one parameter that is relative: the speed in the case of the relativity of movement; the resolution in the case of the relativity of scales.
Galilean scale relativity involves linear scale transformations, a constant fractal dimension, self-similarity and scale invariance. This situation is best illustrated with self-similar fractals: the length of geodesics varies with resolution as a power law, and the fractal dimension of the paths of free particles does not change under zooms. These are self-similar curves.
In Galilean relativity, recall that the laws of motion are the same in all inertial frames. Galileo famously concluded that "the movement is like nothing". In the case of self-similar fractals, paraphrasing Galileo, one could say that "scaling is like nothing". Indeed, the same patterns occur at different scales, so scaling is not noticeable, it is like nothing.
In the relativity of movement, Galileo's theory is an additive Galilean group:
- X' = X - VT
- T' = T
However, if we consider scale transformations (dilations and contractions), the laws are products, not sums. This can be seen from the necessity of using units of measurement. Indeed, when we say that an object measures 10 meters, we actually mean that the object measures 10 times the predetermined length called the "meter". The number 10 is actually a scale ratio of two lengths, 10 m / 1 m, where 10 m is the measured quantity and 1 m is the arbitrary defining unit. This is the reason why the group is multiplicative.
Moreover, an arbitrary scale e has no physical meaning in itself (like the bare number 10); only scale ratios r = e'/e have a meaning, in our example r = 10/1. Using the Gell-Mann-Lévy method, we can introduce a more relevant scale variable, V = ln(e'/e), and recover an additive group for scale transformations by taking the logarithm, which converts products into sums.
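A two-line numerical check of this change of variable (a minimal sketch; the numerical values are arbitrary):

```python
import math

# Dilations compose multiplicatively: zooming by rho1 and then by rho2
# amounts to zooming by rho1 * rho2.
rho1, rho2 = 3.0, 5.0                      # two arbitrary scale ratios e'/e
V1, V2 = math.log(rho1), math.log(rho2)    # "scale velocities" V = ln(e'/e)

# Taking logarithms turns the multiplicative group into an additive one:
print(math.isclose(math.log(rho1 * rho2), V1 + V2))   # True: products become sums
```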
When the principle of relativity of motion is added to the principle of scale relativity, a transition appears in the structure of geodesics at large scales: trajectories no longer depend on the resolution and become classical. This explains the shift of behavior from quantum to classical. See also Fig. 1.
Special scale relativity
Special scale relativity can be seen as a correction of galilean scale relativity, where Galilean transformations are replaced by Lorentz transformations. The "corrections remain small at 'large' scale (i.e. around the Compton scale of particles) and increase when going to smaller length scales (i.e. large energies) in the same way as motion-relativistic corrections increase when going to large speeds".
In Galilean relativity, it was considered "obvious" that we could add speeds without limit (w = u + v); this composition law for speeds went unchallenged. However, Poincaré and Einstein did challenge it with special relativity, setting a maximum speed for motion, the speed of light. Formally, if v is a velocity, v ⊕ c = c. The status of the speed of light in special relativity is that of a horizon: unreachable, impassable, invariant under changes of motion.
Regarding scale, we are still within a Galilean kind of thinking. Indeed, we assume without justification that the composition of two dilations is ρ × ρ = ρ². Written with logarithms, this becomes ln ρ + ln ρ = 2 ln ρ. However, nothing guarantees that this law holds at quantum or cosmic scales. In fact, this dilation law is corrected in special scale relativity, where the composition of two identical dilations becomes ln ρ ⊕ ln ρ = 2 ln ρ / (1 + (ln ρ)²).
More generally, in special relativity the composition law for velocities differs from the Galilean approximation and becomes (with the speed of light c = 1):
- u ⊕ v = (u + v) / (1 + u * v)
Similarly, in special scale relativity, the composition law for dilations differs from the Galilean intuition and becomes (using logarithms in a base K, which absorbs a constant C = ln K playing the same role as c):
- log ρ₁ ⊕ log ρ₂ = (log ρ₁ + log ρ₂) / (1 + log ρ₁ · log ρ₂)
The Planck scale plays in special scale relativity a role similar to that of the speed of light in special relativity: it is a horizon for small scales, unreachable, impassable, and invariant under scale changes (dilations and contractions). The consequence is that applying the same contraction ρ twice to an object yields a contraction weaker than ρ × ρ. Formally, if ρ is a contraction, ρ · lP = lP.
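A small numerical sketch of this behaviour, using the composition law quoted above in the normalization where the invariant Planck-like scale corresponds to a log-scale of 1 (the contraction factor chosen below is arbitrary and purely illustrative):

```python
def compose(u: float, v: float) -> float:
    """Lorentz-like composition of two log-scale ratios, normalized so that
    the invariant (Planck-like) scale sits at u = 1, just as c = 1 in
    special relativity."""
    return (u + v) / (1.0 + u * v)

u = 0.0
for step in range(1, 6):
    u = compose(u, 0.9)          # apply the same contraction over and over
    print(f"after {step} contractions: log-scale = {u:.6f}")
# The result creeps towards 1 (the Planck-scale horizon) but never reaches
# or exceeds it, exactly as repeated boosts never reach the speed of light.
```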
As noted above, there is also an unreachable, impassable maximum scale, invariant under scale changes, which is the cosmic length L. In particular, it is invariant under the expansion of the universe.
General scale relativity
In Galilean scale relativity, spacetime was fractal with constant fractal dimensions. In special scale relativity, fractal dimensions can vary. This varying fractal dimension remains however constrained by a log-Lorentz law. This means that the laws satisfy a logarithmic version of the Lorentz transformation. The varying fractal dimension is covariant, in a similar way as proper time is covariant in special relativity.
In general scale relativity, the fractal dimension is not constrained anymore, and can take any value. In other words, it is the situation where there is curvature in scale space. Einstein's curved space-time becomes a particular case of the more general fractal spacetime.
General scale relativity is much more complicated, technical, and less developed than its Galilean and special versions. It involves non-linear laws, scale dynamics and gauge fields. In the case of non self-similarity, changing scales generates a new scale-force or scale-field which needs to be taken into account in a scale dynamics approach. Quantum mechanics then needs to be analyzed in scale space.
Finally, in general scale relativity, we need to take into account both movement and scale transformations, where scale variables depend on space-time coordinates. More details about the implications for abelian gauge fields and non-abelian gauge fields can be found in the literature. Nottale's 2011 book provides the state of the art.
To sum up, one can see some structural similarities between the relativity of movement and the relativity of scales in Table 1:
|Relativity||Variables defining the coordinate system||Variables characterizing the state of the coordinate system|
|Movement||Space (position), time||Velocity, acceleration|
|Scale||Length of a fractal, variable fractal dimension||Resolution, scale acceleration|

Table 1. Comparison between the relativity of movement and the relativity of scales. In both cases, there are two kinds of variables linked to the coordinate systems: variables which define the coordinate system, and variables that characterize its state. In this analogy, the resolution can be assimilated to a speed; acceleration to a scale acceleration; space to the length of a fractal; and time, to the variable fractal dimension. Table adapted from this paper.
Consequences for quantum mechanics
The fractality of space-time implies an infinity of virtual geodesics, which already means that a fluid-like description is needed. This view is not entirely new: many authors have noticed fractal properties at quantum scales, suggesting that typical quantum mechanical paths are fractal (see the cited reviews). However, the idea of treating a fluid of geodesics in a fractal spacetime is an original proposal of Nottale's.
In scale relativity, quantum mechanical effects appear as effects of fractal structures on the movement. The fundamental indeterminism and nonlocality of quantum mechanics are deduced from the fractal geometry itself.
There is an analogy between the interpretation of gravitation in general relativity and quantum effects in scale relativity. Indeed, if gravitation is a manifestation of space-time curvature in general relativity, quantum effects are manifestations of a fractal space-time in scale relativity.
To sum up, there are two aspects which allow scale relativity to shed light on quantum mechanics. On the one hand, fractal fluctuations themselves are hypothesized to lead to quantum effects. On the other hand, non-differentiability leads to a local irreversibility of the dynamics and therefore to the use of complex numbers.
Quantum mechanics thus receives not only a new interpretation, but a firm foundation in relativity principles.
As Philip Turner summarized:
the structure of space has both a smooth (differentiable) component at the macro-scale and a chaotic, fractal (non-differentiable) component at the micro-scale, the transition taking place at the de Broglie length scale.
This transition is explained with Galilean scale relativity (see also above).
Derivation of quantum mechanics' postulates
Starting from scale relativity, it is possible to derive the fundamental "postulates" of quantum mechanics. More specifically, building on the key theorem showing that a space which is continuous and non-differentiable is necessarily fractal (see section 2.4), Schrödinger's equation and Born's and von Neumann's postulates are derived.
To derive Schrödinger's equation, Nottale started with Newton's second law of motion, and used the result of the key theorem. Many subsequent works then confirmed the derivation.
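Schematically, and leaving aside the treatment of the fractal fluctuations, the published derivation introduces a complex velocity field and a scale-covariant time derivative, writes Newton's second law with this derivative, and then changes variables to a wave function. The compressed outline below follows the notation used in Nottale's papers (see Nottale 2011 and Célérier & Nottale 2004 for the full treatment); it is a sketch, not a complete derivation.

```latex
% Complex velocity and scale-covariant derivative (D: fractal-fluctuation parameter)
\[ \mathcal{V} = V - iU, \qquad
   \frac{\hat{d}}{dt} = \frac{\partial}{\partial t} + \mathcal{V}\cdot\nabla - i\mathcal{D}\,\Delta . \]
% Newton's second law written with the covariant derivative
\[ m\,\frac{\hat{d}\mathcal{V}}{dt} = -\nabla\phi . \]
% Change of variables to a wave function
\[ \psi = e^{\,iS/(2m\mathcal{D})}, \qquad \mathcal{V} = -2i\mathcal{D}\,\nabla\ln\psi . \]
% Integration then yields the Schrodinger equation, with hbar identified as 2mD
\[ i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^{2}}{2m}\,\Delta\psi + \phi\,\psi ,
   \qquad \hbar \equiv 2m\mathcal{D} . \]
```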
The Schrödinger equation obtained in this way is in fact a generalized one, and it opens the way to a macroscopic quantum mechanics (see below for validated empirical predictions in astrophysics). This may also help to better understand macroscopic quantum phenomena in the future.
The significance of these results is considerable: the foundations of quantum mechanics, which were up to now axiomatic, are logically derived from more primary relativity principles and methods.
Gauge fields appear when scale and movements are combined. Scale relativity proposes a geometric theory of gauge fields. As Turner explains:
The theory offers a new interpretation of gauge transformations and gauge fields (both Abelian and non-Abelian), which are manifestations of the fractality of space-time, in the same way that gravitation is derived from its curvature.
The relationships between fractal space-time, gauge fields and quantum mechanics are technical and advanced subject matters, elaborated in detail in Nottale's latest book.
Consequences for elementary particle physics
Scale relativity gives a geometric interpretation to charges, which are now "defined as the conservative quantities that are built from the new scale symmetries". Relations between mass scales and coupling constants can be theoretically established, and some of them empirically validated. This is possible because in scale relativity, the problem of divergences in quantum field theory is resolved. Indeed, in the new framework, masses and charges become finite, even at infinite energy. In special scale relativity, the possible scale ratios become limited, constraining in a geometric way the quantization of charges. Let us compare a few theoretical predictions with their experimental measures.
Nottale's latest theoretical prediction of the fine-structure constant at the Z0 scale is:
- α⁻¹(mZ) = 128.92
By comparison, a recent experimental measure gives:
- α⁻¹(mZ) = 128.91 ± 0.02
At low energy, the theoretical prediction for the fine-structure constant is:
- α⁻¹ = 137.01 ± 0.035,
which agrees, within the quoted uncertainty, with the experimental value:
- α⁻¹ = 137.036
SU(2) coupling at Z scale
- α₂⁻¹(mZ) = 29.8169 ± 0.0002
While the experimental value gives:
- α₂⁻¹(mZ) = 29.802 ± 0.027.
Strong nuclear force at Z scale
Special scale relativity predicts the value of the strong coupling constant with great precision, as later experimental measurements confirmed. The first prediction of the strong coupling at the Z energy scale was made in 1992:
- αS (mZ) = 0.1165 ± 0.0005
A recent and refined theoretical estimate gives:
- αS (mZ) = 0.1173 ± 0.0004,
which fits very well with the experimental measure:
- αS (mZ) = 0.1176 ± 0.0009
Mass of the electron
As an application of this new approach to gauge fields, a theoretical estimate of the mass of the electron (me) is possible from the experimental value of the fine-structure constant. This leads to very good agreement:
- me(theoretical) = 1.007 me (experimental)
Some chaotic systems can be analyzed thanks to a macroquantum mechanics. The main tool here is the generalized Schrödinger equation, which brings the statistical predictability characteristic of quantum mechanics to other scales in nature. The equation predicts probability density peaks. For example, the positions of exoplanets can be predicted in a statistical manner: the theory predicts that planets are more likely to be found at certain preferred distances from their star. As Baryshev and Teerikorpi write:
With his equation for the probability density of planetary orbits around a star, Nottale has seemingly come close to the old analogy which saw a similarity between our solar system and an atom in which electrons orbit the nucleus. But now the analogy is deeper and mathematically and physically supported: it comes from the suggestion that chaotic planetary orbits on very long time scales have preferred sizes, the roots of which go to fractal space-time and generalized Newtonian equation of motion which assumes the form of the quantum Schrödinger equation.
However, as Nottale acknowledges, this general approach is not totally new:
The suggestion to use the formalism of quantum mechanics for the treatment of macroscopic problems, in particular for understanding structures in the solar system, dates back to the beginnings of the quantum theory
At the scale of Earth's orbit, scale relativity predicts probability peaks for space debris at altitudes of 718 km and 1475 km, in broad agreement with the observed peaks at 850 km and 1475 km. Da Rocha and Nottale suggest that the dynamical braking by the Earth's atmosphere may be responsible for the difference between the theoretical prediction and the observational data for the first peak.
Scale relativity predicts a new law for interplanetary distances, offering an alternative to the now-falsified Titius-Bode "law". However, the predictions here are statistical, not deterministic as in Newtonian dynamics. Besides being statistical, the scale-relativistic law has a different theoretical form and is more reliable than the original Titius-Bode version:
The Titius-Bode "law" of planetary distance is of the form a + b × cⁿ, with a = 0.4 AU, b = 0.3 AU and c = 2 in its original version. It is partly inconsistent — Mercury corresponds to n = −∞, Venus to n = 0, the Earth to n = 1, etc. It therefore "predicts" an infinity of orbits between Mercury and Venus and fails for the main asteroid belt and beyond Saturn. It has been shown by Herrmann (1997) that its agreement with the observed distances is not statistically significant. ... [I]n the scale relativity framework, the predicted law of distance is not a Titius-Bode-like power law but a more constrained and statistically significant quadratic law of the form aₙ = a₀n².
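As a concrete reading of this quadratic law, the toy calculation below simply tabulates a_n = a0 · n² for the first few ranks. Here a0 is a free normalization fitted separately for each planetary system; the value used below is only a placeholder of roughly the right order of magnitude for an inner-solar-system-like fit, not Nottale's fitted value.

```python
# Quadratic law of planetary distances quoted above: a_n = a0 * n**2.
a0 = 0.043  # AU; placeholder normalization, to be fitted per system

for n in range(1, 7):
    a_n = a0 * n ** 2
    print(f"rank n = {n}:  predicted semimajor axis ≈ {a_n:.3f} AU")
```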
The method also applies to other extrasolar systems. Let us illustrate this with the first exoplanets found around the pulsar PSR B1257+12. Three planets, A, B and C have been found. Their orbital period ratios (noted PA/PC for the period ratio of planet A to C) can be estimated and compared to observations. Using the macroscopic Schrödinger equation, the recent theoretical estimates are:
- (PA/PC)^(1/3) = 0.63593 (predicted)
- (PB/PC)^(1/3) = 0.8787 (predicted),
which fit the observed values with great precision:
- (PA/PC)^(1/3) = 0.63597 (observed)
- (PB/PC)^(1/3) = 0.8783 (observed).
The puzzling fact that many exoplanets (e.g. hot Jupiters) are so close to their parent stars receives a natural explanation in this framework. Indeed, it corresponds to the fundamental orbital of the model, where (exo)planets lie at 0.04 AU per solar mass of their parent star.
Daniel da Rocha studied the velocity of about 2000 galaxy pairs, which gave statistically significant results when compared to the theoretical structuration in phase space from scale relativity. The method and tools here are similar to the one used for explaining the structure in solar systems.
Similar successful results apply at other extragalactic scales: the local group of galaxies, clusters of galaxies, the local supercluster and other very large scale structures.
Scale relativity suggests that the fractality of matter contributes to the phenomenon of dark matter. Indeed, some of the dynamical and gravitational effects which seem to require unseen matter are suggested to be consequences of the fractality of space on very large scales.
In the same way as quantum physics differs from classical physics at very small scales because of fractal effects, symmetrically, at very large scales, scale relativity also predicts that corrections from the fractality of space-time must be taken into account (see also Fig. 1).
Such an interpretation is somewhat similar in spirit to modified Newtonian dynamics (MOND), although here the approach is founded on relativity principles. Indeed, in MOND, Newtonian dynamics is modified in an ad hoc manner to account for the new effects, whereas in scale relativity it is the new fractal geometric field taken into consideration which leads to the emergence of a dark potential.
On the largest scales, scale relativity offers a new perspective on the issue of redshift quantization. With reasoning similar to that which yields probability peaks for planetary velocities, this can be generalized to larger, intergalactic scales. Nottale writes:
In the same way as there are well-established structures in the position space (stars, clusters of stars, galaxies, groups of galaxies, clusters of galaxies, large scale structures), the velocity probability peaks are simply the manifestation of structuration in the velocity space. In other words, as it is already well-known in classical mechanics, a full view of the structuring can be obtained in phase space.
Large numbers hypothesis
Nottale noticed that reasoning about scales was a promising road towards explaining the large numbers hypothesis. This was elaborated in more detail in a working paper. The scale-relativistic way of explaining the large numbers hypothesis was later discussed by Nottale and by Sidharth.
Prediction of the cosmological constant
In scale relativity, the cosmological constant is interpreted as a curvature: dimensional analysis shows that it is indeed the inverse of the square of a length. The predicted value of the cosmological constant, back in 1993, was:
- ΩΛh² = 0.36
Depending on model choices, the most recent predictions give the following range:
- 0.311 < ΩΛh² (predicted) < 0.356,
while the measured cosmological constant from the Planck satellite is:
- ΩΛh² (measured) = 0.318 ± 0.012.
Given the improvements of the empirical measures from 1993 until 2011, Nottale commented:
The convergence of the observational values towards the theoretical estimate, despite an improvement of the precision by a factor of more than 20, is striking.
Dark energy can be considered as a measurement of the cosmological constant. In scale relativity, dark energy would come from a potential energy manifested by the fractal geometry of the universe at large scales, in the same way as the Newtonian potential is a manifestation of its curved geometry in general relativity.
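To make the dimensional identification Λ = 1/L² concrete, the measured dark-energy density quoted above can be converted into the corresponding invariant length. The back-of-the-envelope sketch below uses only standard cosmological relations (Λ = 3 ΩΛ H0²/c²); the assumed value of h is illustrative, and the result is merely a unit conversion, not a prediction of the theory.

```python
import math

omega_lambda_h2 = 0.318   # measured value quoted above
h = 0.70                  # assumed dimensionless Hubble parameter (illustrative)
c = 2.998e8               # speed of light, m/s
Mpc = 3.086e22            # one megaparsec in metres

H0 = 100.0 * h * 1e3 / Mpc                                # Hubble constant, 1/s
Lambda_ = 3.0 * (omega_lambda_h2 / h**2) * (H0 / c) ** 2  # cosmological constant, 1/m^2
L = 1.0 / math.sqrt(Lambda_)                              # invariant maximum length, m

print(f"Lambda ≈ {Lambda_:.2e} m^-2")
print(f"L = Lambda^(-1/2) ≈ {L:.2e} m ≈ {L / (1e3 * Mpc):.1f} Gpc")
```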
Scale relativity offers a new perspective on the old horizon problem in cosmology. The problem is that different regions of the universe have not had contact with each other because of the great distances between them, yet they have the same temperature and other physical properties. This should not be possible, given that the transfer of information (or energy, heat, etc.) can occur at most at the speed of light.
Nottale writes that special scale relativity "naturally solves the problem because of the new behaviour it implies for light cones. Though there is no inflation in the usual sense, since the scale factor time dependence is unchanged with respect to standard cosmology, there is an inflation of the light cone as t → Λ/c", where Λ here denotes the Planck length scale (ħG/c³)^(1/2). This inflation of the light cones makes them flare and cross themselves, thereby allowing a causal connection between any two points and solving the horizon problem.
Applications to other fields
Although scale relativity started as a spacetime theory, its methods and concepts can and have been used in other fields. For example, quantum-classical kinds of transitions can be at play at intermediate scales, provided that there exists a fractal medium which is locally nondifferentiable. Such a fractal medium then plays a role similar to that played by fractal spacetime for particles. Objects and particles embedded in such a medium will acquire macroquantum properties. As examples, we can mention gravitational structuring in astrophysics (see section 5), turbulence, superconductivity at laboratory scales (see section 7.1), and also modeling in geography (section 7.4).
What follows are not strict applications of scale relativity, but rather models constructed with the general idea of relativity of scales. Fractal models, and in particular self-similar fractal laws have been applied to describe numerous biological systems such as trees, blood networks, or plants. It is thus to be expected that the mathematical tools developed through a fractal space-time theory can have a wider variety of applications to describe fractal systems.
Superconductivity and macroquantum phenomena
The generalized Schrödinger equation, under certain conditions, can apply at macroscopic scales. This leads to the proposal that quantum-like phenomena need not occur only at quantum scales. In a recent paper, Turner and Nottale proposed new ways to explore the origins of macroscopic quantum coherence in high-temperature superconductivity.
If we assume that morphologies come from a growth process, we can model this growth as an infinite family of virtual, fractal, and locally irreversible trajectories. This makes it possible to write a growth equation in a form which can be integrated into a Schrödinger-like equation.
The structuring implied by such a generalized Schrödinger equation provides a new basis to study, with a purely energetic approach, the issues of formation, duplication, bifurcation and hierarchical organization of structures.
An inspiring example is the solution describing growth from a center, which bears similarities to the problem of particle scattering in quantum mechanics. Among the simplest solutions (a central potential with spherical symmetry), one finds a flower-like shape resembling the common Platycodon flower (see Fig. 2). In honor of Erwin Schrödinger, Nottale, Chaline and Grou named their book "Flowers for Schrödinger" (Des fleurs pour Schrödinger).
In a short paper, researchers inspired by scale relativity proposed a log-periodic law for the development of the human embryo, which fits the observed stages of human embryonic development reasonably well.
Other studies suggest that many living-system processes, because they are embedded in a fractal medium, are expected to display wave-like and quantized structuration.
Singularity and evolutionary trees
Utilizing the fractal mathematics of Mandelbrot (1983), these authors develop a model based upon a fractal tree of the time sequences of major evolutionary leaps at various scales (a log-periodic law of acceleration and deceleration). The application of the model to the evolution of western civilization shows evidence of an acceleration in the succession (pattern) of economic crisis/non-crisis periods, pointing to a next crisis in the period 2015–2020, with a critical point Tc = 2080. The meaning of Tc in this approach is the limit of the evolutionary capacity of the analyzed group, and it is biologically analogous to the end of a species and the emergence of a new one.
Reception and critique
Scale relativity and other approaches
It may help, in understanding scale relativity, to compare it with various other approaches to unifying quantum and classical theories. There are two main roads to unifying quantum mechanics and relativity: starting from quantum mechanics, or starting from relativity. Quantum gravity theories explore the former, scale relativity the latter. Quantum gravity theories try to make a quantum theory of spacetime, whereas scale relativity is a spacetime theory of quantum theory.
Although string theory and scale relativity start from different assumptions to tackle the issue of reconciling quantum mechanics and relativity theory, the two approaches need not be opposed. Indeed, Castro suggested combining string theory with the principle of scale relativity:
It was emphasized by Nottale in his book that a full motion plus scale relativity including all spacetime components, angles and rotations remains to be constructed. In particular the general theory of scale relativity. Our aim is to show that string theory provides an important step in that direction and vice versa: the scale relativity principle must be operating in string theory.
Scale relativity is based on a geometrical approach, and thereby recovers the quantum laws, instead of assuming them. This distinguishes it from other quantum gravity approaches. Nottale comments:
The main difference is that these quantum gravity studies assume the quantum laws to be set as fundamental laws. In such a framework, the fractal geometry of space-time at the Planck scale is a consequence of the quantum nature of physical laws, so that the fractality and the quantum nature co-exist as two different things.
In the scale relativity theory, there are not two things (in analogy with Einstein's general relativity theory in which gravitation is a manifestation of the curvature of space-time): the quantum laws are considered as manifestations of the fractality and nondifferentiability of space-time, so that they do not have to be added to the geometric description.
Loop quantum gravity
Loop quantum gravity and scale relativity have in common that they start from relativity theory and its principles, and that they fulfill the condition of background independence.
El Naschie's E-Infinity theory
El Naschie has developed a related but distinct fractal space-time theory, in which he gives up not only differentiability but also continuity. El Naschie thus uses a "Cantorian" space-time and relies mostly on number theory (see Nottale 2011, p. 7). This is to be contrasted with scale relativity, which keeps the hypothesis of continuity and thus works preferentially with mathematical analysis and fractals.
Causal dynamical triangulation
Through computer simulations of causal dynamical triangulation, a fractal-to-nonfractal transition was found from quantum scales to larger scales. This result seems compatible with the quantum-classical transition deduced, in a different way, from the theoretical framework of scale relativity.
Noncommutative geometry
For both scale relativity and non-commutative geometries, particles are geometric properties of space-time. The intersection of the two theories seems fruitful and remains to be explored. In particular, Nottale further generalized this non-commutativity, saying that it "is now at the level of the fractal space-time itself, which therefore fundamentally comes under Connes's noncommutative geometry. Moreover, this noncommutativity might be considered as a key for a future better understanding of the parity and CP violations, which will not be developed here."
Doubly special relativity
Both theories have identified the Planck length as a fundamental minimum scale. However, as Nottale comments:
the main difference between the "Doubly-Special-Relativity" approach and the scale relativity one is that we have identified the question of defining an invariant length-scale as coming under a relativity of scales. Therefore the new group to be constructed is a multiplicative group that becomes additive only when working with the logarithms of scale ratios, which are definitely the physically relevant scale variables, as we have shown by applying the Gell-Mann–Levy method to the construction of the dilation operator (see Sec. 4.2.1).
Nelson stochastic mechanics
At first sight, scale relativity and Nelson's stochastic mechanics share features, such as the derivation of the Schrödinger equation. Some authors have pointed out problems with Nelson's mechanics concerning multi-time correlations in repeated measurements, although these perceived problems may be resolvable. By contrast, scale relativity is not founded on a stochastic approach. As Nottale writes:
Here, the fractality of the space-time continuum is derived from its nondifferentiability, it is constrained by the principle of scale relativity and the Dirac equation is derived as an integral of the geodesic equation. This is therefore not a stochastic approach in its essence, even though stochastic variables must be introduced as a consequence of the new geometry, so it does not come under the contradictions encountered by stochastic mechanics.
In the scale relativity description, there is no longer any separation between a "microscopic" description and an emergent "macroscopic" description (at the level of the wave function), since both are accounted for in the double scale space and position space representation.
Special and general relativity are notoriously hard for non-specialists to understand. This is partly because our psychological and sociological use of the concepts of space and time is not the same as their use in physics. Yet the relativity of scales is still harder to apprehend than the other relativity theories: humans can change their positions and velocities, but have virtually no experience of shrinking or dilating themselves.
Sociologists Bontems and Gingras did a detailed bibliometrical analysis of scale relativity and showed the difficulty for such a theory with a different theoretical starting point to compete with well-established paradigms such as string theory.
Back in 2007, they considered the theory to be neither mainstream (few people work on it compared with other paradigms) nor controversial (there is very little informed academic discussion of it). The two sociologists thus qualified the theory as "marginal", in the sense that it is developed within academia but has not become controversial.
They also show that Nottale has a double career. First, a classical one, working on gravitational lensing, and a second one, about scale relativity. Nottale first secured his scientific reputation with important publications about gravitational lensing, then obtained a stable academic position, giving him more freedom to explore the foundations of spacetime and quantum mechanics.
A possible obstacle to the growth in popularity of scale relativity is that the fractal geometries needed for special and general scale relativity are less well known, and less developed mathematically, than the simple and well-known self-similar fractals. This technical difficulty may make the advanced concepts of the theory harder to learn: physicists interested in scale relativity need to invest some time in understanding fractal geometries, much as working with Einstein's general relativity requires learning non-Euclidean geometries. The generality and transdisciplinary nature of the theory also led Auffray and Noble to comment: "The scale relativity theory and tools extend the scope of current domain-specific theories, which are naturally recovered, not replaced, in the new framework. This may explain why the community of physicists has been slow to recognize its potential and even to challenge it."
Nottale's popular book, written in French, has been compared with Einstein's popular book Relativity: The Special and the General Theory. A future translation of this book from French into English might help the popularization of the theory.
The reactions from scientists to scale relativity are generally positive. For example, Baryshev and Teerikorpi write:
Though Nottale's theory is still developing and not yet a generally accepted part of physics, there are already many exciting views and predictions surfacing from the new formalism. It is concerned in particular with the frontier domains of modern physics, i.e. small length- and time-scales (microworld, elementary particles), large length-scales (cosmology), and long time-scales.
Regarding the predictions of planetary spacings, Potter and Jargodzki commented:
In the 1990s, applying chaos theory to gravitationally bound systems, L. Nottale found that statistical fits indicate that the planet orbital distances, including that of Pluto, and the major satellites of the Jovian planets, follow a numerical scheme with their orbital radii proportional to the squares of integers n2 extremely well!
Auffray and Noble gave an overview:
Scale relativity has implications for every aspect of physics, from elementary particle physics to astrophysics and cosmology. It provides numerous examples of theoretical predictions of standard model parameters, a theoretical expectation for the Higgs boson mass which will be potentially assessed in the coming years by the Large Hadron Collider, and a prediction of the cosmological constant which remains within the range of increasingly refined observational data. Strikingly, many predictions in astrophysics have already been validated through observations such as the distribution of exoplanets or the formation of extragalactic structures.
One critical review, by contrast, pointed out that the book contains "a prediction for the Higgs boson that should have been observed at mH ≃ 113.7 GeV... it would appear, according to the book itself, that the theory it describes would be already ruled out by LHC data!"
However, this prediction was initially made at a time when the Higgs boson mass was totally unknown. Additionally, the prediction does not rely on scale relativity itself, but on a new suggested form of the electroweak theory. The final LHC result is mH = 125.6 ± 0.3 GeV, and lies therefore at about 110% of this early estimate.
Particle physicist and skeptic Victor Stenger also noticed that the theory "predicts a nonzero value of the cosmological constant in the right ballpark". He also acknowledged that the theory "makes a number of other remarkable predictions".
- Nottale 1989.
- Ord 1983.
- Nottale & Schneider 1984.
- Feynman & Hibbs 1965.
- Müller 2005, p. 71.
- Abbott & Wise 1981.
- Campesino-Romeo, D'Olivo & Socolovsky 1982.
- Allen 1983.
- Nottale 1992.
- Nottale: list of papers
- Nottale 1993a; Nottale 2011.
- Nottale 1998d; Nottale, Chaline & Grou 2000; Nottale, Célérier & Lehner 2009.
- Nottale 2011, p. 8.
- Nottale 1998a, p. 218.
- Mandelbrot 1983.
- Nottale 2004a.
- Nottale 1993a, p. 82.
- Nottale 1993a, p. 84.
- Nottale 1998a, p. 188.
- Sendra 2013a.
- Nottale 1993a, p. 304.
- Nottale 1996a, p. 915.
- Nottale 1998a, p. 161.
- Nottale 1993a, p. 299.
- Nottale 1996a.
- Nottale 2003.
- Nottale 2011, sec. 12.6.
- Galileo 1991.
- Gell-Mann & Lévy 1960.
- Nottale 1993a.
- Nottale 2011, p. 460.
- Nottale 1998a, p. 212.
- Nottale 1997.
- Nottale 2004b.
- Nottale 1994, p. 121.
- Nottale, Célérier & Lehner 2006.
- Nottale 2011.
- Forriez, Martin & Nottale 2010.
- Kröger 1997.
- Turner 2013.
- Célérier & Nottale 2004.
- Nottale & Célérier 2007.
- Nottale 1993a, sec. 5.6.
- Dubois 2000.
- Jumarie 2001.
- Cresson 2003.
- Ben Adda & Cresson 2004.
- Ben Adda & Cresson 2005.
- Jumarie 2006.
- Jumarie 2007.
- Nottale 2011, sec. 5.7.2.
- Nottale 2011, sec. 5.7.3.
- Célérier & Nottale 2003.
- Célérier & Nottale 2010.
- Nottale 2011, p. 297.
- Nottale 2011, p. 490.
- Yao 2006.
- Nottale 2011, p. 499.
- Particle Data Group 2000.
- Nottale 2010, pp. 123–24.
- Nottale 2011, p. 483.
- Baryshev & Teerikorpi 2002, p. 256.
- Nottale 2011, p. 589.
- Da Rocha & Nottale 2003, p. 577.
- Anz 2000.
- Nottale 1993a, pp. 311–21.
- Nottale, Schumacher & Gay 1997.
- Nottale 2011, p. 559.
- Wolszczan & Frail 1992.
- Nottale 1998b.
- Nottale 2011, p. 622.
- Konacki & Wolszczan 2003, p. L149.
- Nottale 2011, sec. 13.5.
- Nottale 1996b.
- Nottale 1998c.
- Nottale, Schumacher & Lefevre 2000.
- Da Rocha 2004.
- Nottale 2011, sec. 13.8.
- Nottale 2011, p. 520.
- Nottale 2011, p. 656.
- Nottale 1993a, p. 303.
- Nottale 1993b.
- Nottale 2011, pp. 543–45.
- Sidharth 2001.
- Nottale 1993a, p. 305.
- Nottale 2012.
- Planck Collaboration 2013.
- Nottale 2011, p. 554.
- Nottale 2011, p. 543.
- Nottale 1993a, p. 292.
- Dubrulle 2000.
- Nottale, Chaline & Grou 2000.
- Nottale, Célérier & Lehner 2002.
- Turner & Nottale 2015.
- Nottale 2007.
- Nottale, Célérier & Lehner 2009.
- Cash et al. 2002.
- Auffray & Nottale 2008.
- Nottale & Auffray 2008.
- Nottale 2013b.
- Nottale, Martin & Forriez 2012.
- Magee & Devezas 2011, p. 1370.
- Castro 1997, p. 275.
- Nottale 2011, p. 458.
- Loll 2008, p. 114006.
- Nottale 2011, p. 277.
- Connes 1994.
- Lapidus 2008.
- Nottale 2011, p. 459.
- Nelson 1966.
- Wang & Liang 1993.
- Blanchard & Serva 1995.
- Nottale 2011, p. 265.
- Nottale 2011, p. 360.
- Bontems & Gingras 2007.
- Karoji & Nottale 1976.
- Nottale & Vigier 1977.
- Vidal 2010.
- Auffray & Noble 2010, p. 303.
- Nottale 1998a.
- Merker 1999, p. 166.
- Baryshev & Teerikorpi 2002, p. 255.
- Potter & Jargodzki 2005, p. 113.
- Peter 2013.
- Nottale 2001.
- Beringer et al. 2013, p. 33.
- Stenger 2011, p. 100.
- Abbott, Laurence F.; Wise, Mark B. (1981). "Dimension of a Quantum-Mechanical Path" (PDF). American Journal of Physics. 49 (1): 37–39. Bibcode:1981AmJPh..49...37A. doi:10.1119/1.12657.
- Allen, A. D. (1983). "Fractals and Quantum Mechanics". Speculations in Science and Technology. 6 (2): 165–70. ISSN 0155-7785.
- Anz, Meador P. (2000). "A Decade of Growth" (PDF). The Orbital Debris Quarterly News. NASA JSC. 5 (4): 1–2.
- Auffray, C.; Noble, D. (2010). "Scale Relativity: an Extended Paradigm for Physics and Biology?". Foundations of Science. 16 (4): 303–5. arXiv: . doi:10.1007/s10699-010-9203-x.
- ———; Nottale, L. (2008). "Scale relativity theory and integrative systems biology: 1: Founding principles and scale laws" (PDF). Progress in Biophysics and Molecular Biology. 97 (1): 79–114. doi:10.1016/j.pbiomolbio.2007.09.002.
- Baryshev, Yurij; Teerikorpi, Pekka (2002). Discovery of Cosmic Fractals. River Edge, NJ: World Scientific.
- Ben Adda, Fayçal; Cresson, Jacky (2004). "Quantum Derivatives and the Schrödinger Equation". Chaos, Solitons & Fractals. 19 (5): 1323–34. Bibcode:2004CSF....19.1323B. doi:10.1016/S0960-0779(03)00339-4.
- ———; Cresson, Jacky (2005). "Fractional differential equations and the Schrödinger equation". Applied Mathematics and Computation. 161 (1): 323–45. doi:10.1016/j.amc.2003.12.031.
- Beringer J.; et al. (Particle Physics Group) (2013). Status of Higgs Boson Physics (PDF). PR. D86. 010001.
- Blanchard, Philippe; Serva, Maurizio (1995). "Reply to 'Comment on "Repeated measurements in stochastic mechanics"'". Phys. Rev. D. 51: 3132. Bibcode:1995PhRvD..51.3132B. doi:10.1103/PhysRevD.51.3132.
- Bontems, Vincent; Gingras, Yves (2007). "De la science normale à la science marginale. Analyse d'une bifurcation de trajectoire scientifique: le cas de la Théorie de la Relativité d'Echelle". Social Science Information (in French). 46 (4): 607–53. doi:10.1177/0539018407082595.
- Campesino-Romeo, E.; D'Olivo, J. C.; Socolovsky, M. (1982). "Hausdorff Dimension for the Quantum Harmonic Oscillator". Physics Letters A. 89 (7): 321–24. Bibcode:1982PhLA...89..321C. doi:10.1016/0375-9601(82)90182-7.
- Cash, R.; Chaline, J.; Nottale, L.; Grou, P. (2002). "Développement Humain et Loi Log-Périodique" (PDF). Comptes Rendus Biologies. 325 (5): 585–90. doi:10.1016/S1631-0691(02)01468-3.
- Castro, Carlos (1997). "String Theory, Scale Relativity and the Generalized Uncertainty Principle". Foundations of Physics Letters. 10 (3): 273–93. arXiv: . Bibcode:1997FoPhL..10..273C. doi:10.1007/BF02764209.
- Célérier, Marie-Noëlle; Nottale, Laurent (2003). "A Scale-Relativistic Derivation of the Dirac Equation". Electromagnetic Phenomena. 3: 70–80. arXiv: . Bibcode:2002hep.th...10027C.
- ———; Nottale, Laurent (2004). "Quantum–classical transition in Scale Relativity". Journal of Physics A: Mathematical and General. 37 (3): 931–955. arXiv: . Bibcode:2004JPhA...37..931C. doi:10.1088/0305-4470/37/3/026.
- ———; Nottale, Laurent (2010). "Electromagnetic Klein–Gordon and Dirac Equations in Scale Relativity". International Journal of Modern Physics A. 25 (22): 4239–53. arXiv: . Bibcode:2010IJMPA..25.4239C. doi:10.1142/S0217751X10050615.
- Connes, Alain (1994). Noncommutative Geometry. San Diego: Academic Press.
- Cresson, Jacky (2003). "Scale Calculus and the Schrödinger Equation". Journal of Mathematical Physics. 44 (11): 4907–38. arXiv: . Bibcode:2003JMP....44.4907C. doi:10.1063/1.1618923.
| <urn:uuid:dffaec3a-c061-411a-835a-2ca71c6d574c> | 3.359375 | 17,172 | Knowledge Article | Science & Tech. | 53.493401 | 95,627,272
"In a 125-mL Erlenmeyer flask containing a magnetic stirring bar, mix 0.05 mol of benzaldehyde with the theoretical quantity of acetone, and add one-half the mixture to a solution of 5 g of sodium hydroxide dissolved in 50 mL of water and 40 mL of ethanol at room temperature (Cleaning Up: Dilute the filtrate from the reaction mixture with water and neutralize it with dilute hydrochloric acid before flushing down the drain."
Using the above procedure, answer the questions below. Please explain your answer through illustration if necessary. Always make your explanation detailed.
1) p. 525 #5 How would you change the procedures if you wanted to synthesize benzalacetone (C6H5CH=CHCOCH3)?
2) How would you change the procedures if you wanted to synthesize benzalacetophenone (C6H5CH=CHCOC6H5)?
3) Draw the three geometric isomers of dibenzalacetone. Which do you expect as the product of the reaction 2 benzaldehyde + acetone → dibenzalacetone, and why?
4) Show a step-by-step mechanism for the reaction 2 benzaldehyde + acetone → dibenzalacetone.
5) Calculate a theoretical yield using the above procedure.
6) Show a step-by-step mechanism to make the Benzalacetone?
7) Show a step-by-step mechanism to make benzalacetophenone?
8) Why is it important to maintain the specified proportions of organic reagents in the 2 benzaldehyde + acetone → dibenzalacetone reaction?
9) Why is the condensation product, dibenzalacetone, formed in this reaction and not the aldol addition product, 1,5-dihydroxy-1,5-diphenylpentane-3-one?
10) Show the complete mechanism (step-by-step) for the formation of dibenzalacetone from benzaldehyde and benzalacetone.
11) The normal characteristic stretching frequency of the carbonyl group of a ketone in the IR spectrum is about 1720 cm-1. Pick out the carbonyl stretching frequency of dibenzalacetone in Fig. 37.6. Why is it so much lower than expected? | <urn:uuid:381e4e2d-1e97-44e2-9fa2-3021579ee1b2> | 3.0625 | 492 | Tutorial | Science & Tech. | 40.352135 | 95,627,276
Coral Pink Sand Dunes Tiger Beetle Threatened by Off-Road Vehicles, Drought, Climate Change
Washington, DC - The U.S. Fish and Wildlife Service (Service) announced today that it will propose to list the Coral Pink Sand Dunes tiger beetle as “threatened” under the Endangered Species Act (ESA), and to designate 2,276 acres of critical habitat for the species. The tiger beetle has been a candidate for listing for nearly 30 years. The Service identified off-road vehicle use, climate change, and drought as primary threats to the species.
“We commend the Service for recognizing and acting on the continuing threats to this rare insect,” said Taylor Jones, Endangered Species Advocate for WildEarth Guardians. “This gorgeous, fierce little creature is found in only one place on earth, and it deserves our respect and needs our protection.”
A portion of the tiger beetles’ habitat is located in a wilderness study area that the Bureau of Land Management (BLM) is required by law to protect, yet the agency has refused to halt damaging ORV use. Heidi McIntosh, associate director of the Southern Utah Wilderness Alliance, said, “This important listing could at long last put an end to destructive ORV use in one of the most beautiful and unique landscapes in the state. BLM’s stubborn refusal to protect this remarkable species and its habitat made this listing proposal inevitable.”
The dunes the Coral Pink Sand Dunes tiger beetle inhabits cover 3,500 acres, of which 2,000 acres is within Coral Pink Sand Dunes (CPSD) State Park in Utah. The remaining 1,500 acres are on adjacent Bureau of Land Management land partly within the Moquith Mountain Wilderness Study Area. The tiger beetle occurs consistently in only two populations, which occupy a total area of only about 500 acres. The northern population may not be self-sustaining, instead relying on dispersal from the central population.
Although core areas of tiger beetle habitat have been closed to ORVs since 1997 (207 acres within CPSD State Park and 370 acres on BLM land), ORV use still occurs in 52 percent of occupied CPSD tiger beetle habitat in the central population in the State Park, as well as in the dispersal corridor between the two populations. For the small northern population, enforcement of protections on BLM land is minimal and relies mainly on voluntary compliance. ORVs, aside from crushing beetle larvae and adults, can damage vegetation, reducing the beetle’s prey base and drying out their habitats even further. During years when their population is small, the beetles concentrate in the protected area in CPSD State Park. In years when beetle numbers are exceptionally high, a greater percentage of them are found outside the conservation areas where they are vulnerable to harm from ORVs.
ORV use has a history of reducing or eliminating tiger beetle populations; victims have included Northeastern Beach tiger beetle populations in several locations, a portion of the White Beach tiger beetle population in Maryland, and the hairy-necked tiger beetle, Siuslaw hairy-necked tiger beetle, and St. Anthony Dune tiger beetle populations in California, Oregon, Washington, and Idaho.
The Service has proposed to designate 2,276 acres of dunes in CPSD State Park (767 ac) and on BLM land (1,508 ac) as critical habitat for the beetle. Species with designated critical habitat are twice as likely to recover as those without critical habitat.
The tiger beetle was petitioned for listing by the Southern Utah Wilderness Alliance in 1994. It is one of 252 candidate species covered in WildEarth Guardians’ settlement agreement with the Service, announced on May 10, 2011, and approved by a federal court on September 9, 2011. The agreement obligates the agency to either list or find “not warranted” for protection all 252 candidates by September 2016.
# # #
Please contact Taylor Jones (email@example.com) for an image of the Coral Pink Sand Dunes tiger beetle. | <urn:uuid:cf3adee6-ba82-4d53-8c5e-94808cc57f07> | 2.828125 | 835 | News (Org.) | Science & Tech. | 37.817817 | 95,627,298 |
Department of Physics
Effect of coupling between excitons and gold nanoparticle surface plasmons on emission behavior of phosphorescent organic light-emitting diodes
Enhanced efficiency and reduced efficiency roll-off in phosphorescent organic light-emitting diodes (PhOLEDs) are realized by interposing a solution-processed gold nanoparticle (GNP)-based interlayer between the anode and the hole-injection layer. Transient photoluminescence measurements show a reduced lifetime of the triplet excitons in samples having a GNP-interlayer as compared to a control sample without the GNP-interlayer. The decrease in the triplet exciton lifetime, caused by the coupling between the triplet excitons and the localized surface plasmons (LSPs) excited by the GNPs, reduces the triplet–triplet and triplet–polaron annihilation processes, and thereby the efficiency roll-off in PhOLEDs. The GNP-interlayer also acts as an optical out-coupling layer that contributes to the efficiency enhancement, as demonstrated by theoretical simulation.
Keywords: Phosphorescent organic light-emitting diodes, Gold nanoparticle, Surface plasmons
Ji, Wenyu, Haifeng Zhao, Haigui Yang, and Fu Rong Zhu. "Effect of coupling between excitons and gold nanoparticle surface plasmons on emission behavior of phosphorescent organic light-emitting diodes." Organic Electronics 22 (2015): 154-159. | <urn:uuid:6f26b732-5da1-46ce-beff-f43cc46e2192> | 2.53125 | 335 | Academic Writing | Science & Tech. | 2.306957 | 95,627,302 |
Where does most of the energy come from in a shady section of a stream?
the center of the Earth
Energy humans get daily is derived from food. Food rich in carbohydrates provides more energy to sustain humans each day.
Think about it. Solar panels? Yeah, it does produce a lot of energy, but not the most. Electricity produces the most energy.
Directly or indirectly from the sun
Australia gets most of its energy from coal. Most of it comes from coal and oil produced locally.
the protein, in the middle, if the food comes from a living thing, such as an animal then it was the veins
The Sun provides most of the energy on Earth.
Most of the Earth's energy comes from the sun and its rays.
It usually comes from the sun.
The sun. Evaporation leads to rain in the mountains & gravity does the job from there.
The energy for running water in streams ultimately comes from the sun and gravity.
Fossil fuels, which provide virtually all the energy for transportation (mainly petroleum with some natural gas) and the majority of energy for electricity (mainly coal, natural gas and a small amount of petroleum). | <urn:uuid:bda99fd6-3557-48e0-842b-2aa3a6486a60> | 2.890625 | 255 | Q&A Forum | Science & Tech. | 60.056667 | 95,627,324
Scientific visualization (also spelled scientific visualisation) is an interdisciplinary branch of science. According to Friendly (2008), it is "primarily concerned with the visualization of three-dimensional phenomena (architectural, meteorological, medical, biological, etc.), where the emphasis is on realistic renderings of volumes, surfaces, illumination sources, and so forth, perhaps with a dynamic (time) component". It is also considered a subset of computer graphics, a branch of computer science. The purpose of scientific visualization is to graphically illustrate scientific data to enable scientists to understand, illustrate, and glean insight from their data.
One of the earliest examples of three-dimensional scientific visualisation was Maxwell's thermodynamic surface, sculpted in clay in 1874 by James Clerk Maxwell. This prefigured modern scientific visualization techniques that use computer graphics.
Notable early two-dimensional examples include the flow map of Napoleon's March on Moscow produced by Charles Joseph Minard in 1869; the "coxcombs" used by Florence Nightingale in 1857 as part of a campaign to improve sanitary conditions in the British army; and the dot map used by John Snow in 1855 to visualise the Broad Street cholera outbreak.
Scientific visualization using computer graphics gained in popularity as graphics matured. Primary applications were scalar fields and vector fields from computer simulations and also measured data. The primary methods for visualizing two-dimensional (2D) scalar fields are color mapping and drawing contour lines. 2D vector fields are visualized using glyphs and streamlines or line integral convolution methods. 2D tensor fields are often resolved to a vector field by using one of the two eigenvectors to represent the tensor at each point in the field and then visualized using vector field visualization methods.
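As a rough illustration of these 2D techniques (not taken from any particular tool discussed here; NumPy and Matplotlib are used only as convenient stand-ins), the sketch below color-maps a synthetic scalar field with contour lines drawn on top, and draws a synthetic vector field with streamlines:

```python
# Minimal sketch: color mapping + contour lines for a 2D scalar field,
# and streamlines for a 2D vector field, on a regular grid.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
scalar = np.exp(-(x**2 + y**2)) * np.cos(2 * x)   # synthetic scalar field
u, v = -y, x                                      # synthetic rotational vector field

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

im = ax1.pcolormesh(x, y, scalar, cmap="viridis", shading="auto")  # color map
ax1.contour(x, y, scalar, colors="black", linewidths=0.5)          # contour lines
fig.colorbar(im, ax=ax1)
ax1.set_title("Scalar field: color map + contours")

ax2.streamplot(x, y, u, v, density=1.2)   # glyphs would instead be ax2.quiver(x, y, u, v)
ax2.set_title("Vector field: streamlines")

plt.tight_layout()
plt.show()
```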
For 3D scalar fields the primary methods are volume rendering and isosurfaces. Methods for visualizing vector fields include glyphs (graphical icons) such as arrows, streamlines and streaklines, particle tracing, line integral convolution (LIC) and topological methods. Later, visualization techniques such as hyperstreamlines were developed to visualize 2D and 3D tensor fields.
Computer animation is the art, technique, and science of creating moving images via the use of computers. It is increasingly created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic, low-bandwidth, and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film. It is also referred to as CGI (computer-generated imagery or computer-generated imaging), especially when used in films. Applications include medical animation, which is most commonly utilized as an instructional tool for medical professionals or their patients.
Computer simulation is a computer program, or network of computers, that attempts to simulate an abstract model of a particular system. Computer simulations have become a useful part of mathematical modelling of many natural systems in physics, and computational physics, chemistry and biology; human systems in economics, psychology, and social science; and in the process of engineering and new technology, to gain insight into the operation of those systems, or to observe their behavior. The simultaneous visualization and simulation of a system is called visulation.
Computer simulations vary from computer programs that run a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for months. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using the traditional paper-and-pencil mathematical modeling: over 10 years ago, a desert-battle simulation, of one force invading another, involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computing Modernization Program.
Information visualization is the study of "the visual representation of large-scale collections of non-numerical information, such as files and lines of code in software systems, library and bibliographic databases, networks of relations on the internet, and so forth".
Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways. Visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. The key difference between scientific visualization and information visualization is that information visualization is often applied to data that is not generated by scientific inquiry. Some examples are graphical representations of data for business, government, news and social media.
Rendering is the process of generating an image from a model, by means of computer programs. The model is a description of three-dimensional objects in a strictly defined language or data structure. It would contain geometry, viewpoint, texture, lighting, and shading information. The image is a digital image or raster graphics image. The term may be by analogy with an "artist's rendering" of a scene. 'Rendering' is also used to describe the process of calculating effects in a video editing file to produce final video output. Important rendering techniques are:
Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set. A typical 3D data set is a group of 2D slice images acquired by a CT or MRI scanner. Usually these are acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel represented by a single value that is obtained by sampling the immediate area surrounding the voxel.
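A minimal sketch of this idea follows, under the simplifying assumption that a maximum-intensity projection (MIP) along one axis is enough to convey it; production volume renderers instead composite samples along each viewing ray with a transfer function. The synthetic volume below stands in for scanner data:

```python
# Collapse a regularly sampled 3D voxel grid onto a 2D image (MIP).
import numpy as np
import matplotlib.pyplot as plt

n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n]
r = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2)
volume = np.exp(-(r / 12.0) ** 2)     # one scalar "density" value per voxel

mip = volume.max(axis=0)              # maximum value along each ray through the volume

plt.imshow(mip, cmap="gray", origin="lower")
plt.title("Maximum-intensity projection of a 64^3 volume")
plt.colorbar(label="max voxel value along ray")
plt.show()
```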
According to Rosenblum (1994), "volume visualization examines a set of techniques that allows viewing an object without mathematically representing the other surface. Initially used in medical imaging, volume visualization has become an essential technique for many sciences, portraying phenomena such as clouds, water flows, and molecular and biological structure. Many volume visualization algorithms are computationally expensive and demand large data storage. Advances in hardware and software are generalizing volume visualization as well as real time performances".
Developments in web-based technologies and in-browser rendering have allowed simple volumetric presentation of a cuboid with a changing frame of reference to show volume, mass and density data - for example, the HowMuch tool produced by the This Equals company.
This section will give a series of examples of how scientific visualization can be applied today.
Star formation: The featured plot is a Volume plot of the logarithm of gas/dust density in an Enzo star and galaxy simulation. Regions of high density are white while less dense regions are more blue and also more transparent.
Gravitational waves: Researchers used the Globus Toolkit to harness the power of multiple supercomputers to simulate the gravitational effects of black-hole collisions.
Massive star supernova explosions: The image shows three-dimensional radiation hydrodynamics calculations of massive star supernova explosions. The DJEHUTY stellar evolution code was used to calculate the explosion of an SN 1987A model in three dimensions.
Molecular rendering: VisIt's general plotting capabilities were used to create the molecular rendering shown in the featured visualization. The original data was taken from the Protein Data Bank and turned into a VTK file before rendering.
Terrain visualization: VisIt can read several file formats common in the field of Geographic Information Systems (GIS), allowing one to plot raster data such as terrain data in visualizations. The featured image shows a plot of a DEM dataset containing mountainous areas near Dunsmuir, CA. Elevation lines are added to the plot to help delineate changes in elevation.
Tornado Simulation: This image was created from data generated by a tornado simulation calculated on NCSA's IBM p690 computing cluster. High-definition television animations of the storm produced at NCSA were included in an episode of the PBS television series NOVA called "Hunt for the Supertwister." The tornado is shown by spheres that are colored according to pressure; orange and blue tubes represent the rising and falling airflow around the tornado.
Climate visualization: This visualization depicts the carbon dioxide from various sources that are advected individually as tracers in the atmosphere model. Carbon dioxide from the ocean is shown as plumes during February 1900.
Atmospheric anomaly in Times Square: In the image, the results from the SAMRAI simulation framework of an atmospheric anomaly in and around Times Square are visualized.
Scientific visualization of mathematical structures has been undertaken for purposes of building intuition and for aiding the forming of mental models.
Higher-dimensional objects can be visualized in form of projections (views) in lower dimensions. In particular, 4-dimensional objects are visualized by means of projection in three dimensions. The lower-dimensional projections of higher-dimensional objects can be used for purposes of virtual object manipulation, allowing 3D objects to be manipulated by operations performed in 2D, and 4D objects by interactions performed in 3D.
Computer mapping of topographical surfaces: Through computer mapping of topographical surfaces, mathematicians can test theories of how materials will change when stressed. The imaging is part of the work on the NSF-funded Electronic Visualization Laboratory at the University of Illinois at Chicago.
Curve plots: VisIt can plot curves from data read from files and it can be used to extract and plot curve data from higher-dimensional datasets using lineout operators or queries. The curves in the featured image correspond to elevation data along lines drawn on DEM data and were created with the feature lineout capability. Lineout allows you to interactively draw a line, which specifies a path for data extraction. The resulting data was then plotted as curves.
Image annotations: The featured plot shows Leaf Area Index (LAI), a measure of global vegetative matter, from a NetCDF dataset. The primary plot is the large plot at the bottom, which shows the LAI for the whole world. The plots on top are actually annotations that contain images generated earlier. Image annotations can be used to include material that enhances a visualization such as auxiliary plots, images of experimental data, project logos, etc.
Scatter plot: VisIt's Scatter plot allows visualizing multivariate data of up to four dimensions. The Scatter plot takes multiple scalar variables and uses them for different axes in phase space. The different variables are combined to form coordinates in the phase space and they are displayed using glyphs and colored using another scalar variable.
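A rough stand-in for the same idea outside VisIt (synthetic data; Matplotlib used purely for illustration): three scalar variables supply the phase-space coordinates and a fourth drives the glyph color.

```python
# Four-variable scatter plot: three coordinates plus color.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
v1, v2, v3, v4 = rng.normal(size=(4, 500))   # four scalar variables per data point

ax = plt.figure().add_subplot(projection="3d")
sc = ax.scatter(v1, v2, v3, c=v4, cmap="plasma", s=10)
plt.colorbar(sc, label="fourth variable (color)")
plt.show()
```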
Porsche 911 model (NASTRAN model): The featured plot contains a Mesh plot of a Porsche 911 model imported from a NASTRAN bulk data file. VisIt can read a limited subset of NASTRAN bulk data files, in general enough to import model geometry for visualization.
YF-17 aircraft Plot: The featured image displays plots of a CGNS dataset representing a YF-17 jet aircraft. The dataset consists of an unstructured grid with solution. The image was created by using a pseudocolor plot of the dataset's Mach variable, a Mesh plot of the grid, and Vector plot of a slice through the Velocity field.
City rendering: An ESRI shapefile containing a polygonal description of the building footprints was read in and then the polygons were resampled onto a rectilinear grid, which was extruded into the featured cityscape.
Inbound traffic measured: This image is a visualization study of inbound traffic measured in billions of bytes on the NSFNET T1 backbone for the month of September 1991. The traffic volume range is depicted from purple (zero bytes) to white (100 billion bytes). It represents data collected by Merit Network, Inc.
Important laboratories in the field are:
Conferences in this field, ranked by significance in scientific visualization research, are: | <urn:uuid:0d333c9f-d608-45be-8fc2-0dc83cfeaef8> | 3.15625 | 2,450 | Knowledge Article | Science & Tech. | 18.304486 | 95,627,328 |
Remote acquisition of information about phenomena and objects from imagery is the main objective of remote sensing. The ability to realize the aims of image intelligence depends on the quality of the acquired remote sensing data. Imagery intelligence can be carried out from different altitudes, from satellite level down to terrestrial platforms. In this article, the authors focus on chosen aspects of imagery intelligence from low altitudes. Unfortunately, the term "low altitudes" is not precisely defined; therefore, for the purpose of this article, low altitudes are assumed to be the altitudes at which mini unmanned aerial vehicles (mini UAVs) operate. The quality of the imagery acquired determines the level of analysis that can be performed. The imagery quality depends on many factors, such as the platform on which the sensor is mounted, the imaging sensor, the height from which the data are acquired, and the object that is investigated. The article also presents methods for assessing the quality of imagery in terms of detection, identification, description and technical analysis of investigated objects, as well as in terms of the accuracy of their location in the images (targeting). | <urn:uuid:e888516d-de03-44d2-9cc7-bc3f0dc67e37> | 3.015625 | 212 | Truncated | Science & Tech. | 7.944744 | 95,627,339
Sputnik 1 ("Satellite-1", or "PS-1", Простейший Спутник-1 or Prosteyshiy Sputnik-1, "Elementary Satellite 1") was the first artificial Earth satellite. The Soviet Union launched it into an elliptical low Earth orbit on 4 October 1957, orbiting for three weeks before its batteries died, then silently for two more months before falling back into the atmosphere. It was a 58 cm (23 in) diameter polished metal sphere, with four external radio antennas to broadcast radio pulses. Its radio signal was easily detectable even by radio amateurs, and the 65° inclination and duration of its orbit made its flight path cover virtually the entire inhabited Earth. This surprise success precipitated the American Sputnik crisis and triggered the Space Race, a part of the Cold War. The launch ushered in new political, military, technological, and scientific developments.
Tracking and studying Sputnik 1 from Earth provided scientists with valuable information. The density of the upper atmosphere could be deduced from its drag on the orbit, and the propagation of its radio signals gave data about the ionosphere.
Sputnik 1 was launched during the International Geophysical Year from Site No.1/5, at the 5th Tyuratam range, in Kazakh SSR (now known as the Baikonur Cosmodrome). The satellite travelled at about 29,000 kilometres per hour (18,000 mph; 8,100 m/s), taking 96.2 minutes to complete each orbit. It transmitted on 20.005 and 40.002 MHz, which were monitored by radio operators throughout the world. The signals continued for 21 days until the transmitter batteries ran out on 26 October 1957. Sputnik burned up on 4 January 1958 while reentering Earth's atmosphere, after three months, 1440 completed orbits of the Earth, and a distance travelled of about 70 million km (43 million mi).
On 17 December 1954, chief Soviet rocket scientist Sergei Korolev proposed a developmental plan for an artificial satellite to Minister of Defence Industry Dimitri Ustinov. Korolev forwarded a report by Mikhail Tikhonravov with an overview of similar projects abroad. Tikhonravov had emphasized that the launch of an orbital satellite was an inevitable stage in the development of rocket technology.
On 29 July 1955, U.S. President Dwight D. Eisenhower announced through his press secretary that the United States would launch an artificial satellite during the International Geophysical Year (IGY). A week later, on 8 August, the Politburo of the Communist Party of the Soviet Union approved the proposal to create an artificial satellite. On 30 August Vasily Ryabikov – the head of the State Commission on R-7 rocket test launches – held a meeting where Korolev presented calculation data for a spaceflight trajectory to the Moon. They decided to develop a three-stage version of the R-7 rocket for satellite launches.
On 30 January 1956 the Council of Ministers approved practical work on an artificial Earth-orbiting satellite. This satellite, named Object D, was planned to be completed in 1957–58; it would have a mass of 1,000 to 1,400 kg (2,200 to 3,100 lb) and would carry 200 to 300 kg (440 to 660 lb) of scientific instruments. The first test launch of "Object D" was scheduled for 1957. Work on the satellite was to be divided among institutions as follows:
Preliminary design work was completed by July 1956 and the scientific tasks to be carried out by the satellite were defined. These included measuring the density of the atmosphere and its ion composition, the solar wind, magnetic fields, and cosmic rays. These data would be valuable in the creation of future artificial satellites. A system of ground stations was to be developed to collect data transmitted by the satellite, observe the satellite's orbit, and transmit commands to the satellite. Because of the limited time frame, observations were planned for only 7 to 10 days and orbit calculations were not expected to be extremely accurate. | <urn:uuid:100e4ce3-3fbe-4e29-ab6c-14744037fcc8> | 3.5 | 851 | Knowledge Article | Science & Tech. | 46.781889 | 95,627,342 |
Object-Oriented Analysis and Design (Part 2)
In part two of a four part series on Object-Oriented Design, we go over some of the core definitions behind the practice.
Welcome to part two of a four-part series. In the previous article, you learned about the basics of object-oriented design. I also discussed what you will learn and what you will not.
In this article, you will not be overwhelmed with every definition pertaining to object-oriented design. You will learn only the important definitions, the ones necessary to initiate a design process. Learn them and start applying them to your next project.
Object-Oriented Analysis and Design - Most Needed Definitions
When I developed my first project, using VB 6.0, I was disappointed in myself. This is because a single change in a small portion of the code propagated to all other parts of the software.
The reason was that I didn't know how to write modular code. Although it is possible to write modular code in a procedural language (VB 6.0 was procedural), it was difficult and not supported inherently in VB 6.0.
It was a nightmare to develop simple software with only 4 features. It was terrifying to make any changes in the code because I didn't know about object-oriented programming.
The solution to this problem is object-oriented programming, which enables me to write modular programs. Before explaining how OOP and OOAD helped me out, let's first discuss the difference between a development process and a development methodology.
Difference Between Development Process and Development Methodology
Development methodology is something within the process. Examples of development methodologies are structured programming, object-oriented and service-oriented programming.
The development process defines a set of steps to carry out software development activities. Examples of software development processes are the Waterfall process, rational unified process, Scrum, and extreme programming. Generally, you can pick any process methodology and then adopt any development methodology within that process.
One important benefit of object-oriented methodology which no one tells you is that you have the ability to design using real-world terms or domain specific terms. For example, if you are working on a piece of software related to banking services, then you can use terms like Account, Ledger, and Balance Sheet within the software code (as Names or attributes of classes).
How is this beneficial? You can design the software the way real systems work in the real world. This makes it easy for you to update, modify, and communicate with the customers. Hence object-oriented analysis is about identifying opportunities where you can represent real-world objects in the software world.
The first step in object-oriented analysis is listening to a customer story and writing it down. A story is a description of customer pains and gains in his own words. Your job is to solve these customer pains and/or help the customer to achieve positive gains.
There can be more than one customer story. You can tackle one or two user stories in a single iteration. But do not tackle more than 10 percent of all the stories in a single iteration. The next step is to design the domain model from user stories.
To design the domain model, one simple technique that I have used over time is reading the user story and underlining the nouns. These nouns are potential candidates for classes in your domain model. The domain model simply describes the names of the classes and/or attributes of the classes. A domain model does not describe any implementation detail, such as functions within a class.
In summary, there are two steps:
- Writing down user stories.
- Designing the domain model.
A demonstration of these ideas will be given in the upcoming example.
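As a rough preview of that technique (this is not the author's example, which follows in part 3; the story and class names below are hypothetical), take the user story "A customer opens an account at the bank and deposits money; each deposit is recorded as a transaction." Underlining the nouns suggests the candidate domain classes customer, account, bank, and transaction. The domain model only names classes and attributes; no behavior is implemented yet:

```python
# Candidate domain classes extracted from the nouns of a hypothetical user story.
class Customer:
    def __init__(self, name):
        self.name = name

class Account:
    def __init__(self, owner):
        self.owner = owner       # a Customer
        self.balance = 0.0

class Transaction:
    def __init__(self, account, amount):
        self.account = account
        self.amount = amount

class Bank:
    def __init__(self):
        self.accounts = []
        self.transactions = []
```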
It’s quite easier to look at an object and say, “yeah, it is a collection of variables.” Let me share a personal story: once I refactored a piece of code and designed a separate class which consisted of two primitive types. The funny thing was that another developer cheered me for developing a new variable.
There are many developers around who think like that. That is, creating a class means creating a new variable. This is not the reason people use the object-oriented methodology. If you just want a collection of variable names, then you can use a struct in C (which is not an object-oriented programming language).
Object-oriented design is about how your objects collaborate with each other. Here, you will decide who will create which objects and how they will interact to fulfill the needs of a user story.
The most important thing about analysis is identifying the domain classes from the user story, and the most important thing in object-oriented design is identifying the collaboration between the objects to satisfy the user story.
In addition to designing the collaboration, there are principles and patterns which are followed by the software community. A group of four patterns is extensively used in software design, but these patterns deserve another post (I will soon share with you my personal story of my journey to design patterns).
If you are new to design, then don't get overwhelmed by all these pattern things. Just focus on one or two principles to start with. Then extend your learning towards more principles and design patterns.
What Are the Most Important Principles You Should Start With?
Don’t get overwhelmed with principles and design patterns. My suggestion is to focus on a few principles and design patterns you are comfortable with at the start and then develop from there.
An important principle that I found very useful and I suggest that you should use is:
Prefer composition over inheritance.
That is, avoid using inheritance whenever possible. Use composition in any situation wherever you think inheritance is the answer.
Also, if you have never used interfaces in your code, then start using them today. This will enable you to write reusable and modular software.
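Here is a minimal sketch of both principles with hypothetical names (Python's abc module stands in for an interface; languages such as Java or C# would use an explicit interface construct). The Order has a PaymentMethod instead of inheriting from one, so the payment behavior can be swapped without touching Order:

```python
from abc import ABC, abstractmethod

class PaymentMethod(ABC):            # plays the role of an interface
    @abstractmethod
    def pay(self, amount):
        ...

class CardPayment(PaymentMethod):
    def pay(self, amount):
        return f"charged {amount} to card"

class CashPayment(PaymentMethod):
    def pay(self, amount):
        return f"received {amount} in cash"

class Order:
    # Composition: an Order *has a* PaymentMethod rather than *being* one.
    def __init__(self, total, payment_method: PaymentMethod):
        self.total = total
        self.payment_method = payment_method

    def checkout(self):
        return self.payment_method.pay(self.total)

print(Order(9.99, CardPayment()).checkout())   # charged 9.99 to card
print(Order(9.99, CashPayment()).checkout())   # received 9.99 in cash
```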
Apply these two principles in the upcoming weeks and you will realize the power of object-oriented analysis and design.
Advantages of Object-Oriented Analysis and Design
With these principles and knowledge of object-oriented programming, I was able to write software which was modular and easy to read. Also, if I open code today that I wrote 10 years ago, I can easily understand and update it.
This ends article two of my four-part article series. In this article, I discussed the difference between process and methodology, as well as the principles that every beginner object-oriented programmer should learn.
In part 3, I will share an example and I will discuss the following:
- Outline all the steps needed for object-oriented analysis and design.
- Apply the design steps to a simple example.
- Show why you don't need a fancy UML software package for object-oriented design.
To learn more about object-oriented design, visit here.
Published at DZone with permission of Muhammad Umair . See the original article here.
Opinions expressed by DZone contributors are their own. | <urn:uuid:a44072bd-cd40-4aaf-8c4c-7c03a10e3ef8> | 2.796875 | 1,535 | Truncated | Software Dev. | 42.055173 | 95,627,345 |
doi:10.1038/nindia.2018.74 Published online 17 June 2018
Chemists have developed an eco-friendly protocol for biosynthesis of silver nanoparticles – in the size range 100-300 nanometres – using the root extract of the plant Curculigo orchioides, commonly known as kali musli across India [1].
The researchers report that in an aqueous solution of the root extract of C. orchioides, silver ions are reduced to zero-valent silver and eventually stabilize as silver nanoparticles.
Due to its unique electronic, optical, chemical and biochemical properties, nano-silver is variously used in spectrally-selective coatings for solar energy absorption, electrical batteries, antimicrobial agents for safe water, food packaging and wound dressing. Scientists have long been exploring eco-friendly ways to synthesize them.
Sushma Dave of Jodhpur Institute of Engineering & Technology and Sunita Kumbhat of Jai Narain Vyas University, Jodhpur – both in Rajasthan – report that this extracellular bio-nanosynthesis route using renewable plant resources "represents an environmentally benign, clean, green and rapid approach to nano particle synthesis”. X-ray diffraction analyses and atomic force microscopy confirmed that the particles thus prepared displayed all characteristic features of nano size silver.
At room temperature the reaction time for synthesis of these nanoparticles was 40 minutes, much faster than the microbe-mediated synthesis yielding spherical silver nanoparticles, the report says. The researchers say the biogenic nanoparticles can be used for various biomedical, pharmaceutical and biotechnological commercial applications.
1. Dave, S. & Kumbhat, S. Electrochemical and spectral characterization of silver nanoparticles synthesized employing root extract of Curculigo orchioides. Ind. J. Chem. Technol. 25, 201-207 (2017) | <urn:uuid:1e7771f1-cd19-4bfa-8209-1203d6e13834> | 3.15625 | 394 | News Article | Science & Tech. | 20.571059 | 95,627,354 |
Group: Group V ((−)ssRNA)
Species: Lettuce big-vein associated varicosavirus
The genus Varicosavirus is a group of related plant viruses associated with swelling in plant vein tissues. They are negative-sense, single-stranded RNA viruses. Infection occurs through the soil via the spores of the fungus Olpidium brassicae.
The genome of the only member of the only species of this genus (lettuce big-vein associated virus (LBVaV)) consists of a bi-segmented linear, single-stranded negative sense RNA. The first segment is about 6350–7000 nucleotides in length; the second, about 5630–6500 nucleotides in length.
Virions consist of a non-enveloped, rod-shaped capsid with helical symmetry, 120–360 nm in length and 18–30 nm in width.
- Kormelink R, Garcia ML, Goodin M, Sasaya T, Haenni AL (2011). "Negative-strand RNA viruses: the plant-infecting counterparts". Virus Res. 162 (1-2): 184–202. doi:10.1016/j.virusres.2011.09.028. PMID 21963660.
- Sasaya T, Ishikawa K, Koganezawa H (2002). "The nucleotide sequence of RNA1 of Lettuce big-vein virus, genus Varicosavirus, reveals its relation to nonsegmented negative-strand RNA viruses". Virology. 297 (2): 289–97. doi:10.1006/viro.2002.1420. PMID 12083827.
| <urn:uuid:a344216d-d9e1-4e9d-acb6-c78394f14370> | 3.296875 | 391 | Knowledge Article | Science & Tech. | 69.85566 | 95,627,357
If we mate two individuals that are heterozygous (e.g., Bb) for a trait, we find that
- 25% of their offspring are homozygous for the dominant allele (BB)
- 50% are heterozygous like their parents (Bb)
- 25% are homozygous for the recessive allele (bb) and thus, unlike their parents, express the recessive phenotype.
This is what Mendel found when he crossed monohybrids. It occurs because meiosis separates the two alleles of each heterozygous parent so that 50% of the gametes will carry one allele and 50% the other and when the gametes are brought together at random, each B (or b)-carrying egg will have a 1 in 2 probability of being fertilized by a sperm carrying B (or b). (Left table)
Left table: Results of random union of the two gametes produced by two individuals, each heterozygous for a given trait. As a result of meiosis, half the gametes produced by each parent will carry allele B; the other half allele b.

| | 0.5 B | 0.5 b |
|---|---|---|
| 0.5 B | 0.25 BB | 0.25 Bb |
| 0.5 b | 0.25 Bb | 0.25 bb |

Right table: Results of random union of the gametes produced by an entire population with a gene pool containing 80% B and 20% b.

| | 0.8 B | 0.2 b |
|---|---|---|
| 0.8 B | 0.64 BB | 0.16 Bb |
| 0.2 b | 0.16 Bb | 0.04 bb |
However, the frequency of two alleles in an entire population of organisms is unlikely to be exactly the same. Let us take as a hypothetical case, a population of hamsters in which 80% of all the gametes in the population carry a dominant allele for black coat (B) and 20% carry the recessive allele for gray coat (b).
Random union of these gametes (right table) will produce a generation:
- 64% homozygous for BB (0.8 x 0.8 = 0.64)
- 32% Bb heterozygotes (0.8 x 0.2 x 2 = 0.32)
- 4% homozygous (bb) for gray coat (0.2 x 0.2 = 0.04)
So 96% of this generation will have black coats; only 4% gray coats.
Will gray coated hamsters eventually disappear? No. Let's see why not.
- All the gametes formed by BB hamsters will contain allele B as will one-half the gametes formed by heterozygous (Bb) hamsters.
- So, 80% (0.64 + .5*0.32) of the pool of gametes formed by this generation with contain B.
- All the gametes of the gray (bb) hamsters (4%) will contain b but one-half of the gametes of the heterozygous hamsters will as well.
- So 20% (0.04 + .5*0.32) of the gametes will contain b.
So we have duplicated the initial situation exactly. The proportion of allele b in the population has remained the same. The heterozygous hamsters ensure that each generation will contain 4% gray hamsters. Now let us look at an algebraic analysis of the same problem using the expansion of the binomial \((p+q)^2\).
\[(p+q)^2 = p^2 + 2pq + q^2\]
The total number of genes in a population is its gene pool.
- Let \(p\) represent the frequency of one gene in the pool and \(q\) the frequency of its single allele.
- So, \(p + q = 1\)
- \(p^2\) = the fraction of the population homozygous for \(p\)
- \(q^2\) = the fraction homozygous for \(q\)
- \(2pq\) = the fraction of heterozygotes
- In our example, p = 0.8, q = 0.2, and thus
\[(0.8 + 0.2)^2 = (0.8)^2 + 2(0.8)(0.2) + (0.2)^2 = 0.64 + 0.32 + 0.04\]
The algebraic method enables us to work backward as well as forward. In fact, because we chose to make B fully dominant, the only way that the frequency of B and b in the gene pool could be known is by determining the frequency of the recessive phenotype (gray) and computing from it the value of q.
\(q^2 = 0.04\), so \(q = 0.2\), the frequency of the b allele in the gene pool. Since \(p + q = 1\), \(p = 0.8\) and allele B makes up 80% of the gene pool. Because B is completely dominant over b, we cannot distinguish the Bb hamsters from the BB ones by their phenotype. But substituting in the middle term (\(2pq\)) of the expansion gives the percentage of heterozygous hamsters: \(2pq = (2)(0.8)(0.2) = 0.32\).
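A small script (not part of the original text, added only to check the arithmetic) recovers p, q and the heterozygote fraction from the observed 4% gray phenotype and then confirms that the next generation's gamete pool returns the same allele frequencies:

```python
# Hardy-Weinberg bookkeeping for the hamster example.
gray_fraction = 0.04                        # observed homozygous recessives (q^2)

q = gray_fraction ** 0.5                    # 0.2  -> frequency of allele b
p = 1 - q                                   # 0.8  -> frequency of allele B

BB = p * p                                  # 0.64
Bb = 2 * p * q                              # 0.32 (the hidden heterozygotes)
bb = q * q                                  # 0.04

# Alleles contributed to the next generation's gamete pool:
next_p = BB + 0.5 * Bb                      # 0.80 again
next_q = bb + 0.5 * Bb                      # 0.20 again
print(p, q, Bb, next_p, next_q)
```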
So, recessive genes do not tend to be lost from a population no matter how small their representation. So long as certain conditions are met (to be discussed next),
gene frequencies and genotype ratios in a randomly-breeding population remain constant from generation to generation.
This is known as the Hardy-Weinberg law in honor of the two men who first realized the significance of the binomial expansion to population genetics and hence to evolution.
Evolution involves changes in the gene pool. A population in Hardy-Weinberg equilibrium shows no change. What the law tells us is that populations are able to maintain a reservoir of variability so that if future conditions require it, the gene pool can change. If recessive alleles were continually tending to disappear, the population would soon become homozygous. Under Hardy-Weinberg conditions, genes that have no present selective value will nonetheless be retained.
When the Hardy-Weinberg Law Fails to Apply
To see what forces lead to evolutionary change, we must examine the circumstances in which the Hardy-Weinberg law may fail to apply. There are five:
- mutation
- gene flow
- genetic drift
- nonrandom mating
- natural selection
The frequency of gene B and its allele b will not remain in Hardy-Weinberg equilibrium if the rate of mutation of B -> b (or vice versa) changes. By itself, this type of mutation probably plays only a minor role in evolution; the rates are simply too low. However, gene (and whole genome) duplication - a form of mutation - probably has played a major role in evolution. In any case, evolution absolutely depends on mutations because this is the only way that new alleles are created. After being shuffled in various combinations with the rest of the gene pool, these provide the raw material on which natural selection can act.
Many species are made up of local populations whose members tend to breed within the group. Each local population can develop a gene pool distinct from that of other local populations. However, members of one population may breed with occasional immigrants from an adjacent population of the same species. This can introduce new genes or alter existing gene frequencies in the residents.
In many plants and some animals, gene flow can occur not only between subpopulations of the same species but also between different (but still related) species. This is called hybridization. If the hybrids later breed with one of the parental types, new genes are passed into the gene pool of that parent population. This process is called introgression. It is simply gene flow between species rather than within them.
Comparison of the genomes of contemporary humans with the genome recovered from Neanderthal remains shows that from 1–3% of our genes were acquired by introgression following mating between members of the two populations tens of thousands of years ago.
Whether within a species or between species, gene flow increases the variability of the gene pool.
As we have seen, interbreeding often is limited to the members of local populations. If the population is small, Hardy-Weinberg may be violated. Chance alone may eliminate certain members out of proportion to their numbers in the population. In such cases, the frequency of an allele may begin to drift toward higher or lower values. Ultimately, the allele may represent 100% of the gene pool or, just as likely, disappear from it.
Drift produces evolutionary change, but there is no guarantee that the new population will be more fit than the original one. Evolution by drift is aimless, not adaptive.
One of the cornerstones of the Hardy-Weinberg equilibrium is that mating in the population must be random. If individuals (usually females) are choosy in their selection of mates, the gene frequencies may become altered. Darwin called this sexual selection.
Nonrandom mating seems to be quite common. Breeding territories, courtship displays, "pecking orders" can all lead to it. In each case certain individuals do not get to make their proportionate contribution to the next generation.
Humans seldom mate at random preferring phenotypes like themselves (e.g., size, age, ethnicity). This is called assortative mating. Marriage between close relatives is a special case of assortative mating. The closer the kinship, the more alleles shared and the greater the degree of inbreeding. Inbreeding can alter the gene pool. This is because it predisposes to homozygosity. Potentially harmful recessive alleles — invisible in the parents — become exposed to the forces of natural selection in the children.
Fig. 18.6.1 Assortative mating. (Drawing by Koren © 1977 The New Yorker Magazine, Inc.)
It turns out that many species - plants as well as animals - have mechanisms be which they avoid inbreeding. Examples:
- Link to discussion of self-incompatibility in plants.
- Male mice use olfactory cues to discriminate against close relatives when selecting mates. The preference is learned in infancy - an example of imprinting. The distinguishing odors are
- controlled by the MHC alleles of the mice
- detected by the vomeronasal organ (VNO)
If individuals having certain genes are better able to produce mature offspring than those without them, the frequency of those genes will increase. This is simply expressing Darwin's natural selection in terms of alterations in the gene pool. (Darwin knew nothing of genes.) Natural selection results from
- differential mortality and/or
- differential fecundity.
Certain genotypes are less successful than others in surviving through to the end of their reproductive period.
The evolutionary impact of mortality selection can be felt anytime from the formation of a new zygote to the end (if there is one) of the organism's period of fertility. Mortality selection is simply another way of describing Darwin's criteria of fitness: survival.
Certain phenotypes (thus genotypes) may make a disproportionate contribution to the gene pool of the next generation by producing a disproportionate number of young. Such fecundity selection is another way of describing another criterion of fitness described by Darwin: family size.
In each of these examples of natural selection, certain phenotypes are better able than others to contribute their genes to the next generation. Thus, by Darwin's standards, they are more fit. The outcome is a gradual change in the gene frequencies in that population.
Calculating the Effect of Natural Selection on Gene Frequencies.
The effect of natural selection on gene frequencies can be quantified. Let us assume a population containing
- 36% homozygous dominants (AA)
- 48% heterozygotes (Aa) and
- 16% homozygous recessives (aa)
The gene frequencies in this population are
p = 0.6 and q = 0.4
The heterozygotes are just as successful at reproducing themselves as the homozygous dominants, but the homozygous recessives are only 80% as successful. That is, for every 100 AA (or Aa) individuals that reproduce successfully only 80 of the aa individuals succeed in doing so. The fitness (w) of the recessive phenotype is thus 80% or 0.8.
Their relative disadvantage can also be expressed as a selection coefficient, s, where
s = 1 − w
In this case, s = 1 − 0.8 = 0.2.
The change in frequency of the dominant allele (Δp) after one generation is expressed by the equation
Δp = s p0 q0² / (1 − s q0²)
where p0 and q0 are the initial frequencies of the dominant and recessive alleles respectively. Substituting, we get
Δp = (0.2)(0.6)(0.4)² / (1 − (0.2)(0.4)²) = 0.0192 / 0.968 ≈ 0.02
So, in one generation, the frequency of allele A rises from its initial value of 0.6 to 0.62 and that of allele a declines from 0.4 to 0.38 (q = 1 − p).
The new equilibrium produces a population of
- 38.4% homozygous dominants (an increase of 2.4%) (p² = 0.384)
- 47.1% heterozygotes (a decline of 0.9%) (2pq = 0.471) and
- 14.4% homozygous recessives (a decline of 1.6%) (q² = 0.144)
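The same one-generation update can be applied repeatedly to follow the allele frequencies over many generations. A minimal Python sketch of that iteration, using the fitness assumed in this example (w = 0.8 for aa, so s = 0.2); the function name is illustrative only:

```python
# Iterate the change in the dominant allele's frequency under selection against
# the homozygous recessive genotype: delta_p = s*p*q^2 / (1 - s*q^2).
def iterate_selection(p0=0.6, s=0.2, generations=10):
    p = p0
    history = [p]
    for _ in range(generations):
        q = 1 - p
        p += (s * p * q * q) / (1 - s * q * q)
        history.append(p)
    return history

for gen, p in enumerate(iterate_selection()):
    q = 1 - p
    print(f"generation {gen}: p = {p:.3f}, q = {q:.3f}, aa frequency = {q*q:.4f}")
```

The printout shows the effect described next: q keeps falling, but ever more slowly, because allele a "hides" from selection in heterozygotes.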
If the fitness of the homozygous recessives continues unchanged, the calculations can be reiterated for any number of generations. If you do so, you will find that although the frequency of the recessive genotype declines, the rate at which a is removed from the gene pool declines; that is, the process becomes less efficient at purging allele a. This is because when present in the heterozygote, a is protected from the effects of selection. | <urn:uuid:81e28025-b7bd-47d1-8f42-6cba206d668c> | 3.90625 | 2,968 | Knowledge Article | Science & Tech. | 58.7097 | 95,627,376 |
QUANTUM PARTICLES (CHAPTER-4)
In this chapter I will try my best to give an understanding of dimensions, whether they are smaller or higher, and to discuss dimensions briefly.
Universal Mathematics is responsible for the structure of space, as space itself is a rounded ball to which one can attach many smaller strings. Since we consider every string differently, let us look at why these strings exist and why there is any need for strings at all.
String theory talks about multiple dimensions, more than ten, yet we observe only three spatial dimensions plus time. So the question arises: why do so many dimensions exist? Every dimension exists simultaneously and appears at the same time, but we cannot observe them all because of biological limits and a hidden reality of nature, namely that our world is three-dimensional plus time. Why is it so difficult for conscious beings to observe these hidden dimensions? Since we are part of the universe and of space, we too are built from atoms and from even smaller particles, quarks; such particles readily create dimensions so as to generate new kinds of particles and transform them into smaller and higher dimensions.
Hence these dimensions are responsible for the structure of space and its geometry, but not for "Dark Space", which is itself a geometry. In the next chapter I will show how and why one cannot find these hidden dimensions, and why space needs dimensions. | <urn:uuid:e2f0010d-1428-445d-8679-cf6bf056bcb3> | 3.078125 | 288 | Truncated | Science & Tech. | 33.690678 | 95,627,387 |
Characteristics of crack growth in sintered materials
In sintered materials crack growth occurs as a rule through a mechanism of pore coalescence accompanying the rupture of interparticle (interpore) bridges, which in turn can rupture both through microvoid coalescence and trans- or intercrystallite cleavage. The rupture of a sintered material with a crack is linked not only with the attainment of peak stresses at the crack tip but also, and to an even greater extent, with the appearance of such stresses at the tips of other cracklike defects, i.e., pores. Unstable crack growth is due to the merging of microcracks formed at pores into a main crack.
Keywords: Peak Stress, Main Crack, Sintered Material, Unstable Crack, Unstable Crack Growth
| <urn:uuid:339ec6ea-b466-4eac-b87f-9867bc7d5422> | 2.640625 | 495 | Academic Writing | Science & Tech. | 66.698808 | 95,627,399 |
The study, which focuses on North American rattlesnakes, finds that the rate of future change in suitable habitat will be two to three orders of magnitude greater than the average change over the past 300 millennia, a time that included three major glacial cycles and significant variation in climate and temperature.
“We find that, over the next 90 years, at best these species’ ranges will change more than 100 times faster than they have during the past 320,000 years,” said Michelle Lawing, lead author of the paper and a doctoral candidate in geological sciences and biology at IU Bloomington. “This rate of change is unlike anything these species have experienced, probably since their formation.”
The study, “Pleistocene Climate, Phylogeny, and Climate Envelope Models: An Integrative Approach to Better Understand Species' Response to Climate Change,” was published by the online science journal PLoS One. Co-author is P. David Polly, associate professor in the Department of Geological Sciences in the IU Bloomington College of Arts and Sciences.
The researchers make use of the fact that species have been responding to climate change throughout their history and their past responses can inform what to expect in the future. They synthesize information from climate cycle models, indicators of climate from the geological record, evolution of rattlesnake species and other data to develop what they call “paleophylogeographic models” for rattlesnake ranges. This enables them to map the expansion and contraction at 4,000-year intervals of the ranges of 11 North American species of the rattlesnake genus Crotalus.
Projecting the models into the future, the researchers calculate the expected changes in range at the lower and upper extremes of warming predicted by the Intergovernmental Panel on Climate Change — between 1.1 degree and 6.4 degrees Celsius. They calculate that rattlesnake ranges have moved an average of only 2.3 meters a year over the past 320,000 years and that their tolerances to climate have evolved about 100 to 1,000 times slower, indicating that range shifts are the only way that rattlesnakes have coped with climate change in the recent past. With projected climate change in the next 90 years, the ranges would be displaced by a remarkable 430 meters to 2,400 meters a year.
Increasing temperature does not necessarily mean expanded suitable habitats for rattlesnakes. The timber rattlesnake, for example, is now found throughout the Eastern United States. The study finds that, with a temperature increase of 1.1 degree Celsius over the next 90 years, its range would expand slightly into New York, New England and Texas. But with an increase of 6.4 degrees, its range would shrink to a small area on the Tennessee-North Carolina border. The giant eastern diamondback rattlesnake would be displaced entirely from its current range in the Southeastern U.S. with a temperature increase of 6.4 degrees.
The findings suggest snakes wouldn’t be able to move fast enough to keep up with the change in suitable habitat. The authors suggest the creation of habitat corridors and managed relocation may be needed to preserve some species.
Rattlesnakes are good indicators of climate change because they are ectotherms, which depend on the environment to regulate their body temperatures. But Lawing and Polly note that many organisms will be affected by climate change, and their study provides a model for examining what may happen with other species. Their future research could address the past and future effects of climate change on other types of snakes and on the biological communities of snakes.
The article is available online at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0028554.
Steve Hinnefeld | Newswise Science News
| <urn:uuid:d65dcea9-38f2-4ac1-b553-566add788c5b> | 3.953125 | 1,397 | Content Listing | Science & Tech. | 39.693409 | 95,627,422 |
Any hope of recovering such critically endangered species depends on understanding what drives changes in population size following habitat contraction. In a new study published in PLoS Biology, Nicholas Gotelli and Aaron Ellison test the relative contributions of habitat contraction, keystone species effects, and food-web interactions on species abundance, and provide experimental evidence that trophic interactions exert a dominant effect. Until now, direct evidence that trophic interactions play such an important role has been lacking, in part because manipulating an intact food web has proven experimentally intractable, and in part because these different modeling frameworks have not been explicitly compared.
Gotelli and Ellison overcame such technical limitations by using the carnivorous pitcher plant (Sarracenia purpurea) and its associated food web as a model for studying what regulates abundance in shrinking habitats. Every year, the pitcher plant, found in bogs and swamps throughout southern Canada and the eastern United States, grows six to 12 tubular leaves that collect enough water to support an entire aquatic food web. The pitcher plant food web starts with ants, flies, and other arthropods unlucky enough to fall into its trap. Midges and sarcophagid fly larvae “shred” and chew on the hapless insect. This shredded detritus is further broken down by bacteria, which in turn are consumed by protozoa, rotifers, and mites. Pitcher plant mosquito larvae feed on bacteria, protozoa, and rotifers. Older, larger sarcophagid fly larvae also feed on rotifers as well as on younger, smaller mosquito larvae.
Working with 50 pitcher plants in a bog in Vermont, Gotelli and Ellison subjected the plants to one of five experimental treatments, in which they manipulated habitat size (by changing the volume of water in the leaves), simplified the trophic structure (by removing the top trophic level—larvae of the dipterans fly, midge, and mosquito), did some combination of the two, or none of the above (the control condition). Dipteran larvae and water were measured as each treatment was maintained; both were replaced in the control condition and more water was added in the habitat expansion treatment. These treatments mimic the kinds of changes that occur in nature as habitat area shrinks and top predators disappear from communities.
The best predictors of abundance were models that incorporated trophic structure—including the “mosquito keystone model.” This model accurately reflected the pitcher plant food web, with mosquito larvae preying on rotifers, and sarcophagid flies preying on mosquito larvae. “Bottom-up” food-web models (in which links flow from prey to predator) predicted that changes in bacteria population size influence protozoa abundances, which in turn affect mosquito numbers, and that changes in bacteria abundance also affect mite numbers, which impact rotifer abundance. This scenario lends support to the model of a Sarracenia food web in which each link in the chain performs a specialized service in breaking down the arthropod prey that is used by the next species in the processing chain.
With over 200 million acres of the world’s forestlands destroyed in the 1990s alone, and an estimated 40% increase in the human population by 2050, a growing number of species will be forced to cope with shrinking habitat. Instead of trying to determine how individual species might respond to habitat loss, Gotelli and Ellison argue that incorporating trophic structure into ecological models may yield more-accurate predictions of species abundance—a critical component of species restoration strategies.
Citation: Gotelli NJ, Ellison AM (2006) Food-web models predict species abundances in response to habitat change. PLoS Biol 4(10): e324. DOI: 10.1371/journal.pbio.0040324.
| <urn:uuid:1276dfca-882c-43ba-9675-72db944e09ae> | 4.125 | 1,367 | Content Listing | Science & Tech. | 33.519432 | 95,627,423 |
DNA Data Storage Is Robust, Scalable
For the first time, scientists believe that they have developed a robust system for encoding information into DNA, and then reading it back with 100% accuracy.
Digital production, transmission and storage have revolutionized how we access and use information but have also made archiving an increasingly complex task that requires active, continuing maintenance of digital media. This challenge has focused some interest on DNA as an attractive target for information storage [1] because of its capacity for high-density information encoding, longevity under easily achieved conditions and proven track record as an information bearer.
Previous DNA-based information storage approaches have encoded only trivial amounts of information or were not amenable to scaling up, used no robust error correction, and lacked any examination of their cost-efficiency for large-scale information archival. Here we describe a scalable method that can reliably store more information than has been handled before. We encoded computer files totalling 739 kilobytes of hard-disk storage and with an estimated Shannon information of 5.2 × 10⁶ bits into a DNA code, synthesized this DNA, sequenced it and reconstructed the original files with 100% accuracy.
Theoretical analysis indicates that our DNA-based storage scheme could be scaled far beyond current global information volumes and offers a realistic technology for large-scale, long-term and infrequently accessed digital archiving. In fact, current trends in technological advances are reducing DNA synthesis costs at a pace that should make our scheme cost-effective for sub-50-year archiving within a decade.
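As a rough illustration of the basic idea only (the scheme in the paper is more elaborate, avoiding homopolymer runs and adding redundancy), bytes can be mapped to DNA bases two bits at a time. Everything below, including the mapping, is an assumption chosen for simplicity:

```python
# Toy illustration: pack bytes into DNA bases, 2 bits per base, and back.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {b: v for v, b in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):               # high bits first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

message = b"DNA"
encoded = bytes_to_dna(message)
print(encoded)                                   # 'CACACATGCAAC'
assert dna_to_bytes(encoded) == message
```

A real scheme also has to cope with synthesis limits on fragment length and with sequencing errors, which is where the error correction mentioned above comes in.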
The first time I read about this idea was in an excellent series of fantasy novels by Barbara Hambly. In her 1982 Darwath trilogy, she writes about how wizards of several thousand years ago succeeded in tying information to the DNA of selected individuals.
In the story, several people from 1980's California find themselves transported across the Void to another planet and the Realm of Darwath. They face a deadly species of queerly magical beings - the Dark - who destroyed civilization thousands of years ago. Everything that was made of paper (like books and records) were burned to stave off attacks by the Dark.
Tying memories to a few suitable bloodlines was the only way to preserve a record of that period that would endure.
Update 15-Apr-2017: See the Heritable Memories Bloodline from The Time of the Dark (1982) by Barbara Hambly. End update.
From Towards practical, high-capacity, low-maintenance information storage in synthesized DNA (Nature) via Discover.
| <urn:uuid:5c156e07-c12e-41f4-8915-b4c3c416df0a> | 3.125 | 1,245 | Content Listing | Science & Tech. | 47.141152 | 95,627,429 |
To simplify the electron emission mechanism involved in microwave electron guns, a team of researchers has created and demonstrated a field-emission plug-and-play solution based on ultrananocrystalline diamond.
Testing for ovarian cancer or the presence of a particular chemical could be almost as simple as distinguishing an F sharp from a B flat, thanks to a new microscopic acoustic device that has been dramatically improved.
No comprehensive study has yet been carried out to characterize the photoexcited lattice dynamics of an opaque thin film on a semi-infinite transparent substrate. As a result, ultrafast X-ray diffraction data for such samples can be challenging to interpret. Now a new study builds a model to help interpret such data.
MIT chemists have developed new nanoparticles that can simultaneously perform magnetic resonance imaging (MRI) and fluorescent imaging in living animals. Such particles could help scientists to track specific molecules produced in the body, monitor a tumor's environment, or determine whether drugs have successfully reached their targets.
Imagine building a chemical reactor small enough to study nanoparticles a billionth of a meter across. A billion times smaller than a raindrop is the volume of an E. coli cell. And another million times smaller would be a reactor small enough to study isolated nanoparticles. Add to that the challenge of making not just one of these tiny reactors, but billions of them, all identical in size and shape. Researchers at Cornell have done just that.
Silicon is the second most-abundant element in the earth's crust. When purified, it takes on a diamond structure, which is essential to modern electronic devices - carbon is to biology as silicon is to technology. A team of scientists has synthesized an entirely new form of silicon, one that promises even greater future applications.
A team of physicists has developed a method to control the movements occurring within magnetic materials, which are used to store and carry information. The breakthrough could simultaneously bolster information processing while reducing the energy necessary to do so.
For the first time, scientists have vividly mapped the shapes and textures of high-order modes of Brownian motions - in this case, the collective macroscopic movement of molecules in microdisk resonators.
Topological insulators are promising to develop into a material for lossless electricity and information transport. Now, researchers investigated for the first time whether the direction of motion of electrons in topological insulators affects their behavior. | <urn:uuid:6f7d5a3b-714c-47d2-9240-b551c2b1ec37> | 3.0625 | 489 | Content Listing | Science & Tech. | 23.343482 | 95,627,437 |
As shown on the previous page, we now have an empty table named person. What can we do with such a table? Just use it like a bag! Store things in it, look into it to check the existence of things, modify things in it or throw things out of it. These are the four natural operations that concern data in tables:
Each of these four operations is expressed by its own SQL command. They start with a keyword and run up to a semicolon at the end. This rule applies to all SQL commands: they are introduced by a keyword and terminated by a semicolon. In the middle there may be more keywords as well as object names and values.
When storing new data in rows of a table we must name all affected objects and values: the table name (there may be a lot of tables within the database), the columnnames and the values. All this is embedded within some keywords so that the SQL compiler can recognise the tokens and their meaning. In general the syntax for a simple INSERT is
INSERT INTO <tablename> (<list_of_columnnames>) VALUES (<list_of_values>);
Here is an example
-- put one row INSERT INTO person (id, firstname, lastname, date_of_birth, place_of_birth, ssn, weight) VALUES (1, 'Larry', 'Goldstein', date'1970-11-20', 'Dallas', '078-05-1120', 95); -- confirm the INSERT command COMMIT;
When the DBMS recognises the keywords INSERT INTO and VALUES it knows what to do: it creates a new row in the table and puts the given values into the named columns. In the above example the command is followed by a second one: COMMIT confirms the INSERT operation as well as the other writing operations UPDATE and DELETE. (We will learn much more about COMMIT and its counterpart ROLLBACK in a later chapter.)
Now we will put some more rows into our table. To do so we use a variation of the above syntax. It is possible to omit the list of columnnames if the list of values correlates exactly with the number, order and data type of the columns used in the original CREATE TABLE statement.
-- put four rows INSERT INTO person VALUES (2, 'Tom', 'Burton', date'1980-01-22', 'Birmingham', '078-05-1121', 75); INSERT INTO person VALUES (3, 'Lisa', 'Hamilton', date'1975-12-30', 'Mumbai', '078-05-1122', 56); INSERT INTO person VALUES (4, 'Debora', 'Patterson', date'2011-06-01', 'Shanghai', '078-05-1123', 11); INSERT INTO person VALUES (5, 'James', 'de Winter', date'1975-12-23', 'San Francisco', '078-05-1124', 75); COMMIT;
Now our table should contain five rows. Can we be sure about that? How can we check whether everything worked well and the rows and values really exist? To do so, we need a command which shows us the actual content of the table. It is the SELECT command with the following general syntax
SELECT <list_of_columnnames> FROM <tablename> WHERE <search_condition> ORDER BY <order_by_clause>;
As with the INSERT command you may omit some parts. The simplest example is
SELECT * FROM person;
The asterisk character '*' indicates 'all columns'. In the result, the DBMS should deliver all five rows each with the seven values we used previously with the INSERT command.
In the following examples we add the actually missing clauses of the general syntax - one after the other.
Add a list of some or all columnnames
SELECT firstname, lastname FROM person;
The DBMS should deliver the two columns firstname and lastname of all five rows.
Add a search condition
SELECT id, firstname, lastname FROM person WHERE id > 2;
The DBMS should deliver the three columns id, firstname and lastname of three rows.
Add a sort instruction
SELECT id, firstname, lastname, date_of_birth FROM person WHERE id > 2 ORDER BY date_of_birth;
The DBMS should deliver the four columns id, firstname, lastname and date_of_birth of three rows in the ascending order of date_of_birth.
If we want to change the values of some columns in some rows we can do so by using the UPDATE command. The general syntax for a simple UPDATE is:
UPDATE <tablename> SET <columnname> = <value>, <columnname> = <value>, ... WHERE <search_condition>;
Values are assigned to the named columns. Unmentioned columns remain unchanged. The search_condition acts in the same way as in the SELECT command. It restricts the coverage of the command to rows which satisfy the criteria. If the WHERE keyword and the search_condition are omitted, all rows of the table are affected. It is possible to specify search_conditions which hit no rows. In this case no rows are updated - and no error or exception occurs.
Change one column of one row
UPDATE person SET firstname = 'James Walker' WHERE id = 5; COMMIT;
The first name of Mr. de Winter changes to James Walker whereas all his other values remain unchanged. All other rows also remain unchanged. Please verify this with a SELECT command.
Change one column of multiple rows
UPDATE person SET firstname = 'Unknown' WHERE date_of_birth < date'2000-01-01'; COMMIT;
The <search_condition> isn't restricted to the Primary Key column. We can specify any other column. And the comparison operator isn't restricted to the equal sign. We can use other operators - they solely have to match the data type of the column.
In this example we change the firstname of four rows with a single command. If there is a table with millions of rows we can change all of them using one single command.
Change two columns of one row
-- Please note the additional comma UPDATE person SET firstname = 'Jimmy Walker', lastname = 'de la Crux' WHERE id = 5; COMMIT;
The two values are changed with one single command.
The DELETE command removes complete rows from the table. As the rows are removed as a whole there is no need to specify any columnname. The semantics of the <search_condition> is the same as with SELECT and UPDATE.
DELETE FROM <tablename> WHERE <search_condition>;
Delete one row
DELETE FROM person WHERE id = 5; COMMIT;
The row of James de Winter is removed from the table.
Delete many rows
DELETE FROM person; COMMIT;
All remaining rows are deleted because we have omitted the <search_condition>. The table is empty, but it still exists.
No rows affected
DELETE FROM person WHERE id = 99; COMMIT;
This command will remove no row as there is no row with id equals to 99. But the syntax and the execution within the DBMS are still perfect. No exception is thrown. The command terminates without any error message or error code.
The INSERT and DELETE commands affect rows in their entirety. INSERT puts a complete new row into a table (unmentioned columns remain empty) and DELETE removes complete rows. In contrast, SELECT and UPDATE affect only those columns that are mentioned in the command; unmentioned columns are unaffected.
The INSERT command (in the simple version of this page) has no <search_condition> and therefore handles exactly one row. The three other commands may affect zero, one or more rows depending on the evaluation of their <search_condition>.
| <urn:uuid:e90823eb-129a-47c2-a67f-e62c9baa6e7e> | 3.796875 | 1,741 | Documentation | Software Dev. | 49.519181 | 95,627,444 |
Tourism is bringing rapid development to the islands of Langkawi, which puts pressure on the marine ecosystem. This research records the diversity and will be a useful baseline record for biomonitoring studies in Malaysia.
A study on the taxonomy of Chlorophyta was carried out to identify and record the diversity of Chlorophyta on several islands and along coastal areas of Langkawi. The selected locations are Pulau Tuba, Pulau Dayang Bunting, Pulau Beras Basah, Pulau Bumbun Besar, Pulau Bumbun Kecil, Teluk Yu, Pantai Kok, Pebble Beach and Tanjung Rhu.
Sample collection, specimen preservation and species identification are the processes involved in this study. Samples were collected in various habitats such as rocky areas, coral areas and sandy beaches.
A total of 19 species of Chlorophyta were collected in this study. The highest number of species was recorded at Pulau Bumbun Kecil, with a total of 14 species. The lowest numbers were recorded at Pulau Dayang Bunting, Pulau Beras Basah and Pantai Kok, with only one species each.
Pulau Tuba and Pulau Bumbun Kecil recorded the highest similarity index, 27.2%. Meanwhile, Pulau Dayang Bunting and Pantai Kok showed no species similarity with any of the other sites.
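The paper does not state which similarity index was used; a common choice for presence/absence species lists is the Sørensen coefficient, sketched here with made-up species lists (the names below are illustrative, not the survey data):

```python
# Sørensen similarity (as a percentage) between two sites' species lists.
def sorensen(site_a: set, site_b: set) -> float:
    if not site_a or not site_b:
        return 0.0
    shared = len(site_a & site_b)
    return 2 * shared / (len(site_a) + len(site_b)) * 100

# Hypothetical presence/absence lists -- not the actual survey data.
pulau_tuba = {"Ulva lactuca", "Caulerpa racemosa", "Halimeda opuntia"}
pulau_bumbun_kecil = {"Ulva lactuca", "Caulerpa racemosa", "Codium sp.", "Halimeda macroloba"}
print(f"{sorensen(pulau_tuba, pulau_bumbun_kecil):.1f}%")   # 57.1% for these toy lists
```

A site that shares no species with any other site, as reported for Pulau Dayang Bunting and Pantai Kok, would score 0% against every other list.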
Seaweeds are multicellular marine algae that grow on the seashores, in salt marshes, in brackish water, or submerged in the ocean. They are plant-like organisms that usually live attached to rocks or other hard substrates, but they lack the basic structure and vascular system of higher plants.
Components of seaweeds include the blade, holdfast and stipe (Garrison, 2009). Only two studies have been carried out on the diversity and distribution of seaweeds in Langkawi waters (Phang et al., 2008).
Langkawi is experiencing rapid development; this will put continued pressure on the marine ecosystem, which may reduce the survival and growth of seaweeds and lead to extensive reductions in the number of species of marine macroalgae (Wood and Zieman, 1969).
This research would provide a checklist of diversity of seaweeds found in selected islands of Langkawi. In addition to that, this research could be useful as a baseline record for biomonitoring studies in Malaysia.
It will be beneficial for other researchers as it provides information which can be used as a reference for future study. Besides that, this research would also help us to assess the diversity of green algae.
Seaweeds are considered an ecologically and economically important component of marine ecosystems. They are marine algae that are often mistaken for plants even though they lack vascular systems.
William and Smith (2007) claimed that seaweed production has more than doubled over the past two decades. However, in some developing countries seaweeds are under threat due to human activities (Shatheesh and Wesley, 2012).
Early detection of threats is the best means of prevention, as it may reduce future costs. At the same time, a rapid response is required when prevention fails (Lodge et al., 2006).
Green algae are important in the marine ecosystem. They provide food for marine animals, and they also contribute to the formation of coral reefs.
Green algae exhibit a rapid growth response to the high nutrient levels found in polluted environments. Some species of green algae are exotic species that are of concern for marine conservation.
For more information, contact
IHSAN BIN ALWI
FACULTY OF APPLIED SCIENCE
UNIVERSITI TEKNOLOGI MARA
Darmarajah Nadarajah | Research SEA News
| <urn:uuid:47161f0f-fc01-49fa-899f-4990e4d780ec> | 3.3125 | 1,439 | Content Listing | Science & Tech. | 37.969642 | 95,627,465 |
The study, published in the journal Nature, demonstrates that two genetically-defined groups of nerve cells are in control of limb alternation at different speeds of locomotion, and thus that the animals' gait is disturbed when these cell populations are missing.
Most land animals can walk or run by alternating their left and right legs in different coordinated patterns. Some animals, such as rabbits, move both leg pairs simultaneously to obtain a hopping motion. In the present study, the researchers Adolfo Talpalar and Julien Bouvier, together with Professor Ole Kiehn and colleagues, studied the spinal networks that control these movement patterns in mice. By using advanced genetic methods that allow the elimination of discrete groups of neurons from the spinal cord, they were able to remove a type of neuron characterized by the expression of the gene Dbx1.
"It was classically thought that only one group of nerve cells controls left right alternation", says Ole Kiehn who leads the laboratory behind the study at the Department of Neuroscience. "It was then very interesting to find that there are actually two specific neuronal populations involved, and on top of that that they each control different aspect of the limb coordination."
Indeed, the researchers found that the gene Dbx1 is expressed in two different groups of nerve cells, one of which is inhibitory and one excitatory. The new study shows that the two cellular populations control different forms of the behaviour. Just like when we change gear to accelerate in a car, one part of the neuronal circuit controls the mouse's alternating gait at low speeds, while the other population is engaged when the animal moves faster. Accordingly, the study also shows that when the two populations were removed altogether in the same animal, the mice were unable to alternate at all and hopped like rabbits instead.
There are some animals, such as desert mice and kangaroos, which only hop. The researchers behind the study speculate that the locomotive pattern of these animals could be attributable to the lack of the Dbx1 controlled alternating system.
The study was financed with grants from the Söderberg Foundation, Karolinska Institutet (Distinguished Professor Award), the Swedish Research Council, and the European Research Council (ERC advanced grant).
Publication: "Dual mode operation of neuronal networks Involved in left-right alternation", Adolfo E Talpalar, Julien Bouvier, Lotta Borgius, Gilles Fortin, Alessandra Pierani, and Ole Kiehn, Nature, AOP 30 June 2013. EMBARGOED until Sunday 30 June 2013 at 1800 London time / 1900 CET / 1300 US EDT
Journal website: http://www.nature.com
Contact the Press Office and download images: ki.se/pressroom
Karolinska Institutet - a medical university: ki.se/english
Press Office | EurekAlert!
| <urn:uuid:44d005dd-3a2e-426c-9fca-93cc2da50f42> | 3.328125 | 1,242 | Content Listing | Science & Tech. | 37.666222 | 95,627,477 |
PASADENA, Calif., Jan. 20, 2013 /PRNewswire-USNewswire/ -- A NASA spacecraft is providing new evidence of a wet underground environment on Mars that adds to an increasingly complex picture of the Red Planet's early evolution.
The new information comes from researchers analyzing spectrometer data from NASA's Mars Reconnaissance Orbiter (MRO), which looked down on the floor of McLaughlin Crater. The Martian crater is 57 miles (92 kilometers) in diameter and 1.4 miles (2.2 kilometers) deep. McLaughlin's depth apparently once allowed underground water, which otherwise would have stayed hidden, to flow into the crater's interior.
Layered, flat rocks at the bottom of the crater contain carbonate and clay minerals that form in the presence of water. McLaughlin lacks large inflow channels, and small channels originating within the crater wall end near a level that could have marked the surface of a lake.
Together, these new observations suggest that the carbonates and clay formed in a groundwater-fed lake within the closed basin of the crater. Some researchers propose that the crater interior that caught the water, and the underground zone that contributed it, could both have been wet environments and potential habitats. The findings are published in Sunday's online edition of Nature Geoscience.
"Taken together, the observations in McLaughlin Crater provide the best evidence for carbonate forming within a lake environment instead of being washed into a crater from outside," said Joseph Michalski, lead author of the paper, which has five co-authors. Michalski also is affiliated with the Planetary Science Institute in Tucson, Ariz., and London's Natural History Museum.
Michalski and his co-authors used the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on MRO to check for minerals such as carbonates, which are best preserved under non-acidic conditions.
"The MRO team has made a concerted effort to get highly processed data products out to members of the science community like Dr. Michalski for analysis," said CRISM Principal Investigator Scott Murchie of the Johns Hopkins University Applied Physics Laboratory in Laurel, Md. "New results like this show why that effort is so important."
Launched in 2005, MRO and its six instruments have provided more high-resolution data about the Red Planet than all other Mars orbiters combined. Data is made available for scientists worldwide to research, analyze and report their findings.
"A number of studies using CRISM data have shown rocks exhumed from the subsurface by meteor impact were altered early in Martian history, most likely by hydrothermal fluids," Michalski said. "These fluids trapped in the subsurface could have periodically breached the surface in deep basins such as McLaughlin Crater, possibly carrying clues to subsurface habitability."
McLaughlin Crater sits at the low end of a regional slope several hundreds of miles long on the western side of the Arabia Terra region of Mars. As on Earth, groundwater-fed lakes are expected to occur at low regional elevations. Therefore, this site would be a good candidate for such a process.
"This new report and others are continuing to reveal a more complex Mars than previously appreciated, with at least some areas more likely to reveal signs of ancient life than others," said MRO project scientist Rich Zurek of NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif.
The Johns Hopkins University Applied Physics Laboratory in Laurel, Md., provided and operates CRISM. JPL manages MRO for NASA's Science Mission Directorate in Washington. Lockheed Martin Space Systems in Denver built the orbiter.
To see an image of the carbonate-bearing layers in McLaughlin Crater, visit:
For more about the Mars Reconnaissance Orbiter mission, visit: | <urn:uuid:30fb7df0-c1fe-41e7-a609-b4a19e710e01> | 3.53125 | 786 | News Article | Science & Tech. | 34.22906 | 95,627,515 |
VSEPR stands for valence-shell electron-pair repulsion theory. The theory describes how the arrangement of atoms or groups of atoms around a central atom in a covalent compound is determined by the repulsion between electron pairs in the valence shell of the central atom. It assumes that the molecule will take a shape such that electronic repulsion in the valence shell of that atom is minimized.
Deciding the shape of the molecule:
- Select the least electronegative atom as the central atom, since it shares its electrons with the other atoms in the molecule most readily.
- Count the outer-shell (valence) electrons of the central atom.
- Count the electrons contributed by the outside atoms to make bonds with the central atom.
- Combining these two counts and dividing by two gives the number of valence-shell electron pairs.
- The shape of the molecule is based on this VSEP number (a short code sketch follows the table below).
|VSEP Number|Molecule Shape|
|---|---|
|2|Linear|
|3|Trigonal planar|
|4|Tetrahedral|
|5|Trigonal bipyramidal|
|6|Octahedral|
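A minimal sketch of the counting procedure above (the electron counts and the one-electron-per-single-bond assumption are illustrative simplifications; charges and multiple bonds are ignored):

```python
# Toy VSEPR helper: estimate the electron-pair geometry around a central atom.
# Assumes each bonded atom contributes one electron to a single bond; charges,
# double bonds and lone-pair/bond-pair distinctions are ignored for simplicity.
VALENCE_ELECTRONS = {"H": 1, "Be": 2, "B": 3, "C": 4, "N": 5, "O": 6, "F": 7, "Cl": 7}
SHAPE_FOR_PAIRS = {2: "linear", 3: "trigonal planar", 4: "tetrahedral",
                   5: "trigonal bipyramidal", 6: "octahedral"}

def electron_pair_geometry(central, bonded_atoms):
    electrons = VALENCE_ELECTRONS[central] + len(bonded_atoms)
    pairs = electrons // 2                       # the VSEP number
    return SHAPE_FOR_PAIRS.get(pairs, "outside this toy table")

print(electron_pair_geometry("C", ["H"] * 4))    # tetrahedral (CH4)
print(electron_pair_geometry("B", ["F"] * 3))    # trigonal planar (BF3)
print(electron_pair_geometry("N", ["H"] * 3))    # tetrahedral pairs; NH3 itself is pyramidal
```

Note that this gives the electron-pair arrangement; as the postulates below explain, a lone pair on the central atom (as in NH3) distorts the final molecular shape.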
Postulates of VSEPR:
- For polyatomic molecules containing three or more atoms, one of the atoms of that molecule is called the central atom to which all the other atoms are linked.
- The shape of a molecule depends upon the total number of valence shell electron pairs.
- The electron pairs tend to occupy positions in space such that the electronic repulsion gets minimized and hence the distance between them is maximized.
- The valence shell is assumed to be a sphere, with the electron pairs localized on the surface of the sphere at the maximum possible distance from one another.
- If the central atom is surrounded by bond pairs then we can expect a symmetrical shape to the molecule.
- If the central atom is surrounded by lone pairs and bond pairs both, then a distorted shape of the molecule is observed.
- In case of resonating structures, this theory can be applied to any of them.
- The order of repulsion between electron pairs follows the trend:
lone pair – lone pair > lone pair – bond pair > bond pair – bond pair
Limitations of VSEPR theory:
- VSEPR theory fails to explain isoelectronic species, i.e., species that have the same number of electrons. According to VSEPR theory, the shape of a molecule depends on the number of bond pairs and lone pairs of valence electrons, yet isoelectronic species can differ in shape even though they have the same number of electrons.
- VSEPR theory does not explain transition metal compounds. The model fails to predict the structure of certain of these compounds because it does not take the relative sizes of the substituents and stereochemically inactive lone pairs into account.
| <urn:uuid:6cf896af-9e4c-4588-9764-99edf33522a2> | 3.984375 | 600 | Knowledge Article | Science & Tech. | 41.184355 | 95,627,522 |
posted by Chloe Luster
If 17.7 kJ are released when 1.00 g of O2 reacts with an excess of NO, complete the thermochemical equation below.
4 NO(g) + 3O2(g)=2 N2O5(g) ΔHrxn = ___KJ
1 mol O2 = 16 + 16 = 32.0 grams.
3 mol O2 = 96.0 g
17.7 kJ are released for each 1.00 g of O2 that reacts, and the equation as written uses 3 mol O2 (96.0 g), so how much will be released for 96.0 g? That's
17.7 kJ x 96.0 = ? kJ. Released kJ means an exothermic reaction and that makes the dH a negative number. | <urn:uuid:841f3073-687d-432b-ad99-a978930bc837> | 3.0625 | 145 | Q&A Forum | Science & Tech. | 118.74 | 95,627,548 |
Molecular Population Genetics of Drosophila
Knowledge of the nature, level and distribution of nuclear DNA sequence polymorphism present within natural populations is a prerequisite to a complete understanding of organic evolution. To date, studies of molecular variation within populations have been carried out primarily in one dipteran species, Drosophila melanogaster. The reason for this focus has been the practical one that elegant genetic tools are available for D. melanogaster, such as chromosome balancers that allow chromosomes to be manipulated and stocks made homozygous for genes or whole chromosomes. Also, an increasing number of molecular clones of genes with interesting and diverse effects have become available to use as molecular probes for restriction map variation and to isolate additional copies of the genes for direct sequencing. Furthermore, the ability to transform the germline of D. melanogaster with altered or foreign genes has increased the utility of this species for studies addressing the functional significance of naturally occurring molecular variants (e.g., Laurie-Ahlberg and Stam 1987).
Keywords: Linkage Disequilibrium, Transposable Element, Effective Population Size, Large Insertion, Transposable Element Insertion
| <urn:uuid:1669795e-ac07-4e7c-9457-efb2fcb20cdf> | 3.15625 | 242 | Truncated | Science & Tech. | 1.345949 | 95,627,567 |
There really wasn't any doubt.
Evidence that Earth is warming is based on three sets of temperature readings. One set is maintained by NASA, one by the National Oceanic and Atmospheric Administration (NOAA), and one by a collaborative effort between Britain's Meteorological Office and the University of East Anglia's Climate Research Unit, a dataset generally referred to as HadCRU. All of them have painted essentially the same picture: a global temperature increase over land of slightly under 1 degree C (1.8 F) over the past century.
But, as we know, not everybody has been willing to accept that evidence. One of those who has been most visibly (or audibly) vocal in his skepticism has been UC Berkeley professor Richard Muller. Muller has claimed erroneously that the famous 'hockey stick' graph showing temperature increases derives from a mathematical error (the graph has in fact been endorsed in a review by the National Academy of Sciences). He also fabricated criticisms by others of Al Gore, misrepresenting Gore's statements in the process. And he helped perpetuate some of the inaccuracies and misstatements that helped flame 'Climategate'.
So the announcement earlier this year that Muller would be co-chair of the Berkeley Earth Surface Temperature (BEST) study, which would use statistical analysis “to resolve current criticism of the [global] temperature analyses, and to prepare an open record that will allow rapid response to further criticism or suggestions,” was greeted with some cynicism, particularly given that the study was being heavily underwritten by the Koch Brothers, serial funders of climate deniers.
But evidence is evidence, and facts are facts, and the BEST team's first four papers, submitted for peer review but meanwhile published online, have concluded that ... well, global land temperatures have increased by about 1 degree Celsius. In fact, the BEST study yielded a temperature increase just two percent less than NOAA's estimate.
In an article for the Wall Street Journal, Muller wrote: "When we began our study, we felt that skeptics had raised legitimate issues, and we didn't know what we'd find. Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that. They managed to avoid bias in their data selection, homogenization and other corrections. Global warming is real. Perhaps our results will help cool this portion of the climate debate." [Emphasis added]
(Of course, anyone who has actually followed climate science, rather than skeptic blogs about climate science, will have found the sentences in bold far less surprising than Muller and his team apparently did.)
As for whether this might cool the debate: well, probably not. The study monitors temperature increases, and does not address the cause of those increases. As Kevin Drum noted in Mother Jones: "So in one sense, its impact is limited since the smarter skeptics have already abandoned the idea that warming is a hoax and now focus their fire solely on the contention that it's man-made. (And the even smarter ones have given up on that, too, and now merely argue that it's economically pointless to try to stop it.)"
Indeed, James Delingpole in the Daily Telegraph attempted to argue that, "it has been a truth long acknowledged by climate skeptics, deniers and realists of every conceivable hue that since the mid-19th century, the planet has been on a warming trend." Never mind that, just two years ago, skeptics were assailing the very foundations of climate science or that many were arguing the existence of "global cooling" based on the fallacious claim that there has been no warming since 1998 - which was in fact the headline of a blog Delingpole posted less than four months ago. And some of the BEST research was designed specifically to test two particularly persistent skeptic claims: that some temperature stations produce anomalously high readings, and that the "urban heat island effect" distorts the temperature record.
No matter. Skeptics will continue to marshal their arguments and say one thing. The evidence - mountains of it, and growing all the time - will continue to say something else entirely.
Image by Berkeley Earth Surface Temperature (BEST) Study.
Wednesday, 26 October 2011
Skeptics Catching Up on Climate Science : Discovery News
| <urn:uuid:64ad9253-368b-49e0-8c74-8c2d509751d3> | 3.046875 | 999 | Personal Blog | Science & Tech. | 39.424492 | 95,627,585 |
The fungus Aspergillus fumigatus produces a group of previously unknown natural products. With reference to plant isoquinoline alkaloids, these substances have been named fumisoquins. Researchers from Jena discovered the novel substances together with their American colleagues while studying the fungal genome. The family of isoquinoline alkaloids contains many pharmacologically active molecules. This study, which has just been published in Nature Chemical Biology, shows that fungi and plants developed biosynthetic pathways independently of each other. These findings make Aspergillus an interesting target for the discovery of novel drugs and their biotechnological production.
A large number of drugs used today originate from nature. Most of these molecules, which can be found with or without synthetic modifications and exert their beneficial effect on human health, are derived from microorganisms or plants. Thus, it is of great interest to discover novel active compounds in nature and use them for the treatment of diseases.
The fungus Aspergillus fumigatus produces fumisoquins in a way similar to plants.
Jeannette Schmaler-Ripcke, Florian Kloss, Luo Yu / HKI
One well-known group of plant metabolites is the isoquinoline alkaloids. Today more than 2,500 different types are known; they are mainly found in poppy and barberry plants. Famous examples include the painkiller morphine and the cough remedy codeine.
Together with colleagues from the US, scientists in the labs of Dirk Hoffmeister and Axel Brakhage at the Friedrich Schiller University in Jena found out that fungi synthesize certain natural products in a very similar way to plants.
They analyzed the genome of the common mold Aspergillus and discovered a small cluster of genes whose function was previously unknown. Comparing these genetic sequences with known data implied that they might be responsible for the synthesis of novel natural products.
By manipulating the genetic sequences, characterizing the resulting metabolites and using radioactive labeling experiments it was possible to elucidate the structure of the novel molecules and to unravel the detailed biosynthetic pathways.
The researchers discovered a new linkage mechanism for carbon atoms which had never been seen before in fungi. The whole fumisoquin biosynthetic pathway appears to be a combination of plant biosynthetic principles and the non-ribosomal peptide synthetases commonly found in fungi.
Axel Brakhage, university professor and head of the Leibniz Institute for Natural Product Research and Infection Biology, explains: “Fungi and plants diverged early on during evolution. The newly discovered fumisoquin synthesis pathway shows that there was a parallel development for the production of isoquinoline alkaloid compounds in both groups of organisms. This opens up new roads for combinatorial biotechnology in order to advance the search for novel active compounds and thus to develop urgently needed new drugs.”
Dirk Hoffmeister, professor at the Institute for Pharmacy at Friedrich Schiller University, is pleased with the joint efforts: “The published study is a great example of the tight collaboration between the university and the Leibniz Institute for Natural Product Research and Infection Biology – Hans Knöll Institute – and our American partners. Good research does not know any borders.”
The international scientific association “Faculty of 1000“ included this publication in their hit list of seminal research results.
Dr. Michael Ramm | idw - Informationsdienst Wissenschaft
| <urn:uuid:ead34cba-3449-4f67-87fe-fb24ee819ed7> | 3.296875 | 1,362 | Content Listing | Science & Tech. | 29.188752 | 95,627,590 |
Influence of the Carbonization Process on Activated Carbon Properties from Lignin and Lignin-Rich Biomasses
2017-07-28T00:00:00Z (GMT) by
Lignin-rich biomass (beech wood, pine bark, and oak bark) and four lignins were tested as precursors to produce activated carbon (AC) via a two-step chemical activation with KOH. First, the precursors were carbonized via either pyrolysis or hydrothermal carbonization, with the purpose of evaluating the influence of the carbonization process on the AC properties. Pyrolysis chars (pyrochars) were thermally more stable than hydrothermal carbonization chars (hydrochars); thus, more AC was yielded from pyrochars (AC yield calculated from the char amount). The difference between ACs from hydrochars and pyrochars was small regarding the AC yield calculated from the initial amount of biomass or lignin. Additionally, no considerable differences in terms of total surface area and surface chemistry were found between both ACs. To understand this, the mechanism of the activation was explained as a local alkali-catalyzed gasification. In the case of hydrochar, carbonization reactions occurred simultaneously to the gasification because of their lower thermal stability. Thus, the carbon content and yields of hydrochar ACs were similar to pyrochar ACs, but their microporous surface areas were lower, likely due to condensation of volatile matter. | <urn:uuid:eb5c3ae3-9fea-43c3-927e-0275d39a7fb0> | 2.625 | 316 | Truncated | Science & Tech. | 15.458832 | 95,627,601 |
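A small sketch of the two yield bases distinguished in the abstract above: yield relative to the char versus yield relative to the initial feedstock. The masses are hypothetical and only illustrate the assumed definitions, not the study's data.

```python
# Hypothetical masses (g); the definitions, not the numbers, are the point.
m_biomass = 100.0            # initial lignin-rich feedstock
m_char = 30.0                # char after pyrolysis or hydrothermal carbonization
m_activated_carbon = 12.0    # activated carbon remaining after KOH activation

yield_from_char = m_activated_carbon / m_char          # 0.40: "AC yield calculated from the char amount"
yield_from_feedstock = m_activated_carbon / m_biomass  # 0.12: "AC yield from the initial amount of biomass"

# The two bases are linked by the char yield of the carbonization step:
assert abs(yield_from_feedstock - (m_char / m_biomass) * yield_from_char) < 1e-12
```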
The 1987 Montreal Protocol phased out the use of chlorofluorocarbons, or CFCs, a class of chemicals that destroy ozone in the stratosphere, allowing more ultraviolet radiation to reach earth's surface. Though the treaty aimed to reverse ozone losses, the new research shows that it also protected the hydroclimate.
The largest ozone hole over Antarctica (in purple) was recorded in September 2006. Thanks to the Montreal Protocol, the amount of ozone-depleting chemicals in the atmosphere peaked in the late 1990s and Antarctica's ozone hole is expected to recover by 2060.
The study says the treaty prevented ozone loss from disrupting atmospheric circulation, and kept CFCs, which are greenhouse gases, from warming the atmosphere and also disrupting atmospheric circulation. Had these effects taken hold, they would have combined to shift rainfall patterns in ways beyond those that may already be happening due to rising carbon dioxide in the air.
At the time the Montreal Protocol was drafted, the warming potential of CFCs was poorly understood, and the impact of ozone depletion on surface climate and the hydrological cycle was not recognized at all. "We dodged a bullet we did not know had been fired," said study coauthor Richard Seager, a climate scientist at Columbia University's Lamont-Doherty Earth Observatory.
Today, rising carbon dioxide levels are already disturbing earth's hydrological cycle, making dry areas drier and wet areas wetter. But in computer models simulating a world of continued CFC use, the researchers found that the hydrological changes in the decade ahead, 2020-2029, would have been twice as severe as they are now expected to be. Subtropical deserts, for example in North America and the Mediterranean region, would have grown even drier and wider, the study says, and wet regions in the tropics, and mid-to-high latitudes would have grown even wetter.
The ozone layer protects life on earth by absorbing harmful ultraviolet radiation. As the layer thins, the upper atmosphere grows colder, causing winds in the stratosphere and in the troposphere below to shift, displacing jet streams and storm tracks. The researchers' model shows that if ozone destruction had continued unabated, and increasing CFCs further heated the planet, the jet stream in the mid-latitudes would have shifted toward the poles, expanding the subtropical dry zones and shifting the mid-latitude rain belts poleward. The warming due to added CFCs in the air would have also intensified cycles of evaporation and precipitation, causing the wet climates of the deep tropics and mid to high latitudes to get wetter, and the subtropical dry climates to get drier.
The study builds on earlier work by coauthor Lorenzo Polvani, a climate scientist with joint appointments at Lamont-Doherty and Columbia's Fu Foundation School of Engineering and Applied Science. Polvani and others have found that two human influences on climate --ozone loss and industrial greenhouse gases—have together pushed the jet stream in the southern hemisphere south over recent decades. As the ozone hole over Antarctica closes in the coming decades, the jet stream will stop its poleward migration, Polvani found in a 2011 study in the journal Geophysical Research Letters. The projected stopping of the poleward jet migration is a result of the ozone hole closing, canceling the effect of increasing greenhouse gases.
"We wanted to take a look at the more drastic scenario—what would have happened if there had been no Montreal Protocol?" said study lead author Yutian Wu, a former Lamont graduate student who is now a postdoctoral researcher at New York University. "The climatic impacts of CFCs and ozone depletion were not known back then."
The Montreal Protocol is considered one of the most successful environmental treaties of all time. Once scientists linked CFCs to rapid ozone loss over Antarctica, world leaders responded quickly. Nearly 200 countries have ratified the treaty. The ozone depletion that CFCs would have caused is now known to have been far worse than was realized at the time, in 1987. The cost of developing CFC-substitutes also turned out to be far less than the industry estimated.
"It's remarkable that the Montreal Protocol has not only been important in protecting the ozone layer and in decreasing global warming but that it also has had an important effect on rainfall patterns and reducing the changes we are in for," said Susan Solomon, an atmospheric scientist at the Massachusetts Institute of Technology who won the Vetlesen Prize earlier this year for her work on ozone depletion. Solomon was not involved in the study.
As a greenhouse gas, CFCs can be thousands of times more potent than carbon dioxide. Dutch scientist Guus Velders estimated in a 2007 study that, had the chemicals not been phased out, by 2010 they would have generated the warming equivalent of 10 billion tons of carbon dioxide (humans produced 32 billion tons of CO2 in 2011).
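As a hedged illustration of how a CO2-equivalent figure like this is assembled in principle (this is not the Velders methodology), the emitted mass of each gas is multiplied by its global warming potential. The GWP values and emission amounts below are assumptions chosen only for illustration.

```python
# Approximate 100-year GWPs (assumed, roughly IPCC AR4-era values) and
# hypothetical annual emissions in megatonnes; illustration only.
gwp_100yr = {"CFC-11": 4_750, "CFC-12": 10_900}
emissions_mt = {"CFC-11": 0.25, "CFC-12": 0.40}

co2e_mt = sum(emissions_mt[gas] * gwp_100yr[gas] for gas in gwp_100yr)
print(f"CO2-equivalent: {co2e_mt:,.0f} Mt CO2e per year")   # about 5,548 Mt in this toy example
```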
Hydrofluorocarbons, or HFCs, have largely replaced CFCs as refrigerants, aerosol propellants and other products. While HFCs are ozone-safe, they, too, are powerful greenhouse gases that have become a concern as world leaders grapple with climate change. The Kyoto Protocol was drafted to regulate global greenhouse gas emissions, but its expiration at the end of 2012 has led some countries to seek climate protections from the Montreal Protocol. Canada, Mexico and the United States have asked that HFCs be regulated under Montreal, though the treaty was never intended to limit greenhouse gases. So far no action has been taken.
"This research supports the principle that it's generally best not to put things into the environment that weren't there before," said Scott Barrett, an economist at the Earth Institute who was not involved in the study. "It's a lesson, surely, for our current efforts to limit greenhouse gas emissions."
The study, "The Importance of the Montreal Protocol in Protecting Earth's Hydroclimate," is available from the authors.
Kim Martineau | EurekAlert!
| <urn:uuid:dbe2e58f-7210-4e4d-a9ba-9e88fd2e3a7e79> | 4.28125 | 1,928 | Content Listing | Science & Tech. | 37.324157 | 95,627,639 |
Mixed-pollination systems may allow plants to achieve stable seed production when unpredictable conditions cause variation in the relative success of different pollination modes. We studied variation in time (two years) and space (five populations, three from an island and two from the mainland) in the pollination mode of Buxus balearica, an ambophilous (i.e. pollinated by wind and insects) and selfing species distributed in the Mediterranean Basin, by means of direct observations and experimental manipulations (bagging with different materials). The relative importance of each pollination mode differed among populations; however, levels of selfing and wind pollination were similar between island and mainland. Flowers of B. balearica were visited only by generalist insects, and the species composition and abundance of flower visitors varied both in space and time. The frequency of insect visits to plants was not higher in mainland than in island populations, although insects on the mainland were more diverse, visited a proportionally greater number of flowers, and remained longer on the plants than insects on the island. The frequency of insect visits was negatively correlated with flowering synchrony (all populations pooled) and was found to increase seed set in one of the mainland populations (that with the highest frequency of insect visits and highest flower visitation rate). Fruit and seed mass were not affected by pollination mode. Scarcity of pollinators on the island seems to have an effect on the pollination mode, although the greatest variation in breeding system was found at a more local scale.
| <urn:uuid:367e53e5-4bf1-4f83-9b95-573a87d37ad9> | 3.046875 | 326 | Academic Writing | Science & Tech. | 18.514041 | 95,627,666 |
Tropical deforestation accounts for up to 15% of net global carbon emissions each year.
Halting tropical deforestation and allowing for regrowth could mitigate up to 50% of net global emissions through 2050.
The carbon stored in these acres adds up. Two of our flagship projects in Peru, the Sierra del Divisor project and the Airo Pai Community Reserve project, store the equivalent of over 1 billion and 292 million metric tons of carbon dioxide, respectively. Together, that’s equal to the yearly emissions of 275 million cars in the United States.4
You can have an impact.
Just $10 donated towards Rainforest Trust's Rungan River Peat Swamp project in Borneo can protect 5 acres storing approximately 3,675 metric tons of carbon dioxide. If released into the atmosphere, that would be equal to the yearly carbon dioxide released by 781 U.S. cars.
All those emissions, prevented through a $10 donation.
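A back-of-the-envelope check of the figures above, using only the numbers quoted in this section; the per-car emission factor is an assumption (about 4.7 metric tons of CO2 per car per year, consistent with the car-equivalent figure quoted above).

```python
# Figures taken from the text above; the per-car factor is assumed.
acres_protected_per_10_usd = 5
co2_tonnes_protected = 3_675            # metric tons CO2 for those 5 acres (quoted above)
co2_tonnes_per_car_per_year = 4.7       # assumed average for a U.S. passenger car

car_equivalents = co2_tonnes_protected / co2_tonnes_per_car_per_year
print(f"{co2_tonnes_protected:,} t CO2 ≈ {car_equivalents:,.0f} cars' annual emissions")  # ≈ 782 cars
```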
BURNING FOREST TO CLEAR LAND FOR AGRICULTURE IS A MAJOR THREAT TO BORNEO'S RAINFOREST. PHOTO BY BERNAT RIPOLL/BNF
Securing a Missing Link in the Amazon
Size: 1,338,520 acres
Action: Expansion of Airo Pai Community Reserve
Total Carbon Storage: ~292,182,000 metric tons CO2 equivalent (above ground)
Tree Density: 266 trees/acre
Price per Acre: $1.11
Size: 3,460 acres
Action: Establish the Lost Forest Reserve
Total Carbon Storage: ~635,000 metric tons CO2 equivalent (above ground)
Price per Acre: $206
Saving the Bornean Orangutan
Size: 385,000 acres
Action: Overturn logging concessions to create protected areas
Total Carbon Storage: ~282,975,000 metric tons CO2 equivalent
Tree Density: 240 trees/acre
Price per Acre: $2
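To make the listings above easier to compare, here is a sketch that derives an implied cost per metric ton of CO2e for each project from the acreage, price per acre, and carbon-storage figures quoted in this section; the dictionary keys are shorthand labels, not official project names.

```python
# Implied cost per tonne = (acres * USD per acre) / tonnes CO2e stored.
# All numbers are copied from the project listings above.
projects = {
    "Airo Pai expansion (Peru)":  {"acres": 1_338_520, "usd_per_acre": 1.11, "tonnes_co2e": 292_182_000},
    "Lost Forest Reserve":        {"acres": 3_460,     "usd_per_acre": 206,  "tonnes_co2e": 635_000},
    "Bornean orangutan (Borneo)": {"acres": 385_000,   "usd_per_acre": 2,    "tonnes_co2e": 282_975_000},
}

for name, p in projects.items():
    usd_per_tonne = p["acres"] * p["usd_per_acre"] / p["tonnes_co2e"]
    print(f"{name}: ${usd_per_tonne:.4f} per tonne CO2e protected")
```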
- 1 Houghton, R.A. et al. Nature Climate Change 5, 1022–1023 (2015)
- 2 IPCC Climate Change 2014: Mitigation of Climate Change.
- 3 Global Forest Watch
- 4 Vehicle emissions data from US EPA | <urn:uuid:c45dfca3-a971-4120-8012-864368578a79> | 3.1875 | 469 | About (Org.) | Science & Tech. | 55.045435 | 95,627,668 |