Mpseudo performs multicore and precise computation of pseudospectra of (square or rectangular) matrices. It works directly from the definition of the pseudospectrum, computing epsilon-values on a regular grid in the complex plane. It uses the
multiprocessing module to share computations between CPU cores, and the
mpmath module to make calculations with high precision.
pip install mpmath
If you don't need high-precision pseudospectra computation (more than 15 significant digits),
mpseudo can work without it.
The only requirement is NumPy. It should be installed on your system or in a virtual environment.
git clone https://github.com/scidam/mpseudo.git
The pseudospectrum of the MATLAB gallery(5) matrix looks like this (up to 100 digits of accuracy were used for the matrix resolvent computation):
The pseudospectrum above is obtained via the following lines of code:
from matplotlib import pyplot
from mpseudo import pseudo

# Gallery(5) MATLAB matrix (its only exact eigenvalue is 0)
A = [[-9, 11, -21, 63, -252],
     [70, -69, 141, -421, 1684],
     [-575, 575, -1149, 3451, -13801],
     [3891, -3891, 7782, -23345, 93365],
     [1024, -1024, 2048, -6144, 24572]]

# Compute the pseudospectrum in the bounding box [-0.05, 0.05, -0.05, 0.05]
# with resolution 100x100, ncpu=2 processes and 50-digit precision.
psa, X, Y = pseudo(A, ncpu=2, digits=50, ppd=100,
                   bbox=[-0.05, 0.05, -0.05, 0.05])

# Show the results
pyplot.contourf(X, Y, psa)
pyplot.show()
If the mpmath module is not installed, the pseudospectrum of the matrix will be computed with standard double precision (about 15 significant digits), which is not sufficient for this case.
Read about this script in Russian here.
Mpseudo is free software licensed under the MIT License.
- Principles of Chemistry I: Honors
Fall 2013, Unique 52195
Lecture Summary, 5 December 2013
First Law of Thermodynamics: Chemical thermodynamics is the study of the flow and conversion of energy in chemical systems. Thermodynamics is focused on understanding the behavior of a system and the flow of energy between the system and its surroundings.
We talked about three kinds of systems: open, closed, and isolated. In this class we will focus only on closed systems.
To understand how energy is exchanged between system and surroundings, we need to be able to understand how the system changes from an initial to a final state. This involves moving along a path, which is associated with two new path functions: work and heat. The magnitude of these depends on the particular path taken to get from an initial to a final state.
w = -Pext ΔV
qv = n Cv ΔT
where Cv is a heat capacity measured at constant volume, and is a characteristic property of the material.
Although both heat and work are path functions, their sum is a state function, called internal energy, U. For any change from one state to another, the change in internal energy is equal to the sum of the heat and work experienced along that path:
ΔU = q + w
This is a statement of the 1st law of thermodynamics.
At constant pressure, this equation can be manipulated to give a new state function, called enthalpy:
ΔH = ΔU + P ΔV = n Cv ΔT + n R ΔT = n Cp ΔT
where Cp = Cv + R. | <urn:uuid:793e2494-a6aa-48c6-ac08-8a24e4e4f0b1> | 3.578125 | 374 | Academic Writing | Science & Tech. | 43.619331 | 95,521,450 |
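As a quick numeric check of these relations, consider one mole of a monatomic ideal gas (Cv = 3R/2, an illustrative choice) heated by 10 K at constant pressure; the first law then gives q = ΔH:

```python
R = 8.314   # J/(mol K), ideal gas constant
n = 1.0     # mol
Cv = 1.5 * R          # constant-volume heat capacity, monatomic ideal gas
Cp = Cv + R           # Cp = Cv + R
dT = 10.0             # K, temperature change

dU = n * Cv * dT      # change in internal energy
dH = n * Cp * dT      # change in enthalpy
w = -n * R * dT       # w = -Pext*ΔV = -nR*ΔT for an ideal gas at constant P
q = dU - w            # first law: ΔU = q + w

print(dU, dH, q)      # the heat absorbed, q, equals ΔH at constant pressure
```

The last line makes the point of the derivation explicit: at constant pressure, the heat flow is the enthalpy change.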
A molecule that's bonded to other molecules is not as free to move as one that isn't.
Alcohol molecules do bond to water molecules in liquid form, which is why they don't easily separate when boiling or freezing.
However, when freezing, these polar molecules become 'stuck' in a grid, which limits their degrees of freedom (their ability to rotate).
Consequently, the microwave radiation has significantly less effect.
The way you state this makes me think "Oh, the microwave photons transfer less energy to the polar molecules," but I don't think that's it (I'm not sure if that's what you mean). Rather, the polar molecules need more energy (more photons, or more energy per photon) to get them moving. And because current microwave ovens aren't sophisticated enough to focus more energy on the frozen areas (they would need some sort of image analysis to spot the frozen areas and beams to focus the energy there), the energy goes randomly inside the oven. So if you apply high power you might defrost the frozen areas but you'll burn the rest of the food; if you don't, the frozen parts don't melt well while the rest of the food just gets warm.
Perhaps a simple way of looking at it is that ice (solid water) is somewhat translucent in the microwave frequencies, while liquid water is very opaque. So why are some things opaque (at some particular frequency) while other things are transparent/translucent? That's a big can of worms. Here is my attempt at a simple explanation.
Microwaves will create torques on the ice molecules, causing them to temporarily absorb energy (imagine a molecule rotating a little bit against a tight spring), but because they are trapped in a restricted structure, they then release that energy coherently with the microwave wave on the rebound. All the other molecules do this too, in unison (mostly). Energy goes in coherently and comes out (mostly) coherently because most of the ice molecules are acting together. (And, for the most part, the motion of the ice molecules in the presence of the microwave energy doesn't seem to matter if you view it forward or backward in time. More on this later.)
Imagine an array of buoys floating on the water. As a water wave travels under it, the buoys rise up, absorbing some energy (gravitational potential energy), but then they give that energy back coherently (mostly) as they fall on the other side of the wave. In a sense, the buoy array was transparent to the water wave. If you were to film the wave passing under the buoys and then play it backwards, it would look pretty much identical to a wave traveling in the other direction. If you didn't know any better, you wouldn't be able to tell which way was forward in time and which way was backwards.
On the other hand, if the wave imparts energy in a non-coherent fashion, like a wave crashing on the rocks, it destroys the coherence, and thus the wave. Play that one backwards and it's obvious which is forward and which is not. This water crashing onto the rocks is a thermodynamically irreversible process.
Liquid water molecules are free to rotate. That alone wouldn't cause decoherence; but higher-energy, rotating molecules bumping into other molecules certainly would. The chaos involved in spinning molecules bumping into each other, and imparting all sorts of chaotic behavior as a result, is most certainly thermodynamically irreversible. And that kills the coherency of the microwave wave. Thus energy that was once in the form of microwaves is converted into more random thermal energy in the water.
Ignoring reflections, this ratio of coherence to decoherence of a given material to a particular wave is one way to conceptualize transparency.
I should mention that liquids do not need to be polar to absorb microwaves. There are other modes of energy transfer. For example, oil molecules are not polar. But those molecules are long and complicated and they can vibrate in several modes, many of which are at microwave frequencies. Given the complicated nature of these molecules in relationship to nearby oil molecules, and even other atoms in a given molecule, their vibrational modes are rarely coherent with each other. Since the long and twisted oil molecules don't all vibrate in unison, they are bound to absorb the microwave energy in a thermodynamically irreversible process too.
I imagine that it's like this.
Consider a block on the ground.
When we push against it, it starts moving.
That is, we applied work on it, and now it has more energy.
Now consider a block that is stuck somehow.
We push against it, but it cannot move. Therefore we did not apply work and it did not gain energy.
Hmm... I guess we'll have to push real hard so that something breaks before we can actually apply work to it. And then it should get the full energy. ;)
Still, continuing the analogy, if we simultaneously push both loose blocks and stuck blocks, the loose blocks will get all the energy won't they?
Aha! So mixing the water with a couple of drops of oil might have the same effect as mixing it with alcohol?
That sounds as if that won't hit the brick wall of food and tax regulations.
I hope it freezes without fully separating.
(Running to the fridge and putting 3 new glasses in it.)
EDIT: Yep. That seems to be a problem. I can't seem to mix oil and water without it separating again within seconds.
There are emulsifiers which can help there. Look on the contents labels of many prepared foods (sauces in particular) and the word "emulsifiers" will be there. It won't be straightforward because they need to be appropriate. They can affect the taste and may separate out during deep freezing. Looks like you may have suddenly found another vast area of research you need to do.
I find the "ice doesn't heat easily because the molecules can't move as much" theory unconvincing. After all, this isn't too unlike a spring system; even if the spring is more rigid it can still absorb just as much energy. Just because something moves less doesn't mean it contains less energy.
One creeping suspicion I have is that people think ice has just one temperature, whereas an ice cube can be -1C or -100C. They look the same, but the time to melt it will be vastly different.
Oil and water don't mix, but a layer of oil (as in, under the food container) might create a buffer to disperse the energy better. I think the main problem is that the energy is distributed unevenly.
Glass apparently has similar behavior to water in the microwave (transparent at microwave frequencies when solid, opaque when liquid).
Steve Mould has a pretty good video on this -- except for the part where he insists that molecules must be polar to be heated, which isn't always a necessary requirement.
The loss factor (higher for water) and penetration depth (deeper as temperature rises) seem to be the significant variables. Note how they differ between water and ice and how they vary with temperature.
EDIT: One of those references mentioned that the heating mechanism can be explained entirely by proton exchange between atoms, for both ice and water.
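The loss-factor picture can be made concrete with a rough calculation. The sketch below uses textbook-order permittivity values at 2.45 GHz (assumed for illustration, not measured) and the standard plane-wave attenuation formula for a lossy dielectric; it shows microwaves penetrating metres into cold ice but only centimetres into liquid water:

```python
import math

C = 299_792_458.0      # m/s, speed of light
F = 2.45e9             # Hz, domestic microwave oven frequency

def penetration_depth(eps_r, eps_i, f=F):
    """Power penetration depth (1/e of power) of a plane wave in a
    lossy dielectric with relative permittivity eps_r - j*eps_i."""
    alpha = (2 * math.pi * f / C) * math.sqrt(
        (eps_r / 2) * (math.sqrt(1 + (eps_i / eps_r) ** 2) - 1))
    return 1.0 / (2 * alpha)

# Rough textbook-order permittivities at 2.45 GHz (assumptions):
dp_water = penetration_depth(78.0, 10.0)   # liquid water: ~ a couple of cm
dp_ice = penetration_depth(3.2, 0.003)     # cold ice: ~ many metres
print(f"water: {dp_water:.3f} m, ice: {dp_ice:.1f} m")
```

The huge ratio between the two depths is the quantitative version of "ice is translucent, liquid water is opaque" at microwave frequencies.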
Interactive periodic table with dynamic layouts showing names, electrons, oxidation states, trend visualization, orbitals, isotopes, and compound search. Beryllium is a metal with a high melting point. Bearing pads are brazed on to prevent fuel bundle to pressure tube contact, and inter-element spacer pads are brazed on to prevent element to element contact. The modern periodic table is based closely on the ideas he used. Beryllium metal was isolated in 1828 from beryllium chloride (BeCl2) by reacting it with potassium. In the nuclear reaction that occurs, a beryllium nucleus is transmuted into carbon, and one free neutron is emitted, traveling in about the same direction as the alpha particle was heading. Where more than one isotope exists, the value given is the abundance-weighted average.
The beryllium-beryllium oxide composites ("E-Materials") have been specially designed for these electronic applications and have the additional advantage that the thermal expansion coefficient can be tailored to match diverse substrate materials. Vacuum-tight windows and beam-tubes for radiation experiments on synchrotrons are manufactured exclusively from beryllium. This is particularly applicable to cryogenic operation, where thermal expansion mismatch can cause the coating to buckle. Low atomic number also makes beryllium relatively transparent to energetic particles. Basic beryllium nitrate and basic beryllium acetate have similar tetrahedral structures with four beryllium atoms coordinated to a central oxide ion. Radioactive beryllium has been detected in Greenland ice cores and marine sediments, and the amount measured in ice cores deposited over the past years increases and decreases in line with the Sun's activity, as shown by the frequency of sun-spots. The chief ores of beryllium are beryl and bertrandite, which is also a silicate. Specific heat capacity is the amount of energy needed to change the temperature of a kilogram of a substance by 1 K. Tools made of beryllium copper alloys are strong and hard and do not create sparks when they strike a steel surface. If further elements with higher atomic numbers are discovered, they will be placed in additional periods, laid out like the existing periods to illustrate periodically recurring trends in the properties of the elements concerned.
Microbursts are common in southern Arizona during the monsoon. They often take down power poles, leaving us in the dark and without air conditioning during the hottest part of the summer!
It's important to understand the structure of our atmosphere. Think of it as a sandwich with three parts --- the lower, middle and upper layers.
Early in the Monsoon, moisture makes its way northward from the tropics into Arizona, but resides ONLY in the middle and upper layers. That leaves the lower layer, where we live, dry as a bone.
With daytime heating causing rising air, thunderstorms form, feeding off of the moisture in the middle and upper layers.
As it begins to rain, the rain falls into the very dry air at the surface (in the lower layer.)
Figure 1: Thunderstorms form aloft, but the air at the ground is still very dry.
When rain drops evaporate in the very dry air, the air is cooled. Cooler air is heavier and more dense, thus the air begins to sink toward the ground.
This process sets off a chain reaction of sinking air that results in the thunderstorm collapsing in a matter of minutes, with all of its energy (in the form of wind) rushing straight down to the ground.
When the sinking air hits the ground, it spreads out in all directions. The wind speeds in a microburst can reach 60-100 mph.
Figure 2: As the cold air sinks from the thunderstorm, it hits the ground and spreads out in all directions.
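As a rough back-of-the-envelope sketch (all numbers below are assumptions for illustration, not from the article), parcel theory links the evaporative cooling to the downdraft speed: negative buoyancy g·ΔT/T acting over a depth H gives w = sqrt(2·g·(ΔT/T)·H), which lands in the quoted wind-speed range:

```python
import math

# Parcel-theory estimate of peak microburst downdraft speed.
# The cooling and depth values are illustrative assumptions.
G = 9.81          # m/s^2, gravitational acceleration
T_ENV = 300.0     # K, ambient air temperature near the surface
DELTA_T = 8.0     # K, evaporative cooling of the sinking air (assumption)
HEIGHT = 3000.0   # m, depth over which the parcel accelerates (assumption)

# Negative buoyancy b = g * (ΔT / T) acting over depth H: w = sqrt(2 b H)
w = math.sqrt(2 * G * (DELTA_T / T_ENV) * HEIGHT)
print(f"{w:.0f} m/s  (~{w * 2.237:.0f} mph)")
```

With these assumed values the estimate comes out near 90 mph, consistent with the 60-100 mph range quoted above.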
A microburst in Tucson can do damage here, but can also begin the formation of a mammoth Haboob headed for Phoenix.
Copyright 2018 Tucson News Now. All rights reserved.
By C. K. Jørgensen
The book first outlines the history of chemical bonding, emphasizing the various theories that paved the way for further studies in this field. The text then examines the energy levels of a configuration, molecular orbitals, and microsymmetry.
The book takes a glance on the interelectronic repulsion in M.O. configurations, the features of absorption bands, and spectrochemical sequence. Electron move spectra, strength degrees in complexes with virtually round symmetry, molecular orbitals missing round symmetry, and chemical bonding also are mentioned. The e-book examines the decision of advanced species in resolution and their formation constants; survey of the chemistry of heavy, steel components; and tables of absorption spectra.
The manuscript is a reliable source of data for physicists and group theorists interested in absorption spectra and chemical bonding.
Read Online or Download Absorption Spectra and Chemical Bonding in Complexes PDF
Similar analytical chemistry books
Biosensors and Modern Biospecific Analytical Techniques further expands the Comprehensive Analytical Chemistry series' coverage of rapid analysis based on advanced technological developments. This 12-chapter volume summarizes the main developments in the biosensors field over the past 10 years. It offers a comprehensive analysis of the types of biosensors, including DNA-based, enzymatic, optical, self-assembled monolayers, and the third generation of biosensors.
This book presents a new and promising way to grow single-crystalline compound semiconductor materials with defined stoichiometry. The approach is based on the high-precision experimental determination of the boundaries of the single-phase region of the solid in the pressure-temperature-composition (P-T-X) phase space.
This book serves as a reference for those interested in state-of-the-art research on the science and technology of ionic liquids (ILs), particularly in relation to lipids processing and analysis. Topics include a review of the chemistry and physics of ILs as well as a quantitative understanding of structure-activity relationships at the molecular level.
This 2000 book provides an introduction to the nature, occurrence, physical properties, propagation, and uses of surfactants in the petroleum industry. It is aimed mainly at scientists and engineers who may encounter or use surfactants, whether in process design, petroleum production, or research and development.
- High Throughput Analysis for Early Drug Discovery
- Determination of Anions: A Guide for the Analytical Chemist
- Modified Cyclodextrins for Chiral Separation
- Thermal Analysis
- Chemical Analysis of Non-antimicrobial Veterinary Drug Residues in Food (Wiley Series on Mass Spectrometry)
Extra resources for Absorption Spectra and Chemical Bonding in Complexes
Absorption Spectra and Chemical Bonding in Complexes by C. K. Jørgensen
Share this article:
Clouds often captivate onlookers as they take on curious, billowing or ominous shapes, including those that look like flying saucers, ocean waves, mushrooms or giant cauliflower.
One cloud that has been specifically named after an object is the horseshoe cloud, one of the rarest documented cloud formations.
The unusual atmospheric sight was spotted most recently over Battle Mountain, Nevada, on March 8. As the photo went viral it generated a lot of buzz among cloud watchers and curious readers.
Some disagreed with the horseshoe name as they believe it looks more like a mustache or staple.
One of the rarest clouds ever. This was taken over Battle Mountain, Nevada, USA on 8 March 2018.— NWS Elko (@NWSElko) March 9, 2018
It's called a horseshoe cloud for obvious reasons. #nvwx
Credit goes to eagle-eye Christy Grimes. pic.twitter.com/XgQDY77ZzM
According to the National Weather Service in Elko, Nevada, this cloud formation occurs when rotating air or shearing horizontal winds create spin.
Wind shear, or changing wind speed and direction with altitude, can help to create spin. Gently rising air can then force part of the cloud upward.
AccuWeather Meteorologist Jesse Ferrell said he has been an avid weather photographer for 43 years but it wasn't until this past summer when he walked out his front door that he captured a photo of the rare sight in person.
Scientists from Munich realize a dynamic version of the quantum Hall effect in optical superlattices
The transport of particles is usually induced by applying an external gradient to a system. Water, for example, flows down a slope and electric current is generated by applying a voltage. But already in ancient times another way of generating a directional motion was known: by periodic modulation of a system, as can be seen in the famous Archimedes' screw.
Archimedes screw. Continuous rotation of the screw pumps water from the lower-lying reservoir into the upper one.
Implementation of a topological charge pump in an optical superlattice. (a) An optical superlattice is created by superimposing two standing waves with different periods. Its shape can be changed by moving the long lattice (depicted in green). This induces a motion of the atoms in the lattice where they tunnel through the barriers between neighbouring lattice sites. (b) Measured position of the atom cloud for one pump cycle during which the atoms move by exactly one period of the long lattice dl.
More than 30 years ago, the Scottish physicist David Thouless predicted that a similar phenomenon should also occur in quantum mechanical systems, so called topological pumping. A group of researchers from the Ludwig-Maximilians-Universität München and the Max Planck Institute of Quantum Optics, led by Professor Immanuel Bloch and in collaboration with the theoretical physicist Oded Zilberberg (ETH Zürich), have now successfully implemented such a topological charge pump with ultracold atoms in an optical lattice for the first time.
In 1983, inspired by the recently discovered two-dimensional quantum Hall effect, for which Klaus von Klitzing was awarded the Nobel prize in physics in 1985, Thouless came up with the idea that a similar phenomenon could also be observed in one-dimensional systems if their parameters are varied periodically.
This dynamic version of the quantum Hall effect enables transport of particles without an external bias. Due to its special, so called topological properties, this transport occurs in a quantized fashion so that the particles move exactly by a well-defined distance per cycle. In addition, the transport is extremely robust with respect to external perturbations and is not affected by small changes of the system.
This is of particular interest from a technological point of view since it could facilitate a more precise definition of the standard for electrical current. Despite long lasting efforts, however, the realization of such a quantized charge pump has remained out of reach up to now.
Ultracold atoms in optical lattices constitute an almost ideal model system for such experiments since they can be controlled and detected very well. Inside a vacuum, the atoms can be cooled to a temperature close to absolute zero and subsequently be transferred into a periodic potential that is created by the interference of multiple laser beams.
A superlattice is a special kind of these optical lattices that is created by superimposing two standing waves of light with different periodicities. In the experiment in Munich, the periods of the lattices were chosen in such a way that they differ by a factor of two. This gives rise to double well potentials as shown in Fig. 2.
With a superlattice like this, the idea of Thouless can be realized and atoms can be transported in the lattice. In order to do this, the two standing waves are moved relative to each other by shifting the lattice with the longer period in one direction. This leads to a periodic modulation of both the depth of the lattice sites as well as the height of the barriers in between them. A classical particle would not move in this case as the position of the individual lattice sites does not change, but they only move up and down. In contrast to this, the motion of an atom at such a low temperature is described by a quantum mechanical wave. It can therefore follow the moving lattice by tunneling through the barrier between neighboring lattice sites.
Thouless could prove already that in certain situations the motion of the atoms can only occur in a quantized way so that their position changes by an integer multiple of the period of the moving lattice. This is the case if atoms are initially localized on individual double wells, for example.
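This quantization can be illustrated numerically with the Rice-Mele model, a standard two-band tight-binding description of such a modulated double-well superlattice (the hopping and offset parameters below are illustrative, not those of the Munich experiment). The charge pumped per cycle equals the Chern number of the occupied band over the (k, t) torus, computed here with the Fukui-Hatsugai-Suzuki lattice method:

```python
import numpy as np

def rice_mele_h(k, t, J=1.0, d0=0.5, D0=0.5):
    """Bloch Hamiltonian of the Rice-Mele model at quasimomentum k,
    pump phase t (illustrative parameters)."""
    J1 = J + d0 * np.cos(t)        # intra-cell hopping
    J2 = J - d0 * np.cos(t)        # inter-cell hopping
    D = D0 * np.sin(t)             # staggered on-site offset
    off = J1 + J2 * np.exp(-1j * k)
    return np.array([[D, off], [np.conj(off), -D]])

def pumped_charge(nk=40, nt=40):
    """Chern number of the lower band over the (k, t) torus."""
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    ts = np.linspace(0, 2 * np.pi, nt, endpoint=False)
    u = np.empty((nk, nt, 2), dtype=complex)
    for i, k in enumerate(ks):
        for j, t in enumerate(ts):
            _, v = np.linalg.eigh(rice_mele_h(k, t))
            u[i, j] = v[:, 0]      # eigh sorts ascending -> lower band
    flux = 0.0
    for i in range(nk):
        for j in range(nt):
            u1, u2 = u[i, j], u[(i + 1) % nk, j]
            u3, u4 = u[(i + 1) % nk, (j + 1) % nt], u[i, (j + 1) % nt]
            # Berry flux of one plaquette from U(1) link variables
            prod = (np.vdot(u1, u2) * np.vdot(u2, u3)
                    * np.vdot(u3, u4) * np.vdot(u4, u1))
            flux += np.angle(prod)
    return round(flux / (2 * np.pi))

print(pumped_charge())   # quantized: magnitude 1 per pump cycle
```

Because the result is a topological invariant, changing the lattice depths or the grid resolution leaves the integer unchanged, mirroring the robustness observed in the experiment.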
The Munich scientists could realize such a situation in their experiments by taking advantage of the repulsive interaction between the atoms which ensures that there is exactly one atom in every double well. Even though such a state is in principle insulating, i.e. the atoms cannot move, they can be transported through the lattice using the modulation described above.
By looking at the atoms with a microscope, the scientists could show for the first time that the motion of the atoms per pump cycle is indeed quantized and occurs in discrete steps due to the tunneling of the atoms.
Furthermore they could show that this motion is independent of the specific implementation of the pump cycle, like for example the depth of the potentials. This is due to the topological nature of the transport which makes it particularly robust against external perturbations. In another series of experiments the behaviour of atoms in excited states in the lattice was studied.
In this case the researchers could observe the remarkable phenomenon that the atoms in certain states moved in the opposite direction as the motion of the lattice. "This behaviour clearly illustrates the quantum mechanical origin of this transport process since something like this would be unthinkable in a classical system" says Michael Lohse, a PhD student who was involved in the Munich experiments.
These measurements demonstrate the importance of topological properties for the behaviour of physical systems in a very clear way and open the route for a variety of further experiments. A pump like this cannot only be used to transport particles, but for example could be modified in such a way that it only transports the so called spin, that is the intrinsic angular momentum of the atoms, while the atoms themselves do not move. Moreover, by extending the pumping scheme to two directions it would be possible to study effects that normally can only occur in four-dimensional systems. [M.L./C.S.]
Michael Lohse, Christian Schweizer, Oded Zilberberg, Monika Aidelsburger and Immanuel Bloch
A Thouless Quantum Pump with Ultracold Bosonic Atoms in an Optical Superlattice
Nature Physics, DOI 10.1038/nphys3584, advance online publication, 14 December 2015
Prof. Dr. Immanuel Bloch
Chair of Quantum Optics, LMU Munich
Schellingstr. 4, 80799 Munich
Director at Max Planck Institute of Quantum Optics
Hans-Kopfermann-Straße 1, 85748 Garching, Germany
Phone: +49 (0)89 / 32 905 -138
Phone: +49 (0)89 / 21 80 -6133
Dr. Olivia Meyer-Streng
Press & Public Relations
Max Planck Institute of Quantum Optics, Garching, Germany
Phone: +49 (0)89 / 32 905 -213
Dr. Olivia Meyer-Streng | Max-Planck-Institut für Quantenoptik
The 1,000-metre wide monster will hurtle terrifyingly close to the planet within days, sparking fears of an unprecedented disaster, the Express reports.
The object called ‘2014-YB35’ is almost the same size as Ben Nevis and will skim the Earth on FRIDAY travelling at more than 23,000 mph.
Small meteorites often pass close by; however, one of this size is a once-in-5,000-years occurrence, according to concerned astronomers.
A collision with Earth would unleash an explosive force equivalent to more than 15,000 million tonnes - 15,000 megatons - of TNT.
The path of the asteroid is shown below in an animated 'trajectory map' released by NASA. It can clearly be seen only narrowly missing the Earth.
Any impact would trigger devastating changes in the climate, earthquakes and tsunamis leading to the eradication of entire communities.
It would eclipse the destruction caused by the 1908 Tunguska Event which saw a 50-metre lump of extraterrestrial rock crash into Siberia.
It flattened an estimated 80 million trees and sent a shock wave across Russia measuring five on the Richter scale.
Experts warn it is only a matter of time before an asteroid capable of “life-altering” damage collides with our planet.
Bill Napier, professor of astronomy at the University of Buckinghamshire, said there is a “very real risk” of a comet or damaging asteroid hitting Earth.
He said: “Smaller scale events like Tunguska are absolutely a real risk, largely they are undiscovered and so we are unprepared.
“With something like YB35, we are looking at a scale of global destruction, something that would pose a risk to the continuation of the planet.
“These events are however very rare, it is the smaller yet still very damaging impacts which are a very real threat.”
Experts warn that if one of these monsters were to hit Earth, plumes of debris would be thrown into the atmosphere, changing the climate and potentially making the planet uninhabitable for all life.
Smaller impacts would be capable of destroying cities and knocking out transport and communication networks.
Professor Napier added: “The real risk is from comets which even if the Earth passes through the tail can generate a massive plume of smoke with hugely significant consequences.
“There is absolutely a real risk and if you look at history, certainly biblical records, there are reports of fires in the heavens.
“Red hot debris resulting from the impact of something a kilometre wide would be capable of incinerating the planet.”
According to NASA’s Near Earth Object Programme the enormous lump of rock will pass within 2.8 million miles - a tiny distance in astronomical terms - of Earth on Friday.
Images from NASA’s Jet Propulsion Laboratory show the asteroid 'on course' with the Earth's own trajectory.
Though its exact size is unknown, it is estimated to be between 500 metres and 1 km wide, with 990 metres the most likely.
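As a rough sanity check on the figures quoted in the article, the impact energy can be estimated from first principles. The size and speed below are the article's own numbers; the rock density is an assumption (a typical stony asteroid), so the result is only an order-of-magnitude sketch.

```python
import math

# Back-of-envelope impact energy for 2014-YB35, using the article's figures.
diameter_m = 990.0             # "most likely" size from the article
speed_ms = 23_000 * 0.44704    # 23,000 mph converted to m/s
density = 3000.0               # kg/m^3, assumed typical stony-asteroid density

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius**3   # spherical body, ~1.5e12 kg
energy_j = 0.5 * mass * speed_ms**2              # kinetic energy in joules

megatons = energy_j / 4.184e15   # 1 megaton TNT = 4.184e15 J
print(f"mass ~ {mass:.2e} kg, energy ~ {megatons:,.0f} Mt TNT")
```

The result comes out around 19,000 megatons, the same order of magnitude as the article's quoted ~15,000 megatons; the difference is well within the uncertainty of the assumed density and size.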
The object was first spotted by the Catalina Sky Survey at the end of last year with astronomers expected to be closely watching its progress this week.
Astronomers have named June 30 as Asteroid Day to highlight the dangers of Potentially Hazardous Asteroids (PHAs) hurtling through space.
Initiative co-founder Grigorij Richters warned there are thousands which have not been identified which could ”destroy life”.
He said: “It just takes one asteroid to completely destroy life, not just humanity, but all species.
“Asteroid Day is all about raising awareness, understanding there’s a threat and dealing with it.”
The Minor Planet Center has classified 2002 FG7 and 2014 YB35 as "Potentially Hazardous Asteroids."
NASA said there are more than 1,500 PHAs in outer space whose orbits pass dangerously close to Earth's.
A spokesman said: “Potentially Hazardous Asteroids (PHAs) are currently defined based on parameters that measure the asteroid's potential to make threatening close approaches to the Earth.
“This ‘potential’ to make close Earth approaches does not mean a PHA will impact the Earth.
“It only means there is a possibility for such a threat.
“By monitoring these PHAs and updating their orbits as new observations become available, we can better predict the close-approach statistics and thus their Earth-impact threat.”
PARIS — Astronomers on Tuesday unveiled a haul of more than 50 planets orbiting other stars, including a “super-Earth” which inhabits a zone where, providing conditions are right, water could exist in liquid form.
It is the biggest single tally in the history of exoplanet hunting since the very first world beyond the Solar System was spotted in 1995, the European Southern Observatory (ESO) said in a press release.
The planets were detected using a light analyser, or spectrograph, on a 3.6-metre (11.7-feet) telescope at La Silla Observatory in the ultra-dry conditions of Chile’s Atacama desert.
The findings were presented at a conference on Extreme Solar Systems in Moran, Wyoming, attended by 350 exoplanet specialists.
Sixteen of the planets have been designated “super-Earths”, a term meaning that they are exceptionally small for exoplanets that have been spotted so far.
A super-Earth is between one and 10 times the mass of the Earth. It does not necessarily mean that the world is rocky — as opposed to gassy — or that the conditions for life exist.
However, one of the 16 new “super-Earths”, called HD 85512 b, which is estimated to be only 3.6 times the mass of the Earth, is orbiting at an intriguingly promising distance from its star.
It is just at the far edge of the so-called Goldilocks zone, where the temperature should be balmy enough for water, the stuff of life as we know it, to exist in liquid form rather than as ice or a gas.
The sighting is not a confirmation that HD 85512 b is a habitable planet, but should be seen as a technical advance for determining if other potential homes-from-home exist, ESO said.
Nearly 600 exoplanets have been detected since 1995, but none has looked remotely like Earth. Many are gas giants that orbit so close to their sun that their atmospheres are scorching.
“In the coming 10 to 20 years, we should have the first list of potentially habitable planets in the Sun’s neighbourhood,” said Michel Mayor, a University of Geneva astronomer who discovered the first-ever exoplanet, and led the ESO team.
“Making such a list is essential before future experiments can search for possible spectroscopic signatures of life in the exoplanet atmospheres.” | <urn:uuid:1d8b71c4-0bff-4c7b-921c-3fb79bc0d06b> | 3.203125 | 520 | News Article | Science & Tech. | 45.411872 | 95,521,539 |
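The "Goldilocks zone" reasoning above can be made quantitative with the standard blackbody equilibrium-temperature formula, T_eq = T_star * sqrt(R_star / 2a) * (1 - A)^(1/4). The stellar parameters below are illustrative round values for a K dwarf like HD 85512, not figures from the article, and the albedo is an assumption.

```python
import math

def equilibrium_temp(t_star, r_star_m, a_m, albedo=0.3):
    """Blackbody equilibrium temperature of a planet (no greenhouse effect)."""
    return t_star * math.sqrt(r_star_m / (2 * a_m)) * (1 - albedo) ** 0.25

R_SUN = 6.957e8   # solar radius, m
AU = 1.496e11     # astronomical unit, m

# Illustrative (assumed) values for a K dwarf like HD 85512:
# effective temperature ~4700 K, radius ~0.7 R_sun, orbit ~0.26 AU.
t_eq = equilibrium_temp(4700, 0.7 * R_SUN, 0.26 * AU)
print(f"T_eq ~ {t_eq:.0f} K")
```

With these assumed inputs the temperature lands in the warm end of the range where liquid water is conceivable, which is consistent with the article's description of the planet sitting at the edge of the habitable zone; greenhouse effects, which the formula ignores, would shift the real surface temperature.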
A deep-drilling project into one of the world's most dangerous earthquake faults is now underway on New Zealand's South Island.
Scientists from around the world have gathered at the drill site near Whataroa, north of Franz Josef glacier, for the rare opportunity to glimpse the inner workings of the Alpine Fault. The island-spanning fault unleashes a great earthquake every two to four centuries, with the average time between temblors about 330 years. The most recent earthquake, in 1717, was an estimated magnitude 8.1.
Through the drilling project, researchers hope to catch warning signs before the Alpine Fault unleashes its next earthquake. The odds of another magnitude-8 earthquake in the next 50 years are a relatively high 28%, say the project's scientists.
"We're a little overdue for a big earthquake there," said Ben van der Pluijm, a geologist at the University of Michigan in Ann Arbor, who is participating in the project.
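A probability like the quoted 28% typically comes from a time-dependent (renewal) forecast, in which the hazard grows as a fault becomes "overdue". The sketch below assumes recurrence intervals are normally distributed with a spread of 100 years; that spread is an assumption, and the real forecast uses a more careful model and data. For contrast, a memoryless Poisson model with a 330-year mean gives only 1 - exp(-50/330), about 14%.

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mean_recurrence = 330.0   # years, from the article
sigma = 100.0             # spread of recurrence intervals (assumed)
elapsed = 2014 - 1717     # years since the last rupture
window = 50.0             # forecast window, years

# P(rupture in the next 50 yr | no rupture in the ~297 yr since 1717)
num = norm_cdf(elapsed + window, mean_recurrence, sigma) \
    - norm_cdf(elapsed, mean_recurrence, sigma)
den = 1 - norm_cdf(elapsed, mean_recurrence, sigma)
p = num / den
print(f"conditional 50-year probability ~ {p:.0%}")
```

With the assumed spread, the conditional probability comes out near 30%, in the same range as the 28% the project scientists quote.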
This is the first time scientists have drilled deep into a ripening fault, before it unleashes a major earthquake. Previous projects have examined the aftermath of earthquakes, such as a rapid-fire drilling expedition following the 2011 Japan earthquake and related tsunami. And the only other deep earthquake borehole, a 2.1-mile-deep (3.4 kilometers) puncture into California's San Andreas Fault, was set in a creeping section that has never unleashed large quakes in historic times.
But even if the next earthquake never strikes while the Alpine Fault sensors monitor the borehole, researchers will still consider the project a success. Rock samples collected during drilling, to be dispersed to 12 countries, will come from the depths at which earthquakes strike. The information will improve models of how earthquakes and faults work, researchers said.
"The drilling aims to go to where the earthquakes are," Van der Pluijm told Live Science. "This will tell us a lot about earthquakes and fault zones."
The Alpine Fault marks the boundary between the Australian and Pacific tectonic plates, where these two massive slabs of Earth's crust slide past each other. On New Zealand's South Island, during the next earthquake, land on either side of the Alpine Fault will probably jump about 26 feet horizontally and 13 feet vertically, if the fault behaves as in past earthquakes. Over millions of years, these upward shoves have lifted the spectacular Southern Alps mountain range.
The $2.5 million Deep Fault Drilling Project plans to sink a 4-inch-wide borehole 4,265 feet deep across the Alpine Fault. Researchers will measure temperature, pressure and geological properties below the surface. Sensors will also track the buildup of pressure between the two plates.
The team plans to punch through the fault itself, located about 3,280 feet below the surface, and into the underlying Australian plate. Remote imaging of the fault suggests the fault angles downward at 45 degrees below the surface near Whataroa.
Two test boreholes were completed in 2011, reaching a shallow 495 feet. The earlier drilling revealed that smashed rock in the shallow fault zone, as finely ground as clay, prevents groundwater from flowing across the fault. Researchers plan to test whether slippery clay also lines the fault's deeper reaches. Clay acts like grease on a fault, making it slip more easily.
"Similar projects overseas have shown that a huge amount of information can be extracted from samples retrieved from the heart of the fault zone," project co-leader Rupert Sutherland of GNS Science said in a statement.
This article was originally published at LiveScience.
The transport and fate of contaminants in subsurface systems has become one of the major research areas in the environmental/hydrological/earth sciences. This interest has been instigated by concerns associated with the effects of human activities on the environment. Numerous examples may be cited that illustrate the adverse impact of human activities on the subsurface environment, such as contamination of soil and groundwater by chemicals associated with industrial and commercial operations, service stations, waste disposal facilities, and agricultural production. An overview of the impact of such activities on the quality of the soil environment in the Australasia-Pacific Region is presented in Chapters 14 to 25.
- Transport and fate of organic contaminants in the subsurface
Mark L. Brusseau
Rai S. Kookana
- Springer Netherlands
This critical review provides an overview of current research on carbon-based nanostructured materials and their composites for use as supercapacitor electrodes. Particular emphasis has been directed towards the basic principles of supercapacitors and the various factors affecting their performance. The focus of the review is a detailed discussion of the performance and stability of carbon-based materials and their composites. Pseudo-active species, such as conducting polymers and metal oxides, exhibit pseudo-capacitive behavior, while carbon-based materials demonstrate electrical double-layer capacitance. Carbon-based materials, such as graphene, carbon nanotubes, and carbon nanofibers, provide a high surface area for the deposition of conducting polymers and metal oxides, which facilitates efficient ion diffusion and contributes to the higher specific capacitance of carbon-based composite materials with excellent cyclic stability. However, further scope for research still exists from the viewpoint of developing high-energy supercapacitor devices in a cost-effective and simple way. This review will be of value to researchers and emerging scientists dealing with or interested in carbon chemistry.
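As context for the specific-capacitance figures such reviews compare, galvanostatic charge-discharge measurements are commonly reduced with C = I * Δt / (m * ΔV). A small sketch with hypothetical electrode numbers (the values below are illustrative, not from the review):

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, delta_v):
    """Galvanostatic estimate of specific capacitance in F/g:
    C = I * dt / (m * dV)."""
    return current_a * discharge_time_s / (mass_g * delta_v)

# Hypothetical electrode: 1 mg of active material discharged at 1 mA
# (i.e. 1 A/g), taking 200 s to traverse a 1 V window.
c = specific_capacitance(current_a=0.001, discharge_time_s=200,
                         mass_g=0.001, delta_v=1.0)
print(f"C ~ {c:.0f} F/g")
```

A value of a couple of hundred F/g is typical of the carbon double-layer electrodes the review discusses; pseudo-capacitive composites are reported higher.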
Hi and thank you for reading this.
I need help in designing a program to work as a calculator,
that is to say the calculator program must be able to handle:
1) decimal integer input (0 to +99) via the keyboard.
2) decimal result output to VDU.
3) results which are negative.
4) non-integer results, which must be displayed to
two decimal places.
This would use the four basic functions:
add/subtract/multiply and division.
I'm new to this, therefore any suggestions/help/advice/direction
would be most appreciated.
Thanks for your time. | <urn:uuid:43f3f5a7-345a-412a-b3c0-29f4147856bc> | 2.59375 | 137 | Comment Section | Software Dev. | 65.861431 | 95,521,618 |
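Since the post does not name a language, here is one minimal sketch in Python of the requested behaviour: integer input restricted to 0-99, the four basic operations, and results (including negative and non-integer ones) printed to two decimal places. The function names and prompts are of course arbitrary; call `main()` to run it interactively.

```python
def calculate(a, b, op):
    """Apply one of the four basic operations to two numbers."""
    if op == '+':
        return a + b
    if op == '-':
        return a - b
    if op == '*':
        return a * b
    if op == '/':
        if b == 0:
            raise ZeroDivisionError("cannot divide by zero")
        return a / b
    raise ValueError(f"unknown operator: {op}")

def read_operand(prompt):
    """Keep asking until the user types a decimal integer in 0..99."""
    while True:
        text = input(prompt)
        if text.isdigit() and 0 <= int(text) <= 99:
            return int(text)
        print("Please enter a whole number from 0 to 99.")

def main():
    x = read_operand("First number (0-99): ")
    op = input("Operation (+ - * /): ").strip()
    y = read_operand("Second number (0-99): ")
    # Results may be negative or non-integer; show two decimal places.
    print(f"Result: {calculate(x, y, op):.2f}")

# Non-interactive demonstration of the formatting requirements:
print(f"{calculate(7, 2, '/'):.2f}")    # non-integer result
print(f"{calculate(3, 10, '-'):.2f}")   # negative result
```

The two demonstration lines show the non-integer and negative cases from the original requirements; the `read_operand` loop enforces the 0 to +99 input range.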
In brief, there are big changes happening on Rosetta's Comet 67P; the landscape is changing and changing and changing... Land features disappear, some spring into existence while still others remain unchanged.
Could this be the alien ship decloaking? A new lifeform hatching?
Probably not. It is most likely weak surface material destabilizing or the crystallization of amorphous ice. It could possibly even be destabilizing 'clathrates' (a lattice of one kind of molecule containing other molecules).
They may not look like it, but each of these photos from Rosetta is of the same site on Comet 67P/ Churyumov-Gerasimenko, within just six short weeks. Something big is happening up there—but what is it?
This particular comet site has been steadily monitored by the ESA since August of 2014, and nothing has been happening. Literally. Viewed in detail of up to 1/10 of a meter, the site had stayed exactly the same. Until late May, when suddenly everything started changing again and again and again.
Some land features disappeared, others were added. Some were temporary, some stayed. What’s happening there and why? Scientists still aren’t sure, but they’ve come up with a few theories:
A simple possibility is that the surface material is very weak, allowing for more rapid erosion, but it is also possible that the crystallisation of amorphous ice or the destabilisation of so-called ‘clathrates’ (a lattice of one kind of molecule containing other molecules) could liberate energy and thus drive the expansion of the features at faster speeds.
A settled explanation for the rapid changes, though—and when or if they will stop—remains to be seen. | <urn:uuid:ea9b3e37-0067-4831-b296-8974dab8d624> | 3.65625 | 368 | Personal Blog | Science & Tech. | 43.675114 | 95,521,634 |
A rat thought extinct for 11 million years and a hot-pink, cyanide-producing dragon millipede are among a thousand new species discovered in the Greater Mekong Region of Southeast Asia in the last decade, according to a new report launched by World Wildlife Fund (WWF).
First Contact in the Greater Mekong reports that 1068 species were discovered or newly identified by science between 1997 and 2007 – which averages two new species a week. This includes the world’s largest huntsman spider, with a foot-long leg span and the Annamite Striped Rabbit, one of several new mammal species found here. New mammal discoveries are a rarity in modern science.
While most species were discovered in the largely unexplored jungles and wetlands, some were first found in the most surprising places. The Laotian rock rat, for example, thought to be extinct 11 million years ago, was first encountered by scientists in a local food market, while the Siamese Peninsula pit viper was found slithering through the rafters of a restaurant in Khao Yai National Park in Thailand.
“This report cements the Greater Mekong’s reputation as a biological treasure trove -- one of the world’s most important storehouses of rare and exotic species,” said Dekila Chungyalpa, Director of the WWF-US Greater Mekong Program. “Scientists keep peeling back the layers and uncovering more and more wildlife wonders.”
The findings, highlighted in this report, include 519 plants, 279 fish, 88 frogs, 88 spiders, 46 lizards, 22 snakes, 15 mammals, 4 birds, 4 turtles, 2 salamanders and a toad. The region comprises the six countries through which the Mekong River flows including Cambodia, Lao PDR, Myanmar, Thailand, Vietnam and the southern Chinese province of Yunnan. It is estimated thousands of new invertebrate species were also discovered during this period, further highlighting the region’s immense biodiversity.
“This region is like what I read about as a child in the stories of Charles Darwin,” said Dr Thomas Ziegler, Curator at the Cologne Zoo. “It is a great feeling being in an unexplored area and to document its biodiversity for the first time… both enigmatic and beautiful,” he said.
The report stresses that economic development and environmental protection must go hand-in-hand to provide for livelihoods and alleviate poverty, but also to ensure the survival of the Greater Mekong's astonishing array of species and natural habitats.
“This poorly understood biodiversity is facing unprecedented pressure….for scientists, this means that almost every field survey yields new diversity, but documenting it is a race against time,” said Raoul Bain, Biodiversity Specialist from the American Museum of Natural History.
The report’s authors recommend a formal, cross-border agreement between the governments of the Greater Mekong to address the threats to biodiversity in the region.
The WWF network is working throughout the Greater Mekong region to promote this agreement and address the threats to biodiversity from its base in Vientiane, Laos. Stuart Chapman, who heads the WWF network’s Greater Mekong Programme, says that protecting habitat while partnering with governments, businesses and local communities to address threats from development and agriculture is essential. “Who knows what else is out there waiting to be discovered, but what is clear is that there is plenty more where this came from,” he said. “The scientific world is only just realizing what people here have known for centuries.”
Lee Poston | EurekAlert!
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
Authors: Vu B Ho
In this work we discuss the possibility of combining the Coulomb potential with the Yukawa potential to form a mixed potential, and then investigate whether this combination can be used to explain why the electron does not radiate when it moves in circular orbits around the nucleus. We show that the mixed Coulomb-Yukawa potential can yield stationary orbits with zero net force; therefore, if the electron moves around the nucleus in these orbits, it will not radiate according to classical electrodynamics. We also show that in these stationary orbits the kinetic energy of the electron is converted into potential energy, so the radiation process of a hydrogen-like atom is not related to the transition of the electron as a classical particle between the energy levels. The radial distribution functions of the wave equation determine the energy density rather than the electron density at a distance r along a given direction from the nucleus. It is shown in the appendix that the mixed potential used in this work can be derived from Einstein’s general theory of relativity by choosing a suitable energy-momentum tensor. Even though such a derivation is not essential in our discussions, it shows a possible connection between general relativity and quantum physics at the quantum level.
Comments: 10 Pages.
[v1] 2017-08-17 01:50:44
Species: Not yet described
This is a yet-unnamed Powelliphanta species, provisionally known as Powelliphanta "Urewera". It is an undescribed species of large, carnivorous land snail, a terrestrial pulmonate gastropod mollusc in the family Rhytididae, and one of the amber snails.
Powelliphanta "Urewera" is classified in the New Zealand Threat Classification System as being Nationally Vulnerable.
- Powell A W B, New Zealand Mollusca, William Collins Publishers Ltd, Auckland, New Zealand 1979 ISBN 0-00-216906-1
- Department of Conservation Recovery Plans
- New Zealand Department of Conservation Threatened Species Classification
Modeling of the runup heights of the Hokkaido-Nansei-Oki tsunami of 12 July 1993
The Hokkaido-Nansei-Oki earthquake (Mw 7.7) of July 12, 1993, is one of the largest tsunamigenic events in the Sea of Japan. The tsunami magnitude Mt is determined to be 8.1 from the maximum amplitudes of the tsunami recorded on tide gauges. This value is larger than Mw by 0.4 units. It is suggested that the tsunami potential of the Nansei-Oki earthquake is large for its Mw. A number of tsunami runup data are accumulated for a total range of about 1000 km along the coast, and the data are averaged to obtain the local mean heights Hn for 23 segments in intervals of about 40 km each. The geographic variation of Hn is approximately explained in terms of the empirical relationship proposed by Abe (1989, 1993). The height prediction from the available earthquake magnitudes ranges from 5.0–8.4 m, which brackets the observed maximum of Hn, 7.7 m, at Okushiri Island.
Key words: tsunami magnitude, runup, tsunami warning
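For readers unfamiliar with the Mt scale, one widely quoted form of Abe's regional tsunami-magnitude formula for Japan is Mt = log10(H) + log10(D) + 5.80, with H the maximum tide-gauge amplitude in metres and D the epicentral distance in kilometres. The sketch below uses a hypothetical gauge reading, and the constants should be treated as illustrative rather than as the exact calibration used in this paper.

```python
import math

def tsunami_magnitude(h_max_m, distance_km):
    """Abe-style regional tsunami magnitude for Japan:
    Mt = log10(H) + log10(D) + 5.80, with H the maximum tide-gauge
    amplitude (m) and D the epicentral distance (km). Constants are
    as commonly quoted; treat them as illustrative here."""
    return math.log10(h_max_m) + math.log10(distance_km) + 5.80

# A hypothetical 2 m gauge amplitude at 100 km gives Mt = 8.1, the
# value the abstract assigns to the 1993 event.
print(f"Mt = {tsunami_magnitude(2.0, 100.0):.1f}")
```

The formula's logarithmic form is why a 0.4-unit gap between Mt and Mw, as reported here, corresponds to tsunami amplitudes several times larger than the seismic magnitude alone would suggest.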
- Abe, K. (1979), Size of Great Earthquakes of 1837–1974 Inferred from Tsunami Data, J. Geophys. Res. 84, 1561–1568.
- Abe, K. (1981), Physical Size of Tsunamigenic Earthquakes of the Northwestern Pacific, Phys. Earth Planet. Inter. 27, 194–205.
- Abe, K. (1985), Quantification of Major Earthquake Tsunamis of the Japan Sea, Phys. Earth Planet. Inter. 38, 214–223.
- Abe, K. (1988), Tsunami Magnitude and the Quantification of Earthquake Tsunamis around Japan, Bull. Earthquake Res. Inst. Tokyo Univ. 63, 289–303 (in Japanese with English abstract).
- Abe, K. (1989), Estimate of Tsunami Heights from Magnitudes of Earthquake and Tsunami, Bull. Earthquake Res. Inst. Tokyo Univ. 64, 51–69 (in Japanese with English abstract).
- Abe, K. (1993), Estimate of Tsunami Heights from Earthquake Magnitudes, Proceedings of the IUGG/IOC International Tsunami Symposium TSUNAMI '93, Wakayama, Japan, Aug. 23–27, pp. 495–507.
- Aida, I. (1978), Reliability of a Tsunami Source Model Derived from Fault Parameters, J. Phys. Earth 26, 57–73.
- Hokkaido Tsunami Survey Team (1993), Tsunami Devastates Japanese Coastal Region, EOS Trans. AGU 74, 417.
- Kajiura, K., Some statistics related to observed tsunami heights along the coast of Japan, in Tsunamis—Their Science and Engineering (eds. Iida, K., and Iwasaki, T.) (Terra Sci. Publ., Tokyo 1983), pp. 131–145.
- Nakanishi, I., Kodaira, S., Kobayashi, R., Kasahara, M., and Kikuchi, M. (1993), The 1993 Japan Sea Earthquake, EOS Trans. AGU 74, 34.
- Shuto, N. (1994), Tsunami Runup Heights for the Hokkaido-Nansei-Oki Earthquake, Tsunami Engineering Tech. Rep., Disaster Control Research Center, Tohoku Univ., 120 pp.
WHILE many Donegal people have headed down under to set up home in recent years, one of the worst aliens to move in the opposite direction has been the New Zealand flatworm.
Earlier this week, a science teacher at Errigal College in Letterkenny, John McClean, unearthed one while preparing the ground for Nobel Prize Winner, Professor William Campbell, to plant a tree at the school.
Having first come across the worm almost twenty years ago this was the first one Mr McClean has seen in Letterkenny.
They first turned up in a couple of gardens in Belfast in 1963, but very quickly spread to Scotland which fits with their distribution in New Zealand, where they are confined to the cooler and damper parts of the South Island.
“It feeds on native earthworms that provide important ecosystem services as well as currying favour with farmers for enhancing the fertility and drainage of agricultural soils. It’s akin to letting a wolf loose into a field of sheep,” Mr McClean said.
The main colour is dark purple-brown, with a paler lower surface. They spend a lot of their time curled into a spiral, but when fully extended they can be up to eight inches long.
However, it’s not all doom and gloom.
“When they run out of earthworms they tend to die off themselves. I suppose it’s the predator-prey balance, and the earthworm is making a steady comeback in other parts of the world the New Zealand flatworm has made its way to,” Mr McClean said.
In the chemistry lab, Mr McClean showed the Nobel Laureate a New Zealand flatworm in a jar he had dug out that morning while preparing the ground for Prof Campbell to plant a tree in memory of his visit.
“There are quite a few in Ramelton and they should really not be here,” Mr McClean said. Both scientists agreed they could possibly have arrived here with seed potatoes.
Prof Campbell was awarded the Nobel Prize for Medicine, together with Japanese scientist Satoshi Omura, for discovering avermectins, which kill the infection-causing parasitic roundworms that cause river blindness.
Our ecosystems and farms would benefit from their removal, and the simple advice, should you meet one of these relative newcomers to Ireland’s shores, is to squash them!
Physiological girdling of pine trees via phloem chilling: proof of concept
Quantifying below-ground carbon (C) allocation is particularly difficult as methods usually disturb the root–mycorrhizal–soil continuum. We reduced C allocation below ground of loblolly pine trees by: (1) physically girdling trees and (2) physiologically girdling pine trees by chilling the phloem. Chilling reduced cambium temperatures by approximately 18 °C. Both methods rapidly reduced soil CO2 efflux, and after approximately 10 days decreased net photosynthesis (Pn), the latter indicating feedback inhibition. Chilling decreased soil-soluble C, indicating that decreased soil CO2 efflux may have been mediated by a decrease in root C exudation that was rapidly respired by microbes. These effects were only observed in late summer/early autumn when above-ground growth was minimal, and not in the spring when above-ground growth was rapid. All of the effects were rapidly reversed when chilling was ceased. In fertilized plots, both chilling and physical girdling methods reduced soil CO2 efflux by approximately 8%. Physical girdling reduced soil CO2 efflux by 26% in non-fertilized plots. This work demonstrates that phloem chilling provides a non-destructive alternative to reducing the movement of recent photosynthate below the point of chilling to estimate C allocation below ground on large trees.
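The 18 °C cambium chill can be put in rough metabolic context with a Q10 temperature coefficient, which summarizes how biological rates scale with temperature. The Q10 value of 2 below is a textbook assumption, not a number from this study, and Q10 ordinarily describes respiration or enzymatic rates rather than phloem transport itself.

```python
def rate_ratio(delta_t_c, q10=2.0):
    """Factor by which a biological rate changes for a temperature
    change of delta_t_c degrees C, under a Q10 temperature coefficient."""
    return q10 ** (delta_t_c / 10.0)

# An 18 C drop, as in the chilling treatment, with an assumed Q10 of 2:
factor = rate_ratio(-18.0)
print(f"rate scaled by ~{factor:.2f}x, i.e. roughly {1 / factor:.1f}-fold slower")
```

Under this assumption, local metabolic rates in the chilled tissue would run several-fold slower, which is consistent with chilling acting as an effective physiological girdle.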
- This article was written and prepared by U.S. Government employees on official time, and is therefore in the public domain.
- Open Access
Atmospheric loss and supply by an impact-induced vapor cloud: Its dependence on atmospheric pressure on a planet
© The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences; TERRAPUB 2010
- Received: 29 July 2009
- Accepted: 8 June 2010
- Published: 31 August 2010
A hypervelocity impact vaporizes the impactor and part of the planetary surface, creating a rock-vapor cloud. Results from previous studies suggest that an energetic impact can blow off part of the planetary atmosphere through expansion of the vapor cloud, causing large-scale atmospheric loss. Impacts have also been considered a source of material. Numerous, repeated impact events during the heavy bombardment period could therefore greatly change the volatile inventory and the atmospheric pressure at the planetary surface, in either direction. To discuss the evolution of the atmospheric pressure under impacts, we carried out calculations with a two-dimensional hydrodynamic code and investigated how the loss and supply of the atmosphere depend on the atmospheric pressure. We integrated both effects of impacts over an impactor size distribution and assessed the evolution of the atmospheric pressure on early Mars. Using this approach, we found that numerous impacts likely increase the atmospheric mass monotonically, or regulate the atmospheric pressure toward some value, rather than causing the monotonic decrease suggested by the previous study.
The numerous craters on the Moon and Mars suggest that the terrestrial planets experienced an intense series of impact events after the main stage of their formation. Dating of lunar rocks indicates a ‘heavy bombardment period’ about 3.8 billion years ago, when the terrestrial planets had already grown to nearly their present size. At that time, impact velocities would have been high enough to fully or partially vaporize the impactor and part of the planetary surface. Such energetic impact events could affect the volatile budget on the planet.
Cameron (1983) suggested that impacts could cause the loss of planetary atmospheres: hypervelocity impact events create a cloud of rock vapor that can blow off a fraction of the planetary atmosphere through its energetic expansion. The process is referred to as ‘impact erosion’. Melosh and Vickery (1989) analytically estimated the condition for large-scale loss of the atmosphere and integrated the effects of atmospheric loss by impacts over an impactor size distribution. Their results suggested that the atmospheric pressure on Mars would have decreased significantly through the heavy bombardment period. The atmospheric mass eroded from the planet has since been studied and revised by several groups with more sophisticated approaches: an analytical model in Vickery and Melosh (1990) and hydrodynamic calculations of vapor expansion in Newman et al. (1999). Recently, Shuvalov and Artemieva (2002) performed full simulations of asteroid and comet impacts on the present-day Earth. In these three papers (Vickery and Melosh, 1990; Newman et al., 1999; Shuvalov and Artemieva, 2002), however, the calculations were carried out only under the current atmospheric pressure of the Earth, namely 1 [bar] of air. This is not sufficient to discuss the atmospheric pressure change through the bombardment, because the atmospheric pressure may change greatly (by up to two orders of magnitude) through the heavy bombardment period on Mars, as discussed by Melosh and Vickery (1989). The change in atmospheric pressure could, in turn, affect the loss efficiency of the atmosphere by impacts.
Impacts also contribute to the supply of atmospheric volatiles. A fraction of the volatiles in asteroids and comets does not escape and is retained on the planet. There may also be volatiles buried in the planet that are liberated by the impact but do not escape. The competition between atmospheric loss by vapor expansion and volatile supply by retention of the vapor cloud affects the volatile budget. The mass lost from the planet and its dependence on atmospheric pressure, however, have not been fully investigated by hydrodynamic calculation. Melosh and Vickery (1989) did not account for the supply of volatiles, while Chyba (1990) and Zahnle (1993) assumed the rock vapor to be totally retained on the planet in their discussions of the volatile budget on Earth and Mars, respectively. To discuss the influence of impacts on the volatile budget, it is necessary to consider both atmospheric blow-off and vapor retention, and their dependence on atmospheric pressure.
In this paper, we develop a two-dimensional (2-D) hydrodynamic code and carry out calculations of vapor expansion over a wide range of atmospheric pressures to investigate the pressure dependence of the behavior of the atmosphere and vapor cloud. Our hydrocode is equivalent to that of Newman et al. (1999) and is rather primitive compared with the full simulations of Shuvalov and Artemieva (2002). However, it enables us to carry out calculations over a wider parameter range because of its lower computational load. Our simple model setting also helps us to understand the physics that controls the induced flow and the masses of the eroded atmosphere and retained vapor cloud. We integrated the impact-induced loss and supply of the atmosphere over a plausible impactor mass distribution for Mars, taking the derived atmospheric pressure dependence into consideration. Our results indicate that the integrated effect of numerous impact events is to monotonically increase the atmospheric pressure on Mars during the heavy bombardment period, or to regulate it toward some value, instead of the monotonic decrease suggested by Melosh and Vickery (1989).
2. Numerical Method
2.1 Model setting
2.2 Formulation and computational model
We used a staggered grid system with a finite difference scheme. The initial vapor cloud is resolved by 10 × 10 regular zones. Outside this region, the spatial grid interval increases as a geometric series with a ratio of successive terms of 1.05. The entire computational region is resolved into 180 × 180 rectangular zones and may become as large as about 1 × 10^8 times the initial vapor size. When the ratio of successive terms is 1.02 instead, the computed masses of atmospheric and vapor loss increase by about 4%. We applied solid boundary conditions at the planetary surface and the z-axis, and non-reflective boundary conditions at the edges of the atmosphere. The time interval is determined at each time step from the Courant-Friedrichs-Lewy (CFL) condition, with the CFL number set to 0.2.
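The grid stretching and CFL time-step rule described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the zone counts, stretching ratio, and CFL number come from the text, while `dx0` and the signal speed are placeholder values.

```python
import numpy as np

def build_radial_grid(n_uniform=10, n_total=180, dx0=1.0, ratio=1.05):
    """Zone edges: n_uniform regular zones over the initial vapor cloud,
    then widths growing as a geometric series with the given ratio."""
    widths = np.empty(n_total)
    widths[:n_uniform] = dx0
    widths[n_uniform:] = dx0 * ratio ** np.arange(1, n_total - n_uniform + 1)
    return np.concatenate(([0.0], np.cumsum(widths)))

def cfl_time_step(dx_min, max_signal_speed, cfl_number=0.2):
    """Courant-Friedrichs-Lewy condition: dt <= C * dx_min / v_max."""
    return cfl_number * dx_min / max_signal_speed

edges = build_radial_grid()
dt = cfl_time_step(np.min(np.diff(edges)), max_signal_speed=5.0)
```

Because the widths grow geometrically, the outer zones are far coarser than the inner ones, which is what lets a 180-zone axis span a domain many orders of magnitude larger than the vapor cloud.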
3. Impact Erosion and Supply
3.1 Definition of the “blown-off mass” for the atmosphere and vapor cloud
In Fig. 1, our result is plotted together with the numerical result reported in Newman et al. (1999). Figure 1 shows that their computation time was not long enough for the escaping mass to level off. It appears that they calculated the flow only at a very early stage, during which the shock wave plays the dominant role in atmospheric acceleration. At such an early stage, a large portion of the vapor cloud, which carries most of the momentum, remains near the planetary surface. Our calculation also shows that the escaping vapor cloud accounts for a large fraction of the total escaping mass. Although Newman et al. (1999) did not distinguish escaping vapor from escaping atmosphere, we infer that their total escaping mass mainly represents the escaping mass not of the atmosphere but of the vapor cloud.
3.2 Atmospheric-pressure dependence of the vapor-cloud loss
The vapor mass retained on the planet is obtained by subtracting the lost mass from the total vapor mass. The results indicate that, in the limit of minimum retention, the retained vapor mass is almost proportional to the total vapor mass but independent of the ambient pressure, so that the supply of volatiles is more effective for larger vapor masses.
3.3 Atmospheric-pressure dependence of the mass of atmospheric loss
3.4 The shape of atmospheric escape region
The shape of the escape region reflects the dynamic flow of the atmosphere. The atmosphere is strongly compressed by the passage of the preceding shock wave, so that a high-pressure shell of shocked atmosphere forms ahead of the vapor expansion front. Since the atmospheric mass in the shell is large for a massive atmosphere, the vapor cloud must transfer more momentum to expand. The vapor cloud therefore loses its momentum at a very early stage, while the flow is still almost radially outward from the impact point. This process results in a spherical high-pressure shell. The shape of the escape region is then simply determined by a critical zenith angle θ measured from the vertical: only within this angle can the vapor cloud push away the atmosphere in an azimuthal bin, resulting in the cone-shaped escape region (Fig. 4(a)).
In contrast, the bowl- or truncated-cone-shaped escape region results from deviation of the vapor flow from the radial direction. As the atmospheric pressure decreases, the vapor cloud can expand further against the ambient atmosphere. In a stratified atmosphere the shock-wave velocity depends on zenith angle: the smaller the zenith angle, the higher the velocity. The velocity differences between zenith angles change the shape of the high-pressure shell from a closed spherical shape to an open bowl or truncated cone, and the atmosphere and vapor cloud within the shell flow along the shell wall. As a result, the deflection of the radial vapor flow induces an arc-like flow in the atmosphere at large zenith angles (near the surface), which results in the truncated-cone-shaped escape region (Fig. 4(b)).
The shape of the escape region allows us to understand the pressure dependence of the escaping atmospheric mass. Although the atmospheric density is proportional to the atmospheric pressure, the blown-off atmospheric mass varies as pressure to the ∼0.3 power at small atmospheric pressures, as shown in Fig. 3 and the previous section. At such pressures, the escape region is truncated-cone-shaped, and its bottom radius determines the mass of atmospheric loss. The vapor cloud can expand further against a thinner atmosphere, so the bottom radius of the escape region spreads out as the pressure drops. This compensates for the increase in atmospheric density with atmospheric pressure and weakens the sensitivity of the blown-off mass to the pressure.
3.5 The optimum loss pressure and an empirical formula of the eroded atmospheric mass
3.6 Comparison with the hemispheric blow-off model
We also applied the hemispheric blow-off model of Vickery and Melosh (1990) to compare its pressure dependence with ours (Figs. 2 and 3). The pressure dependence is in remarkable agreement with our hydrodynamic calculations at low atmospheric pressures. At sufficiently large pressures, however, both the loss of the atmosphere and of the vapor cloud rapidly decrease and shut off in the hemispheric blow-off model, whereas our results show more gradual decreases. The difference results from the different criteria for escape from the planet. The model of Vickery and Melosh (1990) requires the radial velocity averaged over an azimuthal sector to exceed the planetary escape velocity for the gases to blow off. In reality, however, there is a radial velocity distribution: the front of the expanding vapor cloud is faster than its innermost portion. A criterion using the sector-averaged velocity therefore underestimates the atmospheric and vapor loss by impacts.
The mass of atmospheric loss estimated by Vickery and Melosh (1990) is smaller than that of our hydrodynamic calculations by some factor, while the pressure dependence agrees well between the models. For the parameter set used in Figs. 2 and 3, the factor is almost 0.5. Although the value of the factor varies with the parameter set, it stays below unity over the ranges of parameters considered.
3.7 The implication to the atmospheric pressure change on early Mars
The atmospheric pressure dependence derived from our calculations can be summarized as follows. The eroded mass increases with a power-law dependence on the atmospheric pressure, while the retained fraction of the vapor cloud is nearly constant, provided the impact satisfies condition (16), i.e., the impact is energetic enough relative to the ambient atmospheric mass. If the impact is less energetic, the eroded masses of both the atmosphere and the vapor cloud rapidly decrease with atmospheric pressure and finally become zero.
The mass is given by the empirical formula (18) for the atmospheric loss, and by interpolation of the calculation results for the vapor cloud retention. In this paper, we treated the planetary surface as plane-parallel. The planetary surface and atmosphere, however, have curvature, which becomes more important for larger impactors. The assumption of a plane-parallel atmosphere leads to an overestimation of the mass of atmospheric loss, because the plane-parallel model assumes an infinite atmospheric mass above the surface. To avoid this overestimation, we capped the atmospheric mass eroded from the planet at the atmospheric mass above the plane tangent to the impact point.
Although most of the retained vapor would condense during expansion and be removed from the atmosphere, some volatile elements would remain in the gas phase, increasing the atmospheric mass. We treated this supply efficiency as a free parameter, w, which denotes the mass fraction of gas supplied to the atmosphere after cooling of the vapor cloud. Although the parameter w is not well constrained at present, it may be inferred from the mass fraction of carbon dioxide supplied by the impactor. The carbon mass fraction is 0.03–0.05 for CI chondrites (Kerridge, 1985) and 0.001–0.002 for ordinary chondrites (Jarosewich, 1990). These values correspond to carbon dioxide mass fractions of 0.11–0.18 and 0.004–0.007, respectively, if it is simply assumed that all the carbon becomes carbon dioxide.
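The carbon-to-CO2 conversion above is simply a ratio of molar masses (44/12 ≈ 3.67), assuming every carbon atom is oxidized to CO2. A quick check for the CI-chondrite values quoted in the text:

```python
# Molar masses in g/mol; assumes all carbon ends up as CO2.
M_C, M_CO2 = 12.011, 44.009

def co2_mass_fraction(carbon_mass_fraction):
    return carbon_mass_fraction * M_CO2 / M_C

# CI chondrites: carbon mass fraction 0.03-0.05 (Kerridge, 1985)
lo, hi = co2_mass_fraction(0.03), co2_mass_fraction(0.05)
# lo ~ 0.11, hi ~ 0.18, matching the CO2 range quoted for CI chondrites
```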
A positive value of Δa in Fig. 5 denotes an increase in the atmospheric mass, and a negative value a decrease. At the atmospheric pressure where Δa equals 0, loss and supply are in balance, so the atmospheric pressure maintains its value. The results in Fig. 5 suggest two possible ways in which the atmospheric pressure evolves under impacts: one is a monotonic increase, and the other is regulation toward a certain pressure.
The monotonic increase occurs at relatively large w and/or very large atmospheric pressure. At a large w value the curve has no intersection with the x-axis, as in the case with w = 0.03 in Fig. 5(b). In this case, the volatiles supplied by impacts overwhelm the atmospheric loss, and the impacts increase the atmospheric mass on the planet regardless of its atmospheric pressure. At very large atmospheric pressure, the curve has an intersection at which Δa changes from negative to positive with increasing atmospheric pressure, although such a large atmospheric pressure is beyond the range of Fig. 5. The atmospheric mass would then monotonically increase if the atmospheric pressure is larger than the value at that intersection.
The regulation of pressure occurs when a curve in Fig. 5 intersects the x-axis at a point where the sign of Δa changes from positive to negative with increasing atmospheric pressure (for example, the curves with w = 0.01−0.1 in Fig. 5(a) and w = 0.001 and 0.01 in Fig. 5(b)). We refer to the atmospheric pressure at such an intersection as pctl. Consider the behavior of the atmospheric pressure under impacts around pctl. If the atmospheric pressure is less than pctl, Δa is positive, so the atmospheric pressure increases toward pctl under impacts. A similar argument holds for the opposite case. This indicates that impacts act to drive the atmospheric pressure toward pctl, not unilaterally away from it.
This behavior is a natural result of the difference in pressure dependence between the atmospheric loss and the retained vapor cloud. The eroded atmospheric mass shows a power-law dependence (Eq. (18)) on the atmospheric pressure and takes its maximum at the optimum atmospheric pressure. The mass of the retained vapor cloud, on the other hand, is almost independent of the atmospheric pressure as long as the impact is energetic enough to satisfy condition (16). As a result, any decrease in the atmospheric pressure caused by impacts is limited.
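The regulation toward pctl can be illustrated with a toy fixed-point iteration: take a constant per-impact supply and a power-law erosion term; wherever Δa changes sign from positive to negative, repeated impacts drive the pressure to that point. The functional form and numbers below are purely illustrative and are not the paper's Eq. (18).

```python
# Toy net atmospheric change per impact: constant supply minus
# power-law erosion (exponent ~0.3, as in the text). Units arbitrary.
def delta_a(p, supply=1.0, erosion=1.0, exponent=0.3):
    return supply - erosion * p ** exponent

p = 0.01                    # start well below the fixed point
for _ in range(200):
    p += 0.1 * delta_a(p)   # each "impact epoch" nudges the pressure

# Fixed point: supply = erosion * p**0.3, so p converges toward 1.0 here
```

Starting above the fixed point gives the mirror-image behavior: Δa is negative there, so the pressure decreases toward the same value.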
It should be noted that whether the final atmospheric pressure after the heavy bombardment reaches pctl depends on the total impactor mass through the heavy bombardment. The value of pctl, and whether it exists at all, also depend on the value of w and the vapor energy (the impact velocity). We treated the maximum impactor mass mimax as an independent variable. Assuming that the number of the largest impacts follows a Poisson distribution with an expected value of 1, the value of mimax could vary from 0.651 × 10^18 [kg] to 2.16 × 10^18 [kg] within 1σ, and the value of pctl would then change by a factor of about 2. This change is mainly caused by the change in retained vapor mass, because the mass of the retained vapor cloud is approximately proportional to mimax, while the mass of atmospheric loss changes comparatively little, especially at smaller atmospheric pressures.
Zahnle et al. (1992) also considered the competition between impact erosion and supply of atmospheres for Titan, Ganymede and Callisto. They found two regimes of atmospheric evolution: the erosive regime and the accumulative regime. The erosive regime appears to correspond to the curves with intersections in our Fig. 5. These authors also suggested that equilibrium between impact erosion and supply would occur under certain conditions in the erosive regime. Their equilibrium pressure is qualitatively equivalent to pctl in our paper. Their equilibrium pressure, however, decreases to zero with time, probably because the maximum impactor mass is assumed to decrease to zero with time in their model. The atmospheric evolution in their paper is therefore more erosive than that suggested by our results. Even in their accumulative regime, an initially thick atmosphere loses mass to impacts. Such an erosive evolution arises not only because they used quite different parameters from ours (the target planets, the composition and velocity of the impactor, the maximum impactor mass, etc.), but also because they adopted the tangent-plane atmospheric erosion model of Melosh and Vickery (1989), which assumes the mass of atmospheric loss to be proportional to the atmospheric pressure. Hence, in some cases, a thicker atmosphere tends to lose more mass and to have a more erosive history.
In order to discuss the time evolution and final pressure of the atmosphere, random impacts should be treated with Monte Carlo calculations, as Griffith and Zahnle (1995) did, because the order in which the impacts fall onto the planet may be one of the factors influencing the final atmospheric pressure. For example, if the largest impactor falls at a relatively late stage, it would supply an enormous amount of volatiles that subsequent impacts cannot remove, and the final atmospheric pressure would be greater than pctl. It would therefore be necessary to discuss the distribution of final atmospheric pressures.
Our results suggest that the terrestrial planets could maintain a thin atmosphere in which impact erosion is balanced against impact supply. In particular, if the initial atmosphere is thin, the planet would acquire some volatiles from the impacts and the atmospheric pressure would approach pctl. For example, Fig. 5 shows that even under erosive conditions, with a supply efficiency w of 0.01 and an impact velocity of about 19 [km/s], Mars could acquire 0.03 [bar] of atmosphere given a sufficient amount of veneer. If this happened, the composition of the atmosphere should reflect the aftermath, resembling the volatile composition of the impactors. If Mars had lost its atmosphere by impacts, its atmospheric composition would likely have been modified by the volatile component of the impactors, unless they were very dry. Xe isotopic abundances in the Martian atmosphere, however, do not coincide with those of any asteroids, as Zahnle (1993) indicated. What this means is still unknown. It may suggest that Mars did not experience as intense a bombardment as researchers have thought, or that the veneer composition was quite different from the existing asteroids and comets, or consisted of very dry, volatile-poor planetesimals.
4.1 The effect of the wake and the initial vapor velocity
Impactor penetration accelerates the atmosphere along the entry path and creates a hot wake in which the atmospheric density is lower by a few orders of magnitude than in the ambient atmosphere. The recent full simulations by Shuvalov and Artemieva (2002) suggest that the wake plays an important role in preventing the ambient atmosphere from escaping, because the vapor cloud can expand preferentially through the wake. One of their conclusions is that the difference in the shape of the induced wake causes more atmospheric loss for an oblique impact than for a normal impact. They did not, however, examine the vapor mass retained on the planet or the atmospheric pressure dependence. We therefore performed additional experiments on the effects of the wake and of the initial velocity distribution within the vapor cloud. We compared the mass of atmospheric loss for the specific cases for which they performed numerical simulations.
The most remarkable feature is that the direction of the initial vapor velocity can greatly change the mass of atmospheric loss. An isotropic initial velocity produces the same results as condition (a), in which all of the vapor-cloud energy is given as thermal energy, while an upward initial velocity results in less atmospheric loss at low atmospheric pressures and more loss near its onset at large pressures. Figure 7 also shows the same power-law dependence on atmospheric pressure regardless of the initial conditions for non-dimensional atmospheric pressures σ^−1 of less than 10^−6.
These results mean that one of the key parameters in the impact-erosion problem is the azimuthal distribution of the initial vapor energy, rather than its form (thermal or kinetic). The direction of the initial velocity would differ between normal and oblique impacts. The suggestion by Shuvalov and Artemieva (2002) that oblique impacts cause more atmospheric loss than normal impacts may therefore be due to the direction of the initial velocity, rather than the shape of the wake. Further investigation is needed for a better understanding of the effects of oblique impacts.
The comparison shows that our model with the above impact-vapor relations (Eqs. (19), (20) and (27)) predicts a greater atmospheric mass loss than Shuvalov and Artemieva (2002), by a factor of 150–300 for asteroids and 1.2–30 for comets. The differences are larger for lower-velocity or asteroidal impacts than for cometary impacts, i.e., for impacts that create a smaller vapor cloud. Since our preliminary experiments suggest that the existence of the wake is not an influential factor, the quantitative discrepancy is primarily attributable to the impact-vapor relation that we used. The impact-vapor relation assumed in this paper (Eqs. (19), (20) and (27)) may overestimate the mass and/or energy of the vapor cloud. Alternatively, the vapor volume may be underestimated in Shuvalov and Artemieva (2002), because Pierazzo et al. (1997) reported that the equation of state they used, ANEOS, underestimates the vapor volume.
It should be noted that, even if our impact-vapor relation overestimates the mass and energy of the vapor cloud, the scenario in which the atmosphere decreases monotonically by impacts becomes even less plausible, because we would then be overestimating the effect of impact erosion. The atmospheric-pressure dependence we derived would still hold, and the discussion of the pressure-control mechanism involving pctl remains qualitatively valid, unless the ambient atmosphere affects the vaporization of the impactor and target surface during an impact.
4.2 Condensation of a vapor cloud
5. Concluding Remarks
Hypervelocity impacts can greatly affect the volatile budget at a planetary surface: loss occurs through explosive vapor expansion, and supply through vapor retention. Numerous impact events during the heavy bombardment period would therefore have had great consequences for atmospheric evolution, especially for the atmospheric pressure on the planets. In this paper, we performed hydrodynamic calculations of the expansion stage of the impact-induced vapor cloud and examined the atmospheric pressure dependence of the masses that the vapor cloud blows off and retains, respectively. The effect of the atmospheric pressure depends on whether the impact is sufficiently energetic relative to the ambient atmospheric mass. If it is energetic enough, the mass of the blown-off atmosphere shows a power-law dependence, and the retained vapor mass is nearly independent of the atmospheric pressure. We also discussed the atmospheric pressure change by impacts on early Mars, taking the derived atmospheric pressure dependences into consideration. We found that numerous impact events could change the atmospheric pressure in two possible ways: a monotonic increase, or regulation toward a certain atmospheric pressure. Which way the atmospheric pressure evolves, however, depends strongly on the vapor energy and mass and on the supply efficiency of volatiles in the vapor cloud. It is likely that impacts in the early stage of the terrestrial planets contributed to the supply of volatile material rather than to its loss, though quantitative discussion of the evolution of the atmospheric pressure requires more detailed full calculations, including a proper EOS and the composition and content of volatile materials in impactors.
The constructive comments by K. Zahnle and an anonymous reviewer greatly helped us to improve this paper. We also appreciate the support of H. St. C. O’Neill to continue this study. This research was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for JSPS Fellows.
- Cameron, A. G. W., Origin of the atmospheres of the terrestrial planets, Icarus, 56, 195–201, 1983.
- Chyba, C. F., Impact delivery and erosion of planetary oceans in the early inner solar-system, Nature, 343, 129–133, 1990.
- Davis, S. S., An analytical model for a transient vapor plume on the Moon, Icarus, 202, 383–392, 2009.
- Griffith, C. A. and K. Zahnle, Influx of cometary volatiles to planetary moons: The atmospheres of 1000 possible Titans, J. Geophys. Res., 100, 16907–16922, 1995.
- Jarosewich, E., Chemical analyses of meteorites—a compilation of stony and iron meteorite analyses, Meteoritics, 25, 323–337, 1990.
- Kerridge, J. F., Carbon, hydrogen and nitrogen in carbonaceous chondrites—Abundances and isotopic compositions in bulk samples, Geochim. Cosmochim. Acta, 49, 1707–1714, 1985.
- Melosh, H. J. and A. M. Vickery, Impact erosion of the primordial atmosphere of Mars, Nature, 338, 487–489, 1989.
- Newman, W. I., E. M. D. Symbalisty, T. L. Ahrens, and E. M. Jones, Impact erosion of planetary atmospheres: Some surprising results, Icarus, 138, 224–240, 1999.
- Ohkubo, T., M. Kuwata, B. Luk’yanchuk, and T. Yabe, Numerical analysis of nanocluster formation within ns-laser ablation plume, Appl. Phys. A: Mater. Sci. Process., 77, 271–275, 2003.
- Pierazzo, E., A. M. Vickery, and H. J. Melosh, A reevaluation of impact melt production, Icarus, 127, 408–423, 1997.
- Shuvalov, V. V. and N. A. Artemieva, Atmospheric erosion and radiation impulse induced by impacts, Geol. Soc. Am. Spec. Pap., 356, 695–703, 2002.
- Takewaki, H. and T. Yabe, The cubic-interpolated pseudo particle (CIP) method: application to nonlinear and multi-dimensional hyperbolic equations, J. Comput. Phys., 70, 355–372, 1987.
- Vickery, A. M. and H. J. Melosh, Atmospheric erosion and impactor retention in large impacts, with application to mass extinctions, Geol. Soc. Am. Spec. Pap., 247, 289–300, 1990.
- Yabe, T. and T. Aoki, A universal solver for hyperbolic equations by cubic-polynomial interpolation I. One-dimensional solver, Comput. Phys. Commun., 66, 219–232, 1991.
- Yabe, T., T. Ishikawa, P. Y. Wang, T. Aoki, Y. Kadota, and F. Ikeda, A universal solver for hyperbolic equations by cubic-polynomial interpolation II. Two- and three-dimensional solvers, Comput. Phys. Commun., 66, 233–242, 1991.
- Yabe, T., F. Xiao, D. Zhang, S. Sasaki, Y. Abe, N. Kobayashi, and T. Terasawa, Effect of EOS on break-up of Shoemaker-Levy 9 entering Jovian atmosphere, J. Geomag. Geoelectr., 46, 657–662, 1994.
- Yabe, T., F. Xiao, and H. Mochizuki, Simulation technique for dynamic evaporation processes, Nuclear Eng. Design, 155, 45–53, 1995.
- Zahnle, K. J., Xenological constraints on the impact erosion of the early Martian atmosphere, J. Geophys. Res., 98, 10899–10913, 1993.
- Zahnle, K., J. B. Pollack, and D. Grinspoon, Impact-generated atmospheres over Titan, Ganymede, and Callisto, Icarus, 95, 1–23, 1992.
Phytoplankton photosynthetic response to light in an internal tide dominated environment
- Marcel Fréchette, Louis Legendre
- Estuaries
- Coastal and Estuarine Research Federation in 1982
The influence of internal tides on phytoplankton photosynthetic response to light was studied in the Lower St. Lawrence Estuary. Photosynthesis at saturating light intensity responded to variations in the vertical density gradient, which were linked to the internal tides. The photosynthetic response was lag-correlated to the vertical water stratification. This suggests that the link between photosynthesis and the internal tides may have resulted from phytoplankton light adaptation.
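The lag-correlation analysis mentioned in the abstract amounts to correlating one series against time-shifted copies of the other and locating the lag of maximum correlation. Below is a generic sketch with synthetic data, not the study's observations; all variable names are illustrative.

```python
import numpy as np

def lag_correlation(x, y, max_lag):
    """r[k] = Pearson correlation between x and y, with y delayed by
    k samples (k > 0 means y lags behind x)."""
    r = {}
    for k in range(max_lag + 1):
        a, b = x[:len(x) - k], y[k:]
        r[k] = np.corrcoef(a, b)[0, 1]
    return r

# Synthetic check: y is x delayed by 3 samples
t = np.arange(200)
x = np.sin(2 * np.pi * t / 25)
y = np.roll(x, 3)              # y[n] = x[n-3]
r = lag_correlation(x, y, max_lag=6)
best_lag = max(r, key=r.get)   # -> 3
```

In the study's setting, x would play the role of the stratification index and y the photosynthetic parameter; a nonzero best lag is what suggests delayed light adaptation.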
Effect of water velocity on Undaria pinnatifida and Saccharina japonica growth in a novel tank system designed for macroalgae cultivation
Kelps are economically valuable primary producers; therefore, many studies on breeding have attempted to increase kelp productivity and quality. However, most cultivation tests have been performed in the ocean, thereby limiting the development of new cultivars. To reduce the breeding duration period and confirm cultivar phenotypes, we developed a novel tank culture system, referred to as a circulation and floating culture system (CFCS), for cultivating macroalgae. In the CFCS, kelp can be cultivated under controlled environmental conditions. Water velocity in the CFCS can be regulated by changing the angle of a seawater inlet spout without changing the volume of seawater in the tank. Undaria pinnatifida and Saccharina japonica cultivated in the CFCS exhibited morphological features very similar to those of plants grown naturally in the ocean. The result suggests that the facility is useful for identifying water motion conditions suitable for increasing the production of any macroalgae species. Using this facility, both species were grown from juvenile sporophytes (20 mm) to maturity; for U. pinnatifida, the subsequent generation was successfully cultivated. Improved growth of U. pinnatifida was achieved in fast flows compared with slow flows, whereas S. japonica developed a wider shape and heavier biomass in slow flows compared with fast flows. We discuss the application and implication of the CFCS for breeding research and the physiological ecology of macroalgae.
Keywords: Tank culture system; Kelp; Water velocity; Morphological response; Breeding
We sincerely thank Mr. Inoguchi, the former president and director of the Iwate Fisheries Technology Center and Mr. Takahashi, the former director of the Iwate Fisheries Technology Center, for their guidance and suggestions. We also thank Dr. Nanba and Ms. Shinozuka for their kind advice and cooperation in the measurement of nutrients. We thank Mr. Hagiwara of Riken Food Co., Ltd. for his technical support of the tank culture system. This study was funded by the Formation of Tohoku Marine Science Center Project (Technical Development That Leads to the Creation of New Industries) of the Ministry of Education, Culture, Sports, Science and Technology of Japan.
For the first time, Würzburg scientists have successfully bound multiple carbon monoxide molecules to the main group element boron. They report on their work in the latest issue of the scientific journal Nature.
Scientists of Professor Holger Braunschweig's team of the Institute of Inorganic Chemistry at the University of Würzburg have successfully bound two carbon monoxide molecules (CO) to the main group element boron in a direct synthesis for the first time. The result is a borylene-dicarbonyl complex.
Such complexes, or coordination complexes, are generally made up of one or more central molecules and one or more ligands. The central molecules are usually atoms of transition metals.
"Binding one CO molecule to a main group element is already extraordinary. Binding two molecules to one non-metal atom is even more so," says chemist Rian Dewhurst. Dewhurst, who works on Professor Holger Braunschweig's team, submitted the article together with several co-authors. It is the first work from the institute to have been accepted by the journal Nature.
"In the future, borylene-dicarbonyls could be used to mimic the properties of transition metal carbonyl complexes," Dewhurst continues. Transition metals have specific electronic properties. These elements, from groups four to twelve of the periodic table, can bind multiple carbon monoxide molecules relatively easily.
Advantages of boron compounds
Generally, boron compounds are important for various industrial applications. They are used, for example, in catalytic processes, in various molecular and solid materials or in the production of pharmaceutical drugs. A catalyst accelerates a desired chemical reaction without being consumed in the process.
Boron has the advantage of being readily available and comparably low-priced. It occurs naturally mostly in mineral form and is mined in borate mines in California and Turkey, for example. Moreover, the element is non-toxic for humans and other mammals. "Combined with its unique electronic properties, this makes boron very interesting for industrial and other commercial uses," Dewhurst explains.
Boron is a highly reactive element. With three electrons in its outer shell, boron strives to form bonds that give it eight outer electrons, a configuration the noble gases neon, argon and xenon already have in their ground state.
Lone electron pair at the central molecule
The borylene-dicarbonyl complex also has eight electrons involved in the bonds to the boron atom: two electrons each form the bonds to the two CO molecules, two more bind one hydrocarbyl, and the researchers were able to establish one lone electron pair, amounting to eight electrons in total. "It is the lone electron pair that makes the complex special. The hydrocarbyl assures stability. It shields the structure, in a manner of speaking," says Marco Nutz, a doctoral candidate. He adds: "Most compounds that can be isolated in this way are unstable outside a protective atmosphere." The Würzburg discovery, however, remains stable for several days even in a "normal" environment exposed to air and moisture.
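The electron tally described above can be written out explicitly (a bookkeeping sketch based only on the counts given in the text):

```latex
\[
\underbrace{2}_{\text{lone pair on B}}
\;+\;
\underbrace{2}_{\text{B--C bond to the hydrocarbyl}}
\;+\;
\underbrace{2 \times 2}_{\text{two B--CO bonds}}
\;=\; 8 \ \text{electrons around boron}
\]
```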
Dewhurst and Nutz are conducting basic research. "In a next step, we are going to further investigate the compound we have presented. We are pursuing different angles here," Dewhurst says. One focus will be to compare the properties of conventional transition metal carbonyl complexes with those of the borylene-carbonyl complex in detail.
In recent years, the attention of natural science has progressively focused on boron. According to Dewhurst, the increasing significance of boron is also reflected in the growing interest in the element on the part of organic chemistry and in the fact that material science, too, is closely following the advances made in boron complex research.
"Multiple Complexation of CO and Related Ligands to a Main Group Element" by Holger Braunschweig, Rian D. Dewhurst, Florian Hupp, Marco Nutz, Krzysztof Radacki, Christopher W. Tate, Alfredo Vargas, Qing Ye. Nature vol 522, issue 7556 pp.327-330, DOI 10.1038/nature14489
Prof. Holger Braunschweig, Institute of Inorganic Chemistry at the University of Würzburg
Phone: +49 931 31-88104, e-mail: email@example.com
http://www.presse.uni-wuerzburg.de University's press office
Marco Bosch | Julius-Maximilians-Universität Würzburg
|Scientific Name:||Plethodon vehiculum (Cooper, 1860)|
|Taxonomic Source(s):||Frost, D.R. 2014. Amphibian Species of the World: an Online Reference. Version 6 (27 January 2014). New York, USA. Available at: http://research.amnh.org/herpetology/amphibia/index.html. (Accessed: 27 January 2014).|
|Red List Category & Criteria:||Least Concern ver 3.1|
|Assessor(s):||IUCN SSC Amphibian Specialist Group|
|Contributor(s):||Hammerson, G.A. & Garcia Moreno, J.|
|Facilitator/Compiler(s):||Garcia Moreno, J. & Hobin, L.|
Listed as Least Concern in view of the large extent of occurrence, large number of sub-populations and localities, and large population size.
|Previously published Red List assessments:|
|Range Description:||This species occurs in western North America from southwestern British Columbia, including Vancouver Island, south through western Washington to southwestern Oregon (Petranka 1998). It occurs at elevations from sea level to about 1,250 m asl (Stebbins 2003).|
Native:Canada (British Columbia); United States (Oregon, Washington)
|Range Map:||Click here to open the map viewer and explore range.|
|Population:||It is one of the most commonly encountered terrestrial salamanders throughout its range (Nussbaum et al. 1983), with a stable population.|
|Current Population Trend:||Stable|
|Habitat and Ecology:||It can be found in humid coniferous forests, damp talus slopes and shaded ravines. It is often encountered under rocks, logs, leaf-litter, and other forest debris. On Vancouver Island, small individuals were found under small rocks and away from discrete cover objects in leaf-litter and under moss more frequently than were larger individuals (Ovaska and Gregory 1989). It lays eggs on land in moist retreats, where they develop directly without a larval stage.|
|Use and Trade:||
There are no records of this species being utilized.
|Major Threat(s):||There are no major known threats to this species. Logging is not considered to be a major threat because this species maintains thriving populations in young forests.|
No conservation measures are needed for this species. It occurs in many protected areas.
|Citation:||IUCN SSC Amphibian Specialist Group. 2015. Plethodon vehiculum. The IUCN Red List of Threatened Species 2015: e.T59358A78905923.Downloaded on 20 July 2018.|
|Feedback:||If you see any errors or have any questions or suggestions on what is shown on this page, please provide us with feedback so that we can correct or extend the information provided| | <urn:uuid:e227a8a6-a882-4fd8-ba9b-5cf7bf3aa1c2> | 2.875 | 637 | Knowledge Article | Science & Tech. | 49.663014 | 95,521,815 |
UGR researcher Raef Minwer-Barakat has attempted to answer this question through his doctoral thesis "Rodents and insectivores of the Upper Turolian and the Pliocene of the central section of the Guadix basin", supervised by doctors Elvira Martín and César Viseras of the Department of Stratigraphy and Palaeontology of the University of Granada.
His study concluded with the discovery of three new species of rodents and insectivores (Micromys caesaris, Blarinoides aliciae and Archaeodesmana elvirae) and the finding, for the first time in the region, of nine more species.
Minwer-Barakat chose the Guadix basin for his study because of the excellent conditions of the area and its abundance of small-mammal fossils. Although there were previous studies of rodents in this region, no one had carried out such a complete study of the fauna of fossil insectivores.
This research work is based on the analysis of the dental structures of rodents, key elements in palaeontology for differentiating between species because they fossilize very easily.
The research work, backed by the UGR, has also determined that Myocricetodon jaegeri, a species of gerbil documented at the end of the Miocene, came from Africa, a theory which had not previously been confirmed.
Antonio Marín Ruiz | alfa
It is easier to copy something than to develop something new - a principle that was long believed to also apply to the evolution of genes. According to this, evolution copies existing genes and then adapts the copies to new tasks.
However, scientists from the Max Planck Institute for Evolutionary Biology in Plön have now revealed that new genes often form from scratch. Their analyses of genes from mice, humans and fish have shown that new genes are shorter than old ones and simpler in structure. These and other differences between young and old genes indicate that completely new genes can also form from previously unread regions of the genome. Moreover, the new genes often use existing regulatory elements from other genes before they create their own.

When scientists decoded the first genes, they made a surprising discovery: similar variants of many genes are found even in very different organisms. This finding can be explained by the fact that evolution uses existing genes and adapts them to varying degrees for new tasks. The copying of genes plays an important role here. Copies are made of a gene and incorporated into the genome. Evolution can then experiment with these copies, while the original can continue to fulfil its function in its unaltered form. Completely new genes are very rare events in this model.
Contact: Prof. Dr. Diethard Tautz
BMC Genomics 2013, 14:117 doi:10.1186/1471-2164-14-117
Prof. Dr. Diethard Tautz | Max-Planck-Institute
Diplocardia longa is a species of earthworm native to North America. It was first described by the American zoologist John Percy Moore in 1904. The type locality is Hawkinsville, Georgia. This worm has bioluminescent properties; its body fluids and the sticky slime it exudes when stimulated emit a bluish glow.
Diplocardia longa can grow to a length of about 275 mm (11 in) when moderately extended and a diameter of 5 millimetres (0.20 in) at segment 7 and 4 millimetres (0.16 in) behind the clitellum. The number of segments varies between about 270 and 330. It is slender and cylindrical, slightly tapering at both ends. At the posterior end it swells slightly into a club-shape before narrowing to the anal opening. The two ends of the worm are brown, the clitellum reddish-brown and the rest of the body is a rather dull salmon pink. The skin is translucent and the veins can be seen distinctly in the less-pigmented regions.
This worm produces bioluminescent mucus, its family Acanthodrilidae being one of three families of oligochaetes that exhibit bioluminescence. The light is produced in the coelomic fluid and is emitted when a luciferin (N-isovaleryl-3-amino-propanal) is acted on by a peroxidase-like luciferase. This is a copper-containing protein which reacts with the hydrogen peroxide that is produced by another enzyme in the presence of oxygen. Strong stimulation of the worm causes fluid to be exuded from the worm's mouth, dorsal pores and anus. Light is emitted by certain large cells in this fluid when they rupture, and the worm is visible as a dark silhouette against the luminous slime. It has been suggested that when the worm is attacked by a predator such as the eastern mole, it can writhe and squirm and produce the slime, perhaps causing the mole to retreat from the bright light while the worm escapes.
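Schematically, the light-emitting step described above can be summarized as follows; "oxidized luciferin" is used here only as a generic label for the product, which the text does not name:

```latex
\[
\underbrace{\text{N-isovaleryl-3-amino-propanal}}_{\text{luciferin}}
\;+\; \text{H}_2\text{O}_2
\;\xrightarrow{\ \text{Cu-containing luciferase}\ }\;
\text{oxidized luciferin} \;+\; h\nu
\]
```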
When marine ecologists released the Ocean Health Index (OHI) for the first time in 2012, it was a majestically ambitious achievement. The index, born of a collaboration among dozens of scientists, economists and environmental managers at the National Center for Ecological Analysis and Synthesis (NCEAS) of the University of California, Santa Barbara, and the nonprofit organization Conservation International, was designed as a comprehensive framework for scientifically evaluating the health of ocean ecosystems, both worldwide and regionally. Drawing on more than a hundred databases, the index pulled together local measurements of biodiversity and ecological productivity with information about fishing, industrial use, carbon storage, tourism and other factors to score the health of open ocean and coastal regions between 0 and 100. (The global ocean earned a score of 60 in that first year, with regional ratings between 36 and 86.) The authors of the index hoped that such a standardized basis of comparison between and within regions would help with identifying and communicating the most effective measures for protecting the oceans and with guiding policymakers toward better decisions.
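As a purely schematic illustration of the kind of aggregation such an index performs (this is not the actual OHI methodology, which also involves reference points, pressures, resilience and trends; the goal names, values and equal weights here are invented for the example):

```python
# Hypothetical goal scores for one region, each already on a 0-100 scale.
goal_scores = {
    "biodiversity": 83.0,
    "food_provision": 52.0,
    "carbon_storage": 74.0,
    "tourism": 61.0,
}
weights = {goal: 1.0 for goal in goal_scores}  # equal weighting: an assumption

def index_score(scores, weights):
    """Weighted mean of goal scores -> a single 0-100 index value."""
    total_weight = sum(weights[g] for g in scores)
    return sum(scores[g] * weights[g] for g in scores) / total_weight

region_index = round(index_score(goal_scores, weights))
print(region_index)  # 68 for these illustrative inputs
```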
The Scientific Merit of Amateur Astrophotography
Astrophotography is only a small fraction of the broad spectrum of amateur astronomers’ work. With the development of high-sensitivity emulsions, amateur telescopes of large focal ratios and good quality, cheap multi-purpose cameras (both 35- and 70-mm), and intensive public relations work done by observatories as well as by individual, advanced amateurs, the methods and techniques of astrophotography have become accessible to all amateurs. With suitable equipment, experience, and familiarity with astronomical problems (e.g. by close contact with professional astronomers), they are even able to make valuable contributions to science.
Keywords: Extended Object; Plasma Cloud; Scientific Merit; Zodiacal Light; Amateur Astronomer
Radiative performance assessment of two roofs in Mediterranean and Equatorial climates
Document type: Conference report
Publisher: United Nations Development Programme
Rights access: Restricted access (publisher's policy)
In regions where the roof surface is the most exposed to solar radiation, reducing the heat flux transmitted through this element has a great impact on the cooling demand of buildings. Studying the possibilities for reducing cooling loads can strongly influence the environmental and carbon footprint of the building stock worldwide. Although the main strategy for preventing solar gains would be to implement shading systems, another approach is to take advantage of the radiative cooling effect of the roof itself. Its efficiency depends on the roof properties (color, mass, thermal transmittance, etc.) and on climate conditions (radiation, wind, humidity, etc.). This paper presents a comparative assessment of the radiative performance of two roofs exposed to different amounts of solar radiation depending on their percentage of cloudiness, and the repercussions on their surface temperatures. Its aim is to evaluate the effect of radiative cooling under different local climates. Temperatures of two similar roofs, in Terrassa 41°33´ N (Spain) and in Santa Rosa 3°27´ S (Ecuador), were measured. Their transmittance, optical and thermal mass properties were considered in the calculations. The results obtained indicate that the effect of using the sky as a heat sink has a higher impact on the roof located in Terrassa (Spain) than on the one in Santa Rosa (Ecuador). The results support that this behavior responds to the high presence of cloudiness in equatorial climates, which significantly reduces heat losses by long-wave radiation. The heat storage capacity of the roof mass could hinder the radiative cooling effect even more.
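The cloudiness argument in the abstract can be sketched with a simplified longwave balance. This is a textbook-style Stefan-Boltzmann estimate, not the paper's model; the clear-sky emissivity of 0.7 and the linear cloud correction are illustrative assumptions:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_longwave_loss(t_surface_c, t_air_c, cloud_fraction, emissivity=0.9):
    """Rough net longwave flux from a roof to the sky (W/m^2; positive = heat loss)."""
    ts = t_surface_c + 273.15
    ta = t_air_c + 273.15
    eps_sky = 0.7 + 0.3 * cloud_fraction  # cloud cover pushes sky emissivity toward 1
    emitted = emissivity * SIGMA * ts**4             # radiated by the roof surface
    absorbed = emissivity * eps_sky * SIGMA * ta**4  # incoming sky radiation absorbed
    return emitted - absorbed

clear_sky = net_longwave_loss(30.0, 25.0, cloud_fraction=0.1)  # Mediterranean-like sky
overcast = net_longwave_loss(30.0, 25.0, cloud_fraction=0.9)   # equatorial-like sky
print(clear_sky > overcast)  # more cloud -> weaker radiative cooling
```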
Citation: Torres-Quezada, J., Pages-Ramon, A., Coch, H., Isalgue, A. "Radiative performance assessment of two roofs in Mediterranean and Equatorial climates". In: International Conference on Urban Physics. "FICUP 2016: First International Conference on Urban Physics: proceedings". Quito: United Nations Development Programme, 2016, p. 337-348.
about this item
Marine ecosystems are ecosystems found in the oceans and seas. This book on marine ecosystems examines new research trends in this field. The marine ecosystem is the largest ecosystem on the planet and can be sub-classified into rocky shores, submarine canyons, cold seeps, etc. Research into the composition of ecosystems and their processes plays a key role in conservation and in upholding biodiversity on Earth. With state-of-the-art inputs by acclaimed experts in this field, this book targets students and professionals. For someone with an interest and eye for detail, this book covers the most significant topics in the field of marine ecosystems.
Number of Pages: 267
Publisher: Ingram Pub Services
Street Date: February 19, 2018
Item Number (DPCI): 248-75-8958 | <urn:uuid:856b6626-9415-45fa-a499-53846368a060> | 3.03125 | 167 | Product Page | Science & Tech. | 41.582273 | 95,521,890 |
MOLLUSCA : NUDIBRANCHIA : Tergipedidae | SNAILS, SLUGS, ETC.
Description: A tiny aeolid with single cerata arranged alternately along the sides of the body. The body is translucent white with red markings on the sides of the head and just behind the rhinophore bases. The digestive gland is grey in colour, and shows through the back as a branching vessel. The cerata are swollen in shape, and have large white cnidosacs at the tips. The oral tentacles are short but the rhinophores are long and tapering. Only 6-8mm in length when fully grown.
Habitat: Feeds on several species of Obelia but especially Obelia geniculata (normally found on kelp fronds). The spawn consists of small white capsules.
Distribution: Widespread and common throughout the British Isles, but frequently overlooked because of its small size.
Similar Species: Eubranchus exiguus is often found with Tergipes tergipes, and is equally small. It may easily be distinguished with a hand lens or low-power microscope by the more numerous cerata and dark rings on the rhinophores.
Key Identification Features:
Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record : World Register of Marine Species.
Picton, B.E. & Morrow, C.C. (2016). Tergipes tergipes (Forsskål, 1775). [In] Encyclopedia of Marine Life of Britain and Ireland.
http://www.habitas.org.uk/marinelife/species.asp?item=W14860 Accessed on 2018-07-18
Copyright © National Museums of Northern Ireland, 2002-2015
This NASA video from today in the USA says about itself:
The Curiosity rover has discovered ancient organic molecules on Mars, embedded within sedimentary rocks that are billions of years old. News Release: here.
Ancient organic molecules found on Mars
Curiosity rover also reports data on the red planet’s mysterious methane plumes
by Mark Peplow
Wherever life flourishes, it leaves a calling card written in organic molecules—and researchers have spent decades hoping to uncover these telltale signatures on Mars.
NASA’s Mars rover Curiosity has now given those hopes a considerable boost after finding organic deposits trapped in exposed rocks that were formed roughly 3.5 billion years ago (Science 2018). The rover’s discovery at Gale Crater shows that organic molecules were present when that part of the red planet hosted a potentially habitable lake. It also proves that these traces can survive through the ages, ready to be discovered by robot explorers.
“We started this search 40 years ago, and now we finally have a set of organic molecules that tells us this stuff is preserved near the surface,” says Jennifer L. Eigenbrode of NASA’s Goddard Space Flight Center, who led the study.
Curiosity gathered mudstone samples and gradually heated them to 860 ºC, using gas chromatography/mass spectrometry to study the gases produced. It identified a smorgasbord of molecules, including thiophene, methylthiophenes, and methanethiol, which are probably fragments from larger organic macromolecules in the sediment. These organic deposits may be something like kerogen, the fossilized organic matter found in sedimentary rocks on Earth that contains a jumble of waxy hydrocarbons and polycyclic aromatic hydrocarbons.
The organic compounds that were originally transformed into martian kerogen could have come from three possible sources—geological activity, meteorites, or living organisms—but Curiosity’s data offer no insight on that question. “The most plausible source of these organics is from outside the planet,” says Inge Loes ten Kate, an astrobiologist at Utrecht University, who was not involved in the research. She notes that roughly 100 to 300 metric tons of organic molecules arrive on Mars every year, hitching a ride on interplanetary dust particles. “Three billion years ago, it was much more hectic in the solar system”, ten Kate says, so there would have been much larger deliveries of organics via interplanetary travelers.
Curiosity had previously detected chlorocarbons in martian soil, which were probably generated by reactions with the abundant perchlorate found on the planet’s surface. In contrast, the mudstone samples have delivered “what we expect of natural organic matter,” Eigenbrode says.
Meanwhile, the rover’s infrared spectrometer has been tackling the long-standing puzzle of martian methane (Science 2018). Orbiting Mars probes, along with telescopes on Earth, have previously seen occasional plumes of methane in the planet’s atmosphere, raising speculation that the gas could have come from geological activity or even methane-producing organisms.
Curiosity has taken methane measurements over 55 Earth months, spanning three martian years, which now reveal that the atmospheric concentration of the gas varies seasonally between 0.24 and 0.65 parts per billion by volume. “This is the first time that Mars methane has shown any repeatability”, says Christopher R. Webster at NASA’s Jet Propulsion Laboratory, who led the work. “It always seemed kind of random before.”
The rover also saw brief spikes in methane concentration to about 7 ppbv, which is consistent with previous remote observations of plumes, says Michael J. Mumma of NASA’s Goddard Space Flight Center, who has been chasing martian methane for more than 15 years but was not involved in Curiosity’s latest findings. “The ground-based detection is very important because it confirms the methane is there,” he says.
The methane’s source is still an open question. But Webster’s team says that the seasonal cycle rules out one of the leading suggestions: that organic molecules, delivered to the surface by meteorites and space dust, were broken down by ultraviolet light to produce the gas.
Instead, the cyclical nature of the data suggests that methane could be stored deep underground in icy crystals called clathrates and slowly escape to the planet’s topsoils. Laboratory experiments suggest that the soil could temporarily hang on to the gas, releasing more of it in the warmer martian summer to produce the seasonal cycles.
Mars’s newest satellite, the European Space Agency’s Trace Gas Orbiter (TGO), could help confirm that idea. It began to survey the whole planet for methane in April. “We’re all waiting with bated breath to see what they find,” Webster says. TGO should also measure the carbon isotope ratios in the methane it detects, which may provide hints at a biological or geological origin. And in 2021, ESA expects to land a rover on Mars that could drill up to 2 meters below the surface, where there might be better-preserved organics compared with the ones collected at Gale Crater, Eigenbrode says.
These lines of evidence could eventually help resolve questions about our own origins. Mars and Earth were once quite similar places, ten Kate says, yet life apparently failed to gain a foothold on the red planet. “Was there really no life on Mars, or did it just not survive?” she says. The answer could shed light on the crucial conditions needed to nurture the first life-forms on our own world.
See also here.
Opportunity rover waits out a huge dust storm on Mars. The 14-year-old craft has weathered storms before, but none this big, by Lisa Grossman, 5:56pm, June 11, 2018. | <urn:uuid:02ea719d-c6e1-4190-91b1-5d7ae2fc0026> | 3.765625 | 1,251 | News Article | Science & Tech. | 36.211711 | 95,521,911 |
A concept designed to overcome the problems encountered when using photodissociation for the generation of hydrogen is discussed. The problems limiting the efficiency of photodissociation of water are the separation of the photolysis products and the high-energy photons necessary for the reaction. It is shown that the dissociation energy of a large number of molecules is catalytically reduced when these molecules are in intimate contact with the surface of certain metals. It is proposed to develop a surface which will take advantage of this catalytic shift in dissociation energies to reduce the photon energy required to produce hydrogen. This same catalytic surface can be used to separate the reaction products if it is made so that one of the dissociation products is soluble in the metal and the others are not. This condition is met by many metal systems, such as platinum-group metals, which have been used commercially to separate hydrogen from other gases and liquids.
SN 1987A was a peculiar type II supernova in the Large Magellanic Cloud, a dwarf galaxy satellite of the Milky Way. It occurred approximately 51.4 kiloparsecs (168,000 ly) from Earth and was the closest observed supernova since SN 1604, which was seen from Earth over four centuries earlier.
1987A's light reached Earth on February 23, 1987, and as the first supernova discovered that year, was labeled "1987A". Its brightness peaked in May, with an apparent magnitude of about 3. It was the first opportunity for modern astronomers to study the development of a supernova in great detail, and its observations have provided much insight into core-collapse supernovae.
SN 1987A provided the first chance to confirm by direct observation the radioactive source of the energy for visible light emissions, by detecting predicted gamma-ray line radiation from two of its abundant radioactive nuclei. This proved the radioactive nature of the long-duration post-explosion glow of supernovae.
SN 1987A was discovered independently by Ian Shelton and Oscar Duhalde at the Las Campanas Observatory in Chile on February 24, 1987, and within the same 24 hours by Albert Jones in New Zealand. On March 4–12, 1987, it was observed from space by Astron, the largest ultraviolet space telescope of that time.
Four days after the event was recorded, the progenitor star was tentatively identified as Sanduleak −69 202 (Sk −69 202), a blue supergiant. After the supernova faded, that identification was definitively confirmed by the disappearance of Sk −69 202. This was an unexpected identification, because models of high-mass stellar evolution at the time did not predict that blue supergiants were susceptible to a supernova event.
Some models of the progenitor attributed the color to its chemical composition rather than its evolutionary state, particularly the low levels of heavy elements, among other factors. There was some speculation that the star might have merged with a companion star before the supernova. However, it is now widely understood that blue supergiants are natural progenitors of some supernovae, although there is still speculation that the evolution of such stars could require mass loss involving a binary companion.
Approximately two to three hours before the visible light from SN 1987A reached Earth, a burst of neutrinos was observed at three neutrino observatories. This is likely because neutrino emission occurs simultaneously with core collapse, whereas visible light is emitted only after the shock wave reaches the stellar surface. At 07:35 UT, Kamiokande II detected 12 antineutrinos; IMB, 8 antineutrinos; and Baksan, 5 antineutrinos; in a burst lasting less than 13 seconds. Approximately three hours earlier, the Mont Blanc liquid scintillator detected a five-neutrino burst, but this is generally not believed to be associated with SN 1987A.
The Kamiokande II detection, which at 12 neutrinos had the largest sample population, showed the neutrinos arriving in two distinct pulses. The first pulse started at 07:35:35 and comprised 9 neutrinos, all of which arrived over a period of 1.915 seconds. A second pulse of three neutrinos arrived between 9.219 and 12.439 seconds after the first neutrino was detected, for a pulse duration of 3.220 seconds.
Although only 25 neutrinos were detected during the event, it was a significant increase from the previously observed background level. This was the first time neutrinos known to be emitted from a supernova had been observed directly, which marked the beginning of neutrino astronomy. The observations were consistent with theoretical supernova models in which 99% of the energy of the collapse is radiated away in the form of neutrinos. The observations are also consistent with the models' estimates of a total neutrino count of 10⁵⁸ with a total energy of 10⁴⁶ joules.
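As an illustrative consistency check (the unit conversion below is mine, not from the article), dividing the models' total energy by the estimated neutrino count gives a mean energy of a few MeV per neutrino, typical of core-collapse emission:

```python
# Mean energy per neutrino implied by the model estimates quoted above.
TOTAL_ENERGY_J = 1e46          # total energy radiated as neutrinos
N_NEUTRINOS = 1e58             # estimated total neutrino count
J_PER_MEV = 1.602176634e-13    # joules per MeV (CODATA value)

mean_energy_mev = TOTAL_ENERGY_J / N_NEUTRINOS / J_PER_MEV
print(f"mean neutrino energy ~ {mean_energy_mev:.1f} MeV")  # ~ 6.2 MeV
```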
The neutrino measurements allowed upper bounds on neutrino mass and charge, as well as the number of flavors of neutrinos and other properties. For example, the data show that within 5% confidence, the rest mass of the electron neutrino is at most 16 eV/c², 1/30,000 the mass of an electron. The data suggest that the total number of neutrino flavors is at most 8, but other observations and experiments give tighter estimates. Many of these results have since been confirmed or tightened by other neutrino experiments, such as more careful analysis of solar neutrinos and atmospheric neutrinos, as well as experiments with artificial neutrino sources.
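The mass bound follows from time of flight: a massive neutrino lags light, and the lag must not smear the burst beyond its observed duration of roughly 13 seconds. A rough sketch, using the standard relativistic delay formula; the 10 MeV energy is an assumed representative value, not a quoted one:

```python
# Delay of a relativistic neutrino behind light over distance L:
# dt ~ (L/c) * (m c^2 / E)^2 / 2, valid for E >> m c^2.
SECONDS_PER_YEAR = 3.156e7
L_OVER_C = 168_000 * SECONDS_PER_YEAR   # light travel time from SN 1987A, s

def neutrino_lag(mass_ev, energy_mev):
    """Arrival-time delay (s) for given rest mass (eV) and energy (MeV)."""
    ratio = mass_ev / (energy_mev * 1e6)    # (m c^2)/E, both in eV
    return 0.5 * L_OVER_C * ratio ** 2

# A 16 eV neutrino at 10 MeV lags by only a few seconds, comparable to the
# observed burst duration -- heavier masses would have smeared the burst.
print(f"lag ~ {neutrino_lag(16, 10):.1f} s")
```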
Missing neutron star
SN 1987A appears to be a core-collapse supernova, which should result in a neutron star given the size of the original star. The neutrino data indicate that a compact object did form at the star's core. However, since the supernova first became visible, astronomers have been searching for the collapsed core but have not detected it. The Hubble Space Telescope has taken images of the supernova regularly since August 1990, but, so far, the images have shown no evidence of a neutron star. A number of possibilities for the 'missing' neutron star are being considered, although none are clearly favored. The first is that the neutron star is enshrouded in dense dust clouds so that it cannot be seen. Another is that a pulsar was formed, but with either an unusually large or small magnetic field. It is also possible that large amounts of material fell back on the neutron star, so that it further collapsed into a black hole. Neutron stars and black holes often give off light when material falls onto them. If there is a compact object in the supernova remnant, but no material to fall onto it, it would be very dim and could therefore avoid detection. Other scenarios have also been considered, such as if the collapsed core became a quark star.
Much of the light curve, or graph of luminosity as a function of time, after the explosion of a type II supernova such as SN 1987A derives its energy from radioactive decay. Although the luminous emission consists of optical photons, it is the absorbed radioactive power that keeps the remnant hot enough to radiate light. Without radioactive heat it would quickly dim. The radioactive decay of 56Ni through its daughter 56Co to stable 56Fe produces gamma-ray photons that are absorbed and dominate the heating, and thus the luminosity, of the ejecta at intermediate times (several weeks) to late times (several months). Energy for the peak of the light curve of SN 1987A was provided by the decay of 56Ni to 56Co (half-life 6 days), while the later light curve in particular fitted very closely the 77.3-day half-life of 56Co decaying to 56Fe. Later measurements by space gamma-ray telescopes of the small fraction of the 56Co and 57Co gamma rays that escaped the SN 1987A remnant without absorption confirmed earlier predictions that those two radioactive nuclei were the power source.
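The two-stage decay chain described above can be sketched numerically (a minimal model; only the half-lives come from the text, and the absolute normalization is arbitrary):

```python
import math

# 56Ni -> 56Co -> 56Fe decay chain powering the light curve.
T_HALF_NI = 6.0     # days, 56Ni -> 56Co
T_HALF_CO = 77.3    # days, 56Co -> 56Fe
LAM_NI = math.log(2) / T_HALF_NI
LAM_CO = math.log(2) / T_HALF_CO

def co56_activity(t_days, n_ni0=1.0):
    """56Co activity at time t (Bateman solution, starting from pure 56Ni)."""
    n_co = n_ni0 * LAM_NI / (LAM_CO - LAM_NI) * (
        math.exp(-LAM_NI * t_days) - math.exp(-LAM_CO * t_days)
    )
    return LAM_CO * n_co

# Once the 56Ni is gone, the deposited power halves every 77.3 days,
# matching the observed late-time decline of the light curve.
ratio = co56_activity(300.0) / co56_activity(300.0 - T_HALF_CO)
print(f"late-time decline per 77.3 d: {ratio:.2f}")   # ~ 0.50
```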
Because the 56Co in SN 1987A has now completely decayed, it no longer supports the luminosity of the SN 1987A ejecta. That is currently powered by the radioactive decay of 44Ti, with a half-life of about 60 years. With this change, X-rays produced by the interaction of the ejecta with the ring began to contribute significantly to the total light curve. This was noticed by the Hubble Space Telescope as a steady increase in luminosity 10,000 days after the event in the blue and red spectral bands. X-ray lines of 44Ti observed by the INTEGRAL space telescope showed that the total mass of radioactive 44Ti synthesized during the explosion was (3.1 ± 0.8)×10⁻⁴ M☉.
Observations of the radioactive power from their decays in the 1987A light curve have measured accurate total masses of the 56Ni, 57Ni, and 44Ti created in the explosion. These agree with the masses measured by gamma-ray line space telescopes and provide nucleosynthesis constraints on the computed supernova model.
Interaction with circumstellar material
The three bright rings around SN 1987A that became visible after a few months in Hubble Space Telescope images are material from the stellar wind of the progenitor. These rings were ionized by the ultraviolet flash from the supernova explosion, and consequently began emitting in various emission lines. The rings did not "turn on" until several months after the supernova; the turn-on process can be studied very accurately through spectroscopy. The rings are large enough that their angular size can be measured accurately: the inner ring is 0.808 arcseconds in radius. The light-travel time needed to illuminate the inner ring gives its physical radius as 0.66 light-years. Using this as the base of a right triangle, with the angular size seen from Earth as the local angle, one can use basic trigonometry to calculate the distance to SN 1987A, which is about 168,000 light-years. The material from the explosion is catching up with the material expelled during the star's red and blue supergiant phases and heating it, so we observe ring structures about the star.
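The trigonometric distance estimate can be reproduced directly from the two numbers above (small-angle geometry; this is a restatement of the calculation, not the published analysis):

```python
import math

# Inner-ring geometry: physical radius from light-echo timing, angular
# radius from imaging; the ratio gives the distance.
THETA_ARCSEC = 0.808    # angular radius of the inner ring
RADIUS_LY = 0.66        # physical radius in light-years

theta_rad = math.radians(THETA_ARCSEC / 3600.0)
distance_ly = RADIUS_LY / math.tan(theta_rad)   # tan(x) ~ x for tiny angles
print(f"distance ~ {distance_ly:,.0f} light-years")   # ~ 168,000 ly
```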
Around 2001, the expanding (>7000 km/s) supernova ejecta collided with the inner ring. This heated the ring and generated X-rays; the X-ray flux from the ring increased by a factor of three between 2001 and 2009. A part of the X-ray radiation, which is absorbed by the dense ejecta close to the center, is responsible for a comparable increase in the optical flux from the supernova remnant in 2001–2009. This increase in the brightness of the remnant reversed the trend observed before 2001, when the optical flux was decreasing due to the decay of the 44Ti isotope.
A study reported in June 2015, using images from the Hubble Space Telescope and the Very Large Telescope taken between 1994 and 2014, shows that the emissions from the clumps of matter making up the rings are fading as the clumps are destroyed by the shock wave. It is predicted the ring will fade away between 2020 and 2030. As the shock wave passes the circumstellar ring it will trace the history of mass loss of the supernova's progenitor and provide useful information for discriminating among various models for the progenitor of SN 1987A.
Condensation of warm dust in the ejecta
Soon after the SN 1987A outburst, three major groups embarked on photometric monitoring of the supernova: SAAO, CTIO, and ESO. In particular, the ESO team reported an infrared excess that became apparent less than one month after the explosion (March 11, 1987). Three possible interpretations were discussed in this work: the infrared-echo hypothesis was discarded, and thermal emission from dust that could have condensed in the ejecta was favoured (in which case the estimated temperature at that epoch was ~1250 K, and the dust mass was approximately 6.6×10⁻⁷ M☉). The possibility that the IR excess could be produced by optically thick free-free emission seemed unlikely, because the luminosity in UV photons needed to keep the envelope ionized was much larger than what was available, but it was not ruled out in view of the eventuality of electron scattering, which had not been considered.
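As a quick plausibility check (my own, under the assumption that the condensed dust radiates roughly as a blackbody), Wien's displacement law places the peak emission of ~1250 K dust in the near-infrared, consistent with an infrared excess:

```python
# Wien's displacement law: peak wavelength of blackbody emission.
WIEN_B_UM_K = 2898.0    # Wien constant, micron * kelvin
T_DUST_K = 1250.0       # dust temperature estimated at that epoch

peak_wavelength_um = WIEN_B_UM_K / T_DUST_K
print(f"peak emission near {peak_wavelength_um:.1f} micron")   # ~ 2.3 micron
```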
However, none of these three groups had sufficiently convincing evidence to claim a dusty ejecta on the basis of the IR excess alone.
An independent Australian team advanced several arguments in favour of an echo interpretation. This seemingly straightforward interpretation of the nature of the IR emission was challenged by the ESO group and definitively ruled out after optical evidence was presented for the presence of dust in the SN ejecta.
To discriminate between the two interpretations, they considered the implications of the presence of an echoing dust cloud for the optical light curve and for the existence of diffuse optical emission around the SN. They concluded that the expected optical echo from the cloud should be resolvable and could be very bright, with an integrated visual brightness of magnitude 10.3 around day 650. However, further optical observations, as expressed in the SN light curve, showed no inflection in the light curve at the predicted level. Finally, the ESO team presented a convincing clumpy model for dust condensation in the ejecta.
Although it had been thought for more than 50 years that dust could form in the ejecta of a core-collapse supernova, which in particular could explain the origin of the dust seen in distant galaxies, this was the first time such condensation was observed. If SN 1987A is a typical representative of its class, then the derived mass of the warm dust formed in the debris of core-collapse supernovae is not sufficient to account for all the dust observed in the early universe. However, a much larger reservoir of ~0.25 solar masses of colder dust (at ~26 K) in the ejecta of SN 1987A was found with the Herschel infrared space telescope in 2011 and confirmed by ALMA later on (in 2014).
Following the confirmation of a large amount of cold dust in the ejecta, ALMA has continued observing SN 1987A. Synchrotron radiation due to shock interaction in the equatorial ring has been measured. Cold (20–100 K) carbon monoxide (CO) and silicon monoxide (SiO) molecules were observed. The data show that the CO and SiO distributions are clumpy, and that different nucleosynthesis products (C, O, and Si) are located in different places within the ejecta, indicating the footprints of the stellar interior at the time of the explosion.
The ALMA observations also revealed the presence of a surprising abundance of formylium (HCO+) in the debris. The observed amount exceeds what chemical models predict by about 12 orders of magnitude. Current observations aim to localize the spatial distribution of this molecule, in particular with respect to the CO and SiO molecules. The canonical representation of a star at the end of its life is an onion-like structure, whose layers are the result of the successive nuclear reactions experienced by the star during its history. If the supernova retained this structure (though there is no reason it should), formylium could not form, because the hydrogen atoms reside in the outermost envelope, while the carbon and the oxygen lie below the helium envelope. This means that mixing between layers occurred before or immediately after the explosion. The explosion creates shocks and turbulence, which give rise to Rayleigh–Taylor instabilities that mix some fraction of the material inwards and outwards: that is the macroscopic mixing. Kelvin–Helmholtz instabilities then further mix material from different nuclear-burning zones: that is the microscopic mixing.
According to theory, there could be more HCO+ than silicon monoxide (SiO), which was detected in fair quantity during the first days after the explosion. There could also be as much HCO+ as carbon monoxide and neutral helium, or even a little more, and as much as hydrogen, possibly located in a more compact zone. It is therefore crucial to know the spatial distribution of formylium in the debris, since this could have fundamental consequences for the hydrodynamic mechanisms and the mixing of the elements that took place during and/or just after the explosion.
- "ALMA Spots Supernova Dust Factory". European Southern Observatory. Archived from the original on January 7, 2014. Retrieved January 7, 2014.
- Lyman, J. D.; Bersier, D.; James, P. A. (2013). "Bolometric corrections for optical light curves of core-collapse supernovae". Monthly Notices of the Royal Astronomical Society. 437 (4): 3848. arXiv: . Bibcode:2014MNRAS.437.3848L. doi:10.1093/mnras/stt2187.
- "IAUC4316: 1987A, N. Cen. 1986". February 24, 1987. Archived from the original on October 8, 2014.
- "SN1987A in the Large Magellanic Cloud". Hubble Heritage Project. Archived from the original on July 14, 2009. Retrieved July 25, 2006.
- West, R. M.; Lauberts, A.; Schuster, H.-E.; Jorgensen, H. E. (1987). "Astrometry of SN 1987A and Sanduleak-69 202". Astronomy and Astrophysics. 177 (1–2): L1–L3. Bibcode:1987A&A...177L...1W.
- Boyarchuk, A. A.; et al. (1987). "Observations on Astron: Supernova 1987A in the Large Magellanic Cloud". Pis'ma v Astronomicheskii Zhurnal (in Russian). 13: 739–743. Bibcode:1987PAZh...13..739B.
- "Hubble Revisits an Old Friend". Picture of the Week. European Space Agency/Hubble. 17 October 2011. Archived from the original on October 19, 2011. Retrieved October 17, 2011.
- Sonneborn, G. (1987). "The Progenitor of SN1987A". In Kafatos, M.; Michalitsianos, A. Supernova 1987a in the Large Magellanic Cloud. Cambridge University Press. ISBN 0-521-35575-3.
- Arnett, W. D.; Bahcall, J. N.; Kirshner, R. P.; Woosley, S. E. (1989). "Supernova 1987A". Annual Review of Astronomy and Astrophysics. 27: 629–700. Bibcode:1989ARA&A..27..629A. doi:10.1146/annurev.aa.27.090189.003213.
- Podsiadlowski, P. (1992). "The progenitor of SN 1987 A". Publications of the Astronomical Society of the Pacific. 104 (679): 717. Bibcode:1992PASP..104..717P. doi:10.1086/133043.
- Dwarkadas, V. V. (2011). "On luminous blue variables as the progenitors of core-collapse supernovae, especially Type IIn supernovae". Monthly Notices of the Royal Astronomical Society. 412 (3): 1639–1649. arXiv: . Bibcode:2011MNRAS.412.1639D. doi:10.1111/j.1365-2966.2010.18001.x.
- Nomoto, K.; Shigeyama, T. "Supernova 1987A: Constraints on the Theoretical Model". In Kafatos, M.; Michalitsianos, A. Supernova 1987a in the Large Magellanic Cloud. Cambridge University Press. § 3.2. ISBN 0-521-35575-3.
- Scholberg, K. (2012). "Supernova Neutrino Detection". Annual Review of Nuclear and Particle Science. 62: 81–103. arXiv: . Bibcode:2012ARNPS..62...81S. doi:10.1146/annurev-nucl-102711-095006.
- Pagliaroli, G.; Vissani, F.; Costantini, M. L.; Ianni, A. (2009). "Improved analysis of SN1987A antineutrino events". Astroparticle Physics. 31 (3): 163. arXiv: . Bibcode:2009APh....31..163P. doi:10.1016/j.astropartphys.2008.12.010.
- Kato, Chinami; Nagakura, Hiroki; Furusawa, Shun; Takahashi, Koh; Umeda, Hideyuki; Yoshida, Takashi; Ishidoshiro, Koji; Yamada, Shoichi (2017). "Neutrino Emissions in All Flavors up to the Pre-bounce of Massive Stars and the Possibility of Their Detections". The Astrophysical Journal. 848: 48. arXiv: . Bibcode:2017ApJ...848...48K. doi:10.3847/1538-4357/aa8b72.
- Burrows, Adam; Klein, D; Gandhi, R (1993). "Supernova neutrino bursts, the SNO detector, and neutrino oscillations". Nuclear Physics B - Proceedings Supplements. 31: 408. Bibcode:1993NuPhS..31..408B. doi:10.1016/0920-5632(93)90163-Z.
- Koshiba, M (1992). "Observational neutrino astrophysics". Physics Reports. 220 (5–6): 229. Bibcode:1992PhR...220..229K. doi:10.1016/0370-1573(92)90083-C.
- "New image of SN 1987A". European Space Agency/Hubble. February 24, 2017. Archived from the original on February 28, 2017. Retrieved February 27, 2017.
- Chan, T. C.; et al. (2009). "Could the compact remnant of SN 1987A be a quark star?". The Astrophysical Journal. 695: 732. arXiv: . Bibcode:2009ApJ...695..732C. doi:10.1088/0004-637X/695/1/732.
- Parsons, P. (February 21, 2009). "Quark star may hold secret to early universe". New Scientist. Archived from the original on March 18, 2015.
- Kasen, D.; Woosley, S. (2009). "Type II Supernovae: Model Light Curves and Standard Candle Relationships". The Astrophysical Journal. 703 (2): 2205–2216. arXiv: . Bibcode:2009ApJ...703.2205K. doi:10.1088/0004-637X/703/2/2205.
- Matz, S. M.; et al. (1988). "Gamma-ray line emission from SN1987A". Nature. 331 (6155): 416–418. Bibcode:1988Natur.331..416M. doi:10.1038/331416a0.
- Kurfess, J. D.; et al. (1992). "Oriented Scintillation Spectrometer Experiment observations of Co-57 in SN 1987A". The Astrophysical Journal Letters. 399 (2): L137–L140. Bibcode:1992ApJ...399L.137K. doi:10.1086/186626.
- Clayton, D. D.; Colgate, S. A.; Fishman, G. J. (1969). "Gamma-Ray Lines from Young Supernova Remnants". The Astrophysical Journal. 155: 75. Bibcode:1969ApJ...155...75C. doi:10.1086/149849.
- McCray, R.; Fansson, C. (2016). "The Remnant of Supernova 1987A". Annual Review of Astronomy and Astrophysics. 54: 19–52. Bibcode:2016ARA&A..54...19M. doi:10.1146/annurev-astro-082615-105405.
- Grebenev, S. A.; Lutovinov, A. A.; Tsygankov, S. S.; Winkler, C. (2012). "Hard-X-ray emission lines from the decay of 44Ti in the remnant of supernova 1987A". Nature. 490 (7420): 373–375. arXiv: . Bibcode:2012Natur.490..373G. doi:10.1038/nature11473. PMID 23075986.
Brachiopods are marine, bottom-dwelling, suspension-feeding, multicelled animals. They have a soft body enclosed in two shells that can be opened to feed. They are generally immobile, but some can slowly relocate themselves on the ocean floor.
Brachiopods form their own phylum with two main branches, the Articulates and the Inarticulates. The Articulates have a hard, calcareous shell. The shell has two valves that are hinged at one end and thus "articulate" (rotate) around the hinge to open. The Articulates are generally larger than the Inarticulates. The Inarticulates also have two valves, but these are generally chitinous or chitino-phosphatic, a softer material than the calcareous shells of the Articulates. The Inarticulates have only a rudimentary hinge or none at all; those without hinges separate the two valve margins to open them. Articulate genera and species are much more numerous than Inarticulate ones. While one species of Articulate brachiopod was about one foot in length, most are much smaller, usually no bigger than a golf ball and often less than that size. Inarticulates are usually less than 2 cm in size and often smaller.
Clams, oysters and other mollusks, although somewhat similar to brachiopods, in fact belong to a different phylum, the Mollusca (clams and oysters make up the class Pelecypoda, the bivalves). Each brachiopod valve is bilaterally symmetrical from the center of the hinge to the point of the shell opening opposite the hinge. That is, each brachiopod valve can be divided in two from hinge to front, and the halves are mirror images of each other. However, the two brachiopod valves are not mirror images of each other. In contrast, bivalve mollusks are symmetrical along the hinge line, so that each valve tends to be a mirror image of the other. The soft body anatomy of a brachiopod is also different from that of a mollusk. In addition, the brachiopod phylum has been very conservative: the earliest brachiopods are essentially the same as the modern ones. The phylum Mollusca has been much more adventuresome. Mollusks evolved forms able to move, with eyes and intelligence (snails, squid and octopus), and were able to invade fresh water and the land. Brachiopods have not done this.
Brachiopods appear in the earliest Cambrian and continue into the modern era. They greatly increased in genera and species in the Ordovician and continued to dominate the marine environment through the Permian. They declined in importance after the Permian extinction event, being largely replaced by mollusks. Brachiopods are exclusively marine, but they can tolerate brackish, estuary environments. They are generally found in near-shore environments, but some species are found at great depth.
A brachiopod is a fairly simple organism. It has a feeding apparatus, the lophophore, which strains food particles from the water and channels the food to a mouth and digestive tract. There is a weak heart-like organ that pumps some fluid through the body cavity, but the brachiopod takes up most of its oxygen directly from the surrounding sea, particularly through the lophophore. The soft tissue of the lophophore may be supported by a hard apparatus that is preserved within the fossil shell; such supports are not found in the Ordovician fossils around Cincinnati.
The brachiopod is distinguished by an organ used to anchor it to the ocean floor. Called the "pedicle," it is a fleshy, muscular stalk or "foot." The pedicle could be inserted into the bottom mud and secured with the aid of a mucus-like secretion. In some species the pedicle is cemented to hard objects, and some brachiopods lose their pedicle at maturity.
Brachiopods have muscles that they use to open and shut their shells, or valves. Articulate brachiopods have one set of muscles to pull the shell open (diductors) and another set to pull it shut (adductors). In inarticulate brachiopods, the muscles squeeze the body cavity, causing it to expand around the margins and open the shell.
Brachiopods have a simple nervous system and are able to open and close their shells to feed or to escape predators. They have no eyes or brains as we would think of them.
Soft body tissue is rarely preserved in fossils and in the Cincinnati Ordovician it is not found.
Brachial or Dorsal Valve
Pedicle or Ventral Valve
The following are definitions of terms used to identify brachiopod shell fossils.
For identification purposes, the proper orientation of an articulate brachiopod shell is to have the shell opening to the front and the hinge to the back. The pedicle or ventral valve would be on the bottom and the brachial or dorsal valve on top. This is the life position.
Pedicle or ventral valve: The shell half through which the pedicle soft tissue extended. This valve will generally show evidence of a hole or slot through which the pedicle extended. This is generally the larger valve.
Brachial or dorsal valve: The shell half that is not the pedicle valve. This was the top shell in life position.
Hinge: For articulate brachiopods, the contact point on which the two shells rotated to open and close.
Commissure or margin: The contact line between the two shell valves that separates when the two valves open.
Anterior: The front of a properly oriented brachiopod.
Posterior: The back or hinge side of a properly oriented brachiopod.
Length of shell: Distance from the middle of the posterior side to a point opposite on the anterior side.
Width of shell: Distance from the widest point on one side of the commissure to the opposite point on the other side of a properly oriented brachiopod.
Fold: A raised area of the shell, as if the shell had been pinched.
Sulcus: A depression in the shell, as if the shell had been pushed in. The opposite of a fold.
Interarea: The exterior hinge area of both the pedicle and brachial valves of some articulate brachiopods. The interarea of the pedicle valve contains a notch-like, triangular opening through which the pedicle passed, called the delthyrium. Opposite the delthyrium, in the brachial valve, is a corresponding notch called the notothyrium.
Ray or rib: Raised, thin lines on the valve exteriors, usually going from the hinge area to the margin of the shell. These lines would be from posterior to the anterior side on a properly oriented brachiopod.
Costa: A very strong ray, with significant elevation above the valve surface.
Growth line: Lines or steps in the exterior of a valve that run from margin to margin across a properly oriented brachiopod. These are thought to be growth lines indicating where the shell stopped growing at some point before resuming growth.
Identifying the rock strata in which a brachiopod is found will greatly aid in identification of the brachiopod. The more specific the information on the rock layer, the greater the aid in identification. This is because each strata of rock will have its own suite of brachiopod species. The converse is also true. Knowing the species of brachiopod found in a rock layer will greatly aid in identifying that rock layer.
The interior pattern of the muscle scars is generally unique to a species and greatly aids in identification. Professionals will often abrade or dissolve the shell in order to record thin sections and thus be able to reconstruct the shape of the shell interior. Amateurs will generally have to be satisfied with shell interiors that have been naturally exposed. A shell valve with the interior exposed, and which has been identified to species, can be compared to a specimen with only the exterior exposed to aid in identification.
It is not uncommon for brachiopod shells to be broken, squashed or mashed down. This can distort some of the characteristics used for identification. Because brachiopods are generally numerous at most collecting sites, look for complete or nearly complete specimens; this will also aid in identification.
As with anything, practice makes perfect. Do not be afraid to identify your brachiopods. You will make mistakes, but you will also learn from them, and you can always change a name to the correct one. Note that professionals change names too, usually at the genus level, and they sometimes disagree on the proper name, often depending on whether they consider a form a distinct species or not. Ask someone with more experience if you are using the correct name, and ask what they looked for to make the identification. Go to a natural history museum and compare their identifications. Look in fossil identification books. All of these will increase your skill at identifying the species in your brachiopod collection.
Good luck and good hunting.
Pictures of the Brachiopods Found on Our Field Trips
Back to the Dry Dredgers Home Page
Photos courtesy of John Tate, Jack Kallmeyer
and Bill Heimbrock
Descriptions courtesy of John Tate and Ron Fine
Web page development by Bill Heimbrock
The Dry Dredgers and individual contributors
reserve the rights to all information, images, and content presented here.
Permission to reproduce in any fashion, must be requested in writing to firstname.lastname@example.org .
www.drydredgers.org is designed and maintained by Bill Heimbrock. | <urn:uuid:16e99753-3d98-412f-a364-f713b4bc94c6> | 3.984375 | 2,076 | Knowledge Article | Science & Tech. | 41.852823 | 95,521,955 |
tetraethyl orthosilicate; ethyl silicate; silicic acid tetraethyl ester; silicon ethoxide; TEOS; tetraethyl silicate
Properties:
- Molar mass: 208.33 g/mol
- Density: 0.933 g/mL at 20 °C
- Melting point: −77 °C (−107 °F; 196 K)
- Boiling point: 168 to 169 °C (334 to 336 °F; 441 to 442 K)
- Solubility: reacts with water; soluble in ethanol and 2-propanol
- Vapor pressure: 1 mmHg

Hazards:
- Main hazards: flammable; harmful by inhalation
- Flash point: 45 °C (113 °F; 318 K)
- LD50 (median dose): 6270 mg/kg (rat, oral)
- LCLo (lowest published): 1000 ppm (rat, 4 hr); 700 ppm (guinea pig, 6 hr); 1740 ppm (guinea pig, 15 min); 1170 ppm (guinea pig, 2 hr)
- US health exposure limits (NIOSH): TWA 100 ppm (850 mg/m3); TWA 10 ppm (85 mg/m3)

Except where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa).
Tetraethyl orthosilicate, formally named tetraethoxysilane and abbreviated TEOS, is the chemical compound with the formula Si(OC2H5)4. TEOS is a colorless liquid that degrades in water. TEOS is the ethyl ester of orthosilicic acid, Si(OH)4. It is the most prevalent alkoxide of silicon.
TEOS is produced by alcoholysis of silicon tetrachloride:
- SiCl4 + 4 EtOH → Si(OEt)4 + 4 HCl
TEOS is mainly used as a crosslinking agent in silicone polymers and as a precursor to silicon dioxide in the semiconductor industry. TEOS is also used as the silica source for the synthesis of some zeolites. Other applications include coatings for carpets and other objects, and TEOS is used in the production of aerogel. These applications exploit the reactivity of the Si-OR bonds. TEOS has historically been used as an additive to alcohol-based rocket fuels, decreasing the heat flux to the chamber wall of regeneratively cooled engines by over 50%.
TEOS easily converts to silicon dioxide upon the addition of water:
- Si(OC2H5)4 + 2 H2O → SiO2 + 4 C2H5OH
An idealized equation is shown; in reality the silica produced is hydrated. This hydrolysis reaction is an example of a sol-gel process. The side product is ethanol. The reaction proceeds via a series of condensation reactions that convert the TEOS molecule into a mineral-like solid via the formation of Si-O-Si linkages. Rates of this conversion are sensitive to the presence of acids and bases, both of which serve as catalysts. The Stöber process allows the formation of monodisperse and mesoporous silica.
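As a check on the figures above, the idealized hydrolysis stoichiometry can be worked through numerically. This is an illustrative sketch, not part of the original article; the atomic masses are standard values rounded to three decimals.

```python
# Stoichiometry of the idealized hydrolysis:
#   Si(OC2H5)4 + 2 H2O -> SiO2 + 4 C2H5OH
# Standard atomic masses in g/mol.
ATOMIC = {"Si": 28.085, "O": 15.999, "C": 12.011, "H": 1.008}

def molar_mass(counts):
    """Molar mass (g/mol) from a dict of element -> atom count."""
    return sum(ATOMIC[el] * n for el, n in counts.items())

teos = molar_mass({"Si": 1, "O": 4, "C": 8, "H": 20})  # Si(OC2H5)4
silica = molar_mass({"Si": 1, "O": 2})                 # SiO2
ethanol = molar_mass({"C": 2, "H": 6, "O": 1})         # C2H5OH

print(f"TEOS molar mass: {teos:.2f} g/mol")            # ~208.33, matching the table
print(f"SiO2 yield: {silica / teos:.3f} g per g TEOS")
print(f"Ethanol byproduct: {4 * ethanol / teos:.3f} g per g TEOS")
```

Per gram of TEOS, idealized hydrolysis gives roughly 0.29 g of anhydrous silica and 0.88 g of ethanol; since the real product is hydrated, the actual solid yield is somewhat higher.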
At elevated temperatures (>600 °C), TEOS converts to silicon dioxide:
- Si(OC2H5)4 → SiO2 + 2 (C2H5)2O
The volatile coproduct is diethyl ether.
Ancient Goat Genomes Reveal a Mosaic of Domestication
News Jul 06, 2018 | Original Press Release from Trinity College Dublin
Credit: Kuebi = Armin Kübelbeck [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons
An international team of scientists, led by geneticists from Trinity, have sequenced the genomes from ancient goat bones from areas in the Fertile Crescent where goats were first domesticated around 8,500 BC. They reveal a 10,000-year history of local farmer practices featuring genetic exchange both with the wild and among domesticated herds, and selection by early farmers.
This genetic data – including 83 mitochondrial sequences and whole genome data from 51 goats – is published today by PhD Researcher in Genetics, Kevin Daly, and colleagues, in leading international journal Science.
One of our first domesticates and a source of meat, milk and hides, goats now number almost a billion animals. They have been a partner animal since c. 8,500 BC. The earliest evidence for domestic goats occurs in the Fertile Crescent region of Southwest Asia, where crop farming and animal herding began. Before herding, local hunters targeted wild goats – also known as bezoar – and this local practice eventually became the basis of goat management and livestock keeping.
However, reading the past from examining modern genetics is difficult due to thousands of years of migration and mixture. “Just like humans, modern goat ancestry is a tangled web of different ancestral strands. The only way to unravel these and reach reliably into the past is to sequence genomes from actual ancient animals; a kind of molecular time travel,” said Professor of Population Genetics and ERC Advanced Investigator at Trinity College Dublin, Dan Bradley, who led the project.
Using genetic data from over 80 ancient wild and domestic goats, the group has charted the initial patterns of domestication, demonstrating a surprising degree of genetic differentiation between goats across the Fertile Crescent and the surrounding regions.
Research Fellow at Trinity, and joint first author of the paper, Pierpaolo Maisano Delser, said: “Goat domestication was a mosaic rather than a singular process with continuous recruitment from local wild populations. This process generated a distinctive genetic pool which evolved across time and still characterises the different goat populations of Asia, Europe and Africa today.” Using ancient samples, the group was able to analyse the genetic diversity of different goat populations back in time and reconstruct the history of early domesticates.
Domestic animals have changed human society and humans have also moulded livestock into hundreds of different types and breeds – this study has the earliest genetic discovery yet of this process. It seems that, like modern breeders, ancient farmers were interested in animal appearance.
PhD Researcher at Trinity, and first author of the paper, Kevin Daly, said: "Whole genome sequences from the past allowed us to directly analyse some of the earliest goat herds. We found evidence that at least as far back as 8,000 years ago herders were interested in or valued the coat colour of their animals, based on selection signals at pigmentation genes." Furthermore, distinct but parallel patterns of this selection were observed in different early herds, suggesting this was a repeated phenomenon.
There are also indications that these early animals had been selected for liver enzymes that gave better tolerance to new toxins, possibly from fungus growing on fodder, and also production traits such as fertility and size.
This article has been republished from materials provided by Trinity College Dublin. Note: material may have been edited for length and content. For further information, please contact the cited source.
Reference: Daly, K. G., Delser, P. M., Mullin, V. E., Scheu, A., Mattiangeli, V., Teasdale, M. D., … Bradley, D. G. (2018). Ancient goat genomes reveal mosaic domestication in the Fertile Crescent. Science, 361(6397), 85–88. https://doi.org/10.1126/science.aas9411
Scientific Name: Megophrys baluensis (Boulenger, 1899)
Synonyms: Leptobrachium baluense Boulenger, 1899; Xenophrys baluensis (Boulenger, 1899)
Taxonomic Source(s): Frost, D.R. 2018. Amphibian Species of the World: an Online Reference. Version 6.0. American Museum of Natural History, New York, USA. Available at: http://research.amnh.org/herpetology/amphibia/index.html.
Red List Category & Criteria: Least Concern ver 3.1
Assessor(s): IUCN SSC Amphibian Specialist Group
Contributor(s): Iskandar, D., Das, I., Lakim, M., Yambun, P., Stuebing, R. & Inger, R.F.
Listed as Least Concern because, although its estimated extent of occurrence (EOO) is only 1,173 km2, it occurs in two well-protected and well-managed national parks where suitable habitat remains, and there are no threats at present.
Range Description: This Bornean endemic species is known from Kinabalu and Crocker Range National Parks in Sabah, Malaysia, from 1,200–1,900 m asl. It might also occur on Mount Trus Madi; however, this requires confirmation (P. Yambun pers. comm. January 2018). The estimated extent of occurrence (EOO) of its known range is 1,173 km2.
Population: There are no estimates of population size, but the species appears to be common in Kinabalu National Park, and the population is considered to be stable (P. Yambun pers. comm. January 2018).
Current Population Trend: Stable
Habitat and Ecology: This species is restricted to montane forests, where adults and juveniles have been found in leaf-litter on the forest floor. Breeding is thought to take place in slow-flowing regions of clear, rocky streams.
Movement patterns: Not a Migrant
Use and Trade: There are no records of this species being utilized.
Major Threat(s): There are no major threats to the species at present as it occurs within two well-managed and well-protected areas (P. Yambun pers. comm. January 2018).
It occurs in Kinabalu and Crocker Range National Parks, which are well-protected.
Continuation of rigorous management of the existing protected areas to protect forest habitat is the best guarantee for the conservation of this species.
Surveys of potentially suitable areas of habitat in adjacent parts of Borneo are needed to determine whether or not this species might occur elsewhere, and also to help better understand its current population status.
Citation: IUCN SSC Amphibian Specialist Group. 2018. Megophrys baluensis. The IUCN Red List of Threatened Species 2018: e.T57631A123692546. Downloaded on 21 July 2018.
Chemistry Question #4048
Bryson, a 12 year old male from Fort Lupton asks on December 20, 2007,
Why will hard-boiled eggs tarnish silver, but an egg scrambled in a pan with nothing else added will not?
answered on April 23, 2008
Eggs contain proteins that have a fair bit of sulfur. It is hydrogen sulfide (H2S) that forms black silver sulfide (Ag2S), which is the tarnish; in air, the overall reaction is 4 Ag + 2 H2S + O2 → 2 Ag2S + 2 H2O. When eggs are cooked, the sulfur trapped in proteins comes out as hydrogen sulfide or other more volatile compounds that can react with the silver; boiled eggs always smell of some H2S. In a hard-boiled egg, the temperature never goes above 100 °C, the temperature of boiling water, and the sulfur compounds are mostly kept inside the shell and cannot escape. In a fried egg, temperatures can get somewhat hotter, especially as the water in the egg is driven off, and the quite volatile sulfur compounds can evaporate into the air. In this way the boiled egg has more tarnishing power than the fried egg, because (1) its temperatures have not been as high and (2) the tarnishing chemicals have been kept in the shell.
Published on March 1, 2014
A normal fault drops rock on one side of the fault down relative to the other side. A normal fault is caused by tension.
A thrust fault raises rock on one side of the fault up relative to the other side. A thrust fault is caused by compression.
In a strike-slip fault, rocky blocks on either side of the fault scrape along side-by-side. A strike-slip fault is caused by shearing.
Left-lateral faults in siltstone. Near Lillooet, British Columbia (Canada).
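The three fault types above amount to a simple mapping from stress regime to fault style. As an illustrative sketch (the wording below paraphrases the descriptions above, and the function name is my own):

```python
# Lookup of the fault types described above,
# keyed by the stress that causes each one.
FAULT_BY_STRESS = {
    "tension":     "normal fault: rock on one side drops down relative to the other",
    "compression": "thrust fault: rock on one side is raised up relative to the other",
    "shearing":    "strike-slip fault: blocks scrape along side-by-side",
}

def fault_type(stress: str) -> str:
    """Return the fault style produced by a given stress regime."""
    return FAULT_BY_STRESS[stress.lower()]

print(fault_type("Tension"))  # normal fault: rock on one side drops down ...
```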
What Causes an Earthquake?
When plates collide or rub past each other, they can cause the Earth to shake. This is because friction stops them from moving easily.
1. Two plates moving past each other get jammed together.
2. Increasing pressure causes the plates to move in a sudden jerk – an earthquake.
3. The sudden movement sends a shockwave through the Earth's crust.
The point on the Earth's surface directly above the focus is called the epicentre. The point where the seismic waves start is called the focus.

Seismic Waves
Earthquake energy is released in seismic waves. These waves spread out from the focus. The waves are felt most strongly at the epicentre, becoming less strong as they travel further away. The most severe damage caused by an earthquake will happen close to the epicentre.
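One practical use of these spreading waves: the faster P (primary) waves arrive before the slower S (secondary) waves, and the lag between them gives a rough distance to the source. A minimal sketch, assuming average crustal speeds of 6 km/s (P) and 3.5 km/s (S); these speeds are illustrative assumptions, not values from the text above.

```python
# Sketch: distance to an earthquake from the S-minus-P arrival lag.
# Assumed average crustal speeds; real values vary with rock type and depth.
VP = 6.0   # P-wave speed, km/s (assumed)
VS = 3.5   # S-wave speed, km/s (assumed)

def epicentral_distance_km(sp_lag_s: float) -> float:
    """Distance implied by an S-minus-P arrival lag, in seconds.

    Both waves leave the focus together; the slower S wave falls
    behind by (1/VS - 1/VP) seconds for every kilometre travelled.
    """
    return sp_lag_s / (1.0 / VS - 1.0 / VP)

print(round(epicentral_distance_km(10.0), 1))  # a 10 s lag -> ~84 km
```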
When two continental plates converge, we get mountain ranges. Mt Everest is one example of this kind of mountain forming. In fact, Mt Everest gets taller by 4 mm every year, because the plates continue to converge.
The Himalayas (of which Mt. Everest is a part) are an example of two continental plates converging
The Aleutian Islands are an example of two oceanic plates converging. The Pacific plate is being subducted under the North American plate, giving us a trench. | <urn:uuid:14cbeff4-f7b1-4e6c-acb0-216680324307> | 4.34375 | 388 | Knowledge Article | Science & Tech. | 58.340749 | 95,521,999 |
A joint research proposal between University of Warwick scientists at Warwick HRI and researchers in the University's Department of Politics and International Studies has won a £316,000 grant from the Research Councils' Rural Economy and Land Use programme for a project on the science and regulation of bio-pesticides.
Consumers, retailers and environmentalists are calling for reductions in the use of chemical pesticides. One potentially environmentally friendly solution is to use so-called bio-pesticides, which are based on naturally occurring living organisms, such as fungi that attack insects. However there is a need for a greater scientific understanding of the operation of these bio-pesticides and in particular their impact on the sustainability of pest management. There is also a requirement to evaluate the effect of government regulations on the development and uptake of bio-pesticides. The current regulatory system was designed for chemical pesticides, and innovations may be required to make it more suitable for the use of bio-pesticides.
This programme will draw on research strengths in both the biological and social sciences. Warwick HRI's new status as part of the University of Warwick has facilitated the creation of just such a research partnership between Warwick HRI bio-pesticide scientist Dr Dave Chandler and leading rural economy and society researcher Professor Wyn Grant in the University of Warwick's Department of Politics and International Studies.
Peter Dunn
In their ongoing research on turning adult stem cells isolated from fat into cartilage, Duke University Medical Center researchers have demonstrated that the level of oxygen present during the transformation process is a key switch in stimulating the stem cells to change.
Their findings were presented today (Feb. 2, 2003) at the annual meeting of the Orthopedic Research Society.
Using a biochemical cocktail of steroids and growth factors, the researchers have "retrained" specific adult stem cells that would normally form the structure of fat into another type of cell known as a chondrocyte, or cartilage cell. During this process, if the cells were grown in the presence of "room air," which is about 20 percent oxygen, the stem cells tended to proliferate; however, if the level of oxygen was reduced to 5 percent, the stem cells transformed into chondrocytes.
Richard Merritt | EurekAlert!
World’s Largest Study on Allergic Rhinitis Reveals new Risk Genes
17.07.2018 | Helmholtz Zentrum München - Deutsches Forschungszentrum für Gesundheit und Umwelt
Plant mothers talk to their embryos via the hormone auxin
17.07.2018 | Institute of Science and Technology Austria
Plant species composition and community structure were compared among four sites in an upland black spruce community in northwestern Ontario. One site had remained undisturbed since the 1930s and three had been disturbed by either logging, fire, or both logging and fire. Canonical correspondence ordination analyses indicated that herbaceous species composition and abundance differed among the disturbance types while differences in the shrub and tree strata were less pronounced. In the herb stratum Pleurozium schreberi, Ptilium crista-castrensis and Dicranum polysetum were in greatest abundance on the undisturbed forest site, while the wildfire and burned cutover sites were dominated by Epilobium angustifolium and Polytrichum juniperinum. The unburned harvested site was dominated by Epilobium angustifolium, Cornus canadensis and Pleurozium schreberi. Species richness was lower on the undisturbed site than on any of the disturbed sites while species diversity (H’) and evenness (Hill’s E5) were higher on the unburned harvested site than on the other sites. Results suggest that herb re-establishment is different among harvested and burned sites in upland black spruce communities and we hypothesize that differences in the characteristics of the disturbance were responsible, in particular, the impact of burning on nutrient availability. These differences need to be taken into account in determining the effects of these disturbances on biodiversity and long-term ecosystem management.
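The abstract reports Shannon diversity (H’) and Hill's modified evenness (E5) without defining them. As a rough illustration only — the cover values below are invented, not taken from the study — these indices can be computed as follows:

```python
import math

def shannon_h(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i))."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def hills_e5(counts):
    """Hill's modified evenness E5 = (N2 - 1) / (N1 - 1),
    with N1 = exp(H') and N2 = 1 / (Simpson's lambda)."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    n1 = math.exp(-sum(p * math.log(p) for p in ps))
    n2 = 1.0 / sum(p * p for p in ps)
    return (n2 - 1.0) / (n1 - 1.0)

# Invented percent-cover values for two hypothetical plots:
undisturbed = [50, 30, 10, 5, 5]        # a few dominant mosses
harvested = [20, 18, 16, 16, 15, 15]    # a more even herb layer

print(shannon_h(undisturbed), hills_e5(undisturbed))
print(shannon_h(harvested), hills_e5(harvested))
```

A perfectly even community gives E5 = 1; the more dominated a community is by a few species, the lower both H' and E5 fall.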
- Impacts of Logging and Wildfire on an Upland Black Spruce Community in Northwestern Ontario
M. H. Johnston
J. A. Elliott
- Springer Netherlands
WT = Supported in traditional Synergy on Windows
WN = Supported in Synergy .NET on Windows
U = Supported on UNIX
V = Supported on OpenVMS
public class Exception
The Exception class is used for structured exception handling and represents the errors that occur.
Initializes a new instance of the Exception class.
Initializes a new instance of the Exception class with an error message (string) that explains why the error occurred.
public Exception(message, inner)
Initializes a new instance of the Exception class with an error message (string) that explains why the error occurred and an inner exception (@Exception) that caused the current exception.
Returns the exception instance that caused the current exception. (@Exception)
public virtual Message
Returns the error message that describes the exception. (string)
public virtual Source
Returns or sets the name of the application that caused the exception. (string)
public virtual StackTrace
Returns a text representation of the call stack at the time the exception was thrown. (string)
public override ToString(), string
Returns the string representation of the current exception. This includes the following components (if they exist), separated by the system line terminator: the name of the Exception class that was thrown, the Message, the result of calling ToString() on the InnerException, and the StackTrace. | <urn:uuid:2d9b9448-fd9b-4eb5-9a65-ae72fb3bf357> | 2.953125 | 286 | Documentation | Software Dev. | 28.623481 | 95,522,035 |
The landhopper Arcitalitrus dorrieni (a terrestrial amphipod that looks similar to the common sandhopper you see around the strandline on the beach) was originally described from the gardens of Tresco on the Isles of Scilly in the mid-1920s, where it was regarded as an immigrant from Australia. It's likely that the landhopper probably uses potted plants as its main mode of dispersal.
It was first shown to me in 1995 by Paul Llewellyn in Caswell Valley, where it was superabundant under the Holm Oaks adjacent to the car park. We found it to be less abundant, but widespread throughout much of the woodland there and so was clearly well-established. In recent times I have heard of more records e.g.
Ian Morgan first saw the species under Cherry Laurel leaf litter at Furnace, Llanelli in 1999, and subsequently in Denham Avenue, Llanelli 2002-2010, Llwynhendy 2009 and at his current address Pwll in 2010. All vc44, Carms.
Graham Motley wrote that it occurred at ‘Bracken Road, Neath (behind the cricket and rugby grounds) from 1994 to 1997. I remember numbers being much lower though in the summer of 1997 - presumably they suffered due to the prolonged very cold spell in January of that year.’
The map below shows the records I have collated to date (updated with additional records 01-Dec-10. Thanks to contributors):
If you have seen this species, especially from any new sites, please share your sighting here and/or contact:
Dr Michael R. Wilson
Head of Entomology Section
Department of Biodiversity & Systematic Biology
National Museum of Wales
Cardiff, CF10 3NP, UK email@example.com | <urn:uuid:4e27ec10-dd27-4410-aa73-89d372c064d0> | 2.78125 | 393 | Personal Blog | Science & Tech. | 52.6887 | 95,522,036 |
US physicist who, with Henry Way Kendall and Richard E Taylor, was awarded the Nobel Prize for Physics in 1990 for their pioneering investigations into high-energy electrons colliding with protons and neutrons. These were important in developing the quark model of particle physics.
In 1970, Friedman, Taylor, and Kendall led a team working at SLAC, the Stanford Linear Accelerator Center (now called the SLAC National Accelerator Center) in California. Their experiments involved bombarding protons (and later neutrons) with high-energy electrons. They knew from earlier experiments that the proton had a small but finite volume and believed that high-energy electrons would suffer only small deflections as they passed through the protons. However they found that the electrons were sometimes scattered through large angles inside the proton.
This result was interpreted by theorists James D Bjorken and Richard P Feynman. They suggested that the electrons were hitting hard point-like objects inside the proton. These objects were soon shown to be quarks, particles whose existence had been proposed independently by Murray Gell-Mann and George Zweig in 1964. Today quarks are recognized as amongst the most fundamental building blocks of matter.
Friedman was born in Chicago, Illinois, USA, and educated at the University of Chicago, where he completed his doctorate in 1956. In 1957 he joined the High-Energy Physics laboratory at Stanford University, California, as a research associate and learned the techniques used in electron scattering experiments. At Stanford he began his long association with Henry Kendall and became acquainted with Richard Taylor. In 1960 he went to the physics department of the Massachusetts Insitute of Technology (MIT). Kendall joined his research group in 1961. In 1963 Friedman and Kendall began a collaboration with Taylor and others at Stanford to develop electron scattering facilities at the Stanford Linear Accelerator Center. In 1980 he became director of the laboratory for nuclear science at MIT and then served as head of the physics department 1983–88, when he returned to full-time teaching and research.
The National High Magnetic Field Laboratory is ending its year with another achievement of international importance as engineers and technicians this week completed testing of a world-record magnet.
With the completion of a new, 35-tesla magnet, the highest-field "resistive" magnet in the world is located at the Tallahassee facility. The state-of-the-art magnet, which incorporates "Florida-Bitter" technology invented at the lab, was designed and built on-site and is immediately available for research.
The 35-tesla magnet is an upgrade of an existing 30-tesla magnet and surpasses the previous record of 33 tesla, also held by the laboratory. "Tesla" is a measurement of the strength of a magnetic field; 1 tesla is equal to 20,000 times the Earth's magnetic field. Typical magnetic resonance imaging (MRI) machines in hospitals provide fields in the range of 1 to 3 tesla. Put another way, the increase from 30 to 35 teslas in the new magnet represents a 17-percent jump, or an increase equal to the magnetic force of two MRI machines.
Gregory S. Boebinger | EurekAlert!
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
Subaru Telescope helps pinpoint origin of ultra-high energy neutrino
16.07.2018 | National Institutes of Natural Sciences
Depending on management, forests can be an important sink or source of carbon that if released as CO2 could contribute to global warming. Many forests in the western United States are being treated to reduce fuels, yet the effects of these treatments on forest carbon are not well understood. We compared the immediate effects of fuels treatments on carbon stocks and releases in replicated plots before and after treatment, and against a reconstruction of active-fire stand conditions for the same forest in 1865. Total live-tree carbon was substantially lower in modern fire-suppressed conditions (and all of the treatments) than the same forest under an active-fire regime. Although fire suppression has increased stem density, current forests have fewer very large trees, reducing total live-tree carbon stocks and shifting a higher proportion of those stocks into small-diameter, fire-sensitive trees. Prescribed burning released 14.8 Mg C/ha, with pre-burn thinning increasing the average release by 70% and contributing 21.9–37.5 Mg C/ha in milling waste. Fire suppression may have incurred a double carbon penalty by reducing stocks and contributing to emissions with fuels-treatment activities or inevitable wildfire combustion. All treatments reduced fuels and increased fire resistance, but most of the gains were achieved with understory thinning, with only modest increases in the much heavier overstory thinning. We suggest modifying current treatments to focus on reducing surface fuels, actively thinning the majority of small trees, and removing only fire-sensitive species in the merchantable, intermediate size class. These changes would retain most of the current carbon-pool levels, reduce prescribed burn and potential future wildfire emissions, and favor stand development of large, fire-resistant trees that can better stabilize carbon stocks.
There was something peculiar about dolphins that stumped prolific British zoologist Sir James Gray in 1936.
He had observed the sea mammals swimming at a swift rate of more than 20 miles per hour, but his studies had concluded that the muscles of dolphins simply weren’t strong enough to support those kinds of speeds. The conundrum came to be known as “Gray’s Paradox.”
For decades the puzzle prompted much attention, speculation, and conjecture in the scientific community. But now, armed with cutting-edge flow measurement technology, researchers at Rensselaer Polytechnic Institute have tackled the problem and conclusively solved Gray’s Paradox.
“Sir Gray was certainly on to something, and it took nearly 75 years for technology to bring us to the point where we could get at the heart of his paradox,” said Timothy Wei, professor and acting dean of Rensselaer’s School of Engineering, who led the project. “But now, for the first time, I think we can safely say the puzzle is solved. The short answer is that dolphins are simply much stronger than Gray or many other people ever imagined.”
Wei is presenting his findings today at the 61st Annual Meeting of the American Physical Society (APS) Division of Fluid Dynamics in San Antonio, Texas. Collaborators on the research include Frank Fish, a biologist at West Chester University in Pennsylvania; Terrie Williams, a marine biologist at the University of California, Santa Cruz; Rensselaer undergraduate student Yae Eun Moon; and Rensselaer graduate student Erica Sherman.
After studying dolphins, Gray said in 1936 that they are not capable of producing enough thrust, or power-induced acceleration, to overcome the drag created as the mammal sped forward through the water. This drag should prevent dolphins from attaining significant speed, but simple observation proved otherwise – a paradox. In the absence of a sound explanation, Gray theorized that dolphin skin must have special drag-reducing properties.
More than 70 years later, Wei has developed a tool that conclusively measures the force a dolphin generates with its tail.
Wei created this new state-of-the-art water flow diagnostic technology by modifying and combining force measurement tools developed for aerospace research with a video-based flow measurement technique known as Digital Particle Image Velocimetry, which can capture up to 1,000 video frames per second.
Wei videotaped two bottlenose dolphins, Primo and Puka, as they swam through a section of water populated with hundreds of thousands of tiny air bubbles. He then used sophisticated computer software to track the movement of the bubbles. The color-coded results show the speed and in what direction the water is flowing around and behind the dolphin, which allowed researchers to calculate precisely how much force the dolphin was producing.
See a DPIV video of Primo here: http://www.rpi.edu/news/video/wei/dolphin.html
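The displacement estimate at the heart of DPIV can be sketched in a few lines: cross-correlate two successive frames and locate the correlation peak. The toy example below (synthetic data, not the Rensselaer software) recovers a known shift of a random "bubble" pattern for a single interrogation window:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "frames": a random bubble-like pattern, and the same
# pattern displaced by a known amount (1 px down, 3 px right).
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (1, 3), axis=(0, 1))

# Cross-correlate via FFT; the location of the correlation peak is the
# displacement of the pattern between the two frames.
spectrum = np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)
corr = np.fft.ifft2(spectrum).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

print(dy, dx)  # recovered displacement in pixels
```

Dividing displacement by the inter-frame time gives a velocity vector; repeating this over a grid of windows yields the flow field from which forces can be inferred.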
Wei also used this technique to film dolphins as they were doing tail-stands, a trick where the dolphins “walk” on water by holding most of their bodies vertical above the water while supporting themselves with short, powerful thrusts of their tails.
The results show that dolphins produce on average about 200 pounds of force when flapping their tail – about 10 times more force than Gray originally hypothesized.
“It turns out that the answer to Gray’s Paradox had nothing to do with the dolphins’ skin,” Wei said. “Dolphins can certainly produce enough force to overcome drag. The scientific community has known this for a while, but this is the first time anyone has been able to actually quantitatively measure the force and say, for certain, the paradox is solved.”
At peak performance, the dolphins produced between 300 and 400 pounds of force. Human Olympic swimmers, by comparison, peak at about 60 to 70 pounds of force, Wei said. He knows this for a fact because he has been working with U.S.A. Swimming over the past few years to use these same bubble-tracking DPIV and force-measuring techniques to better understand how elite swimmers interact with the water, and improve lap times.
“It was actually a natural extension to go from swimmers to dolphins,” said Wei, whose research ranges from aeronautical and hydrodynamic flow of vehicles to more biological topics dealing with the flow of cells and fluid in the human body.
The dolphins Wei filmed, Primo and Puka, are retired U.S. Navy dolphins who now live at the Long Marine Laboratory at UC Santa Cruz.
Wei said the research team will likely continue to investigate the flow dynamics and force generation of other marine animals, which could yield new insight into how different species have evolved as a result of their swimming proficiency.
“Maybe sea otters,” he said.
For more information on Wei’s work with Olympic swimmers, visit: http://news.rpi.edu/update.do?artcenterkey=2477
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
Thermoacoustics is the interaction between temperature, density and pressure variations of acoustic waves. Thermoacoustic heat engines can readily be driven using solar energy or waste heat and they can be controlled using proportional control. They can use heat available at low temperatures which makes it ideal for heat recovery and low power applications. The components included in thermoacoustic engines are usually very simple compared to conventional engines. The device can easily be controlled and maintained.
Thermoacoustic effects can be observed when partly molten glass tubes are connected to glass vessels. Sometimes a loud and monotone sound is produced spontaneously. A similar effect is observed if a stainless steel tube has one side at room temperature (293 K) and the other side in contact with liquid helium at 4.2 K. In this case, spontaneous oscillations are observed which are named "Taconis oscillations". The mathematical foundation of thermoacoustics was laid by Nikolaus Rott. Later, the field was inspired by the work of John Wheatley and of Swift and his co-workers. Technologically, thermoacoustic devices have the advantage that they have no moving parts, which makes them attractive for applications where reliability is of key importance.
Historical review of thermoacoustics
Thermoacoustic-induced oscillations have been observed for centuries. Glass blowers produced heat-generated sound when blowing a hot bulb at the end of a cold narrow tube. This phenomenon has also been observed in cryogenic storage vessels, where oscillations are induced by the insertion of a hollow tube, open at the bottom end, into liquid helium; these are called Taconis oscillations. The lack of a heat removal system, however, causes the temperature gradient to diminish and the acoustic wave to weaken and then stop completely. Byron Higgins made the first scientific observation of the conversion of heat energy into acoustical oscillations. He investigated the "singing flame" phenomenon in a portion of a hydrogen flame in a tube with both ends open.
Physicist Pieter Rijke introduced this phenomenon on a greater scale by using a heated wire screen to induce strong oscillations in a tube (the Rijke tube). Feldman mentioned in his related review that a convective air current through the pipe is the main inducer of this phenomenon. The oscillations are strongest when the screen is at one fourth of the tube length. Research performed by Sondhauss in 1850 is known to be the first to approximate the modern concept of thermoacoustic oscillation. Sondhauss experimentally investigated the oscillations related to glass blowing. He observed that the sound frequency and intensity depend on the length and volume of the bulb. Lord Rayleigh gave a qualitative explanation of the Sondhauss thermoacoustic oscillation phenomenon, stating that producing any type of thermoacoustic oscillation needs to meet a criterion: "If heat be given to the air at the moment of greatest condensation or taken from it at the moment of greatest rarefaction, the vibration is encouraged". This shows that he related thermoacoustics to the interplay of density variations and heat injection. The formal theoretical study of thermoacoustics started with Kramers in 1949, when he generalized the Kirchhoff theory of the attenuation of sound waves at constant temperature to the case of attenuation in the presence of a temperature gradient. Rott made a breakthrough in the study and modeling of thermoacoustic phenomena by developing a successful linear theory. After that, the acoustical part of thermoacoustics was linked in a broad thermodynamic framework by Swift.
Usually sound is understood in terms of pressure variations accompanied by an oscillating motion of a medium (gas, liquid or solid). In order to understand thermoacoustic machines, it is of importance to focus on the temperature-position variations rather than the usual pressure-velocity variations.
The sound intensity of ordinary speech is 65 dB. The pressure variations are about 0.05 Pa, the displacements 0.2 μm, and the temperature variations about 40 μK. So, the thermal effects of sound cannot be observed in daily life. However, at sound levels of 180 dB, which are normal in thermoacoustic systems, the pressure variations are 30 kPa, the displacements more than 10 cm, and the temperature variations 24 K.
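These pressure figures follow, to within rounding, from the definition of sound pressure level, p_rms = p_ref · 10^(SPL/20) with reference pressure p_ref = 20 µPa; the article's 30 kPa at 180 dB is close to the peak amplitude, √2 × 20 kPa ≈ 28 kPa. A quick check:

```python
def spl_to_pressure(spl_db, p_ref=20e-6):
    """RMS acoustic pressure (Pa) for a sound pressure level in dB re 20 uPa."""
    return p_ref * 10 ** (spl_db / 20)

for level in (65, 120, 180):
    print(level, "dB ->", spl_to_pressure(level), "Pa rms")
```

At 65 dB this gives about 0.04 Pa and at 180 dB exactly 20 kPa rms, consistent with the magnitudes quoted above.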
The one-dimensional wave equation for sound reads

∂²v/∂t² = c² ∂²v/∂x²

with t time, v the gas velocity, x the position, and c the sound velocity given by c² = γp₀/ρ₀. For an ideal gas, c² = γRT₀/M with M the molar mass. In these expressions, p₀, T₀, and ρ₀ are the average pressure, temperature, and density respectively. In monochromatic plane waves, with angular frequency ω and with ω = kc, the solution is

v = v_A cos(ωt − kx) + v_B cos(ωt + kx)
The pressure variations are given by

p = ρ₀c [v_A sin(kx)sin(ωt) + v_B cos(kx)cos(ωt)]
The deviation δx of a gas particle with equilibrium position x is given by

δx = (v_A/ω) cos(kx)sin(ωt) − (v_B/ω) sin(kx)cos(ωt)   (1)
and the temperature variations are

δT = ((γ−1)T₀/c) [v_A sin(kx)sin(ωt) + v_B cos(kx)cos(ωt)]   (2)
The last two equations form a parametric representation of a tilted ellipse in the δT – δx plane with t as the parameter.
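As a quick numerical check on the ideal-gas sound speed c² = γRT₀/M used above, the snippet below (assuming room temperature, R = 8.314 J/(mol K), and round-number molar masses) recovers the familiar sound speeds of air and helium:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)


def sound_speed(gamma: float, molar_mass_kg: float, T0: float = 293.0) -> float:
    """Ideal-gas sound speed c = sqrt(gamma * R * T0 / M)."""
    return math.sqrt(gamma * R * T0 / molar_mass_kg)


print(round(sound_speed(1.40, 0.029)))   # air: ≈ 343 m/s
print(round(sound_speed(5 / 3, 0.004)))  # helium: ≈ 1007 m/s
```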
If v_A = 0 or v_B = 0, we are dealing with a pure standing wave. Figure 1a gives the dependence of the velocity and position amplitudes (red curve) and the pressure and temperature amplitudes (blue curve) for this case. The ellipse in the δT – δx plane is reduced to a straight line, as shown in Fig. 1b. At the tube ends δx = 0, so the δT – δx plot is a vertical line there. In the middle of the tube the pressure and temperature variations are zero, so we have a horizontal line. It can be shown that the power transported by the sound is given by

P̄ = (γp₀A/2c) v_A v_B
where γ is the ratio of the gas specific heat at fixed pressure to the specific heat at fixed volume and A is the area of the cross-section of the sound duct. Since in a standing wave v_A = 0 or v_B = 0, the product v_A v_B vanishes and the average energy transport is zero.
If v_A = v_B or v_A = −v_B, we have a pure traveling wave. In this case, Eqs. (1) and (2) represent circles in the δT – δx diagram, as shown in Fig. 1c, which applies to a pure traveling wave to the right. The gas moves to the right at high temperature and back at low temperature, so there is a net transport of energy.
The thermoacoustic effect inside the stack takes place mainly in the region close to the solid walls of the stack. Layers of gas too far from the stack walls experience adiabatic temperature oscillations that result in no heat transfer to or from the walls, which is undesirable. Therefore, an important characteristic of any thermoacoustic element is the value of the thermal and viscous penetration depths. The thermal penetration depth δκ is the thickness of the layer of gas through which heat can diffuse during half a cycle of oscillation. The viscous penetration depth δv is the thickness of the layer in which viscosity is effective near the boundaries. For sound, the characteristic length for thermal interaction is given by the thermal penetration depth

δκ = √(2κ/ω)

with κ the thermal diffusivity of the gas.
Similarly, the viscous penetration depth is given by

δv = √(2η/(ρω))

with η the gas viscosity and ρ its density. The Prandtl number of the gas is defined as

Pr = ηc_p/K

with c_p the specific heat at constant pressure and K the thermal conductivity.
The two penetration depths are related as follows:

δv = δκ √Pr
For many working fluids, like air and helium, Pr is of order 1, so the two penetration depths are about equal. For helium at normal temperature and pressure, Pr≈0.66. For typical sound frequencies the thermal penetration depth is ca. 0.1 mm. That means that the thermal interaction between the gas and a solid surface is limited to a very thin layer near the surface. The effect of thermoacoustic devices is increased by putting a large number of plates (with a plate distance of a few times the thermal penetration depth) in the sound field forming a stack. Stacks play a central role in so-called standing-wave thermoacoustic devices.
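The two depths and their relation can be evaluated directly. The property values below (thermal diffusivity of air ≈ 2×10⁻⁵ m²/s, Pr ≈ 0.7) are assumed round numbers for illustration, and the function names are my own:

```python
import math


def thermal_penetration_depth(kappa: float, freq_hz: float) -> float:
    """delta_kappa = sqrt(2 * kappa / omega), with kappa in m^2/s."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * kappa / omega)


def viscous_penetration_depth(delta_kappa: float, prandtl: float) -> float:
    """delta_v = delta_kappa * sqrt(Pr)."""
    return delta_kappa * math.sqrt(prandtl)


# Air at a typical sound frequency of 1000 Hz
d_k = thermal_penetration_depth(2.0e-5, 1000.0)
print(f"thermal: {d_k * 1e3:.3f} mm")  # ≈ 0.08 mm, consistent with "ca. 0.1 mm"
print(f"viscous: {viscous_penetration_depth(d_k, 0.7) * 1e3:.3f} mm")
```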
Acoustic oscillations in a medium are a set of time-dependent properties that may transfer energy along their path. Along the path of an acoustic wave, pressure and density are not the only time-dependent properties; so are entropy and temperature. The temperature changes along the wave can be exploited to play the intended role in the thermoacoustic effect. The interplay of heat and sound is applicable in both conversion directions. The effect can be used to produce acoustic oscillations by supplying heat to the hot side of a stack, and sound oscillations can be used to induce a refrigeration effect by supplying a pressure wave inside a resonator where a stack is located. In a thermoacoustic prime mover, a high temperature gradient along a tube containing a gas induces density variations. Such variations in a constant volume of matter force changes in pressure. The cycle of thermoacoustic oscillation is a combination of heat transfer and pressure changes in a sinusoidal pattern. According to Lord Rayleigh, self-induced oscillations can be encouraged by the appropriate phasing of heat transfer and pressure changes.
The thermoacoustic engine (TAE) is a device that converts heat energy into work in the form of acoustic energy. A thermoacoustic engine operates using the effects that arise from the resonance of a standing wave in a gas. A standing-wave thermoacoustic engine typically has a thermoacoustic element called the "stack": a solid component with pores that allow the operating gas to oscillate while in contact with the solid walls. The oscillation of the gas is accompanied by a change in its temperature. Due to the introduction of solid walls into the oscillating gas, the plate modifies the original, unperturbed temperature oscillations of the gas, in both magnitude and phase, out to about a thermal penetration depth δ = √(2k/ω) away from the plate, where k is the thermal diffusivity of the gas and ω = 2πf is the angular frequency of the wave. The thermal penetration depth is defined as the distance heat can diffuse through the gas during a time 1/ω. In air oscillating at 1000 Hz, the thermal penetration depth is about 0.1 mm. A standing-wave TAE must be supplied with heat to maintain the temperature gradient across the stack; this is done by two heat exchangers, one on each side of the stack.
If we put a thin horizontal plate in the sound field, the thermal interaction between the oscillating gas and the plate leads to thermoacoustic effects. If the thermal conductivity of the plate material were zero, the temperature in the plate would exactly match the temperature profile of Fig. 1b. Consider the blue line in Fig. 1b as the temperature profile of a plate at that position. The temperature gradient in the plate would then equal the so-called critical temperature gradient. If we fixed the temperature at the left side of the plate at ambient temperature Ta (e.g. using a heat exchanger), the temperature at the right would be below Ta. In other words: we have produced a cooler. This is the basis of thermoacoustic cooling, as shown in Fig. 2b, which represents a thermoacoustic refrigerator. It has a loudspeaker at the left. The system corresponds to the left half of Fig. 1b, with the stack in the position of the blue line. Cooling is produced at temperature TL.
It is also possible to fix the temperature of the right side of the plate at Ta and heat up the left side so that the temperature gradient in the plate is larger than the critical temperature gradient. In that case we have made an engine (prime mover), which can e.g. produce sound, as in Fig. 2a. This is a so-called thermoacoustic prime mover. Stacks can be made of stainless-steel plates, but the device also works very well with loosely packed stainless-steel wool or screens. It is heated at the left, e.g. by a propane flame, and heat is released to ambient temperature by a heat exchanger. If the temperature at the left side is high enough, the system starts to produce a loud sound.
Thermoacoustic engines still suffer from some limitations, including that:
- The device usually has a low power-to-volume ratio.
- Very high densities of operating fluids are required to obtain high power densities.
- The commercially available linear alternators used to convert acoustic energy into electricity currently have low efficiencies compared to rotary electric generators.
- Only expensive, specially made alternators can give satisfactory performance.
- TAEs use gases at high pressures to provide reasonable power densities, which imposes sealing challenges, particularly if the mixture contains light gases such as helium.
- The heat-exchange process in a TAE is critical to maintaining the power-conversion process. The hot heat exchanger has to transfer heat to the stack, and the cold heat exchanger has to sustain the temperature gradient across the stack. Yet the space available for them is constrained by the small size of the device and by the blockage they add to the path of the wave. Heat exchange in oscillating media is still under extensive research.
- The acoustic waves inside thermoacoustic engines operated at large pressure ratios suffer from many kinds of non-linearities, such as turbulence, which dissipates energy through viscous effects, and harmonic generation, which carries acoustic power at frequencies other than the fundamental.
The performance of thermoacoustic engines is usually characterized through several indicators, as follows:
- The first and second law efficiencies.
- The onset temperature difference, defined as the minimum temperature difference across the sides of the stack at which the dynamic pressure is generated.
- The frequency of the resultant pressure wave, since this frequency should match the resonance frequency required by the load device, either a thermoacoustic refrigerator/heat pump or a linear alternator.
- The degree of harmonic distortion, indicating the ratio of higher harmonics to the fundamental mode in the resulting dynamic pressure wave.
- The variation of the resultant wave frequency with the TAE operating temperature.
Figure 3 is a schematic drawing of a travelling-wave thermoacoustic engine. It consists of a resonator tube and a loop which contains a regenerator, three heat exchangers, and a bypass loop. A regenerator is a porous medium with a high heat capacity. As the gas flows back and forth through the regenerator, it periodically stores and takes up heat from the regenerator material. In contrast to the stack, the pores in the regenerator are much smaller than the thermal penetration depth, so the thermal contact between gas and material is very good. Ideally, the energy flow in the regenerator is zero, so the main energy flow in the loop is from the hot heat exchanger via the pulse tube and the bypass loop to the heat exchanger at the other side of the regenerator (main heat exchanger). The energy in the loop is transported via a travelling wave as in Fig. 1c, hence the name travelling-wave systems. The ratio of the volume flows at the ends of the regenerator is TH/Ta, so the regenerator acts as a volume-flow amplifier. Just like in the case of the standing-wave system, the machine "spontaneously" produces sound if the temperature TH is high enough. The resulting pressure oscillations can be used in a variety of ways, such as in producing electricity, cooling, and heat pumping.
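The amplifier behaviour mentioned above is simply the temperature ratio across the regenerator. A minimal sketch, with hypothetical operating temperatures chosen for illustration:

```python
def volume_flow_gain(t_hot_k: float, t_ambient_k: float) -> float:
    """Ratio of volume flows at the two ends of the regenerator, TH / Ta."""
    return t_hot_k / t_ambient_k


# Hypothetical values: 900 K hot heat exchanger, 300 K ambient
print(volume_flow_gain(900.0, 300.0))  # 3.0
```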
- K. W. Taconis, J. J. M. Beenakker, A. O. C. Nier, and L. T. Aldrich (1949) "Measurements concerning the vapour-liquid equilibrium of solutions of He3 in He4 below 2.19 °K," Physica, 15 : 733-739.
- Rott, Nikolaus (1980). "Thermoacoustics". Advances in Applied Mechanics. 20: 135. doi:10.1016/S0065-2156(08)70233-3. ISBN 9780120020201.
- Wheatley, John (1985). "Understanding some simple phenomena in thermoacoustics with applications to acoustical heat engines". American Journal of Physics. 53 (2): 147. Bibcode:1985AmJPh..53..147W. doi:10.1119/1.14100.
- Swift, G. W. (1988). "Thermoacoustic engines". The Journal of the Acoustical Society of America. 84 (4): 1145. Bibcode:1988ASAJ...84.1145S. doi:10.1121/1.396617.
- de Waele, A. T. A. M. (2011). "Basic Operation of Cryocoolers and Related Thermal Machines". Journal of Low Temperature Physics. 164 (5–6): 179. Bibcode:2011JLTP..164..179D. doi:10.1007/s10909-011-0373-x.
- K.T. Feldman, Review of the literature on Rijke thermoacoustic phenomena, J. Sound Vib. 7:83 (1968).
- Lord Rayleigh, The Theory of Sound, 2nd edition, Dover, New York, Vol. 2, Sec. 322 (1945).
- N. Rott, Damped and thermally driven acoustic oscillations in wide and narrow tubes, Zeitschrift für Angewandte Mathematik und Physik 20:230 (1969).
- M. Emam, Experimental Investigations on a Standing-Wave Thermoacoustic Engine, M.Sc. Thesis, Cairo University, Egypt (2013). Archived 2013-09-28 at the Wayback Machine.
- G.W. Swift, A unifying perspective for some engines and refrigerators, Acoustical Society of America, Melville (2002).
- Thermoacoustic research at Los Alamos National Laboratory
- M.E.H. Tijani, Loudspeaker-driven thermo-acoustic refrigeration, Ph.D. Thesis, Technische Universiteit Eindhoven, (2001)
- Design Environment for Low-amplitude ThermoAcoustic Energy Conversion | <urn:uuid:96c47e83-75a7-4df0-b10d-2ac5ca2d4b9d> | 3.484375 | 3,919 | Knowledge Article | Science & Tech. | 48.000393 | 95,522,106 |
Myzocallis (Lineomyzocallis) walshii (Monell), a North American aphid species associated with Quercus rubra, was detected for the first time in Europe in 1988 (France), and subsequently in several other countries: Switzerland, Spain, Andorra, Italy, Belgium and Germany. Recent research in 2003-2005 recorded this aphid occurring throughout the Czech Republic. The only host plant was Quercus rubra. The highest aphid populations occurred in old parks and roadside groves in urban areas, whereas populations in forests were low. The seasonal occurrence of the light spring form and the darker summer form of M. (Lineomyzocallis) walshii, as well as their different population peaks, was noted. Four native parasitoid species (Praon flavinode (Haliday), Trioxys curvicaudus Mackauer, T. pallidus Haliday and T. tenuicaudus (Stary)) were reared from M. (Lineomyzocallis) walshii.
Nickel: the essentials
Nickel atoms have 28 electrons and the shell structure is 2.8.16.2. The ground state electronic configuration of neutral nickel is [Ar].3d8.4s2 and the term symbol of nickel is 3F4.
Nickel is found as a constituent in most meteorites and often serves as one of the criteria for distinguishing a meteorite from other minerals. Iron meteorites, or siderites, may contain iron alloyed with from 5 to nearly 20% nickel. The USA 5-cent coin (whose nickname is "nickel") contains just 25% nickel. Nickel is a silvery white metal that takes on a high polish. It is hard, malleable, ductile, somewhat ferromagnetic, and a fair conductor of heat and electricity.
Nickel carbonyl, [Ni(CO)4], is an extremely toxic gas and exposure should not exceed 0.007 mg M-3.
Cartoon by Nick D Kim ([Science and Ink], used by permission).
Nickel: physical properties
Nickel: heat properties
- Melting point: 1728 [1455 °C (2651 °F)] K
- Boiling point: 3186 [2913 °C (5275 °F)] K
- Enthalpy of fusion: 20.5 kJ mol-1
Nickel: atom sizes
- Atomic radius (empirical): 135 pm
- Molecular single bond covalent radius: 110 pm (coordination number 3)
- van der Waals radius: 240 pm
- Pauling electronegativity: 1.91 (Pauling units)
- Allred Rochow electronegativity: 1.75 (Pauling units)
- Mulliken-Jaffe electronegativity: (no data)
Nickel: orbital properties
- First ionisation energy: 737.14 kJ mol‑1
- Second ionisation energy: 1753.03 kJ mol‑1
- Third ionisation energy: 3395.0 kJ mol‑1
Nickel: crystal structure
Nickel: biological data
- Human abundance: 100 ppb by weight
Nickel is an essential trace element for many species. Chicks and rats raised on nickel-deficient diets have liver problems. Enzymes known as hydrogenases in bacteria contain nickel. Nickel is also important in plant ureases.
Reactions of nickel as the element with air, water, halogens, acids, and bases where known.
Nickel: binary compounds
Binary compounds with halogens (known as halides), oxygen (known as oxides), hydrogen (known as hydrides), and other compounds of nickel where known.
Nickel: compound properties
Bond strengths; lattice energies of nickel halides, hydrides, oxides (where known); and reduction potentials where known.
Nickel: history

Nickel was discovered by Axel Fredrik Cronstedt in 1751 in Sweden. Origin of name: from the German word "Kupfernickel" meaning Devil's copper or St Nicholas's (Old Nick's) copper.
Nickel isotopes are used for the production of several radioisotopes. Ni-64 is used for the production of Cu-64 which is used in radioimmunotherapy. Ni-61 can be used for the production of the PET radioisotope Cu-61. Ni-62 is used for the production of the radioisotope Ni-63 which can be used as an XRF source, as an electron capture source in gas chromatographs and as a power source in microelectromechanical systems. Ni-58 can be used for the production of the radioisotope Co-58. Ni-60 is used for the production of Co-57 which is used in bone densitometry and as a gamma camera reference source. Ni-60 is also used as an alternative for the production of Cu-61, but the route via Ni-61 is more common. Finally, most stable Nickel isotopes have been used to study human absorption of Nickel.
Isolation: it is not normally necessary to make nickel in the laboratory as it is readily available commercially. Small amounts of pure nickel can be isolated in the laboratory through the purification of crude nickel with carbon monoxide. The intermediate in this process is the highly toxic nickel tetracarbonyl, Ni(CO)4. The carbonyl decomposes on heating to about 250°C to form pure nickel powder.
Ni + 4CO (50°C) → Ni(CO)4 (230°C) → Ni + 4CO
The Ni(CO)4 is a volatile complex which is easily flushed from the reaction vessel as a gas, leaving the impurities behind. Industrially, the Mond process uses the same chemistry. Nickel oxides are reacted with "water gas" (a mixture of CO and H2). Reduction of the oxide with the hydrogen results in impure nickel. This reacts with the CO component of the water gas to form Ni(CO)4 as above. Thermal decomposition leaves pure nickel metal.
Analytic geometry is a field of geometry which is represented through the use of coordinates which illustrate the relatedness between an algebraic equation and a geometric structure. In both algebra and geometry, the techniques of analytic geometry are used to solve problems.
Circles, lines and points are the most basic geometric structures modelled using analytic geometry. Ordered pairs such as (x,y) are used to represent points, and pairs of points such as (x1,y1) and (x2,y2) are used to describe the line through them, which corresponds to a linear equation; the points give coordinates through which the line passes. In its general form, a line in two-dimensional space is written as ax + by + c = 0.
Furthermore, when dealing with three-dimensional space, a point consists of three values, (x,y,z). Thus, the linear equation representing a set of coordinates in 3-D space is written as ax + by + cz + d = 0. This linear equation represents a plane, not a line.
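A small sketch makes the correspondence between point coordinates and linear equations concrete (the helper names are my own, not from the text):

```python
def line_through(p1, p2):
    """Coefficients (a, b, c) of the line a*x + b*y + c = 0 through two 2-D points."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    return a, b, -(a * x1 + b * y1)


def on_plane(point, coeffs, tol=1e-9):
    """True if a 3-D point (x, y, z) satisfies a*x + b*y + c*z + d = 0."""
    x, y, z = point
    a, b, c, d = coeffs
    return abs(a * x + b * y + c * z + d) < tol


print(line_through((0, 1), (2, 5)))        # (4, -2, 2), i.e. the line y = 2x + 1
print(on_plane((1, 2, 3), (1, 1, 1, -6)))  # True: the plane x + y + z = 6 contains (1, 2, 3)
print(on_plane((0, 0, 0), (1, 1, 1, -6)))  # False
```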
This discussion presents a very brief description of the framework surrounding analytic geometry. It is important to note that analytic geometry has interesting real-world applications. For instance, a modern-day example of analytic geometry is how algebraic equations are input into computers and manipulated to produce geometric structures in the form of on-screen animations.
At the heart of many metabolic processes, including DNA replication, are enzymes called helicases. Acting like motors, these proteins travel along one side of double-stranded DNA, prompting the strands to "zip" apart.
What had been a mystery was the exact mechanics of this vital biological process – how individual helicase subunits coordinate and physically cause the unzipping mechanism.
Cornell researchers led by Michelle Wang, professor of physics and an investigator of the Howard Hughes Medical Institute (HHMI), have observed these processes by manipulating single DNA molecules to watch what happens when helicases encounter them, and how different nucleotides that fuel the reactions affect the process. For their experiments they used an E. coli T7 phage helicase, a type with six distinct subunits, which is a good representation of how many helicases work.
"This is a great demonstration of the power of single-molecule studies," said Wang, whose lab specializes in a technique called optical trapping. To record data from single molecules, the scientists use a focused beam of light to "trap" microspheres attached to the molecules.
Prior to this work, researchers from other labs had found that the nucleotide dTTP (deoxythymidine triphosphate) was a "preferred" fuel for the helicase, and that the helicase apparently wouldn't unzip DNA if ATP (adenosine triphosphate) was provided as fuel. Wang and her colleagues found this puzzling, because ATP is known to be the primary fuel molecule in living organisms.
In their latest work, they discovered that, in fact, ATP does cause unwinding, but only in the single-molecule study could they confirm this. In normal biochemical studies, ATP doesn't seem to work, because it causes helicase to "slip" backward on the DNA, then move forward, then slip again.
In bulk studies, unlike in single-molecule kinetic observations, ATP doesn't produce a signal from unwound DNA because the slippage masks the signal.
They then surmised that different mixtures of nucleotides might allow them to investigate helicase subunit coordination. They found that very small amounts of dTTP mixed with large amounts of ATP were enough to decrease the "slippage" events they saw with the ATP alone.
Further inspection revealed that while two subunits of the T7 helicase are binding and releasing nucleotides, the other four can remain bound to nucleotides to anchor the DNA and prevent it from slipping. It only takes one subunit bound to dTTP to decrease slippage almost entirely – a little goes a long way.
Such studies can help scientists gain a deeper understanding of helicase mechanics and, in the case of medicine, what happens when helicases go awry or don't bind correctly.
Smita Patel, Rutgers University biochemistry professor and paper co-author, says helicase defects are associated with cancer predisposition, premature aging and many other genetics-related conditions.
"This study provides fundamental new knowledge about a cellular process that is essential to all forms of life," said Catherine Lewis, who oversees single-molecule biophysics grants at the National Institute of General Medical Sciences of the National Institutes of Health. "By using single-molecule methods to study how helicases work, Dr. Wang has resolved several longstanding questions about how the enzyme is coordinated, and possibly regulated, during replication."
Along with contributions from researchers at other institutions, the paper's two lead authors are Bo Sun, an HHMI and Cornell postdoctoral associate in physics, and Daniel S. Johnson, a former graduate student.
The Nature paper, "ATP-induced helicase slippage reveals highly coordinated subunits," was funded by the National Institutes of Health, the National Science Foundation and the Cornell Molecular Biophysics Training Grant.
Blaine Friedlander | EurekAlert!
Traditionally, observers play no special role in physics. Like bird watchers in a perfect hide, we observe the outcome of experiments, or gaze at the stars through our telescopes, taking no part in the action. Modern physics, however, tells a different story. It suggests that the very act of observation can influence what is being observed and challenges our classical understanding of observation in other ways too.
In this project, brought to you in collaboration with FQXi, we'll explore the role of observers in physics. With the help of leading experts, we will bring you articles, videos and podcasts that probe current understanding and cutting edge research into this strange and fascinating topic. Enjoy!
The limits of observation — Physics is all about observation, but how much can we actually see? These articles explore some of the limits of observation — be they natural, scientific, political, or down to quantum jelly.
Watch and learn? — Quantum mechanics suggests that the very act of observation can change what is being observed. The very vastness of the cosmos means we need to understand our place in it before we can draw conclusions about it. How can this be? This series of articles and videos introduces basic questions about the role of the observers in physics.
The cosmic microwave background — Few observations have given as profound an insight into the workings of our Universe as the detection of the cosmic microwave background. This faint glow confirms that the Universe started in a Big Bang, tells us what it is made of, what shape it is, what its future is likely to be, and more. Find out how a single picture can tell us so much with this series of articles.
Context is everything — Imagine your weight depended on the colour of your underwear! Something quite similar may be happening when you are measuring things in quantum physics. Find out more about this so-called quantum contextuality in this series of articles and videos.
This project is a collaboration between Plus and FQXi, an organisation that supports and disseminates research on questions at the foundations of physics and cosmology.
The FQXi community website does for physics and cosmology what Plus does for maths:
provide the public with a deeper understanding of known and future discoveries in these areas, and their potential implications for our worldview. | <urn:uuid:eb65cdc5-3e70-47fe-934e-5afa239c49dd> | 3.296875 | 462 | News (Org.) | Science & Tech. | 43.282545 | 95,522,153 |
Chromosomal inversions have long fascinated evolutionary biologists for their role in adaptation and speciation. These structural variants are abundant in natural populations and can have diverse evolutionary consequences. They can cause reproductive isolation through hybrid sterility or protect sets of co-adapted alleles from recombination, while inversions in genes or promoters can disrupt gene expression.
A recent review paper by Maren Wellenreuther and Louis Bernatchez published in TREE comprehensively summarises what is known about the biology of inversions. Wellenreuther & Bernatchez (2018) include a list of about two dozen species where inversions have been characterised in the wild, and come to some interesting conclusions about inversion size (important inversions are often large), age (they’re often old and pre-date speciation), and the evolutionary processes that act on them (balancing selection may be important).
Representative examples of taxa with well-characterised chromosomal inversions, and their evolutionary consequences. From Wellenreuther & Bernatchez (2018).
This paper is a clear reminder of the importance of inversions in evolution. Reading the review, however, made me wonder just how common inversions are in speciation, adaptation, and the maintenance of species differences, or whether their perceived importance is biased by their ease of detection. In many genetic analyses inversions stand out like a sore thumb. They can be spotted in genome-wide alignments, or may be seen as large regions of divergence in outlier scans. Inversions are also obvious in genetic maps where you see clusters of markers with reduced recombination. Making a causal link between inversions and adaption is harder, but there are many cases where quantitative trait loci (QTL) co-occur with inversions and implicate their role in adaptation (e.g. Lowry & Willis, 2010).
This question about the ubiquity of inversions in speciation studies is partly addressed in a recent paper by Davey et al. published in Evolution Letters. Davey et al. (2017) investigated whether species barriers between closely-related hybridizing taxa are always maintained by inversions, and answered this with a resounding “no”. While this shouldn’t come as much of a surprise, it is useful case study that brings balance to the recent inversion-heavy evolutionary literature.
In their study, Davey et al. (2017) use genetic mapping, genome assemblies, and long-read sequencing to look for inversions that differ between two sympatric species of the widely studied butterfly genus Heliconius. They showed hybridising H. cydno and H. melpomene completely lack any large inversions (over 50Kb in length) that are likely to be involved in maintaining co-adapted gene complexes. Their use of high quality genome assemblies and long-read sequence data make it unlikely that any large inversions would be overlooked. The authors conclude that: “This suggests that hybridization is rare enough and mate preference is strong enough that inversions are not necessary to maintain the species barrier”.
So, how often are inversions involved in speciation? Clearly we need more studies investigating the genomic basis of species differences in order to answer this question. But we should also remember that the prevalence of studies implicating inversions in adaptation and speciation is likely (at least in part) to be due to the ease of detecting inversions, and the methodological constraints in finding individual loci underlying divergence. In Heliconius butterflies, and no doubt many other organisms, divergence has occurred in the face of homogenizing gene flow despite the absence of major inversions. Future studies investigating the genetics of speciation and adaptation should broaden the search to other genic modifiers of recombination rate (reviewed in Ortiz-Barrientos et al. 2016), to more fully understand how the recombination landscape influences speciation, while also continuing the search for elusive speciation genes.
Davey JW, Barker SL, Rastas PM, et al. (2017) No evidence for maintenance of a sympatric Heliconius species barrier by chromosomal inversions. Evolution Letters 1, 138-154.
Lowry DB, Willis JH (2010) A widespread chromosomal inversion polymorphism contributes to a major life-history transition, local adaptation, and reproductive isolation. PLoS Biol 8, e1000500.
Ortiz-Barrientos D, Engelstädter J, Rieseberg LH (2016) Recombination rate evolution and the origin of species. Trends in Ecology & Evolution 31, 226-236.
Wellenreuther M, Bernatchez L (2018) Eco-evolutionary genomics of chromosomal inversions. Trends in Ecology & Evolution 33, 427-440. | <urn:uuid:826a4521-aa78-4268-b0fa-6e55fdafe951> | 2.84375 | 991 | Knowledge Article | Science & Tech. | 22.421613 | 95,522,165 |
High-flying kites tethered to generators could supply as much as 100 megawatts of electricity, enough to power 100,000 homes, according to researchers from the Delft University of Technology in The Netherlands.
The scientists have recently demonstrated that flying a single 10-square-meter kite could produce 10 kilowatts of power, which could supply electricity for about 10 homes.
In their next experiment, the researchers plan to test a 50-kilowatt version, called Laddermill. Eventually, their goal is to build a multi-kite system that could generate a full 100 megawatts.
As project leader and professor of sustainable engineering Wubbo Ockels explains, kites generate power by pulling on their strings that are attached to generators on the ground. After reaching their maximum height, the kites are reeled back down to repeat the process.
Electricity produced by kites in the wind could be inexpensive, too. The researchers predict prices to be comparable with generating electricity using coal power, and half that of using wind turbines.
One advantage of kites is their potential height. Commercial windmills generally reach heights of around 80 meters, where the average wind speed is about 5 meters per second. At higher altitudes, such as 800 meters, the average wind speed is about 7 meters per second. Because the amount of power available from the wind is related to the cube of its speed, blades at higher altitudes could generate up to five times as much electricity as blades at lower altitudes. High-altitude wind is also generally more reliable than ground-level wind.
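The cube-law claim is easy to sanity-check with a short sketch (my own arithmetic, not from the article). Note that the quoted average speeds of 5 and 7 m/s give a factor of (7/5)³ ≈ 2.7, so the article's "up to five times" presumably reflects stronger peak winds rather than the averages:

```python
# Available wind power scales with the cube of wind speed: P ∝ v³.
def power_ratio(v_high, v_low):
    """Factor by which available wind power grows between two wind speeds."""
    return (v_high / v_low) ** 3

# Average speeds quoted in the article: ~5 m/s at 80 m, ~7 m/s at 800 m.
print(round(power_ratio(7.0, 5.0), 2))  # -> 2.74
```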
While building an 800-meter-tall windmill would be impractical, a kite can easily reach that height, and take advantage of the higher wind speeds. The Dutch scientists note that a high-speed jet stream makes countries such as the UK, The Netherlands, Ireland, and Denmark especially good locations for kites.
Using computer models, researchers can determine how to configure kites so that they get the most out of the wind. Ockels' system used figure-eight flying patterns developed by Allister Furey of the University of Sussex, an arrangement that increases the speed of the air flowing over the kites. He's also investigating a yo-yo configuration, in which one kite goes up as another falls from the sky like a glider.
"Pretty much anywhere in the UK you could run a kite plant economically, but you couldn't run a wind turbine economically," said Furey.
Several other scientists are investigating the use of kites to harness energy from the wind - which some researchers estimate provides more than 100 times the amount required to power the entire planet. In 2007, Google's philanthropic arm invested about $10 million in a US kite company called Makani. An Italian company called Kitegen has a multi-kite scheme that could generate a gigawatt of power, as much as a standard coal plant.
via: The Guardian and EcoGeek
Fluidization of Soft Estuarine Mud by Waves
The interaction between waves and soft muddy bottoms, a key process in governing estuarine and lacustrine cohesive sediment transport, is at present not well understood. What is quite well known, however, is that waves are significantly important in generating fluid mud, a high concentration near-bed slurry, which thereby becomes potentially available for transport by tidal currents. It follows that the precise mechanism by which fluid mud is formed by wave action over cohesive, porous solid beds is of evident interest in understanding and interpreting the microfabric of flow-deposited fine sediments in shallow waters. Results from preliminary laboratory tests described using known soil mechanical principles shed some light along these lines, and suggest that the fluidization process may be even more significant in generating potentially transportable sediment than previously realized.
Keywords: Pore Pressure; Excess Pore Pressure; Suspended Sediment Concentration; Laboratory Flume; Measured Pore Pressure
The cosmic expansion of our Universe really is accelerating; the mystery is why. The most popular explanation is the dark energy hypothesis, which itself rests on another supposition, the Big Bang. According to Newtonian mechanics with the corrections of Einstein's relativity, the velocities of galaxies must decrease or at least remain unchanged. This article argues that Newton's law of gravitation gives rise to two distinct phenomena in the Universe: (1) mutual attraction of bodies, and (2) accelerating expansion of the void. No dark energy is necessary.
Comments: 4 pages
[v1] 2018-06-16 01:59:49
Authors: Jozef Radomanski
The current article is the next stage in the construction of an alternative Special Relativity. The problems of the Special Theory of Relativity (STR) arise from its colliding assumptions. The correct postulates of STR do not match the seemingly obvious assumption that space-time is real. In our articles, we try to prove the hypothesis that space-time has a complex structure, and that it is real only locally, in a system related to an observer. The misleading impression that space-time is real results from the fact that the information carrier is energy, which is always real. This article shows how complex space-time phenomena can be seen by the observer in his real coordinate system. The concept of realisation of the complex orthogonal paravector, which represents a compound boost, to the form of a real velocity paravector is introduced on the basis of energy equivalence. The realisation of the coordinates of the state paravectors of any object is interpreted as a choice of real coordinates in the observer's frame, which in classical mechanics corresponds to choosing the object's axis of motion along which the coordinates of the motion of another object are resolved. The mathematical properties of realisation are studied, and an attempt is made to apply it to the description of physical object states.
Comments: 26 Pages.
[v1] 2018-07-02 15:52:01
1159. If you were out in space and could see every individual person clearly, would it look like they were walking at a slant? — KD, McMinnville, OR
To the astronauts orbiting the earth, up and down have very little meaning. Because they are falling all the time, these astronauts have no feeling of weight and can't tell up from down without looking. If an astronaut were to look at a person walking on the ground below, that person might easily appear at a strange angle, depending on the astronaut's orientation and point of view. | <urn:uuid:abd97bf5-dbc5-4cd9-9409-c49eb6c7d42a> | 3.0625 | 114 | Q&A Forum | Science & Tech. | 59.56575 | 95,522,229 |
Grid Column Definition. When the object is serialized out as XML, its qualified name is w:gridCol.
Assembly: DocumentFormat.OpenXml (in DocumentFormat.OpenXml.dll)
'Declaration
Public Class GridColumn _
    Inherits OpenXmlLeafElement
'Usage
Dim instance As GridColumn
public class GridColumn : OpenXmlLeafElement
[ISO/IEC 29500-1 1st Edition]
17.4.16 gridCol (Grid Column Definition)
This element specifies the presence and details about a single grid column within a table grid. A grid column is a logical column in a table used to specify the presence of a shared vertical edge in the table. When table cells are then added to this table, these shared edges (or grid columns, looking at the column between those shared edges) determine how table cells are placed into the table grid.
[Example: If a table row specifies that it is preceded by two grid columns, then it would start on the third vertical edge in the table including edges which are not shared by all columns. end example]
If the table grid does not match the requirements of one or more rows in the table (i.e. it does not define enough grid columns), then the grid can be redefined as needed when the table is processed.
[Example: Consider the following, more complex table that has two rows and two columns; as shown, the columns are not aligned:
This table is represented by laying out the cells on a table grid consisting of three table grid columns as follows, each grid column representing a logical vertical column in the table:
The dashed lines represent the virtual vertical continuations of each table grid column, and thus resulting table grid is represented as the following in WordprocessingML:
<w:tblGrid> <w:gridCol w:w="5051" /> <w:gridCol w:w="3008" /> <w:gridCol w:w="1531" /> </w:tblGrid>
tblGrid (§17.4.49); tblGrid (§17.4.48)
w (Grid Column Width)
Specifies the width of this grid column.
[Note: This value does not solely determine the actual width of the resulting grid column in the document. When the table is displayed in a document, these widths determine the initial width of each grid column, which can then be overridden by the table layout algorithm and by any exact cell widths specified on individual table cells. end note]
This value is specified in twentieths of a point.
If this attribute is omitted, then the last saved width of the grid column is assumed to be zero.
[Example: Consider the following table grid definition:
<w:tblGrid> <w:gridCol w:w="6888"/> <w:gridCol w:w="248"/> <w:gridCol w:w="886"/> <w:gridCol w:w="1554"/> </w:tblGrid>
This table grid specifies four grid columns, each of which has an initial size of 6888 twentieths of a point, 248 twentieths of a point, 886 twentieths of a point, and 1554 twentieths of a point respectively. end example]
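The markup in the example above can be generated mechanically. Below is a minimal sketch in Python (using the standard library rather than the Open XML SDK this page documents; the helper name make_tbl_grid is mine), building a w:tblGrid element from a list of widths and converting a twentieths-of-a-point value to points:

```python
# Build the <w:tblGrid> fragment from the example above with ElementTree.
# Element and attribute names follow the WordprocessingML markup shown
# in this section; widths are in twentieths of a point (twips).
import xml.etree.ElementTree as ET

W_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
ET.register_namespace("w", W_NS)

def make_tbl_grid(widths_twips):
    """Return a w:tblGrid element with one w:gridCol per width."""
    grid = ET.Element(f"{{{W_NS}}}tblGrid")
    for w in widths_twips:
        col = ET.SubElement(grid, f"{{{W_NS}}}gridCol")
        col.set(f"{{{W_NS}}}w", str(w))
    return grid

widths = [6888, 248, 886, 1554]
grid = make_tbl_grid(widths)
print(ET.tostring(grid, encoding="unicode"))

# Convert the first column's width to points: 6888 twips / 20 = 344.4 pt.
print(widths[0] / 20)  # -> 344.4
```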
The possible values for this attribute are defined by the ST_TwipsMeasure simple type (§184.108.40.206).
[Note: The W3C XML Schema definition of this element’s content model (CT_TblGridCol) is located in §A.1. end note]
© ISO/IEC 29500:2008.
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. | <urn:uuid:6a807f41-5a44-48d7-b878-8db2d7216379> | 3.296875 | 786 | Documentation | Software Dev. | 62.516441 | 95,522,233 |
Genetics Breakthrough Gives Sustainable Biofuels A Big Boost
News Apr 20, 2017 | Original story from Donald Danforth Plant Science Center
Credit: Steve Jurvetson
Researchers at the Enterprise Rent-A-Car Institute for Renewable Fuels at the Donald Danforth Plant Science Center have discovered a gene that influences grain yield in grasses related to food crops. Four mutations were identified that could impact candidate crops for producing renewable and sustainable fuels.
In a paper published April 18, 2017 in Nature Plants, a team led by Thomas Brutnell, Ph.D., Director of the Enterprise Institute for Renewable Fuels at the Danforth Center, and researchers at the U.S. Department of Energy Joint Genome Institute (DOE JGI), a DOE Office of Science User Facility, conducted genetic screens to identify genes that may play a role in flower development on the panicle of green foxtail. Green foxtail is a wild relative of the common crop foxtail millet. These Setaria species are related to several candidate bioenergy grasses including switchgrass and Miscanthus and serve as grass model systems to study grasses that photosynthetically fix carbon from CO2 through a water-conserving (C4) pathway. The genomes of both green foxtail and foxtail millet have been sequenced and annotated through the DOE JGI's Community Science Program.
“We have identified four recessive mutants that lead to reduced and uneven flower clusters,” said Pu Huang, Ph.D., the lead author of the paper. “By ultimately identifying the gene in green foxtail we identified a new determinant in the control of grain yield that could be crucial to improving food crops like maize.”
The grass Setaria has been proposed as a model for food and bioenergy crops for its short stature and rapid life cycle, compared to most bioenergy grasses. After constructing a mutant population resource for the grass, the Brutnell lab screened 2,700 M2 families, deep sequenced a mutant pool to identify the causative mutation and confirmed a homologous gene in maize played a similar role.
“Identifying this new player in panicle architecture may enable the design of plants with either enhanced or reduced panicle structures,” stated Brutnell. “For instance, maize breeding has selected for reduced male panicles, also known as tassels, to reduce shading in the field while still producing sufficient pollen. However, grain yields in sorghum are directly related to the architecture of the panicle. By showing that this gene influences panicle architecture in Setaria and maize, we have expanded the tool box for breeders.”
At the Danforth Center, plants hold the key to discoveries and products that will enrich and restore both the environment and the lives of people around the globe. Brutnell’s lab research includes the search for the next generation of biofuels: alternative sources of energy that are affordable, sustainable and ecologically sound. The research develops novel computational tools and model systems to identify genes that will improve yield in crops through enhanced photosynthesis.
Milky Way could be home to 4.5 billion Earth-like planets
Astronomers have calculated that 6 percent of our galaxy's most common type of star probably host temperate, Earth-sized planets, meaning that a habitable alien Earth could be just a dozen light years away.
David A. Aguilar (CfA)
Billions of Earth-like alien planets likely reside in our Milky Way galaxy, and the nearest such world may be just a stone's throw away in the cosmic scheme of things, a new study reports.
Astronomers have calculated that 6 percent of the galaxy's 75 billion or so red dwarfs — stars smaller and dimmer than the Earth's own sun — probably host habitable, roughly Earth-size planets. That works out to at least 4.5 billion such "alien Earths," the closest of which might be found a mere dozen light-years away, researchers said.
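The headline count follows directly from the two figures just quoted; a one-line sanity check (variable names are mine):

```python
# The study's estimate: 6 percent of ~75 billion red dwarfs host a
# roughly Earth-size planet in the habitable zone.
red_dwarfs = 75e9
habitable_fraction = 0.06
print(f"{red_dwarfs * habitable_fraction:.1e}")  # -> 4.5e+09
```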
"We thought we would have to search vast distances to find an Earth-like planet," study lead author Courtney Dressing, of the Harvard-Smithsonian Center for Astrophysics (CfA), said in a statement. "Now we realize another Earth is probably in our own backyard, waiting to be spotted."
Kepler's keen eye
Dressing and her team analyzed data gathered by NASA's prolific Kepler space telescope, which is staring continuously at more than 150,000 target stars. Kepler spots alien planets by flagging the tiny brightness dips caused when the planets transit, or cross the face of, their stars from the instrument's perspective. [Gallery: A World of Kepler Planets]
Kepler has detected 2,740 exoplanet candidates since its March 2009 launch. Follow-up observations have confirmed only 105 of these possibilities to date, but mission scientists estimate that more than 90 percent will end up being the real deal.
In the new study, Dressing and her colleagues re-analyzed the red dwarfs in Kepler's field of view and found that nearly all are smaller and cooler than previously thought.
This new information bears strongly on the search for Earth-like alien planets, since roughly 75 percent of the galaxy's 100 billion or so stars are red dwarfs.
Further, scientists determine the sizes of transiting exoplanets by comparison to their parent stars, based on how much of the stars' disks the planets blot out when transiting. So a reduction in a star's calculated size brings a planet's size down, too — in some cases, perhaps into the realm of rocky worlds with a solid, potentially life-supporting surface.
And the size and location of a star's "habitable zone," the range of distances that could support the existence of liquid water on a planet's surface, are strongly tied to stellar brightness and temperature.
Re-analyzing the data
The researchers determined that 95 Kepler exoplanet candidates orbit red dwarfs. Using this information and their newly calculated stellar (and planetary) profiles, the team calculated that about 60 percent of red dwarfs likely host worlds smaller than Neptune.
Dressing and her colleagues then determined that Kepler has spotted three roughly Earth-size exoplanet candidates in the habitable zones of their parent red dwarfs.
One of these worlds is Kepler Object of Interest (KOI) 1422.02. This candidate's newly calculated size is 90 percent that of Earth, and it circles its star every 20 days. If the planet (and these characteristics) are confirmed, KOI 1422.02 may be the first "alien Earth" ever discovered.
The other two candidates are KOI 2626.01, another potential Earth twin that's 1.4 times bigger than Earth and has a 38-day orbit; and KOI 854.01, a world 1.7 times the size of Earth with a 56-day orbit.
All three candidates are located between 300 and 600 light-years from Earth and circle stars with surface temperatures between 5,700 and 5,900 degrees Fahrenheit (3,150 and 3,260 degrees Celsius), researchers said. (For comparison, the Earth's sun has a surface temperature of about 10,000 degrees Fahrenheit, or 5,540 degrees Celsius.)
Billions of Earth-like planets
The team further determined that about 6 percent of the Milky Way's red dwarfs should harbor roughly Earth-size planets in their habitable zones, meaning that at least 4.5 billion such worlds may be scattered throughout our galaxy.
"We now know the rate of occurrence of habitable planets around the most common stars in our galaxy," co-author David Charbonneau, also of CfA, said in a statement. "That rate implies that it will be significantly easier to search for life beyond the solar system than we previously thought." [9 Exoplanets That Could Host Alien Life]
That search may bear fruit right in Earth's backyard, researchers said.
"According to our analysis, the closest Earth-like planet is likely within 13 light-years, which is right next door in terms of astronomical distances," Dressing told SPACE.com via email.
"The knowledge that another Earth-like planet might be so close is incredibly exciting and bodes well for the next generation of missions designed to detect nearby Earth-like planets," she added. "Once we find nearby Earth-like planets, astronomers are eager to study them in detail with the James Webb Space Telescope and proposed extremely large ground-based telescopes like the Giant Magellan Telescope."
Red dwarfs are also longer-lived than stars like the sun, suggesting that some planets in red dwarf habitable zones may harbor life that's been around a lot longer than that on Earth, which first took root about 3.8 billion years ago.
"We might find an Earth that’s 10 billion years old," Charbonneau said.
The nearest red dwarf is Proxima Centauri, part of the three-star Alpha Centauri system that's just 4.3 light-years or so from Earth. Late last year, astronomers announced the discovery of an Earth-size planet orbiting the system's Alpha Centauri B, but it's far too hot to host life as we know it.
Scientists have also detected five planetary candidates circling the star Tau Ceti, which lies 11.9 light-years away. Two of these potential planets may reside in the habitable zone, but they are at least 4.3 and 6.6 times as massive as Earth, scientists say.
The new study will be published in The Astrophysical Journal.
Copyright 2013 SPACE.com, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. | <urn:uuid:90cc6b6d-93c3-45de-8a99-15dfcbce0e21> | 3.46875 | 1,407 | News Article | Science & Tech. | 56.834647 | 95,522,296 |
Astronauts help corals grow on the ocean floor off Florida
Jul. 16, 2017
KEY LARGO, Fla. (AP) — Astronauts are helping coral grow on the ocean floor off Florida.
Marine scientists at Florida International University are studying corals growing in deep waters. The Miami-area school reported recruiting NASA astronauts to plant a coral nursery 90 feet (27 meters) below the ocean's surface in the Florida Keys National Marine Sanctuary.
NASA trains astronauts at FIU's Aquarius Reef Base, an underwater laboratory in the sanctuary. FIU researcher Mauricio Rodriguez-Lanetty said dives from the surface would be too short and risky to accomplish much in the nursery. But divers living in the pressurized lab for days or weeks at a time can work longer in deeper waters.
Astronauts planted the endangered corals on tree-like structures made from plastic pipes in 2015. They've continued working in the nursery during NASA training missions over the last two years. | <urn:uuid:38bc2de5-0c32-435c-8791-0ad3a6228b93> | 3.1875 | 202 | News Article | Science & Tech. | 45.157494 | 95,522,298 |
For many people, a monsoon brings to mind images of intense rainfall and high winds in faraway places. Actually, monsoons occur all over the globe, including North America. These seasonal reversals of winds trigger dramatic changes in rainfall and other weather events within a short period of time.
The North American monsoon affects large areas of the southwestern United States and northwestern Mexico. This rainy season brings with it much more than torrential downpours from July to mid-September. The North American monsoon is one of the key natural events that defines the warm-season climate over the region. It is important that researchers better understand the key physical processes at play that determine the life cycle of the monsoon. That knowledge should make it possible to forecast warm-season rainfall over North America more accurately.
Throughout the summer of 2004, researchers from NASA and other U.S. government agencies led by the National Oceanic and Atmospheric Administration (NOAA) joined an international team of scientists from Mexico, Belize and Costa Rica to carry out an intensive field campaign as part of the North American Monsoon Experiment (NAME). NAME is a study aimed at improving the ability to observe and simulate monsoons over North America. The early findings from NAME were published in a recent issue of the Bulletin of the American Meteorological Society.
Rob Gutro | EurekAlert!
The technique, combined with the expected discovery of millions more far-away quasars over the next decade, could yield an unprecedented look back to a time shortly after the Big Bang, when the universe was a fraction the size it is today.
Researchers found the key while analyzing the visible light from a small group of quasars.
Patterns of light variation over time were consistent from one quasar to another when corrected for the quasar's redshift. This redshift occurs because an expanding universe carries the quasars away from us, thus making the light from them appear redder (hence the term), and also making the time variations appear to occur more slowly.
Turning this around, by measuring the rate at which a quasar's light appears to vary and comparing it to the standard rate at which the sampled quasars actually vary, the researchers were able to infer the quasar's redshift.
Knowing the quasar redshift enables the scientists to calculate the relative size of the universe when the light was emitted, compared to today.
"It appears we may have a useful tool for mapping out the expansion history of the universe," said Glenn Starkman, a physics professor at Case Western Reserve and an author of the study, published this summer in Physical Review Letters.
"If we could measure the redshifts of millions of quasars, we could use them to map the structures in the universe out to a large redshift."
The larger the redshift, the farther and older the light source.
The group plans to seek larger samples of quasars, to confirm the patterns are consistent and can be used to calculate their redshifts everywhere across the universe.
The work was led by De-Chang Dai, who earned his PhD working with Starkman and was most recently a member of the Astrophysics, Cosmology and Gravity Centre, University of Cape Town. The other authors include Amanda Weltman, PhD, a senior cosmology lecturer at the Centre, and brothers Branislav Stojkovic, a doctoral student in computer science and engineering, and Dejan Stojkovic, a physics professor at the State University of New York at Buffalo. Dejan Stojkovic also earned his PhD with Starkman and was later a visiting assistant professor at Case Western Reserve.
The scientists graphed the amount of light from 14 quasars recorded by the Massive Compact Halo Objects project, which sought evidence of dark matter in and around the Milky Way. Light from each quasar was measured repeatedly over hundreds of days.
Graphing revealed phases during which the amount of light would either increase or decrease in a linear fashion over an extended period of time.
Although other properties varied, the rate at which the measurable light changed was nearly identical among all 14 quasars, once scientists corrected for the effects of the universe's expansion.
"It's as if there was a dimmer switch on them with someone turning it to the left then the right," Starkman said. "The overall trend was surprisingly consistent."
This consistency of patterns enabled the scientists to accurately calculate the cosmological redshift of one quasar from another.
The researchers tested this capability in two ways.
They fit segments of the light curves, that is, the measured light over time, to straight lines. The slopes of the lines were consistent and appeared to be directly related to the quasars' redshifts.
By comparing corresponding slopes of 13 quasars with a known redshift value to the slopes of one other quasar, the researchers could calculate the redshift of the lone quasar within two percentage points.
In a second approach, the researchers took large sections of the light curves of two quasars and concentrated on the segments that matched most closely. By varying the ratio of the redshifts of the two quasars to try to get the best possible match of the two light curves, they were able to determine the ratio of the quasars' redshifts to within 1.5 percentage points.
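The idea behind both approaches can be sketched numerically. This is a toy illustration only: cosmological time dilation stretches an observed variation by a factor of (1 + z), so the ratio of a reference slope to an observed slope estimates (1 + z). The slope values below are invented for demonstration and are not taken from the study.

```python
def infer_redshift(reference_slope, observed_slope):
    """Time dilation stretches a quasar's variations by (1 + z),
    so the observed slope appears shallower by that same factor."""
    return reference_slope / observed_slope - 1.0

# Hypothetical numbers: quasars at rest brighten at 0.010 mag/day,
# and a distant quasar is observed brightening at only 0.004 mag/day.
z = infer_redshift(0.010, 0.004)
print(round(z, 2))  # 1.5
```

In practice the researchers fit noisy light-curve segments rather than taking a single ratio, which is why their quoted uncertainties are a few percentage points rather than exact.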
Astronomers have used the bright light of supernovae with redshifts up to about 1.7 to measure the accelerating expansion of the universe. A star with a redshift of 1.7 would have been emitting that light when the universe was 2.7 times smaller than today.
Quasars are older and farther away and have been measured with redshifts of up to 7.1, which means they emitted the light we are seeing when the universe was as small as one-eighth the size it is today.
If this method of determining quasar redshifts proves applicable to higher redshift quasars, scientists could have millions of markers to trace the growth and evolution of structure and the expansion of the universe out to large distances and early times.
"This could help us learn about how gravity has assembled structure in the universe." Starkman said. "And, the rate of structure growth can help us determine whether dark energy or modified laws of gravity drive the accelerated expansion of the universe."
Kevin Mayhood | EurekAlert!
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
Subaru Telescope helps pinpoint origin of ultra-high energy neutrino
16.07.2018 | National Institutes of Natural Sciences
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Wide-field microscopy is any technique that illuminates the entire field of view simultaneously and collects the resulting image on a camera. Wide-field techniques can be further classified as either 'epi' or 'trans', depending on whether the illumination and signal collection occur on the same (epi) or opposite (trans) sides of the sample. Trans-illumination techniques require a lens on each side of the sample: one termed the condenser, which focuses the illuminating light, and one termed the objective, which forms the image. In epi-illumination techniques, a single lens acts as both the condenser and objective. Wide-field techniques are highly sensitive and so are capable of high frame rates (100+ fps) as well as very long-term time-lapse imaging. However, these techniques also collect signal coming from all depths in the sample, and so out-of-focus light can greatly reduce image contrast.
Wide-field: Bright-field Microscopy
Bright field is the oldest of all microscopic techniques. The image is produced via absorption as light passes through the sample. Bright-field images of unstained biological samples tend to have low contrast, because most cells are not strongly light-absorbing.
Wide-field: Phase-contrast Microscopy
Phase contrast generates an image based on abrupt changes in a sample's refractive index (RI), a measure of 'optical density'. These optical edges cause light to diffract (bend) in many directions, where the amount of bending depends on the degree and abruptness of the RI change. Simply speaking, phase contrast measures how much light is bent at each location in the sample relative to how much light was not bent. Physically, this comparison occurs via an induced interference between the diffracted and undiffracted light. This technique enhances contrast in many biological samples, but also results in a 'halo' effect where large changes in RI also cause scatter.
Wide-field: Differential Interference Contrast Microscopy
Like phase contrast, differential interference contrast (DIC) (a.k.a. Nomarski interference contrast) generates an image based on changes in the sample's refractive index (RI). However, DIC employs a very different mechanism: DIC simultaneously acquires two images of the sample using distinct polarities, with one image being slightly offset relative to the other. The images are then re-aligned and converted back to the same polarity where they are compared via interference. DIC gives even better contrast than phase and without the 'halo' effect. It also gives an interesting 'shadow' impression, though these 'shadows' are due to the direction of the offset between the two intermediate images and have no relationship to the sample's vertical topography.
Wide-field: Epifluorescence Microscopy
Epifluorescence microscopy is an extremely popular way to visualize fluorescent probes in biological samples. As the name suggests, epifluorescence employs epi-illumination and so a single lens (the objective) both illuminates the sample and collects the fluorescent emissions. Filters restrict the excitation light to a small range of wavelengths suitable to excite fluorophores in the sample, while other filters sort out the longer wavelength fluorescence emissions before they reach the camera. A major drawback of epifluorescence microscopy is out-of-focus background emissions, or 'flare', which greatly degrades image contrast (and therefore resolution).
A main drawback of wide-field microscopy is that out-of-focus light greatly degrades image contrast and thus effective resolution. The ability to remove out-of-focus light is a tremendous imaging advantage, and many technologies have been developed to create optical sections. Point-scanning techniques illuminate only one diffraction-limited spot (~200 nm diameter) in the focal plane, while rotating mirrors traverse this point back and forth in a raster pattern across the sample to create an image. Light emitted (or reflected) from each point then travels back through the objective and mirrors before reaching a detector.
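The ~200 nm spot size quoted above follows from the standard Abbe diffraction limit, d = λ / (2·NA). A quick check, with a wavelength and numerical aperture chosen as plausible example values (they are not taken from the text):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: smallest resolvable spot diameter, in nm."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green excitation light through a high-NA oil-immersion objective:
print(abbe_limit_nm(500.0, 1.25))  # 200.0 (nm)
```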
Point-Scanning: Confocal Microscopy
Because the illuminating spot of light is created by focusing, the light spreads into a cone above and below the focal plane. Thus, out-of-focus sample regions are illuminated and contribute (unwanted) emissions, though their intensity is less than in wide-field mode. A second effect completes the optical section: before reaching the detector, all fluorescence emissions pass through a very small aperture located in a conjugate image plane (i.e. a location outside the sample where light from the sample is also in focus). Said in an abbreviated way, the focal plane within the sample and the pinhole outside the sample are 'confocal'. Thus, only emissions from the focal plane can pass through the pinhole, while out-of-focus (i.e. spatially diffuse) emission can not fit through the aperture. Together, the combination of reduced out-of-focus illumination and blocking of out-of-focus emissions produces a crisp optical section.
Point-Scanning: Multiphoton Microscopy
Like confocal, multiphoton microscopy also creates an optical section, but does so using entirely different mechanisms. Multiphoton microscopy relies on very short (100 fs), but very intense bursts of light to induce an effect called multiphoton absorption. In the most likely scenario, two photons interact with a dye's electron simultaneously to trigger fluorescence emissions. Such events are highly improbable and so occur to an appreciable degree only within the spot of light in the focal plane, where the light is most concentrated. The result is that zero out-of-focus emissions are generated and so all emissions can be detected directly (i.e. no pinhole is needed). Scattered emissions (non-ballistic) can also usefully contribute to the image, because their point of origin is known. An additional benefit is that since two photons impart their energy to an electron simultaneously, each needs only one half of the energy typically required to trigger fluorescence emissions. Thus, infrared light (700-1000 nm) can trigger fluorescence from visible wavelength dyes. This property is useful because tissue is more transparent to infrared light. Together, these effects contribute to multiphoton's ability to image far (10x) deeper into scattering tissues than is possible using confocal microscopy.
Paris (AFP) - Ocean levels rose 50 percent faster in 2014 than in 1993, with meltwater from the Greenland ice sheet now supplying 25 percent of total sea level increase compared with just five percent 20 years earlier, researchers reported Monday.
The findings add to growing concern among scientists that the global watermark is climbing more rapidly than forecast only a few years ago, with potentially devastating consequences.
Hundreds of millions of people around the world live in low-lying deltas that are vulnerable, especially when rising seas are combined with land sinking due to depleted water tables, or a lack of ground-forming silt held back by dams.
Major coastal cities are also threatened, while some small island states are already laying plans for the day their drowning nations will no longer be livable.
"This result is important because the Intergovernmental Panel on Climate Change (IPCC)" -- the UN science advisory body -- "makes a very conservative projection of total sea level rise by the end of the century," at 60 to 90 centimetres (24 to 35 inches), said Peter Wadhams, a professor of ocean physics at the University of Oxford who did not take part in the research.
That estimate, he added, assumes that the rate at which ocean levels rise will remain constant.
"Yet there is convincing evidence -- including accelerating losses of mass from Greenland and Antarctica -- that the rate is actually increasing, and increasing exponentially."
Greenland alone contains enough frozen water to lift oceans by about seven metres (23 feet), though experts disagree on the global warming threshold for irreversible melting, and how long that would take once set in motion.
"Most scientists now expect total rise to be well over a metre by the end of the century," Wadhams said.
The new study, published in Nature Climate Change, reconciles for the first time two distinct measurements of sea level rise.
The first looked one-by-one at three contributions: ocean expansion due to warming, changes in the amount of water stored on land, and loss of land-based ice from glaciers and ice sheets in Greenland and Antarctica.
- 'A major warning' -
The second was from satellite altimetry, which gauges heights on the Earth's surface from space.
The technique measures the time taken by a radar pulse to travel from a satellite antenna to the surface, and then back to a satellite receiver.
Up to now, altimetry data showed little acceleration in sea level rise over the last two decades, even if other measurements left little doubt that oceans were deepening more quickly.
"We corrected for a small but significant bias in the first decade of the satellite record," co-author Xuebin Zhang, a researcher at the Centre for Southern Hemisphere Oceans Research in Hobart, Tasmania, told AFP.
Overall, the pace of global average sea level rise went up from about 2.2 millimetres a year in 1993, to 3.3 millimetres a year two decades later.
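As a sanity check, the "50 percent faster" figure in the opening paragraph follows directly from these two rates:

```python
# Rates quoted above, in millimetres per year.
rate_1993 = 2.2
rate_2014 = 3.3

increase = (rate_2014 - rate_1993) / rate_1993
print(f"{increase:.0%}")  # 50%
```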
In the early 1990s, they found, thermal expansion accounted for fully half of the added millimetres. Two decades later, that figure was only 30 percent.
Andrew Shepherd, director of the Centre for Polar Observation and Modelling at the University of Leeds in England, urged caution in interpreting the results.
"Even with decades of measurements, it is hard to be sure whether there has been a steady acceleration in the rate of global sea level rise during the satellite era because the change is so small," he said.
Disentangling single sources -- such as the massive chunk of ice atop Greenland -- is even harder.
But other researchers said the study should sound an alarm.
"This is a major warning about the dangers of a sea level rise that will continue for many centuries, even after global warming is stopped," said Brian Hoskins, chair of the Grantham Institute at Imperial College London. | <urn:uuid:c9c8640a-c043-4e87-bd89-e30a430701c3> | 3.546875 | 833 | News Article | Science & Tech. | 35.326653 | 95,522,306 |
Part of the Advanced Texts in Physics book series (ADTP)
The name “electron”, which is derived from the Greek word for amber, was coined by the English physicist Stoney in 1894.
Keywords: Neutron Beam, Helium Atom, Atomic Beam, Fresnel Zone, Atomic Interferometry
© Springer-Verlag Berlin Heidelberg 2000 | <urn:uuid:8b38820f-c4db-4036-8a4f-0df7ba3bce57> | 2.578125 | 451 | Truncated | Science & Tech. | 62.676345 | 95,522,340 |
Introductory climate change webquest. This webquest focuses on global warming.
I used this with my 7th grade classes to introduce my climate change lesson.
Students will use 3 different websites 2 from NASA and 1 from the EPA.
How the climate is changing, with time-lapse charts showing the changes in:
Sea ice melting
Ice sheet melting
Carbon dioxide in the atmosphere
Global temperature change
Students will also explore a future technology on how to reduce the human impact on the environment.
Student sheet is 2 pages long.
Answer key included
Currently the EPA website is "Being updated" and the one link no longer works on the active site however you can still access the archived page at https://19january2017snapshot.epa.gov/climatechange_.html
Colligative Properties and Molality
Molality
Molality is expressed as moles/mass (mol/kg) in contrast with molarity which is expressed as moles/volume (mol/L). Molality is useful in situations which involve a large temperature or pressure change which can drastically change the volume, changing the molarity, but not the molality of the solution. As molality is a more accurate measure of solutes in solution in dynamic conditions, it is often used in comparing and determining colligative properties.
Colligative Properties
Colligative properties are properties of solutions that are determined solely by the number of particles dissolved rather than their nature. These properties include:
Boiling point increases with increasing molality of a solution according to the following equation:

Boiling point of solution = boiling point of solvent + ΔTb

ΔTb = m * Kb * i

where m = molality, Kb = the ebullioscopic constant (varies for each solvent), and i = the van 't Hoff factor. The van 't Hoff factor is equal to the number of unique solutes and/or ions in the solution.
Melting point decreases with increasing molality of a solution according to the following equation:

Melting point of solution = melting point of solvent - ΔTf

ΔTf = m * Kf * i

where Kf = the cryoscopic constant (varies for each solvent).
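The two relations above can be applied directly. A minimal sketch in Python, using the textbook constants for water (Kb ≈ 0.512 °C·kg/mol, Kf ≈ 1.86 °C·kg/mol) and NaCl as the solute, which dissociates into two ions (i = 2); note that the freezing-point term is subtracted:

```python
def boiling_point(bp_solvent, m, Kb, i):
    """Boiling-point elevation: delta_Tb = m * Kb * i is ADDED
    to the pure solvent's boiling point."""
    return bp_solvent + m * Kb * i

def freezing_point(fp_solvent, m, Kf, i):
    """Freezing-point depression: delta_Tf = m * Kf * i is SUBTRACTED
    from the pure solvent's freezing point."""
    return fp_solvent - m * Kf * i

# 1 mol NaCl per kg of water; NaCl -> Na+ + Cl-, so i = 2.
print(boiling_point(100.0, 1.0, 0.512, 2))  # 101.024 (degrees C)
print(freezing_point(0.0, 1.0, 1.86, 2))    # -3.72 (degrees C)
```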
Different models of the distribution of the tropospheric refraction coefficient are examined. Limitations on the applicability of some of them under real meteorological conditions, terrain, and the state of the underlying surface are pointed out. A method for finding the distribution of the refraction coefficient is proposed, based on vertical profiles of the refractive index obtained at two points remote from each other.
Keywords: distribution of the refraction coefficient, troposphere
"Model rozpodilu koefitsiienta zalomlennia troposfery v pryberezhnykh raionakh" ,
Information Processing Systems, | <urn:uuid:3d097260-4c60-4141-9341-13f02e360848> | 2.78125 | 131 | Knowledge Article | Science & Tech. | -1.457114 | 95,522,385 |
Bye-bye silicon age, behold the age of DNA
Forget about 3D printing, the next exciting tech is DNA origami, which is set to bring revolutionary change to healthcare.
By Nicolas Gutierrez, PhD (Sparrho Hero)
DNA is the most important piece of our biology, but now it may also become the future of technology: a super-efficient digital storage medium, and a nano-material for building anything from precise medicine-delivery systems to very tiny robots that repair our cells.
Your next hard drive might be DNA
Thanks to its extreme durability and density, DNA is starting to be used to store digital data, sort of a super small and durable computer hard drive (Erlich et al, 2017).
And because of these same qualities, plus the possibility of manipulating its sequence and 3D structure, DNA has been proposed as the perfect building blocks for future nano technology (Wagenbauer et al, 2017).
Nanotechnology is very, very, small technology, which will allow to work in very, very small places. It’s kind of like building a very tiny hammer to nail a very tiny painting in a very tiny house.
Folding DNA into tough little objects
Researchers have found a technique that allows DNA to be folded into almost any kind of structure, similar to paper origami but resulting in much, much smaller and more resistant objects.
These structures can be used as specific delivery systems (Hadorn et al, 2012), — a sort of Amazon drones for cells — or to build nano machines that can undertake specific tasks within the cell.
For instance, microscopic tubes based on this tech will be able to deliver medication into a specific kind of cell, avoiding side effects or overdosing. (Find an example in our 3 Minute Digest of how nanocrystals could be used to reverse neural damage caused by Multiple Sclerosis, helping people to regain mobility.)
To demonstrate the potential of DNA folding, Caltech research professor Paul W. K. Rothemund created very small smiley faces and called this DNA origami. (He also took time to explain that 'origami' (折り紙) in Japanese means paper folding, but now we are using the word for all sorts of cool folding!) You can read more on his technique in this article from Nature.
As Paul noted, DNA nanotechnology encompasses much more than just making shapes.
Enter the DNA machine
For example, researchers from Israel managed to build a DNA bipedal walker that walks on a DNA track. (Tomov et al, 2017). And by using chemical microfluids, they can tell the walker how many steps to make and when to stop, controlling its direction and speed.
So, not only we will be able to “print” any kind of 3D structure on DNA but also give it a motor to create the super small machines of the future.
How DNA nanotech will revolutionise healthcare?
I believe this technology will have a huge impact on medicine, allowing us much higher precision and efficiency in medical treatments, for example, to treat cancer.
DNA nanotechnology could facilitate a specific delivery of chemotherapy only in tumor cells, avoiding collateral damage in healthy cells and preventing the heavy side effects chemo has on cancer patients, like hair loss, pain and fatigue.
It could also be important for permitting drug delivery into hard to get places, like the brain, which is protected by the blood-brain barrier, to treat neurodegenerative diseases such as Multiple Sclerosis and Alzheimer’s Disease (Karthivashan et al, 2018).
DNA nanotechnology could also open the door to new possibilities, like in situ repair of damaged tissue — imagine a fractured bone — or even smaller chores, such as repairing DNA breakage or protein structural damage.
The possibilities are endless and medicine will change dramatically.
Welcome to the future of healthcare!
Nicolas Gutierrez, PhD
is a postdoctoral researcher specialising in mitochondria and genetics.
Read his full pinboard here. | <urn:uuid:c16dc5cb-ce3e-4e7f-9c3d-010aab136b92> | 3.65625 | 835 | Personal Blog | Science & Tech. | 32.942874 | 95,522,393 |
Git is a distributed version control system released to the public in 2005. The program allows for non-linear development of projects, and can handle large amounts of data effectively by storing it locally.
This tutorial will cover two ways to install Git.
How to Install Git with Apt-Get
Installing Git with apt-get is a quick and easy process. The program installs on the virtual private server with one command:
sudo apt-get install git
After it finishes downloading, you will have Git installed and ready to use.
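You can confirm the installation succeeded by printing the installed version (this verification step is our addition, not part of the original instructions):

```shell
git --version
```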
How to Install Git from Source
If you are eager to download the most recent version of Git, it is generally a good idea to install it from the source.
Quickly run apt-get update to make sure that you download the most recent packages to your VPS.
sudo apt-get update
Prior to installing Git itself, download all of the required dependencies:
sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev build-essential
Once they are installed, you can download the latest version of Git from the Google Code page.
After it downloads, untar the file and switch into that directory:
tar -zxf git-18.104.22.168.tar.gz
cd git-*
If you want to do a global install, install it once as yourself and once as root, using the sudo prefix:
make prefix=/usr/local all sudo make prefix=/usr/local install
If you need to update Git in the future, you can use Git itself to do it.
git clone git://git.kernel.org/pub/scm/git/git.git
How to Set Up Git
After Git is installed, whether from apt-get or from the source, you need to provide your username and email in the gitconfig file. You can access this file at ~/.gitconfig.
Opening it following a fresh Git install would reveal a completely blank page:
sudo nano ~/.gitconfig
You can use the follow commands to add in the required information.
git config --global user.name "NewUser" git config --global user.email email@example.com
You can see all of your settings with this command:
git config --list
If you avoid putting in your username and email, git will later attempt to fill it in for you, and you may end up with a message like this:
[master 0d9d21d] initial project version Committer: root <root@droplet1.(none)> Your name and email address were configured automatically based on your username and hostname. Please check that they are accurate. You can suppress this message by setting them explicitly: git config --global user.name "Your Name" git config --global user.email firstname.lastname@example.org After doing this, you may fix the identity used for this commit with: git commit --amend --reset-author | <urn:uuid:24e00c15-6162-4c91-9993-2fddd0fa2895> | 2.71875 | 623 | Tutorial | Software Dev. | 59.531962 | 95,522,419 |
As the 1.8 meter (60 inch) Pan-STARRS 1 telescope (PS1), one of the most powerful current survey telescopes, scans the night sky, its 1400 Megapixel digital camera takes more than 500 exposures per night. Between October 25 and December 21, 2010, some of this data found its way into classrooms in the USA and in Germany, where high-school students have used it to track known asteroids, and also to discover candidate objects that could be previously unknown asteroids. When Hawaiian skies were overcast, schools also received data taken with a telescope operated by the Astronomical Research Institute (ARI) in Westfield, Illinois.
The PS1 telescope on Hawaii
Image credit: Rob Ratkowski
Over the Internet, the participating schools received series of astronomical images. Each series included images of one specific region of the sky, taken an hour apart. During this hour, the image of a main belt asteroid would have moved noticeably (in the images in question: about 100 pixels) relative to the distant background stars. The students examined the images for exactly this kind of position change, carefully sorting image artifacts from moving celestial objects, and reported back to the International Astronomical Search Collaboration, whose volunteers then checked the results and arranged for follow-up observations.
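The position-change search the students performed can be mimicked in a toy example: difference two "exposures" taken an hour apart and flag any source that appears in different pixels. Everything below (the tiny synthetic frames, the brightness values, the threshold) is invented for illustration; real survey images require calibrated source extraction.

```python
import numpy as np

def find_sources(frame, threshold=5.0):
    """Return the set of (row, col) pixels brighter than the threshold."""
    ys, xs = np.where(frame > threshold)
    return {(int(y), int(x)) for y, x in zip(ys, xs)}

# Two tiny synthetic "exposures" taken an hour apart: the background
# stars stay put, while the asteroid moves from (2, 3) to (2, 7).
frame1 = np.zeros((10, 10))
frame2 = np.zeros((10, 10))
for star in [(1, 1), (8, 8)]:   # fixed background stars
    frame1[star] = 10.0
    frame2[star] = 10.0
frame1[2, 3] = 8.0              # asteroid in the first exposure
frame2[2, 7] = 8.0              # asteroid in the second exposure

# Anything present in one frame but not the other has moved.
moved = find_sources(frame1) ^ find_sources(frame2)
print(sorted(moved))  # [(2, 3), (2, 7)]
```

The symmetric difference discards the fixed stars and keeps only the moving object's two positions, which is exactly the signature the students were trained to spot by eye.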
Some of the most interesting student observations during the project concerned “Near-Earth Objects” (NEO), asteroids or similar objects whose orbits bring them into the inner Solar System. Some NEOs might turn out to be potential “killer asteroids” that are bound to collide with our home planet; finding these is one main goal of the PS1 telescope. In order to keep track of NEOs, at least two separate observations at different times are required. Katharina Stöckler (age 17), an 11th grade student at Gymnasium Neckargemünd near Heidelberg, explains: “We obtained a ‘NEO confirmation’ for the asteroid 2010 UR7 – the second observation ever made of that object, which confirmed the asteroid’s existence and gave crucial information about its orbit.” Three additional such “NEO confirmations” were made during the project; in addition, 64 of the students' observations amounted to the third or fourth time a specific NEO had been observed. All these observations provide important additional data to scientists studying the motion of NEOs.
In the course of the project, the students also observed 151 candidate objects in the Pan-STARRS data (plus an additional 20 candidates in the ARI/Westfield telescope data) that could be newly discovered main belt asteroids, which orbit the Sun between the orbits of Mars and Jupiter. In one case, students from Benedikt Stattler Gymnasium, a high-school in Bavaria, Germany, discovered 7 such candidate objects in a single night! Before the students' finds are confirmed as discoveries, however, and assigned provisional designation numbers, they will need to be observed again – for a number of the candidates, this is going to prove impossible; on the other hand, some are likely to turn out to have been previously known, after all. Once a newly found object has been observed over at least a whole orbit (which typically lasts 3 to 6 years), it is assigned a definite numerical identifier, and can also be given a proper name.
IASC director Dr. Patrick Miller, of Hardin-Simmons University in Abilene, Texas, says: “Pan-STARRS images contain an amazing amount of data, providing students with opportunities for literally hundreds of new discoveries. With this amount of data, we could expand our campaign to a thousand schools a year, and tens of thousands of students, which is very exciting, and is an unbelievable opportunity for high schools and colleges!”
Pan-STARRS Project Manager Dr. William Burgett adds: "It is incredibly exciting that we can use a state-of-the-art system such as Pan-STARRS to allow students around the world to learn astronomy with real research quality images. We are committed to making this a valuable and enjoyable experience for both the students and their teachers, and we hope this is only the first step in eventually involving hundreds of schools around the world."
The Pan-STARRS liaison for the IASC-Pan-Starrs campaign is Pan-STARRS Project Manager William Burgett (University of Hawaii Institute for Astronomy). The campaign is made possible by the board of the PS1 Science Consortium and by the particular efforts and support of Larry Denneau (University of Hawaii Institute for Astronomy), Matt Holman (Harvard-Smithsonian Center for Astrophysics), Robert Jedicke, Nick Kaiser, Gene Magnier and Richard Wainscoat (all UH Institute for Astronomy).
The International Astronomical Search Collaboration (IASC, pronounced “Isaac”) is an educational outreach program for high schools and colleges, provided at no cost to the participating schools. Since the program’s foundation in fall 2006, more than 200 schools – thousands of students – per year have participated in its search campaigns, representing more than 30 countries on five continents. Over the Internet, the schools receive astronomical images taken only hours before. Students then use the software package Astrometrica to search for, discover, and measure the properties of asteroids. Overall, students have discovered more than 300 previously unknown asteroids, seven of which have received an official number and been cataloged by the Minor Planet Center at Harvard, the body in charge of keeping track of asteroid designations. (As the official numbering can take 5-10 years from the date of discovery, this number is bound to increase rapidly.) Students have also performed thousands of measurements of near-Earth objects, which pose a possible impact hazard with Earth. Centered at Hardin-Simmons University (Abilene, Texas), IASC is a collaboration of the University, Lawrence Hall of Science (University of California at Berkeley), Astronomical Research Institute (Westfield, Illinois), Global Hands-On Universe Association (Lisbon, Portugal), Tarleton State University (Stephenville, Texas), Sierra Stars Observatory Network (Markleeville, California), and Astrometrica (Linz, Austria). The present campaign builds on the Global Hands-on Universe (GHOU) collaboration. GHOU is an educational program that enables students to investigate the Universe while applying tools and concepts from science, math, and technology. GHOU has reached about 30 nations, and trained approximately 5000 teachers around the world in techniques and the use of modern astronomy in classrooms.
The German schools of the Pan-STARRS-IASC search campaign are coordinated and supported by the Center for Astronomy Education and Outreach (Haus der Astronomie) in Heidelberg (in collaboration with the Max Planck Institute for Astronomy in Heidelberg and the Starkenburg-Sternwarte Heppenheim), the Max Planck Institute for Extraterrestrial Physics in Garching, and the Technical University Munich.
The participating international teams of schools are:
1. Luitpold-Gymnasium, Munich, Germany
The views expressed in this press release are those of the author(s), and do not necessarily reflect the views of the National Aeronautics and Space Administration.
NEO Confirmations (second observation of a specific Near-Earth Object; listed are the object identifier, the observing school, and the observation date):
2010 UR7, Gymnasium Neckargemünd, 30 October 2010
Dr. Markus Pössel | Max-Planck-Institut
Astronomers have discovered a pulsar that comes with its own magnifying glass — courtesy of its brown dwarf companion that’s being torn to shreds.
A unique pair of pulsars is inspiring new models for neutron-star magnetospheres.
Something funny is going on within a few hundred light-years of us, creating high-energy electrons that we don't understand. Recent data from NASA’s Fermi Gamma-ray Space Telescope keep the mystery alive.
Imagine a three-star system with two white dwarfs and a wildly spinning, superdense neutron star, all packed within a space no bigger than Earth's orbit.
NASA's Fermi Space Telescope recently spotted a pulsar in a rare transitional phase as it devours the matter of its companion star.
What spins hundreds of times per second, has 100 trillion times the Sun's density, and spews lethal radiation all over interstellar space? Astronomers are closer to knowing the answers, thanks to NASA's newest deep-space observatory.
A three-hour-long burst on a neutron star has confirmed many long-suspected facts about the dense, spinning stars.
The Fermi Gamma-ray Space Telescope has made its first major discovery.
A pulsar discovered last April is helping astronomers measure the magnetic field surrounding our galaxy’s central black hole.
A few whirling neutron stars might get their start as very different objects, at least if a new analysis is correct.
A recent experiment to better understand the nature of dark matter constrains a possible "fifth force" of nature to almost zero.
Astronomers have discovered a neutron star that switches between X-ray and radio emission within a few days. The find is fabulous news for theorists, who have long predicted that the two pulsar types were connected.
Astronomers are looking forward to 2018, when a young pulsar will pass through its binary star companion’s disk.
New X-ray and radio observations detected a strange switcheroo in the radiation from a pulsar. The repeated hiccups have left scientists scratching their heads.
The Fermi Gamma-ray Space Telescope has discovered that almost all of the highest energy photons in the Large Magellanic Cloud come from two pulsars.
New pulsar picks up first place as the fastest-spinning object yet.
Pulsars flash in radio, but some of them flash a lot more powerfully in gamma rays, due to different processes happening in different places around them.
New images of two pulsars show beautiful, complex clouds of charged particles that illustrate the power dynamics in and around these spinning neutron stars. | <urn:uuid:d9c4c7dc-3c40-4da0-80cc-0375cd31e08b> | 3.328125 | 546 | Content Listing | Science & Tech. | 45.540807 | 95,522,447 |
Chapter 15. Biogeochemical Cycles
In the previous chapter on environmental response, we considered the various ways that individual organisms respond to the physical, chemical, and/or biological stimuli in their environment. In this chapter, we will go deeper into the interactions between the biotic and abiotic elements of the biosphere, focusing on the nonliving components. To understand the physical "home" of our biosphere, we first have to consider the physics of planet formation. From all we know about life in the universe, only planets can possibly offer conditions suitable for life. And indeed, since all life we know of occurs on just one planet, we need to develop an understanding of what is special or unique about our home planet: how is the Earth constituted, and what is the nature of this "stage" upon which life developed and continues to play out? Presently, the "ecology" of other planets, if such a thing exists, is a subject of study within the science of exobiology.
Nutrients move through the ecosystem in biogeochemical cycles. A biogeochemical cycle is a circuit or pathway by which a chemical element moves through the biotic and abiotic components of an ecosystem: living organisms, rocks, air, water, and dissolved chemicals. As elements move through these components they may be recycled, or they may accumulate in a place called a sink (or reservoir), where they are held for a long period of time. The average time that a chemical is held in one place is called its residence time.
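Residence can be made concrete with simple arithmetic: under a steady-state assumption, the residence time of an element in a reservoir equals reservoir size divided by the outflow rate. The sketch below uses Python; the 200 Gt/yr outflow is an illustrative round number, not a value from the text.

```python
def residence_time(reservoir_size_gt, outflow_gt_per_year):
    """Mean residence time (years) of an element in a reservoir,
    assuming steady state: reservoir size divided by outflow rate."""
    return reservoir_size_gt / outflow_gt_per_year

# Illustrative example: an atmospheric carbon pool of ~750 Gt with an
# assumed gross outflow of 200 Gt per year (a hypothetical figure)
# gives a residence time of a few years.
print(residence_time(750, 200))  # 3.75
```

Doubling the reservoir while holding the outflow fixed doubles the residence time, which is why large, slowly drained sinks such as ocean sediments hold elements for so long.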
All of the chemical elements in an organism are part of the biogeochemical cycle. The chemicals travel not only through the biotic and abiotic components of an ecosystem, but they also travel through an organism. The abiotic factors of an ecosystem include: (1) water (hydrosphere), (2) land (lithosphere), and (3) air (atmosphere). All of the living factors that are found on Earth make up the biosphere.
- 1 Nutrient Cycle Levels
- 2 Biogeochemical Cycles
- 3 Soil
- 4 References
Nutrient Cycle Levels
Nutrient cycles operate at three levels. Global nutrient cycles occur when ecosystems become linked on a global scale; this linked whole is known as the ecosphere. On a smaller scale there are local nutrient cycles, which consist of the cycling that occurs within a single ecosystem. The smallest level comprises the nutrient budgets and fluxes of individual substances; examples are carbon, H2O, nitrogen, phosphorus, iron, and other trace elements. All of these cycles can be linked so that a community can remain at equilibrium, and all of them are driven by microbes. Microbes contain 350-550 gigatons of the world's carbon, 85-130 gigatons of its nitrogen, and 9-14 gigatons of its phosphorus. Nitrogen is fixed only by bacteria and archaea, which fix 85% of the roughly 15 gigatons of nitrogen fixed every year. Cyanobacteria are responsible for producing about 50% of the O2 in the ecosphere. There are also huge reserves of carbon locked up in rocks: rocks contain about 81 million gigatons of carbon in the form of calcium carbonate (CaCO3), commonly called limestone.
The processes that make up these cycles can be divided into two groups. The first group of processes has an aerobic component. Through respiration, organic material is converted into carbon dioxide; respiration is carried out by animals, plants, and microbes. A second aerobic process is carbon fixation, the reverse of respiration, in which carbon dioxide is converted into organic material. The second group consists of anaerobic processes. The first is fermentation, another way that organic material can be converted into carbon dioxide; bacteria and fungi carry out this process when deprived of oxygen. The second is anaerobic carbon fixation; the only organisms that fix carbon anaerobically are archaea. The last is methanogenesis, which converts carbon dioxide into methane. This process is also carried out only by archaea, but it does not account for all methane, because some methane can be produced aerobically.
There are two main types of systems on Earth, a closed system and an open system. A closed system occurs when the chemicals or elements used in an ecosystem are recycled instead of being lost. An open system occurs when the sun provides Earth with energy in the form of light which is usually used and then lost in the form of heat as it travels through the various trophic levels on Earth.
Although biogeochemical cycles are complex and differ with the nutrient needs of heterotrophs and autotrophs, all nutrient cycles share three components: inputs, internal cycling, and outputs. Precipitation is one form of input, bringing appreciable quantities of nutrients into a cycle (Patterson, 1975). Microbes recycling dead organic material back into the system to be reused is a form of internal cycling. The export of nutrients must be offset by the input of nutrients into a system if there is to be no net loss. Carbon, for example, is exported from a cycle in the form of CO2 via the respiration of all living organisms.
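The input/output balance described above can be expressed as a simple budget: the net change in a nutrient pool is total inputs minus total outputs. A minimal sketch, using illustrative numbers rather than data from the text:

```python
def net_nutrient_change(inputs, outputs):
    """Net change in a nutrient pool per unit time:
    positive means accumulation, negative means net loss."""
    return sum(inputs) - sum(outputs)

# Hypothetical fluxes (arbitrary units): precipitation and weathering in,
# respiration and leaching out. Inputs exactly offset outputs here,
# so there is no net loss from the cycle.
change = net_nutrient_change(inputs=[12, 3], outputs=[10, 5])
print(change)  # 0
```

If exports exceed imports the pool is drawn down, which is the "net loss" condition the paragraph warns against.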
The most important biogeochemical cycles are the carbon, nitrogen, oxygen, phosphorus, and water cycles. Each biogeochemical cycle has a state of equilibrium, which occurs when there is a balance in the cycling of the element between compartments. Ecologists may also study the sulfur, nutrient, and hydrogen cycles, but the carbon, nitrogen, oxygen, phosphorus, and water cycles receive the most attention.
The term biogeochemical reflects these components: the prefix bio- refers to the biosphere; geo- refers to the lithosphere, atmosphere, and hydrosphere; and chemical refers to the various chemicals that travel through these cycles.
A biogeochemical cycle or inorganic-organic cycle is a circulating or repeatable pathway by which either a chemical element or a molecule moves through both biotic ("bio-") and abiotic ("geo-") compartments of an ecosystem. In effect, an element is chemically recycled, although in some cycles there may be places (called "sinks") where the element accumulates and is held for a long period of time. In considering a specific biogeochemical cycle, we focus on a particular element and how that element participates in chemical reactions, moving between various molecular configurations. Of the 90-odd elements known to occur in nature, some 30 or 40 are thought to be required by living organisms (Odum, 1959). We will be considering only a few of these, mainly those utilized in fairly large quantities by living organisms. The principal elements of life are carbon, hydrogen, oxygen, and nitrogen. However, a number of others are certainly important to understand as well, notably phosphorus and sulfur. Some "non-essential" elements participate in biogeochemical cycles, entering organism tissues because of chemical similarity to essential elements. For example, strontium can behave like calcium in the body.
Hydrologic Cycle (Water Cycle)
A very significant molecule (on planet Earth) that cycles through ecosystems is water (H2O), because life is so dependent on water as the medium of chemical reactions within cells. While we generally discuss the water cycle in terms of the various states of water, at least some water molecules are taken up by plants and split apart (photolysed) into atoms of hydrogen and oxygen; the latter is released into the atmosphere as molecular oxygen (O2). Thus, by virtue of photosynthesizing organisms (photoautotrophs), the water cycle is an important part of both the oxygen and hydrogen cycles. Note that the hydrogen ends up as part of an organic molecule, and is therefore a participant in the carbon cycle as well.
The majority of water in the water cycle is found within the oceans and the polar ice caps, although water is present in the bodies of organisms, in freshwater lakes and rivers, frozen in glaciers, and in the ground as groundwater. Water moves more or less freely between these storage reservoirs: by evaporation, by precipitation, and by runoff from the land.
The sedimentation cycle is an extension of the hydrologic cycle. Water carries material from the land to the ocean, where it is deposited as sediment. The sediment cycle includes physical and chemical erosion, nutrient transport, and sediment formation from water flows; sediment carried by water flows is mostly responsible for the buildup of deposits at the bottom of the ocean. The sediment cycle is tied to the flow of six important elements: hydrogen, carbon, oxygen, nitrogen, phosphorus, and sulfur. These elements, also known as macroelements, make up 95% of all living things. The balance of these elements is required to sustain life, and they must be recycled continuously for life to regenerate.
Carbon is required for the building of all organic compounds. Carbon in the form of carbon dioxide (CO2) is obtained from the atmosphere and transformed into a usable organic form by organisms. The reservoirs for the carbon cycle are the atmosphere, where carbon dioxide exists as a free gas; fossil organic deposits (such as oil and coal); and durable organic materials like cellulose. Mineral carbonates, such as limestone, are a significant geological sink for carbon. During carbon fixation, carbon dioxide is taken up from the atmospheric reservoir (or from bicarbonates dissolved in water) by plants, photosynthetic bacteria, and algae and is "fixed" into organic substances. Animals obtain their carbon (as carbon-based molecules) by eating plants or other animals. Among the biological links, the carbon cycle comes full circle when carbon is released by plants and animals as they respire, or after death as they decompose. Organisms respire carbon dioxide as a waste product of breaking down organic molecules, as their cells derive energy by oxidizing the molecules containing "fixed" carbon. The burning of organic material such as wood or fuels also releases carbon dioxide from organic carbon.
CO2 is a trace gas, yet it has a huge effect on Earth's heat balance by absorbing infrared radiation. During the growing season, atmospheric CO2 decreases because increased sunlight and temperature help plants increase their carbon dioxide uptake and growth. In winter, more CO2 enters the atmosphere than plants can remove, because plant respiration and the death of plants proceed faster than photosynthesis.
Carbon Cycling Experiments
The impact of land-cover and land-use change on the global carbon cycle has been the focus of various studies. The Brazilian Legal Amazon has been an area of particular interest because of rapid forest clearing. To determine such impacts, models are often developed and used. In a study of the net carbon flux caused by deforestation and forest re-growth over the period 1970-1998, a process-based model of forest growth, carbon cycling, and land-cover dynamics was developed, named CARLUC (for CARbon and Land-Use Change). This model was used to estimate the size of terrestrial carbon pools in terra firme (unflooded) forests across the Brazilian Legal Amazon and the net flux of carbon resulting from forest disturbance and recovery from disturbance.
As previously mentioned, carbon cycling is driven by both abiotic and biotic factors. Net carbon fluxes can be partitioned into assimilatory and respiratory components. The complex interplay between assimilatory and respiratory sources, and their responsiveness to abiotic changes such as drought and temperature, is poorly understood. In a recent study, net carbon partitioning was used to disentangle the abiotic and biotic drivers of all components influencing the overall sink strength of a Mediterranean ecosystem during a rapid spring-to-summer transition (between May and June of 2006). It was ultimately determined that decreasing soil water availability, rather than increasing air temperature, largely affected both assimilation and respiration fluxes of understory plants and, in consequence, ecosystem respiration and soil respiration.
Nitrogen is one of the key elements cycling through the environment. It is required for the manufacture of all amino acids and nucleic acids; however, the average organism cannot use atmospheric nitrogen for these tasks and is therefore dependent on the nitrogen cycle as a source of usable nitrogen. The nitrogen cycle begins with nitrogen stored in the atmosphere as N2 or stored in the soil as ammonium (NH4+), ammonia (NH3), nitrite (NO2−), or nitrate (NO3−). Nitrogen is assimilated into living organisms through three stages: nitrogen fixation, nitrification, and plant metabolism. Nitrogen fixation is a process, carried out by prokaryotes, in which N2 is converted to NH4+. Atmospheric nitrogen can also be fixed by lightning and UV radiation, becoming NO3−. Nitrification follows fixation: ammonia is converted into nitrite, and nitrite is converted into nitrate, by various bacteria. In the final stage, plants absorb ammonium and nitrate and incorporate the nitrogen into their metabolic pathways; once there, it may be transferred to animals when the plant is eaten. Nitrogen is released back into the cycle when denitrifying bacteria convert NO3− into N2 (denitrification), when detritivorous bacteria convert organic compounds back into ammonia (ammonification), or when animals excrete ammonia, urea, or uric acid.
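The stages just described can be summarized as a table of transformations. The sketch below encodes each named process as an (input form, output form) pair and follows a chain of processes; the species strings are labels taken from the text, and the `follow` helper is an illustrative construction, not a standard API.

```python
# Each nitrogen-cycle process maps one nitrogen form to another,
# using the forms named in the text above.
NITROGEN_PROCESSES = {
    "nitrogen fixation":    ("N2",        "NH4+"),
    "nitrification step 1": ("NH3",       "NO2-"),
    "nitrification step 2": ("NO2-",      "NO3-"),
    "denitrification":      ("NO3-",      "N2"),
    "ammonification":       ("organic N", "NH3"),
}

def follow(start_form, steps):
    """Apply a sequence of named processes, checking that each one
    actually acts on the current nitrogen form."""
    form = start_form
    for step in steps:
        required, produced = NITROGEN_PROCESSES[step]
        if form != required:
            raise ValueError(f"{step} acts on {required}, not {form}")
        form = produced
    return form

# Ammonia is nitrified to nitrate, then denitrified back to N2,
# closing the loop described above:
result = follow("NH3", ["nitrification step 1",
                        "nitrification step 2",
                        "denitrification"])
print(result)  # N2
```

Trying to denitrify ammonia directly raises an error, mirroring the fact that each transformation in the cycle acts only on a specific nitrogen form.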
Many environmental problems are caused by the disruption of the nitrogen cycle by human activity, ranging from the production of tropospheric (lower-atmosphere) smog to the perturbation of stratospheric ozone and the contamination of groundwater. One example is the formation of nitrous oxide, a greenhouse gas: like carbon dioxide and water vapor, it traps heat near the Earth's surface. In the stratosphere, nitrous oxide is broken down by UV light into nitrogen dioxide and nitric oxide, and these two products can destroy ozone. Nitrogen oxides can then be converted back into nitrate and nitrite compounds and recycled to the Earth's surface.
Global Carbon Cycle
Carbon is continuously cycled between the oceans, the land, and the atmosphere. Atmospheric carbon is primarily carbon dioxide. On land, carbon occurs primarily in living biota and decaying organic matter. In the ocean, dissolved carbon dioxide and small organisms such as plankton that store carbon are the major reservoirs. Carbon is measured in gigatons (Gt): the deep ocean contains almost 40,000 Gt, compared to about 2,000 Gt on land and 750 Gt in the atmosphere.
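Using the inventory figures in the paragraph above, the share of carbon in each reservoir is easy to compute; the figures are the text's approximate values.

```python
# Approximate carbon inventories from the text, in gigatons (Gt).
reservoirs = {"deep ocean": 40_000, "land": 2_000, "atmosphere": 750}

total = sum(reservoirs.values())  # 42,750 Gt across the listed pools
for name, gt in reservoirs.items():
    share = 100 * gt / total
    print(f"{name}: {gt} Gt ({share:.1f}% of the listed total)")
```

The deep ocean holds roughly 94% of this inventory, the land about 5%, and the atmosphere under 2%, which is why even modest exchanges with the ocean matter so much for atmospheric CO2.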
Carbon dioxide, a known "greenhouse gas", traps some of the radiation in the atmosphere that would otherwise be lost to space, making the atmosphere warmer than it would naturally be. Since pre-industrial times, man-made emissions have increased the amount of carbon dioxide in the atmosphere by about 30%. Understanding how this increase in carbon dioxide causes global warming is important for predicting its future implications for our planet.
Average global temperatures have increased over the past century and are predicted to increase even more in the next. In a study by Parmesan et al., the distribution and population dynamics of several species of butterflies were monitored in response to increased temperatures. Migratory patterns of each species were observed and compared to previous patterns of distribution. The butterflies showed a northward shift, thought to be a response to increased temperatures. Parmesan et al. concluded that current and future climate warming may be a major factor in shifting species distributions.
Phosphorus, Iron, and Trace Mineral Cycles
The phosphorus cycle is one of the slowest biogeochemical cycles. It has essentially no atmospheric phase, because phosphorus and its common compounds do not occur as gases at ordinary temperatures; phosphorus mainly cycles through water, soil, and sediments. What little phosphorus reaches the atmosphere is carried on dust particles. Phosphorus most often occurs as phosphate salts, which are released from weathered rock and dissolved in groundwater, where plants take them up. Phosphorus is a limiting factor for plants on land and in water, including the ocean, because there is so little of it and it is not very soluble in water. The cycle speeds up somewhat as phosphorus passes through plants and animals: when they die and decay, the phosphorus is returned to soil and sediments and is eventually locked back into rock.
The iron cycle is similar to the other cycles; iron, however, is much more abundant than phosphorus. One additional way iron gets cycled, besides rock weathering, is when a volcano erupts and sends iron-rich dust into the atmosphere, eventually spreading it through soil and water.
Other trace minerals, such as zinc, copper, and manganese, were once thought to be as abundant as nitrogen, carbon, and oxygen. Their depletion is thought to result from water erosion of the soil and over-cropping of land, which remove them faster than they can be replenished. To learn more about the trace minerals essential to life, see: Nutrients - Trace Minerals.
Odum (1959) describes what are called more or less "perfect" cycles: biogeochemical cycles that involve equilibrium states. That is, there exists in nature a balance in the cycling of the element between various compartments, with the element or material moving into abiotic compartments about as fast as it moves into biotic compartments. Certain ecosystems may experience "shortages", but overall a balance exists on a global scale.
II. Deserts are characterized by their limited precipitation. Deserts have a brief rainy season which, on average, produces less than 30 cm of rainfall per year. Deserts are mainly located at 30 degrees North and South latitude because of the dry air descending there from the Hadley cell, which stretches between the equator and 20 to 35 degrees North and South latitude.
Deserts have the capacity to be very productive climates because they lie in a high-energy-input region of the Earth (see Chapter 3 on solar irradiation and the Earth's curvature), but they are limited by their low precipitation. Desert flora, or plant life, covers only about 10% of desert area. It consists mainly of annuals that grow and reproduce during the brief rainy season. The main flora are succulents, which store water, have waxy cuticles to prevent water loss, may have small modified leaves or needles, and have large, shallow root systems.
III. Chaparrals are dry shrubland regions located in Mediterranean-climate areas. Chaparral is characterized by its distinct rainy season and, even more, by its temperature extremes: summer temperatures can reach approximately 40 degrees Celsius, and winter temperatures can drop to approximately -15 degrees Celsius. The plants and animals that inhabit chaparral are drought-adapted, which aids survival. The plant life is similar in character to desert plants, consisting mainly of shrub/brush species.
Soil is the foundation of terrestrial systems. It is a mixture of organic (living and nonliving) and inorganic material: the organic part is contributed by plants, animals, and microbes, while the inorganic part is a result of weathering.
Soil is composed of many different layers. The first (top) layer is the O horizon, which is itself composed of two sublayers, the Oi and Oa layers. The Oi is the intact organic layer, made up of dead organic matter and leaf litter. The Oa contains mostly humus and sits directly above the A horizon. The entire O horizon is about 10 cm deep; deserts usually lack an O horizon.
The next layer is the A horizon. This is what most people consider the topsoil: a mixture of weathered rock (clay, sand, and silt) from lower layers and organic material from the O horizon. This is the zone with the most roots, microbes, and invertebrates. It usually has a high respiration rate, and most of its nutrients are leached to lower layers by water. The top two layers combined (the O and A horizons) are on average about 0.5 m deep.
The next layer is the E horizon (eluviation horizon). This layer has the least dissolved nutrients. A clay pan of highly compacted clay can form at its bottom, which prevents weathering of the rock below. This also prevents tap roots from reaching the C horizon and may cause the area above to become waterlogged.
Below the E horizon is the illuviation horizon, or B horizon. This is where leachates collect and where most tap roots end. It is usually mineral-rich, contains little organic material, and is about 0.5 m deep. Below the B horizon is the C horizon, the lowest soil layer, commonly called the subsoil. It consists mostly of weathered parent material and is also usually about 0.5 m deep.
Lastly, beneath the soil is the inorganic material of the R horizon, also called bedrock, which is broken down over time by weathering from wind, water, temperature changes, and plants.
The particle size of soil is correlated with the surface area of the particles and the charge on that surface: a decrease in particle size means an increase in surface area, which in turn means an increase in negative surface charge. The United States Department of Agriculture (USDA) designates soil types by the size of the particles in the soil. Clay consists of particles smaller than 0.002 mm and has the largest water-holding capacity and the highest nutrient capacity. Silt consists of particles smaller than 0.05 mm and has intermediate water and nutrient capacities. Sand consists of particles smaller than 2 mm and has the smallest water capacity and the lowest nutrient capacity.
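The USDA size limits quoted above translate directly into a classification rule. A minimal sketch; the "coarse fragments" label for particles above 2 mm is an addition for completeness, not from the text.

```python
def usda_texture(diameter_mm):
    """Classify a soil particle by diameter, using the USDA size
    limits given in the text: clay < 0.002 mm, silt < 0.05 mm,
    sand < 2 mm."""
    if diameter_mm < 0.002:
        return "clay"
    if diameter_mm < 0.05:
        return "silt"
    if diameter_mm < 2:
        return "sand"
    return "coarse fragments"  # above the sand limit; not covered in the text

for d in (0.001, 0.01, 0.5):
    print(d, "->", usda_texture(d))
# 0.001 -> clay, 0.01 -> silt, 0.5 -> sand
```

Because the classes are defined by strict upper limits, checking them in ascending order guarantees each diameter falls into exactly one class.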
Acidic soil is any soil with a low pH. It forms where soil is very low in minerals and can be produced by conifer needles. Acids also break down clay by leaching minerals out of the soil; this leaching reflects a high hydronium ion [H3O+] concentration and results in less productive soil. Acid rain, precipitation made unusually acidic by emissions of sulfur and nitrogen compounds that react in the atmosphere, is widely believed to be responsible for acidifying soil.
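The link between hydronium concentration and acidity is just pH = -log10[H3O+]. A one-line check (the example concentrations are illustrative, not measurements from this text):

```python
import math

def ph_from_hydronium(h3o_molar):
    """pH is the negative base-10 logarithm of the hydronium concentration."""
    return -math.log10(h3o_molar)

# A tenfold increase in [H3O+] lowers the pH by one unit.
print(round(ph_from_hydronium(1e-7), 2))  # 7.0 (neutral)
print(round(ph_from_hydronium(1e-5), 2))  # 5.0 (acidic soil water)
```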
posted by Lisa
Name one physical property and one chemical property that you could use to distinguish between water and gasoline.
Write down, on a sheet of paper, three or four physical properties of gasoline. Place them in a column. In another column write three or four physical properties of water. Then you will know the answer. I shall be happy to check your thinking.
isnt the 3or4 physical properties(volume, mass, weight, and density)
and the 3or4 physical properties of water (color, shape, hardness, and odor)???
ALL of those are physical properties. Mass, density, color, shape, hardness, and odor are all physical properties of BOTH gasoline and water. Now, which of those do you think you could use to distinguish between them? Could you measure the density? The odor?
umm..i think its density and odor!!
cause water's density is 1g/cm3<-(little three) and the density of gasoline is less than that~!!
and sometime therez odor in gasoline but water doesnt!!
is it correct??^^
Sure. The density of gasoline is close to 0.7 g/cc and you are correct that the density of water is close to 1 g/cc. You know water is odorless (some wells have horrible odors due to the dissolved gases) but city water USUALLY is odorless. Sometimes you can smell chlorine in it on days when the treatment plant must add quite a bit of chlorine in order to disinfect it. So both can be used, and you needed only one physical property to distinguish between them in the initial problem.
one property?!! o..sometime theres odor in water...that means only one property is density!!^^
Yes, but most of the water we drink is odorless. I assume the problem was talking about PURE water, also, and pure water doesn't have any odor. I threw the odor part in because sometimes I can smell chlorine when I run the faucet in the kitchen. Of course, that isn't pure water. My brother has a well and I can smell hydrogen sulfide when I open the faucet at his house. Needless to say I take my own water when I visit him.
so well im gonna write...density!!^^
thank-u!! are u doctor??
im not smart in science but smart in math!! i answered lyk 3 math question of someone!! | <urn:uuid:ed6ffdf3-7f72-4805-a3b5-1756fc7a160b> | 3.171875 | 504 | Q&A Forum | Science & Tech. | 66.719362 | 95,522,458 |
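The density test settled on in this thread is easy to automate. A small sketch using the thread's reference densities (about 1.0 g/cm3 for water and 0.7 g/cm3 for gasoline; the 0.1 g/cm3 tolerance is my own choice, not from the thread):

```python
def identify_liquid(mass_g, volume_cm3):
    """Guess water vs gasoline from a measured density in g/cm^3.
    Reference values from the discussion: water ~1.0, gasoline ~0.7."""
    density = mass_g / volume_cm3
    if abs(density - 1.0) < 0.1:
        return "water"
    if abs(density - 0.7) < 0.1:
        return "gasoline"
    return "unknown"

print(identify_liquid(100.0, 100.0))  # water
print(identify_liquid(70.0, 100.0))   # gasoline
```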
The genetic structure of 14 populations of bull trout Salvelinus confluentus from the upper Flathead River basin was examined by means of (1) restriction fragment length polymorphism (RFLP) analysis of mitochondrial DNA (mtDNA) amplified by polymerase chain reaction and (2) three microsatellite loci. Analysis of mtDNA suggests colonization by at least two lineages of bull trout after the last glaciation. Both genetic markers showed little variation within bull trout populations but substantial differences among them. A large proportion of the observed population differentiation was attributable to genetic differences among populations within drainages, which suggests that even geographically adjacent populations are highly isolated reproductively. The temporal allele frequency differences detected in some samples suggest that genetic drift owing to recent demographic decline could have increased the genetic divergence among bull trout populations. We found no evidence for a metapopulation structure characterized by frequent extinction-recolonization events in these populations. These results suggest that bull trout populations have a low probability of recolonization through dispersal from adjacent populations after local populations go extinct. Therefore, the long-term persistence of bull trout requires ensuring that the local populations are maintained.
Chromosomal abnormalities such as differences in chromosome number (aneuploidy) are responsible for many genetic diseases and disorders, and are the leading cause of miscarriage in humans. Nevertheless, throughout evolution striking changes have occurred in the number, size, and organization of chromosomes. This raises the question of how changes that are typically deleterious to individuals become incorporated into the evolutionary process. In particular, what molecular mechanisms change to facilitate this process? The stalk-eyed fly, Teleopsis dalmanni, has undergone a recent and evolutionarily unusual sex chromosome "shift," in which the ancestral X is now an autosome, and one of the autosomes (2L in Drosophila) is now the X chromosome. This shift will have radically altered the dosage (copy number) of each X-linked and formerly X-linked gene in males of this species, so it provides a unique opportunity to investigate how an organism can adapt to this type of change. The proposed research will investigate whether and how gene dosage has been compensated in this species.
The aims of the research are threefold.
The first aim will create a high-density linkage map that will allow assignment of chromosome orthology relative to Drosophila, allowing linkage group assignment of genomic contigs and identification of the chromosome orthologous to the X in D. melanogaster (now an autosome in Teleopsis). Once orthology is established, the second aim will compare male and female expression of genes on each linkage group. If dosage compensation is complete across the chromosome, expression of T. dalmanni X linked genes should on average be equal between the sexes relative to autosomal expression. Otherwise, expression will be lower on the male X. Expression of the ancestral X may or may not differ from the autosomes - it is possible that dosage compensating mechanisms present in Drosophila may still be hyper transcribing genes on the ancestral X, leading to over-expression of genes on this chromosome in males. Finally the third aim will determine whether the same proteins that compensate dosage in Drosophila are also functioning in T. dalmanni. If the same proteins are involved, compensated chromosomes in T. dalmanni should be bound specifically by antibodies made against these proteins. In the context of the X chromosome shift, this will address the question of whether compensation is achieved by co-opting an existing molecular pathway or through the evolution of a novel mechanism. The proposed research will deepen the understanding of how organisms adapt to chromosome-wide changes in gene dosage. This is relevant to general studies of chromosome function, including studies of disease-causing chromosome abnormalities.
Changes in chromosome content often cause genetic diseases and disorders in individuals, but chromosomes nevertheless change throughout evolution. In the stalk-eyed fly, the X chromosome is evolutionarily novel, being descended from an autosome in other fly species. The proposed research will determine how these flies compensated to the shift in gene dosage that occurred when the new chromosome arose, providing insight into how organisms can cope with changes in chromosome number.
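The second aim's comparison can be sketched numerically: compute each gene's log2 male:female expression ratio and compare the median for a chromosome against the autosomal median. The read counts below are toy numbers invented for illustration, not data from this project:

```python
import math

def log2_ratio(male_counts, female_counts):
    """Per-gene log2 male:female expression ratio."""
    return [math.log2(m / f) for m, f in zip(male_counts, female_counts)]

def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Toy counts: with full dosage compensation the X-linked median should sit
# near the autosomal median (~0); with none, X-linked genes in males would
# run near -1 (half the female expression, since males carry one X copy).
auto_male, auto_female = [100, 220, 80], [98, 210, 85]
x_male, x_female = [50, 120, 40], [100, 230, 85]

print(round(median(log2_ratio(auto_male, auto_female)), 2))  # 0.03
print(round(median(log2_ratio(x_male, x_female)), 2))        # -1.0
```

Here the made-up X-linked counts show the uncompensated pattern: males express at roughly half the female level.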
Paczolt KA, Reinhardt JA, Wilkinson GS (2017) Contrasting patterns of X-chromosome divergence underlie multiple sex-ratio polymorphisms in stalk-eyed flies. J Evol Biol 30:1772-1784
Reinhardt JA, Jones CD (2013) Two rapidly evolving genes contribute to male fitness in Drosophila. J Mol Evol 77:246-59
The chemical composition of the atmosphere is determined by a combination of chemical reactions, radiation flux, and transport of species. Processes that occur in the gaseous phase in the stratosphere include thermal gas-phase reactions and photochemical reactions. Heterogeneous reactions occur between gas-phase molecules in or on aerosols and polar stratospheric clouds. Models that attempt to simulate the chemical composition of the stratosphere require a number of parameters that can be measured directly in the laboratory. In the gas phase, these include reaction rate constants, product branching ratios, and photolysis quantum yields. Gas-phase reaction rate constants, if determined at stratospheric temperature and pressure, are used directly in models. For photochemical or heterogeneous reactions, the data measured in the laboratory require transformation prior to use in model calculations. For example, quantum yield calculations used in models need to account for solar flux. In the case of heterogeneous reactions, the nature, size, and distribution of aerosol particles must be considered. Advances in experimental measurements, and in the understanding of the measurements themselves, have improved atmospheric models.
Keywords: Heterogeneous Reaction; Flow Tube; Bimolecular Reaction; Stratospheric Temperature; Reaction Rate Coefficient
Aspartic peptidase A1 family (IPR001461)
Short name: Aspartic_peptidase_A1
Overlapping homologous superfamilies
- Aspartic peptidase domain superfamily (IPR021109)
Peptidase family A1, also known as the pepsin family, contains peptidases with bilobed structures [PMID: 2115088, PMID: 2115087]. The two domains most probably evolved from the duplication of an ancestral gene encoding a primordial domain [PMID: 24179]. The active site is formed from an aspartic acid residue from each domain. Each aspartic acid occurs within a motif with the sequence D(T/S)G(T/S). Exceptionally, in the histo-aspartic peptidase from Plasmodium falciparum, one of the Asp residues is replaced by His [PMID: 11782538]. A third essential residue, Tyr or Phe, is found only in the N-terminal domain, in a beta-hairpin loop known as the "flap"; this residue is important for substrate binding, and most members of the family have a preference for a hydrophobic residue in the S1 substrate binding pocket. Most members of the family are active at acidic pH, but renin is unusually active at neutral pH. Family A1 peptidases are found predominantly in eukaryotes (but examples are known from bacteria [PMID: 19758436, PMID: 21749650]). Currently known eukaryotic aspartyl peptidases and homologues include the following:
- Vertebrate gastric pepsins A (EC 3.4.23.1), gastricsin (EC 3.4.23.3, also known as pepsin C), chymosin (EC 3.4.23.4; formerly known as rennin), and cathepsin E (EC 3.4.23.34). Pepsin A is widely used in protein sequencing because of its limited and predictable specificity. Chymosin is used to clot milk for cheese making.
- Lysosomal cathepsin D (EC 3.4.23.5).
- Renin (EC 3.4.23.15), which functions in control of blood pressure by generating angiotensin I from angiotensinogen in the plasma.
- Memapsins 1 (EC 3.4.23.45; also known as BACE 2) and 2 (EC 3.4.23.46; also known as BACE) are membrane-bound and are able to perform one of the two cleavages (the beta-cleavage, hence they are also known as beta-secretases) in the beta-amyloid precursor to release the amyloid-beta peptide, which accumulates in the plaques of Alzheimer's disease patients.
- Fungal peptidases such as aspergillopepsin A (EC 3.4.23.18), candidapepsin (EC 3.4.23.24), mucorpepsin (EC 3.4.23.23; also known as Mucor rennin), endothiapepsin (EC 3.4.23.22), polyporopepsin (EC 3.4.23.29), and rhizopuspepsin (EC 3.4.23.21) are secreted for saprophytic protein digestion.
- Fungal saccharopepsin (EC 3.4.23.25) (proteinase A) (gene PEP4) is implicated in post-translational regulation of vacuolar hydrolases.
- Yeast barrierpepsin (EC 3.4.23.35) (gene BAR1); a protease that cleaves alpha-factor and thus acts as an antagonist of the mating pheromone.
- Fission yeast Sxa1 may be involved in degrading or processing the mating pheromones [PMID: 1549128].
- In plants, phytepsin (EC 3.4.23.40) degrades seed storage proteins and nepenthesin (EC 3.4.23.12) from a pitcher plant digests insect proteins.
- Plasmepsins (EC 3.4.23.38 and EC 3.4.23.39) from Plasmodium species are important for the degradation of host haemoglobin.
- Non-peptidase homologues where one or more active site residues have been replaced, include mammalian pregnancy-associated glycoproteins, an allergen from a cockroach, and a xylanase inhibitor [PMID: 15166216].
Aspartic peptidases, also known as aspartyl proteases (EC:3.4.23.-), are widely distributed proteolytic enzymes [PMID: 6795036, PMID: 2194475, PMID: 1851433] known to exist in vertebrates, fungi, plants, protozoa, bacteria, archaea, retroviruses and some plant viruses. All known aspartic peptidases are endopeptidases. A water molecule, activated by two aspartic acid residues, acts as the nucleophile in catalysis. Aspartic peptidases can be grouped into five clans, each of which shows a unique structural fold [PMID: 8439290].
- Peptidases in clan AA are either bilobed (family A1 or the pepsin family) or are a homodimer (all other families in the clan, including retropepsin from HIV-1/AIDS) [PMID: 2682266]. Each lobe consists of a single domain with a closed beta-barrel and each lobe contributes one Asp to form the active site. Most peptidases in the clan are inhibited by the naturally occurring small-molecule inhibitor pepstatin [PMID: 4912600].
- Clan AC contains the single family A8: the signal peptidase 2 family. Members of the family are found in all bacteria. Signal peptidase 2 processes the premurein precursor, removing the signal peptide. The peptidase has four transmembrane domains and the active site is on the periplasmic side of the cell membrane. Cleavage occurs on the amino side of a cysteine where the thiol group has been substituted by a diacylglyceryl group. Site-directed mutagenesis has identified two essential aspartic acid residues which occur in the motifs GNXXDRX and FNXAD (where X is a hydrophobic residue) [PMID: 10497172]. No tertiary structures have been solved for any member of the family, but because of the intramembrane location, the structure is assumed not to be pepsin-like.
- Clan AD contains two families of transmembrane endopeptidases: A22 and A24. These are also known as "GXGD peptidases" because of a common GXGD motif which includes one of the pair of catalytic aspartic acid residues. Structures are known for members of both families and show a unique, common fold with up to nine transmembrane regions [PMID: 21765428]. The active site aspartic acids are located within a large cavity in the membrane into which water can gain access [PMID: 23254940].
- Clan AE contains two families, A25 and A31. Tertiary structures have been solved for members of both families and show a common fold consisting of an alpha-beta-alpha sandwich, in which the beta sheet is five stranded [PMID: 10331925, PMID: 10864493].
- Clan AF contains the single family A26. Members of the clan are membrane-proteins with a unique fold. Homologues are known only from bacteria. The structure of omptin (also known as OmpT) shows a cylindrical barrel containing ten beta strands inserted in the membrane with the active site residues on the outer surface [PMID: 11566868].
- There are two families of aspartic peptidases for which neither structure nor active site residues are known and these are not assigned to clans. Family A5 includes thermopsin, an endopeptidase found only in thermophilic archaea. Family A36 contains sporulation factor SpoIIGA, which is known to process and activate sigma factor E, one of the transcription factors that controls sporulation in bacteria [PMID: 21751400]. | <urn:uuid:99e3bc63-f5f1-4470-acc8-4bfb808b276b> | 2.609375 | 1,866 | Knowledge Article | Science & Tech. | 51.08749 | 95,522,509 |
Family Coenagrionidae Kirby, 1890
- scientific: Pseudostigmatidae Kirby, 1890; Coryphagrionidae Pinhey, 1962; Protoneuridae Yakobson & Bianchi, 1905 (in part)
Over 110 genera and 1250 species form the largest radiation in Odonata together with Libelluloidea. Because most species inhabit standing water, they are known as pond damselflies, although Oreocnemis and many Pseudagrion species breed in running water. Genetic research shows that the radiation has two main groups, which may be recognised as families in the future. The true coenagrionids make up just over half of this diversity worldwide, but six in every seven species in the Afrotropics. Here only Ceriagrion, Coryphagrion, Oreocnemis and Teinobasis belong to the second group, which is recognised by the absence of postocular spots and the presence (except in Oreocnemis) of a transverse ridge on the frons. [Adapted from Dijkstra & Clausnitzer 2014]
No diagnosis of this diverse family is presently available; the genera are best distinguished by considering this family together with the Platycnemididae, the two together forming the superfamily Coenagrionoidea.
Map citation: Clausnitzer, V., K.-D.B. Dijkstra, R. Koch, J.-P. Boudot, W.R.T. Darwall, J. Kipping, B. Samraoui, M.J. Samways, J.P. Simaika & F. Suhling, 2012. Focus on African Freshwaters: hotspots of dragonfly diversity and conservation concern. Frontiers in Ecology and the Environment 10: 129-134.
Citation: Dijkstra, K.-D.B (editor). African Dragonflies and Damselflies Online. http://addo.adu.org.za/ [2018-07-18]. | <urn:uuid:90a57730-59d5-49f7-94b0-4df0184e5d7e> | 3.703125 | 426 | Knowledge Article | Science & Tech. | 48.531428 | 95,522,513 |
Laura Bliss is a staff writer at CityLab, covering transportation, infrastructure, and the environment. She also authors MapLab, a biweekly newsletter about maps that reveal and shape urban spaces (subscribe here). Her work has appeared in the New York Times, The Atlantic, Los Angeles, GOOD, L.A. Review of Books, and beyond.
A slew of new research reveals the deleterious effects of radiation on Fukushima's ecology.
So what of Fukushima Daiichi, Japan's nuclear collapse of 2011—might we expect a happy menagerie there, too? Not so much, according to a slew of new papers out in the Journal of Heredity. And you may want to rethink Chernobyl-as-Eden, too.
The findings of the new studies tell of significant population decline across many different species of animals and plants, as well as a range of expressions of genetic damage and cell mutation.
One paper reports that the pale grass blue butterfly, one of Japan's most common butterfly species, has suffered from significant size reduction, slowed growth, high mortality and abnormal wing patterns both within the Fukushima exclusion zone and among lab-raised offspring of parents collected at the site. Which is to say, radiation-caused genetic mutations were passed down.
Researchers also found major declines in populations of birds, butterflies, cicadas, and some small mammals, as well as aberrations and albinism in the feathers of certain birds.
Timothy Mousseau, a prominent biologist and lead author of that population study, has also conducted significant research into radiation's impacts at Chernobyl. He roundly rejects the claim that the area has become an animal haven, arguing that notion was based on anecdotal evidence rather than scientific data. Mousseau's own work demonstrates radiation has had similar effects on Chernobyl's ecology as on Fukushima's.
Further inquiry into all manner of species living at the Chernobyl site could help scientists better predict Fukushima's biological trajectory, he says. "There is an urgent need for greater investment in basic scientific research of the wild animals and plants of Fukushima," Mousseau told the Journal of Heredity. | <urn:uuid:5c8e7191-594f-430e-9643-649482c7d492> | 3.609375 | 438 | News Article | Science & Tech. | 36.572138 | 95,522,520 |
The magnitude of the resting membrane and graded potentials depends upon the concentration gradients of and membrane permeabilities to different ions, particularly sodium and potassium. This is true for the action potential as well. The action potential is initiated by a transient change in membrane ion permeability, which allows sodium and potassium ions to move down their concentration gradients. In the resting state, the leak channels in the plasma membrane are predominantly those that are permeable to potassium ions. Very few sodium ion channels are open, and the resting potential is therefore close to the potassium equilibrium potential. The action potential begins with depolarization of the membrane in response to a stimulus.
This initial depolarization opens voltage-gated sodium channels, which increases the membrane permeability to sodium ions several hundredfold. This allows more sodium ions to move into the cell, and the cell becomes more and more depolarized until a threshold is reached to trigger the action potential. This is called the threshold potential. After the threshold potential is reached, more voltage-gated sodium channels open. The membrane potential overshoots, becoming positive on the inside and negative on the outside of the membrane. In this phase, the membrane potential approaches but does not quite reach the sodium equilibrium potential (+60 mV).
At the peak of the action potential, sodium permeability abruptly decreases and voltage-gated potassium channels open. The membrane potential begins to rapidly repolarize to its resting level. The timing of the movements of sodium and potassium can be seen. Closure of the sodium channels alone would restore the membrane potential to its resting level since potassium flux out would then exceed sodium flux in. However, the process is speeded up by the simultaneous increase in potassium permeability. Potassium diffusion out of the cell becomes much greater than the sodium diffusion in, rapidly returning the membrane potential to its resting level. In fact, after the sodium channels have closed, some of the voltage-gated potassium channels are still open, and in nerve cells there is generally a small hyperpolarization of the membrane potential beyond the resting level called the afterhyperpolarization. Once the voltage-gated potassium channels close, the resting membrane potential is restored. Chloride permeability does not change during the action potential. | <urn:uuid:e941022a-ed23-401f-aff6-577ef6228061> | 3.90625 | 457 | Academic Writing | Science & Tech. | 22.782002 | 95,522,550 |
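The equilibrium potentials this passage relies on come from the Nernst equation, E = (RT/zF) ln([ion]out/[ion]in). A quick sketch using typical textbook mammalian concentrations (the millimolar values are illustrative assumptions, not figures from this text) reproduces the roughly +60 mV sodium equilibrium potential cited above:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def nernst_mV(z, c_out, c_in, T=310.0):
    """Equilibrium potential in mV for an ion of valence z at body temperature."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Textbook-style concentrations in mM (illustrative only).
print(round(nernst_mV(+1, 145.0, 15.0)))  # 61: near the +60 mV quoted for Na+
print(round(nernst_mV(+1, 5.0, 150.0)))   # -91: K+ sits near the resting potential
```

This is why opening sodium channels drives the membrane toward a positive potential and opening potassium channels drives it back toward a negative one.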
Physics Central, the American Physical Society

This article outlines the phenomenon of gravitational waves predicted by Einstein's theory of General Relativity. Included is background information about gravitational waves, recent research to detect them, and links to more information.
Citation: Physics Central. "Physics in Action: Gravitational Waves." American Physical Society, http://www.physicscentral.org/explore/action/gravity.cfm (accessed 20 July 2018).
Seven nested headwater catchments (8 to 161 ha) were monitored during five summer rain events to evaluate storm runoff components and the effect of catchment size on water sources. Two-component isotopic hydrograph separation showed that event-water contributions near the time of peakflow ranged from 49% to 62% in the 7 catchments during the highest intensity event. The proportion of event water in stormflow was greater than could be accounted for by direct precipitation onto saturated areas. DOC concentrations in stormflow were strongly correlated with stream δ¹⁸O composition. Bivariate mixing diagrams indicated that the large event water contributions were likely derived from flow through the soil O-horizon. Results from two-tracer, three-component hydrograph separations showed that the throughfall and O-horizon soil-water components together could account for the estimated contributions of event water to stormflow. End-member mixing analysis confirmed these results. Estimated event-water contributions were inversely related to catchment size, but the relation was significant for only the event with greatest rainfall intensity. Our results suggest that perched, shallow subsurface flow provides a substantial contribution to summer stormflow in these small catchments, but the relative contribution of this component decreases with catchment size.
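The two-component separation rests on a simple isotope mass balance: the event-water fraction of streamflow is (C_stream - C_pre-event) / (C_event - C_pre-event), with C the tracer (e.g. ¹⁸O) composition of each water source. A sketch with made-up delta values (the numbers are illustrative, not data from this study):

```python
def event_water_fraction(d18o_stream, d18o_pre, d18o_event):
    """Two-component isotopic hydrograph separation:
    Q_event/Q_total = (C_stream - C_pre) / (C_event - C_pre)."""
    return (d18o_stream - d18o_pre) / (d18o_event - d18o_pre)

# Illustrative delta-18O values in per mil (pre-event water, rain, stream).
f = event_water_fraction(-8.0, -10.0, -6.5)
print(round(f, 2))  # 0.57: about 57% event water, comparable to the 49-62% range
```

The separation is only well-posed when the event and pre-event end members are isotopically distinct, i.e. the denominator is far from zero.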
This C program simulates stack operations, both graphically and in text mode. The user can add, delete, search for, and replace elements in the stack. It also checks for overflow and underflow and returns user-friendly errors. You can use this stack implementation to perform many useful functions. In graphical mode, the program displays a startup message and a welcome graphic.
The program performs the following functions and operations:
- Add: Pushes an element onto the stack. It takes an integer element as its argument. If the stack is full, an error is returned.
- Delete: Pops an element from the stack. If the stack is empty, an error is returned. The element is deleted from the top of the stack.
- Search: Takes an integer element as an argument and returns the location of the element. If the number is not found, 0 is returned.
- Replace: Takes two integers as arguments; the first is the number to find and the second is the number to replace it with. It first performs the search operation, then replaces the integers.
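The page describes the program's interface but does not reproduce the C source. Below is a minimal sketch of the same four operations, written in Python for brevity rather than C; the class name, the fixed capacity, and the choice to count the search position from the top of the stack (1-based, 0 when absent) are my own assumptions based on the description:

```python
class BoundedStack:
    """Sketch of the Add/Delete/Search/Replace operations described above,
    with explicit overflow/underflow checks."""

    def __init__(self, capacity=10):
        self.capacity = capacity
        self.items = []

    def add(self, value):
        if len(self.items) >= self.capacity:
            raise OverflowError("stack overflow")
        self.items.append(value)

    def delete(self):
        if not self.items:
            raise IndexError("stack underflow")
        return self.items.pop()  # removed from the top

    def search(self, value):
        # 1-based position counted from the top; 0 means not found.
        for pos, item in enumerate(reversed(self.items), start=1):
            if item == value:
                return pos
        return 0

    def replace(self, find, replace_with):
        self.items = [replace_with if v == find else v for v in self.items]

s = BoundedStack(capacity=3)
s.add(1); s.add(2); s.add(3)
print(s.search(2))  # 2 (second element from the top)
s.replace(2, 9)
print(s.delete())   # 3
```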
This software is provided by MYCPLSU with the source code; you are free to use and modify the code as you need.
Stack Implementation in C (79.2 KiB, 14,114 hits) | <urn:uuid:43aadf56-ff86-4446-8ffe-a17629dc3271> | 3.546875 | 264 | Product Page | Software Dev. | 58.315732 | 95,522,571 |
According to equation (5), in the first half period a quantum of action is destroyed, and in the second half one a quantum of action ...
In two earlier works, I was able to demonstrate that the quantum of action is embodied by the Klein bottle, and that, in fact, the connection is already implicit in the standard formulation of subatomic spin, though the relationship is well disguised (Rosen 2008a, 2008b).
Instead, physics stays "hard," maintains its objectivity, by treating the quantum of action like a "black box.
His initial breakthrough came as an insight that Planck's quantum of action should be incorporated into the explanation of the mysterious stability of Rutherford's atom.
He thought that by taking Planck's quantum of action more seriously than did its discoverer, one might find the key to unlock the dual mysteries of atomic stability and spectroscopic regularity.
The discovery of the quantum of action meant, in the eyes of Bohr, that the physical description of the quantum world had to forsake some principles that had been so essential for the development of classical physics.
The phase of the contribution from a given path is the action S for that path in units of the quantum of action.
For Bohr the quantum of action was a site of intrinsic and irreducible uncertainty: improved technology or more powerful analysis could not touch it, could not turn it into certain knowledge, because of its innately probabilistic character.
Obviously, such concepts as macro-world, micro-world, mega-world, quantum of action, light velocity, particle-wave dualism, temporal-similar and spatial-similar intervals, etc.
More precisely, the uncertainty principle states that the product of the uncertainties in position and momentum must be greater than or equal to Planck's quantum of action divided by twice π (pi): if x is position, p is momentum, Δ is the amount of indeterminacy, and ħ ("h-bar") is Planck's constant divided by 2π, then the formula reads Δx·Δp ≥ ħ.
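Taking the relation as stated (Δx·Δp ≥ ħ), a short numerical check shows the scale involved. The electron-confinement example is my own illustration, not the article's; note also that the tighter textbook (Kennard) bound is ħ/2.

```python
import math

h = 6.62607015e-34       # Planck's constant, J*s (exact in SI since 2019)
hbar = h / (2 * math.pi)  # "h-bar": Planck's constant divided by 2*pi

def min_momentum_uncertainty(delta_x):
    """Smallest momentum uncertainty allowed by dx * dp >= hbar, in kg*m/s."""
    return hbar / delta_x

# Confining a particle to ~0.1 nm (roughly an atomic diameter):
print(f"{min_momentum_uncertainty(1e-10):.2e}")  # 1.05e-24
```

At everyday scales (Δx of millimeters), the same bound is some twenty orders of magnitude smaller, which is why the principle is invisible in classical physics.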
After it was found that the recession velocities of single and double galaxies appear to be quantized, a new quantum of action was also derived to yield [2, 3]:
For if the Quantum of Action is assumed to be infinitely small, Quantum Physics becomes merged with classical Physics.
Press Release Summary:
Researchers from the University of Toronto set up an apparatus to measure the polarization of a pair of entangled photons. Their main goal was to quantify how much the act of measuring polarization disturbed the photons, which they did by observing the light particles both before and after measurement. The results were published online in the journal Physical Review Letters, and the researchers will present their findings at OSA's Annual Meeting, Frontiers in Optics, Oct. 14-18.
Original Press Release:
More Certainty on Uncertainty's Quantum Mechanical Role
Researchers present findings at Frontiers in Optics 2012 that observation need not disturb systems as much as once thought, severing the act of measurement from the Heisenberg Uncertainty Principle
WASHINGTON — Scientists who study the ultra-small world of atoms know it is impossible to make certain simultaneous measurements, for example finding out both the location and momentum of an electron, with an arbitrarily high level of precision. Because measurements disturb the system, increased certainty in the first measurement leads to increased uncertainty in the second. The mathematics of this unintuitive concept – a hallmark of quantum mechanics – were first formulated by the famous physicist Werner Heisenberg at the beginning of the 20th century and became known as the Heisenberg Uncertainty Principle. Heisenberg and other scientists later generalized the equations to capture an intrinsic uncertainty in the properties of quantum systems, regardless of measurements, but the uncertainty principle is sometimes still loosely applied to Heisenberg’s original measurement-disturbance relationship. Now researchers from the University of Toronto have gathered the most direct experimental evidence that Heisenberg’s original formulation is wrong. The results were published online in the journal Physical Review Letters last month and the researchers will present their findings for the first time at the Optical Society’s (OSA) Annual Meeting, Frontiers in Optics (FiO), taking place in Rochester, N.Y., Oct. 14-18.
The Toronto team set up an apparatus to measure the polarization of a pair of entangled photons. The different polarization states of a photon, like the location and momentum of an electron, are what are called complementary physical properties, meaning they are subject to the generalized Heisenberg uncertainty relationship. The researchers’ main goal was to quantify how much the act of measuring the polarization disturbed the photons, which they did by observing the light particles both before and after the measurement. However, if the “before shot” disturbed the system, the “after shot” would be tainted.
The researchers found a way around this quantum mechanical Catch-22 by using techniques from quantum measurement theory to sneak non-disruptive peeks of the photons before their polarization was measured. “If you interact very weakly with your quantum particle, you won’t disturb it very much,” explained Lee Rozema, a Ph.D. candidate in quantum optics research at the University of Toronto, and lead author of the study. Weak interactions, however, can be like grainy photographs: they yield very little information about the particle. “If you take just a single measurement, there will be a lot of noise in that measurement,” said Rozema. “But if you repeat the measurement many, many times, you can build up statistics and can look at the average.”
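The averaging Rozema describes is easy to sketch numerically. The toy model below is purely illustrative (it is not the team's actual apparatus or analysis, and the numbers are made up): each weak measurement returns the true value buried in large random noise, and only the mean of many repetitions recovers it.

```python
import random

rng = random.Random(42)
true_value = 0.7      # hypothetical polarization parameter
noise_sigma = 5.0     # each weak reading is dominated by noise

def weak_measure():
    # One weak interaction: it barely disturbs the system, but a
    # single reading tells us almost nothing about true_value.
    return true_value + rng.gauss(0.0, noise_sigma)

samples = [weak_measure() for _ in range(100_000)]
estimate = sum(samples) / len(samples)
print(estimate)  # settles near 0.7 as the noise averages away
```

The standard error of the mean falls as 1/sqrt(N), so here 100,000 grainy readings pin the value to roughly ±0.016 even though any single reading is nearly useless.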
By comparing thousands of “before” and “after” views of the photons, the researchers revealed that their precise measurements disturbed the system much less than predicted by the original Heisenberg formula. The team’s results provide the first direct experimental evidence that a new measurement-disturbance relationship, mathematically computed by physicist Masanao Ozawa, at Nagoya University in Japan, in 2003, is more accurate.
“Precision quantum measurement is becoming a very important topic, especially in fields like quantum cryptography where we rely on the fact that measurement disturbs the system in order to transmit information securely,” said Rozema. “In essence, our experiment shows that we are able to make more precise measurements and give less disturbance than we had previously thought.”
Presentation FW4J.4, “Direct Violation of Heisenberg’s Precision Limit by Weak Measurements,” takes place Wednesday, Oct. 17 at 2:30 p.m. EDT at the Rochester Riverside Convention Center in Rochester, N.Y.
EDITOR’S NOTE: High-resolution images are available to members of the media upon request. Contact Angela Stark, email@example.com.
PRESS REGISTRATION: A press room for credentialed press and analysts will be located in the Rochester Riverside Convention Center, Sunday through Thursday, Oct. 14-18. Those interested in obtaining a press badge for FiO should contact OSA's Sarah Cogan at 202.416.1409 or firstname.lastname@example.org.
About the Meeting
Frontiers in Optics (FiO) 2012 is the Optical Society’s (OSA) 96th Annual Meeting and is being held together with Laser Science XXVIII, the annual meeting of the American Physical Society (APS) Division of Laser Science (DLS). The two meetings unite the OSA and APS communities for five days of quality, cutting-edge presentations, fascinating invited speakers and a variety of special events spanning a broad range of topics in optics and photonics—the science of light—across the disciplines of physics, biology and chemistry. FiO 2012 will also offer a number of Short Courses designed to increase participants’ knowledge of a specific subject in the optical sciences while offering the experience of insightful teachers. An exhibit floor featuring leading optics companies will further enhance the meeting. More information at www.FrontiersinOptics.org.
Uniting more than 180,000 professionals from 175 countries, the Optical Society (OSA) brings together the global optics community through its programs and initiatives. Since 1916 OSA has worked to advance the common interests of the field, providing educational resources to the scientists, engineers and business leaders who work in the field by promoting the science of light and the advanced technologies made possible by optics and photonics. OSA publications, events, technical groups and programs foster optics knowledge and scientific collaboration among all those with an interest in optics and photonics. For more information, visit www.osa.org . | <urn:uuid:49066c45-8e09-446a-b4cb-47db19e0b81a> | 3.078125 | 1,302 | News (Org.) | Science & Tech. | 30.888234 | 95,522,596 |
Wildfires across the western United States have been getting bigger and more frequent over the last 30 years – a trend that could continue as climate change causes temperatures to rise and drought to become more severe in the coming decades, according to new research.
The number of wildfires over 1,000 acres in size in the region stretching from Nebraska to California increased by a rate of seven fires a year from 1984 to 2011, according to a new study accepted for publication in Geophysical Research Letters, a journal published by the American Geophysical Union.
A satellite image of the 2011 Las Conchas Fire in New Mexico shows the 150,874 acres burned in magenta and the unburned areas in green. This image was created with data from the Monitoring Trends in Burn Severity (MTBS) Project that the authors of a new study used to measure large wildfires in the western United States.
Credit: Philip Dennison/MTBS
The total area these fires burned increased at a rate of nearly 90,000 acres a year – an area the size of Las Vegas, according to the study. Individually, the largest wildfires grew at a rate of 350 acres a year, the new research says.
“We looked at the probability that increases of this magnitude could be random, and in each case it was less than one percent,” said Philip Dennison, an associate professor of geography at the University of Utah in Salt Lake City and lead author of the paper.
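The release does not say which statistical test the authors used; one generic way to put a probability on a trend "being random" is a permutation test on the least-squares slope. The sketch below uses invented fire counts rising by about seven fires a year, merely to illustrate the idea.

```python
import random

def ols_slope(xs, ys):
    # Ordinary least-squares slope of ys against xs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def permutation_p(xs, ys, trials=10_000, seed=0):
    # Fraction of random shuffles whose slope matches or beats the
    # observed one: an estimate of the chance the trend is random.
    rng = random.Random(seed)
    observed = ols_slope(xs, ys)
    shuffled = list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(shuffled)
        if ols_slope(xs, shuffled) >= observed:
            hits += 1
    return hits / trials

# Hypothetical data: 28 fire seasons rising by ~7 large fires a year.
years = list(range(1984, 2012))
counts = [50 + 7 * (y - 1984) + (y % 3) for y in years]
print(ols_slope(years, counts), permutation_p(years, counts))
# slope near 7; p-value well below 0.01
```

A p-value below 0.01 corresponds to the "less than one percent" chance of randomness quoted above.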
The study’s authors used satellite data to measure areas burned by large fires since 1984, and then looked at climate variables, like seasonal temperature and rainfall, during the same time.
The researchers found that most areas that saw increases in fire activity also experienced increases in drought severity during the same time period. They also saw an increase in both fire activity and drought over a range of different ecosystems across the region.
“Twenty eight years is a pretty short period of record, and yet we are seeing statistically significant trends in different wildfire variables—it is striking,” said Max Moritz, a co-author of the study and a fire specialist at the University of California-Berkeley Cooperative Extension.
These trends suggest that large-scale climate changes, rather than local factors, could be driving increases in fire activity, the scientists report. The study stops short of linking the rise in number and size of fires directly to human-caused climate change. However, it says the observed changes in fire activity are in line with long-term, global fire patterns that climate models have projected will occur as temperatures increase and droughts become more severe in the coming decades due to global warming.
“Most of these trends show strong correlations with drought-related conditions which, to a large degree, agree with what we expect from climate change projections,” said Moritz.
A research ecologist not connected to the study, Jeremy Littell of the U.S. Geological Survey (USGS) at the Alaska Climate Science Center in Anchorage, AK, said the trends in fire activity reported in the paper resemble what would be expected from rising temperatures caused by climate change. Other factors, including invasion of non-native species and past fire management practices, are also likely contributing to the observed changes in fire activity, according to the study. Littell and Moritz said increases in fire activity in forested areas could be at least a partial response to decades of fire suppression.
“It could be that our past fire suppression has caught up with us, and an increased area burned is a response of more continuous fuel sources,” Littell said. “It could also be a response to changes in climate, or both.”
To study wildfires across the western U.S., the researchers used data from the Monitoring Trends in Burn Severity Project (MTBS). The project, supported by the U.S. Forest Service and USGS, uses satellite data to measure fires that burned more than 1,000 acres.
While other studies have looked at wildfire records over longer time periods, this is the first study to use high-resolution satellite data to examine wildfire trends over a broad range of landscapes, explained Littell. The researchers divided the region into nine distinct “ecoregions,” areas that had similar climate and vegetation. The ecoregions ranged from forested mountains to warm deserts and grasslands.
Looking at the ecoregions more closely, the authors found that the rise in fire activity was the strongest in certain regions of the United States: across the Rocky Mountains, Sierra Nevada and Arizona-New Mexico mountains; the southwest desert in California, Nevada, Arizona, New Mexico and parts of Texas; and the southern plains across western Texas, Oklahoma, Kansas and eastern Colorado. These are the same regions that would be expected to be most severely affected by changes in climate, said Dennison.
Notes for Journalists
Journalists and public information officers (PIOs) of educational and scientific institutions who have registered with AGU can download a PDF copy of this article by clicking on this link: http://onlinelibrary.wiley.com/doi/10.1002/2014GL059576/abstract
Or, you may order a copy of the final paper by emailing your request to Alexandra Branscombe at firstname.lastname@example.org. Please provide your name, the name of your publication, and your phone number.
Neither the paper nor this press release is under embargo.
“Large wildfire trends in the western United States, 1984-2011”
Philip E. Dennison: Department of Geography, University of Utah, Salt Lake City, Utah, USA;
Simon C. Brewer: Department of Geography, University of Utah, Salt Lake City, Utah, USA;
James D. Arnold: Department of Geography, University of Utah, Salt Lake City, Utah, USA;
Max A. Moritz: Department of Environmental Science, Policy, and Management, University of California, Berkeley, USA.
Contact information for the authors:
Philip E. Dennison: +1 (801) 742-1539, email@example.com
Max A. Moritz, firstname.lastname@example.org
+1 (202) 777-7516
University of Utah Contact
+1 (801) 581-8993
Peter Weiss | American Geophysical Union
Knowledge about similar objects out at this distance (Pluto, Charon, Enceladus) has been used, together with the laws of angular momentum, to work out what is physically possible. Formation mechanisms are ignored at this point, so as to concentrate on what is physically and mechanically possible given observations of Haumea and several other well-observed analogues.
Pluto, Charon and Enceladus have been found to have ice shells with liquid water oceans beneath. Geysers on Enceladus feed a ring of Saturn, so a similar process probably explains the formation of Haumea's ring. Out-jetting of water at speed is what is happening at Enceladus, and this would be enough to torque even as large a body as Haumea. A thick ice shell would be enough to hold up to compression at the neck as centrifugal force, friction, and conservation of angular momentum do the rest.
There will be gradual and even stretching of the neck as angular momentum is increased - in the same way that pairs figure skaters perform the "death spiral"
Tidal friction from the moons and ring keeps the motion circularised in the same way as the skating spin. | <urn:uuid:617e2c83-fbab-4269-a136-cd75e1089cb1> | 3.796875 | 249 | Personal Blog | Science & Tech. | 36.2325 | 95,522,614 |
|Debugging with GDB|
Here we describe the packets gdb uses to implement tracepoints (see Tracepoints).
In the series of action packets for a given tracepoint, at most one can have an ‘S’ before its first action. If such a packet is sent, it and the following packets define “while-stepping” actions. Any prior packets define ordinary actions — that is, those taken when the tracepoint is first hit. If no action packet has an ‘S’, then all the packets in the series specify ordinary tracepoint actions.
The ‘action...’ portion of the packet is a series of actions, concatenated without separators. Each action has one of the following forms:
Any number of actions may be packed together in a single ‘QTDP’ packet, as long as the packet does not exceed the maximum packet length (400 bytes, for many stubs). There may be only one ‘R’ action per tracepoint, and it must precede any ‘M’ or ‘X’ actions. Any registers referred to by ‘M’ and ‘X’ actions must be collected by a preceding ‘R’ action. (The “while-stepping” actions are treated as if they were attached to a separate tracepoint, as far as these restrictions are concerned.)
start is the offset of the bytes within the overall source string, while slen is the total length of the source string. This is intended for handling source strings that are longer than will fit in a single packet.
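The start/slen bookkeeping can be illustrated with a small helper. This is a sketch only: a real stub would also hex-encode the string bytes and wrap each piece in full packet framing, both omitted here.

```python
def source_pieces(source, max_piece=200):
    # Split one long source string into pieces that each fit in a packet.
    # Every piece carries 'start' (its offset within the full string) and
    # 'slen' (the total length), so the receiver can reassemble the string.
    slen = len(source)
    return [(start, slen, source[start:start + max_piece])
            for start in range(0, slen, max_piece)]

# A hypothetical 620-byte action list, split into 4 pieces of <= 200 bytes:
pieces = source_pieces("collect $regs; collect myglobal" * 20)
```

Reassembly is just concatenating the pieces in start order and checking the result's length against slen.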
The available string types are ‘at’ for the location, ‘cond’ for the conditional, and ‘cmd’ for an action command. gdb sends a separate packet for each command in the action list, in the same order in which the commands are stored in the list.
The target does not need to do anything with source strings except report them back as part of the replies to the ‘qTfP’/‘qTsP’ query packets.
Although this packet is optional, and gdb will only send it
if the target replies with ‘TracepointSource’ (see General Query Packets), it makes both disconnected tracing and trace files
much easier to use. Otherwise the user must be careful that the
tracepoints in effect while looking at trace frames are identical to
the ones in effect during the trace run; even a small discrepancy
could cause ‘tdump’ not to work, or a particular trace frame not be found.
A successful reply from the stub indicates that the stub has found the requested frame. The response is a series of parts, concatenated without separators, describing the frame we selected. Each part has one of the following forms:
gdb uses this to mark read-only regions of memory, like those
containing program code. Since these areas never change, they should
still have the same contents they did when the tracepoint was hit, so
there's no reason for the stub to refuse to provide their contents.
The reply has the form:
‘1’ if the trace is presently running, or ‘0’ if not. It is followed by semicolon-separated optional fields that an agent may use to report additional status.
If the trace is not running, the agent may report any of several explanations as one of the optional fields:
Additional optional fields supply statistical and other information. Although not required, they are extremely useful for users monitoring the progress of a trace run. If a trace has stopped, and these numbers are reported, they must reflect the state of the just-stopped trace.
‘1’ means that the trace buffer is circular and old trace frames will be discarded if necessary to make room; ‘0’ means that the trace buffer is linear and may fill up.
‘1’ means that tracing will continue after gdb disconnects; ‘0’ means that the trace run will stop.
‘while-stepping’ steps are not counted as separate hits, but the steps’ space consumption is added into the usage number.
‘qTfP’ to get the first piece of data, and multiple ‘qTsP’ to get additional pieces. Replies to these packets generally take the form of the ‘QTDP’ packets that define tracepoints. (FIXME add detailed syntax)
‘qTfV’ to get the first variable, and multiple ‘qTsV’ to get additional variables. Replies to these packets follow the syntax of the ‘QTDV’ packets that define trace state variables.
‘qTfSTM’ to get the first piece of data, and multiple ‘qTsSTM’ to get additional pieces. Replies to these packets take the following form:
The address is encoded in hex; id and extra are strings encoded in hex.
In response to each query, the target will reply with a list of one or
more markers, separated by commas. gdb will respond to each
reply with a request for more markers (using the ‘qs’ form of the
query), until the target responds with ‘l’ (lower-case ell, for last).
‘qTsSTM’ packets that list static tracepoint markers.
‘l’ indicates that no bytes are available.
‘-1’ tells the target to use whatever size it prefers.
In the ‘tstop’ reply, the text fields are arbitrary strings, hex-encoded.
When installing fast tracepoints in memory, the target may need to relocate the instruction currently at the tracepoint address to a different address in memory. For most instructions, a simple copy is enough, but, for example, call instructions that implicitly push the return address on the stack, and relative branches or other PC-relative instructions require offset adjustment, so that the effect of executing the instruction at a different address is the same as if it had executed in the original location.
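For the PC-relative case, the needed adjustment is simple arithmetic. The sketch below is illustrative only; real instruction sets add per-instruction details, such as offsets measured from the end of the instruction rather than its start.

```python
def relocated_offset(old_addr, new_addr, old_offset):
    # A PC-relative branch encodes target = address + offset.  Copying the
    # instruction to new_addr must preserve the absolute target, so the
    # stored offset changes by (old_addr - new_addr).
    target = old_addr + old_offset
    return target - new_addr

# A branch at 0x1000 aiming 0x40 ahead, moved to a scratch buffer at 0x2000:
print(hex(relocated_offset(0x1000, 0x2000, 0x40)))  # -0xfc0, same target 0x1040
```

A plain copy would leave the offset 0x40 in place and silently retarget the branch to 0x2040, which is exactly the bug offset adjustment prevents.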
In response to several of the tracepoint packets, the target may also respond with a number of intermediate ‘qRelocInsn’ request packets before the final result packet, to have gdb handle this relocation operation. If a packet supports this mechanism, its documentation will explicitly say so. See for example the above descriptions for the ‘QTStart’ and ‘QTDP’ packets. The format of the request is: | <urn:uuid:cf5fecf3-bc04-4c08-9320-fdc97f26ced8> | 2.75 | 1,360 | Documentation | Software Dev. | 47.01856 | 95,522,620 |
Check out these books from experts. You have come to the right page. These books should help you learn basic and advanced concepts of PHP development. Books are said to be man’s best friends.
Our friends might not share all their knowledge and skills, but books will indiscriminately do so. The advantage of owning a book is that you can refer them any number of times and anytime. Though the Internet has reduced paper waste, buying books, especially academic books would never stop unless the world ends. PHP, the acronym for Hypertext Preprocessor is a widely used programming language that enables web designers to develop interactive and dynamic web contents using the database.
It is indeed the cheapest and most effective alternative to other technologies like ASP. Also, PHP is free of cost and does not require high programming skills to start. Beginners with a basic knowledge of programming concepts can easily learn PHP. In addition, its open-source nature allows developers to experiment with code, implement new concepts and develop new software tools and applications for all practical purposes.
This book is highly recommended by PHP professionals as a reference material for all challenging PHP projects. The David Powers has explained concepts without causing much confusion and talks to the point without diverting or comparing with other similar technologies. It is more than a dictionary or a reference material, giving all readers a solution oriented approach towards PHP. For example, the explanation of concepts like classes, objects, database, hierarchies is explained in a manner which can be easily comprehended by all programmers. Learning difficult concepts such as debugging, using tools of public domain or connecting with the database is a cakewalk with this book. Larry Ullman’s book is mostly custom made for those who have basic knowledge of HTML and are venturing into PHP projects.
This is the fourth edition of the series, free of many old and outdated concepts. The book also throws light on basic concepts like arrays, variables, and regular expressions, which help form a strong base for understanding PHP. It goes the extra mile by integrating concepts such as XML, automation, sessions, and web services, covering each in detail and in simple layman's language.
The Murach style of explaining codes with screenshots is indeed very useful and much appreciated. This book by Matt Zandstra answers all questions of a PHP programmer. It’s indispensable for self-learners who need to understand the concepts in a simple manner and implement their programs in a highly technical manner. Besides, all PHP and OPPS concepts are clearly explained. What’s more, the practical exercises in this book definitely hone the programming skills of learners. This book Is essentially the starting point for novice PHP programmers and other professionals in the open source community.
The above-mentioned books are indeed the best amongst the lot and have been read, reviewed and recommended by several PHP professionals in the industry. While there are many other PHP books available, these 7 are all a beginner would need to become proficient in PHP. Updated Article : October 8 2016 : Fixed minor typos and updated links to latest books. Ada E-book yang semoga dapat bermanfaat.
NOBUKU” pada link di atas dengan no list e-book yang mau di download. A Step-By-Step Guide To Installing And Securing The Tru64 Unix Operating System Version 5. Compaq Tru64 UNIX V5 Utilities and Commands. Fundamentals of the UNIX System.
Programming for the Microsoft . The Visibooks Guide to Photoshop Elements 4. Data Mining with Neural Networks. C programming guide for the ADSP-2100 processor family. 0 classroom in a book. Can somebody please explain DDE, ActiveX, OPC, and Visual Basic. Guide to ARMLinux for developers.
Basic principles of signal integrity. The Am2900 family data book. Heating in aluminum electrolytic strobe and photoflash capacitors. Fortran 90 and computational science. Basic theory and application of electron tubes.
Improving the transient immunity performance of microcontroller-based applications. Fitting models to biological data using linear and nonlinear regression. A practical guide to curve fitting. Hardening power supplies to line voltage transients. | <urn:uuid:27daeb8a-a5d9-4542-8a3c-5b4f32a1fb86> | 2.65625 | 1,002 | Spam / Ads | Software Dev. | 46.540598 | 95,522,655 |
I learned that scientists switched from measuring in lux (the amount of illumination) to measuring light power only in the 1980s! Today, light power is mostly expressed as PAR, but some authors still seem to refer to the precursor, µE. (An obvious salute to Einstein.)
Dinoflagellates have enemies too! Another article for folks trying to understand dinoflagellates a little better.
Chaeto and turf algae allow the highest density of epiphytic toxic dinos of any studied algae. (See Table 1.) Even more interesting to me is Appendix A, where a variety of macroalgae are listed, along with notes on palatability to some known grazers.
Article Title: Parasitic Life Styles of Marine Dinoflagellates. Author: Coats, D. Wayne. Journal: Journal of Eukaryotic Microbiology, Volume 46, Issue 4, July 1999, Pages 402–409. Universal Link: http://dx.doi.org/10.1111/j.1550-7408.1999.tb04620.x (PDF available). "[...] parasitic dinoflagellates exhibit varying degrees of host specificity. The fish ectoparasite Amyloodinium ocellatum shows almost no host preference, with infections reported for over 100 species representing more than..."
The proposal comes from an international team of researchers from Switzerland, Belgium, Spain and Singapore, and is published today in Nature Physics. It is based on what the researchers call a 'hidden influence inequality'. This exposes how quantum predictions challenge our best understanding about the nature of space and time, Einstein's theory of relativity.
Trying to explain quantum “spooky action at a distance” using any kind of signal pits Einstein’s relativity against our concept of a smooth spacetime.
Credit: Timothy Yeo / CQT, National University of Singapore
"We are interested in whether we can explain the funky phenomena we observe without sacrificing our sense of things happening smoothly in space and time," says Jean-Daniel Bancal, one of the researchers behind the new result, who carried out the research at the University of Geneva in Switzerland. He is now at the Centre for Quantum Technologies at the National University of Singapore.
Excitingly, there is a real prospect of performing this test.
The implications of quantum theory have been troubling physicists since the theory was invented in the early 20th Century. The problem is that quantum theory predicts bizarre behaviour for particles – such as two 'entangled' particles behaving as one even when far apart. This seems to violate our sense of cause and effect in space and time. Physicists call such behaviour 'nonlocal'.
It was Einstein who first drew attention to the worrying implications of what he termed the "spooky action at a distance" predicted by quantum mechanics. Measure one in a pair of entangled atoms to have its magnetic 'spin' pointing up, for example, and quantum physics says the other can immediately be found pointing in the opposite direction, wherever it is and even when one could not predict beforehand which particle would do what. Common sense tells us that any such coordinated behaviour must result from one of two arrangements. First, it could be arranged in advance. The second option is that it could be synchronised by some signal sent between the particles.
In the 1960s, John Bell came up with the first test to see whether entangled particles followed common sense. Specifically, a test of a 'Bell inequality' checks whether two particles' behaviour could have been based on prior arrangements. If measurements violate the inequality, pairs of particles are doing what quantum theory says: acting without any 'local hidden variables' directing their fate. Starting in the 1980s, experiments have found violations of Bell inequalities time and time again.
Quantum theory was the winner, it seemed. However, conventional tests of Bell inequalities can never completely kill hope of a common sense story involving signals that don't flout the principles of relativity. That's why the researchers set out to devise a new inequality that would probe the role of signals directly.
Experiments have already shown that if you want to invoke signals to explain things, the signals would have to be travelling faster than light – more than 10,000 times the speed of light, in fact. To those who know that Einstein's relativity sets the speed of light as a universal speed limit, the idea of signals travelling 10,000 times as fast as light already sets alarm bells ringing. However, physicists have a get-out: such signals might stay as 'hidden influences' – usable for nothing, and thus not violating relativity. Only if the signals can be harnessed for faster-than-light communication do they openly contradict relativity.
The new hidden influence inequality shows that the get-out won't work when it comes to quantum predictions. To derive their inequality, which sets up a measurement of entanglement between four particles, the researchers considered what behaviours are possible for four particles that are connected by influences that stay hidden and that travel at some arbitrary finite speed.
Mathematically (and mind-bogglingly), these constraints define an 80-dimensional object. The testable hidden influence inequality is the boundary of the shadow this 80-dimensional shape casts in 44 dimensions. The researchers showed that quantum predictions can lie outside this boundary, which means they are going against one of the assumptions. Outside the boundary, either the influences can't stay hidden, or they must have infinite speed.
Experimental groups can already entangle four particles, so a test is feasible in the near future (though the precision of experiments will need to improve to make the difference measurable). Such a test will boil down to measuring a single number. In a Universe following the standard relativistic laws we are used to, 7 is the limit. If nature behaves as quantum physics predicts, the result can go up to 7.3.
So if the result is greater than 7 – in other words, if the quantum nature of the world is confirmed – what will it mean?
Here, there are two choices. On the one hand, there is the option to defy relativity and 'unhide' the influences, which means accepting faster-than-light communication. Relativity is a successful theory that researchers would not call into question lightly, so for many physicists this is seen as the most extreme possibility.
The remaining option is to accept that influences must be infinitely fast – or that there exists some process that has an equivalent effect when viewed in our spacetime. The current test couldn't distinguish. Either way, it would mean that the Universe is fundamentally nonlocal, in the sense that every bit of the Universe can be connected to any other bit anywhere, instantly. That such connections are possible defies our everyday intuition and represents another extreme solution, but arguably preferable to faster-than-light communication.
"Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them," says Nicolas Gisin, Professor at the University of Geneva, Switzerland, and member of the team.
The researchers that carried out the work, in addition to Dr Bancal and Prof Gisin, are Dr Stefano Pironio from the Free University of Bruxelles in Belgium, Professor Antonio Acín from the Institute of Photonic Sciences (ICFO) in Barcelona, Dr Yeong-Cherng Liang from the University of Geneva, and Professor Valerio Scarani from the Centre for Quantum Technologies and the Department of Physics of the National University of Singapore.
Reference: J.-D. Bancal et al., "Quantum nonlocality based on finite-speed causal influences leads to superluminal signalling", Nature Physics, DOI: 10.1038/NPHYS2460 (2012).
Jenny Hogan | EurekAlert!