Quantization of the electromagnetic field/Related Articles

From Citizendium, the Citizens' Compendium. See also changes related to Quantization of the electromagnetic field, or pages that link to this article.

- Classical electromagnetic field
- Electromagnetic wave: A change, periodic in space and time, of an electric field E(r,t) and a magnetic field B(r,t); a stream of electromagnetic waves, referred to as electromagnetic radiation, can be seen as a stream of massless elementary particles, named photons.
- Electromagnetic radiation: A collection of electromagnetic waves, usually of different wavelengths.
- Maxwell equations: Mathematical equations describing the interrelationship between electric and magnetic fields, and the dependence of the fields on electric charge and current densities.
- Vector potential: A vector field A(r,t) whose curl gives the magnetic field B(r,t); together with the scalar potential it specifies the electromagnetic field.
- Quantization: Replacement of a classical variable by a quantum mechanical operator; the phenomenon that energy is discontinuous.
- Quantum electrodynamics: A relativistic theory of the interaction between electrically charged bodies based upon the exchange of photons, the quanta of the electromagnetic field.
- Second quantization: Reformulation of quantum mechanics in which fields, rather than particle coordinates, are the quantized variables, with states labeled by occupation numbers.
- Standard Model: A mathematical theory that describes the weak, electromagnetic and strong interactions between leptons and quarks, the basic particles of particle physics.
- Photon: Elementary particle with zero rest mass and unit spin associated with the electromagnetic field.
In chemistry, a heteroatom (from Ancient Greek heteros, "different", + atomos, "uncut") is any atom that is not carbon or hydrogen. Usually, the term is used to indicate that non-carbon atoms have replaced carbon in the backbone of the molecular structure. Typical heteroatoms are nitrogen (N), oxygen (O), sulfur (S), phosphorus (P), chlorine (Cl), bromine (Br), and iodine (I).

In the description of protein structure, in particular in the Protein Data Bank file format, a heteroatom record (HETATM) describes an atom as belonging to a small-molecule cofactor rather than being part of a biopolymer chain.

In the context of zeolites, the term heteroatom refers to partial isomorphous substitution of the typical framework atoms (silicon, aluminium, and phosphorus) by other elements such as beryllium, vanadium, and chromium. The goal is usually to adjust properties of the material (e.g., Lewis acidity) to optimize it for a certain application (e.g., catalysis).
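Because HETATM records follow the PDB's fixed-column layout, separating cofactor atoms from polymer atoms amounts to slicing character ranges. A minimal sketch in Python (the record line below is invented for illustration; the column positions follow the wwPDB fixed-width coordinate format):

```python
# Minimal sketch of reading a HETATM record (the example line is invented).
# PDB columns (1-indexed): record name 1-6, atom name 13-16,
# residue name 18-20, x/y/z 31-54, element symbol 77-78.
record = "HETATM 2209 FE   HEM A 154      12.251   8.372  -4.018  1.00 12.52          FE"

def parse_hetatm(line):
    """Return a dict for a HETATM line, or None for any other record type."""
    if not line.startswith("HETATM"):
        return None
    return {
        "atom_name": line[12:16].strip(),
        "res_name": line[17:20].strip(),   # e.g. HEM for a heme cofactor
        "x": float(line[30:38]),
        "y": float(line[38:46]),
        "z": float(line[46:54]),
        "element": line[76:78].strip(),
    }

print(parse_hetatm(record))
# {'atom_name': 'FE', 'res_name': 'HEM', 'x': 12.251, 'y': 8.372,
#  'z': -4.018, 'element': 'FE'}
```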
Nine times more ice is melting annually due to warmer temperatures

Ice loss from Canada's Arctic glaciers has transformed them into a major contributor to sea level change, new research by University of California, Irvine glaciologists has found. From 2005 to 2015, surface melt off ice caps and glaciers of the Queen Elizabeth Islands grew by an astonishing 900 percent, from an average of three gigatons to 30 gigatons per year, according to results published today in the journal Environmental Research Letters.

"In the past decade, as air temperatures have warmed, surface melt has increased dramatically," said lead author Romain Millan, an Earth system science doctoral student. The team found that in the past decade overall ice mass declined markedly, turning the region into a major contributor to sea level change. Canada holds 25 percent of all Arctic ice, second only to Greenland.

The study provides the first long-term analysis of ice flow to the ocean, from 1991 to 2015. The Canadian ice cap has glaciers flowing into the Arctic Ocean, Baffin Bay and Nares Strait. The researchers used satellite data and a regional climate model to tally the "balance" of total gain and loss each year, and the reasons for it. Because of the huge number of glaciers terminating in marine basins in the area, they expected ice discharge from tidewater glacier fronts into the sea to be the primary cause of loss. In fact, they determined that until 2005 the ice loss was caused about equally by two factors: icebergs calving from glacier fronts into the ocean accounted for 52 percent, and melting on glacier surfaces exposed to air contributed 48 percent. But since then, as atmospheric temperatures have steadily climbed, surface melt accounts for 90 percent.

Millan said that in recent years ice discharge was a major component in only a few basins, and that even rapid, short-term increases from these ice fields had only a minor impact on the long-term trend. Millan added, "We identified meltwater runoff as the major contributor to these ice fields' mass loss in recent years. With the ongoing, sustained and rapid warming of the high Arctic, the mass loss of the Queen Elizabeth Islands area is likely to continue to increase significantly in coming decades."

Fellow authors are UCI professor of Earth system science Eric Rignot and UCI assistant research scientist Jeremie Mouginot. NASA provided funding.

Janet Wilson | EurekAlert!
A contact force is any force that requires contact to occur. Contact forces are ubiquitous and are responsible for most visible interactions between macroscopic collections of matter. Pushing a car up a hill, kicking a ball, or pushing a desk across a room are everyday examples of contact forces at work. In the first case the force is continuously applied by the person to the car, while in the second case the force is delivered in a short impulse.

Contact forces are often decomposed into orthogonal components: one perpendicular to the surface(s) in contact, called the normal force, and one parallel to the surface(s) in contact, called the friction force.

In the Standard Model of modern physics, the four fundamental forces of nature are known to be non-contact forces. The strong and weak interactions primarily deal with forces within atoms, while gravitational effects are only obvious on an ultra-macroscopic scale. Molecular and quantum physics show that the electromagnetic force is the fundamental interaction responsible for contact forces. The interaction between macroscopic objects can be roughly described as resulting from the electromagnetic interactions between the protons and electrons of the atomic constituents of these objects. Everyday objects do not actually touch; rather, contact forces are the result of the interactions of the electrons at or near the surfaces of the objects.
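The decomposition into normal and friction components is simply vector projection onto the surface normal. A minimal sketch with made-up numbers (Python with NumPy; the force and normal below are illustrative, not taken from the text):

```python
import numpy as np

# Decompose a contact force into normal and friction components.
# Numbers are invented for illustration; n must be a unit vector.
F = np.array([3.0, -1.0, 5.0])   # total contact force (N)
n = np.array([0.0, 0.0, 1.0])    # unit normal of the surface

F_normal = np.dot(F, n) * n      # component perpendicular to the surface
F_friction = F - F_normal        # remainder, parallel to the surface

print(F_normal)    # [0. 0. 5.]
print(F_friction)  # [ 3. -1.  0.]
```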
28-12-2014, 12:40 PM

Today technology plays a vital role in every aspect of life, and rising standards of technology in many fields have carried us far. But presently available technologies are unable to interact with atoms, such minute particles; hence nanotechnology has been developed. Nanotechnology is a technology that works with individual atoms with a view to creating a desired product, and it has wide applications in many fields. One important application is cryonics. Cryonics is an attempt to revive the dead: the body is preserved first, and the hope is that molecular machines based on nanotechnology could later revive the patient by repairing damaged cells.

cryonics paper.pdf (Size: 244.51 KB / Downloads: 69)
Fusion: the safe alternative

Wibbly-wobbly magnetic fusion stuff: The return of the stellarator

Artistically shaped magnets may make stellarators easier to manage than ITER.

Fusion powers the Sun, where hydrogen ions are forced together by the high pressure and temperature. The nuclei join to create helium and release a lot of energy in the process. Doing the same thing on Earth means creating the same conditions that drive hydrogen nuclei together, which is easier said than done. Humans are very clever, but achieving fusion in a magnetic bottle will probably be one of our cleverer tricks. Making that bottle is difficult, and Ars recently had the chance to visit the people and facilities behind one of our most significant attempts at it.

For most people, magnetic bottles for fusion bring to mind the tokamak, a donut-shaped device that confines the plasma in a ring. But the tokamak is just one approach; there's a more complicated version that is helical in shape. Somewhere in between the two is the stellarator. Here, the required magnetic field is a bit easier to create than for a helix, but it's still far more complicated than for a tokamak.

At the Max Planck Institute for Plasma Physics (MPIPP) in Greifswald, located on the Baltic coast in Germany, the latest iteration of the stellarator design is preparing to restart after its first trial run. The researchers putting it all together are pretty excited by the prospect; frankly, every engineer and scientist would be excited by the prospect of turning on a new piece of hardware. But it's even more so the case at MPIPP, since the new gear happens to be something they designed and built. The stellarator is something special: the realization of a design that is more than 50 years in the making.

The heliac, the stellarator, and the tokamak are all trying to achieve the same thing: confine a plasma tightly in a magnetic bottle, tightly enough to push protons in close to each other. They all use a more-or-less donut shape, but that more-or-less involves some really important differences. That difference makes the stellarator a pretty special science and engineering challenge.

To highlight that challenge, we can start with the simpler and more familiar tokamak. The tokamak begins with a donut-shaped vacuum vessel. The magnetic field is applied by a series of flat coils that are wrapped around the tube of the donut. This, along with a few other magnets, creates a magnetic field that runs in parallel lines around the interior of the donut. When a plasma is injected, its charged particles corkscrew around the field lines. At first sight this looks like it should confine the plasma in a series of tubes. This doesn't happen, though. As Professor Thomas Klinger, head of the stellarator project at the MPIPP, says, "The vacuum magnetic field has no confinement properties because it's a purely toroidal field. And a purely toroidal field does not confine a plasma at all; that was already realised by Fermi in 1951." The problem is that the charged particles can drift from magnetic field line to magnetic field line. Since the magnetic field doesn't have the same strength across the cross-section of the torus, particle drift to the outside is much more energetically favorable. So the plasma simply expands outward and hits the wall.
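Before getting to how that drift is tamed, it's worth pinning down the scale of the corkscrew motion itself. The radius of the corkscrew (the gyroradius) is r = m·v_perp/(qB); a back-of-envelope sketch with illustrative numbers (the field strength here is an assumption, not a quoted machine spec) shows that a hot proton circles a field line on millimetre scales, far smaller than the vessel:

```python
import math

# Gyroradius r = m * v_perp / (q * B) for a proton corkscrewing around
# a field line. B is assumed for illustration; 2,000 eV matches the ion
# temperature quoted later for W7-X's early runs.
m_p = 1.673e-27          # proton mass (kg)
q = 1.602e-19            # elementary charge (C)
B = 2.5                  # magnetic field strength (T), assumed
T_eV = 2000.0            # ion temperature (eV)

# Rough thermal speed, treating T_eV as kinetic energy (1 eV = 1.602e-19 J)
v_th = math.sqrt(2 * T_eV * 1.602e-19 / m_p)

r_gyro = m_p * v_th / (q * B)
print(f"thermal speed ~ {v_th:.3e} m/s, gyroradius ~ {r_gyro * 1000:.1f} mm")
# thermal speed ~ 6.19e+05 m/s, gyroradius ~ 2.6 mm
```

So confinement fails not because the corkscrews are too big, but because the particles slowly drift across field lines, as described above.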
To obtain high plasma temperatures in a tokamak, this drift has to be stopped. To do this, a large current has to flow through the plasma. "You have to twist the magnetic field lines, which is done by the current," says Klinger. The current generates a second magnetic field, which distorts the applied field so that the field lines run in a twisted spiral. A charged particle in the very short term can still be thought of as corkscrewing around a single field line. But, because the field line spirals around, it is better to think of a series of nested surfaces (like a matryoshka doll), with the particles in the plasma confined on these surfaces. One consequence of this design is that, while particles still hop between field lines, they can now drift from low magnetic field to high magnetic field, and vice versa; an outward flow is no longer favorable. So, on average, the rate at which particles escape confinement is much smaller.

Strong confinement means that the plasma has to support a large current to generate the right magnetic field shape. For the international thermonuclear experimental reactor (ITER), the plasma will carry several million amps of current. Unfortunately, the current through the plasma, the plasma density, and the temperature don't end up the same everywhere, and these differences have the potential to destabilize the current. In particular, if the current is not evenly distributed across the plasma, the lovely nested surfaces that confine the plasma may be destroyed. This process can rapidly spiral out of control, dumping all the current in the plasma into the vessel walls in an event called a disruption. A disruption is not something to be taken lightly, as Klinger notes. "A grown-up tokamak like JET [the Joint European Torus] or our ASDEX upgrade [Axially Symmetric Divertor Experiment] starts to jump in the case of a disruption," he says. "These are big machines; imagine such a big machine starts jumping."

So while the tokamak can use a self-organizing magnetic field to confine the plasma, that field is subject to various instabilities. To avoid these building into problems, the tokamak has to operate in pulsed mode (though those pulses may be hours in duration), and it requires a lot of sensors, control systems, and feedback to minimize the instabilities. To get this right, you need a good physical model of the plasma physics. Researchers use the model to look for the telltale signs that indicate the beginning of an instability. "My modeling is mostly related to how do we control these instabilities. How do we affect these instabilities so that they either do not occur or that, when they occur, we suppress them or ameliorate their presence," says Dr. Egbert Westerhof from the Dutch Institute for Fundamental Energy Research (DIFFER).

In the tokamak, this sort of modeling is simplified by the symmetry of the device, which reduces a 3D problem to 2D. The results from these physics-based models are then used to create empirical models that do not really contain detailed physics but can quickly provide predictive results within some limited range of plasma properties. This simplicity has helped produce models that can calculate the tokamak's behavior faster than the tokamak can misbehave, a necessity for a successful control system. This hasn't really happened with the stellarator designs. "They are really far [ahead of us] in tokamaks because they have these models that work really well. They have been tested.
And now they can actually predict the temperature and density profiles faster than real time, which is incredible. But we don't have these models yet," explains Dr. Josefine Proll, an assistant professor at Technical University Eindhoven.

Externally organized confinement

The stellarator has little to no current in the plasma. This is because the externally applied magnetic field has all the properties required to confine the plasma. So, although the vacuum vessel is still basically a toroid, the magnets that loop around the tube are not planar. Instead, they have the shape needed to generate a twisted magnetic field. "If you shape your field in a clever way then you can make it so that the drifts basically cancel out, at least for those that would leave the plasma," says Proll.

Theoretically, that is. In practice, well, we're still working on it. To give a magnetic field precisely the right shape requires extensive calculation at many different scales, and all of it must happen in a 3D space. So, computer code that simulates the plasma over the entire volume of a stellarator had to be developed, and that had to wait for computers that were powerful enough to perform the calculations. "These machines, these supercomputers of the '80s, made it possible to crank through the equations, to solve the equations simultaneously, and then it was found out, okay, the stellarator needs optimization," says Klinger.

Calling it optimization kind of undersells the problem, though. Scientists had to decide what parameters of the system needed to be optimized and in what range. To make that decision more difficult, no single computer model can encompass the vast range of physics that needs to be included. To get an accurate picture of the plasma in a stellarator, you need separate models that calculate the applied magnetic field and the plasma's fluid-like behavior, called a magnetohydrodynamic model. Then, to test the magnetic field confinement against particle drift and particle collisions, you need models that track individual particles along field lines and other models that deal with diffusion. All of these models needed to be created and then verified against experimental data before optimization was even possible.

Building a beast of burden

This wasn't easy, but it was successful. The result is the Wendelstein 7-X stellarator. The W7-X is a beautiful design to look at, but it wasn't very simple to put together. The magnets that ring the tube are divided into five identical sections, and each section has modules that consist of two parts. The parts contain five non-planar and two planar magnets, arranged in flip-symmetric fashion (so the magnets are ordered 1, 2, 3, 4, 5; 5, 4, 3, 2, 1). There are five unique non-planar magnet designs, and each had to be successfully replicated 10 times. The planar magnets also had to meet the same strict set of specifications.

Unlike planar magnets, non-planar magnets experience a force that tries to flatten them, so the winding structure for the superconducting wire had to be strong enough to withstand that force. And getting the magnets to reach specifications in terms of things like retaining their helium coolant, maintaining electrical isolation in the face of high voltages, and surviving quenches (a quench is when superconductivity is suddenly lost) was a challenge. Building the magnets was a story six years in the making.

"We did a test of each single coil; this was foreseen from the beginning. So each single coil, each single coil of the seventy coils, went through very thorough tests in Paris," Klinger tells me. "There was one coil that has seen Paris three times. We call that coil Apollo 13." Apart from confirming that the hardware could produce a magnetic field with a very precise state, these tests were necessary to make the magnets less susceptible to cascaded failure, the sort of failure that marred the start-up of the LHC.

Still, that was a minor cause of stress compared to the mechanical engineering problems. Unlike the tokamak, there is no real symmetry, so the whole structure had to be modeled. The engineers used a finite element model, a standard engineering tool to help design structures, to calculate where the stress induced by the magnets would be and to design the support structure to cope with it. They got it wrong. And it was only discovered after the magnets were in production. "We had to change the entire support concept, a very fundamental concept. And the most important change was that we made the magnet system less stiff," Klinger explains. In the end, the entire structure was redesigned to allow the magnets to move by 5 cm. But they all move in concert, so the relative position of all the magnets stays the same, and the magnetic field is not altered by that.

Start me up

This may not sound like much of an achievement, but the vessel that contains the magnets and the inner chamber that holds the plasma is some 16 meters across, and the magnets need to maintain a relative orientation that is accurate to within about 100 micrometers. This has to be the goal despite the fact that, when the current is turned on, the magnets shift by about 4 cm.

In late 2016, the magnets were switched on, and the field shape was measured. I suspect there were some quiet sighs of relief and perhaps a beer or two consumed: the measured field shape agreed with computer models to within one part in 100,000. The agreement was so good that, even though not all of the parts were ready, they decided to do some early plasma tests.

Normally, the vessel wall has to be lined with a material that absorbs the heat from the plasma. Since the energy in the plasma is rather high, the material either has to be sacrificial (meaning it's allowed to burn off), a good heat conductor, or both. But the carbon panels designed for this function had not yet been installed. So the W7-X could only be ramped up to the limit of the copper-chrome-zirconium mounts for the carbon tiles. Even with that, Klinger claims that they were able to run at 2 to 4 MJ and an ion temperature of 2,000 eV, which is about what they would expect given the wall limitations.

Currently, the vessel is open, and engineers are placing 8,000 carbon tiles. That will allow researchers to run at energies up to 80 MJ, which should demonstrate two very important points. First, they hope to confirm the model predictions for plasma confinement: do they get the plasma density and ion temperature that is predicted? And since magnetic confinement fusion systems all follow the same scaling, they can compare their results to those from tokamaks. Given that the Wendelstein 7-X has a larger volume than the ASDEX upgrade but is smaller than JET, Klinger expects performance to fall somewhere in between the two. The second major goal is to show that the stellarator is, indeed, stable enough to run continuously. However, that cannot be demonstrated at full power yet.
The graphite tiles are not water-cooled, so the W7-X can only be run at full power for 10 seconds. The models predict that the stellarator should settle down to continuous operation, but the settling takes longer than 10 seconds. To test this, the researchers will have to run at lower power, which should still be good enough to demonstrate a certain amount of stability (or not; that's the point of doing experiments).

The heat is on

A major part of the experimental plan is to see if the researchers can successfully incorporate a diverter into their design. What is a diverter? Essentially, all magnetic confinement schemes leak. The plasma is going to hit the wall, and the plasma is energetic: it is going to heat the wall material, possibly blow holes in it, and definitely blast contaminants from the wall into the plasma. The magnetic field, however, can be constructed such that there are specific locations at which the plasma escapes. At these locations, you can devote considerable attention to little details like having a material that doesn't easily ablate, having good heat conductivity and cooling systems, and pumping away all the material that does ablate. These are called diverters.

Tokamak experiments have incorporated diverters for years, but stellarators have not. The critical point is not just showing that a diverter can be engineered but that the plasma escapes in a predictable way at the diverter. That is, it is not just the thermal and mechanical engineering of the plasma-facing material and its mount that must work; the plasma and magnetic field models also need to be tested at operating conditions. All of them need to work to create a diverter, and, without a diverter, the stellarator will have reached the end of the line.

Assuming the diverter works, the stellarator will have one more scheduled shutdown. At this point, the water cooling will be turned on. The machine has been designed with water cooling in mind; all the pipes are installed and wind, spaghetti-like, around the vessel until they find a gap in the coil structure to escape. But water cooling and vacuum systems are not always happy companions, so rather than switch it all on now and spend the next year chasing leaks, the researchers have delayed that step. At the moment, while many components are water-cooled, the diverter sections are not. Once the diverters are water-cooled, the W7-X will be able to operate at 10 MW for half an hour. This is enough time to verify stable operation at the maximum achievable temperatures and densities. The data will be fusion relevant as well.

Living in an imperfect world

The stellarator of our dreams lives in computer code. We're hoping to realize it with the W7-X, but our code base is incomplete; the model used to design the stellarator does not contain all the important physics. According to Proll, a major missing component is an accurate model of turbulence. Turbulence can cause particles to leak from the plasma. Full 3D models cannot deal with turbulence; instead, the model that Proll uses follows particles along a single flux line around the entire stellarator. These computations take on the order of 5 million CPU hours per data point.

For example, turbulence will result in the plasma density fluctuating in space and time. But do these fluctuations grow under all conditions? Or do they reach some point where the properties of the rest of the plasma put a cap on fluctuation growth?
This point, where the turbulence stops being strongly influenced by the rest of the plasma, is called saturation. "There are a few really cool phenomena that we're studying at the moment, and one of the main things is what happens at saturation. So, [they are] what really kind of stops the instabilities from the turbulence... from growing," says Proll.

For now, optimization was performed with the influence of turbulence included as an extra form of diffusion. But it's possible that this isn't the right approach or that the effective speed of the diffusion is quite different from that in the model. While a stellarator is stable compared to a tokamak, turbulence is probably going to be something that should be minimized through dynamic control, meaning we track the plasma and intervene to control turbulence as it develops. But the models that Proll uses are too slow for this. Instead, the goal is to use these CPU-intensive calculations to come up with empirical models that only require a few input parameters, like the magnetic field geometry and current plasma parameters, to predict the onset of turbulence.

One of the questions I had for Klinger was how a PhD program works on a project like the stellarator. In my experience, students are basically given some lab space, a bunch of hardware, and some general ideas of what might be interesting to do with said hardware. Then your academic advisor awaits results, while occasionally breaking things in the lab under the guise of helping you out. Clearly this is not the approach at a facility like the W7-X, which has taken years of planning. Creativity is key to science, so how do you keep creativity in such a strictly planned experiment, I wondered?

The staff at MPIPP have a clever system to keep student creativity in the research mix. Students, along with their advisors, submit proposals for experiments on the W7-X. Then, if the proposal is accepted, at their allotted time the student takes command in the control room. The magnets are configured per their specs, and the instrumentation teams take data with the student. (I'd imagine that this is quite an experience, and I admit to some jealousy.) The staff at the MPIPP are also very careful to take on their students at a time when the W7-X will be up and running. Thus, with assembly and the initial data run over, the first wave of students has obtained all its experimental data and is completing its studies. The W7-X will start up again in 2018, so the next wave of students will start soon, allowing enough time for them to be prepared for operations. It is all very carefully organized and quite foreign to my experience in research, but in a good way.

Part of the W7-X's funding comes from a European consortium called EUROfusion. But if you look on the EUROfusion website, there is barely a mention of the stellarator. Instead, all the glory goes to ITER. The reasons for the focus are some key design differences between ITER and the W7-X. According to Professor Tony Donné, program manager at EUROfusion, the plan is for ITER to do a whole lot of interesting fusion physics that will test many of the physical principles upon which a commercial reactor could be based. That doesn't just include the plasma confinement and control systems but also diverters to absorb heat and walls that breed tritium fuel from lithium so that deuterium-tritium fusion can be used. The W7-X, on the other hand, cannot handle tritium. Plasma parameters can be explored, but nuclear fusion is not in the plan.
The step after ITER for EUROfusion is DEMO, a demonstration power plant. This, according to Donné, will also be a tokamak design. The design for DEMO is not set in stone yet and could conceivably be a stellarator. However, a lot of the initial design work and studies for DEMO are underway, so a decision to change to a stellarator would have to be made soon, probably before the W7-X experimental program is complete. Accordingly, Professor Marco de Baar, of the Dutch institute DIFFER, suggests that stellarators, should the W7-X deliver, could end up being a second-generation fusion power plant.

It would be second generation because there are numerous issues yet to solve. You have to have walls that breed tritium, which no one knows how to do in a stellarator. And, periodically, the interior has to be cleaned, which involves robotic handling. "The design and construction of such a machine is very difficult; the maintenance of such a machine is even more difficult. In ITER already we have to think of robotic access using haptic master-slave systems, but at least you have some sort of symmetry, and you have some ports that you can use to bring your robotic systems into the vessel," explains de Baar. These devices provide multiple types of feedback to their operators to help them navigate the interior of ITER. In a stellarator, access ports are restricted by the strange magnetic field, and there is no symmetry. Yet Klinger is hopeful the robots will be ready. "We really have to count on the advances in engineering, in robotics, and robots getting better and better and better. Just compare the robots nowadays with the robots 20 years ago; I'm pretty relaxed about that."

No matter what form a fusion power generator takes, fusion as a viable power source is not a certainty. Unlike most green energy solutions, the initial investment is huge: the magnets are big, expensive, and only part of the cost. Companies would also be taking on an enormous liability should they choose to construct them. And, frankly, the time scale at which fusion generators could be attached to the grid is not going to help us much. As de Baar puts it, "Let me make it very clear: fusion is, if you simply look at the carbon dioxide goals we have, fusion would be too late to bring those carbon dioxide emissions down at the rate that we need. If renewables do the job, fusion could become part of a network of dispatchable power generation units. However, if renewables don't do the job, fusion will be too late to prevent serious damage. In that scenario, we would find ourselves in a bad situation for a period of time that extends beyond when the first fusion reactors come on line."

So the stellarator remains an exciting physical and scientific achievement the world should be anxiously awaiting. It represents hope for the future, just not the hope most people would assign to fusion.

June 22nd 2018

As Fukushima residents return, some see hope in nuclear tourism

On a cold day in February, Takuto Okamoto guided his first tour group to a sight few outsiders had witnessed in person: the construction cranes looming over Japan's Fukushima Daiichi nuclear plant. Seven years after a deadly tsunami ripped through the Tokyo Electric Power (9501.T) plant, Okamoto and other tour organisers are bringing curious sightseers to the region as residents who fled the nuclear catastrophe trickle back. Many returnees hope tourism will help resuscitate their towns and ease radiation fears.
But some worry about drawing a line under a disaster whose impact will be felt far into the future. The cleanup, including the removal of melted uranium fuel, may take four decades and cost several billion U.S. dollars a year. "The disaster happened and the issue now is how people rebuild their lives," Okamoto said after his group stopped in Tomioka, 10 kilometres (6.21 miles) south of the nuclear plant. He wants to bring groups twice a week, compared with only twice a month now.

Electronic signs on the highway to Tomioka showed radiation around 100 times normal background levels as Okamoto's passengers peered out tour bus windows at the cranes poking above Fukushima Daiichi. "For me, it's more for bragging rights, to be perfectly honest," said Louie Ching, 33, a Filipino programmer. Ching, two other Filipinos and a Japanese man who visited Chernobyl last year each paid 23,000 yen ($208.75) for a day trip from Tokyo.

The group had earlier wandered around Namie, a town 4 kilometres north of the plant to which residents began returning last year after authorities lifted restrictions. So far, only about 700 of 21,000 people are back, a ratio similar to that of other ghost towns near the nuclear site. Former residents Mitsuru Watanabe, 80, and his wife Rumeko, 79, have no plans to return. They were only in town to clear out their shuttered restaurant before it is demolished, and they chatted with tourists while they worked. "We used to pull in around 100 million yen a year," Mitsuru said as he invited the tourists inside. A 2011 calendar hung on the wall, and unfilled orders from the evacuation day remained on a whiteboard in the kitchen. "We want people to come. They can go home and tell other people about us," Mitsuru said among the dusty tables.

Okamoto's group later visited the nearby coastline, where the tsunami killed hundreds of people. Abandoned rice paddies, a few derelict houses that withstood the wave and the gutted Ukedo elementary school are all that remain. It's here, behind a new sea wall at the edge of the restricted radiation zone, that Fukushima Prefecture plans to build a memorial park and 5,200-square-metre (56,000-square-foot) archive centre with video displays and exhibits about the quake, tsunami and nuclear calamity. "It will be a starting point for visitors," Kazuhiro Ono, the prefecture's deputy director for tourism, said of the centre. The Japan Tourism Agency will fund the project, Ono added.

Ono wants tourists to come to Fukushima, particularly foreigners, who have so far steered clear. Overseas visitors spent more than 70 million days in Japan last year, triple the number in 2011. About 94,000 of those were in Fukushima. Tokyo Electric will provide material for the archive, although the final budget for the project has yet to be finalised, he said.

"Some people have suggested a barbecue area or a promenade," said Hidezo Sato, a former seed merchant in Namie who leads a residents' group. A "1" sticker on the radiation metre around his neck identified him as the first person to return to the town. "If people come to brag about getting close to the plant, that can't be helped, but at least they'll come," Sato said. The archive will help ease radiation fears, he added.

Standing outside a farmhouse as workmen refurbished it so her family could return, Mayumi Matsumoto, 54, said she was uneasy about the park and archive. "We haven't gotten to the bottom of what happened at the plant, and now is not the time," she said.
Matsumoto had come back for a day to host a rice-planting event for about 40 university students. Later they toured Namie on two buses, including a stop at scaffolding near the planned memorial park site to view Fukushima Daiichi's cranes. Matsumoto described her feelings toward Tokyo Electric as "complicated," because it is responsible for the disaster but also helped her family cope with its aftermath. One of her sons works for the utility and has faced abuse from angry locals, she added. "It's good that people want to come to Namie, but not if they just want to get close to the nuclear plant. I don't want it to become a spectacle," Matsumoto said.

Okamoto is not the only guide offering tours in the area, although visits of any kind remain rare. He said he hoped his clients would come away with more than a few photographs. "If people can see for themselves the damage caused by the tsunami and nuclear plant, they will understand that we need to stop it from happening again," said Okamoto, who attended university in a neighbouring prefecture. "So far, we haven't come across any opposition from the local people."

Sept 21st 2016

The names Chernobyl and Fukushima connote nuclear disaster. But do you remember Three Mile Island? Have you ever heard of Beloyarsk, Jaslovske, or Pickering? These names appear among the 15 most expensive nuclear disasters:

1. Chernobyl, Ukraine (1986): $259 billion
2. Fukushima, Japan (2011): $166 billion
3. Tsuruga, Japan (1995): $15.5 billion
4. Three Mile Island, Pennsylvania, USA (1979): $11 billion
5. Beloyarsk, USSR (1977): $3.5 billion
6. Sellafield, UK (1969): $2.5 billion
7. Athens, Alabama, USA (1985): $2.1 billion
8. Jaslovske Bohunice, Czechoslovakia (1977): $2 billion
9. Sellafield, UK (1968): $1.9 billion
10. Sellafield, UK (1971): $1.3 billion
11. Plymouth, Massachusetts, USA (1986): $1.2 billion
12. Chapelcross, UK (1967): $1.1 billion
13. Chernobyl, Ukraine (1982): $1.1 billion
14. Pickering, Canada (1983): $1 billion
15. Sellafield, UK (1973): $1 billion

A new study of 216 nuclear energy accidents and incidents crunches twice as much data as the previously best review, predicting that "the next nuclear accident may be much sooner or more severe than the public realizes." The study points to two significant issues in the current assessment of nuclear safety. First, the International Atomic Energy Agency (IAEA) serves the dual masters of overseeing the industry and promoting nuclear energy. Second, the primary tool used to assess the risk of nuclear incidents suffers from blind spots. The conflict of interest in the first issue is clear. The second issue may not be transparent to the layperson until they understand more fully how industry conducts the probabilistic safety assessments (PSAs) which are the source of the standard predictions of the risk of nuclear accidents.

A PSA involves identifying every single possible thing that could go wrong and assigning a probability that reflects the risk it will go wrong. Nuclear plants are then built with layers of interlocking safety mechanisms that should reduce the probability to near zero that all of the failures necessary to result in a significant event could ever happen at the same time. It is a comprehensive and thorough method to help safety engineers reduce risks to levels that are acceptable relative to the benefits of the technology. It has certainly helped safety engineering make great strides in the effort towards 'zero accident' goals.
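As a toy illustration of the PSA arithmetic described above (all probabilities invented): when a damaging event requires several independent safety layers to fail on the same demand, the layer probabilities multiply, which is how interlocking mechanisms push the combined risk toward zero.

```python
# Toy fault-tree arithmetic behind a PSA (all numbers invented).
# A significant event here requires every independent safety layer
# to fail at once, so the per-demand failure probabilities multiply.
layer_failure_probs = [1e-2, 1e-3, 5e-3]  # e.g. sensor, valve, backup power

p_all_fail = 1.0
for p in layer_failure_probs:
    p_all_fail *= p

print(f"combined failure probability per demand: {p_all_fail:.1e}")  # 5.0e-08
```

The catch, as the next passage explains, is that this product is only as trustworthy as the list of failure modes and the independence assumptions behind it.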
However, the scientifically calculated risk probabilities from a PSA are only as good as the engineers' ability to identify every single thing that could go wrong. Every time some new thing goes wrong that wasn't thought of before, it is quickly integrated into the PSA, the assessment is re-calculated, and safety measures are reinforced to return the risks to 'safe' levels. Industry also keeps close track of everything that goes wrong, even when no accident occurs thanks to the engineered-in layers of safety, which helps to fine-tune PSAs without the need for actual disasters. But every so often, a Chernobyl or Fukushima proves that our limitations outrun our technology for controlling the risks.

The new study, by researchers at the University of Sussex (England) and ETH Zurich (Switzerland), takes a different approach by submitting the data on events that have disrupted the nuclear industry to a statistical analysis. The report tracks the evolution of nuclear safety engineering, which has advanced with the benefit of 20:20 hindsight in the wake of each nuclear disaster. It finds that nuclear accidents have substantially decreased in frequency, especially due to the success of safety engineering in suppressing "moderate-to-large" incidents. But even with these optimistic trends, the report predicts that it is more likely than not that disasters at the extreme end of the IAEA scale will occur once or twice per century, and that accidents on the scale of Three Mile Island have over a 50% probability of occurring every 10-20 years.

This may not spell the end of the nuclear industry, though. One co-author of the study, Professor Didier Sornette, emphasizes: "While our studies seem damning of the nuclear industry, other considerations and potential for improvement may actually make nuclear energy attractive in the future." The papers are published in Energy Research and Social Science ("Reassessing the safety of nuclear power") and in the journal Risk Analysis ("Of Disasters and Dragon Kings: A Statistical Analysis of Nuclear Power Incidents and Accidents"). For another perspective published on TreeHugger about nuclear power, see: The debate over nuclear power: An engineer looks at the issues.

The most serious nuclear accidents are nuclear power station explosions; such accidents are rare, but they are very serious. The accident at Chernobyl is very well documented: it caused many deaths and laid waste to thousands of square miles of contaminated land, including whole cities, and wherever there was fallout the land, the animals and the produce were unusable. Radiological contamination forced the hasty abandonment of an amusement park near Chernobyl, and huge areas of the country were laid waste with no possibility of human habitation for many hundreds of years. There are, of course, still wild animals living in the vicinity, and these are monitored by the authorities to determine the effects of overexposure to radiation.

There have also been cases of accidental contamination in which an x-ray machine was taken out of service, scrapped and sent for recycling without the radiation source first being removed. The recycled steel was then used to make a great many everyday objects; a widespread investigation revealed radiation coming from such things as restaurant furniture and white goods, refrigerators and washing machines. Fortunately, in modern times the authorities are much more careful.
The other very serious accident was at the power station at Fukushima in Japan. This was caused by an earthquake just off the coast; the resulting tsunami flooded the power generating facility and crippled the electrical supply to the cooling water pumps. This accident is also well known and well documented, and sadly the leaking contamination has yet to be contained.

These accidents have led to a general distrust of nuclear power stations. Some countries have decided to phase them out altogether, whereas in other parts of the world there are plans to build dozens more. The designs are being improved constantly, and the regulation of such facilities, covering both construction and operation, is much tighter now than it was previously. Japan has 58 operating nuclear power stations and plans underway for a few more.

With the realisation that the demand for electricity is going to steadily increase, it is becoming more and more obvious that the only way we are going to survive an energy crisis is to build bigger and better nuclear power stations. Fortunately, with modern communication systems these dangerous situations can be monitored easily and warnings issued by local government, civil defense, police, local radio and television.
Seed- and fruit-feeding insects in tropical rain forests: Faunal composition and rates of attack

The article in the Journal of Biogeography represents the first intercontinental comparison of assemblages of seed- and fruit-feeding insects in tropical rainforests.

FIGURE. Plot of the average proportion of individuals of insect guilds reared per sample for each study site (BCI: Panama; KHC: Thailand; WAN: Papua New Guinea). Proportions of particular guilds across sites are all significantly different (Kruskal–Wallis tests, all with p < .05). Figures above bars indicate, for each site, the percentage of samples in which a taxon or guild was present. Note that because values are averaged across all samples, proportions are rather small.

Insects feeding on seeds and fruits represent interesting study systems, potentially able to lower the fitness of their host plants. In addition to true seed eaters, a suite of insects feed on the fleshy parts of fruits. We examined the likelihood of community convergence in whole insect assemblages attacking seeds/fruits in three tropical rain forests: Barro Colorado Island (Panama), Khao Chong (Thailand) and Wanang (Papua New Guinea). We surveyed 1,186 plant species and reared 1.1 tons of seeds/fruits, which yielded 80,600 insects representing at least 1,678 species. We assigned seeds/fruits to predation syndromes on the basis of plant traits relevant to insects, seed/fruit appearance and mesocarp thickness.

We observed large differences in insect faunal composition, species richness and guild structure between our three study sites. We hypothesize that the high species richness of insects feeding on seeds/fruits in Panama may result from a conjunction of low plant species richness and high availability of dry fruits. Insect assemblages were weakly influenced by seed predation syndromes, both at the local and regional scale, and the effect of host phylogeny also varied among sites. At the driest site (Panama), the probability of seeds of a plant species being attacked depended more on seed availability than on the measured seed traits of that plant species. However, when seeds were attacked, the plant traits shaping insect assemblages were difficult to identify and not related to seed availability. In sum, we observed only weak evidence of community convergence at the intercontinental scale among these assemblages. Our study suggests that seed eaters may be most commonly associated with dry fruits at relatively dry tropical sites where fleshy fruits may be less prevalent.

Basset, Y., Dahl, C., Ctvrtecka, R., Gripenberg, S., Lewis, O.T., Segar, S.T., Klimes, P., Barrios, H., Brown, J.W., Bunyavejchewin, S., Butcher, B.A., Cognato, A.I., Davies, S.J., Kaman, O., Knizek, M., Miller, S.E., Morse, G.E., Novotny, V., Pongpattananurak, N., Pramual, P., Quicke, D.L.J., Robbins, R.K., Sakchoowong, W., Schutze, M., Vesterinen, E.J., Wang, W.-z., Wang, Y.-y., Weiblen, G. & Wright, S.J. (2018). A cross-continental comparison of assemblages of seed- and fruit-feeding insects in tropical rainforests: faunal composition and rates of attack. Journal of Biogeography, 00:1–13. https://doi.org/10.1111/jbi.13211
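The between-site comparisons in the figure caption rely on Kruskal–Wallis tests. A minimal sketch of how such a test runs (SciPy; the per-sample guild proportions below are invented stand-ins for the BCI/KHC/WAN data):

```python
from scipy.stats import kruskal

# Invented per-sample proportions of one insect guild at three sites,
# standing in for the BCI / KHC / WAN samples behind the figure.
bci = [0.12, 0.08, 0.15, 0.10, 0.09]
khc = [0.05, 0.07, 0.04, 0.06, 0.08]
wan = [0.20, 0.18, 0.25, 0.22, 0.19]

# Non-parametric test of whether the three samples share a distribution
stat, p = kruskal(bci, khc, wan)
print(f"H = {stat:.2f}, p = {p:.4f}")  # p < .05 -> proportions differ by site
```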
Deadly Tropical Cyclone Bingiza, which crossed over northern Madagascar three days ago, has continued to affect Madagascar while moving along the island's west coast. Bingiza had weakened from a powerful category 3 tropical cyclone with sustained winds of 100 knots (~115 mph/185 km/h) to tropical-storm-force winds of about 35 knots (~40 mph/65 km/h) when the Tropical Rainfall Measuring Mission (TRMM) satellite passed almost directly overhead on February 16, 2011 at 1911 UTC (2:11 p.m. EST).

[Image caption: NASA's TRMM satellite saw moderate to heavy rainfall, falling at a rate of over 2 inches (50 mm) per hour (in red), in a small area near Bingiza's center of circulation on Feb. 16, 2011. Credit: SSAI/NASA, Hal Pierce]

TRMM data was used to create an image of Bingiza's rainfall. The analysis used TRMM's Microwave Imager (TMI) and Precipitation Radar (PR) data. At that time, Bingiza was approaching Madagascar from the Mozambique Channel with additional moderate to heavy rainfall (over 2 inches/50 mm per hour). Extremely heavy rainfall was revealed in a small area near Bingiza's center of circulation.

On February 17 at 0900 UTC (4 a.m. EST), Bingiza's maximum sustained winds were near 40 knots (46 mph/74 km/h) with higher gusts. It was about 220 nautical miles west-southwest of Antananarivo, Madagascar, near 21.0 South and 43.7 East, moving south at 7 knots (8 mph/13 km/h).

Multispectral satellite imagery showed that Bingiza still had strong bands of thunderstorms wrapping around it from the northwest into the southeast quadrant. The low-level center of circulation was partially exposed to outside winds, however, leaving the storm vulnerable to weakening.

A low- to mid-level ridge (an elongated area of high pressure) located to the northeast of Bingiza is guiding it southward; the storm is then forecast to track along the ridge and move southeastward in the next day, taking it near or over land. Some models show that the storm may meander and remain over water, while others take it inland. Whether it stays near the coast or moves inland, Bingiza is still forecast to weaken and is expected to dissipate by the weekend.

Rob Gutro | EurekAlert!
<urn:uuid:b6a926a3-07f9-4ece-a6f1-c2f65cfeccfc>
3.046875
1,142
Content Listing
Science & Tech.
49.125131
95,528,410
Why is carbon dating inaccurate after 50,000 years? The decay rate is governed by the fundamental nuclear forces, and it is effectively absolute - it has to be, since the same physics controls radioactivity and all other nuclear reactions. The radiocarbon dating method is based on the fact that radiocarbon (14C) is constantly being created in the atmosphere by the interaction of cosmic rays with atmospheric nitrogen. The resulting radiocarbon combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire 14C by eating the plants. Measuring the amount of 14C in a sample from a dead plant or animal, such as a piece of wood or a fragment of bone, provides information that can be used to calculate when the animal or plant died. This leaves out aquatic creatures, since their carbon might (for example) come from dissolved carbonate rock. It is also standard to coat fossils during their extraction and transport; acetone is sometimes used while extracting fossils, because it dissolves dirt. Research has been ongoing since the 1960s to determine what the proportion of 14C in the atmosphere has been over the past fifty thousand years. The resulting data, in the form of a calibration curve, is now used to convert a given measurement of radiocarbon in a sample into an estimate of the sample's calendar age. The older a sample is, the less 14C it contains. Because the half-life of 14C (the period of time after which half of a given sample will have decayed) is about 5,730 years, the oldest dates that can be reliably measured by this process are around 50,000 years ago, although special preparation methods occasionally permit accurate analysis of older samples.
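A short decay-law calculation (my illustration, not part of the original article) makes the 50,000-year limit concrete: after roughly nine half-lives, only about 0.2% of the original 14C remains, so the surviving signal is easily swamped by contamination and measurement noise.

```python
import math

HALF_LIFE_C14 = 5730.0  # years, as quoted in the text above

def age_from_fraction(fraction_remaining: float) -> float:
    """Radiocarbon age in years, from the fraction of 14C still present."""
    return -HALF_LIFE_C14 / math.log(2) * math.log(fraction_remaining)

def fraction_after(years: float) -> float:
    """Fraction of the original 14C left after a given number of years."""
    return 0.5 ** (years / HALF_LIFE_C14)

print(f"14C left after 50,000 years: {fraction_after(50_000):.4%}")   # ~0.24%
print(f"Age if 25% of 14C remains:   {age_from_fraction(0.25):,.0f} years")  # two half-lives
```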
<urn:uuid:ffc61b82-62e0-45e3-85f8-ef669e86e8ce>
3.375
327
Knowledge Article
Science & Tech.
31.077522
95,528,411
Scientists Undertake Field Study in Puerto Rico Michigan Technological University researchers are joining US Forest Service and US Geological Survey scientists in the only tropical rainforest in the United States, to get a handle on the impact that climate change—particularly warming—is likely to have on the tropical forests of the world. Michigan Tech graduate student Alida Mau in Puerto Rico's El Yunque National Forest (image: Michigan Technological University). The Tropical Response to Altered Climate Experiment (TRACE) research project is supported by the U.S. Forest Service with an additional three-year, $960,000 grant from the US Department of Energy. Molly Cavaleri—a tree physiologist who studies how ecosystems are responding to climate change at Michigan Tech's School of Forest Resources and Environmental Science—is thrilled to be heading the study in the El Yunque National Forest in Puerto Rico. "This is the first field experiment of its kind ever done in a tropical forest," she says. "We will be manipulating the environment, warming the leaves and branches of the canopy as well as the smaller plants on the forest floor, not just observing." Forests of all kinds help control greenhouse gases, particularly carbon dioxide, in the atmosphere because trees take in and store more CO2 than they put out. But unlike forests in temperate climates, where temperatures vary widely from season to season and trees have adapted to those changes, tropical forests grow in consistently warm climates, and no one knows how or even if they can acclimate if those climates get hotter. And they are getting hotter and will continue to do so, climate experts say. "Within 20 years, the new minimum temperatures in the tropics will be hotter than the current maximums," says Cavaleri. The Michigan Tech researcher and two other research ecologists are leading the study: Tana Wood, from the Puerto Rico Conservation Foundation and adjunct scientist with the U.S. Forest Service International Institute of Tropical Forestry, and Sasha Reed with the U.S. Geological Survey. Eoin Brodie from the Lawrence Berkeley National Laboratory will join them as a collaborator. "It's unusual for three early-career women to be spearheading a project of this size and significance," Cavaleri noted. Michigan Tech graduate student Alida Mau is already in Puerto Rico, taking measurements in the forest. Cavaleri and her colleagues will be heating the soil and small trees on the forest floor with infrared lamps and running warming cables under the leaves of the canopy of full-grown trees, then collecting data about the responses of leaves, roots, and soil. "We want to know how sensitive tropical forests are to warming, what physiological changes it will cause, particularly how it affects the trees' ability to store CO2," Cavaleri said. "If we tip them over a threshold where it's too warm, they may not be able to take up as much CO2. They may even start giving off more CO2, which could lead to more warming." Once they have warmed the trees and measured the changes, Cavaleri, Reed and Wood plan to use the data to help develop better predictive models of the effects of climate change on tropical forests, an effort funded by the USGS Powell Center. "The data will help us understand what is happening globally and what is likely to happen in the future," Cavaleri explained.
She also added, "We don't know what we are going to find, so we are not advocating any particular policy." Cavaleri said working in Puerto Rico is a pleasure, much easier than conducting research in other tropical locales. "There is electricity, no problem with permits, a Forest Service research station with dormitories," she said. "There aren't even any venomous snakes." Michigan Technological University (www.mtu.edu) is a leading public research university developing new technologies and preparing students to create the future for a prosperous and sustainable world. Michigan Tech offers more than 130 undergraduate and graduate degree programs in engineering; forest resources; computing; technology; business; economics; natural, physical and environmental sciences; arts; humanities; and social sciences. Jennifer Donovan | newswise
<urn:uuid:17f631f2-c043-4ec0-9b75-d54a62e9b5a8>
2.875
1,480
Content Listing
Science & Tech.
35.88887
95,528,457
In the 13th January print edition of the journal Current Biology, IGC researchers provide insight into an old mystery in cell biology, and offer up new clues to understanding cancer. Inês Cunha Ferreira and Mónica Bettencourt Dias, working with researchers at the universities of Cambridge, UK, and Siena, Italy, unravelled the mystery of how cells count the number of centrosomes, the structure that regulates the cell's skeleton, controls the multiplication of cells, and is often transformed in cancer. This research addresses an ancient question: how does a cell know how many centrosomes it has? It is an equally important question, since both an excess and an absence of centrosomes are associated with disease, from infertility to cancer. Each cell has, at most, two centrosomes. Whenever a cell divides, each centrosome gives rise to a single daughter centrosome, inherited by one of the daughter cells. Thus, there is strict control on progeny! By using the fruit fly, the IGC researchers identified the molecule that is responsible for this 'birth control policy' of the cell - a molecule called Slimb. In the absence of Slimb, each mother centrosome can give rise to several daughters in one go, leading to an excess of centrosomes in the cell. In recent years, Mónica's group has produced several important findings relating to centrosome control: they identified another molecule, SAK, as the trigger for the formation of centrosomes. When SAK is absent, there are no centrosomes, whereas if SAK is overproduced, the cell has too many centrosomes. These results were published in the prestigious journals Current Biology and Science, in 2005 and 2007. Now, the group has discovered the player in the next level up: Slimb mediates the destruction of SAK, and in so doing, ultimately controls the number of centrosomes in a cell. Mónica explains, 'We carried out these studies in the fruit fly, but we know that the same mechanism acts in mice and even in humans. Knowing that Slimb is altered in several cancers opens up new avenues of research into the mechanisms underlying the change in the number of centrosomes seen in many tumours'. Mónica first became interested in centrosomes and in SAK when she was an Associate Researcher at Cambridge University, UK, and has pursued this interest at the IGC, where she has been group leader of the Cell Cycle Regulation laboratory since 2006. Inês Cunha Ferreira travelled with Mónica from Cambridge, and is now in her second year of the in-house PhD programme. Two other PhD students in the lab also contributed to this research, Ana Rodrigues Martins and Inês Bento. Ana Godinho | alfa
<urn:uuid:fea09f8b-5ed9-4f2a-9b01-4ba68b2092c0>
3.25
1,174
Content Listing
Science & Tech.
39.020191
95,528,458
On a recent episode of StarTalk Radio, astrophysicist Neil deGrasse Tyson explained what happens to the energy and matter “eaten” by a black hole when it dies. Reading a question from a listener, co-host Chuck Nice asked, “Is it my understanding that a black hole will just vanish and disappear at the end of its life? If that’s so, and ‘E = mc^2’, what happens to the energy and all the particles in a vanishing black hole?” “All black holes evaporate,” Tyson answered. “Every understanding of quantum physics and relativity as advanced to us by the brain-work of Stephen Hawking — and in fact, it’s called ‘Hawking radiation’ — its evaporation [creates] an energy field so intense that matter spontaneously spawns in [it].” “In so doing, the black hole loses mass,” he continued. “It’s losing mass because it’s sending matter and energy out beyond its event horizon.” “So there’s no loss of mass or energy. ‘E = mc^2’ is still intact. What’s even more amazing is that every atom that went in — if I dropped you into a black hole — there’s an accounting of that.” “There’s a mysterious quantum accounting of you that went in,” Tyson concluded. “So if you look at the particles that evaporate out from the energy field? That tally will come out equaling the tally of atoms that went in in the first place.” “The black hole remembers what it ate.”
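The episode stays qualitative. As a numerical aside, the standard textbook Hawking evaporation lifetime (the formula and all constants below are my additions, not quoted in the transcript) shows how long "eventually evaporates" really is:

```python
import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.055e-34   # reduced Planck constant, J s
C     = 2.998e8     # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
YEAR  = 3.156e7     # seconds per year

def evaporation_time_years(mass_kg: float) -> float:
    """Hawking evaporation time (ignores the black hole absorbing CMB photons)."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

print(f"Solar-mass black hole: {evaporation_time_years(M_SUN):.2e} years")
# ~2e67 years -- vastly longer than the universe's current age (~1.4e10 years)
```

The M³ scaling means tiny black holes die quickly, while stellar-mass ones outlive the present age of the universe by dozens of orders of magnitude.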
<urn:uuid:531399ca-8ec5-4ffb-826d-b8688823ce42>
2.703125
366
Audio Transcript
Science & Tech.
59.358241
95,528,474
Astronomers using the Frederick C. Gillett Gemini Telescope on Mauna Kea have found the source of short flashes of gamma rays from outer space: a collision of two dead stars. With system and science verification complete, the Gemini bench-mounted High-Resolution Spectrograph (bHROS) is available for science programs in 2006A. A series of coordinated observations, made under ideal conditions by the world's largest collection of big telescopes, delivered surprising new insights into the ancestry and life cycles of comets. Astronomers have glimpsed dusty debris around an essentially dead star where gravity and radiation should have long ago removed any sign of dust. The discovery might provide insights into our own solar system's eventual demise several billion years from now. The Gemini Observatory released a pair of images today that capture the dynamics of two very different interactions in space. Astronomers using the 8-meter Gemini South telescope have revealed that the galaxy NGC 300 has a large, faint extended disk made of ancient stars, enlarging the known diameter of the galaxy by a factor of two or more. Gemini Observatory has obtained a preliminary spectrum of 2003 UB313, the so-called "10th planet". A relatively young star located about 300 light-years away is greatly improving our understanding of the formation of Earth-like planets. Gemini Observatory is actively looking for sub-stellar-mass companions using the existing adaptive optics (AO) system Altair on Gemini North, and building the specialized near-infrared coronagraph (NICI), with its own AO system, for Gemini South. The Gemini North telescope on Mauna Kea successfully captured the dramatic fireworks display produced by the collision of NASA's Deep Impact probe with Comet 9P/Tempel 1. Imaging by Gemini North's Michelle mid-infrared imager/spectrograph has allowed an international team of researchers to isolate and characterize the dusty remains of the supernova remnant SN 2002hh. Gemini South Flamingos-I observations of the outer Trapezium region by Lucas et al. have probed the region to very faint levels in the infrared. Using T-ReCS on Gemini South, David Ciardi and his collaborators have found that processing of dust grains around the proto-binary star Serpens SVS20 began at a surprisingly early point in the system's evolution. Recently a Canadian amateur astronomy group took advantage of a rare opportunity and used the Gemini 8-meter telescope to look more deeply into the remains of a particular stellar nursery than anyone ever has. A laser guide star is produced by a relatively low-power laser beam that shines up from a telescope into a layer of sodium gas in our upper atmosphere, creating a temporary artificial "star." A joint Chile-United Kingdom team has used the NOAO-built Phoenix near-infrared spectrometer on Gemini South to obtain high-resolution spectra (R~75,000) of the [Al VI] 3.66-micron line region in the planetary nebula NGC 6302 (the Bug Nebula). Recent spectroscopic studies of infrared light reflected from the surface of Sedna reveal that it is probably unlike Pluto and Charon, since Sedna's surface does not display evidence for a large amount of either water or methane ice. Can galaxies observed at very high redshifts (at a time when the universe was a fraction of its current age) evolve to look like today's nearby galaxies simply by growing older? The answer is no.
The combination of Gemini sensitivity and Phoenix spectral resolution has allowed a team to observe a set of objects toward several GHII regions and search for kinematic clues to the circumstellar geometry of newly forming massive stars. Using the IFU on GMOS-South, an international research team explores a spheroidal galaxy that captured a small gas-rich galaxy in a merger that led to a burst of star formation.
<urn:uuid:b2786913-182c-4d18-838b-74e11f57a876>
2.859375
817
Content Listing
Science & Tech.
28.373294
95,528,496
Working in the remote forests of Cambodia, conservationists from the Wildlife Conservation Society (WCS) have just discovered Southeast Asia's only known breeding colony of slender-billed vultures, one of the world's most threatened bird species. Found in heavily forested country just east of the Mekong River in Cambodia's Stung Treng Province, the colony also represents one of the only known slender-billed vulture nesting areas in the world, and therefore one of the last chances for recovery for the species, now listed as "Critically Endangered" by the World Conservation Union (IUCN). "We discovered the nests on top of a hill where two other vulture species were also found, one of which—the white-rumped vulture—is also 'Critically Endangered'," said Song Chansocheat, manager of the Cambodia Vulture Conservation Project, a government project supported by WCS, BirdLife International, World Wildlife Fund, the Disney Wildlife Conservation Fund, and the Royal Society for the Protection of Birds. "Amazingly, there were also a host of other globally threatened species of birds and primates. It's a very special place." Chansocheat's team immediately set up 24-hour protection measures against poaching and egg collecting, and are now working with local communities to ensure that they are involved in—and support—longer-term conservation measures. "We already have a successful WCS model working in the Northern Plains where local people benefit from conservation activities. I think we have a good chance of making it work here if we can find the support." The slender-billed vulture is one of several vulture species in Asia that have been driven to the brink of extinction across its entire range due to Diclofenac, an anti-inflammatory drug used for cattle that is highly toxic to vultures. Diclofenac has led to global population declines as high as 99 percent in slender-billed and other vulture species. Diclofenac is now being slowly phased out in South Asia, but not at a pace that assures the recovery of the vultures. Because Diclofenac is almost entirely absent from use in Cambodia, the country remains one of the main hopes for the survival of the species. However, these birds are still endangered by other threats, such as a lack of food due to the over-hunting of large-bodied mammals, loss of habitat, and sometimes direct hunting. The Cambodia Vulture Conservation Project has already been successful in helping stem the decline of Asia's vultures in Cambodia through a combination of scientific research, direct protection, food supplementation and awareness-raising. Satellite-collaring of animals has led to a greater understanding of which areas are important to the two most threatened species, while simultaneous vulture 'restaurants' across the country provide both an additional food source for the birds and a chance to undertake coordinated counts to monitor the size and structure of the population. Chansocheat remains optimistic, adding "We have the backing of local people and of the Government. If we can find financial support to extend what we know is already a successful strategy, then we should be able to conserve these species forever." John Delaney | EurekAlert!
<urn:uuid:574412d2-ff9d-47d1-b769-0d6744d954cb>
3.640625
1,269
Content Listing
Science & Tech.
35.13932
95,528,547
In the past two decades there has been considerable work on global climatic change and its effect on the ecosphere, as well as on local and global environmental changes triggered by human activities. From the tropics to the Arctic, peatlands have developed under various geological conditions, and they provide good records of global and local changes since the Late Pleistocene. The objectives of the book are to analyze topics such as geological evolution of major peatlands basins; peatlands as self sustaining ecosystems; chemical environment of peatlands: water and peat chemistry; peatlands as archives of environmental changes; influence of peatlands on atmosphere: circular complex interactions; remote sensing studies of peatlands; peatlands as a resource; peatlands degradation, restoration, plus more.
<urn:uuid:efbab313-e617-41f7-b3b0-e65eb7331b4c>
3.109375
235
Product Page
Science & Tech.
22.532209
95,528,551
What will happen to our forests if temperatures increase by 2 degrees - the UN target for average global temperature increase? And what will happen if world leaders do not manage to reach an agreement that ensures this target, and the temperature increases even more? Professionals speaking at Forest Day 3 made it clear that even an average temperature increase of 2 degrees will have major impacts on our forest ecosystems. If we go above that, the most commonly used word was "catastrophic". Genetic diversity of trees is crucial The genetic composition of the tree species growing naturally in a forest has become adjusted to the current climate through thousands of years of natural selection. Abrupt changes in climate imply that those genes are no longer optimal for the new conditions. You might think that many common species such as spruce, pine, oak and beech can grow under very different climatic conditions and that they will therefore easily adjust to changes in climate. However, Professor Erik Dahl Kjær from the Forest & Landscape Department, University of Copenhagen, demonstrated that even if a species has a large natural geographic range, seeds sourced from one area may not be viable if planted in a different area. The climatic adaptation of the local gene pool is often crucial. As temperatures increase, it can thus be expected that plants growing in a certain place will no longer be adapted to the conditions there. Forestry is not agriculture A crucial difference between agriculture and forestry was also highlighted by some speakers. Farmers can switch to different crops which are better adjusted to the new climatic conditions. If the climate is no longer good for barley, then you can switch to corn the next year. This is not the case in forestry. A tree planted today is not expected to be harvested within the next 100 or 200 years. An individual tree may be well adjusted to the current climate conditions, but as it matures the climate will change, and by the time the tree reaches harvesting age it may be growing in a completely different climate. That is, if it ever reaches rotation age. Infrastructural barriers to natural gene flow The Earth has experienced significant climate changes in the past, so you might think that species can adapt the way they have obviously done before. But current climate changes are happening over an extremely short period of time; the planet has not experienced such rapid climate change for thousands of years. Furthermore, under natural conditions the global forest cover would be more expansive, and tree species with their genetic makeup would be able to move through the forest ecosystems in response to climate changes much more easily. But due to human impact, a significant part of the forest ecosystems has been replaced by agricultural land or infrastructure. The mobility of species and genes is therefore greatly reduced compared to the conditions prevailing in the natural ecosystem. Expert advice: establish gene pockets For years, using local genetic material in forestry to ensure adaptation to local conditions has been considered good forest management practice. This approach no longer seems valid in the current situation. Since we don't know exactly how our climate will be in 100 years, it is difficult or impossible to give clear guidance on what to do. Speakers at Forest Day 3 provided the following advice: - Promote mixed stands, based on the philosophy that if one species fails, another may survive and you still have a forest.
- Increase the use of species with low harvesting ages. If trees can be harvested after, e.g., 50 years instead of 100 years, then the expected range of climate change during their lifetime will be smaller and the chances that they will survive until maturity are higher. - Introduce new genes by using seeds from areas that are likely to match a future climate. This can be done by planting/sowing in smaller patches. Genes from these 'genetic pockets' can later spread if they are more successful under new conditions. More extreme weather conditions On top of the challenges surrounding genetic adaptation, we can expect more extreme weather conditions such as storms, drought, and flooding. Extreme weather events that used to occur only once in 100 years are now more likely to happen every 10 years, and it is predicted that such events will become even more common in the future. Our forests - already under pressure due to climate change - will thus come under further stress due to extreme weather conditions. This further underlines the importance of selecting the right species and adapting forest management to future weather conditions. Need to reconsider certification rules These issues naturally lead to an important question: are the certification requirements of forest management certification systems such as FSC appropriate, taking into account our current knowledge about the likely effects of climate change? It appears that there is a clear need for adjustment. Although the FSC rules do already promote forest health and resilience in several respects, the standards also focus on the use of local genes and protection of natural ecosystems. It might be fatal if forest managers do not start to consider how to adapt silvicultural principles to future climatic conditions. Adjustment of the certification rules to require adaptive forestry is a matter of urgency. As former Norwegian Prime Minister and 'mother' of the Brundtland Report Gro Harlem Brundtland expressed it in her keynote speech on Forest Day 3: "Forests are under threat and time is of the essence".
<urn:uuid:94992d7e-398a-42b9-9b16-31d0f8f1fe76>
3.640625
1,078
Knowledge Article
Science & Tech.
32.846566
95,528,553
The Northwest CASC, EcoAdapt and Oregon State University recently partnered to assess the available science on the climate adaptation actions commonly used by resource managers in Oregon and Washington in response to sea level rise. Inland fish are found in lakes, rivers, streams, canals, reservoirs, and other landlocked waters. Inland fish are vulnerable to a range of threats, including overharvesting, pollution, and changes in water conditions as the climate changes. Fish and wildlife play crucial roles across ecosystems and in human society. High animal diversity contributes to healthy ecosystems, and many species provide important economic benefits to our communities. With the passage of the fiscal year 2018 budget, the Climate Science Centers have been renamed the Climate Adaptation Science Centers (CASCs), and the National Climate Change & Wildlife Science Center is now the National Climate Adaptation Science Center. Spring is here and, in many places across the country, trees are beginning to bud, flowers are blossoming, and the world is starting to look a little more colorful. Look a little closer, though, and you'll find that many plants are facing challenging times.
<urn:uuid:9f84c162-c822-4b7a-8937-a53301dc225f>
3.0625
228
News (Org.)
Science & Tech.
20.889006
95,528,572
Perseid Meteor Shower The Perseid meteor shower is the most popular meteor shower because it peaks on warm August nights as seen from the Northern Hemisphere. The Perseids are active from July 13 to August 26, reaching a strong maximum on August 11 - 13, depending on the year. Normal rates seen from rural locations range from 50-75 meteors per hour at maximum. In astronomy, there's nothing quite like a bright meteor streaking across the glittering canopy of a moonless night sky. The unexpected flash of light adds a dash of magic to an ordinary walk under the stars. Something even more spectacular than a meteor is a fireball, and the Perseid meteor shower produces more fireballs than any other. A fireball is a very bright meteor, at least as bright as the planets Jupiter or Venus. Fireballs can be seen on any given night as random meteoroids strike Earth's upper atmosphere; one fireball every few hours is not unusual. Fireballs become more numerous, however, when Earth is passing through the debris stream of a comet. The Perseid meteor shower comes from Comet Swift-Tuttle. Every year in early- to mid-August, Earth passes through a cloud of dust sputtered off the comet as it approaches the sun. Perseid meteoroids hitting our atmosphere at 132,000 mph produce an annual light show that is a favorite of many backyard sky watchers.
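A back-of-envelope sketch shows why even tiny grains at that entry speed make bright flashes. Only the 132,000 mph figure comes from the text; the 1-gram mass is my illustrative assumption:

```python
# Kinetic energy of a small Perseid meteoroid at the quoted entry speed.
MPH_TO_MS = 0.44704

speed_ms = 132_000 * MPH_TO_MS      # ~59 km/s, as quoted above
mass_kg  = 0.001                    # assume a 1-gram dust grain

kinetic_energy = 0.5 * mass_kg * speed_ms**2
print(f"Entry speed : {speed_ms / 1000:.1f} km/s")
print(f"Energy      : {kinetic_energy:,.0f} J "
      f"(~{kinetic_energy / 4.184e6:.2f} kg of TNT equivalent)")
```

A single gram of comet dust carries roughly the energy of half a kilogram of TNT, which it dumps into the upper atmosphere in about a second.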
<urn:uuid:4fff9ff0-05fb-4143-8b23-1a49ef9911ca>
3.40625
308
Knowledge Article
Science & Tech.
57.265533
95,528,589
Spectroscopy and Spectrometry

Spectroscopy is the study of the interaction between energy and matter as a function of wavelength. Spectrometry is a technique used for the identification of elements through analysis of a spectrum. Before we take a look at spectroscopy, let's first have a look at what light is. Light, as we know, is only a small part of the larger electromagnetic (EM) spectrum. For the purposes of this article, we will focus on the visible wavelengths only.

Way back in 40 AD, Seneca observed the light-scattering properties of glass prisms, but it wasn't until 1666, when Newton observed his own spectra, that the idea of light being made of colours became popular. In 1802 the English chemist William Hyde Wollaston observed dark lines (absorption lines) in the spectrum from a glass prism. Later, in 1814, the German physicist Joseph von Fraunhofer independently rediscovered the lines and began a systematic study and careful measurement of the wavelength of these features. In all, he mapped over 570 lines and designated the principal features with the letters A through K, and weaker lines with other letters. If you were to observe the Sun's spectrum through a prism, you may be able to see such dark lines. These are called Fraunhofer lines or absorption lines.

In 1859 Gustav Robert Kirchhoff and Robert Bunsen showed that each chemical element has a unique "signature" of emission lines, and deduced that the dark lines in the solar spectrum were caused by absorption by those elements in the upper layers of the Sun. (Some of the observed features are also caused by absorption by oxygen molecules in the Earth's atmosphere.) They did this by looking at the spectrum emitted by elements as they burn. Take hydrogen as an example: viewing burning hydrogen through a spectrometer reveals a different set of lines - emission lines. The emission lines from hydrogen match the absorption lines in the Sun's spectrum, so Kirchhoff and Bunsen deduced that the Sun's upper atmosphere must be absorbing these wavelengths due to the presence of hydrogen. Their experiment involved looking at the spectrum of the Sun as it passes through a hot gas (from the Bunsen burner) and comparing it with the spectra emitted by heating different elements. It was during the development of spectroscopy that the Bunsen burner came into being. In all, there are over 1,000 Fraunhofer lines observable in the Sun's spectrum, and because each element has its own signature, we can deduce the chemical composition of the Sun, or of any unknown object, by analysing the spectral lines.

What causes these lines? Atoms consist of protons, neutrons, and electrons. Protons are positively charged, electrons are negative, and neutrons have no charge (electrically neutral). The Danish physicist Niels Bohr devised a model of the atom which helps explain absorption and emission lines. In his model, protons and neutrons sit in the nucleus, and the electrons orbit the nucleus. In the Bohr model electrons are only allowed to orbit at certain distances from the nucleus, and the further away from the nucleus an orbit is, the more energy it requires. Each of these "distances" is called an energy level. Electrons can move between energy levels, but doing so requires an exchange of energy. When we discuss the energy of a photon we can also talk about its wavelength, since the two are related.
The energy required is determined by the energy difference between the two levels and is different for every energy level and every element. Combining elements into molecules also changes the energy requirements. The energy (E) of a photon (in joules) is given by the formula:

E = hf (Equation 29 - Energy of a Photon)

where h is the Planck constant (6.626 × 10⁻³⁴ joule-seconds) and the frequency (f) is a function of wavelength (λ). Frequency is given by the formula below:

f = c / λ (Equation 30 - Frequency of Light)

where c is the speed of light (3 × 10⁸ m s⁻¹), λ is the wavelength in metres, and f is the frequency in hertz. For an electron to move to a higher energy level it must gain energy. One way is to absorb a photon having the right amount of energy. When the electron absorbs the photon, the corresponding wavelength appears to be missing from the spectrum because it has been absorbed. When an electron moves to a lower energy level, it releases the same amount of energy, and this causes an emission line. Energy levels are generally denoted n, with the ground state being n = 1; the Balmer series consists of transitions that end at n = 2. A jump from n = 2 to n = 3 requires an absorption of energy, while moving from n = 3 to n = 2 releases it. Going back to our hydrogen example: when hydrogen gains energy from a photon in the Sun, an electron makes the jump from n = 2 to n = 3 and an absorption line is formed - in this case light of 656.3 nm (red). When we heat hydrogen in a burner we excite the electron with energy, which it then releases again. As the electron returns to n = 2 it emits the same amount of energy, and we see an emission line at 656.3 nm. Electrons can jump from n = 2 to n = 3, or to n = 4, 5 and so on. The wavelengths of the transitions down to n = 2 are summarised in the table below for hydrogen. This is also known as the Balmer series.

| Transition (n) | 3→2 | 4→2 | 5→2 | 6→2 | 7→2 | 8→2 | 9→2 | ∞→2 |
| Wavelength (nm) | 656.3 | 486.1 | 434.0 | 410.2 | 397.0 | 388.9 | 383.5 | 364.6 |

Each element has its own unique energy levels, and when an elemental atom is combined in a molecule the energy levels again change. Because of this, we can use spectroscopy to identify almost any element or compound.
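The Balmer wavelengths in the table follow directly from the Rydberg formula, which packages the energy-level differences described above. A minimal sketch (the Rydberg constant and the formula are standard physics, not taken from this article):

```python
import math  # not strictly needed here, kept for clarity if you extend this

H = 6.626e-34     # Planck constant, J s (Equation 29)
C = 2.998e8       # speed of light, m/s (Equation 30)
R_H = 1.0968e7    # Rydberg constant for hydrogen, 1/m

def balmer_wavelength_nm(n: int) -> float:
    """Wavelength of the hydrogen transition n -> 2 (Rydberg formula)."""
    inv_lambda = R_H * (1 / 2**2 - 1 / n**2)
    return 1e9 / inv_lambda

def photon_energy_joules(wavelength_nm: float) -> float:
    """E = hf with f = c / lambda, i.e. Equations 29 and 30 combined."""
    return H * C / (wavelength_nm * 1e-9)

for n in range(3, 8):
    lam = balmer_wavelength_nm(n)
    print(f"{n} -> 2 : {lam:6.1f} nm, {photon_energy_joules(lam):.3e} J")
# The n = 3 -> 2 line comes out near 656 nm: the red H-alpha line in the text.
```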
<urn:uuid:2af634e4-79bd-4f50-8ec2-453b95cba625>
3.5
1,320
Knowledge Article
Science & Tech.
52.038388
95,528,590
Shulamit Michaeli and colleagues describe a pathway in T. brucei parasites that they named SLS (SL-RNA silencing). Triggering this pathway shuts down the synthesis of a crucial RNA molecule, which halts the production of messenger RNAs and leads to the parasite's death. Inducing SLS could therefore be a novel way to eradicate parasites and prevent sleeping sickness - trypanosomiasis. The researchers also believe this could have implications for related parasites and diseases, such as Leishmania and leishmaniasis and Trypanosoma cruzi and Chagas disease. Sleeping sickness affects humans and livestock, and is endemic in sub-Saharan Africa where it is estimated to affect as many as 70,000 people. Leishmaniasis is estimated to affect millions of individuals throughout the world, and can lead to skin lesions, tissue damage, fever, blindness and death. Chagas disease affects 16-18 million people across the Americas, and can cause intestinal complications, neurological disorders, heart damage and death. Although drugs are available to treat these diseases, their use is hampered by toxicity and undesirable side effects, difficulties in administering treatment, an increase in drug resistance, and high costs.
<urn:uuid:f5d049c6-544c-45c9-936f-00293bac201d>
3.4375
892
Content Listing
Science & Tech.
37.147477
95,528,596
Programming languages are usually divided into four generations. The first generation comprises the machine languages, where programmers have to write each binary-coded machine instruction. Binary coding is voluminous and laborious to write, and the resulting programs are difficult to read and maintain, and highly machine dependent.
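To make the contrast concrete, here is an illustrative sketch (the byte sequence is a hypothetical x86-64 example of my own, shown for flavour rather than for execution):

```python
# First generation: every instruction hand-coded in binary. The bytes below
# encode an illustrative x86-64 sequence (mov eax, 2; add eax, 3; ret).
machine_code = bytes([
    0xB8, 0x02, 0x00, 0x00, 0x00,   # mov eax, 2
    0x83, 0xC0, 0x03,               # add eax, 3
    0xC3,                           # ret
])
print(" ".join(f"{b:08b}" for b in machine_code))  # what a 1st-gen programmer wrote

# Later generations: the same computation as a single readable expression.
print(2 + 3)
```

Nine hand-written bytes versus one readable expression is the essence of the progression between generations: each step trades machine-level control for brevity, readability, and portability.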
<urn:uuid:55ad2f5f-c9e1-4100-a686-4e7195013097>
3.109375
113
Truncated
Software Dev.
23.533333
95,528,606
The scientists merged data collected underwater by UCSB divers with satellite images of giant kelp canopies taken by the Landsat 5 Thematic Mapper. The findings are published in the feature article of the May 16 issue of Marine Ecology Progress Series. In this marriage of marine ecology and satellite mapping, the team of UCSB scientists tracked the dynamics of giant kelp, the world's largest alga, throughout the entire Santa Barbara Channel at approximately six-week intervals over a period of 25 years, from 1984 through 2009. David Siegel, co-author, professor of geography and co-director of UCSB's Earth Research Institute, noted that having 25 years of imagery from the same satellite is unprecedented. "I've been heavily involved in the satellite game, and a satellite mission that goes on for more than 10 years is rare. One that continues for more than 25 years is a miracle," said Siegel. Landsat 5 was originally planned to be in use for only three years. Giant kelp is particularly sensitive to changes in climate that alter wave and nutrient conditions. The scientists found that the dynamics of giant kelp growing in exposed areas of the Santa Barbara Channel were largely controlled by the occurrence of large wave events, while kelp growing in protected areas was most limited by periods of low nutrient levels. Images from the Landsat 5 satellite provided the research team with a new "window" into how giant kelp changes through time. The satellite was built in Santa Barbara County at what was then called the Santa Barbara Research Center and launched from Vandenberg Air Force Base. It was designed to cover the globe every 16 days and has collected millions of images. Until recently these images were relatively expensive, and their high cost limited their use in scientific research. However, in 2009, the entire Landsat imagery library was made available to the public for the first time at no charge. "In the past, it was not feasible to make these long time series, because each scene cost over $500," said Kyle C. Cavanaugh, first author and UCSB graduate student in marine science. "In the past, you were lucky to get a handful of images. Once these data were released for free, all of a sudden we could get hundreds and hundreds of pictures through time." Giant kelp grows to lengths of over 100 feet and can grow up to 18 inches per day. Plants consist of bundles of ropelike fronds that extend from the bottom to the sea surface. Fronds live for four to six months, while individual plants live on average for two to three years. According to the article, "Giant kelp forms a dense floating canopy at the sea surface that is distinctive when viewed from above. …Water absorbs almost all incoming near-infrared energy, so kelp canopy is easily differentiated using its near-infrared reflectance signal." Cavanaugh explained that, thanks to the satellite images, his team was able to see for the first time how the biomass of giant kelp fluctuates within and among years at a regional level. "It varies an enormous amount," said Cavanaugh. "We know from scuba diver observations that individual kelp plants are fast-growing and short-lived, but these new data show the patterns of variability that are also present within and among years at much larger spatial scales. Entire forests can be wiped out in days, but then recover in a matter of months."
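The near-infrared contrast described in the quoted passage is what makes canopy mapping tractable. A toy sketch of the idea (the reflectance values and threshold are invented for illustration; the study's actual classification was calibrated against diver data):

```python
import numpy as np

# Toy Landsat-style scene: near-infrared (NIR) reflectance per pixel. Water
# absorbs almost all NIR, so open water sits near zero, while floating kelp
# canopy reflects strongly.
nir = np.array([
    [0.02, 0.03, 0.31, 0.28],
    [0.01, 0.04, 0.27, 0.02],
    [0.02, 0.02, 0.03, 0.01],
])

KELP_THRESHOLD = 0.15                 # assumed reflectance cutoff, not the study's
kelp_mask = nir > KELP_THRESHOLD      # True where canopy is detected

print(kelp_mask.astype(int))
print(f"Canopy pixels: {kelp_mask.sum()} of {kelp_mask.size}")
```

Repeating such a classification over hundreds of scenes is what turns a 25-year image archive into a biomass time series.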
Satellite data were augmented by information collected by the Santa Barbara Coastal Long Term Ecological Research Project (SBC LTER), which is based at UCSB and is part of the National Science Foundation's Long Term Ecological Research (LTER) Network. In 1980, the NSF established the LTER Program to support research on long-term ecological phenomena. SBC LTER became the 24th site in the LTER network in April of 2000. The SBC LTER contributed 10 years of data from giant kelp research dives to the current study. The scientists said that interdisciplinary collaboration between geographers and marine scientists is common at UCSB and is a strength of its marine science program. Daniel C. Reed, co-author and research biologist with UCSB's Marine Science Institute, is the principal investigator of SBC LTER. Reed has spent many hours as a research diver. He explained: "Kelp occurs in discrete patches. The patches are connected genetically and ecologically. Species that live in them can move from one patch to another. Having the satellite capability allows us to look at the dynamics of how these different patches are growing and expanding, and to get a better sense as to how they are connected. We can't get at that through diver plots alone. The diver plots, however, help us calibrate the satellite data, so it's really important to have both sources of information." The fourth author of the paper is Philip E. Dennison. He received his Ph.D. in geography at UCSB and is now an associate professor in the Department of Geography at the University of Utah. The research team received funding from NASA and the National Science Foundation. Gail Gallessich | EurekAlert!
<urn:uuid:4988a178-123f-4063-898b-e88de5651b03>
3.671875
1,713
Content Listing
Science & Tech.
45.764394
95,528,608
Scores of endangered turtles killed in Egypt Published online 22 November 2012 Decapitated and battered turtles washed up on the shores of a lake in northern Egypt have environmental groups and governmental teams looking for answers. Over 80 sea turtles, mostly endangered green sea turtles or loggerheads, washed up on the shore of Lake Bardawil in October. Some of the turtles were found decapitated or had their heads crushed by blunt objects. Others were found sick with suspected poisoning. "We realized that the 84 dead turtles were counted in a small area, not even in the entire lake. The number of dead turtles might be well higher than that," says Noor Noor, executive director of Nature Conservation Egypt, an environmental non-governmental organization and one of several independent groups that have visited the lake to find out more about the reported deaths. "Some of the fishermen claimed that other fishermen injected fish eaten by turtles with poison to target the loggerheads. We are still waiting for results," says Noor. The team studied the area where the turtles were found and met with the local fishermen and the government officials responsible for maintaining the area, part of which belongs to a natural protectorate in north Sinai. Two sick turtles were found and sent to two laboratories, but died before tests could be conducted. The researchers need to capture a live turtle to determine if it was indeed poisoned. "We are trying to find another live sample now to test thoroughly and determine if there are traces of poison in it," says Magdy Elwani, an oceanographer at Suez Canal University who has visited the lake. The loggerhead sea turtles, classified as endangered by the International Union for the Conservation of Nature (IUCN), have rarely been sighted in the lake. However, recent changes in the biodiversity of the saline lake have led to growth in their numbers. "Some of the locals have started throwing used tyres in the lake. This has created an artificial reef which increased the populations of shrimps and crabs, the favourite foods of the loggerhead marine turtles," says Elwani. The turtles are known to feed on small fish as well, bringing them into competition with local fishermen. And the turtles can rip fishing nets to shreds if they become entangled in them. Ten metres of the nets used by local fishermen can cost up to 1,000 EGP (~US$165). "We need to keep in mind that the fishermen are not evil people, they are underprivileged and going through economic hardships and the turtles have very little value to them," stresses Noor. "Only a minority of them admitted to actually killing turtles. To them, these are considered intruders." "[The fishermen] are unaware of endangered species and the laws governing them," says Elwani. He contends the government has not been doing enough to protect the turtles. "We have laws that prevent the killing of any endangered species but they are not being enforced, especially so after the revolution. We do not blame the fishermen, but we would like to take initiatives to raise their awareness. Everything is very costly, however, and we have no budget to work with," he says. Egypt's Ministry of State for Environmental Affairs sent a team to determine the cause of death, but has yet to report its findings.
<urn:uuid:eb4bf90e-f6a6-48b6-b1f2-dc15a68137e8>
2.828125
675
Truncated
Science & Tech.
41.680298
95,528,615
Glaciers cracking in the presence of carbon dioxide The well-documented presence of excessive levels of carbon dioxide (CO2) in our atmosphere is causing global temperatures to rise and glaciers and ice caps to melt. New research, published today, 11 October, in IOP Publishing's Journal of Physics D: Applied Physics, has shown that CO2 molecules may be having a more direct impact on the ice that covers our planet. Researchers from the Massachusetts Institute of Technology have shown that the material strength and fracture toughness of ice are decreased significantly under increasing concentrations of CO2 molecules, making ice caps and glaciers more vulnerable to cracking and splitting into pieces, as was seen recently when a huge crack in the Pine Island Glacier in Antarctica spawned an iceberg the size of Berlin. Ice caps and glaciers cover seven per cent of the Earth (more than Europe and North America combined) and are responsible for reflecting 80–90 per cent of the Sun's light rays that enter our atmosphere, helping to maintain the Earth's temperature. They are also a natural carbon sink, capturing a large amount of CO2. "If ice caps and glaciers were to continue to crack and break into pieces, their surface area that is exposed to air would be significantly increased, which could lead to accelerated melting and much reduced coverage area on the Earth. The consequences of these changes remain to be explored by the experts, but they might contribute to changes of the global climate," said lead author of the study Professor Markus Buehler. Buehler, along with his student and co-author of the paper, Zhao Qin, used a series of atomistic-level computer simulations to analyse the dynamics of molecules to investigate the role of CO2 molecules in ice fracturing, and found that CO2 exposure causes ice to break more easily. Notably, the decreased ice strength is not merely caused by material defects induced by CO2 bubbles, but rather by the fact that the strength of hydrogen bonds (the chemical bonds between water molecules in an ice crystal) is decreased under increasing concentrations of CO2. This is because the added CO2 competes with the water molecules connected in the ice crystal. It was shown that CO2 molecules first adhere to the crack boundary of ice by forming a bond with the hydrogen atoms and then migrate through the ice in a flipping motion along the crack boundary towards the crack tip. The CO2 molecules accumulate at the crack tip and constantly attack the water molecules by trying to bond to them. This leaves broken bonds behind and increases the brittleness of the ice on a macroscopic scale.
<urn:uuid:e0d23462-41d1-456d-9b7a-fb5abb344a9b>
3.4375
528
Truncated
Science & Tech.
25.008548
95,528,624
Authors: Leonid M. Martyushev A measure of time is related to the number of ways by which a human correlates the past and the future for some process. On this basis, a connection between time and entropy (information, Boltzmann-Gibbs, and thermodynamic) is established. This measure gives time such properties as universality, relativity, directionality, and non-uniformity. A number of issues of modern science related to finding laws that describe changes in nature are discussed. A special emphasis is placed on the role of evolutionary adaptation of an observer to the surrounding world. Comments: 10 Pages. [v1] 2017-06-28 10:09:31
<urn:uuid:4ca9be2c-cf4d-4739-ba17-aba8f87b8d76>
2.984375
297
Knowledge Article
Science & Tech.
39.13322
95,528,635
Over the past several weeks, we've been working on debugging a stored procedure bug for a client. Coming from a software development background, I looked at the procedure like any piece of code: how can I debug the program and have some means of knowing the values within the program while it's running? In C, I've always used GDB, and Perl has its own debugger. Also useful are print statements! Those can be the most simplistic but also most useful tools in debugging, especially in the case of a C program that's so optimized that GDB gives you the famous "value optimized out" message, preventing you from knowing what the program is really doing. Stored procedures are a whole different matter than compiled or interpreted code. They execute within the database. The line numbers in the source file you inevitably create the procedure from don't match the line numbers of the procedure as stored by the database. Furthermore, if you are a developer coming from other languages, you will find debugging stored procedures to be an exercise in misery. You can load a stored procedure into MySQL, and as long as the SQL is syntactically correct, it will not catch things such as references to variables that don't exist (you will have the joy of discovering those at run-time), and most often the error message displayed will be of little use. How, then, can you better observe the execution of a stored procedure? In some cases, you could run it with print statements and simply observe the output when the procedure is executed. But in our case, that was not a solution, as the procedure consisted mostly of DML statements and we needed something more precise. We needed something that not only allows you to display the values of variables, but also shows you when each statement was executed. There is a very useful article on Dr. Dobb's describing a concept that we expanded upon. The main idea of that article is that a temporary memory table is used during the execution of the stored procedure to insert debug messages into; being a memory table, it won't touch disk and will be as lightweight as possible. At the end of a debug session, these messages are copied from the temporary table into a permanent table with the same table definition. This gives you a means of reading the debug messages written during the execution of your stored procedure. We built upon the idea from the article and came up with these useful stored procedures that are easy to load and utilize in a stored procedure. They allow you to print any message that you want, along with the time and the connection/thread ID of the thread executing the procedure, something that was invaluable in debugging what we thought at first was a race condition. Debugging stored procedures The following debugging procedures can be used within your stored procedures. First, each will be introduced, and then a more descriptive explanation of how they work will be provided, along with source code. Note that these procedures are also available on GitHub: http://github.com/CaptTofu/Stored-procedure-debugging-routines setupProcLog() This procedure is run once at the beginning of your procedure to set up the temporary table for storing messages in, as well as to ensure that there is a permanent log table. procLog() This is the procedure you call for each debug message.
cleanup() This is the procedure you call at the end of your debugging session. The code for each of these procedures:

CREATE PROCEDURE setupProcLog()
BEGIN
    DECLARE proclog_exists int default 0;
    /* check if proclog exists. This check seems redundant, but simply
       relying on 'create table if not exists' is not enough because a
       warning is thrown which will be caught by your exception handler */
    SELECT count(*) INTO proclog_exists
        FROM information_schema.tables
        WHERE table_schema = database() AND table_name = 'proclog';
    IF proclog_exists = 0 THEN
        create table if not exists proclog(
            entrytime datetime,
            connection_id int not null default 0,
            msg varchar(512));
    END IF;
    /* the temp table is not checked in information_schema because it is
       a temp table */
    create temporary table if not exists tmp_proclog(
        entrytime timestamp,
        connection_id int not null default 0,
        msg varchar(512)) engine = memory;
END

As you can see, setupProcLog() first checks to see if the permanent logging table exists, and if not, creates it. The second create statement creates the temporary memory table for logging messages to. Note that relying on 'if not exists' alone is not sufficient, because a warning is thrown that will invoke your exception handler in the event the table already exists. Also note that a storage engine is not specified for proclog. In this case, proclog should be a non-transactional table to avoid any issues with the stored procedure using the debugging procedures within a transaction; you don't want all your debug messages rolled back!

CREATE PROCEDURE procLog(in logMsg varchar(512))
BEGIN
    declare continue handler for 1146 -- Table not found
    BEGIN
        call setupProcLog();
        insert into tmp_proclog (connection_id, msg)
            values (connection_id(), 'reset tmp table');
        insert into tmp_proclog (connection_id, msg)
            values (connection_id(), logMsg);
    END;
    insert into tmp_proclog (connection_id, msg)
        values (connection_id(), logMsg);
END

The procLog() procedure is what you use to log messages with. You can see a continue handler is set in case the table is not found: setupProcLog() is called to make sure that both the temporary and permanent log tables exist, a message is logged noting that the temporary table was reset, and then the original message is logged. In the case of no exception, the message is simply inserted into the table.

CREATE PROCEDURE cleanup(in logMsg varchar(512))
BEGIN
    call procLog(concat("cleanup() ", ifnull(logMsg, '')));
    insert into proclog select * from tmp_proclog;
    drop table tmp_proclog;
END

Finally, the cleanup() procedure copies all entries made during the session into the permanent logging table and then drops the temporary table, tmp_proclog. Dropping it allows you to run setupProcLog() again without a warning about tmp_proclog already existing; since all the data has been copied to proclog, and the temporary table will be re-created the next time setupProcLog() is called, dropping it works out well. These procedures are easy to use. Just load them into your MySQL instance:

mysql yourschema < proclog.sql

In your procedure code, you are now able to log messages! First, you will call:

call setupProcLog();

Then you can log a message:

call procLog("this is a message");

Or even with variables:

call procLog(concat("this is a message with a variable - foo = ", ifnull(foo,'NULL')));

Note above the use of ifnull(). This is because concat() returns NULL if any value in the list being concatenated is NULL. Now, for an actual procedure.
The procedure shown below is for demonstration purposes and will show as succinctly as possible the problem we encountered. The issue was essentially this: a stored procedure starts a transaction, performs a number of insert statements within a loop into tables relating to a single unique id, and then makes a final check on the primary table to see if that unique id has already been inserted; based on this, it commits or rolls back the preceding inserts. The procedure looked bullet-proof, and it seemed that there would be no way for any of those insert statements to ever be committed, but there was one problem: within the loop, there was a call to another stored procedure that also had its own call to start and commit a transaction. You cannot have nested transactions in MySQL, and this was the crux of the problem. That is what the example procedure here will attempt to show, along with how you can practically utilize the debugging procedures.

CREATE PROCEDURE proc_example ( _username varchar(32) )
BEGIN
    DECLARE status_code int;
    DECLARE counter int default 0;
    DECLARE BAIL int default 0;
    DECLARE sleep_foo int;

    /* exit handler for anything that goes wrong during execution. This
       will ensure that any DML statements issued so far are rolled back */
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        SET status_code = -1;
        ROLLBACK;
        SELECT status_code as status_code;
        call cleanup("line 65: exception handler");
    END;

    SET status_code = 0;
    CALL setupProcLog();
    CALL procLog("line 71: Entering proc_example");

    /* start the transaction - so that anything that follows will be atomic */
    START TRANSACTION;
    call procLog(concat("line 76: START TRANSACTION, status_code=",
                        ifnull(status_code, 'NULL')));

    IF status_code = 0 THEN
        /* the loop. The only thing that will cause this loop to end other
           than the counter exceeding the value of 5 is if BAIL is set to 1 */
        myloop: LOOP
            CALL procLog(concat("line 85: loop iteration #",
                                ifnull(counter, 'NULL')));

            /* leave the loop if counter exceeds the value of 5 */
            IF counter > 5 THEN
                LEAVE myloop;
            END IF;

            /* this statement is just an example of an insert that should
               NOT be committed until the end of the procedure; if the
               status_code is anything other than zero, a rollback should
               result in this statement being rolled back */
            INSERT INTO userloop_count (username, count)
                VALUES (_username, counter);

            CALL procLog("line 103: CALL someother_proc()");

            /* This call to someother_proc() will set a value for BAIL.
               This is the type of thing you want to be cognizant of in
               your stored procedures - that a procedure you call doesn't
               have its own transaction. Nested transactions are not
               supported by MySQL */
            CALL someother_proc(rand(), _username, BAIL);
            CALL procLog(concat("line 111: BAIL = ", ifnull(BAIL, 'NULL')));

            IF BAIL THEN
                SET status_code = 1;
                LEAVE myloop;
            END IF;

            SET counter = counter + 1;
        END LOOP;
    END IF;

    /* this is the do-or-die part of the procedure that will either commit
       or roll back the preceding DML statements (insert, update, delete,
       etc). If the username exists, a status_code of 2 is set, which
       results in a rollback; if not, an insert into users is made.
       If the insert fails for any reason, the EXIT handler will also
       roll back the preceding statements */
    IF (status_code = 0) THEN
        IF (SELECT user_id FROM users WHERE username = _username) IS NOT NULL THEN
            call procLog("line 135: user exists, setting status_code to 2");
            SET status_code = 2;
        ELSE
            call procLog("line 138: user does not exist, inserting");
            INSERT INTO users (username) VALUES (_username);
        END IF;
    END IF;

    select sleep(3) into sleep_foo;
    call procLog(concat("line 148 status_code = ", ifnull(status_code,'NULL')));

    /* if status_code is 0, then commit, else roll back */
    IF (status_code = 0) THEN
        COMMIT;
    ELSE
        ROLLBACK;
    END IF;

    /* call cleanup() to ensure the temp proc logging table's entries are
       copied to the proclog table */
    call cleanup("line 160: end of proc");
    SELECT status_code as status_code;
END

As you can see, the call to procLog() gives you the ability to see what line of the procedure was called. This all corresponds to the line numbers in whatever source file you use to add your stored procedure to your database. As you just saw, there was a call to the someother_proc() procedure. In this case, someother_proc() contains a transaction, which will demonstrate the problem with attempting nested transactions, something you should always be cognizant of when debugging stored procedures.

DROP PROCEDURE IF EXISTS someother_proc |
CREATE PROCEDURE someother_proc (
    _rand float,
    _username varchar(32),
    out _return_val int
)
MODIFIES SQL DATA
BEGIN
    /* this is passed back to the calling procedure and is available after
       this procedure is called */
    set _return_val = 0;

    /* this will cause grief for the test procedure that calls it, because
       it will result in an attempt at a nested transaction, which is not
       supported in MySQL. This call acts as an implicit commit, so all
       preceding DML statements in the calling procedure will be committed */
    START TRANSACTION;

    /* Arbitrary. Just to have some way to randomly set a true value that
       the calling procedure will use to test whether or not to leave the
       loop that this procedure was called from */
    IF (_rand > 0.5) THEN
        SET _return_val = 1;
    END IF;

    /* this is here to provide a means to see what random value was tested */
    INSERT INTO randlog (username, rvalue, returned)
        VALUES (_username, _rand, _return_val);
    COMMIT;
END

Now, to actually run this procedure and see what is entered into the proclog table:

mysql> call proc_example('testuser');
+-------------+
| status_code |
+-------------+
| 1 |
+-------------+
1 row in set (3.04 sec)

So, as can be seen, the status code returned is '1', meaning that the user wasn't inserted. How can this be?
Well, now that the procLog() procedure is being utilized, you can query the proclog table:

mysql> select * from proclog;
+---------------------+---------------+-------------------------------------------+
| entrytime | connection_id | msg |
+---------------------+---------------+-------------------------------------------+
| 2010-10-15 09:34:27 | 11 | line 71: Entering proc_example |
| 2010-10-15 09:34:27 | 11 | line 76: START TRANSACTION, status_code=0 |
| 2010-10-15 09:34:27 | 11 | line 85: loop iteration #0 |
| 2010-10-15 09:34:27 | 11 | line 103: CALL someother_proc() |
| 2010-10-15 09:34:27 | 11 | line 111: BAIL = 1 |
| 2010-10-15 09:34:30 | 11 | line 145 status_code = 1 |
| 2010-10-15 09:34:30 | 11 | cleanup() line 156: end of proc |
+---------------------+---------------+-------------------------------------------+

Also, randlog can be queried to see what the random value was:

mysql> select * from randlog;
+----+---------------------+----------+----------+----------+
| id | created | username | rvalue | returned |
+----+---------------------+----------+----------+----------+
| 1 | 2010-10-15 09:34:27 | testuser | 0.880542 | 1 |
+----+---------------------+----------+----------+----------+

Aha! BAIL was set to '1' because the random value was greater than .5, so the loop ended. This means that the preceding insert statements should not have been committed, right? (Well, we know there's a nested transaction, but for the sake of this example, let us forget about that momentarily.) The users table should be empty, and it is:

mysql> select * from users;
Empty set (0.01 sec)

The next table to check is userloop_count; it too should be empty:

mysql> select * from userloop_count;
+----+---------------------+-------+----------+
| id | created | count | username |
+----+---------------------+-------+----------+
| 1 | 2010-10-15 09:34:27 | 0 | testuser |
+----+---------------------+-------+----------+

Hmm, but it is not! How could this be? The status code was set to '1', so the ROLLBACK that was issued should have rolled back all the previous insert statements. What else can we look at? Binary Log The binary log is like a closed-circuit TV camera for your database, at least in terms of DML statements. Any statement that modifies your data will be found in your binary log. Not only that, you can see which thread issued the statement. The evidence is there for you to see! The following is cleaned up to make it clearer. First, the message "loop iteration #0" is inserted into proclog:

#101015 9:34:27 server id 2 end_log_pos 2733 Query thread_id=11 exec_time=0 error_code=0
insert into tmp_proclog (connection_id, msg) values (connection_id(), NAME_CONST('logMsg',_latin1'line 85: loop iteration #0'))/*!*/;

Next, the insertion into userloop_count is made:

#101015 9:34:27 server id 2 end_log_pos 2813 Query thread_id=11 exec_time=0 error_code=0
INSERT INTO userloop_count (username, count) VALUES ( NAME_CONST('_username',_latin1'testuser'), NAME_CONST('counter',0))/*!*/;

The message indicating the call to someother_proc() is inserted into proclog:

#101015 9:34:27 server id 2 end_log_pos 3291 Query thread_id=11 exec_time=0 error_code=0
insert into tmp_proclog (connection_id, msg) values (connection_id(), NAME_CONST('logMsg',_latin1'line 103: CALL someother_proc()'))/*!*/;

Next, the crux of the problem! A COMMIT is issued. How can this be? Well, because someother_proc() issues a 'START TRANSACTION' while a 'START TRANSACTION' was already issued in the calling procedure.
Nested transactions are not supported, and when you issue a 'START TRANSACTION' within a transaction, it acts as an implicit 'COMMIT':

#101015 9:34:27 server id 2 end_log_pos 3318 Xid = 809
COMMIT/*!*/;

So, now the problem is known and can be fixed accordingly (one possible fix is sketched below, after the summary). Summary This post was written to help those who are pulling their hair out debugging their stored procedures. It was also written for those who might have come from more of a development role and might have an approach that is overly complex. When debugging stored procedures, here are some tips that will help:
* binary log - look at this first when something seems awry. It is the closed-circuit TV recording of what happened with your database
* utilize these logging procedures to debug your stored procedures
* look closely at the data in the tables affected by your stored procedures
You will develop an intuition for the types of issues stored procedures present over time. You just have to think a bit differently than with regular programming.
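As promised, here is a sketch of one possible fix for the example above. This is our illustration rather than part of the original post: the inner procedure simply loses its own transaction, so its insert participates in whatever transaction the caller has open.

DROP PROCEDURE IF EXISTS someother_proc |
CREATE PROCEDURE someother_proc (
    _rand float,
    _username varchar(32),
    out _return_val int
)
MODIFIES SQL DATA
BEGIN
    set _return_val = 0;

    /* No START TRANSACTION here. The insert below now joins the
       transaction opened by proc_example(), so a ROLLBACK in the caller
       rolls this insert back as well, and no implicit COMMIT is
       triggered mid-loop. Note the trade-off: the randlog entry no
       longer survives a rollback, which may or may not be what you want. */
    IF (_rand > 0.5) THEN
        SET _return_val = 1;
    END IF;

    INSERT INTO randlog (username, rvalue, returned)
        VALUES (_username, _rand, _return_val);

    /* No COMMIT either; committing is left to the caller. */
END

If the inner procedure truly needs durable logging regardless of the caller's outcome, making randlog a non-transactional table, as was done for proclog above, is another option.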
<urn:uuid:91092b30-2da6-4330-96d9-44ba2a634d34>
2.625
4,306
Tutorial
Software Dev.
40.527938
95,528,641
In physics, quantitative models are developed on the basis of measurements. Measurements are made in standard increments, called units. Without units, a measurement is meaningless. Many quantities are specified by a magnitude (a number and the appropriate unit) and a direction in space. Such quantities are called vector quantities. Symbols that denote these vector quantities are bold letters, or normal letters with arrows drawn above. Examples of vector quantities: displacement (d): d = 10 m north; velocity (v): v = 3 m/s eastward; acceleration (a): a = 6 m/s2 west; force (F): F = 9 N up. To find the electric and magnetic fields produced by charged particles and the electric and magnetic forces acting on objects, we have to perform vector operations. Link: Vectors - Fundamentals and Operations To uniquely specify vector quantities we need a reference point and reference lines, i.e. we need a coordinate system. The most commonly used coordinate systems are rectangular, Cartesian coordinate systems. Other widely used coordinate systems are cylindrical and spherical coordinate systems. In Cartesian coordinates a vector is represented by its components along the axes of the coordinate system. Example: F = (Fx, Fy) = Fxi + Fyj = 3 N i - 4 N j. Here i and j are unit vectors. Unit vectors have magnitude 1 and no units. They are used as direction indicators. (i, j, k point in the x-, y-, and z-direction, respectively.) In polar coordinates, in two dimensions, a vector is represented by its magnitude and the angle its direction makes with the x-axis. Example: F = (F, φ) = (5 N, 306.87°) = (5 N, -53.13°). Cylindrical coordinates and spherical coordinates are two different extensions of polar coordinates to three dimensions. To add or subtract physical vectors, they have to have the same units. To find the sum of two physical vector quantities with the same units algebraically, we add the x-, y-, and z-components of the individual vectors. Let vector A have components (3, 4) and vector B have components (2, -3). Let C = A + B be the sum of the two vectors. Then the components of C are (3+2, 4+(-3)) = (5, 1). The magnitude of the vector C is C = (25 + 1)½ = 5.1, and the angle C makes with the x-axis is φ = tan-1(1/5) = 0.197 rad = 11.3°. To subtract vector B from vector A we subtract the components of vector B from the components of vector A. Vectors can be multiplied by a scalar (or number). Multiplying a vector by a scalar changes the magnitude of the vector, but leaves its direction unchanged. Examples: F = (3 N, -4 N), 3F = 3*(3 N, -4 N) = (9 N, -12 N); F = (F, φ) = (5 N, 323.13°), 3F = (15 N, 323.13°). A vector can also be multiplied by another vector. There are two different products of vectors. The scalar product or dot product of two vectors A and B is a scalar quantity (a number with units) equal to the product of the magnitudes of the two vectors and the cosine of the smallest angle between them: A∙B = ABcosθ. In terms of the Cartesian components of the vectors A and B the scalar product is written as A∙B = AxBx + AyBy + AzBz. In one dimension, the scalar product is positive if the two vectors are parallel to each other, and it is negative if the two vectors are anti-parallel to each other, i.e. if they point in opposite directions. The vector product or cross product of two vectors A and B is defined as the vector C = A × B. The magnitude of C is C = AB sinθ, where θ is the smallest angle between the directions of the vectors A and B. C is perpendicular to both A and B, i.e.
it is perpendicular to the plane that contains both A and B. The direction of C can be found by using the right-hand rule. Let the fingers of your right hand point in the direction of A. Orient the palm of your hand so that, as you curl your fingers, you can sweep them over to point in the direction of B. Your thumb points in the direction of C = A × B. If A and B are parallel or anti-parallel to each other, then C = A × B = 0, since sinθ = 0. If A and B are perpendicular to each other, then sinθ = 1 and C has its maximum possible magnitude. We can find the Cartesian components of C = A × B in terms of the components of A and B: Cx = AyBz - AzBy, Cy = AzBx - AxBz, Cz = AxBy - AyBx. If a vector can be assigned to each point in a subset of space, we have a vector field. The velocity of a fluid, for example the velocity of water flowing through a pipe or down a drain, is a vector field. The velocity field describes the motion of a fluid at every point. The length of the flow velocity vector at any point is the flow speed. Forces are vectors. A force that we are familiar with is gravity. The gravitational force is not a contact force; it acts at a distance. We introduce the concept of the gravitational field to explain this action at a distance. Massive particles attract each other. We say that massive particles produce gravitational fields and are acted on by gravitational fields. The magnitude of the gravitational field produced by a massive object at a point P is the gravitational force per unit mass it exerts on another massive object located at that point. The direction of the gravitational field is the direction of that force. The gravitational field produced by a point mass always points towards the point mass and decreases proportional to the inverse square of the distance from the point mass. Near the surface of Earth the gravitational field produced by Earth is nearly constant and has magnitude F/m = g = 9.8 m/s2. Its direction is downward. To find the total gravitational field at a point, calculate the vector sum of the gravitational fields produced by all masses that do not produce negligibly small gravitational fields at that point. All charged particles interact via the Coulomb force. The Coulomb force is not a contact force; it acts at a distance. We introduce the concept of the electric field to explain this action at a distance. We say that charged particles produce electric fields and are acted on by electric fields. The magnitude of the electric field E produced by a charged particle at a point P is the electric force per unit positive charge it exerts on another charged particle located at that point. The direction of the electric field is the direction of that force on a positive charge. The actual force on a particle with charge q is given by F = qE. It points in the opposite direction of the electric field E for a negative charge. The electric field produced by a positive point charge always points away from the point charge, and the electric field produced by a negative point charge always points towards the point charge. The electric field decreases proportional to the inverse square of the distance from the point charge. To find the total electric field at a point, calculate the vector sum of the electric fields produced by all charges that do not produce negligibly small electric fields at that point. One way to graphically represent a vector field in two dimensions is by drawing arrows on a grid. Set up a grid and find the magnitude and direction of the field vector at every grid point.
At each grid point draw an arrow with its tail anchored at the grid point, a length proportional to the magnitude of the field vector, and pointing in the direction of the field. Example: the velocity field of an ideal fluid in a pipe. Continuity equation: A1v1 = A2v2. If A2 = ½A1 then v2 = 2v1, so the arrows in the narrower section of the pipe are twice as long as the arrows in the wider section. Example: the gravitational field near the surface of Earth. g = 9.8 m/s2 = constant, pointing downward; all arrows have the same length. Example: the electric field of a positive point charge at the origin. Note how fast the field decreases as a function of the distance from the point charge, as a consequence of the 1/r2 dependence. Arrows near the origin are not drawn, because they would be too long; the magnitude of the field approaches infinity as we approach the origin. The arrow representation for the field produced by more than one source can become quite messy. Another way to graphically represent a vector field is by drawing field lines. The direction of the field at any point is given by the direction of a line tangent to the field line, while the magnitude of the field is given qualitatively by the density of field lines. Field lines can emerge from sources and end in sinks, or they can form closed loops. To draw a field line, calculate the field at a point and draw a short line segment (Δl --> 0) in the direction of the field. Recalculate the field at the end of the line segment and repeat. Examples: velocity field lines or streamlines for a liquid flowing in a pipe, where the density of lines is higher in region 2, where the velocity of the liquid has a greater magnitude; field lines of the gravitational field near the surface of Earth, which are evenly spaced since the field is constant; and electric field lines for a positive charge (source) and for a negative charge (sink), where the number of lines emerging from or converging at the charge is proportional to the magnitude of the charge.
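To make the component formulas above concrete, here is a small Python sketch (our illustration, not part of the original lesson) that reproduces the worked numbers from this section: the sum C = A + B, its magnitude and angle, the dot and cross products, and the 1/r2 falloff of a point-source field.

import math

def add(a, b):
    # Component-wise vector sum.
    return tuple(x + y for x, y in zip(a, b))

def dot(a, b):
    # Scalar (dot) product: AxBx + AyBy + AzBz.
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # Vector (cross) product, using the component formulas above.
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

A = (3, 4, 0)
B = (2, -3, 0)

C = add(A, B)                                 # (5, 1, 0), as in the text
magnitude = math.sqrt(dot(C, C))              # (25 + 1)**0.5, about 5.1
angle = math.degrees(math.atan2(C[1], C[0]))  # about 11.3 degrees

print(C, round(magnitude, 1), round(angle, 1))
print(dot(A, B))      # 3*2 + 4*(-3) = -6
print(cross(A, B))    # (0, 0, -17), perpendicular to both A and B

# Field magnitude of a point source falls off as 1/r**2 (arbitrary units):
for r in (1.0, 2.0, 3.0):
    print(r, 1.0 / r**2)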
<urn:uuid:0853de42-282e-4a77-9ae2-c23522eb0774>
4.4375
2,160
Tutorial
Science & Tech.
62.547498
95,528,645
Scientists understand how magnets work to a degree, but what they don't understand is this: why do natural magnets ALWAYS have a north and a south pole? Further, no matter how many times you cut a magnet in half, you will always get a magnet that has a north and a south pole. Pretty strange, right? Scientists can generate lab-made magnets that only have a north or a south pole, but out in the wild this is simply not the case; it has never happened, and simply put, we just don't know why. You know what, let's just set magnets aside entirely. Why do they even exist? 5. A Cat's Purr Is there anything more symbolic of domesticated contentment than the purr of a cat? If there is, we haven't heard of it. If you have a cat, then you know just how content your kitty is when it is purring, and there is little more rewarding for a cat owner than to bask in the love of an adoring cat. Yet this simple act of purring has kept scientists on the verge of pulling out their hair for as long as they've been studying our feline friends. Scientists still cannot figure out exactly how cats produce their purring; they don't have a solid answer yet, but the leading hypothesis is that cats use the vocal folds in their larynx to create the vibrating sounds we interpret as purrs. Why do domesticated cats all choose to do this? No idea.
<urn:uuid:2ed4adc4-52fd-46fc-a7a0-5ac84edc007d>
3.09375
311
Listicle
Science & Tech.
68.778831
95,528,648
Extensive seagrass meadows discovered in Indian Ocean through satellite tracking of green turtles Research led by Swansea University's Biosciences department has discovered for the first time extensive deep-water seagrass meadows in the middle of the vast Indian Ocean through satellite tracking of the movement of green sea turtles. A new study by Swansea University and Deakin University academics, published in the recent Marine Pollution Bulletin, reported how the monitoring of the turtles, which forage on seagrasses, tracked the species to the Great Chagos Bank, the world's largest contiguous atoll structure in the Western Indian Ocean. This area lies in the heart of one of the world's largest Marine Protected Areas (MPAs), and the study involved the use of in-situ SCUBA and baited video surveys to investigate the daytime sites occupied by the turtles, resulting in the discovery of extensive monospecific seagrass meadows of Thalassodendron ciliatum. These habitats are critically important for storing huge amounts of carbon in their sediments and for supporting fish populations. At three sites that extended over 128 km of the Great Chagos Bank, there was a high seagrass cover (average of 74%) at depths of up to 29 metres. The mean species richness of fish in the seagrass meadows was 11 species per site, ranging from 8 to 14 species across the three sites. Results showed a high fish abundance, with a large predatory shark recorded at all sites, and given that the Great Chagos Bank extends over approximately 12,500 km² and many other large, deep submerged banks exist across the world's oceans, the results suggest that deep-water seagrass may be far more abundant than previously suspected. Reports of seagrass meadows at these depths with high fish diversity, dominated by large top predators, are relatively limited. Dr Nicole Esteban, a Research Fellow at Swansea University's Biosciences department, said: "Our study demonstrates how tracking marine megafauna can play a useful role to help identify previously unknown seagrass habitat. "We hope to identify further areas of critical seagrass habitat in the Indian Ocean with forthcoming turtle satellite tracking research." Dr Richard Unsworth, from Swansea University's Biosciences department, said: "Seagrasses struggle to live in deep waters due to their need for high light, but in these crystal clear waters of Chagos these habitats are booming. "Given how these habitats are threatened around the world it's great to come across a pristine example of what seagrass meadows should look like." This research was led by the Biosciences department at Swansea University, alongside researchers at Deakin University.
<urn:uuid:2291c1b1-90e8-458b-990d-f4d76721db9a>
3.234375
582
News Article
Science & Tech.
24.043343
95,528,653
Tsunami reached Peru, Northeast Canada: Study The quake produced less damaging tsunamis that struck areas one day later in the Atlantic Ocean and Antarctica, according to the study. Updated: Aug 26, 2005 09:34 IST The massive Sumatra earthquake last December 26, which sent deadly tsunami waves through the Indian Ocean, also touched off large waves that went as far as Peru and northeast Canada, according to a new scientific study. Besides the huge waves that smashed coastlines from Indonesia to India, killing more than 217,000 people, the quake also produced less damaging tsunamis that struck far-flung areas one day later in the Atlantic Ocean and around Antarctica, according to the study published in Science magazine on Thursday. Researcher Vasily Titov and colleagues at the Pacific Marine Environmental Laboratory in Seattle, Washington, said their research offered evidence of how local earthquakes could generate effects worldwide. The researchers used ocean models and tidal gauges to demonstrate that the tsunamis generated by the earthquake travelled around the globe several times before they dissipated. The waves' directions were guided by ridges in the middle of the oceans, according to the study. Sub-sea topography also helped to determine the levels of energy in the tsunamis. The researchers also discovered that significant quake-generated tsunami waves travelled from the Atlantic Ocean to the Pacific through the Drake Passage between Antarctica and South America. These waves were as strong as those which moved from the Indian Ocean into the Pacific, they said.
<urn:uuid:99f0ea19-c9bc-4d0e-92a7-ce3423d942ae>
3.296875
299
News Article
Science & Tech.
18.238956
95,528,656
Adult tiger moths exhibit a wide range of palatabilities to the insectivorous big brown bat Eptesicus fuscus. Much of this variation is due to plant allelochemics ingested and sequestered from their larval food. By using a comparative approach involving 15 species from six tribes and two subfamilies of the Arctiidae, we have shown that tiger moths feeding on cardiac glycoside-containing plants often contain highly effective natural feeding deterrents. Feeding on pyrrolizidine alkaloid-containing plants is also an effective deterrent to predation by bats, but less so than feeding on plants rich in cardiac glycosides. Moths feeding on plants containing iridoid glycosides and/or moths likely to contain biogenic amines were the least deterrent. By manipulating the diet of several tiger moth species we were able to adjust their degree of palatability and link it to the levels of cardiac glycosides or pyrrolizidine alkaloids in their food. We argue that intense selective pressure provided by vertebrate predators, including bats, has driven the tiger moths to sequester more and more potent deterrents against them and to acquire a suite of morphological characteristics and behaviors that advertise their noxious taste.
<urn:uuid:b6cba67f-cec5-4ed5-99d6-e0af8abc5fe0>
3.140625
275
Academic Writing
Science & Tech.
12.616765
95,528,672
An infrared spatial interferometer has been developed to extend the spatial resolution of astronomical observations beyond the diffraction limits of existing telescopes. It is the first such instrument to measure the angular diameters and shapes of circumstellar shells at wavelengths from 2 to 20 µm. As a result, the angular resolution of routine telescopic observations at 10 µm has been extended from ~1 arcsec to ~0.1 arcsec, even though observations are often obtained in ~3 arcsec conditions of atmospheric "seeing." D. W. McCarthy, F. J. Low, "Design and Operation of an Infrared Spatial Interferometer," Optical Engineering 16(6), 166569 (1 December 1977). https://doi.org/10.1117/12.7972163
<urn:uuid:43385a9a-1d39-484a-b284-17f9a8b253f8>
3.140625
159
Knowledge Article
Science & Tech.
55.249015
95,528,675
The High Power laser Energy Research facility (HiPER) is a proposed experimental laser-driven inertial confinement fusion (ICF) device undergoing preliminary design for possible construction in the European Union. HiPER is the first experiment designed specifically to study the "fast ignition" approach to generating nuclear fusion, which uses much smaller lasers than conventional designs, yet produces fusion power outputs of about the same magnitude. This offers a total "fusion gain" that is much higher than devices like the National Ignition Facility (NIF), and about a tenfold reduction in construction costs. Theoretical research since the design of HiPER in the early 2000s has cast doubt on fast ignition as a route to practical ICF. A new approach known as shock ignition has been proposed to address some of these problems. A similar ICF experimental setup in Japan was known as "HIPER", but it is no longer operational. Inertial confinement fusion (ICF) devices use "drivers" to rapidly heat the outer layers of a "target" to compress it. The target is a small spherical pellet containing a few milligrams of fusion fuel, typically a mix of deuterium and tritium. The heat of the laser burns the surface of the pellet into a plasma, which explodes off the surface. The remaining portion of the target is driven inwards due to Newton's Third Law, eventually collapsing into a small point of very high density. The rapid blowoff also creates a shock wave that travels towards the center of the compressed fuel. When it reaches the center of the fuel and meets the shock from the other side of the target, the energy in the shock wave further heats and compresses the tiny volume around it. If the temperature and density of that small spot can be raised high enough, fusion reactions will occur. The fusion reactions release high-energy particles, some of which (primarily alpha particles) collide with the high-density fuel around them and slow down. This heats the fuel further, and can potentially cause that fuel to undergo fusion as well. Given the right overall conditions of the compressed fuel (high enough density and temperature) this heating process can result in a chain reaction, burning outward from the center where the shock wave started the reaction. This is a condition known as "ignition", which can lead to a significant portion of the fuel in the target undergoing fusion, and the release of significant amounts of energy. To date most ICF experiments have used lasers to heat the targets. Calculations show that the energy must be delivered quickly to compress the core before it disassembles, as well as to create a suitable shock wave. The energy must also be focused extremely evenly across the target's outer surface to collapse the fuel into a symmetric core. Although other "drivers" have been suggested, notably heavy ions driven in particle accelerators, lasers are currently the only devices with the right combination of features. In the case of HiPER, the driver laser system is similar to existing systems like NIF, but considerably smaller and less powerful. The driver consists of a number of "beamlines" containing Nd:glass laser amplifiers at one end of the building. Just prior to firing, the glass is "pumped" to a high-energy state with a series of xenon flash tubes, causing a population inversion of the neodymium (Nd) atoms in the glass.
This readies them for amplification via stimulated emission when a small amount of laser light, generated externally in a fibre optic, is fed into the beamlines. The glass is not particularly effective at transferring power into the beam, so to get as much power as possible back out, the beam is reflected through the glass four times in a mirrored cavity, each time gaining more power. When this process is complete, a Pockels cell "switches" the light out of the cavity. One problem for the HiPER project is that Nd:glass is no longer being produced commercially, so a number of options need to be studied to ensure supply of the estimated 1,300 disks. From there, the laser light is fed into a very long spatial filter to clean up the resulting pulse. The filter is essentially a telescope that focuses the beam into a spot some distance away, where a small pinhole located at the focal point cuts off any "stray" light caused by inhomogeneities in the laser beam. The beam then widens out until a second lens returns it to a straight beam again. It is the use of spatial filters that leads to the long beamlines seen in ICF laser devices. In the case of HiPER, the filters take up about 50% of the overall length. The beam width at the exit of the driver system is about 40 cm × 40 cm. One of the problems encountered in previous experiments, notably the Shiva laser, was that the infrared light provided by the Nd:glass lasers (at ~1054 nm in vacuo) couples strongly with the electrons around the target, losing a considerable amount of energy that would otherwise heat the target itself. This is typically addressed through the use of an optical frequency multiplier, which can double or triple the frequency of the light, into the green or ultraviolet, respectively. These higher frequencies interact less strongly with the electrons, putting more power into the target. HiPER will use frequency tripling on the drivers. When the amplification process is complete the laser light enters the experimental chamber, lying at one end of the building. Here it is reflected off a series of deformable mirrors that help correct remaining imperfections in the wavefront, and then fed into the target chamber from all angles. Since the overall distances from the ends of the beamlines to different points on the target chamber are different, delays are introduced on the individual paths to ensure they all reach the center of the chamber at the same time, within about 10 picoseconds (ps). The target, a fusion fuel pellet about 1 mm in diameter in the case of HiPER, lies at the center of the chamber. HiPER differs from most ICF devices in that it also includes a second set of lasers for directly heating the compressed fuel. The heating pulse needs to be very short, about 10 to 20 ps long, but this is too short a time for the amplifiers to work well. To solve this problem HiPER uses a technique known as chirped pulse amplification (CPA). CPA starts with a short pulse from a wide-bandwidth (multi-frequency) laser source, as opposed to the driver, which uses a monochromatic (single-frequency) source. Light from this initial pulse is split into different colours using a pair of diffraction gratings and optical delays. This "stretches" the pulse into a chain several nanoseconds long. The pulse is then sent into the amplifiers as normal.
When it exits the beamlines it is recombined in a similar set of gratings to produce a single very short pulse, but because the pulse now has very high power, the gratings have to be large (approx 1 m) and sit in a vacuum. Additionally the individual beams must be lower in power overall; the compression side of the system uses 40 beamlines of about 5 kJ each to generate a total of 200 kJ, whereas the ignition side requires 24 beamlines of just under 3 kJ to generate a total of 70 kJ. The precise number and power of the beamlines are currently a subject of research. Frequency multiplication will also be used on the heaters, but it has not yet been decided whether to use doubling or tripling; the latter puts more power into the target, but is less efficient at converting the light. As of 2007, the baseline design is based on doubling into the green.
Fast Ignition and HiPER
In traditional ICF devices the driver laser is used to compress the target to very high densities. The shock wave created by this process further heats the compressed fuel when it collides in the center of the sphere. If the compression is symmetrical enough the increase in temperature can create conditions close to the Lawson criterion, leading to significant fusion energy production. If the resulting fusion rate is high enough, the energy released in these reactions will heat the surrounding fuel to similar temperatures, causing it to undergo fusion as well. In this case, known as "ignition", a significant portion of the fuel will undergo fusion and release large amounts of energy. Ignition is the basic goal of any fusion device. The amount of laser energy needed to effectively compress the targets to ignition conditions has grown rapidly from early estimates. In the "early days" of ICF research in the 1970s it was believed that as little as 1 kilojoule (kJ) would suffice, and a number of experimental lasers were built to reach these power levels. When they did, a series of problems, typically related to the homogeneity of the collapse, turned out to seriously disrupt the implosion symmetry and lead to much cooler core temperatures than originally expected. Through the 1980s the estimated energy required to reach ignition grew into the megajoule range, which appeared to make ICF impractical for fusion energy production. For instance, the National Ignition Facility (NIF) uses about 330 MJ of electrical power to pump the driver lasers, and in the best case is expected to produce about 20 MJ of fusion power output. Without dramatic gains in output, such a device would never be a practical energy source. The fast ignition approach attempts to avoid these problems. Instead of using the shock wave to create the conditions needed for fusion above the ignition range, this approach directly heats the fuel. This is far more efficient than the shock wave, which becomes less important. In HiPER, the compression provided by the driver is "good", but not nearly that created by larger devices like NIF; HiPER's driver is about 200 kJ and produces densities of about 300 g/cm3. That's about one-third that of NIF, and about the same as generated by the earlier NOVA laser of the 1980s. For comparison, lead is about 11 g/cm3, so this still represents a considerable amount of compression, notably when one considers that the target's interior contained light D-T fuel at around 0.1 g/cm3. Ignition is started by a very short (~10 picoseconds) ultra-high-power (~70 kJ, 4 PW) laser pulse, aimed through a hole in the plasma at the core.
The light from this pulse interacts with the fuel, generating a shower of high-energy (3.5 MeV) relativistic electrons that are driven into the fuel. The electrons heat a spot on one side of the dense core, and if this heating is localised enough it is expected to drive the area well beyond ignition energies. The overall efficiency of this approach is many times that of the conventional approach. In the case of NIF the laser generates about 4 MJ of infrared energy to create ignition that releases about 20 MJ of energy. This corresponds to a "fusion gain" (the ratio of laser energy in to fusion energy out) of about 5. If one uses the baseline assumptions for the current HiPER design, the two lasers (driver and heater) produce about 270 kJ in total, yet generate 25 to 30 MJ, a gain of about 100. Considering a variety of losses, the actual gain is predicted to be around 72. Not only does this outperform NIF by a wide margin, the smaller lasers are much less expensive to build as well. In terms of power-for-cost, HiPER is expected to be about an order of magnitude less expensive than conventional devices like NIF. Compression is already a fairly well-understood problem, and HiPER is primarily interested in exploring the precise physics of the rapid heating process. It is not clear how quickly the electrons stop in the fuel load; while this is known for matter under normal pressures, it is not known for the ultra-dense conditions of the compressed fuel. To work efficiently, the electrons should stop in as short a distance as possible, to release their energy into a small spot and thus raise the temperature (energy per unit volume) as high as possible. How to get the laser light onto that spot is also a matter for further research. One approach uses a short pulse from another laser to heat the plasma outside the dense "core", essentially burning a hole through it and exposing the dense fuel inside. This approach will be tested on the OMEGA-EP system in the US. Another approach, tested successfully on the GEKKO XII laser in Japan, uses a small gold cone that cuts through a small area of the target shell; on heating, no plasma is created in this area, leaving a hole that can be aimed into by shining the laser onto the inner surface of the cone. HiPER is currently planning on using the gold cone approach, but will likely study the hole-burning approach as well. In 2005 HiPER completed a preliminary study outlining possible approaches and arguments for its construction. The report received positive reviews from the EC in July 2007, and moved onto a preparatory design phase in early 2008 with detailed designs for construction beginning in 2011 or 2012. In parallel, the HiPER project also proposes to build smaller laser systems with higher repetition rates. The high-powered flash lamps used to pump the laser amplifier glass cause it to deform, and it cannot be fired again until it cools off, which takes as long as a day. Additionally, only a very small amount of the flash of white light generated by the tubes is of the right frequency to be absorbed by the Nd:glass and thus lead to amplification; in general only about 1 to 1.5% of the energy fed into the tubes ends up in the laser beam. Key to avoiding these problems is replacing the flash lamps with more efficient pumps, typically based on laser diodes. These are far more efficient at generating light from electricity, and thus run much cooler.
More importantly, the light they do generate is fairly monochromatic and can be tuned to frequencies that can be easily absorbed. This means that much less power needs to be used to produce any particular amount of laser light, further reducing the overall amount of heat being generated. The improvement in efficiency can be dramatic; existing experimental devices operate at about 10% overall efficiency, and it is believed "near term" devices will improve this to as high as 20%. HiPER proposes to build a demonstrator diode-pumped system producing 10 kJ at 1 Hz or 1 kJ at 10 Hz, depending on a design choice yet to be made. The best high-repetition lasers currently operating are much smaller; Mercury at Livermore is about 70 J, HALNA in Japan is ~20 J, and LUCIA in France is ~100 J. HiPER's demonstrator would thus be between 10 and 500 times as powerful as any of these. To make a practical commercial power generator, the high gain of a device like HiPER would have to be combined with a high-repetition-rate laser and a target chamber capable of extracting the power. Additional areas of research for post-HiPER devices include practical methods to carry the heat out of the target chamber for power production, protecting the device from the neutron flux generated by the fusion reactions, and the production of tritium from this flux to produce more fuel for the reactor.
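The gain and pulse-power figures quoted in this article are simple ratios, so they are easy to check. The Python sketch below is our illustration, not part of the article; the 15 ps pulse length is an assumed midpoint of the 10-20 ps range quoted earlier, and the other numbers come from the text.

# Back-of-the-envelope checks for the figures quoted above.

def fusion_gain(laser_energy_j, fusion_energy_j):
    """Fusion gain: fusion energy out divided by laser energy in."""
    return fusion_energy_j / laser_energy_j

# NIF: ~4 MJ of laser energy for ~20 MJ of fusion output -> gain ~5
print(fusion_gain(4e6, 20e6))

# HiPER baseline: ~270 kJ total (driver + heater) for ~25-30 MJ -> gain ~100
print(fusion_gain(270e3, 27e6))

# Heater pulse peak power: ~70 kJ delivered in ~10-20 ps.
energy_j = 70e3
pulse_s = 15e-12  # assumed midpoint of the quoted range
print(energy_j / pulse_s / 1e15, "PW")  # ~4.7 PW, close to the ~4 PW quoted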
<urn:uuid:781f2293-a580-47d7-8d3c-e4bcb4729dee>
3.640625
3,854
Knowledge Article
Science & Tech.
48.082648
95,528,684
The Role of Natural Variability in Shaping the Response of Coral Reef Organisms to Climate Change

Purpose of Review
We investigate whether regimes of greater daily variability in temperature or pH result in greater tolerance to ocean warming and acidification in key reef-building taxa (corals, coralline algae). Temperature and pH histories will likely influence responses to future warming and acidification. Past exposure of corals to increased temperature variability generally leads to greater thermotolerance. However, the effects of past pH variability are unclear. Variability in pH or temperature will likely modify responses during exposure to stressors, independent of environmental history. In the laboratory, pH variability often limited the effects of ocean acidification, but the effects of temperature variability on responses to warming were equivocal. Environmental variability could alter responses of coral reef organisms to climate change. Determining how both environmental history and the direct impacts of environmental variability will interact with the effects of anthropogenic climate change should now be a high priority.

Keywords: Environmental variability; Ocean acidification; Ocean warming; Coral; Crustose coralline algae; Calcification

We thank Emma Camp for sharing the raw data from her study. We also acknowledge Kristy Kroeker and Sarah Lummis for their contributions in designing and assembling a database of studies that test the effects of changes in carbonate chemistry on corals. This paper is Contribution No. 3677 of the Virginia Institute of Marine Science, College of William & Mary. Funding was provided to SC by an ARC Discovery Early Career Researcher Award (DE160100668).

Compliance with Ethical Standards
Conflict of Interest: On behalf of all authors, the corresponding author states that there is no conflict of interest.

- 3. Kleypas JA, Feely RA, Fabry VJ, Langdon C, Sabine CL, Robbins LL (2006). Impacts of ocean acidification on coral reefs and other marine calcifiers: a guide for future research. Report of a workshop held 18–20 April 2005, St. Petersburg, FL, sponsored by NSF, NOAA, and the US Geological Survey, 88 pp.
- 20. Ohde S, van Woesik R. Carbon dioxide flux and metabolic processes of a coral reef, Okinawa. Bull Mar Sci. 1999;65:559–76.
- 27. West-Eberhard M. Developmental plasticity and evolution. New York: Oxford University Press; 2003.
- 30. Boyd PW, Dillingham PW, McGraw CM, Armstrong EA, Cornwall CE, Feng Y-Y, et al. Physiological responses of a Southern Ocean diatom to complex future ocean conditions. Nat Clim Chang. 2015;6:207.
- 38. Hettinger A, Sanford E, Hill TM, Lenz EA, Russell AD, Gaylord B. Larval carry-over effects from ocean acidification persist in the natural environment. Glob Chang Biol. 2013;19:3317–26.
- 67. Putnam HM, Edmunds PJ. Responses of coral hosts and their algal symbionts to thermal heterogeneity. Fort Lauderdale: Proc 11th Int. Coral Reef Symp; 2008.
- 69. Comeau S, Cornwall CE. Contrasting effects of ocean acidification on coral reefs "animal forests" versus seaweed "kelp forests". In: Rossi S, editor. Marine Animal Forests. Switzerland: Springer; 2016. p. 1–25.
<urn:uuid:d394516d-d1f2-4915-8a32-67570411a003>
2.59375
738
Academic Writing
Science & Tech.
39.116197
95,528,685
ALL >> Education >> View Article

Swift: A New Programming Language Developed By Apple
Total Articles: 277

What is the "Swift Programming Language"?
Swift is an entirely new programming language built on the LLVM (Low-Level Virtual Machine) compiler and runtime. Although designed to coexist with Objective-C, Apple's previous object-oriented language, Swift will eventually replace it. The fundamental goal is to get rid of whole classes of the common programming mistakes that plague code, cutting out clutter and making programming easier. You can learn more about it online, or download Apple's official free eBook, "The Swift Programming Language", from the iBookstore.

There was a mixed reaction among Apple developers to this new language. Some were particularly concerned on hearing Swift positioned as the replacement for Objective-C, but there is no compelling reason to worry, as the two languages should coexist for a long while to come. There is no need to race to convert all of your projects; take as much time as you need migrating to Swift. Remember, it is still a new language, which is not meant to make you panic, but to make your work easier as a developer. Begin learning it, for instance at iOS training institutes in Bangalore, and understand it before making a head start towards this delightful language from Apple.

What is in it for developers?
• It includes programming-language functionality like generics, closures, type inference, multiple return types, operator overloads and so forth (a few of these are sketched in the code example at the end of this article).
• Existing Objective-C code can be merged with code written in Swift.
• It supports both Cocoa and Cocoa Touch.
• It provides automatic protection from overflow and memory mismanagement.
• Users can see their code in action as they write it.

What is in it for you?
• Faster compile and runtime speeds mean faster application development, delivery, and deployment.
• Clutter-free coding means fewer bugs and greater functionality.
• Automatic memory management gives rise to more stable applications.
• Faster updates: with simple integration into an existing Objective-C codebase, applications can be updated more frequently.
• Better availability, speed and less testing time reduce cost while improving reliability and user experience.

As Apple has proclaimed, Swift will, in the long run, replace Objective-C as the development language for Apple devices. It is an ideal opportunity to investigate this new language, for example through an app development course in Bangalore, for all future iOS developments and to make the transition. Whether you are considering completely rewriting your existing application or simply furthering its development, it is time to think about the change. This new language will make you a satisfied coder and will make you love your work considerably more.

Infocampus is one of the best providers of app development courses in Bangalore, offering iOS training with 100% placement assistance. Searching for the best iOS training institutes in Bangalore? Then the best training institute is Infocampus. It is the name behind thousands of successful IT professionals.
For more details, get in touch with us: 9738001024
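To make the feature list above concrete, here is a minimal Swift sketch of three of the features mentioned (type inference, closures, and multiple return values via a tuple). The function and variable names are invented for illustration:

    // Type inference: no explicit type annotations required.
    let greeting = "Hello, Swift"            // inferred as String
    let doubled = [1, 2, 3].map { $0 * 2 }   // trailing closure; inferred as [Int]

    // Multiple return values via a named tuple, wrapped in an optional.
    func minMax(of values: [Int]) -> (min: Int, max: Int)? {
        guard let first = values.first else { return nil }
        var lo = first, hi = first
        for v in values.dropFirst() {
            if v < lo { lo = v }
            if v > hi { hi = v }
        }
        return (lo, hi)
    }

    if let bounds = minMax(of: doubled) {
        print("\(greeting): min \(bounds.min), max \(bounds.max)")
    }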
<urn:uuid:b6b34a1d-681f-4995-a15a-a8168ae2a4bb>
2.890625
902
Truncated
Software Dev.
38.223356
95,528,706
Contamination of aquifers in many cases occurs because of the downward migration of contaminants from the land surface through the vadose zone, including through residuum sediments in the case of fractured or karst aquifers. It is therefore critically important to characterize and model the fate and transport of contaminants in both the vadose zone/residuum and the underlying saturated zone as part of remedial design for aquifer restoration. Residuum sediments usually contain a significant fraction of clay minerals, and fluid flow is often predominantly in the vertical direction. Groundwater modeling is useful when evaluating alternatives for contaminant mass removal in the vadose zone/residuum and their impact on the downgradient dissolved concentrations in the underlying aquifer. Two case studies illustrate the applicability of such modeling. The first study includes modeling of the expected natural longevity of a residual DNAPL mass accumulation in the saturated residuum, and its comparison to the effects of remedial DNAPL mass removal. The second study includes modeling contaminant transport from a residual DNAPL source within the vadose zone to the saturated zone in the residuum and the underlying bedrock aquifer. The natural longevity of the unsaturated DNAPL source is modeled, focusing on resulting downgradient concentrations in the residuum and bedrock aquifer. For comparison, two remedial actions are simulated: 90% mass removal and soil flushing.

Impact of contaminant mass reduction in residuum sediments on dissolved concentrations in underlying aquifer
N. Kresic, A. Mikszewski, J. Manuszak, M.C. Kavanaugh; Impact of contaminant mass reduction in residuum sediments on dissolved concentrations in underlying aquifer. Water Science and Technology: Water Supply 1 November 2007; 7 (3): 31–39. doi: https://doi.org/10.2166/ws.2007.064
<urn:uuid:c416ee0f-42f2-4ee4-8e74-f4b5a9f8ef76>
2.765625
391
Academic Writing
Science & Tech.
23.57725
95,528,731
The violent winter storms that rocked the United Kingdom in 2014 had the power to physically shake cliffs to a degree in excess of anything recorded previously, according to a new study.

A team at Plymouth University in the United Kingdom used seismometers, laser scanning and video cameras to evaluate the impact of the massive waves – up to eight meters (26 feet) high – that struck the cliffs in Porthleven, West Cornwall, during January and February of last year.

In a paper accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union, the team from the Coastal Processes Research Group at Plymouth University found that the level of shaking was an order of magnitude greater than ever previously recorded. They also recorded 1,350 cubic meters (47,675 cubic feet) of cliff face being eroded along a 300-meter (984-foot) stretch of coastline in just two weeks – a cliff retreat rate more than 100 times larger than the long-term average.

“Coastal cliff erosion from storm waves is observed worldwide but the processes are notoriously difficult to measure during extreme storm wave conditions when most erosion normally occurs, limiting our understanding of cliff processes,” said Claire Earlie, a PhD student at the School of Marine Science and Engineering at Plymouth University, and lead author of the new study.

“Over January-February 2014, during the most energetic Atlantic storm period since at least 1950, with deep water significant wave heights of six to eight meters (20 to 26 feet), cliff-top ground motions showed vertical ground displacements in excess of 50 to 100 microns; an order of magnitude larger than observations made previously anywhere in the world,” she said.

Using seismometers on loan from Scripps Institution of Oceanography in La Jolla, Calif., Earlie and the team embedded the instruments seven meters (23 feet) from the cliff edge. Within two weeks, they were just five meters (16 feet) from the edge, such had been the rate of erosion. Terrestrial laser scanner surveys conducted from the beach also revealed a cliff face volume loss two orders of magnitude larger than the long-term erosion rate.

“The results imply that erosion of coastal cliffs exposed to extreme storm waves is highly episodic and that long-term rates of cliff erosion will depend on the frequency and severity of extreme storm wave impacts,” said Paul Russell, a professor in the School of Marine Science and Engineering at Plymouth University, who helped to supervise the project and is a co-author of the new paper.

“Our coastline acts as a natural barrier to the sea, but what we’ve seen right across South West England is unprecedented damage and change – from huge amounts of sand being stripped from beaches to rapid erosion of cliffs,” added Gerd Masselink, professor of coastal geomorphology in the School of Marine Science and Engineering at Plymouth University and a co-author of the new study.

“These figures will help to explain some of the invisible forces being brought to bear on our coastal structures, and highlight the risk of sudden cliff damage,” he added.

The American Geophysical Union is dedicated to advancing the Earth and space sciences for the benefit of humanity through its scholarly publications, conferences, and outreach programs. AGU is a not-for-profit, professional, scientific organization representing more than 60,000 members in 139 countries.
Notes for Journalists
Journalists and public information officers (PIOs) of educational and scientific institutions who have registered with AGU can download a PDF copy of this article by clicking on this link: Or, you may order a copy of the final paper by emailing your request to Nanci Bompey at firstname.lastname@example.org. Please provide your name, the name of your publication, and your phone number. Neither the paper nor this press release is under embargo.

“Coastal cliff ground motions and response to extreme storm waves”
Claire S. Earlie: School of Marine Science and Engineering, Plymouth University, Plymouth, UK; Adam P. Young: Integrative Oceanography Division, Scripps Institution of Oceanography, University of California San Diego, La Jolla, California, USA; Gerd Masselink and Paul E. Russell: School of Marine Science and Engineering, Plymouth University, Plymouth, UK.

Contact information for the authors:
Claire S. Earlie: +44 7590 025745, email@example.com
+1 (202) 777-7524
University of Plymouth Contact: +44 (0)1752 588003

Nanci Bompey | American Geophysical Union
<urn:uuid:b9f58737-b9f4-4756-b017-2ebc24cfffb1>
3.703125
1,619
Content Listing
Science & Tech.
38.766997
95,528,735
A View from Brittany Sauser

A Nuclear-Powered Mars Hopper
Researchers say a vehicle could explore Mars more efficiently by collecting gas from the planet’s atmosphere to use as propellant.

Researchers at the Space Research Center in the United Kingdom have developed a concept for a Mars rover that would use nuclear power and propellant gathered from the Martian atmosphere to hop a kilometer at a time. A vehicle that hops such a distance could cover diverse areas faster than current wheeled rovers, says Hugo Williams, lead researcher for the new hopper concept.

Taking giant leaps across a planet is not a new idea. Researchers at Draper Laboratory in Cambridge, Massachusetts have developed a prototype of such a vehicle. But the recent work stands out because the vehicle would gather carbon dioxide from the Martian atmosphere, heat it up, and discharge it through a rocket nozzle to propel itself. While it will take the vehicle about a week to refuel after making a hop, Williams says having its fuel source in situ would extend its operational time and range. The hopper would also use a nuclear-powered engine, so it would not be reliant on solar panels for energy like the current generation of rovers.

Richard Ambrosi, a researcher at the Space Research Center, says that the vehicle would use a guidance, navigation, and control system already used by other spacecraft and would be mostly autonomous. The work was published Wednesday in the Proceedings of the Royal Society.
<urn:uuid:3c12db26-feec-4ee9-aa26-9f8ae15fa768>
4.03125
324
Truncated
Science & Tech.
35.509207
95,528,738
- Scientists have found that molecular oxygen around comet 67P is not produced on its surface, as some suggested, but may be from its body.
- The discovery changes our understanding of the basic mechanism of photosynthesis and should rewrite the textbooks.
- Decisions made in the next decade will determine whether Antarctica suffers dramatic changes that contribute to a metre of global sea level rise.
- Researchers have studied how a 'drumstick' made of light could make a microscopic 'drum' vibrate and stand still at the same time.
- Researchers have lab-tested a molecule that can combat the common cold virus by preventing it from hijacking human cells.
- Researchers have used lasers to connect, arrange and merge artificial cells, paving the way for networks of artificial cells that act like tissues.
- Studying the fleeting actions of electrons in organic materials will now be much easier, thanks to a new method for generating fast X-rays.
- A deadly fungus responsible for the devastation of amphibian populations around the world may have originated in East Asia, new research has found.
- A coupled system of two miniature detectors called nanopores improves detection of biological molecules, including DNA and markers of early disease.
- The maser (microwave amplification by stimulated emission of radiation), the older microwave-frequency sibling of the laser, was invented in 1954. However, unlike lasers, which have become widespread, masers are much less ...
<urn:uuid:7bc0d93f-0041-4e61-b940-2955fd8ef436>
2.953125
281
Content Listing
Science & Tech.
29.851883
95,528,758
Look! In the sky! It's a bird! It's a plane! It's ... a horseshoe? That was the question some folks in Battle Mountain, Nevada, and on social media were asking after a photo was shared of a decidedly odd-looking cloud.

One of the rarest clouds ever. This was taken over Battle Mountain, Nevada, USA on 8 March 2018.— NWS Elko (@NWSElko) March 9, 2018
It's called a horseshoe cloud for obvious reasons. #nvwx Credit goes to eagle-eye Christy Grimes. pic.twitter.com/XgQDY77ZzM

The cloud above, and others like it, aren't figments of your imagination or a plane's unfinished loop-the-loop. Instead, they're a very rare type of cloud called a — you guessed it — horseshoe cloud.

These clouds are the result of a combination of air flows getting mixed together. According to the Weather Channel, horseshoe clouds begin when a flat cloud, often a small cumulus, moves over a thermal, a column of rising warm air. The air rises fastest where it's the warmest, and that happens to be the middle of the cloud in the case of these horseshoe clouds. Because the middle section of the cloud rises faster than the sides, the ends of the cloud end up drooping while the middle soars. The difference in these speeds can also create a bit of a spin in the cloud. Or as one Twitter user explained, "It's a very weak & sideways cousin of a waterspout or tornado."

Q: What is even happening?!— Mika McKinnon (@mikamckinnon) March 9, 2018
A: Horseshoe clouds are partially-visible vortices. An updraft hit a sheer layer, getting knocked into a spin & flattened. Oversimplifucation: It's a very weak & sideways cousin of a waterspout or tornado, a quickly-dissipating shred of cloud. pic.twitter.com/S9pWeqrc2y

These clouds are not only rare because they require exactly the right sort of conditions, but they're also quick to dissipate.

Of course, some people weren't convinced that the cloud in Nevada was actually a horseshoe.

Horseshoe?? I know a staple when I see one, son. https://t.co/tMMv4r2XxB— what is dog?? (@semisponge) March 10, 2018

Or that it was actually the result of clouds and air.

That's literally a UFO exhaust port. Don't lie to me. https://t.co/4IrVB7baBs— Dentist (@Julianmunoz) March 10, 2018

But clouds and air are all they are. Unless, of course, you just can't let go of the UFO theory ...
<urn:uuid:9d63462f-ff33-4e48-8e1a-3594eac76402>
3.234375
626
News Article
Science & Tech.
73.444103
95,528,761
Section 3.14 Sage

Sage is an open source library of computational routines for symbolic, exact and numerical mathematics. It is designed to be a "viable free open source alternative to Magma, Maple, Mathematica, and Matlab." PreTeXt contains extensive support for including example Sage code in your document.

A typical use of the <sage> tag is to include an <input> element, followed by an <output> element (a small example appears at the end of this section). The content of the <input> element may be presented statically in PDF output, or dynamically as a Sage Cell in an output format based on HTML. Of course, for output as a CoCalc worksheet, the Sage code is presented in the worksheet's native format. The content of the <output> element is included in PDF output, but not in dynamic instances, since it can be re-computed. Notably, there is a conversion which pairs input and output into a single file in the format used by Sage's doctest framework. So if expected output is provided, it becomes automatic to identify when Sage has diverged from your expectations, and you can adjust your examples accordingly.

The Sage Cell Server can also be configured to interpret different languages, because Sage by default contains everything needed to evaluate code in these languages. This is done by providing a @language attribute; possible values include several of the languages distributed with Sage, such as singular, and the default is sage.

Note that the dynamic formats (including the Sage Cell) may run Sage "interacts," so it is possible to embed interactive demonstrations into your dynamic output formats.
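For instance, a <sage> element pairing input with its expected output might look like the following sketch (the factorization shown is what Sage prints for this input):

    <sage>
      <input>factor(2^64 + 1)</input>
      <output>274177 * 67280421310721</output>
    </sage>

With the expected output recorded this way, the doctest conversion described above can flag the example if a future version of Sage prints something different.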
<urn:uuid:b15dc52b-6a9c-41c8-a888-95b29000c498>
2.765625
331
Documentation
Software Dev.
29.995344
95,528,766
Devel::hdb - Perl debugger as a web page and REST service

To debug a Perl program, start it like this:

    perl -d:hdb yourprogram.pl

It will print a message on STDERR with the URL the debugger is listening to. Point your web browser at this URL and it will bring up the debugger GUI. It defaults to listening on localhost port 8080; to use a different port, start it like this:

    perl -d:hdb=port:9876 yourprogram.pl

To specify a particular IP address to listen on:

    perl -d:hdb=host:192.168.0.123 yourprogram.pl

And to listen on any interface:

    perl -d:hdb=a yourprogram.pl

The GUI is divided into three main parts: Control buttons, Code browser and Watch expressions. Additionally, click on the thick border between the code and watch panes to slide out the breakpoint list.

- Step In
Causes the debugger to execute the next line and stop. If the next line is a subroutine call, it will stop at the first executable line in the subroutine.
- Step Over
Causes the debugger to execute the next line and stop. If the next line is a subroutine call, it will step over the call and stop when the subroutine returns.
- Step Out
Causes the debugger to run the program until it returns from the current subroutine.
- Run
Resumes execution of the program until the next breakpoint.
- Exit
The debugged program immediately exits. The GUI in the web browser remains running.

Most of the interface is taken up by the code browser. Tabs along the top show one file at a time. The "+" next to the last tab brings up a dialog to choose another file to open. Click the "x" next to a file name to close that tab. The first tab is special: it shows the stack frames of the currently executing program and cannot be closed. The stack tab itself has tabs along the left, one tab for each stack frame; the most recent frame is at the top. Each of these tabs shows a Code Pane.

The line numbers on the left are struck through if that line is not breakable. For breakable lines, clicking on the line number will set an unconditional breakpoint and turn the number red. Right-clicking on a breakable line number will bring up a menu where a breakpoint condition and action can be set. Lines with conditional breakpoints are blue. Lines with actions have a circle outline, and are dimmed when the breakpoint is inactive.

The banner at the top of the Code Pane shows the current function and its arguments. Clicking on the banner will scroll the Code Pane to show the currently executing line.

Hover the mouse over a variable to see its value. It shows the value in whichever stack frame is being displayed. To see the values for variables in higher frames, select the appropriate frame from the tab on the left. Right-click on a variable to add it to the list of watch expressions on the right.

The right side of the GUI shows watch expressions. To add a new expression to the watch window, click on the "+" or right-click on a variable in a code pane. To remove a watched expression, click on the "x" next to its name. Composite types like hashes and arrays have a blue circled number indicating how many elements belong to it. To collapse/expand them, click the blue circle.

A watched typeglob will show all the used slots within the glob. Older versions of perl will always create an undefined value in the SCALAR slot. The value for the IO slot will be the file descriptor number of the filehandle and its position as reported by sysseek(), or undef if it is closed.

Enabling the checkbox next to the expression turns it into a watchpoint.
Watchpoints are evaluated in list context, and if their value changes, the debugged program will stop. For list values, it is considered changed if any of the elements changes value or if the number of values in the list changes. It will not perform a deep search for changed values.

The debugger responds to these keys, which are generally the same as the command-line debugger commands:
- s - Step in
- n - Step over
- <CR> - Repeat the last 's' or 'n'
- r - Step out
- c - Continue/Run
- q - Quit/Exit
- x - Add new watch expression
- f - Add new file
- . - Scroll the code window to show the current line
- L - Open the breakpoint manager
- b - Open the Quick Breakpoint entry dialog
- B - Toggle a breakpoint on the current line

The Quick Breakpoint dialog accepts several different types of expressions to set unconditional breakpoints:
- Sets a breakpoint on a particular line in the file the debugger is currently stopped in.
- Sets a breakpoint for the first line in the named subroutine within the current package.
- Sets a breakpoint for the nth line in the named subroutine within the current package.
- Sets a breakpoint for the first line in the named subroutine.
- Sets a breakpoint for the nth line in the named subroutine.
- Sets a breakpoint for a particular line in the named file.

The debugger overrides Perl's fork() built-in so that the child process will stop with the first statement after the fork(). In the parent process' debugger window, it will pop up a dialog with two options:
- Open a new browser window and debug the child process inside it.
- Disable the debugger in the child process, and force it to run to completion.

Devel::hdb can trace the execution of a program and stop if the code path differs from that of a previously saved run. First, run the program in trace mode:

    perl -d:hdb=trace:<tracefile> yourprogram.pl

In trace mode, the program runs normally, and the debugger does not stop execution. It records information about which lines of code were run into the specified file (tracefile, in this example). Next, run the program again in follow mode:

    perl -d:hdb=follow:<tracefile> yourprogram.pl

This time, the debugger starts up normally, stopping on the first line of the program. As the program runs, the debugger reads the trace information from the specified file. The first time the current location is different than what is in the file, the debugger will stop and report which line it expected. After it stops in this way, follow mode is disabled.

This package includes these third-party libraries:
- jQuery version 2.1.1 (http://jquery.com/)
- scrollTo jQuery plugin (http://flesler.blogspot.com/2007/10/jqueryscrollto.html)
- Twitter Bootstrap version 2.3.0 (http://twitter.github.com/bootstrap/index.html)
- Handlebars version 2.0.0 (http://handlebarsjs.com/)

Anthony Brummett <email@example.com>

Copyright 2014, Anthony Brummett. This module is free software. It may be used, redistributed and/or modified under the same terms as Perl itself.
<urn:uuid:2fcb8c75-c723-41a1-ad71-e03195904bd2>
2.828125
1,579
Documentation
Software Dev.
66.532386
95,528,774
When we define more than one member in a type with the same name (be it a constructor or, as we'll see later, a method) we call this overloading. Initially, we created two constructors (two overloads of the PolarPoint3D constructor), and they compiled just fine. This is because they took different sets of parameters. One took three doubles, the other two. In fact, there was also a third, hidden constructor that took no parameters at all. All three constructors took different numbers of parameters, meaning there's no ambiguity about which constructor we want when we initialize a PolarPoint3D.

The constructor in Example 3-31 seems different: the two doubles have different names. Unfortunately, this doesn't matter to the C# compiler—it only looks at the types of the parameters, and the order in which they are declared. It does not use names for disambiguation. This should hardly be surprising, because we're not required to provide argument names when we call methods or constructors. If we add the overload in Example 3-31, it's not clear what new PolarPoint3D(0, 0) would mean, and that's why we get an error—we've got two members with the same name (PolarPoint3D, the constructor) and exactly the same parameter types, in the same order.

Looking at overloaded functions also emphasizes that it really is only the method name and the parameters that matter—a function's return type is not considered to be a disambiguating aspect of the member for overload purposes. That means ...
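A short sketch of the situation being described; the PolarPoint3D members below are reconstructed for illustration rather than copied from the book's examples:

    public class PolarPoint3D
    {
        // These two overloads differ in their number of parameters, so both compile.
        public PolarPoint3D(double declination, double azimuth, double distance) { }
        public PolarPoint3D(double declination, double azimuth) { }

        // This one has the same parameter types, in the same order, as the
        // two-argument overload above. Different parameter *names* do not help:
        // the compiler rejects it with error CS0111.
        // public PolarPoint3D(double azimuth, double distance) { }
    }

    // The ambiguity the compiler is preventing: with both two-double overloads
    // present, there would be no way to tell which constructor this call means.
    // var p = new PolarPoint3D(0, 0);

Removing the commented-out overload resolves the conflict, which is exactly the compiler's point: parameter names are not part of a member's signature.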
<urn:uuid:0c9b06b7-51f0-40f9-95f5-a49a408b6f17>
2.65625
360
Truncated
Software Dev.
51.660441
95,528,808
Alternative Form for the Frenet Equations

The Frenet equations can be written in the form

$\frac{d\mathbf{T}}{ds} = \boldsymbol{\omega} \times \mathbf{T}, \quad \frac{d\mathbf{N}}{ds} = \boldsymbol{\omega} \times \mathbf{N}, \quad \frac{d\mathbf{B}}{ds} = \boldsymbol{\omega} \times \mathbf{B}, \quad \text{where } \boldsymbol{\omega} = \tau\mathbf{T} + \kappa\mathbf{B}.$

For a curve $\mathbf{r}(s)$ parametrized by arc length $s$, the Frenet equations are

1. $\frac{d\mathbf{T}}{ds} = \kappa\mathbf{N}$, where $\kappa$ is a scalar (the curvature, whose reciprocal $\rho = 1/\kappa$ is the radius of curvature) and $\mathbf{T}$ is the unit tangent to the curve.

2. $\frac{d\mathbf{B}}{ds} = -\tau\mathbf{N}$, where $\tau$ is a scalar called the torsion and $\mathbf{B} = \mathbf{T} \times \mathbf{N}$ is the unit binormal.

3. $\frac{d\mathbf{N}}{ds} = \tau\mathbf{B} - \kappa\mathbf{T}$, where $\mathbf{N}$ is a unit normal to the curve.

Write $\boldsymbol{\omega} = a\mathbf{T} + b\mathbf{N} + c\mathbf{B}$. The vectors $\mathbf{T}, \mathbf{N}, \mathbf{B}$ form a right-handed coordinate system, so $\mathbf{T} \times \mathbf{N} = \mathbf{B}$, $\mathbf{N} \times \mathbf{B} = \mathbf{T}$ and $\mathbf{B} \times \mathbf{T} = \mathbf{N}$, and hence

$\boldsymbol{\omega} \times \mathbf{T} = b\,\mathbf{N} \times \mathbf{T} + c\,\mathbf{B} \times \mathbf{T} = c\mathbf{N} - b\mathbf{B}.$

Equating the two expressions for $\frac{d\mathbf{T}}{ds}$ gives $c = \kappa$ and $b = 0$ (Eq. 1 above), so $\boldsymbol{\omega} = a\mathbf{T} + \kappa\mathbf{B}$. Substitute $\boldsymbol{\omega}$ into $\frac{d\mathbf{B}}{ds} = \boldsymbol{\omega} \times \mathbf{B}$ to obtain

$\boldsymbol{\omega} \times \mathbf{B} = a\,\mathbf{T} \times \mathbf{B} = -a\mathbf{N},$

which together with Eq. 2 gives $a = \tau$. Finally, substitute $\boldsymbol{\omega} = \tau\mathbf{T} + \kappa\mathbf{B}$ into $\frac{d\mathbf{N}}{ds} = \boldsymbol{\omega} \times \mathbf{N}$ to obtain

$\boldsymbol{\omega} \times \mathbf{N} = \tau\,\mathbf{T} \times \mathbf{N} + \kappa\,\mathbf{B} \times \mathbf{N} = \tau\mathbf{B} - \kappa\mathbf{T},$

which is Eq. 3 above.
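As a quick check of the algebra, the three cross products can be verified symbolically in a fixed right-handed orthonormal frame (a sketch in Python using sympy; the particular frame chosen does not affect the identity):

    import sympy as sp

    kappa, tau = sp.symbols('kappa tau')

    # Any right-handed orthonormal frame works for checking the identity.
    T = sp.Matrix([1, 0, 0])
    N = sp.Matrix([0, 1, 0])
    B = sp.Matrix([0, 0, 1])

    omega = tau * T + kappa * B  # the Darboux vector

    assert omega.cross(T) == kappa * N            # Eq. 1: T' = kappa N
    assert omega.cross(B) == -tau * N             # Eq. 2: B' = -tau N
    assert omega.cross(N) == tau * B - kappa * T  # Eq. 3: N' = tau B - kappa T
    print("omega = tau T + kappa B reproduces all three Frenet equations")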
<urn:uuid:b5bda6ba-5a26-404c-b701-cdf619b9f5da>
2.671875
141
Tutorial
Science & Tech.
42.393532
95,528,815
That matter is composed of atoms we take for granted … but can we hope to unravel what they actually look like? The simplest of experiences can hatch eureka moments. Legend has it that despite all his inherited wealth and global travels, the ancient Greek philosopher Democritus hit upon one of the most fundamental ideas in physics while sitting in the comfort of his own home.

One of the most common and baffling science questions is "how does gravity work?". If you too find yourself confused then rest assured you're in good company … even Sir Isaac Newton admitted he was baffled by gravity. In this article we look at Einstein's solution to this conundrum.

It's often said that water draining out of a sink or bath swirls down the plughole anti-clockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere … but is that really true?

If the fact that magnetic compasses don't actually point to the geographical North Pole comes as a shock to you, then what we discuss below will blow your socks off, because magnetic north actually moves year by year AND the poles have even flipped in the past; North is South & South is North!

We take radio and television signals for granted now. But if you really think about it, radio waves really are a strange phenomenon … invisibly beaming sound from one point to another through and around solid objects!

We are stardust … the chemical elements that make up our bodies originated, with one exception, in the turbulent body of a star.

Why do so many things seem to go wrong? Sometimes you suspect that the universe is out to get you! We need to look at some scientific theories surrounding something called entropy, or the "laws of thermodynamics".

Ground source heat pumps promise to supply most if not all the heating required for any new house without drawing ANY energy from the electricity grid! What are they? Ground source heat pumps use solar energy naturally stored in soil, bedrock and groundwater as a heat source. They do require electricity to operate, but efficiently produce up to five times as much heat energy for every unit of electricity they use. How does it work? A...

The most accurate timepiece today is an atomic clock. But how does it work & what's the theory behind it?

Microwave ovens are part of everyday life. But how do they work?

Why is it that we don't fall through the floor? What's stopping us? A stupid question, surely? … not really when you consider what an atom is made up of.

Solar cells, or photovoltaic cells, convert energy from sunlight into electricity… but how? What principles lie behind this technology?

How many of us have a foot a foot long? What are the origins of some of the measurements we use today?
<urn:uuid:dedf6573-47c3-4b5b-92e8-7b68e1495294>
3.0625
596
Content Listing
Science & Tech.
56.224462
95,528,823
Much of what living cells do is carried out by "molecular machines" – physical complexes of specialized proteins working together to carry out some biological function. How the minute steps of evolution produced these constructions has long puzzled scientists, and provided a favorite target for creationists. In a study published early online on Sunday, January 8, in Nature, a team of scientists from the University of Chicago and the University of Oregon demonstrate how just a few small, high-probability mutations increased the complexity of a molecular machine more than 800 million years ago. By biochemically resurrecting ancient genes and testing their functions in modern organisms, the researchers showed that a new component was incorporated into the machine due to selective losses of function rather than the sudden appearance of new capabilities. "Our strategy was to use 'molecular time travel' to reconstruct and experimentally characterize all the proteins in this molecular machine just before and after it increased in complexity," said the study's senior author Joe Thornton, PhD, professor of human genetics and evolution & ecology at the University of Chicago, professor of biology at the University of Oregon, and an Early Career Scientist of the Howard Hughes Medical Institute. "By reconstructing the machine's components as they existed in the deep past," Thornton said, "we were able to establish exactly how each protein's function changed over time and identify the specific genetic mutations that caused the machine to become more elaborate." The study – a collaboration of Thornton's molecular evolution laboratory with the biochemistry research group of the UO's Tom Stevens, professor of chemistry and member of the Institute of Molecular Biology – focused on a molecular complex called the V-ATPase proton pump, which helps maintain the proper acidity of compartments within the cell. One of the pump's major components is a ring that transports hydrogen ions across membranes. In most species, the ring is made up of a total of six copies of two different proteins, but in fungi a third type of protein has been incorporated into the complex. To understand how the ring increased in complexity, Thornton and his colleagues "resurrected" the ancestral versions of the ring proteins just before and just after the third subunit was incorporated. To do this, the researchers used a large cluster of computers to analyze the gene sequences of 139 modern-day ring proteins, tracing evolution backwards through time along the Tree of Life to identify the most likely ancestral sequences. They then used biochemical methods to synthesize those ancient genes and express them in modern yeast cells. Thornton's research group has helped to pioneer this molecular time-travel approach for single genes; this is the first time it has been applied to all the components in a molecular machine. The group found that the third component of the ring in Fungi originated when a gene coding for one of the subunits of the older two-protein ring was duplicated, and the daughter genes then diverged on their own evolutionary paths. The pre-duplication ancestor turned out to be more versatile than either of its descendants: expressing the ancestral gene rescued modern yeast that otherwise failed to grow because either or both of the descendant ring protein genes had been deleted. In contrast, each resurrected gene from after the duplication could only compensate for the loss of a single ring protein gene. 
The researchers concluded that the functions of the ancestral protein were partitioned among the duplicate copies, and the increase in complexity was due to complementary loss of ancestral functions rather than gaining new ones. By cleverly engineering a set of ancestral proteins fused to each other in specific orientations, the group showed that the duplicated proteins lost their capacity to interact with some of the other ring proteins. Whereas the pre-duplication ancestor could occupy five of the six possible positions within the ring, each duplicate gene lost the capacity to fill some of the slots occupied by the other, so both became obligate components for the complex to assemble and function. "It's counterintuitive but simple: complexity increased because protein functions were lost, not gained," Thornton said. "Just as in society, complexity increases when individuals and institutions forget how to be generalists and come to depend on specialists with increasingly narrow capacities." The research team's last goal was to identify the specific genetic mutations that caused the post-duplication descendants to functionally degenerate. By reintroducing historical mutations that occurred after the duplication into the ancestral protein, they found that it took only a single mutation from each of the two lineages to destroy the same specific functions and trigger the requirement for a three-protein ring. "The mechanisms for this increase in complexity are incredibly simple, common occurrences," Thornton said. "Gene duplications happen frequently in cells, and it's easy for errors in copying to DNA to knock out a protein's ability to interact with certain partners. It's not as if evolution needed to happen upon some special combination of 100 mutations that created some complicated new function." Thornton proposes that the accumulation of simple, degenerative changes over long periods of times could have created many of the complex molecular machines present in organisms today. Such a mechanism argues against the intelligent design concept of "irreducible complexity," the claim that molecular machines are too complicated to have formed stepwise through evolution. "I expect that when more studies like this are done, a similar dynamic will be observed for the evolution of many molecular complexes," Thornton said. "These really aren't like precision-engineered machines at all," he added. "They're groups of molecules that happen to stick to each other, cobbled together during evolution by tinkering, degradation, and good luck, and preserved because they helped our ancestors to survive." The paper, "Evolution of increased complexity in a molecular machine," appears in the January 18, 2012, issue of Nature [doi: 10.1038/nature10724]. The work was a collaboration of Thornton's molecular evolution lab with the research group of Tom Stevens, a yeast geneticist at the University of Oregon. Other authors include Gregory C. Finnigan and Victor Hanson-Smith, of the University of Oregon. Funding for this work was provided by the National Institutes of Health, the National Science Foundation, and the Howard Hughes Medical Institute. John Easton | EurekAlert! 
World’s Largest Study on Allergic Rhinitis Reveals new Risk Genes 17.07.2018 | Helmholtz Zentrum München - Deutsches Forschungszentrum für Gesundheit und Umwelt Plant mothers talk to their embryos via the hormone auxin 17.07.2018 | Institute of Science and Technology Austria For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 17.07.2018 | Information Technology 17.07.2018 | Materials Sciences 17.07.2018 | Power and Electrical Engineering
<urn:uuid:e94972df-e9cd-4750-9d99-962fcb8e6895>
3.875
1,934
Content Listing
Science & Tech.
30.874333
95,528,842
By: Kazutoshi Yabuki

Deals with photosynthesis and growth of plants/crops from an environmental-engineering and environmental-physics point of view. A theory of CO2 diffusion and photosynthesis, applied to a single leaf, a plant, a plant community and forests, is also developed and discussed in detail.

From the contents: Preface.- I: A Closer Look at Wind.- II: CO2 Exchange between the Air and the Leaf.- III: Photosynthetic Rate in the Aspects of the Leaf Boundary Layer.- IV: Photosynthetic Rate of a Plant Community and Wind Speed.- V: Gas Exchange between the Pneumatophores and Roots of Mangroves by Photosynthesis of Pneumatophore.
<urn:uuid:599e289b-1f07-4da4-9839-919cb0a06139>
2.609375
224
Product Page
Science & Tech.
46.256598
95,528,843
Now you can extend that truism about oil and water to water and itself. Water and water don't always mix, either.

The textbooks say that water readily comes together with other water, open arms of hydrogen clasping oxygen attached to other OH molecules. This is the very definition of "wetness." But scientists at Pacific Northwest National Laboratory have observed a first: a single layer of water--ice grown on a platinum wafer--that gives the cold shoulder to subsequent layers of ice that come into contact with it.

"Water-surface interactions are ubiquitous in nature and play an important role in many technological applications such as catalysis and corrosion," said Greg Kimmel, staff scientist at the Department of Energy lab and lead author of a paper in the current issue (Oct. 15 advance online edition) of Physical Review Letters. "It was assumed that one end of the water molecule would bind to metal, and at the other end would be these nice hydrogen attachment points for the atoms in next layer of water."

A theory out of Cambridge University last year suggested that these attachment points, or "dangling OH's," did not exist, that instead of dangling, the OH's were drawn by the geometry of hexagonal noble-metal surfaces and clung to that. Kimmel and his co-authors, working at the PNNL-based W.R. Wiley Environmental Molecular Sciences Laboratory, tested the theory with a technique called rare gas physisorption that enlists krypton to probe metal surfaces and water layers on those surfaces. They found that the first single layer of water, or monolayer, wetted the platinum surface as they had expected but "that subsequent layers did not wet the first layer," Kimmel said. "In other words, the first layer of water is hydrophobic."

The results jibe with an earlier Stanford University study that used X-ray adsorption to show that rather than being fixed pointing outward in the dangling position, wet and ready to receive the next water layer, the arms of a water monolayer on a metal surface are double-jointed. They swivel back toward the surface of the metal to find a place to bind. To the water molecules approaching this bent-over-backward surface, the layer has all the attractiveness of a freshly waxed car's hood.

The second layer beads up, but that's not all: Additional water's attraction to that first hydrophobic water monolayer is so weak that 50 or more ice-crystal layers can be piled atop the first until all the so-called non-wetting portions are covered--akin to "the coalescence of water drops on a waxed car in a torrential downpour," said Bruce Kay, PNNL laboratory fellow and co-author with Kimmel and PNNL colleagues Nick Petrik and Zdenek Dohnálek.

Kimmel said that self-loathing water on metal is more than a curiosity and will come as a surprise to many in the field who assumed that water films uniformly cover surfaces. Hundreds of experiments have been done on thin water films grown on metal surfaces to learn such things as how these films affect molecules in which they come into contact and what role heat, light and high-energy radiation play in such interactions.

Source: Pacific Northwest National Laboratory
<urn:uuid:1abbb32f-8900-41f8-97ab-d02e5c99963c>
3.53125
697
News Article
Science & Tech.
38.381233
95,528,854
Global interpreter lock

A global interpreter lock (GIL) is a mechanism used in computer-language interpreters to synchronize the execution of threads so that only one native thread can execute at a time. An interpreter that uses a GIL always allows exactly one thread to execute at a time, even if run on a multi-core processor. Some popular interpreters that have a GIL are CPython and Ruby MRI.

Technical background concepts

A global interpreter lock (GIL) is a mutual-exclusion lock held by a programming-language interpreter thread to avoid sharing code that is not thread-safe with other threads. In implementations with a GIL, there is always one GIL for each interpreter process. Applications running on implementations with a GIL can be designed to use separate processes to achieve full parallelism, as each process has its own interpreter and in turn has its own GIL (see the sketch after the reference list below). Otherwise, the GIL can be a significant barrier to parallelism.

Benefits and drawbacks

Use of a global interpreter lock in a language effectively limits the amount of parallelism reachable through concurrency of a single interpreter process with multiple threads. If the process is almost purely made up of interpreted code and does not make calls outside of the interpreter which block for long periods of time (allowing the GIL to be released by that thread while they process), there is likely to be very little increase in speed when running the process on a multiprocessor machine. Due to signaling with a CPU-bound thread, it can cause a significant slowdown, even on single processors.

Reasons for employing such a lock include:
- increased speed of single-threaded programs (no necessity to acquire or release locks on all data structures separately),
- easy integration of C libraries that usually are not thread-safe,
- ease of implementation (having a single GIL is much simpler to implement than a lock-free interpreter or one using fine-grained locks).

Some language implementations that implement a global interpreter lock are CPython, the most widely used implementation of Python, and Ruby MRI, the reference implementation of Ruby (where it is called the Global VM Lock). JVM-based equivalents of these languages (Jython and JRuby) do not use global interpreter locks. IronPython and IronRuby are implemented on top of Microsoft's Dynamic Language Runtime and also avoid using a GIL.

- "GlobalInterpreterLock". Retrieved 30 November 2015.
- David Beazley (2009-06-11). "Inside the Python GIL" (PDF). Chicago: Chicago Python User Group. Retrieved 2009-10-07.
- Shannon -jj Behrens (2008-02-03). "Concurrency and Python". Dr. Dobb's Journal. p. 2. Retrieved 2008-07-12. "The GIL is a lock that is used to protect all the critical sections in Python. Hence, even if you have multiple CPUs, only one thread may be doing 'pythony' things at a time."
- Python/C API Reference Manual: Thread State and the Global Interpreter Lock
- "IronPython at python.org". python.org. Retrieved 2011-04-04. "IronPython has no GIL and multi-threaded code can use multi core processors."
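A small, self-contained sketch of the consequence described above: CPU-bound Python threads in CPython effectively run one at a time, while separate processes each get their own interpreter and GIL. Timings are machine-dependent and the workload is invented for illustration:

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def countdown(n):
        # Pure-Python, CPU-bound work: the thread running this holds the GIL.
        while n > 0:
            n -= 1

    N = 10_000_000

    def timed(executor_cls):
        start = time.perf_counter()
        with executor_cls(max_workers=2) as ex:
            list(ex.map(countdown, [N, N]))  # two equal chunks, run concurrently
        return time.perf_counter() - start

    if __name__ == "__main__":
        # Threads: roughly serial, since the GIL admits one thread at a time.
        print("threads:   %.2fs" % timed(ThreadPoolExecutor))
        # Processes: roughly parallel, one GIL per interpreter process.
        print("processes: %.2fs" % timed(ProcessPoolExecutor))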
<urn:uuid:7ade2812-a9cc-41e6-8416-db8e2e85880e>
3.875
676
Knowledge Article
Software Dev.
43.478338
95,528,869
The Formation and Evolution of the Solar System by James Schombert Publisher: University of Oregon 2007 The purpose of this course is to educate you on the basic science behind our exploration of the Solar System so you may make informed choices as future/current voters on issues of our environment and the future of science in this country. This course requires a basic understanding of mathematics. Home page url Download or read it online for free here: by Thomas P. Hansen - NASA The 1964 Lunar Orbiter program consisted of the investigation of the Moon by five unmanned spacecraft. Its objective was to obtain detailed photographs of the Moon. This document presents information on the location and coverage of all photographs. by Elbert A. King - Lunar and Planetary Institute The excitement of the Apollo program was that it accomplished a bold leap from the surface of the Earth to the Moon. The deed challenged our technology and engineering skill. Preparations are being made now for another and even more daring leap. The Solar System consists of the Sun and its planetary system of eight planets, their moons, and other non-stellar objects. It formed 4.6 billion years ago from the collapse of a molecular cloud. The majority of the system's mass is in the Sun... by Andrew N. Youdin, Scott J. Kenyon - arXiv The text covers the theory of planet formation with an emphasis on the physical processes relevant to current research. After summarizing empirical constraints from astronomical data we describe the structure and evolution of protoplanetary disks.
<urn:uuid:e83b4339-f361-402b-bc4f-b90ce379b5f5>
3.234375
312
Content Listing
Science & Tech.
44.613846
95,528,896
Anyone involved in macromolecular crystallography will know that for many years scientists have had to rely on a multi-stage process utilizing protein, usually expressed in engineered cells, which is then extracted and purified before crystallization in vitro and finally prepared for analysis. As a counter to this time-consuming and substantial scientific effort, there are a number of examples of protein crystallization events occurring in vivo, with next to no human input. In a case presented in a recent paper an insect virus exploits the phenomenon as part of its life cycle.

Not surprisingly an issue with intracellular protein crystals is that they are typically very small, limited by the size of the cell. However, microfocus beamlines at synchrotron light sources prove here to be capable and refined in the analysis of micron-scale in vivo samples. A group of scientists from the Diamond Light Source and the University of Oxford, UK [Axford et al. (2014), Acta Cryst. D70, 1435-1441; doi:10.1107/S1399004714004714] has been able to study crystals inside the cells directly using X-ray analysis without complex attempts to extract and prepare samples.

It would not be out of place to assume that the presence of cellular material might compromise the experiment. However, the researchers' results show that the exact opposite may actually be true; the cell maintains the crystals in an environment amenable to the collection of data. It will be interesting to see if an improved understanding of protein crystallization in vivo can bring more targets within reach of such analysis. Certainly continued technical developments, including increased photon flux and reduced beam size, will improve the signal-to-noise ratio. Together with more efficient data processing, this means that we will be able to do more with less and exploit novel microcrystal targets of increasing complexity for in vivo structural studies.

Business Development Manager, IUCr
Dr. Jonathan Agbenyega | EurekAlert!
<urn:uuid:8cf1fdef-c36c-4eda-b6ac-b988e52098f4>
2.859375
988
Content Listing
Science & Tech.
36.360372
95,528,900
A submarine dives to a depth of 95.2 meters in the ocean. What is the force (in Newtons) acting on a hatch that has an area of 4.3 square meters? WARNING: do not use the information associated with Figure 4.26 to solve this problem because the round-off error will cause you to get the wrong answer. Do not consider atmospheric pressure in your calculation, since the pressure inside of the submarine is maintained at 1 atmosphere, which offsets the 1 atmosphere of pressure that normally would be added to the gauge pressure. Pressure = density * gravity * height = 1000 * 9.81 * 95.2 ... P = r * g * h, where r (rho) is the density of the fluid; the density of sea water is 1.03...
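To complete the arithmetic that the truncated answers set up, here is a minimal sketch of P = ρgh followed by F = PA. The only choice is the density, so both values quoted in the excerpts (1000 kg/m³ for fresh water, about 1030 kg/m³ for sea water) are shown:

```cpp
#include <cstdio>

// Hydrostatic gauge pressure and hatch force for the problem above:
// P = rho * g * h, then F = P * A. Atmospheric pressure is omitted,
// as the problem statement instructs.
int main() {
    const double g = 9.81;   // m/s^2
    const double h = 95.2;   // m, depth of the hatch
    const double A = 4.3;    // m^2, hatch area
    for (double rho : {1000.0, 1030.0}) {   // fresh vs. sea water density
        double P = rho * g * h;             // gauge pressure, Pa
        double F = P * A;                   // force on the hatch, N
        std::printf("rho = %6.1f kg/m^3 : P = %.3e Pa, F = %.3e N\n", rho, P, F);
    }
    return 0;
}
```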
<urn:uuid:f36c73da-c38b-4b32-8a98-0d4370e1a1d6>
3.25
174
Q&A Forum
Science & Tech.
78.116468
95,528,908
What methods or tools are you familiar with that facilitate requirements elicitation, specification and decomposition?
The best-known and oldest process is the waterfall model, where developers follow these steps in order:
* state requirements
* analyze requirements
* design a solution approach
* architect a software framework for that solution
* develop code
* test (unit tests, then system tests)
* deploy, and
* post-implementation review.
After each step is finished, the process proceeds to the next step.
Iterative and incremental development model
Iterative and incremental development is a software development process developed in response to the weaknesses of the more traditional waterfall model. The two best-known iterative development frameworks are:
- the Rational Unified Process and
- the Dynamic Systems Development Method.
Iterative and incremental development is also an essential part of:
- Extreme Programming and
- all other agile software development frameworks.
The basic idea behind iterative enhancement is to develop a software system incrementally, allowing the developer to take advantage of what is learned during the development of earlier, incremental, deliverable versions of the system. Learning comes from both the development and use of the system, where possible. The key steps in the process are to start with a simple implementation of a subset of the software requirements and iteratively enhance the evolving sequence of versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added. The procedure itself consists of the initialization step, the iteration step, and the project control list. The initialization step creates a base version of the system. The goal for this initial implementation is to create a product to which the ...
<urn:uuid:67c5a517-8c1d-4fdb-bc5e-3aeea0d2d9bd>
2.671875
368
Truncated
Software Dev.
16.428672
95,528,921
Compound of five cubohemioctahedra
- Faces: 30 squares, 20 hexagons
- Symmetry group: icosahedral (Ih)
- Subgroup restricting to one constituent: pyritohedral (Th)
There is some controversy on how to colour the faces of this polyhedron compound. Although the common way to fill in a polygon is to just colour its whole interior, this can result in some filled regions hanging as membranes over empty space. Hence, the "neo filling" is sometimes used instead as a more accurate filling. In the neo filling, orientable polyhedra are filled traditionally, but non-orientable polyhedra have their faces filled with the modulo-2 method (only odd-density regions are filled in). In addition, overlapping regions of coplanar faces can cancel each other out.
- Skilling, John (1976), "Uniform Compounds of Uniform Polyhedra", Mathematical Proceedings of the Cambridge Philosophical Society, 79 (3): 447–457, doi:10.1017/S0305004100052440, MR 0397554.
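The modulo-2 rule described above has a simple two-dimensional analogue, the even-odd crossing test: a region is filled exactly when a ray from it crosses the boundary an odd number of times. The sketch below applies it to a self-intersecting quadrilateral; the coordinates and test points are arbitrary illustrations, not data from the compound itself:

```cpp
#include <vector>
#include <cstdio>

// Even-odd ("only odd-density regions are filled") test: count how many
// polygon edges a rightward ray from the point crosses; odd count => filled.
struct Pt { double x, y; };

bool oddDensity(const std::vector<Pt>& poly, Pt p) {
    bool odd = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        // Does edge (j,i) straddle the horizontal line y = p.y?
        if ((poly[i].y > p.y) != (poly[j].y > p.y)) {
            double xCross = poly[j].x + (p.y - poly[j].y) *
                            (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
            if (p.x < xCross) odd = !odd;   // each crossing flips the parity
        }
    }
    return odd;   // odd crossing count => odd-density region => filled
}

int main() {
    // A self-intersecting "bowtie" quadrilateral: its lobes have odd
    // density (filled), while the wedge between the crossing edges is
    // even-density and is left unfilled by the modulo-2 rule.
    std::vector<Pt> bowtie = {{0,0},{2,2},{2,0},{0,2}};
    std::printf("lobe point  (1.5,1.00): %s\n", oddDensity(bowtie,{1.5,1.0})  ? "filled":"empty");
    std::printf("wedge point (1.0,0.25): %s\n", oddDensity(bowtie,{1.0,0.25}) ? "filled":"empty");
    return 0;
}
```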
<urn:uuid:510e6308-be9c-442d-9cdf-b5480437d46b>
3.09375
277
Knowledge Article
Science & Tech.
30.165556
95,528,950
Researchers hope newly developed robots will give them their first look at the mysterious ridge located between Greenland and Siberia. Scientists from the Woods Hole Oceanographic Institution on Cape Cod plan to begin a 40-day expedition of the ridge on July 1. They plan to use the robots to navigate and map its terrain and sample any life found near a series of underwater hot springs. Tim Shank, lead biologist on the international expedition, said researchers have no idea what new life at the ridge might be like. "I almost think it's like going to Australia for the first time, knowing it's there, but not knowing what lives there," he said. The Gakkel Ridge marks a 1,100-mile stretch from north of Greenland toward Siberia, where the North American and Eurasian tectonic plates continuously move away from each other. Scientists believe new life could be discovered there because of hot springs that are created at such tectonic boundaries when ocean water comes into contact with hot magma rising from the earth's mantle. The organisms known to exist in the Arctic basin, where the Gakkel is located, may have evolved in a unique fashion because they were mostly isolated from the life in the deep waters of other oceans for all but the last 25 million years, said Robert Reves-Sohn, the expedition's lead scientist. The job of reaching any new organisms at the ridge falls to scientists operating three new robotic vehicles, two of which are designed to navigate untethered under the ice. The two robots, named Puma and Jaguar, cost about US$450,000 (EUR335,000) each and received significant funding from NASA because their mission is similar to what scientists hope to do in a future exploration under the ice of one of Jupiter's moons, Europa. The robots are built to descend to about 5,000 meters and work 5 to 6 meters off the bottom, photographing and removing samples, said Hanumant Singh, the project's chief engineer. The advances are no guarantee of success, however. The hot springs are difficult to find in far less challenging conditions and the margin for error is thin, since the robots cannot surface through the ice and be retrieved if there are problems. Singh said the excitement of finding new organisms and understanding the geology in the Arctic outweighs any risks to the robots. "Even though we know there's a strong probability, or there's a reasonable probability of losing a vehicle, it's still worth it," he said.
<urn:uuid:adb2087a-de6b-4c58-9244-703d70c536a5>
3.3125
568
News Article
Science & Tech.
35.949333
95,528,962
Scalar–tensor–vector gravity (STVG) is a modified theory of gravity developed by John Moffat, a researcher at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. The theory is also often referred to by the acronym MOG (MOdified Gravity). STVG is based on an action principle and postulates the existence of a vector field, while elevating the three constants of the theory to scalar fields. In the weak-field approximation, STVG produces a Yukawa-like modification of the gravitational force due to a point source. Intuitively, this result can be described as follows: far from a source, gravity is stronger than the Newtonian prediction, but at shorter distances it is counteracted by a repulsive fifth force due to the vector field. STVG has been used successfully to explain galaxy rotation curves, the mass profiles of galaxy clusters, gravitational lensing in the Bullet Cluster, and cosmological observations without the need for dark matter. On a smaller scale, in the Solar System, STVG predicts no observable deviation from general relativity. The theory may also offer an explanation for the origin of inertia.

STVG is formulated using the action principle. In the following discussion, a metric signature of [+, −, −, −] will be used, and the speed of light is set to 1. We begin with the Einstein–Hilbert Lagrangian:

$$\mathcal{L}_G = -\frac{1}{16\pi G}\left(R + 2\Lambda\right)\sqrt{-g},$$

where R is the trace of the Ricci tensor, G is the gravitational constant, g is the determinant of the metric tensor g_{μν}, while Λ is the cosmological constant. The STVG vector field φ_μ is governed by a Maxwell–Proca Lagrangian:

$$\mathcal{L}_\phi = -\frac{1}{4\pi}\,\omega\left[\frac{1}{4}B^{\mu\nu}B_{\mu\nu} - \frac{1}{2}\mu^2\phi_\mu\phi^\mu + V_\phi(\phi)\right]\sqrt{-g},$$

where B_{μν} = ∂_μφ_ν − ∂_νφ_μ, μ is the mass of the vector field, ω characterizes the strength of the coupling between the fifth force and matter, and V_φ(φ) is a self-interaction potential. The three constants of the theory, G, μ, and ω, are promoted to scalar fields by introducing associated kinetic and potential terms in the Lagrangian density ℒ_S, in which covariant differentiation is taken with respect to the metric g_{μν}, while V_G, V_μ, and V_ω are the self-interaction potentials associated with the scalar fields. The STVG action integral takes the form

$$S = \int\left(\mathcal{L}_G + \mathcal{L}_\phi + \mathcal{L}_S + \mathcal{L}_M\right)d^4x,$$

where ℒ_M is the ordinary matter Lagrangian density.

Spherically symmetric, static vacuum solution

The equation of motion of a test particle acquires a Lorentz-type fifth-force term, in which m is the test particle mass, α is a factor representing the nonlinearity of the theory, q_5 is the test particle's fifth-force charge, and u^μ = dx^μ/ds is its four-velocity. Assuming that the fifth-force charge is proportional to mass, the proportionality constant is determined, and the following equation of motion is obtained in the spherically symmetric, static gravitational field of a point mass of mass M:

$$\ddot{r} = -\frac{G_N M}{r^2}\left[1 + \alpha - \alpha e^{-\mu r}\left(1 + \mu r\right)\right],$$

where G_N is Newton's constant of gravitation. One constant is determined from cosmological observations, while for the constants α and μ, fits to galaxy rotation curves yield values expressed in terms of the source mass M and the mass of the Sun, M_⊙. These results form the basis of a series of calculations that are used to confront the theory with observation.

STVG/MOG has been applied successfully to a range of astronomical, astrophysical, and cosmological phenomena. On the scale of the Solar System, the theory predicts no deviation from the results of Newton and Einstein. This is also true for star clusters containing no more than a few million solar masses. STVG is in good agreement with the mass profiles of galaxy clusters. STVG can also account for key cosmological observations, including the acoustic power spectrum of the cosmic microwave background radiation and the matter power spectrum of galaxies.

- McKee, M. (25 January 2006). "Gravity theory dispenses with dark matter". New Scientist. Retrieved 2008-07-26.
- Moffat, J. W. (2006). "Scalar-Tensor-Vector Gravity Theory". Journal of Cosmology and Astroparticle Physics. 3: 4. Bibcode:2006JCAP...03..004M. doi:10.1088/1475-7516/2006/03/004.
- Brownstein, J. R.; Moffat, J. W. (2006). "Galaxy Rotation Curves Without Non-Baryonic Dark Matter". Astrophysical Journal. 636: 721–741. Bibcode:2006ApJ...636..721B. doi:10.1086/498208.
- Brownstein, J. R.; Moffat, J. W. (2006). "Galaxy Cluster Masses Without Non-Baryonic Dark Matter". Monthly Notices of the Royal Astronomical Society. 367: 527–540. Bibcode:2006MNRAS.367..527B. doi:10.1111/j.1365-2966.2006.09996.x.
- Brownstein, J. R.; Moffat, J. W. (2007). "The Bullet Cluster 1E0657-558 evidence shows Modified Gravity in the absence of Dark Matter". Monthly Notices of the Royal Astronomical Society. 382: 29–47. Bibcode:2007MNRAS.382...29B. doi:10.1111/j.1365-2966.2007.12275.x.
- Moffat, J. W.; Toth, V. T. (2007). "Modified Gravity: Cosmology without dark matter or Einstein's cosmological constant". arXiv preprint (astro-ph).
- Moffat, J. W.; Toth, V. T. (2008). "Testing modified gravity with globular cluster velocity dispersions". Astrophysical Journal. 680: 1158–1161. Bibcode:2008ApJ...680.1158M. doi:10.1086/587926.
- Moffat, J. W.; Toth, V. T. (2009). "Modified gravity and the origin of inertia". Monthly Notices of the Royal Astronomical Society Letters. 395: L25. Bibcode:2009MNRAS.395L..25M. doi:10.1111/j.1745-3933.2009.00633.x.
- Moffat, J. W.; Toth, V. T. (2009). "Fundamental parameter-free solutions in Modified Gravity". Classical and Quantum Gravity. 26: 085002. Bibcode:2009CQGra..26h5002M. doi:10.1088/0264-9381/26/8/085002.
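As a concrete illustration of the Yukawa-like weak-field law quoted above, the sketch below tabulates circular-orbit speeds around a point mass using a(r) = (G_N M/r²)[1 + α − α e^(−μr)(1 + μr)]. The enclosed mass and the values of α and μ are placeholders of the order quoted in the MOG literature, not fitted parameters:

```cpp
#include <cmath>
#include <cstdio>

// Illustrative MOG/STVG rotation curve for a point source, from
// v^2 / r = a(r) with the Yukawa-modified acceleration above.
// All numerical inputs here are assumptions for demonstration only.
int main() {
    const double G_N   = 4.301e-6; // kpc (km/s)^2 / M_sun
    const double M     = 5.0e10;   // enclosed mass in solar masses (placeholder)
    const double alpha = 8.89;     // dimensionless (placeholder)
    const double mu    = 0.042;    // 1/kpc (placeholder)

    for (double r = 2.0; r <= 30.0; r += 4.0) {        // radius in kpc
        double yukawa = 1.0 + alpha - alpha * std::exp(-mu * r) * (1.0 + mu * r);
        double v_mog  = std::sqrt(G_N * M / r * yukawa); // km/s
        double v_newt = std::sqrt(G_N * M / r);          // Newtonian comparison
        std::printf("r = %5.1f kpc  v_MOG = %6.1f km/s  v_Newton = %6.1f km/s\n",
                    r, v_mog, v_newt);
    }
    return 0;
}
```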
<urn:uuid:ff951aeb-8d7a-4283-9349-cbc4dcd28a16>
3.3125
1,372
Knowledge Article
Science & Tech.
65.518896
95,528,966
Web App Calculates Carbon Footprint Of Every Purchase
Do you know what the climate impacts of your everyday actions are? Calculating the carbon footprint of each activity, adding them all together, and keeping tabs on them is not only time-consuming, it also fails to show how your behaviors compare with others'; there has been no really comprehensive method of tracking personal climate impacts over time. However, thanks to a partnership between Oroeco and Mint.com, there's now a personal virtual climate dashboard that can put a carbon footprint on almost everything you do, from travel to diet to energy to housing and entertainment. The Oroeco web app, currently in beta, uses the spending information from your Mint account to assign a carbon value to your behavior, and can track it over time, compare it to other people in your social circle, and offer suggestions for reducing your carbon footprint.
"Oroeco uses environmental life-cycle assessment (LCA) data to calculate the climate impacts of products, services, and investments, with values converted to CO2e (see below). We use a combination of product-level "cradle to grave" LCA and economic input-output LCA modeling. Our LCA data is peer reviewed by leading academic and government institutions and comes primarily from UC Berkeley's CoolClimate Network and the US Environmental Protection Agency." – Oroeco
Oroeco also offers badges and real-life rewards (such as a Nest Thermostat) for making lifestyle changes that lower your carbon footprint, and users can opt to purchase personal carbon offsets through the app, which can be used to mitigate climate impacts.
"The basic idea is that every dollar we spend impacts our climate. The problem is that we can't see these impacts when we're deciding what to buy, particularly now that global supply chains have shifted problems half a world away. We are building a tool that automatically connects your purchase data (via Mint.com) to scientific climate impact data so you can track the climate footprint of your groceries, gas, airfare, home energy, clothing, etc. We're also integrating with Facebook, so you'll be able to see how you compare with your friends, as well as earn points and prizes for improving." – Ian Monroe, CEO of Oroeco
The service is free to use, and a forthcoming mobile app will make it easy to stay on top of your carbon footprint, right from your smartphone. Users will need a free Mint.com account (which can be very helpful on its own) to use the Oroeco service. Sign up to start tracking your carbon footprint here.
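The dollar-to-CO2e mapping the article describes can be sketched in a few lines. The emission factors below are invented placeholders, not Oroeco's, CoolClimate's, or the EPA's actual data; the point is only the shape of the computation (per-category factor times dollars spent, summed over transactions):

```cpp
#include <map>
#include <string>
#include <vector>
#include <cstdio>

// Toy purchase-based carbon accounting in the style of economic
// input-output LCA: each spending category carries an assumed
// kg CO2e per dollar factor.
struct Transaction { std::string category; double dollars; };

int main() {
    std::map<std::string, double> kgCO2ePerDollar = {   // placeholder factors
        {"gasoline", 1.8}, {"groceries", 0.5}, {"airfare", 1.2}, {"clothing", 0.4}};
    std::vector<Transaction> month = {                  // invented sample data
        {"gasoline", 120.0}, {"groceries", 350.0}, {"airfare", 400.0}};

    double total = 0.0;
    for (const auto& t : month) {
        double kg = kgCO2ePerDollar[t.category] * t.dollars;
        std::printf("%-10s $%7.2f -> %7.1f kg CO2e\n",
                    t.category.c_str(), t.dollars, kg);
        total += kg;
    }
    std::printf("monthly footprint: %.1f kg CO2e\n", total);
    return 0;
}
```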
<urn:uuid:07092b63-0ee0-42a4-8ea6-6f3d36e534e8>
2.5625
589
News Article
Science & Tech.
37.219697
95,528,984
Exposes methods that can be called to get information on or close a file that is in use by another application. When an application attempts to access a file and finds that file already in use, it can use the methods of this interface to gather information to present to the user in a dialog box. The IFileIsInUse interface has these methods:
- IFileIsInUse::CloseFile: Closes the file currently in use.
- IFileIsInUse::GetAppName: Retrieves the name of the application that is using the file.
- IFileIsInUse::GetCapabilities: Determines whether the file can be closed and whether the UI is capable of switching to the window of the application that is using the file.
- IFileIsInUse::GetSwitchToHWND: Retrieves the handle of the top-level window of the application that is using the file.
- IFileIsInUse::GetUsage: Gets a value that indicates how the file in use is being used.
In versions of Windows before Windows Vista, when a user attempted to access a file that was open in another application, the user would simply receive a dialog box with a message stating that the file was already open. The message instructed that the user close the other application, but did not identify it. Other than that suggestion, the dialog box provided no user action to address the situation. This interface provides methods that can lead to a more informative dialog box from which the user can take direct action.
Perform these steps to add a file to the ROT:
- Call the GetRunningObjectTable function to retrieve an instance of IRunningObjectTable.
- Create an IFileIsInUse object for the file that is currently in use.
- Create an IMoniker object for the file that is currently in use.
- Insert the IFileIsInUse and IMoniker objects into the ROT by calling IRunningObjectTable::Register.
In the call to Register, specify the ROTFLAGS_ALLOWANYCLIENT flag. This allows the ROT entry to work across security boundaries. Use of this flag requires the calling application to have an explicit Application User Model ID (AppUserModelID) (System.AppUserModel.ID). An explicit AppUserModelID allows the Component Object Model (COM) to inspect the application's security settings. An attempt to call Register with ROTFLAGS_ALLOWANYCLIENT and no explicit AppUserModelID will fail. You can call Register without the ROTFLAGS_ALLOWANYCLIENT flag and the application will work correctly, but only within its own security level. The value retrieved in the Register method's [out] parameter is used to identify the entry in later calls to retrieve or remove it from the ROT. See the File Is in Use sample, which demonstrates how to implement IFileIsInUse and register a file with the ROT. It then shows how to customize the File In Use dialog to display additional information and options for files currently opened in an application.
- Windows version: Windows Vista [desktop apps only], Windows Server 2008 [desktop apps only]
- Header: shobjidl_core.h (include Shobjidl.h)
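A condensed sketch of the four registration steps above, assuming the application already implements IFileIsInUse (the pFileIsInUse parameter stands in for the object created in step 2); error handling is trimmed for brevity:

```cpp
#include <windows.h>
#include <objbase.h>
#include <shobjidl.h>

// Registers a file and its IFileIsInUse object in the Running Object Table.
// Returns the cookie to pass to IRunningObjectTable::Revoke when done.
DWORD RegisterFileInUse(IFileIsInUse *pFileIsInUse, PCWSTR pszFilePath)
{
    DWORD dwCookie = 0;
    IRunningObjectTable *prot = nullptr;
    if (SUCCEEDED(GetRunningObjectTable(0, &prot)))            // step 1
    {
        IMoniker *pmk = nullptr;
        if (SUCCEEDED(CreateFileMoniker(pszFilePath, &pmk)))   // step 3
        {
            // Step 4: ROTFLAGS_ALLOWANYCLIENT requires an explicit
            // AppUserModelID, as noted in the text above.
            prot->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE | ROTFLAGS_ALLOWANYCLIENT,
                           pFileIsInUse, pmk, &dwCookie);
            pmk->Release();
        }
        prot->Release();
    }
    return dwCookie;
}
```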
<urn:uuid:0d807e5e-a9f4-49ac-8a47-11b376ecab83>
3.125
687
Documentation
Software Dev.
27.943462
95,528,998
Propagation of Wave Packets and Concept of Group Velocity
We will devote this chapter to the solution of the time-dependent Schrödinger equation for a free particle. In quantum mechanics, a free particle is described by a wave packet (which is nothing but a superposition of plane waves), and the solution of the time-dependent Schrödinger equation will allow us to study the time evolution of the packet and how it disperses in free space. We will also show how the uncertainty principle is contained in the solution of the Schrödinger equation.
Keywords: Wave Packet, Group Velocity, Uncertainty Principle, Free Particle, Schrödinger Equation
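A minimal numerical illustration of the group-velocity concept from the abstract: superposing two plane waves with nearby wavenumbers yields an envelope moving at v_g = Δω/Δk, distinct from the carrier's phase velocity. The free-particle dispersion ω = ħk²/2m is used with ħ = m = 1 (illustrative units):

```cpp
#include <cstdio>

// cos(k1 x - w1 t) + cos(k2 x - w2 t)
//   = 2 cos((dk/2) x - (dw/2) t) * cos(kbar x - wbar t),
// so the envelope travels at v_g = dw/dk and the carrier at v_p = wbar/kbar.
int main() {
    const double k1 = 10.0, k2 = 11.0;
    const double w1 = 0.5 * k1 * k1;   // free-particle dispersion w = k^2/2
    const double w2 = 0.5 * k2 * k2;
    const double kbar = 0.5 * (k1 + k2);
    const double vg = (w2 - w1) / (k2 - k1);         // finite-difference dw/dk
    const double vp = 0.5 * (w1 + w2) / kbar;        // carrier phase velocity
    std::printf("group velocity v_g = %.3f (analytic dw/dk = kbar = %.3f)\n", vg, kbar);
    std::printf("phase velocity v_p = %.3f (about kbar/2 for this dispersion)\n", vp);
    return 0;
}
```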
<urn:uuid:586f2435-0a81-47f0-93c9-2b544416692d>
3.015625
144
Truncated
Science & Tech.
29.148636
95,529,001
Washington: Houseflies have more than 152,000 cousins that have been described and named, and at least that many more that haven't yet been discovered and described, new research has revealed. An Iowa State University researcher is one of a team of scientists who have recently researched the fly family tree - one of the most complicated in the animal world. "It really isn't a tree, it's sort of a bush," said Gregory Courtney, professor of entomology, explaining the complex relationships between fly relatives. "Because of this, and because the history of flies extends more than 260 million years, it's difficult figuring out the relationships between this branch and that branch," he said. Based on the research of Courtney and his colleagues, at least three episodes of rapid radiation have occurred in the history of flies. "One of the nice results of this research was confirmation that a number of episodic radiations may have occurred. That explains some of the difficulty we've had in resolving relationships of different types of flies," Courtney added. The study was recently published in the journal Proceedings of the National Academy of Sciences.
<urn:uuid:cec4db92-85a3-484e-aa77-76d0393d8087>
3.1875
239
News Article
Science & Tech.
37.95
95,529,024
The Strategic Implications of Global Warming
At least to the frontline climate scientists, it is becoming increasingly evident that the process of global warming presents a threat to human societies of essentially unprecedented proportions. They know beyond any reasonable doubt that the thermal impulse now being imparted to the earth's ecological system by aggregate human activity is occurring at a rate greater than any that has been documented in the entire 65 million year paleoclimate record. The current rate of CO2 accumulation in the atmosphere is ten times higher than at any time over the past 400,000 years for which annual estimates have been made based upon ice core data. For earlier periods estimates are made for longer time periods, but the natural rate of CO2 accumulation that 50 million years ago drove atmospheric concentrations and deep ocean temperatures to the highest estimated levels on record was a factor of 20,000 less than the current rate. That distant process occurred over millions of years. At the higher rates currently prevailing, the inexorable process of reestablishing energy equilibrium will occur over a time span that will certainly be much shorter and will certainly affect the operating conditions of human societies, but the exact character, magnitude, timing, or location of the consequences cannot yet be determined.
<urn:uuid:4ac3e6c2-8bd7-46f8-a1a8-ba1bf29a868c>
2.921875
280
Academic Writing
Science & Tech.
12.339242
95,529,025
University of Leicester researchers suggest a turning point for the planet and its resources
Human beings are pushing the planet in an entirely new direction, with revolutionary implications for its life, a new study by researchers at the University of Leicester has suggested. The research team led by Professor Mark Williams from the University of Leicester's Department of Geology has published their findings in a new paper entitled 'The Anthropocene Biosphere' in The Anthropocene Review. Professor Jan Zalasiewicz from the University of Leicester's Department of Geology, who was involved in the study, explained the research: "We are used to seeing headlines daily about environmental crises: global warming, ocean acidification, pollution of all kinds, looming extinctions. These changes are advancing so rapidly, that the concept that we are living in a new geological period of time, the Anthropocene Epoch - proposed by the Nobel Prize-winning atmospheric chemist Paul Crutzen - is now in wide currency, with new and distinctive rock strata being formed that will persist far into the future. "But what is really new about this chapter in Earth history, the one we're living through? Episodes of global warming, ocean acidification and mass extinction have all happened before, well before humans arrived on the planet. We wanted to see if there was something different about what is happening now." The team examined what makes the Anthropocene special and different from previous crises in Earth's history, identifying four key changes. In total, the team suggests that these changes represent a planetary transformation as fundamental as the one that saw the evolution of the photosynthetic microbes which oxygenated the planet 2.4 billion years ago, or that saw the transition from a microbial Earth to one dominated by multicellular organisms half a billion years ago. Professor Williams added: "We think of major changes to the biosphere as the big extinction events, like that which finished off the dinosaurs at the end of the Cretaceous Period. But the changes happening to the biosphere today may be much more significant, and uniquely are driven by the actions of one species, humans." The team includes Professor Mark Williams and Jan Zalasiewicz (University of Leicester), Peter Haff (Duke University, USA), Christian Schwägerl (Aßmannshauser Strasse 17, Berlin, Germany), Anthony D Barnosky (University of California, USA) and Erle C Ellis (University of Maryland, Baltimore County, USA). The paper 'The Anthropocene Biosphere' is published in The Anthropocene Review and is available here: http://anr. Images of the growing global technosphere and Professors Mark Williams and Jan Zalasiewicz available at: https:/ Mark Williams | EurekAlert!
<urn:uuid:b9355b5f-f3b9-4a7d-a73e-a59bc2d44338>
3
1,185
Content Listing
Science & Tech.
32.331828
95,529,028
Composition and Inheritance Multiple Choice Questions 1 PDF Download
Practice composition and inheritance multiple choice questions (MCQs), C++ test 1 for online course prep exams. Learn virtual functions MCQ questions and answers on virtual functions, composition and inheritance. The free composition and inheritance quiz includes a multiple choice question on how polymorphism is achieved, with the options arrays, operators, constructors and virtual function, to test e-learning skills for viva exam prep and job interview questions. Study virtual functions quiz questions with online MCQs for competitive exam preparation.
MCQ on Composition and Inheritance Quiz PDF Download Test 1
MCQ. Polymorphism is achieved by
- virtual function
MCQ. In object-oriented programming there are two distinct views, one being the consumer view and the second the manufacturer view; the consumer actions are called
- all of them
MCQ. How many ways are there to use existing classes to define a new class?
MCQ. Composition is also called
- both A and C
MCQ. When a data member of the new class is an object of another class, it is called as
- new class is a composite of other objects
- new class is inherited
- new class is aggregate of another
- none of them
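A short C++ sketch of the two facts the quiz tests: polymorphism achieved via a virtual function, and composition, where a data member of the new class is an object of another class. The class names are invented for illustration:

```cpp
#include <cstdio>

struct Engine {                       // component class
    void start() { std::printf("engine started\n"); }
};

struct Vehicle {
    virtual void describe() { std::printf("generic vehicle\n"); }
    virtual ~Vehicle() = default;
};

struct Car : Vehicle {                // inheritance: Car is-a Vehicle
    Engine engine;                    // composition: Car has-an Engine
    void describe() override { std::printf("car\n"); }
};

int main() {
    Car c;
    c.engine.start();                 // using the composed member
    Vehicle* v = &c;
    v->describe();                    // prints "car": virtual-function dispatch
    return 0;
}
```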
<urn:uuid:575071af-0a95-4082-8c02-0376f04369cd>
3.875
274
Content Listing
Software Dev.
40.328848
95,529,034
Aether theories
Aether theories (also known as ether theories) in physics propose the existence of a medium, the aether (also spelled ether, from the Greek word αἰθήρ, meaning "upper air" or "pure, fresh air"), a space-filling substance or field, thought to be necessary as a transmission medium for the propagation of electromagnetic or gravitational forces. The assorted aether theories embody the various conceptions of this "medium" and "substance". This early modern aether has little in common with the aether of classical elements from which the name was borrowed. Since the development of special relativity, theories using a substantial aether fell out of use in modern physics, and were replaced by more abstract models.
Isaac Newton suggests the existence of an aether in the Third Book of Opticks (1718): "Doth not this aethereal medium in passing out of water, glass, crystal, and other compact and dense bodies in empty spaces, grow denser and denser by degrees, and by that means refract the rays of light not in a point, but by bending them gradually in curve lines? ...Is not this medium much rarer within the dense bodies of the Sun, stars, planets and comets, than in the empty celestial space between them? And in passing from them to great distances, doth it not grow denser and denser perpetually, and thereby cause the gravity of those great bodies towards one another, and of their parts towards the bodies; every body endeavouring to go from the denser parts of the medium towards the rarer?"
In the 19th century, luminiferous aether (or ether), meaning light-bearing aether, was a theorized medium for the propagation of light (electromagnetic radiation). However, a series of increasingly complex experiments, such as the Michelson-Morley experiment, had been carried out in the late 1800s in an attempt to detect the motion of Earth through the aether, and had failed to do so. A range of proposed aether-dragging theories could explain the null result, but these were more complex and tended to use arbitrary-looking coefficients and physical assumptions. Joseph Larmor discussed the aether in terms of a moving magnetic field caused by the acceleration of electrons. James Clerk Maxwell said of the aether, "In several parts of this treatise an attempt has been made to explain electromagnetic phenomena by means of mechanical action transmitted from one body to another by means of a medium occupying the space between them. The undulatory theory of light also assumes the existence of a medium. We have now to show that the properties of the electromagnetic medium are identical with those of the luminiferous medium." Hendrik Lorentz and George Francis FitzGerald offered within the framework of Lorentz ether theory a more elegant solution to how the motion of an absolute aether could be undetectable (length contraction), but if their equations were correct, Albert Einstein's 1905 special theory of relativity could generate the same mathematics without referring to an aether at all. This led most physicists to conclude that this early modern notion of a luminiferous aether was not a useful concept. Einstein, however, stated that this consideration was too radical and too anticipatory and that his theory of relativity still needed the presence of a medium with certain properties.
Mechanical gravitational aether
From the 16th until the late 19th century, gravitational phenomena had also been modelled utilizing an aether.
The most well-known formulation is Le Sage's theory of gravitation, although other models were proposed by Isaac Newton, Bernhard Riemann, and Lord Kelvin. None of those concepts is considered viable by the scientific community today.
Non-standard interpretations in modern physics
As Einstein put it in 1920: "We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an aether. According to the general theory of relativity space without aether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it."
Quantum mechanics can be used to describe spacetime as being non-empty at extremely small scales, fluctuating and generating particle pairs that appear and disappear incredibly quickly. It has been suggested by some such as Paul Dirac that this quantum vacuum may be the equivalent in modern physics of a particulate aether. However, Dirac's aether hypothesis was motivated by his dissatisfaction with quantum electrodynamics, and it never gained support from the mainstream scientific community. Robert B. Laughlin, Nobel Laureate in Physics, endowed chair in physics, Stanford University, had this to say about ether in contemporary theoretical physics:
It is ironic that Einstein's most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed [..] The word 'ether' has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum. . . . Relativity actually says nothing about the existence or nonexistence of matter pervading the universe, only that any such matter must have relativistic symmetry. [..] It turns out that such matter exists. About the time relativity was becoming accepted, studies of radioactivity began showing that the empty vacuum of space had spectroscopic structure similar to that of ordinary quantum solids and fluids. Subsequent studies with large particle accelerators have now led us to understand that space is more like a piece of window glass than ideal Newtonian emptiness. It is filled with 'stuff' that is normally transparent but can be made visible by hitting it sufficiently hard to knock out a part. The modern concept of the vacuum of space, confirmed every day by experiment, is a relativistic ether. But we do not call it this because it is taboo.
Conjectures and proposals
According to the philosophical point of view of Einstein, Dirac, Bell, Polyakov, 't Hooft, Laughlin, de Broglie, Maxwell, Newton and other theorists, there might be a medium with physical properties filling 'empty' space, an aether, enabling the observed physical processes.
Albert Einstein in 1894 or 1895: "The velocity of a wave is proportional to the square root of the elastic forces which cause [its] propagation, and inversely proportional to the mass of the aether moved by these forces."
Albert Einstein in 1920 reaffirmed the view quoted above: space without aether is "unthinkable", yet this aether "may not be thought of as endowed with the quality characteristic of ponderable media".
Paul Dirac wrote in 1951: "Physical knowledge has advanced much since 1905, notably by the arrival of quantum mechanics, and the situation [about the scientific plausibility of Aether] has again changed. If one examines the question in the light of present-day knowledge, one finds that the Aether is no longer ruled out by relativity, and good reasons can now be advanced for postulating an Aether ... We have now the velocity at all points of space-time, playing a fundamental part in electrodynamics. It is natural to regard it as the velocity of some real physical thing. Thus with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an Aether".
John Bell in 1986, interviewed by Paul Davies in "The Ghost in the Atom", suggested that an aether theory might help resolve the EPR paradox by allowing a reference frame in which signals go faster than light. He suggests Lorentz contraction is perfectly coherent, not inconsistent with relativity, and could produce an aether theory perfectly consistent with the Michelson-Morley experiment. Bell suggests the aether was wrongly rejected on purely philosophical grounds: "what is unobservable does not exist" [p. 49]. Einstein found the non-aether theory simpler and more elegant, but Bell suggests that doesn't rule it out. Besides the arguments based on his interpretation of quantum mechanics, Bell also suggests resurrecting the aether because it is a useful pedagogical device. That is, many problems are solved more easily by imagining the existence of an aether.
Einstein remarked "God does not play dice with the Universe". And those agreeing with him are looking for a classical, deterministic aether theory that would imply quantum-mechanical predictions as a statistical approximation, a hidden variable theory. In particular, Gerard 't Hooft conjectured that: "We should not forget that quantum mechanics does not really describe what kind of dynamical phenomena are actually going on, but rather gives us probabilistic results. To me, it seems extremely plausible that any reasonable theory for the dynamics at the Planck scale would lead to processes that are so complicated to describe, that one should expect apparently stochastic fluctuations in any approximation theory describing the effects of all of this at much larger scales. It seems quite reasonable first to try a classical, deterministic theory for the Planck domain. One might speculate then that what we call quantum mechanics today, may be nothing else than an ingenious technique to handle this dynamics statistically." In their paper Blasone, Jizba and Kleinert "have attempted to substantiate the recent proposal of G.
’t Hooft in which quantum theory is viewed as not a complete field theory, but is in fact an emergent phenomenon arising from a deeper level of dynamics. The underlying dynamics are taken to be classical mechanics with singular Lagrangians supplied with an appropriate information loss condition. With plausible assumptions about the actual nature of the constraint dynamics, quantum theory is shown to emerge when the classical Dirac-Bergmann algorithm for constrained dynamics is applied to the classical path integral [...]." Louis de Broglie, "If a hidden sub-quantum medium is assumed, knowledge of its nature would seem desirable. It certainly is of quite complex character. It could not serve as a universal reference medium, as this would be contrary to relativity theory." In 1982, Ioan-Iovitz Popescu, a Romanian physicist, wrote that the aether is "a form of existence of the matter, but it differs qualitatively from the common (atomic and molecular) substance or radiation (photons)". The fluid aether is "governed by the principle of inertia and its presence produces a modification of the space-time geometry". Built upon Le Sage's ultra-mundane corpuscles, Popescu's theory posits a finite Universe "filled with some particles of exceedingly small mass, traveling chaotically at speed of light" and material bodies "made up of such particles called etherons". Sid Deutsch, a professor of electrical engineering and bioengineering, conjectures that a "spherical, spinning" aether particle must exist in order "to carry electromagnetic waves" and derives its diameter and mass using the density of dark matter. A degenerate Fermi fluid model, "composed primarily of electrons and positrons" that has the consequence of a speed of light decreasing "with time on the scale of the age of the universe" was proposed by Allen Rothwarf. In a cosmological extension the model was "extended to predict a decelerating expansion of the universe".
- "Aether Archived December 3, 2005, at the Wayback Machine.", American Heritage Dictionary of the English Language.
- Born, Max (1964), Einstein's Theory of Relativity, Dover Publications, ISBN 0-486-60769-0
- Isaac Newton The Third Book of Opticks (1718) http://www.newtonproject.sussex.ac.uk/view/texts/normalized/NATP00051
- James Clerk Maxwell: "A Treatise on Electricity and Magnetism/Part IV/Chapter XX"
- Kostro, L. (1992), "An outline of the history of Einstein's relativistic ether concept", in Jean Eisenstaedt; Anne J. Kox, Studies in the history of general relativity, 3, Boston-Basel-Berlin: Birkhäuser, pp. 260–280, ISBN 0-8176-3479-7
- Einstein, Albert: "Ether and the Theory of Relativity" (1920), republished in Sidelights on Relativity (Methuen, London, 1922)
- Dirac, Paul: "Is there an Aether?", Nature 168 (1951), p. 906.
- Kragh, Helge (2005). Dirac. A Scientific Biography. Cambridge: Cambridge University Press. pp. 200–203. ISBN 0-521-01756-4.
- Laughlin, Robert B. (2005). A Different Universe: Reinventing Physics from the Bottom Down. NY, NY: Basic Books. pp. 120–121. ISBN 978-0-465-03828-2.
- Annales de la Fondation Louis de Broglie, Volume 12, no.4, 1987
- Foundations of Physics. 13. Springer. 1983. pp. 253–286. Bibcode:1983FoPh...13..253P. doi:10.1007/BF01889484.
It is shown that one can deduce the de Broglie waves as real collective Markov processes on the top of Dirac's aether
- Albert Einstein's 'First' Paper (1894 or 1895), http://www.straco.ch/papers/Einstein%20First%20Paper.pdf
- R. Brunetti and A. Zeilinger (Eds.), Quantum (Un)speakables, Springer, Berlin (2002), Ch. 22
- M. Blasone, P. Jizba and H. Kleinert, "Path Integral Approach to 't Hooft's Derivation of Quantum from Classical Physics", Phys.Rev. A71 (2005) 052507, arXiv:quant-ph/0409021
- Duursma, Egbert (Ed.). Etherons as predicted by Ioan-Iovitz Popescu in 1982. CreateSpace Independent Publishing Platform. ISBN 978-1511906371.
- de Climont, Jean. The Worldwide List of Alternative Theories and Critics. Formerly: The Worldwide List of Dissidents Scientists. Editions d'Assailly. ISBN 978-2902425174.
- Deutsch, Sid (2006). "Chapter 9: An Aether Particle (AP)". Einstein's Greatest Mistake: Abandonment of the Aether. iUniverse. ISBN 978-0-595-37481-6.
- Rothwarf, A., "An Aether Model of the Universe", Physics Essays, vol. 11, issue 3, p. 444 (1998), Semantic Scholar.
- Rothwarf, F., Roy, S., "Quantum Vacuum and a Matter - Antimatter Cosmology", Phys.Rev. A71 (2005) 052507, arXiv:astro-ph/0703280
- Whittaker, Edmund Taylor (1910), A History of the theories of aether and electricity (1st ed.), Dublin: Longman, Green and Co.
- Schaffner, Kenneth F. (1972), Nineteenth-century aether theories, Oxford: Pergamon Press, ISBN 0-08-015674-6
- Darrigol, Olivier (2000), Electrodynamics from Ampére to Einstein, Oxford: Clarendon Press, ISBN 0-19-850594-9
- Maxwell, James Clerk (1878), "Ether", Encyclopædia Britannica Ninth Edition, 8: 568–572
- Harman, P.H. (1982), Energy, Force and Matter: The Conceptual Development of Nineteenth Century Physics, Cambridge: Cambridge University Press, ISBN 0-521-28812-6
- Decaen, Christopher A. (2004), "Aristotle's Aether and Contemporary Science", The Thomist, 68: 375–429, retrieved 2011-03-05.
- Larmor, Joseph (1911). "Aether". Encyclopædia Britannica. 1 (11th ed.). pp. 292–297.
- Oliver Lodge, "Ether", Encyclopædia Britannica, Thirteenth Edition (1926).
- "A Ridiculously Brief History of Electricity and Magnetism; Mostly from E. T. Whittaker's A History of the Theories of Aether and Electricity". (PDF format)
- Epple, M. (1998) "Topology, Matter, and Space, I: Topological Notions in 19th-Century Natural Philosophy", Archive for History of Exact Sciences 52: 297–392.
<urn:uuid:cd31bf1e-4350-4252-96e3-ed4342941236>
3.578125
3,735
Knowledge Article
Science & Tech.
44.767483
95,529,050
In recent years, researchers at The University of Texas at Dallas and colleagues at the University of Wollongong in Australia have put a high-tech twist on the ancient art of fiber spinning, using modern materials to create ultra-strong, powerful, shape-shifting yarns. In a perspective article published Sept. 26 online in the Proceedings of the National Academy of Sciences, a team of scientists at UT Dallas’ Alan G. MacDiarmid NanoTech Institute describes the path to developing a new class of artificial muscles made from highly twisted fibers of various materials, ranging from exotic carbon nanotubes to ordinary nylon thread and polymer fishing line. Because the artificial muscles can be made in different sizes and configurations, potential applications range from robotics and prosthetics to consumer products such as smart textiles that change porosity and shape in response to temperature. “We call these actuating fibers ‘artificial muscles’ because they mimic the fiber-like form-factor of natural muscles,” said Dr. Carter Haines BS’11 PhD’15, associate research professor in the NanoTech Institute and co-lead author of the PNAS article, with research associate Dr. Na Li. “While the name evokes the idea of humanoid robots, we are very excited about their potential use for other practical applications, such as in next-generation intelligent textiles.” Science Based on Ancient Art Spinning animal fur and plant fibers to make thread and yarn goes back thousands of years. Aligning the fibers and then twisting them into yarn gives the yarn strength. By exploiting this concept, and adding 21st-century science, the UT Dallas researchers have produced actuating muscle yarns that, like their wooly counterparts, can be woven, sewn and knitted into textiles. For example, carbon nanotubes are essentially tendrils of tiny, hollow tubes that are super-strong and electrically conductive. In 2004, led by Dr. Ray Baughman, director of the NanoTech Institute and the Robert A. Welch Distinguished Chair in Chemistry at UT Dallas, the team developed a method to draw “forests” of nanotubes out into sheets of aligned fibers — much like carded wool — and then twist the sheets into yarns. Next, the group turned to polymer fibers such as nylon sewing thread and fishing line, which consist of many individual molecules aligned along the fiber’s length. Twisting the thread or fishing line orients these molecules into helices, producing torsional — or rotational — artificial muscles that can spin a heavy rotor more than 100,000 revolutions per minute. When these muscles are so highly twisted that they coil like an over-twisted rubber band, they can produce tensile actuation, where the muscle dramatically contracts along its length when heated, and returns to its initial length when cooled. That research, published in 2014, showed that simple, low-cost muscles made from fishing line can lift 100 times more weight and generate 100 times higher mechanical power than a human skeletal muscle of the same length and weight. “The success of our muscles derives from their special geometry and the fact that we start with materials that are anisotropic — when they are heated, the materials expand in diameter much more than they expand along their length,” said Baughman, senior author of the PNAS perspective. 
This anisotropy is an intrinsic property of high-strength polymer fibers, and is the same principle that drives powerful artificial muscles the researchers discovered in 2012, which they made by adding a thermally responsive “guest” material within a carbon nanotube yarn. “When these fibers are then twisted and coiled, their internal geometry changes so that when they are heated, that diameter expansion results in a change in length,” Baughman said. “The fiber’s diameter only has to expand by about 5 percent to drive giant changes in length.” The Latest Twist In their most recent experiments, described for the first time in the PNAS article, Haines and Li added a new twist to their artificial muscles. “The coiled artificial muscles we initially made from fishing line and nylon sewing thread were limited in the amount they could expand and contract along their length,” Haines said. “Because of their geometry — like a phone cord — they could only contract so far before the coils began to collide with one another.” The solution: Form the coiled actuators into spirals. “The advantage to the spiral shape is that now our muscle can contract into a flat state, expand out in the other direction, and return to its original length, all without getting stuck on itself,” Li said. “Our experiments to date have been proof-of-concept, but have already shown that we can use heating and cooling to drive this back-and-forth motion across a giant range. This type of telescoping actuator can produce over an 8,600 percent change in length, compared to around 70 percent for our previous coils.” Li said one potential application for the spiral-shaped coil might be thermally responsive clothing. Instead of a down-filled jacket, a coat that incorporates many small coils could change the loft and insulating power of the garment in response to temperature. In the laboratory, Haines and Li have produced spools of coiled polymer muscle threads suitable for sewing. “We have shown that these thermally responsive fibers can be used in conventional machines, such as looms, knitting machines and sewing machines,” Li said. “As we move forward with our research, and scale it up, we hope to incorporate our ideas into functional fabrics and textiles for a variety of purposes, from clothing to environmentally responsive architecture to dynamic art sculptures.”
<urn:uuid:63f37a4f-ac00-4fd6-9204-cc52ff677b8a>
3.359375
1,210
News Article
Science & Tech.
33.040735
95,529,053
Renewable energy sources (RES) are the energy resources used to produce electricity or thermal energy. They are derived from nature, and their reserves are continuously or cyclically renewed. The very name "renewable" comes from the fact that these sources are consumed at a rate no greater than that at which they are reproduced in nature. This is in contrast to non-renewable resources, whose remaining reserves are estimated at tens or hundreds of years, while their creation took millions of years. The main renewable resources in nature are: sunlight, waves, tides, rain, wind, biomass and geothermal heat. Renewable sources replace many conventional fuels in four areas: space/water heating, motor fuels, electricity generation and rural energy services. Renewable energy sources can be divided according to their origin into the following categories:
- Solar energy
- Wind energy
- Geothermal energy
- Hydropower (water power)
- Marine energy (ocean energy)
Each of these renewable energy sources has unique characteristics, and they require special technologies and power plants. In addition, the production of energy from waste, so-called waste-to-energy, has become very topical in recent years.
The sun is, directly or indirectly, the source of almost all available energy on Earth. Solar energy comes from nuclear reactions (fusion) at its center: the fusion of hydrogen atoms leads to the formation of helium, with the release of large amounts of energy. The basic principles of direct solar energy exploitation are:
- Solar panels or collectors (convert solar energy into thermal energy)
- The concentration of solar power (directing solar radiation to obtain heat)
- Solar cells (conversion of solar energy directly into electric energy)
Wind energy can be used directly or indirectly, for the production of electricity. It is actually a form of solar energy, created by circulation in the Earth's atmosphere. Wind energy exploitation is the fastest-growing segment of energy production from renewable energy sources.
Geothermal energy is the one type of renewable energy source that does not originate from the sun. The basic medium that transfers heat from the Earth's interior to the surface is water or water vapor. Water from rain penetrates deep through cracks in the soil and arrives at great depths, where it is heated. The heated water circulates back to the surface, where it appears in the form of hot springs or geysers.
Biofuels are fuels that are obtained by processing some form of biomass. There are various types of biofuels: ethanol, biodiesel, bioalcohols (methanol, butanol, etc.), green diesel, biofuel gasoline, vegetable oil, bioethers, biogas, syngas, and so on. The term biomass means biological material from living organisms. Biomass is used to generate heat that can be used to produce electricity. The most important form of biomass is dead wood (and wood shavings), which represents a very large potential source of renewable energy. In addition to wood biomass, there are residues and waste from agriculture, animal wastes/residues and biomass from waste (trash).
Water energy (hydropower) is the most important renewable energy source, and also the only renewable energy that is economically competitive with fossil fuels and nuclear energy. Marine energy (ocean energy) is a form of hydropower. There are three main types of marine energy: wave energy, tidal energy and energy from the temperature difference between warm and cold ocean water.
<urn:uuid:d5cb0056-6fc4-484d-9d06-bf3047e2830c>
4.03125
774
Knowledge Article
Science & Tech.
23.517554
95,529,069
Sheldon Glashow, Abdus Salam, and Steven Weinberg were awarded the 1979 Nobel Prize in Physics for their contributions to the unification of the weak and electromagnetic interaction between elementary particles. The existence of the electroweak interactions was experimentally established in two stages, the first being the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and the second in 1983 by the UA1 and the UA2 collaborations that involved the discovery of the W and Z gauge bosons in proton–antiproton collisions at the converted Super Proton Synchrotron. In 1999, Gerardus 't Hooft and Martinus Veltman were awarded the Nobel prize for showing that the electroweak theory is renormalizable.
Mathematically, the unification is accomplished under an SU(2) × U(1) gauge group. The corresponding gauge bosons are the three W bosons of weak isospin from SU(2) (W1, W2, and W3), and the B boson of weak hypercharge from U(1), respectively, all of which are massless. In the Standard Model, the W± and Z bosons, and the photon, are produced by the spontaneous symmetry breaking of the electroweak symmetry from SU(2) × U(1)Y to U(1)em, caused by the Higgs mechanism (see also Higgs boson). U(1)Y and U(1)em are different copies of U(1); the generator of U(1)em is given by Q = Y/2 + T3, where Y is the generator of U(1)Y (called the weak hypercharge), and T3 is one of the SU(2) generators (a component of weak isospin).
The spontaneous symmetry breaking makes the W3 and B bosons coalesce into two different bosons – the Z boson and the photon (γ):

$$\begin{pmatrix} \gamma \\ Z \end{pmatrix} = \begin{pmatrix} \cos\theta_W & \sin\theta_W \\ -\sin\theta_W & \cos\theta_W \end{pmatrix} \begin{pmatrix} B \\ W_3 \end{pmatrix},$$

where θW is the weak mixing angle. The axes representing the particles have essentially just been rotated, in the (W3, B) plane, by the angle θW. This also introduces a mismatch between the mass of the Z boson and the mass of the W bosons (denoted as MZ and MW, respectively):

$$M_Z = \frac{M_W}{\cos\theta_W}.$$

The W1 and W2 bosons, in turn, combine to give the massive charged bosons:

$$W^{\pm} = \frac{W_1 \mp i W_2}{\sqrt{2}}.$$

The distinction between electromagnetism and the weak force arises because there is a (nontrivial) linear combination of Y and T3 that vanishes for the Higgs boson (it is an eigenstate of both Y and T3, so the coefficients may be taken as −T3 and Y): U(1)em is defined to be the group generated by this linear combination, and is unbroken because it does not interact with the Higgs.
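A quick numerical check of the tree-level mass relation above, using rounded experimental inputs assumed here for illustration (M_Z about 91.19 GeV, sin²θW about 0.231):

```cpp
#include <cmath>
#include <cstdio>

// Tree-level consequence of M_Z = M_W / cos(theta_W):
// M_W = M_Z * sqrt(1 - sin^2(theta_W)). Inputs are rounded values.
int main() {
    const double MZ = 91.19;          // GeV (approximate)
    const double sin2thetaW = 0.231;  // approximate weak mixing angle
    const double MW = MZ * std::sqrt(1.0 - sin2thetaW);
    std::printf("tree-level M_W = %.2f GeV\n", MW);
    // The measured M_W (about 80.4 GeV) differs slightly, since loop
    // corrections modify the naive tree-level relation.
    return 0;
}
```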
<urn:uuid:dfa56c1b-a4cc-4e6d-9a7c-fed9e387b865>
2.9375
585
Knowledge Article
Science & Tech.
29.949722
95,529,082
Stoichiometry is an integral part of chemistry that involves the relationship between products and reactants in a chemical reaction. In other words, stoichiometry means the measurement of elements. To understand stoichiometric calculations, it is necessary to comprehend the relationship between the reactants and products taking place in a chemical reaction. For a reaction to be balanced, both sides of the equation must have the same number of atoms of each element. To adjust the number of each element on both sides of the reaction, we use stoichiometric coefficients: the numbers written before the formulas to balance the reaction. Now, we can discuss the conversion factors used to solve stoichiometric problems. The steps to be followed are:
- First, balance the given equation.
- Convert the given quantity of substance into moles.
- Calculate the number of moles of the required substance using the mole ratios.
Stoichiometric calculations are mostly based on chemical formulas.
- Formula Mass: It is defined as the sum of the atomic weights of each atom present in the molecule of the substance. For example, the formula mass of Na2S is calculated as 2(23) + 1(32) = 78.
- Avogadro number: Avogadro's number is defined as the number of particles present in one mole of a substance; it is the number of atoms present in exactly 12 g of C-12. The Avogadro number is valued at 6.022 × 10^23.
- Molar Mass: It is defined as the total mass of all the atoms that make up one mole of a molecule, expressed in grams per mole.
The mole ratio of reactants and products can be explained with the help of the following reaction:
\(2Na~ + ~2HCl ~\rightarrow~ 2NaCl~ + ~H_2\)
From the above reaction, we can say that 2 moles of Na react with 2 moles of HCl to form 2 moles of NaCl (sodium chloride) and 1 mole of H2. Hence, for a given amount of sodium, say x moles, x moles of NaCl will be formed on reaction with x moles of HCl, and the hydrogen gas produced will be x/2 moles.
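To make the mole-ratio arithmetic concrete, here is a minimal Python sketch (illustrative only; the function and variable names are invented) that applies the coefficients of the reaction above:

# Mole-ratio arithmetic for: 2 Na + 2 HCl -> 2 NaCl + H2
# Coefficients taken from the balanced equation above.
COEFF = {"Na": 2, "HCl": 2, "NaCl": 2, "H2": 1}

def moles_produced(known_species, known_moles, target_species):
    """Scale by the stoichiometric coefficients to convert moles
    of one species into moles of another."""
    return known_moles * COEFF[target_species] / COEFF[known_species]

x = 4.0  # moles of Na, for example
print(moles_produced("Na", x, "NaCl"))  # 4.0 mol NaCl
print(moles_produced("Na", x, "H2"))    # 2.0 mol H2, i.e. x/2 as stated above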
<urn:uuid:d1876dc3-6ebd-4e40-bf74-009417654f98>
4.3125
496
Tutorial
Science & Tech.
50.06965
95,529,097
Recent accomplishments of CDFW's scientific community
A dwindling population of a tiny owl in Southern California has a chance at a comeback, thanks to a collaborative effort by scientists from CDFW, the San Diego Zoo's Institute for Conservation Research (ICR), Caltrans and the U.S. Fish and Wildlife Service. The Western burrowing owl (Athene cunicularia hypugaea) is currently listed as a Species of Special Concern, and nongame scientists have long been concerned about their viability and survival. Breeding populations have especially declined in the central and southern coastal areas, due in large part to a combination of habitat loss and eradication of the ground squirrels that dig out the burrows where the owls make their nests. In San Diego County specifically, the once-widespread population has been reduced to a single breeding node in the Otay Mesa region, just north of the Mexico border. Two groups in particular have been monitoring these owls carefully, in an effort to help. Biologists from the Zoo's ICR have spent seven years assessing owl population status and productivity, including evaluating the feasibility and effectiveness of artificial burrows, refining techniques to help the ground squirrels thrive and disperse into new areas, and developing a system for identifying potential new locations where the owls might thrive. Much of ICR's owl research has been conducted at Brown Field, a small municipal airport near the border within the City of San Diego, and on an adjacent property owned by Caltrans. Meanwhile, about 10 miles from the Brown Field study site, CDFW scientists have spent a decade working to create more suitable burrowing owl habitat at Rancho Jamul Ecological Reserve (RJER). Efforts there included installation of artificial burrows and mowing the tall grass to foster a low-growing grassland suitable for burrowing owls and ground squirrels. Despite their best efforts, CDFW scientists studying the Rancho Jamul site have experienced many years of disappointment -- although wintering owls have shown up every year, none have stayed and attempted to breed on the property. But conditions have been improving over the last four years, thanks to the implementation of a grazing program that reduced dense thatches of old grasses and expanded areas of open ground. As habitat changed at Rancho Jamul, CDFW scientists observed more squirrel burrows, and the conditions seemed just about right for the owls. This spring, an approved development project at Brown Field began to take shape – and it became evident that the timing for an owl translocation project was ripe at last. Thanks to efforts by Caltrans, which incorporated burrowing owl habitat restoration as part of their mitigation effort for a nearby highway, CDFW staff believed the owl population to be strong enough to support a translocation effort. Also, the spring season, just prior to egg-laying, is likely the optimum time to move the animals. After looking at many options, scientists decided to try to move five pairs of breeding owls to RJER in the hopes that they would establish a new population and thrive. The full conservation team – which included CDFW, the U.S. Fish and Wildlife Service, the City of San Diego, Metro Air Park, Schaefer Ecological Solutions and the San Diego Zoo – was on board and ready to move the owls. In March 2018, the team caught five pairs and moved them to hacking cages at RJER. The owls lived in the cages for about one month to give them time to acclimate to their new surroundings.
By the time the cages were removed, each female had laid at least one egg in the artificial burrow chamber. CDFW Environmental Scientist Dave Mayer has worked on this project for years and is anxious to see efforts at RJER finally pay off. The presence of the eggs, he said, was thrilling to see. "More owls, and at diverse locations, is what it will take to conserve this species in San Diego County. This first step was a long time coming, but I have all my fingers crossed that it's going to work." This successful multi-agency partnership will continue long past the actual translocation day. Scientists banded the owls and fitted some with radio transmitters. ICR staff will monitor the owls themselves, while CDFW staff will monitor the grassland and the grazing program, and perform inspections and repairs of the artificial burrows twice a year. After five years, CDFW will perform regular monitoring of the owls, the habitat and associated grazing practices, and the general status of the ground squirrel population. Mayer is proud of the work achieved so far in this unusual project. "We built a better mousetrap, with the Zoo's help," he says. General information about California's bird species of special concern can be found on the CDFW website, along with the species account (PDF) for the burrowing owl and information about Rancho Jamul Ecological Reserve. The San Diego Zoo has also issued a news release with more details about the burrowing owl translocation project.
<urn:uuid:b2b89759-a66f-456f-acc7-0c60f624646f>
3.515625
1,077
News (Org.)
Science & Tech.
40.5825
95,529,104
Side b is 2 cm longer than side c, and side a is 9 cm shorter than side b. The perimeter of the triangle is 40 cm. Find the lengths of sides a, b and c.
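A worked solution, supplied here for illustration (it is not part of the original exercise page), expresses everything in terms of c:

\[
b = c + 2, \qquad a = b - 9 = c - 7
\]
\[
a + b + c = (c - 7) + (c + 2) + c = 3c - 5 = 40
\;\Rightarrow\; c = 15,\quad b = 17,\quad a = 8 \text{ (cm)}.
\]

Check: \(8 + 17 + 15 = 40\) cm, and the triangle inequality holds since \(8 + 15 > 17\).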
<urn:uuid:e8693a76-ffd4-412f-a8e2-9c9d20bd7879>
2.625
76
Tutorial
Science & Tech.
99.603224
95,529,136
Are neutrinos harmful? How can we tell where neutrinos come from? Check out today's Neuchitos…With Dip! Understanding our Galaxy through the GAIA Mission with @einionyn & @georgeseabroke. More at #365DaysOfAstro What is the cosmological constant? How is the expansion of the universe related to quantum fields? How does a vacuum energy produce accelerated expansion? Time to expand our view and encompass the entire universe. Some of the most dramatic events originate from tidal forces caused by gravity: other worlds, galaxies, black holes and even entire clusters of galaxies are under this influence. Oumuamua, the interstellar space rocks and the possibility of intelligent civilizations existing on interior water ocean worlds. The NEID spectrometer is being installed and will soon start to search for exoplanets using the Doppler, or Radial Velocity (RV), effect. There have been over 3,700 exoplanets discovered so far. Some seriously clever techniques have been used to hunt down these alien worlds. Astronomers define a new technique to discover a baby planet! Where's the best place to live on Mercury? Where did the Big Bang start? Let's find out the answers here at #365DaysOfAstro
<urn:uuid:a4a70cf6-c46c-4ef2-b02e-a7e66a344230>
2.9375
269
Content Listing
Science & Tech.
37.987161
95,529,164
Evaluate potential hazards caused by humans that might affect your grassland ecosystem's stability.
The biome called temperate grasslands occurs where the average annual rainfall is between 10 and 40 inches. Grasslands vary in the distribution of rainfall, temperature and soil type (Kaufman & Franz, 2005). In grasslands with less rainfall, nutrient minerals tend to accumulate just below the topsoil. In areas with more precipitation, the nutrient minerals are leached out of the soil. Soil moisture limits ... Characteristics of grassland and potential hazards caused by man are described in about 280 words.
<urn:uuid:0ed9591c-9f24-478f-bff1-5a538d5c5245>
3.34375
148
Truncated
Science & Tech.
48.894935
95,529,190
Researchers at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and General Atomics have simulated a mysterious self-organized flow of the superhot plasma that fuels fusion reactions. The findings show that pumping more heat into the core of the plasma can drive instabilities that create plasma rotation inside the doughnut-shaped tokamak that houses the hot charged gas. This rotation may be used to improve the stability and performance of fusion devices. The results, reported in January in the journal Physical Review Letters, use first principles-based plasma turbulence simulations of experiments performed on the DIII-D National Fusion Facility that General Atomics operates for the DOE in San Diego. The findings could lead to improved control of fusion reactions in ITER, the international experiment under construction in France to demonstrate the feasibility of fusion power. Support for this research comes from the DOE Office of Science with simulations performed at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory. High energy beams To enhance stability and confinement of the plasma, a gas composed of electrons and ions that is often called the fourth state of matter, physicists have traditionally injected high energy beams of neutral atoms. These energetic beams cause the core and outer region of the plasma to spin at different rates, creating a sheared flow, or rotation, that improves stability and confinement. One persistent mystery is how the plasma sometimes generates its own sheared flow, spontaneously. The new research, led by PPPL physicists Brian Grierson and Weixing Wang, shows that sufficient heating of the core of the plasma generates a special type of turbulence that produces an intrinsic torque, or twisting force, that causes the plasma to generate its own sheared flow. The findings have relevance to large, future reactors, since neutral beam injection will create only limited rotation in the huge plasmas inside such facilities. The collaborative research by PPPL and General Atomics scientists found that plasmas can organize themselves to produce sheared rotation when heat is added in the right way. The process works like this: Researchers used the GTS code to simulate the physics of turbulent plasma transport by modeling the behavior of plasma particles as they cycled around magnetic fields. The simulation predicted the rotation profile by modeling the intrinsic torque of the turbulence and the diffusion of its momentum. The predicted rotation agreed quite well, in shape and magnitude, with the rotation observed in DIII-D experiments. A key next challenge will be to extrapolate the processes for ITER. Such modeling will require massive simulations that will push the limits of the high-performance supercomputers currently available. "With careful experiments and detailed simulations of fundamental physics, we are beginning to understand how the plasma creates its own sheared rotation," said Grierson. "This is a key step along the road to optimizing the plasma flow to make fusion plasmas more stable, and operate with high efficiency." PPPL, on Princeton University's Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas -- a gas composed of electrons and ions that is often called the fourth state of matter -- and to developing practical solutions for the creation of fusion energy. 
The Laboratory is managed by the University for the U.S. Department of Energy's Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
<urn:uuid:d7af7cb9-d9c4-4360-9bc9-5d993044bf80>
3.234375
1,314
Content Listing
Science & Tech.
33.687448
95,529,219
Climatic Databases as Sources of Actinometric Information for Analysis and Use of Solar Resources in Kazakhstan
Abstract: At the present stage, there is a growing need for more accurate information about climate resources with a high spatial resolution, as well as long historical series of observations, to solve applied problems. In this connection, this article analyzes the main modern sources of such information, which are based on satellite observations and mathematical modeling, such as NASA Surface Meteorology and Solar Energy, METEONORM, SARAH, etc. The main characteristics of the climatic databases considered are given. These kinds of information are also increasingly used in the field of renewable energy since, on the one hand, they are based on a long period of observation and, on the other hand, they make it possible to circumvent the problem of the lack of meteorological stations and their distance from each other.
How to Cite: АБAЕВ, Н. Н.; НЫСAНБAЕВA, А. С.; АБИЕВA, Д. К. Climatic Databases as Sources of Actinometric Information for Analysis and use of Solar Resources in Kazakhstan. Journal of Geography and Environmental Management, [S.l.], v. 47, n. 4, p. 13-22, Mar. 2018. ISSN 1563-0234. Available at: <http://bulletin-geography.kaznu.kz/index.php/1-geo/article/view/432>. Date accessed: 16 July 2018.
Meteorology and hydrology
<urn:uuid:cdd0b364-f2cb-481f-abe5-b716114e30ad>
2.78125
357
Truncated
Science & Tech.
43.302692
95,529,222
Lectures on Elementary Probability by William G. Faris
Publisher: University of Arizona 2002
Number of pages: 62
From the table of contents: Combinatorics; Probability Axioms; Discrete Random Variables; The Bernoulli Process; Continuous Random Variables; The Poisson Process; The weak law of large numbers; The central limit theorem; Estimation.
by John Maynard Keynes - Macmillan and co
From the table of contents: Fundamental ideas - The Meaning of Probability, The Measurement of Probabilities; Fundamental theorems; Induction and analogy; Some philosophical applications of probability; The foundations of statistical inference, etc.
by Vladislav Kargin - arXiv
Contents: Non-commutative Probability Spaces; Distributions; Freeness; Asymptotic Freeness of Random Matrices; Asymptotic Freeness of Haar Unitary Matrices; Free Products of Probability Spaces; Law of Addition; Limit Theorems; Multivariate CLT; etc.
by S. R. S. Varadhan - New York University
These notes are based on a first year graduate course on Probability and Limit theorems given at Courant Institute of Mathematical Sciences. The text covers discrete time processes. A small amount of measure theory is included.
by Patrick Roger - BookBoon
The book is intended to be a technical support for students in finance. Topics: Probability spaces and random variables; Moments of a random variable; Usual probability distributions in financial models; Conditional expectations and Limit theorems.
<urn:uuid:7bc3f247-53b7-4a85-a6be-436d3b6846ed>
2.640625
352
Content Listing
Science & Tech.
10.005862
95,529,238
A cosmic dinosaur is about to hatch, say scientists, who have discovered what may be the first known example of a globular cluster on the verge of being born – an unimaginably massive, extremely dense, but star-free cloud of molecular gas. A globular cluster is a huge, compact, spherical cluster of typically old stars: a dazzling agglomeration of up to one million ancient stars in the outer regions of a galaxy. Globular clusters are among the Universe's oldest objects. Although abundant in and around several galaxies, about-to-be-born examples are vanishingly rare, and the conditions required to create new ones have never been detected … until now, that is. ALMA image of dense cores of molecular gas in the Antennae galaxies. The round yellow object close to the center could be the first prenatal example of a globular cluster ever identified. It is surrounded by a giant molecular cloud. (Image: public.nrao.edu) The globular cluster on the verge of being born was discovered by Kelsey Johnson, an astronomer at the University of Virginia in Charlottesville, and colleagues using the Atacama Large Millimeter/submillimeter Array (ALMA) in northern Chile. The scientists published their findings in the Astrophysical Journal. Lead author Prof. Johnson said: "We may be witnessing one of the most ancient and extreme modes of star formation in the universe. This remarkable object looks like it was plucked straight out of the very early universe." "To discover something that has all the characteristics of a globular cluster, yet has not begun making stars, is like finding a dinosaur egg that's about to hatch." This celestial object, which the scientists humorously refer to as the 'Firecracker', is located about fifty million light-years away from Earth, nestled within a famous pair of interacting galaxies – NGC 4038 and NGC 4039 – collectively known as the Antennae galaxies. Their ongoing merger generates tidal forces that trigger star formation on a colossal scale, with much of it taking place within dense clusters. What makes Firecracker unique, however, is its incredible mass, relatively small size, and apparent lack of stars. All other similar globular clusters scientists have so far observed are already teeming with stars. The radiation and heat from these stars have therefore changed the surrounding environment significantly, erasing any trace of its colder, quieter beginnings. With ALMA, the scientists were able to find and examine in detail a perfect example of such an object before stars forever change its unique features. (Upper image) The Antennae galaxies. (Centre right image) Extensive clouds of molecular gas. (Bottom image) One cloud is incredibly massive and dense, yet apparently star free. (Image: public.nrao.edu) This gave the astronomers a first-ever peek into the conditions that could have led to the formation of several, if not all, globular clusters. Prof. Johnson said: "Until now, clouds with this potential have only been seen as teenagers, after star formation had begun. That meant that the nursery had already been disturbed. To understand how a globular cluster forms, you need to see its true beginnings." The majority of globular clusters were formed during a veritable cosmic 'baby boom' about 12 billion years ago, at a time when galaxies started assembling. Each cluster contains up to one million densely packed 'second generation' stars: stars with noticeably low concentrations of heavy metals, suggesting they formed soon after the birth of our Universe.
We know of 150 such clusters in our own galaxy, the Milky Way – there could be many more. Across the entire Universe, star clusters of various sizes are still forming today. It is possible, the authors say, though increasingly rare, that the largest and densest of these will go on to become globular clusters. Prof. Johnson said: "The survival rate for a massive young star cluster to remain intact is very low – around one percent. Various external and internal forces pull these objects apart, either forming open clusters like the Pleiades or completely disintegrating to become part of a galaxy's halo." The scientists believe, however, that the celestial object they detected with ALMA, which contains fifty million times our Sun's mass in molecular gas, is sufficiently dense that it has a good chance of being one of the lucky ones. Globular clusters soon evolve out of their embryonic star-free stage – this can occur in as little as 1 million years. This means that what the astronomers observed with ALMA is undergoing a key milestone of its life, offering them a unique opportunity to study a major component of a very young Universe. According to the ALMA data, the Firecracker cloud is under immense pressure – about 10,000 times greater than typical interstellar pressures. This supports previous theories that for globular clusters to form, extremely high pressures are required. In exploring the Antennae, the team observed the faint emission from carbon monoxide molecules, which allowed them to image and characterize individual clouds of gas and dust. The lack of any discernible thermal emission – a hallmark signal given off by gas heated by nearby stars – confirmed that this newly-discovered celestial object is still in its unaltered, pristine state. Subsequent ALMA studies may reveal more examples of proto super star clusters in the Antennae galaxies and other interacting galaxies, which may provide insight into the origins of these ancient objects and the role they play in the evolution of galaxies. Citation: "The Physical Conditions in a Pre Super Star Cluster Molecular Cloud in the Antennae Galaxies," K. E. Johnson, A. K. Leroy, R. Indebetouw, C. L. Brogan, B. C. Whitmore, J. Hibbard, K. Sheth, A. Evans. arXiv:1503.06477. Video – Proto Super Star Cluster
<urn:uuid:b94fabc6-6549-4c0f-b579-40779c3a33ea>
3.75
1,275
Truncated
Science & Tech.
40.907187
95,529,290
Electron Spin Resonance
So far we have confined our attention to nuclear magnetic resonance, although many of the basic principles apply to electron spin resonance. We have also considered questions concerning the electrons, such as the quenching of orbital angular momentum and the magnetic coupling of the nuclear spin to that of the electron. In this chapter we shall add a few more concepts that are important to the study of electron spin resonance but which are not encountered in the study of nuclear resonance.
Keywords: Matrix Element; Angular Momentum; Electron Spin Resonance; Electron Spin; Orbital Angular Momentum
<urn:uuid:64c09160-63db-4d8a-ab11-9f94ec5b4a0a>
3.09375
127
Truncated
Science & Tech.
21.403769
95,529,301
THE ROBINSON LIBRARY >> Science >> Zoology >> Mammals >> Order Carnivora >> Family Ursidae
Ursus arctos (Grizzly Bear)
The largest of all living carnivores, an adult brown bear may be up to 9 feet long and weigh up to 1,650 pounds. Fur is usually dark brown, but varies from cream to almost black. Each broad, flat foot bears five toes armed with non-retractable claws. Brown bears have poor eyesight but acute senses of smell and hearing.
Distribution and Habitat
Brown bears are found in extremely small numbers from western Europe to eastern Siberia, as well as in the Middle East, the Himalayas, and on the Japanese island of Hokkaido. They are more numerous in North America, where they range from Alaska and western Canada, south through the Sierra Nevada and Rockies into northern Mexico. They occupy a variety of habitats, from desert edges to high mountain forests and ice fields, as long as there is some area with dense cover in which to shelter.
Although brown bears are classified as carnivores, the majority of their diet actually consists of plant matter. In spring, grasses, sedges, roots, moss and bulbs make up most of the diet. During summer and early autumn, berries are important, with bulbs and tubers also eaten. Fungi and roots are taken at all times of the year. The amount and type of animals taken varies by region. Those in the Canadian Rockies, for example, are quite carnivorous, hunting moose, elk, mountain sheep and goats. In Alaska they have been observed eating carrion and occasionally capturing young calves of caribou and moose. Grizzlies are also well known for feeding on salmon during the summer spawning runs. Individual bears tend to have specific food preferences, with some eating far more meat than others, and some are also picky about what plants and animal parts they will and will not eat.
Breeding takes place from May to July, but the fertilized eggs are not implanted until October or November. Births occur from January to March after a total gestation ranging from 180 to 266 days. Two cubs are generally born per litter. They are weaned at 5 months of age but remain with the mother until at least their second spring of life. Sexual maturity is reached at 4 to 6 years of age, but growth continues until the 10th or 11th year. Brown bears in the wild may live 25 years or more.
Other Habits and Behaviors
Brown bears may be active at any time of the day, but generally forage in the morning and evening and rest in dense cover by day. Individual bears maintain home ranges averaging about 20 square miles. Home ranges overlap extensively, but territorial defense is rare. Grizzlies are generally solitary, but they may gather in large numbers at major food sources. Young brown bears climb trees well, though slowly and deliberately, but adults rarely do so.
This page was last updated on March 22, 2018.
<urn:uuid:21e07f50-aeed-4c9b-bf62-a9de1efe0075>
3.765625
647
Knowledge Article
Science & Tech.
50.232823
95,529,312
The Controlled Ecological Life Support Systems (CELSS) program is evaluating higher plants as a means of providing life support functions aboard spacecraft. These plant systems will be capable of regenerating air and water while meeting some of the food requirements of the crew. In order to grow plants in space, a series of systems is required to provide the necessary plant support functions. Some of the systems required for CELSS experiments will likely require refinement of existing technologies, or the development of novel ones. To evaluate and test these technologies, a series of KC-135 precursor flights is being proposed. A general purpose free-floating experiment platform is being developed to allow the KC-135 flights to be used to their fullest. This paper will outline the basic design of the CELSS Free Floating Test Bed (FFTB) and the requirements for the individual subsystems. Several preliminary experiments suitable for the free floater will also be discussed.
<urn:uuid:c16c6eb3-b97b-47c3-9b98-d0631655f9df>
3.015625
193
Academic Writing
Science & Tech.
34.882378
95,529,320
Gluttony? Tsk tsk tsk. Kind of puts things into perspective, doesn’t it? Space — the prettiest frontier. All babies are adorable, right? Weighing without using a scale. An incredibly fruitful mission sheds new secrets about the Milky Way. The findings might help piece together the evolution of the universe. If you happened to be alive 70,000 years ago, you’d be in for quite a show. Giving “family dinner” a whole new meaning. Turns out, planet farts are just like ours, but with chlorine! It sounds like the plot of a bad movie — but it’s just science. If there’s one thing that black holes do extremely well, it’s drawing things to them and destroying them. Seeing is believing. The European Space Agency’s Planck satellite has revealed some information which may force us to rethink the evolution of the early Universe. This space snow could help scientists better understand planet formation and evolution. This disposable battery runs on bacteria and folds like an origami ninja star. Sold! NASA astronauts have discovered a lonely planetary-like mass floating on its own, without a solar system. Imagine a galaxy, riddled with countless solar systems. Then zoom in slowly on a solar system – how do you picture it? There’s probably a star at the center, and several planets around it. That’s generally where we feel planets should be, rotating around A spectacular image captured by the Hubble Space Telescope’s Wide Field Planetary Camera 2 (WFPC2) gives us a glimpse into how the Sun will look at its death. Many ancient civilizations made astronomical notes, but according to researchers, this is the earliest historical document of naked eye observations on a variable star – Algol. Variable stars are stars with a varying brightness (as seen from Earth), and they probably held a special place in Egyptian astronomy – they made careful notes on these stars. Now, more than three
<urn:uuid:99b328af-8c5d-46cd-bb02-876eda16e391>
2.90625
425
Content Listing
Science & Tech.
52.931125
95,529,354
Install GCC by running the following commands:

mkdir $LFS/usr/src/gcc-build &&
cd $LFS/usr/src/gcc-build &&
../gcc-2.95.3/configure --prefix=/usr \
   --enable-languages=c,c++ --disable-nls &&
make -e LDFLAGS=-static bootstrap &&
make prefix=$LFS/usr install &&
cd $LFS/lib &&
ln -s ../usr/bin/cpp &&
cd $LFS/usr/lib &&
ln -s ../bin/cpp &&
cd $LFS/usr/bin &&
ln -s gcc cc

--enable-languages=c,c++: This only builds the C and C++ compilers and not the other available compilers, as they are, on average, not often used. If those other compilers are needed, the --enable-languages parameter can be omitted.
ln -s ../usr/bin/cpp: This creates the $LFS/lib/cpp symlink. Some packages explicitly try to find cpp in /lib.
ln -s ../bin/cpp: This creates the $LFS/usr/lib/cpp symlink, as there are packages that expect cpp to be in /usr/lib.
The GCC package contains compilers, preprocessors and the GNU C++ Library. A compiler translates source code in text format into a format that a computer understands. After a source code file is compiled into an object file, a linker creates an executable file from one or more of these compiler-generated object files. A preprocessor pre-processes a source file, for example by including the contents of header files in the source file. Doing this manually would cost a lot of time, so the programmer simply inserts a line like #include <filename>, and the preprocessor inserts the contents of that file into the source file. That is one of the things a preprocessor does. The C++ library is used by C++ programs; it contains functions that are frequently used in C++ programs, so the programmer doesn't have to write certain functions (such as writing a string of text to the screen) from scratch every time a program is created.
<urn:uuid:5b328d57-1392-42fb-b863-df7b01518b04>
3.171875
496
Documentation
Software Dev.
77.123918
95,529,357
ECHINODERMATA : OPHIURIDA : Amphiuridae | STARFISH, SEA URCHINS, ETC.
Description: A small brittle star with very long arms which lives buried in muddy sand. The dorsal and ventral surfaces of the disc are covered with small scales. There are 4-6 conical arm spines, none widened or flattened at the tips, and 2 large tentacle scales. Disc 9-10 mm; arms 9x the disc diameter.
Habitat: This species lives buried in muddy sand and extends its arms across the surface of the substratum, feeding on deposited material.
Distribution: Reported from all round the British Isles, mostly below 10 metres, but there is some doubt over records from the south.
Similar Species: Other Amphiura species are similar. Mixed populations of this species and Amphiura filiformis are common.
Distribution Map from NBN: Interactive map: National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record: World Register of Marine Species.
Picton, B.E. & Morrow, C.C. (2016). Amphiura chiajei Forbes, 1843. [In] Encyclopedia of Marine Life of Britain and Ireland. http://www.habitas.org.uk/marinelife/species.asp?item=ZB2860 Accessed on 2018-07-16
<urn:uuid:2556b57e-ab24-4b50-b964-d33f1a6c75b1>
3.140625
318
Knowledge Article
Science & Tech.
49.784732
95,529,360
By: Grigori Barenblatt
200 pages, 46 figs
Describes and teaches the art of discovering scaling laws, starting from dimensional analysis and physical similarity, which are here given a modern treatment. The concepts of intermediate asymptotics and the renormalisation group as natural attributes of self-similarity are demonstrated. The author shows how and when these notions and tools can be used to tackle the task at hand, and when they cannot.
'... deserves to be placed on the book shelf of every working applied mathematician.' ZAMP
'This book will become a classic ... Barenblatt's delightful book, though, is more than just an introduction to scaling: it can also be read as a philosophy of mathematical modelling. The writing is witty, insightful, and sometimes moving. Every time you read the book, you return refreshed and inspired ... One can only conclude that any mathematical scientist could be inspired to fundamental advances in their own domain after studying this marvellous book.' The Journal of Fluid Mechanics
'Professor Barenblatt has produced an admirable introduction to this subject, which combines lucid mathematical treatments with perceptive discussions of the principles ... Undergraduate and graduate students will benefit from courses based on this book, but the specialist will also find paradoxes and controversies quietly resolved by the careful use of the scaling methods discussed by Barenblatt. Needless to say, coming from this author, the book is clearly and elegantly written, well presented and well illustrated.' Contemporary Physics
'... written in a concise and clear fashion ... Readers will be rewarded with a wealth of examples, with guiding general principles and with profound insights.' Mathematical Reviews
'... a superb introduction ...' Zentralblatt MATH
<urn:uuid:3f086974-c597-4ca4-9dac-397283b360f6>
2.625
451
Product Page
Science & Tech.
38.868707
95,529,367
Space Junk Could Halt Space Exploration More Than Anything Else!
First Published: March 28, 2018. Last updated: May 22nd, 2018. Estimated Reading Time: 5 minutes
While it is something rarely thought about by most of us, for those looking to explore the cosmos the issue of "space junk" is an increasingly problematic one. As well as the billions upon billions of dollars' worth of satellites, not to mention the International Space Station, there is a sea of debris, burned-out boosters and dead satellites racing around Earth's orbit. It is only through pure luck and some last-minute decisions that such debris hasn't caused a major incident. It is a problem that does need a solution, however. Otherwise, future missions to the stars may fail before they have the chance to escape Earth's atmosphere, should they happen to meet even a small piece of space junk head on. There are a number of ideas in the development stages on how to overcome the problem. Whatever action is eventually taken, it needs to happen sooner rather than later, and it needs to be a collective human effort regardless of nationality. For example, although there have been very few major incidents involving space junk, even one collision, even between two pieces of "rubbish" no longer in use, would make the problem one hundred times worse. These collisions turn two very trackable objects into many smaller objects, each setting off in a different direction. These objects travel at an estimated 17,500 miles per hour. Even an object the size of a pin can cause considerable damage to a satellite. Furthermore, should such a small object collide with an astronaut directly, it would be enough to kill them. Before we look at this in more detail, check out the short video below that demonstrates the problem at hand.
Constant Danger Of Collisions
As we have written about before, the International Space Station is essentially the first permanent human residence in space. Since its official launch in the late-1990s, it has remained in Earth's orbit, providing us with a window into the cosmos. It too, however, has been a victim of space junk. On several occasions, it has had to make last-minute adjustments to its location to avoid a collision. It is perhaps worth bearing in mind that such a change takes days to complete. Most of the time, experts foresee these collisions, and the ISS moves accordingly. Sometimes, however, there isn't enough time to carry out such a move. While the outer body of the ISS undoubtedly sustains damage on these occasions, the astronauts themselves will enter a specially designed shelter – like a safe room – where they will remain until any damage can be assessed and it is safe for them to return. It isn't just the ISS that has a permanent residence in Earth's orbit. There are countless satellites roaming around the planet. These carry out various jobs, from television broadcasts to mobile phones and internet access. Basically, they are responsible for a huge portion of the modern, convenient way of life most of us enjoy today. Although the repair of any potential damage could be managed, in a world of increasing tension and distrust, the monitoring of space junk and such damage is important, if only to avoid an "international misunderstanding". For example, should a satellite that is responsible for a nation's defense systems suffer sudden damage, they may see it as an act of hostility from a perceived enemy nation.
There were several such incidents involving UFOs and nuclear power stations that almost kickstarted US-Soviet conflicts beyond the rhetoric of the Cold War, for example. International Cooperation Required If we agree then that there needs to be complete international cooperation on not only how to combat the problem of space junk, but also on the understanding of its dangers to life on Earth, where do we go from here? The US Air Force has seemingly taken the lead in the drive to catalog and track as many pieces of space junk as possible. They would begin doing so in the early-1980s and to date have over 500,000 pieces of “rubbish” on their database. There are also many international agreements in place to bring back as much space junk as possible to Earth instead of allowing to it remain in orbit and out of control. There are several other long-term and much more ambitious projects to tackle the space junk problem – some of which we will look at in a moment. However, due to the astronomical costs any of the projects will involve, the pressure to succeed the first time is immense. This is partly relieved by a number of private companies who are more than willing to stump up the costs for a chance to not only solve a genuine world issue but to demonstrate their technology on the world stage. Although this would deal with the issue of costs, a failure would still risk turning an already large problem into a much bigger one. With this in mind, it is likely that any final decision on how to deal with the problem will likely involve both private and public funds and influence. The short documentary below is worth taking the time to watch. Some of the plans to tackle space junk are extremely interesting and futuristic. For example, the European Space Agency are developing a “robot astronaut” named “Justin”. It is the hope that Justin would carry out the repairs on the outside of the ISS. This will remove the risk to astronauts themselves. Even more remarkable are the plans for the Columbus laboratory on the ISS to control Justin using an exoskeleton glove. The specially designed glove will give the wearer the feel of touch using electrical pads and sensors. This is vital in such delicate repairs required in space. In terms of protecting the planet, one particularly intriguing plan is the Space Fence Project. This is a system of digital radars that allow the tracking of smaller, but no less deadly, pieces of debris. This information will allow such agencies as NASA and the ESA to give ample warning for approaching dangers. In turn, this decreases the risk of damage or fatalities. It won’t, however, do anything to actually reduce the amount of space junk. Other ideas are still very much in the planning stages. One such project aims to gather as much space debris as possible in a huge “space net”. Once achieved it will shift the mass of rubbish away from the Earth’s orbit and into outer space. Of course, where it ends up after that is anybody’s guess. Even more ambitious ideas include the use of laser technology. This will move or “push” larger pieces of debris out of the orbit of the Earth. Ultimately if a solution isn’t reached, then many fear humans will literally be trapped on Earth. Unable to escape into space through the mess of our own making. The video below looks at the issue a little further.
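To see why even pin-sized debris is so destructive, consider the kinetic energy involved. The short Python sketch below is purely illustrative (the 1-gram fragment mass is an assumed example, not a figure from this article) and uses the 17,500 mph speed quoted earlier:

# Kinetic energy of a small piece of orbital debris: E = 1/2 * m * v^2
MPH_TO_MS = 0.44704            # miles per hour -> metres per second

mass_kg = 0.001                # assumed example: a 1 gram fragment
speed_ms = 17_500 * MPH_TO_MS  # ~7,823 m/s, the speed quoted above

energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"{energy_j:,.0f} J")    # ~30,600 J, the energy of several rifle rounds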
<urn:uuid:c52ba07e-b7be-4ebf-8d87-eb9786e5460d>
3.359375
1,420
Truncated
Science & Tech.
45.210938
95,529,388
Natural Occurrence of Enzymes Linked to Inorganic Supports
Powerful enzyme activities were found in the sediments of the Venice lagoon and internal city canals. No detectable enzyme activity was present in the aqueous phase, even after centrifugation. These insolubilized enzymes showed remarkable heat stability and an increased resistance to severe environmental conditions. They were probably of bacterial origin, mostly immobilized on the inorganic component of the sediment, so that they could survive the organisms from which they were generated, since their lifespan is prolonged by insolubilisation. As a consequence, they resist conditions under which the same enzyme in soluble form would be rapidly inactivated. They are useful diagnostic factors of the ecosystem, since their presence is related to the waste products. A study on the linkage between enzymes and clays suggested that a spontaneous and specific affinity between them, previously unsuspected, allowed the formation of "activated" sediments, the leading force in the first step of organic matter degradation in ecosystems.
Keywords: Cellulase Activity; Septic Tank; Sediment Composition; Phosphate Anion; Venice Lagoon
<urn:uuid:8856963b-c7f2-4ced-86d1-5f3ddb9c0a3d>
2.640625
449
Truncated
Science & Tech.
43.96997
95,529,399
What is an Ecological Footprint?
We need food, shelter and heating (in some locations) to survive. Our planet's ecological resources help fulfill these needs. But how many resources do we consume? This question can be answered using the Ecological Footprint.
Just as a bank statement tracks income against expenditures, Ecological Footprint accounting measures a population's demand for and ecosystems' supply of resources and services. On the demand side, the Ecological Footprint measures an individual's or a population's demand for plant-based food and fiber products, livestock and fish products, timber and other forest products, space for urban infrastructure, and forest to absorb its carbon dioxide emissions from fossil fuels. On the supply side, a city, state, or nation's biocapacity represents its biologically productive land and sea area, including forest lands, grazing lands, cropland, fishing grounds, and built-up land.
The gap between Ecological Footprint and biocapacity is determined by several factors. Our personal Footprint is the product of how much we use and how efficiently this is being produced. The biocapacity per person is determined by how many hectares of productive area there are, how productive each hectare is, and how many people (in a city, country, or the world) share this biocapacity.
Many countries are "in the red," which means they use more natural resources (Ecological Footprint) than their ecosystems can regenerate (biocapacity). They are running an "ecological deficit." When a country's biocapacity is greater than its population's Ecological Footprint, the country has an "ecological reserve." Nations (also cities and states) can run ecological deficits by liquidating their own resources, such as by overfishing; importing resources from other areas; and/or emitting more carbon dioxide into the atmosphere than their own ecosystems can absorb.
What is Earth Overshoot Day?
When the entire planet is running an ecological deficit, we call it "overshoot." At the global level, ecological deficit and overshoot are the same, since there is no net import of resources to the planet. Overshoot occurs when:
HUMANITY'S ECOLOGICAL FOOTPRINT > EARTH'S BIOCAPACITY
Earth Overshoot Day marks the date when humanity's demand for ecological resources and services (Ecological Footprint) in a given year exceeds what Earth can regenerate in that year (biocapacity).
Where does the data come from?
Your country has an office that collects data about its citizens, including how much food (apples, pasta, orange juice) has been eaten, how much wood has been used to make furniture, and so on. These offices report the data to bigger international offices that store this information for most countries in the world. Global Footprint Network, which calculates the Ecological Footprint for more than 200 countries, gets the data from these international and at times national offices and puts it into a big database on its computers to calculate how many resources each country consumes.
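The deficit/reserve bookkeeping described above reduces to simple arithmetic. The following Python sketch is a hypothetical illustration (the function names and the sample per-person figures are invented for the example; the overshoot-date estimate simply expresses the Footprint > biocapacity logic):

# Ecological deficit/reserve and a simple Earth Overshoot Day estimate.
def ecological_balance(biocapacity_gha, footprint_gha):
    """Positive result = ecological reserve; negative = deficit
    (both in global hectares per person)."""
    return biocapacity_gha - footprint_gha

def overshoot_day_of_year(biocapacity_gha, footprint_gha):
    """Day of the year on which demand exhausts the annual supply;
    returns None if there is no overshoot that year."""
    if footprint_gha <= biocapacity_gha:
        return None
    return int(365 * biocapacity_gha / footprint_gha)

print(ecological_balance(1.6, 2.8))     # about -1.2 gha: a deficit
print(overshoot_day_of_year(1.6, 2.8))  # day 208, i.e. late July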
<urn:uuid:4982f221-e2be-4c2b-8179-cd6f4855943e>
3.90625
668
Knowledge Article
Science & Tech.
24.874713
95,529,402
Aromatic compounds are important contaminants that limit the intended uses of water resources. Both polar and non-polar substances, such as phenols, aromatic sulfonates, lignin-sulfonic acids, humic and fulvic substances (acids), and mono- and poly-aromatic hydrocarbons and their alkyl-substituted derivatives, are among the potential aromatic micropollutants. During the last 5-10 years, an analytical approach has been developed on the basis of total fluorescence measurement of the original water sample and its organic solvent (cyclohexane) extract. It has been demonstrated and verified that polar aromatic compounds fluoresce only in the original water sample, whereas non-polar (hydrophobic) compounds fluoresce in an organic solvent (e.g. cyclohexane) extract. During extraction, polar compounds remain in the water sample. This method has been used in a country-wide survey of drinking water aquifers, as well as in several environmental impact assessment studies, particularly for petroleum-related pollution. It is a very convenient method for determining the naturally occurring humic and fulvic substances in water and has proved to be an appropriate substitute for the infrared spectrophotometric method for oil pollution assessment in the environment, also having the advantage of signalling the more harmful, toxic aromatic petroleum hydrocarbons.
Research Article | June 01 2001. P. Literathy; Polar and non-polar aromatic micropollutants in water (drinking-water) resources. Water Science and Technology: Water Supply 1 June 2001; 1 (4): 149–157. doi: https://doi.org/10.2166/ws.2001.0079
<urn:uuid:e2ea40a5-628a-43a0-b222-dcf0f7b4ac78>
3.5
392
Truncated
Science & Tech.
26.158647
95,529,410
Spatiotemporal Control of 3D-Cultured Stem Cells Using Light
Nothing beats nature. The diverse and wonderful varieties of cells and tissues that comprise the human body are evidence of that. Each one of us starts out as a mass of identical, undifferentiated cells, and thanks to a combination of signals and forces, each cell responds by choosing a developmental pathway and multiplying into the tissues that become our hearts, brains, hair, bones or blood. A major promise of studying human embryonic stem cells is to understand these processes and apply the knowledge toward tissue engineering. Researchers in UC Santa Barbara's departments of Chemistry and Biochemistry, and of Molecular, Cellular and Developmental Biology have gotten a step closer to unlocking the secrets of tissue morphology with a method of three-dimensional culturing of embryonic stem cells using light. "The important development with our method is that we have good spatiotemporal control over which cell — or even part of a cell — is being excited to differentiate along a particular gene pathway," said lead author Xiao Huang, who conducted this study as a doctoral student at UCSB and is now a postdoctoral scholar in the Desai Lab at UC San Francisco. The research, titled "Light-Patterned RNA Interference of 3D-Cultured Human Embryonic Stem Cells," appears in volume 28, issue 48 of the journal Advanced Materials. Similar to other work in the field of optogenetics — which largely focuses on neurological disorders and activity in living organisms, leading to insights into diseases and conditions such as Parkinson's and drug addiction — this new method relies on light to control gene expression. The researchers used a combination of hollow gold nanoshells attached to small molecules of synthetic RNA (siRNA) — a molecule that plays a large role in gene regulation — and thermoreversible hydrogel as 3D scaffolding for the stem cell culture, as well as invisible, near-infrared (NIR) light. NIR light, Huang explained, is ideal when creating a three-dimensional culture in the lab. "Near-infrared light has better tissue penetration that is useful when the sample becomes thick," he explained. In addition to enhanced penetration — up to 10 cm deep — the light can be focused tightly to specific areas. Irradiation with the light released the RNA molecules from the nanoshells in the sample and initiated gene-silencing activity, which knocked down green fluorescent protein genes in the cell cluster. The experiment also showed that the irradiated cells grew at the same rate as the untreated control sample; the treated cells showed unchanged viability after irradiation. Of course, culturing tissues consisting of related but varying cell types is a far more complex process than knocking down a single gene. "It's a concert of orchestrated processes," said co-author and graduate student researcher Demosthenes Morales, describing the process by which human embryonic stem cells become specific tissues and organs. "Things are being turned on and turned off." Perturbing one aspect of the system, he explained, sets off a series of actions along the cells' developmental pathways, much of which is still unknown. "One reason we're very interested in spatiotemporal control is because these cells, when they're growing and developing, don't always communicate the same way," Morales said, explaining that the resulting processes occur at different speeds, and occasionally overlap.
“So being able to control that communication on which cell differentiates into which cell type will help us to be able to control tissue formation,” he added. The fine control over cell development provided by this method also allows for the three-dimensional culture of tissues and organs from embryonic stem cells for a variety of applications. Engineered tissues can be used for therapeutic purposes, including replacements for organs and tissues that have been destroyed due to injury or disease. They can be used to give insight into the body’s response to toxins and therapeutic agents. Research on this study was also conducted also by Qirui Hu, a postdoctoral fellow in Dennis Clegg’s lab at UCSB’s Center for Stem Cell Biology and Engineering in the Department of Molecular, Cellular and Developmental Biology, and Yifan Lai in the lab of Norbert Reich in the Department of Chemistry and Biochemistry. This article has been republished from materials provided by UCSB. Note: material may have been edited for length and content. For further information, please contact the cited source. Huang, X., Hu, Q., Lai, Y., Morales, D. P., Clegg, D. O., & Reich, N. O. (2016). Light-Patterned RNA Interference of 3D-Cultured Human Embryonic Stem Cells. Advanced Materials, 28(48), 10732-10737. doi:10.1002/adma.201603318 Synthetic Material That Detects Enzymatic ActivityNews Scientists integrate protein and polymer building blocks to create stimulus-responsive systemsREAD MORE Regenerative Medicine Meets Clever Engineering to Accommodate Bone GraftsNews Personalized bone grafts developed to repair bone defects from disease or injuryREAD MORE Protein Essential for Making Stem Cells IdentifiedNews The discovery by Stanford scientists drills a peephole into the black box of cellular reprogramming and may lead to new ways to generate induced pluripotent stem cells in the laboratory.READ MORE
<urn:uuid:ae43c458-2d3b-40d3-a9b2-b6a66c32bb85>
2.828125
1,154
News Article
Science & Tech.
33.520151
95,529,422
Chemistry Inside The Wheel: The Urethane Molecule [Part One]

Up to the late '60s, wheels were manufactured from clay or metal and lasted 8 to 9 hours. In the early '70s, Frank Nasworthy introduced the urethane wheel to the sidewalk surfing world. This new smooth-riding wheel propelled the sport into modern-day skateboarding. As the sport progressed, new wheels were being created for different styles of skating. Adjusting the recipe for urethane can change wheel properties, like durometer (hardness), which you can use to optimize your ride. Just what is urethane? And what exactly is changed to adjust the properties of a wheel? Zelda Ziegler, Professor of Chemistry at the Central Oregon Community College, sat down with me to discuss urethane wheels on a molecular level. Because each wheel company uses unique formulas, we were unsure where to start. However, there are some things common to all urethane polymers. Polymers are built by repeating small molecules; polyurethanes use two different types, which she represented as A and B. In polyurethanes they alternate ABABABAB... and so on for many repeating units. The A side provides half of the urethane link and the B side provides the other. Each A and each B has at least two connecting points on the molecule. The connections on the A molecules are called isocyanate groups. The connections on the B side can be many different things, but are usually alcohol groups. When you put all the A's and B's together, they readily connect to form extremely long chains with freshly made urethane linkages. When you have molecules this long and flexible, they form elastic solids. Many thanks to Zelda Ziegler, Professor of Chemistry at the Central Oregon Community College.
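To make the alternating A-B picture concrete, here is a tiny illustrative sketch in Python (my own toy model, not any wheel company's formula; the unit labels and counts are invented for illustration):

```python
# Toy model of a polyurethane chain: alternating A (diisocyanate) and
# B (diol) units. In this simplified picture, every junction between
# adjacent units is one freshly formed urethane linkage.
def build_chain(n_pairs):
    return "AB" * n_pairs

chain = build_chain(10)
urethane_links = len(chain) - 1  # one link at each junction between units
print(chain)           # ABABABABABABABABABAB
print(urethane_links)  # 19
```

Real formulations tune chain length, cross-linking and the specific A and B molecules, which is how manufacturers adjust properties like durometer.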
<urn:uuid:74835251-3415-4d67-9625-627acec15ab6>
3.671875
367
Personal Blog
Science & Tech.
44.077715
95,529,439
The black hole at the birth of the Universe

Famous Canadian science fiction writer Mark A. Carter reprints an article from Science Daily, courtesy of the Perimeter Institute, and comments on how it relates to Joe Cooper's journey through a black hole as depicted in the film Interstellar. And it doesn't bode well for Joe Cooper and Amelia Brand.

The big bang poses a big question: if it was indeed the cataclysm that blasted our universe into existence 13.7 billion years ago, what sparked it? Three Perimeter Institute researchers have a new idea about what might have come before the big bang. It's a bit perplexing, but it is grounded in sound mathematics. But is it testable?

[Image: Before the Big Bang. Courtesy of Perimeter Institute]

What we perceive as the big bang, they argue, could be the three-dimensional mirage of a collapsing star in a universe profoundly different from our own. The "greatest challenge is understanding the big bang itself," write Perimeter Institute Associate Faculty member Niayesh Afshordi, Affiliate Faculty member and University of Waterloo professor Robert Mann, and PhD student Razieh Pourhasan. Conventional understanding holds that the big bang began with a singularity - an unfathomably hot and dense phenomenon of space-time where the standard laws of physics break down. Singularities are bizarre, and our understanding of them is limited. "For all physicists know, dragons could have come flying out of the singularity," Afshordi says in an interview. The problem, as the authors see it, is that the big bang hypothesis has our relatively comprehensible, uniform, and predictable universe arising from the physics-destroying insanity of a singularity. It seems unlikely. So perhaps something else happened. Perhaps our universe was never singular in the first place. Their suggestion: our known universe could be the three-dimensional wrapping around a four-dimensional black hole's event horizon. In this scenario, our universe burst into being when a star in a four-dimensional universe collapsed into a black hole. In our three-dimensional universe, black holes have two-dimensional event horizons. That is, they are surrounded by a two-dimensional boundary that marks the point of no return. In the case of a four-dimensional universe, a black hole would have a three-dimensional event horizon. In their proposed scenario, our universe was never inside the singularity; rather, it came into being outside an event horizon, protected from the singularity. It originated as, and remains, just one feature in the imploded wreck of a four-dimensional star. The researchers emphasize that this idea, though it may sound absurd, is grounded firmly in the best modern mathematics describing space and time. Specifically, they've used the tools of holography to "turn the big bang into a cosmic mirage." Along the way, their model appears to address long-standing cosmological puzzles and, crucially, to produce testable predictions. Of course, our intuition tends to recoil at the idea that everything and everyone we know emerged from the event horizon of a single four-dimensional black hole. We have no concept of what a four-dimensional universe might look like. We don't know how a four-dimensional parent universe itself came to be. But our fallible human intuitions, the researchers argue, evolved in a three-dimensional world that may only reveal shadows of reality. They draw a parallel to Plato's allegory of the cave, in which prisoners spend their lives seeing only the flickering shadows cast by a fire on a cavern wall.
"Their shackles have prevented them from perceiving the true world, a realm with one additional dimension," they write. "Plato's prisoners didn't understand the powers behind the sun, just as we don't understand the four-dimensional bulk universe. But at least they knew where to look for answers."

1. Razieh Pourhasan, Niayesh Afshordi, Robert B. Mann. Out of the White Hole: A Holographic Origin for the Big Bang. arXiv, 2014.

Perimeter Institute. "The black hole at the birth of the Universe." ScienceDaily, 7 August 2014. <www.sciencedaily.com/releases/2014/08/140807145618.htm>.

So, as I see it, in Interstellar, when astronaut Joe Cooper slipped over the event horizon of the black hole known as Gargantua, he underwent what British astrophysicist Sir Martin Rees called "spaghettification." In other words, Joe Cooper would have become a stream of subatomic particles swirling into the black hole. He simply could not survive the tidal pressure of the singularity within. But even if he did survive, he would not be the same Coop that we saw earlier. He would not be able to have the endearing scene with his daughter Murphy on her death bed. He could not steal a spacecraft from Cooper Station orbiting Saturn and reenter the wormhole to join Amelia Brand on the world she now inhabited alone. According to this latest theory coming out of the Perimeter Institute, a three-dimensional being passing over a two-dimensional event horizon would turn into a two-dimensional being as flat as paper. And there goes the potential Adam and Eve scenario.
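As a rough back-of-the-envelope aside (my own formula and reasoning, not from the article or the paper): the Newtonian estimate of the tidal acceleration stretching an infalling body of length $\ell$ at distance $r$ from a mass $M$ is

$$ \Delta a \approx \frac{2GM\ell}{r^{3}}, $$

which grows without bound as $r$ shrinks toward the central singularity; for a supermassive hole the tide right at the horizon can be comparatively gentle, but anything that keeps falling inward is eventually torn apart.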
<urn:uuid:6565b318-e7ff-4288-934f-73e785face05>
3.515625
1,180
Nonfiction Writing
Science & Tech.
42.838858
95,529,442
As part of plans for long-term human survival on Mars, NASA this week held a competition in which students from seven US universities demonstrated various drilling technologies to extract water from simulated Martian subsurface ice. The three-day 'Mars Ice Challenge' was held at the Langley Research Center in Virginia from June 13-15. The students — divided into eight teams — used drills, augers and an excavator positioned over large fishing coolers to get through about 16 inches (roughly 40 centimetres) of simulated Martian soil to reach solid blocks of ice about 16 inches thick. The teams also built their innovative drilling and water extraction systems, designed according to mass, volume and power constraints. The projects are "based in reality to what NASA wants. When we give those challenges to students, they're able to start solving them in their unique way", Shelley Spears, Director (education and outreach) at the National Institute of Aerospace, said in a statement on Saturday. "We were all really excited about this project. We wanted to give it a shot," added Wes Thomas, a student from the University of Pennsylvania. Recent discoveries have revealed large ice deposits just under the surface of the red planet, which has prompted scientists to work on turning that ice into water to help allow a sustained human presence on Mars. "NASA has really been focused on trying to get all the pieces in place to get to Mars," added Richard Davis, Assistant Director (science and exploration) at NASA. "There's a lot of resources on Mars, but water is the driver. There's a ton of water on Mars," Davis noted. The student teams included two from West Virginia University in Morgantown, and one each from the Colorado School of Mines in Golden, the University of Pennsylvania in Philadelphia, the University of Tennessee in Knoxville, North Carolina State University in Raleigh, the University of Texas in Austin and Alfred University in Alfred, New York. The students also submitted a technical paper outlining their concept's adaptability to show how their system could be used on Mars and how it could be modified to account for the huge differences between the two planets.
<urn:uuid:9f969fc3-c762-4cd7-a900-47069e2cdfff>
3.515625
441
News Article
Science & Tech.
33.983607
95,529,458
New results from experiments at a unique ecology facility show that plant communities are dramatically altered by changes to the type of animal species living among their roots, but that key ecosystem measurements such as overall agricultural yield or the amount of soil carbon stored are unaffected. According to the results of a three-year study into the processes of experimental grassland ecosystems published in the journal Science today, changes to the makeup of soil communities spanning small to large animal body sizes, ranging from bacteria and fungi to worms and beetles, have no significant effect on the overall yield of the plant life above, nor on the overall amount of carbon stored away in the soil. The researchers from England, Wales, Scotland, Germany and Finland say that the results of their experiment send an important message about the ability of natural ecosystems to cope with human impacts, such as changes to the composition of soil communities, but caution that further long-term work is needed to confirm their findings.

Tom Miller | alfa
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:d44cb14e-9d6a-4818-a43e-c67f6864001e>
2.859375
789
Content Listing
Science & Tech.
35.823025
95,529,460
I always thought the N-Prize was a good challenge and I want to put out my idea on it and see if there is something to work with. Please see http://n-prize.com/ if you are not familiar with the challenge. Here is the idea dump: https://en.wikipedia.org/wiki/Space_tether https://en.wikipedia.org/wiki/Project_Echo https://www.sciencedaily.com/releases/2016/02/160226133603.htm https://en.wikipedia.org/wiki/Momentum_exchange_tether http://science.nasa.gov/missions/tss/ If you have the time, watch the YouTube videos by Robert Murray-Smith. He goes through the development of graphene for the home experimenter. The thought is this: A balloon lifts an uninflated graphene-coated (puffed laser-scribed graphene) balloon with gear to ~110,000 feet. The coated balloon will inflate as the altitude increases due to the pressure change. The graphene increases the electrostatic charge because the coating multiplies the surface area thousands of times, so the charge becomes huge. The balloon has a tail of graphene to act as a tether and a discharge conduit, forming a circuit from a high-density charge to a corona discharge point. The tail will generate electricity as it passes through the Earth's magnetic field. The balloon will also be covered with the new perovskite solar film. The film's charge will push against the Earth's magnetic field and increase its velocity and/or altitude (there are issues with the angle of the Earth's magnetic field at the poles). The balloon will be launched from the South Pole a week before the summer solstice (December - it's a different hemisphere). The launch date will allow for two weeks of maximum solar exposure and time to reach orbital velocity. The South Pole during this time will have 24-hour sunlight, and because of overflight issues there are fewer countries to worry about politically. The thrust will be slow at first, because the angle of the tail to the Earth's magnetic field is bad, but it will get better as the craft spirals north away from the pole. The thrust would be best at the equator, with a boost from Earth's rotation, but there are too many countries to fly over and only 12 hours of daylight. An extra tail could be used as a rectenna to charge from a ground-based radio transmission, but I suspect it would not be efficient enough; I might be wrong. The circuit flow might be able to power an ion-craft drive, but I do not know; plus air resistance at Mach speeds may be too much (at very high altitude, say 300,000 ft, the airspeed Mach numbers may not be an issue). The best source of a big boost in thrust would be to use the hydrogen gas from the balloon as propellant for a microscale ion drive. The balloon should be made to become a vacuum balloon (Halfbakery) like ECHO 2, but instead of aluminum, use a UV-cured plastic hardened by unfiltered UV radiation in space. A vacuum balloon will not have the thermal issues of a lifting gas expanding and contracting during day and night. Multiple tails with different currents might be the way to steer it, or just a big thin graphene sheet that turns (probably too clumsy) and uses the very thin air to change its inclination. Please let me know if this is interesting and if it has potential. 1,209,600 is roughly the number of seconds in two weeks, and orbital velocity is about 7,900 meters per second: 7900/1209600 ≈ 0.0065 meters per second of speed gained each second, i.e., an average acceleration of about 0.0065 m/s² (a little more than a quarter of an inch per second, added every second). Working against gravity (9.8 m/s²) is the major challenge.
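A quick sanity check of that arithmetic, as a minimal Python sketch (round numbers of my own, not anything from the N-Prize rules; drag, gravity losses and the changing flight path are all ignored):

```python
# Average acceleration needed to reach orbital speed in two weeks.
SECONDS_PER_FORTNIGHT = 14 * 24 * 3600   # 1,209,600 s
ORBITAL_SPEED = 7_900.0                  # m/s, approximate low Earth orbit

accel = ORBITAL_SPEED / SECONDS_PER_FORTNIGHT
print(f"required average acceleration: {accel:.4f} m/s^2")
# -> required average acceleration: 0.0065 m/s^2 (roughly 1/1500 of g)
```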
<urn:uuid:d4cd224b-45fe-4d00-9c86-5d7416f7af25>
2.921875
1,166
Personal Blog
Science & Tech.
51.859983
95,529,478
What will a day in the life of a Californian be like in 40 years? If the state cuts its greenhouse gas emissions 80 percent below 1990 levels by 2050 — a target mandated by a state executive order — a person could wake up in a net-zero energy home, commute to work in a battery-powered car, work in an office with smart windows and solar panels, then return home and plug in her car to a carbon-free grid. Such is a future envisaged in a study published Nov. 24 by the journal Science that analyzes the infrastructure and technology changes needed to reach California's aggressive emissions reduction goal. The study was conducted by scientists from the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and the San Francisco-based energy consulting firm Energy and Environmental Economics (E3). The researchers describe a not-so-distant time in which lights, appliances, and other devices are pushed to unprecedented levels of energy efficiency. Electricity is generated without emitting carbon dioxide into the atmosphere. And most importantly — even after these measures are implemented — cars, heating systems, and most other equipment that now run on oil and natural gas will instead be powered by electricity. The scientists say that all of this will be technologically feasible by 2050 if today's pace of technology innovation continues. "This study is meant to guide decisions about how to invest in our future. Assuming plausible technological advances, we find that it's possible for California to achieve deep greenhouse gas reductions by 2050," says Margaret Torn, the corresponding author of the paper and a staff scientist in Berkeley Lab's Earth Sciences Division. Jim Williams, chief scientist at E3 and professor at the Monterey Institute of International Studies, is the lead author of the paper. "To reach this goal, energy efficiency comes first, followed by decarbonization of electricity generation, followed by the electrification of transportation and other sectors," says Williams. The scientists developed this prescription using a model of California's greenhouse gas emissions from 2010 to 2050 that takes into account the state's changing population, economy, and physical infrastructure. The model includes six energy demand sectors (residential, commercial, industrial, agriculture, transportation, and petroleum industry) and two supply sectors (fuel and electricity). They explored the best ways to reach California's goal of reducing greenhouse gas emissions in 2050 by 80 percent below 1990 levels. This target is consistent with the Intergovernmental Panel on Climate Change's Fourth Assessment Report, which outlines the global emissions required to stabilize atmospheric concentrations at 450 parts per million. In California, this means a sharp reduction in annual CO2 emissions, from 427 million metric tons in 1990 to 85 million metric tons in 2050. The scientists started with this 85 million metric ton target and worked backwards to determine the changes needed to get there. They arrived at four mitigation scenarios, all of which rely on three major energy system transformations. Among the findings:

Energy Efficiency Comes First

Energy efficiency has been the low-hanging fruit for decades when it comes to reducing energy demand, and will likely remain so. The scientists found that energy efficiency improvements will net 28 percent of the emissions reductions required to meet California's goal.
The catch, however, is that energy efficiency will have to improve by at least 1.3 percent per year over the next 40 years. This is less than the level California achieved during its 2000-2001 electricity crisis, but such a rate has never been sustained for decades. The scientists found that the largest share of greenhouse gas reductions from energy efficiency comes from the building sector via improvements in building shell, HVAC systems, lighting, and appliances.

Next, Decarbonize Electricity Generation

Another 27 percent reduction in emissions comes from switching to electricity generation technologies that don't pour carbon dioxide into the atmosphere. Renewable energy, nuclear power, and fossil fuel-powered generation coupled with carbon capture and storage technology each has the potential to be the chief electricity resource in California. But they all must overcome technical limitations, and they're all currently more expensive than conventional power generation. Because it's unclear which technology or technologies will win out in the long run, the scientists developed three separate scenarios that emphasize how each can reach the target, plus a fourth scenario that includes a blend of all three. In addition, they determined that Californians can't rely on renewable energy alone. At most, they found that 74 percent of the state's electricity could be supplied by sources such as wind and solar. The scientists also stressed that a renewable energy-intensive grid will require breakthroughs in energy storage and ways to enable smart charging of vehicles, among other technologies. They also found that 15 percent of the required emissions reductions could come from measures to reduce non-energy related CO2 and other greenhouse gas emissions, such as from landfills and agricultural activities. And 14 percent could come from various unrelated technologies and practices such as smart planning of urban areas, biofuels for the trucking and airline industry, and rooftop solar photovoltaics.

And Finally, Goodbye Gas, Hello Electrons

Even after these emission reduction measures are employed, the scientists still came up short in ensuring California meets its emissions reduction goal by 2050. So they turned to cars, space and water heaters, and industrial processes that consume fuel and natural gas. They determined that most of these technologies had to be electrified, with electricity constituting 55 percent of end-use energy in 2050, compared to 15 percent today. Overall, this nets a 16-percent reduction in greenhouse gas emissions, the final push needed to achieve an 80-percent reduction below 1990 levels. The largest share of greenhouse gas reductions from electrification came from transportation. In the study, 70 percent of vehicle miles traveled — including almost all light-duty vehicle miles — are powered by electricity in 2050. "The task is daunting, but not impossible. California has the right emissions trajectory with Assembly Bill 32," says Williams, referring to California's 2006 emissions legislation. "And it isn't a matter of technology alone. R&D, investment, infrastructure planning, incentives for businesses, even behavior changes, all have to work in tandem. This requires policy, and society needs to be behind it." Margaret Torn's contribution to the study was supported by DOE's Office of Science.
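As a quick cross-check of the numbers quoted above (my own tally in Python, not code from the study):

```python
# Verify the 2050 target and that the mitigation shares sum to 100%.
emissions_1990 = 427.0                       # million metric tons CO2
target_2050 = emissions_1990 * (1 - 0.80)    # 80% below 1990 levels
print(f"2050 target: {target_2050:.0f} Mt")  # -> 85 Mt, as stated

shares = {"energy efficiency": 28, "decarbonized electricity": 27,
          "non-energy GHG measures": 15, "other technologies": 14,
          "electrification": 16}             # percent of total reduction
print(sum(shares.values()))                  # -> 100
```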
DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website at http://science.energy.gov/ Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov. The study, "The Technology Path to Deep Greenhouse Gas Emissions Cuts by 2050: The Pivotal Role of Electricity," is published Nov. 24 by Science on its online Science Express site.

Dan Krotz | EurekAlert!
<urn:uuid:579cd896-51ab-4357-9b17-df6cc72526d3>
3.46875
2,055
Content Listing
Science & Tech.
35.203245
95,529,493
Audubon California is helping shape the future of this remarkable place for birds. White Pelicans at the Salton Sea. Photo: Robin L The Salton Sea is one of the most important places for birds in North America and is in danger of losing its ecological value. If it does, we will lose a vital part of the Pacific Flyway and face a toxic dust bowl that will threaten public health for more than a million Californians. As part of the Colorado River Delta, the sea filled and dried for thousands of years prior to its current, 35-mile-long incarnation, which came into existence as the result of a massive flood of the Colorado River in 1905. The 350-square-mile sea has partially replaced wetland habitat lost to agricultural and urban conversion in the Colorado River Delta, California's coast, and the San Joaquin Valley. The sea is a globally significant Important Bird Area. For more than a century, the sea has served as a major nesting, wintering, and stopover site for millions of birds of more than 400 species. Today, tiny Eared Grebes winter by the thousands in rafts far out on its surface. American White Pelicans roost on mudflats and fish for tilapia in its shallows. Recently, its water level dropped to the point that colonial seabirds began abandoning nesting sites en masse in 2013, and shallow, marshy habitat areas at the sea's edge have begun to rapidly vanish, particularly at the south end. And in 2017, inputs of Colorado River water that have been maintaining a minimum sea level are scheduled to end, as more water is transferred from local agricultural uses to urban uses on the coast. As less water flows into the sea, it will shrink considerably, becoming more saline and eventually inhospitable to birds, fish, and insects. Audubon California has the opportunity to help address some of the immense challenges of the Salton Sea. Research about how much habitat -- and what kind -- birds are using at the Salton Sea should guide restoration. San Diego Audubon Society recently partnered with an elementary school to educate students about birds that rely on the Salton Sea. Audubon California's Frank Ruiz talks about the need for everyone to pull together to avert an ecological crisis at the Salton Sea -- to protect the people and birds that depend on it. Thanks to the Walton Family Foundation for putting this video together. Learn more about our work at the Salton Sea. The California Water Resources Control Board yesterday heard a presentation from state officials on their progress toward meeting their goals for habitat restoration and dust control at the Salton Sea. According to an agreement completed late last year, the state must complete work on 500 acres by the end of the year, but there is little indication that it will reach even that modest goal. Michael Lynes, Audubon California's policy director, had this to say after the hearing: "The deterioration of the Salton Sea continues, and the rate of progress on the Salton Sea Management Program (SSMP) is not keeping up with the rapidly changing conditions. We encourage the State Water Resources Control Board to work closely with state officials to ensure that upcoming deadlines are met - including constructing 500 acres' worth of projects at the sea by the end of 2018." Michael Cohen, senior associate at the Pacific Institute, added: "While the state has taken some steps towards implementing the Salton Sea Management Program, the rate of the progress is not nearly enough to keep up with the sea's decline.
It is imperative that the State Water Resources Control Board hold the state to its commitment to build habitat and dust control projects at the Salton Sea, this year." Terrific op-ed in Water Deeply from Allison Harvey Turner, of the S.D. Bechtel Jr. Foundation, and Barry Gold, of the Walton Family Foundation, about the State of California's recent progress addressing the environmental and public health crises at the Salton Sea: "At the Salton Sea, the state has the opportunity to demonstrate its commitment to supporting human health, a resilient environment, a strong economy and a sustainable water strategy for Southern California. Now, promising plans on paper must turn into critical progress on the ground. We are closer than ever to solving the Salton Sea crisis. This is the time for diligence and dedication to make it happen." Earlier this month, Audubon California led a group of chapter leaders to offer a firsthand look at the situation at the Salton Sea. We also co-hosted a community event to raise awareness about the developing environmental and public health crises.
<urn:uuid:fe9d5b83-6b4b-42bd-8bfd-89e8cdacd95b>
3.390625
991
News (Org.)
Science & Tech.
39.032216
95,529,506
Graphene is a nanomaterial which combines a very simple atomic structure with intriguingly complex and largely unexplored physics. Since its first isolation about four years ago, researchers have suggested a large number of applications for this material in anticipation of future technological innovations. Specifically, graphene is considered a potential candidate for replacing silicon in future electronic devices. Theoretical physicists from the Swiss Federal Institute of Technology in Lausanne (EPFL) and Radboud University of Nijmegen (The Netherlands) performed a virtual crash-test of graphene as a material for future spintronic devices, in particular as possible components of future computers. The material successfully passed the test, albeit with some reservations. The results were published in the February 1, 2008, issue of Physical Review Letters. Current technology uses the charge of electrons to process information in electronic devices. As an alternative, one can use the intrinsic spin of electrons for this purpose. Electronic devices making use of electron spin have acquired the name spintronic devices. Several types of such devices have already found their way into the marketplace in high-capacity hard drives, and the approach was recently introduced in non-volatile magnetic random access memory (MRAM). Further, replacement of charge-based devices by spintronic components promises faster computers and lower energy consumption. While spintronics requires magnetic materials, graphene itself is non-magnetic. However, when a single graphene layer is cut properly (e.g., using lithographic techniques widely used in current semiconductor technology), electron spins are theoretically predicted to align at the edges of graphene. This amazing property of graphene has attracted considerable attention from theoretical researchers, giving rise to new designs of spintronic devices. However, there is a gap between the theoretical models and actual prototypes of such devices. The problem lies in the fact that such edge spins form a truly one-dimensional system. It is known that one-dimensional systems are very sensitive to thermal disorder, which destroys the perfect arrangement of spins. Strictly speaking, a one-dimensional magnet cannot maintain the perfect alignment of spins at any temperature above absolute zero. This entropy-driven behavior is in sharp contrast to bulk materials (such as iron), which are able to keep the perfect order of electron spins below a certain temperature (the Curie temperature). This property allows bulk materials to be used as permanent magnets, an important component of modern technology. On graphene edges, the order of spins can exist only within a certain range, which limits the dimensions of spintronic devices. Researchers from Switzerland and the Netherlands performed computer-time-demanding first-principles calculations in order to establish the range of magnetic order at graphene edges. At room temperature, the range, or spin correlation length, was found to be around 1 nanometer, which limits device dimensions to several nanometers. This result may at first look rather disappointing: it is about one order of magnitude below the length scales of present-day semiconductor manufacturing processes. Nevertheless, graphene performed better than any other material under the combined constraints of one dimension and room temperature. In other words, graphene is the best performer on the nanoscale.
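The fragility of one-dimensional order described above can be made quantitative with a textbook result (my addition; the paper itself relied on first-principles calculations rather than this simple model). For a one-dimensional Ising chain with coupling $J$ and lattice spacing $a$, the spin correlation length at temperature $T$ behaves as

$$ \xi(T) \sim \frac{a}{2}\, e^{2J / k_{B} T}, $$

which diverges only as $T \to 0$: at any finite temperature the chain breaks up into finite ordered segments, consistent with the nanometre-scale correlation length the authors compute for graphene edges at room temperature.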
"We are very optimistic about these results," says Oleg Yazyev, Postdoctoral Researcher at the Swiss Federal Institute of Technology. "One can now devise different ways of increasing the spin correlation length on graphene edges." Dr.Yazyev further stated, " For instance, we are now looking for an appropriate chemical modification of graphene edges in order to further extend the length-scale limits. This is only beginning of an interesting direction of research". Citation: ‘Magnetic Correlations at Graphene Edges: Basis for Novel Spintronics Devices’, Physical Review Letters 100, 047209 (2008) [web-link: link.aps.org/abstract/PRL/v100/e047209 ] Explore further: Graphene assembled film shows higher thermal conductivity than graphite film
<urn:uuid:b97f9c1f-7b6b-4299-9689-d2e7eaef72ba>
3.953125
812
News Article
Science & Tech.
18.77007
95,529,513
New technology is helping more people see Frank Lloyd Wright’s winter home. How To Celebrate Bat Week In Arizona Just in time for Halloween, the state’s wildlife agency is celebrating Bat Week for the third year in a row. Conservation and government groups from around the United States and Canada celebrate Bat Week each October to promote conservation. Last year, the Arizona Game and Fish Department asked people to build bat houses where the furry mammals can roost. Typically, bats migrate south during the winter, so their stay in Arizona ends soon. “There are more and more ways for people to get involved,” said Angie McIntire, bat management coordinator at Game and Fish. “This year we’re asking people to pull invasive weeds for bats.” Arizona is home to 28 different species of bats, two of which are pollinators. The rest are “insectivores,” meaning they eat bugs. Bats need to eat a variety of insects to stay healthy. The problem is invasive plants push out native plants and the insects that feed on them. Bats control insect populations worldwide. “We can tell quite a lot from bats,” McIntire said. “Some species are like a canary in the coal mine and can tell us what is going on in the environment.” Randy Babb, a biologist who oversees the department’s wildlife viewing program, said interest in bats has been growing. “They are very cool and interesting looking,” Babb said. “They capture the imagination and there’s a growing number of people who realize the important part in ecology they play.” Some species of bats eat up to 12,000 insects the size of a mosquito an hour. Babb is considering establishing a webcam to capture bat activity. The camera would be linked to the internet, allowing the public to watch bats in action. The National Park Service also is getting in on the Bat Week fun. Montezuma Castle is hosting a Bat Week program on Oct. 29 and 30. Where To See Bats In Arizona Interested in doing some bat watching? There are several Arizona locations where you can get an up-close look. The best time to view bats is when the sun is setting — so check out some of these spots at dusk from May through October. The department lists several urban bat viewing sites in Phoenix and Tucson. In Phoenix, the primary viewing spot is at 40th Street and Camelback Road. That’s where a tunnel is located on the north side of the Arizona Canal. Bats also roost at the western end of that tunnel at 24th Street. - Pantano Wash Bridges at Golf links 22 on Speedway and Broadway. - Campbell Avenue Bridge at Rillito River. - Ina Road bridge and drainage pipes at the Santa Rita River. - East Tanque Verde bridge over the Rillito River.
<urn:uuid:99a9018c-5a08-4cdb-92d7-e607fc773aa8>
3.078125
617
News Article
Science & Tech.
57.546124
95,529,557
Epidemiology so far fails to substantiate the claim of an increase in cancer incidence in humans following low-level exposure to ionizing radiation, below about 100 mGy or mSv. The LNT model was introduced as a concept to facilitate radiation protection. But the use of this model led to the claim that even the smallest dose (one electron traversing a cell) may initiate carcinogenesis—for instance, from diagnostic x-ray sources. This claim is highly hypothetical and has resulted in medical, economic, and other societal harm.

The release of energy from splitting a uranium atom turns out to be 2 million times greater than that from breaking the carbon-hydrogen bond in coal, oil or wood. Compared to all the forms of energy ever employed by humanity, nuclear power is off the scale. Wind has less than 1/10th the energy density of wood, wood half the density of coal and coal half the density of octane. Altogether they differ by a factor of about 50. Nuclear has 2 million times the energy density of gasoline. It is hard to fathom this in light of our previous experience. Yet our energy future largely depends on grasping the significance of this differential. George Will presents the facts and arguments.

Fossil fuels are the backbone of modern economies today. Like everything, they come with certain side effects. Nuclear power is the only option for the long-term future. It can be managed so that it is the cleanest, safest energy source we have ever had.

"In science, credit goes to the man who convinces the world, not the man to whom the idea first occurs." This year's Hevesy Nuclear Pioneer awardee, a true son of Johns Hopkins, is both a person to whom ideas first occur and, especially, one who, in the name of nuclear medicine, convinces the world of their veracity and usefulness.
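An order-of-magnitude check of the "2 million times" figure (my own arithmetic, using standard reference values not given in the text: complete fission of uranium-235 yields roughly $8 \times 10^{13}$ J/kg, while burning gasoline yields roughly $4.6 \times 10^{7}$ J/kg):

$$ \frac{E_{\text{fission}}}{E_{\text{gasoline}}} \approx \frac{8 \times 10^{13}\ \text{J/kg}}{4.6 \times 10^{7}\ \text{J/kg}} \approx 1.7 \times 10^{6}, $$

i.e., roughly 2 million on a per-kilogram basis, consistent with the claim above.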
<urn:uuid:49126b67-0b7a-4c2f-a843-60dd9ed9ca2e>
2.828125
382
Knowledge Article
Science & Tech.
47.699359
95,529,567
The recent loss of sea ice in the Arctic is greater than any natural variation in the past 1½ millennia, a Canadian study shows. "The recent sea ice decline … appears to be unprecedented," said Christian Zdanowicz, a glaciologist at Natural Resources Canada, who co-led the study and is a co-author of the paper published Wednesday online in Nature. "We kind of have to conclude that there's a strong chance that there's a human influence embedded in that signal." In September, Germany's University of Bremen reported that sea ice had hit a record low, based on data from a Japanese sensor on NASA's Aqua satellite. The U.S. National Snow and Ice Data Center, using a different satellite data set, reported that the sea ice coverage in 2011 was the second-lowest on record, after the record set in 2007. What makes recent sea ice declines unique is that they have been driven by multiple factors that never all coincided in historical periods of major sea ice loss, said Christophe Kinnard, lead author of the new report. "Everything is trending up – surface temperature, the atmosphere is warming, and it seems also that the ocean is warming and there is more warm and saline water that makes it into the Arctic," Kinnard said, "and so the sea ice is eroded from below and melting from the top." In the past, he said, sea ice loss was driven mostly by an influx of warm, salty water from the North Atlantic into the Arctic due to a change in ocean currents, and wasn't necessarily linked to periods of warmer air temperatures. In contrast, Zdanowicz said, temperature has come to dominate control of the sea ice. Most of the current data about the recent, rapid sea ice loss comes from satellite measurements that began in the 1970s. Other reports of sea ice variability come from sources such as ship logs and only go back around 130 years, said Kinnard, a research scientist at the Centro de Estudios Avanzados en Zonas Áridas in La Serena, Chile, who conducted most of the research while doing his PhD and working at the Geological Survey of Canada under Zdanowicz and fellow glaciologist David Fisher. Zdanowicz said he and his colleagues had some questions in light of the recent dramatic decline of Arctic sea ice: "Is this exceptional? Is this unique? Is this part of a longer cycle?" The researchers compiled data from more than 60 sources, including ice core records, tree rings and lake and ocean sediments, which all provide information about climatic and sea ice changes over hundreds or thousands of years. About 80 per cent of the data came from ice cores from polar glaciers, and about a third of those were Canadian. Using those historical data records and statistics derived from modern data correlating sea ice to other factors, the researchers managed to reconstruct sea ice changes over the past 1,450 years — since about 600 A.D. The model showed that when the sea ice extent was at its lowest historically, at the beginning of that period, at least 8.5 million square kilometres of sea ice covered the Arctic in late summer, the time of year when sea ice is usually at its lowest extent. "Today, we're lower than eight," Kinnard said. Data will improve predictions Kinnard said the information about the causes of past sea ice losses might be useful to scientists who make predictions about sea ice loss and have so far been largely underestimating the rate of its decline: "Which probably indicates that the models are missing something." 
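The calibrate-then-hindcast logic behind such reconstructions can be sketched in a few lines of Python (a deliberately simplified illustration with made-up numbers; the actual study combined more than 60 proxy records with far more careful statistics):

```python
import numpy as np

# Hypothetical data: satellite-era sea ice extent and one overlapping
# proxy record (e.g., an ice-core signal); all values invented here.
years = np.arange(1979, 2012)
extent = 8.0 - 0.03 * (years - 1979) + np.random.normal(0, 0.1, years.size)
proxy = 2.0 * extent + 1.0 + np.random.normal(0, 0.2, years.size)

# 1) Calibrate: regress observed extent on the proxy over the overlap.
slope, intercept = np.polyfit(proxy, extent, 1)

# 2) Hindcast: apply the fitted relation to pre-satellite proxy values.
proxy_old = np.array([18.2, 18.0, 17.7])   # placeholder ancient values
print(slope * proxy_old + intercept)       # reconstructed extents
```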
Zdanowicz added that climate models are tested by seeing how well they are able to reproduce the past – and the new reconstruction allows for that. Sea ice also has a strong effect on the overall climate, the scientists noted. For example, it is bright and so it reflects sunlight, reducing warming, while ocean water is dark, absorbing sunlight and increasing warming, said Anne de Vernal, a researcher at the University of Quebec in Montreal who also co-authored the report. Fisher added that it also affects water and atmospheric chemistry. That in turn could produce feedback that somehow speeds up further sea ice loss. That means information about sea ice could be useful in predicting other aspects of the future climate. On the other hand, de Vernal said, the unique nature of the current sea ice loss makes it harder to guess how other systems will respond. "What we are experiencing at the moment seems to be very exceptional…. This means that we are entering into the world which has no equivalent in the past."
<urn:uuid:522e68cf-e943-4009-9021-618d96bae013>
3.53125
941
News Article
Science & Tech.
41.844091
95,529,568
Soils that formed on the Earth's surface thousands of years ago and that are now deeply buried features of vanished landscapes have been found to be rich in carbon, adding a new dimension to our planet's carbon cycle. The finding, reported today (May 25, 2014) in the journal Nature Geoscience, is significant as it suggests that deep soils can contain long-buried stocks of organic carbon which could, through erosion, agriculture, deforestation, mining and other human activities, contribute to global climate change.

[Photo caption: An eroding bluff on the US Great Plains reveals a buried, carbon-rich layer of fossil soil. A team of researchers led by UW-Madison Assistant Professor of geography Erika Marin-Spiotta has found that buried fossil soils contain significant amounts of carbon and could contribute to climate change as the carbon is released through human activities such as mining, agriculture and deforestation. Credit: Joseph Mason]

"There is a lot of carbon at depths where nobody is measuring," says Erika Marin-Spiotta, a University of Wisconsin-Madison assistant professor of geography and the lead author of the new study. "It was assumed that there was little carbon in deeper soils. Most studies are done in only the top 30 centimeters. Our study is showing that we are potentially grossly underestimating carbon in soils." The soil studied by Marin-Spiotta and her colleagues, known as the Brady soil, formed between 15,000 and 13,500 years ago in what is now Nebraska, Kansas and other parts of the Great Plains. It lies up to six-and-a-half meters below the present-day surface and was buried by a vast accumulation of windborne dust known as loess beginning about 10,000 years ago, when the glaciers that covered much of North America began to retreat. The region where the Brady soil formed was not glaciated, but underwent radical change as the Northern Hemisphere's retreating glaciers sparked an abrupt shift in climate, including changes in vegetation and a regime of wildfire that contributed to carbon sequestration as the soil was rapidly buried by accumulating loess. "Most of the carbon (in the Brady soil) was fire derived or black carbon," notes Marin-Spiotta, whose team employed an array of new analytical methods, including spectroscopic and isotopic analyses, to parse the soil and its chemistry. "It looks like there was an incredible amount of fire." The team led by Marin-Spiotta also found organic matter from ancient plants that, thanks to the thick blanket of loess, had not fully decomposed. Rapid burial helped isolate the soil from biological processes that would ordinarily break down carbon in the soil. Such buried soils, according to UW-Madison geography Professor and study co-author Joseph Mason, are not unique to the Great Plains and occur worldwide. The work suggests that fossil organic carbon in buried soils is widespread and, as humans increasingly disturb landscapes through a variety of activities, a potential contributor to climate change as carbon that had been locked away for thousands of years in arid and semiarid environments is reintroduced to the environment. The element carbon comes in many forms and cycles through the environment – land, sea and atmosphere – just as water in various forms cycles through the ground, oceans and the air. Scientists have long known about the carbon storage capacity of soils, the potential for carbon sequestration, and that carbon in soil can be released to the atmosphere through microbial decomposition.
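To see why deep layers matter quantitatively, a standard way to estimate a soil organic carbon stock (a generic textbook formula, not the specific method of this study) is

$$ \text{SOC stock}\ (\mathrm{kg\,C\,m^{-2}}) = \rho_b \times d \times f_C, $$

where $\rho_b$ is bulk density, $d$ is layer thickness and $f_C$ is the organic carbon mass fraction. With illustrative values of $\rho_b \approx 1300\ \mathrm{kg\,m^{-3}}$, $d = 1\ \mathrm{m}$ and $f_C = 0.01$, a single buried metre would hold about $13\ \mathrm{kg\,C\,m^{-2}}$, on the order of what the top 30 centimetres of many modern soils contain; none of these numbers are measurements from the Brady soil.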
The deeply buried soil studied by Marin-Spiotta, Mason and their colleagues, a one-meter-thick ribbon of dark soil far below the modern surface, is a time capsule of a past environment, the researchers explain. It provides a snapshot of an environment undergoing significant change due to a shifting climate. The retreat of the glaciers signaled a warming world, and likely contributed to a changing environment by setting the stage for an increased regime of wildfire. "The world was getting warmer during the time the Brady soil formed," says Mason. "Warm-season prairie grasses were increasing and their expansion on the landscape was almost certainly related to rising temperatures." The retreat of the glaciers also set in motion an era when loess began to cover large swaths of the ancient landscape. Essentially dust, loess deposits can be thick — more than 50 meters deep in parts of the Midwestern United States and areas of China. It blankets large areas, covering hundreds of square kilometers in meters of sediment. The study conducted by Marin-Spiotta, Mason, former UW-Madison Nelson Institute graduate student Nina Chaopricha, and their colleagues was supported by the National Science Foundation and the Wisconsin Alumni Research Foundation.

—Terry Devitt, 608-262-8282, firstname.lastname@example.org

NOTE: A high-resolution photo to accompany this release can be downloaded at http://www.news.wisc.edu/newsphotos/soils-2014.html

Erika Marin-Spiotta | EurekAlert!
<urn:uuid:8fb97b2b-8335-4643-8bd5-a53341ec869a>
4.03125
1,595
Content Listing
Science & Tech.
40.077899
95,529,577