Species Detail - Membranoptera alata - Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records within each 50km grid square (WGS84).
1 January (recorded in 1985)
31 December (recorded in 1898)
National Biodiversity Data Centre, Ireland, Membranoptera alata, accessed 16 July 2018, <https://maps.biodiversityireland.ie/Species/212>
Scientists have discovered the fossil of an unusual large-bodied 'nude' sea-creature from half a billion years ago.
Earth's first complex animals were an eclectic bunch that lived in the shallow oceans between 580-540 million years ago.
A new study published today in the Proceedings of the National Academy of Sciences says we are dramatically underestimating the role inland fisheries play in global food security.
You have probably encountered a raccoon raiding the trash in your neighborhood, seen a rat scurrying through the subway or tried to shoo away birds from your picnic. But have you ever wondered what makes these animals so ...
The days of searching the oceans around the world to find and study rare and endangered marine animals are not over. However, an emerging tool that can be used with just a sample of seawater may help scientists learn more ...
Lions and tigers and bears are increasingly becoming night owls because of us, a new study says.
Fish will forego their own temperature preferences in order to remain part of a group, according to a new study.
Anyone who's seen a flock of starlings twist and turn across the sky may have wondered: How do they maneuver in such close formation without colliding?
Whenever we think about extinct animals we often imagine them eating their favourite meals, whether it be plants, other animals or a combination of both.
On July 20, 1969, Neil Armstrong put the first human footprint on the moon. But when did animals leave the first footprint on Earth?
Solar energy can be used almost anywhere to power a home, generate electricity or run small appliances like roadside signs or even calculators. The U.S. Department of Energy's Solar Energy Potential Map shows that every location in the continental United States offers enough sunlight to generate at least 250 watts of electricity per square foot of collector space per day, with many locations capable of generating much more than that. Hydropower production, on the other hand, is limited to locations with access to a sufficient supply of running water to power turbines and other generating equipment. Many areas in the United States are considered exclusion areas, where federal or other statutes prohibit hydropower production.
SCIENCE SOURCE NEWS: Brought to you by Science Source Images, your source for stock photography, illustration and video.
Moths and Enemy Sonar
Luna Moth BE5506
"The camouflage and mimicry techniques that animals use to avoid becoming a meal aren't much use against a predator using echolocation. But a new study shows that moths can outsmart sonar with a flick of their long tails. Using high-speed infrared cameras and ultrasonic microphones, the researchers watched brown bats preying on moths. Luna moths with tails were 47 percent more likely to survive an attack than moths without tails. Bats targeted the tail during 55 percent of the interactions, suggesting the moths may lure bats to the tails to make an attack more survivable." - Science Daily, University of Florida, February 18, 2015, Moths shed light on how to fool enemy sonar View more images of luna moths
Half of the Great Barrier Reef has died since 2016 and scientists say it's a direct result of climate change.
Coral lives in a symbiotic relationship with algae. The algae convert energy from the sun into food that nourishes the coral. When water temperatures rise, the algae vacate, causing the coral to 'bleach' and eventually die. The results can be devastating. The bleaching spreads across miles of reef, transforming once-spectacular ecosystems into barren wastelands. See Stock Images of the Reef
"People often ask me, will we have a Great Barrier Reef in 50 years or 100 years?" says Terry Hughes, the director of the ARC Center of Excellence for Coral Reef Studies. "And my answer is, yes, I certainly hope so – but it's completely contingent on the near-future trajectory of greenhouse-gas emissions."
The Paris climate agreement of 2015 set a goal to prevent the globe from warming by two degrees Celsius. Since the Industrial …
Beach weather gives us the opportunity to get outdoors, enjoy the fresh air, and soak up some Vitamin D, but also brings concerns about excessive sun exposure. With stronger and more frequent sun comes a higher risk for skin to be damaged by UV rays, making the body more susceptible to skin cancer.
Skin cancer is one of the most common types of cancer. The Skin Cancer Foundation states that more people are diagnosed with skin cancer each year in the U.S. than all other cancers combined. The cause is most often UV rays from the sun or tanning beds. Skin cancer is generally categorized into two groups: melanoma and nonmelanoma.
Melanoma cancer begins in melanocytes, which are cells that produce skin pigment (melanin) and reside deep within the epidermis (the outer layer of the skin). Melanoma is often more serious than nonmelanoma cancer because it tends to advance and spread rapidly. The number of new melanoma cases is also on the rise. That being said, the ea…
- In computer science, a readers–writer (RW) or shared-exclusive lock (also known as a multiple readers/single-writer lock, a multi-reader lock, a push lock, or an MRSW lock) is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, while write operations require exclusive access. This means that multiple threads can read the data in parallel but an exclusive lock is needed for writing or modifying data. When a writer is writing the data, all other writers or readers will be blocked until the writer is finished writing. A common use might be to control access to a data structure in memory that cannot be updated atomically and is invalid (and should not be read by another thread) until the update is complete.
Upgradable RW lock
Some RW locks allow the lock to be atomically upgraded from being locked in read-mode to write-mode, as well as being downgraded from write-mode to read-mode.
RW locks can be designed with different priority policies for reader vs. writer access. The lock can be designed to always give priority to readers (read-preferring), to always give priority to writers (write-preferring), or to leave priority unspecified. These policies lead to different trade-offs with regard to concurrency and starvation.
- Read-preferring RW locks allow for maximum concurrency, but can lead to write-starvation if contention is high. This is because writer threads will not be able to acquire the lock as long as at least one reading thread holds it. Since multiple reader threads may hold the lock at once, a writer thread may continue waiting for the lock while new reader threads are able to acquire it, even to the point where the writer may still be waiting after all of the readers which were holding the lock when it first attempted to acquire it have released the lock. Priority to readers may be weak, as just described, or strong, meaning that whenever a writer releases the lock, any blocking readers always acquire it next.
- Write-preferring RW locks avoid the problem of writer starvation by preventing any new readers from acquiring the lock if there is a writer queued and waiting for the lock; the writer will acquire the lock as soon as all readers which were already holding the lock have completed. The downside is that write-preferring locks allow for less concurrency in the presence of writer threads, compared to read-preferring RW locks. The lock is also less performant because each operation, taking or releasing the lock for either read or write, is more complex, internally requiring taking and releasing two mutexes instead of one. This variation is sometimes also known as a "write-biased" readers–writer lock.
- Unspecified-priority RW locks do not provide any guarantees with regard to read vs. write access. Unspecified priority can in some situations be preferable if it allows for a more efficient implementation.
Several implementation strategies for readers–writer locks exist, reducing them to synchronization primitives that are assumed to pre-exist.
Using two mutexes
Raynal demonstrates how to implement an R/W lock using two mutexes and a single integer counter. The counter, b, tracks the number of blocking readers. One mutex, r, protects b and is only used by readers; the other, g (for "global") ensures mutual exclusion of writers. This requires that a mutex acquired by one thread can be released by another. The following is pseudocode for the operations:
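In Python, whose threading.Lock may be released by a thread other than the one that acquired it (as the algorithm requires), Raynal's construction can be sketched as follows; the class and method names are illustrative:

```python
import threading

class ReadPreferringRWLock:
    """Raynal's two-mutex construction with a single integer counter."""

    def __init__(self):
        self.r = threading.Lock()   # protects b; used only by readers
        self.g = threading.Lock()   # "global" mutex ensuring writer exclusion
        self.b = 0                  # number of readers currently holding the lock

    def begin_read(self):
        with self.r:
            self.b += 1
            if self.b == 1:         # first reader locks out writers
                self.g.acquire()

    def end_read(self):
        with self.r:
            self.b -= 1
            if self.b == 0:         # last reader lets writers back in
                self.g.release()

    def begin_write(self):
        self.g.acquire()

    def end_write(self):
        self.g.release()
```

Note that the reader which releases g in end_read is generally not the reader that acquired it in begin_read, which is why the construction needs mutexes without ownership checking.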
This implementation is read-preferring.
Using a condition variable and a mutex
Alternatively, a write-preferring R/W lock can be implemented in terms of a condition variable and an ordinary (mutex) lock, in addition to an integer counter and a boolean flag. The lock-for-read operation in this setup is:
Each of lock-for-read and lock-for-write has its own inverse operation. Releasing a read lock is done by decrementing r and signalling c if r has become zero (both while holding m), so that one of the threads waiting on c can wake up and lock the R/W lock. Releasing the write lock means setting w to false and broadcasting on c (again while holding m).
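These operations can be sketched in Python using threading.Condition; the names are illustrative, and this minimal form holds off new readers only while a writer actually holds the lock, so a stricter write-preferring variant would additionally track queued writers:

```python
import threading

class CondVarRWLock:
    """R/W lock from a mutex m, condition variable c, counter r, and flag w."""

    def __init__(self):
        self.m = threading.Lock()
        self.c = threading.Condition(self.m)
        self.r = 0          # readers currently holding the lock
        self.w = False      # True while a writer holds the lock

    def lock_read(self):
        with self.m:
            while self.w:               # new readers yield to an active writer
                self.c.wait()
            self.r += 1

    def unlock_read(self):
        with self.m:
            self.r -= 1
            if self.r == 0:             # last reader wakes a waiting writer
                self.c.notify_all()

    def lock_write(self):
        with self.m:
            while self.w or self.r > 0: # wait until no reader or writer holds it
                self.c.wait()
            self.w = True

    def unlock_write(self):
        with self.m:
            self.w = False
            self.c.notify_all()         # wake all waiting readers and writers
```

The wait call releases m while blocking and reacquires it before returning, exactly as the footnoted "standard wait operation" below describes.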
Programming language support
- POSIX standard pthread_rwlock_t and associated operations
- ReadWriteLock interface and the ReentrantReadWriteLock locks in Java version 5 or above
- System.Threading.ReaderWriterLockSlim lock for C# and other .NET languages
- std::shared_mutex read/write lock in C++17
- boost::upgrade_mutex locks in the Boost C++ Libraries
- SRWLock, added to the Windows operating system API as of Windows Vista
- Phase-fair reader–writer lock, which alternates between readers and writers
- std::sync::RwLock read/write lock in Rust
- Poco::RWLock in the POCO C++ Libraries
- mse::recursive_shared_timed_mutex in the SaferCPlusPlus library, a version of std::shared_timed_mutex that supports recursive ownership semantics
- txrwlock.ReadersWriterDeferredLock, a readers/writer lock for Twisted
- This is the standard "wait" operation on condition variables, which, among other actions, releases the mutex m.
- Hamilton, Doug (21 April 1995). "Suggestions for multiple-reader/single-writer lock?". Newsgroup: comp.os.ms-windows.nt.misc. Usenet: hamilton.798430053@BIX.com. Retrieved 8 October 2010.
- "Practical lock-freedom" by Keir Fraser 2004
- "Push Locks – What are they?". Ntdebugging Blog. MSDN Blogs. 2009-09-02. Retrieved 11 May 2017.
- Raynal, Michel (2012). Concurrent Programming: Algorithms, Principles, and Foundations. Springer.
- Stevens, W. Richard; Rago, Stephen A. (2013). Advanced Programming in the UNIX Environment. Addison-Wesley. p. 409.
- The java.util.concurrent.locks.ReentrantReadWriteLock Java readers–writer lock implementation offers a "fair" mode
- Herlihy, Maurice; Shavit, Nir (2012). The Art of Multiprocessor Programming. Elsevier. pp. 184–185.
- Nichols, Bradford; Buttlar, Dick; Farrell, Jacqueline (1996). PThreads Programming: A POSIX Standard for Better Multiprocessing. O'Reilly. pp. 84–89.
- Butenhof, David R. (1997). Programming with POSIX Threads. Addison-Wesley. pp. 253–266.
- "The Open Group Base Specifications Issue 6, IEEE Std 1003.1, 2004 Edition: pthread_rwlock_destroy". The IEEE and The Open Group. Retrieved 14 May 2011.
- "ReaderWriterLockSlim Class (System.Threading)". Microsoft Corporation. Retrieved 14 May 2011.
- "New adopted paper: N3659, Shared Locking in C++—Howard Hinnant, Detlef Vollmann, Hans Boehm". Standard C++ Foundation.
- Anthony Williams. "Synchronization – Boost 1.52.0". Retrieved 31 Jan 2012.
- Alessandrini, Victor (2015). Shared Memory Application Programming: Concepts and Strategies in Multicore Application Programming. Morgan Kaufmann.
- "The Go Programming language - Package sync". Retrieved 30 May 2015.
- "Reader–Writer Synchronization for Shared-Memory Multiprocessor Real-Time Systems" (PDF).
- "std::sync::RwLock - Rust". Retrieved 10 December 2015.
- "Readers/Writer Lock for Twisted". Retrieved 28 September 2016.
When answering a query and searching for a goal, Prolog looks for a match where the predicates and their arguments are the same as in the query. This match can be from the database, or any intermediate places along the search path. Put another way, attempts are made to find a clause in the database from which the goal follows. If there is a successful match, Prolog will respond with yes. Alternatively if terms are not identical, variables may be instantiated. An integer or atom will match only itself. A more complicated structure will match another structure with the same relation and arity, provided that all corresponding arguments match.
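The matching rules described above can be sketched in Python (an illustrative toy, not part of any Prolog implementation): variables are represented as capitalised strings, and compound terms as tuples whose first element names the relation.

```python
def is_var(term):
    # Prolog convention: an identifier starting with a capital letter is a variable
    return isinstance(term, str) and term[:1].isupper()

def match(a, b, env):
    """Return an extended binding environment if a and b match, else None."""
    if is_var(a) and a in env:
        a = env[a]                      # dereference an instantiated variable
    if is_var(b) and b in env:
        b = env[b]
    if is_var(a):
        return {**env, a: b}            # instantiate the variable
    if is_var(b):
        return {**env, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple):
        # structures match only with the same relation and arity ...
        if a[0] != b[0] or len(a) != len(b):
            return None
        # ... provided that all corresponding arguments match
        for x, y in zip(a[1:], b[1:]):
            env = match(x, y, env)
            if env is None:
                return None
        return env
    return env if a == b else None      # an integer or atom matches only itself
```

For example, matching the goal ("likes", "X", "mary") against the clause head ("likes", "john", "mary") succeeds and instantiates X to john.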
Keywords: Search Tree; Control Word; Search Path; Control Predicate; Successful Match
The NASA/ESA Hubble Space Telescope and NASA’s Spitzer Space Telescope have teamed up to weigh the stars in distant galaxies. One of these galaxies is not only one of the most distant ever seen, but it appears to be unusually massive and mature for its place in the young Universe.
This has surprised astronomers because the earliest galaxies in the Universe are commonly thought to have been much smaller agglomerations of stars that gradually merged together later to build the large majestic galaxies like our Milky Way.
"This galaxy appears to have bulked up amazingly quickly, within a few hundred million years after the Big Bang," said Bahram Mobasher of the European Space Agency and the Space Telescope Science Institute, a member of the team that discovered the galaxy.
Lars Lindberg Christensen | alfa
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Characterizing and calibrating a low-impedance, large Helmholtz coil generating 60 Hz magnetic fields with amplitudes well below the earth's magnetic field is difficult and imprecise when coil shielding is not available and noise is an issue. Parameters influencing the calibration, such as temperature and coil impedance, must be factored into the process. A simple and reliable calibration technique is developed and used to measure low-amplitude fields over a spatial grid using a standard Hall-effect probe gaussmeter. These low-amplitude fields are typically hard or impossible to detect in the presence of background fields when the gaussmeter is used in the conventional manner. Standard deviations of two milligauss and less have been achieved over a spatial grid in a uniform field region. Theoretical and measured fields are compared, yielding reasonable agreement for a large coil system designed and built for bioelectromagnetic experiments at the University of Nevada at Las Vegas using simple tools. Theoretical results need to be compared with, and adjusted in accord with, measurements taken over a large parameter space within the design constraints of the coil. Magnetic field measurements made over a four-year period are shown to be consistent. Characterizing and calibrating large Helmholtz coils can be performed with rulers, levels, plumb lines, and inexpensive gaussmeters.
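As background for the field comparisons described in the abstract, the on-axis field of an ideal Helmholtz pair (two identical coaxial loops spaced one coil radius apart) follows from the standard single-loop formula. This is a sketch with illustrative names and parameter values, not the paper's actual calibration model.

```python
import math

MU0 = 4 * math.pi * 1e-7          # vacuum permeability, T*m/A

def helmholtz_center_field(n_turns, current_a, radius_m):
    """Field at the midpoint of an ideal Helmholtz pair, in tesla."""
    return (4 / 5) ** 1.5 * MU0 * n_turns * current_a / radius_m

def loop_axial_field(n_turns, current_a, radius_m, z_m):
    """On-axis field of a single circular loop at axial distance z, in tesla."""
    return (MU0 * n_turns * current_a * radius_m ** 2
            / (2 * (radius_m ** 2 + z_m ** 2) ** 1.5))

def helmholtz_axial_field(n_turns, current_a, radius_m, z_m):
    """Superpose the two coils, spaced one radius apart, centred at z = 0."""
    half = radius_m / 2
    return (loop_axial_field(n_turns, current_a, radius_m, z_m - half)
            + loop_axial_field(n_turns, current_a, radius_m, z_m + half))
```

The superposition reproduces the closed-form centre value, and the field stays nearly flat over the central region, which is why Helmholtz pairs are favoured for uniform-field bioelectromagnetic exposure work.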
Calibration; Geomagnetism and paleomagnetism; Magnetic fields; Alternating current power transmission; Experiment design; Field theory; Magnetic field Measurements; Unified field theories
Computer Engineering | Electrical and Computer Engineering | Engineering
Copyright American Institute of Physics. Used with permission.
Schill, R. A., & Hoff, K. V. Characterizing and Calibrating a Large Helmholtz Coil at Low AC Magnetic Field Levels with Peak Magnitudes Below the Earth's Magnetic Field. Review of Scientific Instruments, 72(6).
Could there be evidence of water on Mars?!
NASA’s Mars Exploration Rover Opportunity, which landed in early 2004 in the Meridiani Planum region of Mars, has finally reached its destination after a two-year extended mission. The rover’s journey took it to Perseverance Valley on the rim of Endeavour Crater. The rover has been taking images of the area in greater detail and higher resolution than orbital imagery could provide.
It is still unknown how Perseverance Valley formed. Scientists hypothesize that the valley could have been formed by flowing water, by debris flow (small amounts of water carrying mud and boulders), or even by wind erosion. The purpose of the rover’s mission is to provide information to support one of these formation hypotheses. Opportunity Project Scientist Matt Golombek of NASA’s Jet Propulsion Laboratory, Pasadena, California, says, “The science team is really jazzed at starting to see this area up close and looking for clues to help us distinguish among multiple hypotheses about how the valley formed.”
The team plans to have the rover take pictures in a way that will yield detailed three-dimensional information on the surface of Mars. However, finding a path across the valley with minimal obstacles for the rover will not be easy. The valley “extends down from the rim’s crest line into the crater, at a slope of about 15-17 degrees for a distance of about two football fields.” The difficulty of this mission is not the descent down the hill; it’s going back up. The team will need to find a path that is suitable for driving the rover through the whole valley. Researchers plan to have the rover analyze the surface and makeup of the soil throughout all the levels of the valley. Since landing on Mars, the rover has traveled about 27.8 miles. We are looking forward to seeing what new discoveries are made from Opportunity’s explorations! #GetExcitedSU
Development of Luminescent Detectors for Hot Plasmas
The main requirements for a broadband radiation detector for high-temperature plasmas include: a known or traceable response over as broad a wavelength range as possible; a quick response time; immunity to electromagnetic interference; and, in fusion devices, the capability to cope with neutron-producing plasmas, resistance to radiation, and operation with limited access to the machine. Since most present designs lack sufficient radiation hardness, research is needed to find a solution. The luminescent properties of phosphor materials are commonly used in radiation detection and measurement. The energy absorbed by the phosphor is partly converted into light whose wavelength range can be tailored by appropriate choice of material and impurity content. The main potential advantage for a fusion plasma is that only a thin film of the phosphor plus radiation filters needs to be close to the plasma, whereas the most sensitive parts of the system, which receive the luminescent signal via metallic optics and fibres, can be withdrawn to a safer distance. Here we summarise the effort invested over the last years at CIEMAT to develop broadband plasma radiation detectors with spatial, temporal and energy resolution. Particular attention is given to the extrapolation of such detectors to the harsh environment of an ITER-like device.
Keywords: Sodium Salicylate; Luminescent Material; Plasma Radiation; High Temperature Plasma; Luminescent Detector
Researchers at the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) have for the first time simulated the formation of structures called "plasmoids" during Coaxial Helicity Injection (CHI), a process that could simplify the design of fusion facilities known as tokamaks.
The findings, reported in the journal Physical Review Letters, involve the formation of plasmoids in the hot, charged plasma gas that fuels fusion reactions. These round structures carry current that could eliminate the need for solenoids - large magnetic coils that wind down the center of today's tokamaks - to initiate the plasma and complete the magnetic field that confines the hot gas.
"Understanding this behavior will help us produce plasmas that undergo fusion reactions indefinitely," said Fatima Ebrahimi, a physicist at both Princeton University and PPPL, and the paper's lead author.
Ebrahimi ran a computer simulation that modeled the behavior of plasma and the formation of plasmoids in three dimensions throughout a tokamak's vacuum vessel. This marked the first time researchers had modeled plasmoids in conditions that closely mimicked those within an actual tokamak. All previous simulations had modeled only a thin slice of the plasma - a simplified picture that could fail to capture the full range of plasma behavior.
Researchers validated their model by comparing it with fast-camera images of plasma behavior inside the National Spherical Torus Experiment (NSTX), PPPL's major fusion facility. These images also showed plasmoid-like structures, confirming the simulation and giving the research breakthrough significance, since it revealed the existence of plasmoids in an environment in which they had never been seen before.
"These findings are in a whole different league from previous ones," said Roger Raman, leader for the Coaxial Helicity Injection Research program on NSTX and a coauthor of the paper.
The findings may provide theoretical support for the design of a new kind of tokamak with no need for a large solenoid to complete the magnetic field. Solenoids create magnetic fields when electric current courses through them in relatively short pulses.
Today's conventional tokamaks, which are shaped like a doughnut, and spherical tokamaks, which are shaped like a cored apple, both employ solenoids. But future tokamaks will need to operate in a constant or steady state for weeks or months at a time. Moreover, the space in which the solenoid fits - the hole in the middle of the doughnut-shaped tokamak - is relatively small and limits the size and strength of the solenoid.
A clear understanding of plasmoid formation could thus lead to a more efficient method of creating and maintaining a plasma through transient Coaxial Helicity Injection. This method, originally developed at the University of Washington, could dispense with a solenoid entirely and would work like this:
Understanding how the magnetic lines in plasmoids snap closed could also help solar physicists decode the workings of the sun. Huge magnetic lines regularly loop off the surface of the star, bringing the sun's hot plasma with them. These lines sometimes snap together to form a plasmoid-like mass that can interfere with communications satellites when it collides with the magnetic field that surrounds the Earth.
While Ebrahimi's findings are promising, she stresses that much more is to come. PPPL's National Spherical Torus Experiment-Upgrade (NSTX-U) will provide a more powerful platform for studying plasmoids when it begins operating this year, making Ebrahimi's research "only the beginning of even more exciting work that will be done on PPPL equipment," she said.
PPPL, on Princeton University's Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas -- ultra-hot, charged gases -- and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy's Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
Raphael Rosen | EurekAlert!
O2 stable hydrogenases for applications
23.07.2018 | Max-Planck-Institut für Chemische Energiekonversion
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
In this chapter we consider an FK2 of the form (1.2.2): ϕ(x) − λ ∫_a^b k(x,s)ϕ(s)ds = f(x), where a ≤ x,s ≤ b, and the kernel k(x,s) has a weak singularity at an endpoint. In numerical approximations, whether by quadrature, finite differences, finite elements, and the like, computational methods generally use polynomials as basis functions to obtain approximate solutions that are sufficiently accurate in a region where the function to be approximated is ‘smooth’ (or analytic). However, such methods fail significantly in a neighborhood of singularities of the function. An analytic function ϕ has a singularity at a point at which ϕ does not exist and at the endpoints of an interval or a contour. On the other hand, the numerical approximation obtained by using Whittaker’s cardinal function C(ϕ, h, x) yields much better results than those obtained by polynomial methods in the case when singularities are present at an endpoint of an interval. This method, however, may or may not yield better results in the absence of singularities. The function C(ϕ, h, x) is called the cardinal interpolant to ϕ(x).
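The cardinal interpolant referred to here is the sinc expansion C(ϕ, h, x) = Σ_k ϕ(kh) sinc((x − kh)/h). A truncated version is easy to sketch in Python; the truncation range and test function below are illustrative assumptions, not taken from the chapter.

```python
import math

def sinc(x):
    """Normalised sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def cardinal(phi, h, x, n):
    """Truncated Whittaker cardinal interpolant using nodes kh for |k| <= n."""
    return sum(phi(k * h) * sinc((x - k * h) / h) for k in range(-n, n + 1))
```

Because sinc(m) vanishes at every nonzero integer m, the interpolant reproduces ϕ exactly at the nodes kh and converges rapidly between them for smooth, rapidly decaying ϕ.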
Keywords: Lipschitz condition, collocation point, quadrature rule, interpolation point, unknown density function
P. S. We saw a bear!
Tuesday, May 31, 2011
Tuesday, May 17, 2011
Humanity is at odds with the world’s top predators. Ecologists have long recognized the importance of top predators for the functioning of food webs. Decades of work have revealed that top predator removal can impact the biomass of primary producers. This is because, as top predators decline, prey populations increase, initiating trophic cascades. Traditional trophic cascades are mediated by the demographic and behavioral responses of prey populations. But predator removal may also have important effects on prey evolution. When predators are present and prey density is low, natural selection may favor prey traits that are important for predator escape ability. However, when predators are eliminated and prey density increases, natural selection may shift, now favoring traits that are important for competitive ability. This shift in natural selection may modify important trophic interactions.
Mike Kinnison, Ben Wasserman and I investigated the impact of predator loss on the evolution and ecology of prey. We took advantage of a historical introduction experiment involving Trinidadian guppies. In 1976, John Endler introduced about 200 guppies from a site with the top fish predator Crenicichla to a site lacking predators. Much is known about how this introduced guppy population has evolved in terms of color patterns and life history traits. However, little is known about how trophic morphology and feeding rates have changed in response to predator loss. We hypothesized that the absence of Crenicichla would lead to increased guppy density and heightened intraspecific competition. Due to trade-offs between gathering resources and avoiding predators, we predicted that the population released from predation would display heightened feeding rates compared to the high-predation source population.
Our results confirmed this prediction. The introduced population and a nearby natural low-predation population both displayed greater guppy densities and higher individual level consumption rates than the high predation source population. In addition, morphometric analysis revealed that both head and body shape have evolved to facilitate heightened resource acquisition. Results from prior experiments in mesocosms suggest that heightened feeding rates in low-predation guppy populations may cause them to have stronger top-down effects on algal biomass compared to high-predation populations.
Traditionally, the loss of top predators has been considered from a strictly ecological point of view. Our results suggest that predator loss may drive prey evolution, which itself may have important ecological effects – in this case, amplifying the strength of trophic cascades. If our results reflect a common response of prey populations to the loss of top predators, then a full assessment of the ecological impacts of top predator removal must carefully consider the effects of prey evolution.
This study recently appeared in PLoS ONE: http://dx.plos.org/10.1371/journal.pone.0018879.
Thursday, May 5, 2011
The Dutch word in this title probably needs translation. “Suskewiet” is an onomatopoeia (a sound-imitating word) imitating the last part of the song of the male chaffinch (Fringilla coelebs), a small passerine bird in the finch family Fringillidae. Sorry, this is not entirely correct. “Suskewiet” only refers to the last part of the song as it is performed in Flanders, Belgium. Indeed, the chaffinch occurs all over Europe, parts of Asia, and North Africa, and there is considerable geographical variation in song. So “suskewiet” belongs to the repertoire of the birds' Flemish dialect.
The male of the chaffinch.
Why would one need a specific word to talk about the last part of the song of Flemish chaffinches? This is a long story – but the picture below tells it all:
A “vinkenzetting”, or finch championship, in Flanders (Belgium).
Silence please! Here you see an important folkloristic competition going on, called “vinkenzetting” (finch championship). Each of the boxes contains a male finch, and the championship is all about which finch makes this precise “suskewiet” sound most often. The men and women in the picture are holding a pole and a piece of chalk, to keep track of the number of suskewiet's. The birds can hear each other, and their territorial nature makes them sing “suskewiet” as much as possible. The best finches sing more than 600 times per hour! The owner of the best finch wins a symbolic prize, and the finch becomes more valuable. The tradition goes back to the Middle Ages (first mention of a finch game in 1595), and, despite or thanks to its weirdness, still persists. People practising the game are called “vinkeniers” (“finchers”). They used to have a pretty bad reputation for illegal practices such as blinding the birds (up to the 1900's it was thought that blind birds sing more) and depleting the wild finch population with crude catching techniques (up to the 1970's). Luckily, the finchers are now organised in an official federation (http://www.avibo.be/home.php), controlling the games with strict regulations and using domesticated birds only.
Just like farmers who know how to breed cattle, finchers have an impressive knowledge of how to breed finches (including the inheritance of interesting traits such as song and colour). I'm not sure how much of the variation in song is heritable, but breeders do select fathers based on song quality. However, it is possible to teach a male bird the right song by exposing it to “teacher birds” or audio recordings. Interestingly, a bird not singing the Flemish “suskewiet” is called a “francophone”, making a supposedly less elegant “suskeweiih” noise. Talking about Belgium (and Quebec), this sounds a bit politically incorrect. However, birds have a right wing and a left wing, so they are probably politically neutral.
Returning from the Galapagos field expedition 2011, it came to my mind that male Darwin's finches use neither “suskewiet” nor “suskeweiih” in their songs. This is no surprise, as Darwin's finches and the chaffinch are not closely related. Their songs are actually very different, even though they have similar beaks.
Darwin probably would not be surprised to read that people are weird enough to domesticate finches (such as chaffinch and zebra finch), applying his theory of selection by domestication. He also wouldn’t be surprised to read that Darwin’s finches have become iconic for his theory on natural selection. But he might be surprised to read that humans can alter the strength of natural selection in Darwin’s finches. During previous expeditions, the Hendry lab has been investigating this possibility in the seed-eating medium ground finch (Geospiza fortis) from Santa Cruz island, by comparing the morphology of a population living at Academy Bay, a human-impacted site, with a population living at El Garrapatero, a natural site. G. fortis birds from Academy Bay had smaller beak size than birds from El Garrapatero. This probably implies that the presence of humans has caused a shift in the finches’ resource distribution by the introduction of human food (such as bread, rice and potato chips) or new plant species into the environment, creating a selective advantage for finches with smaller beaks.
Chaffinches in captivity and Darwin’s finches in the wild thus seem to have in common that humans can influence which finches are going to contribute to the next generation. In the first case we talk about intended selection by domestication, in the second case about unintended alteration of the strength of natural selection. Is it exaggerated to describe the latter as “unintended domestication”?
So far, indications that humans might influence the evolution of Darwin's finches have only been observed in a single species. However, as scientists we don't want to rely on a single significant P-value - just as a fincher is not satisfied with a single “suskewiet”. During the 2010 and 2011 expeditions, we measured five additional species at the human-impacted and the natural study sites: the small ground finch (Geospiza fuliginosa), the large ground finch (Geospiza magnirostris), the cactus finch (Geospiza scandens), the vegetarian finch (Platyspiza crassirostris), and the small tree finch (Camarhynchus parvulus). So, let's have a look at the potential human impact on the beak morphology of these species. P-values are not very suitable here - as finchers wouldn't understand them. Luckily, “suskewiet” sounds like a significant result (P < 0.05), whereas “suskeweiih” sounds perfect for a non-significant result (P > 0.05) - and finchers do understand it. This opens opportunities for a new “suskewiet vs. suskeweiih”-based school in statistics. Here I show how it works, testing for differences in beak length, beak depth and beak width, respectively, in each of the investigated species.
The small ground finch – “Suskeweiih, suskeweiih, suskeweiih!”.
The vegetarian finch – “Suskeweiih, suskewiet, suskewiet!”.
The large ground finch – “Suskeweiih, suskeweiih, suskeweiih!”.
The cactus finch – “Suskewiet, suskeweiih, suskeweiih!”. This song is a bit quiet, but still elegant. There is a marginally significant difference for beak length, which was larger at El Garrapatero in both years. The other beak dimensions don't differ.
The small tree finch – “Suskeweiih, suskewiet, suskewiet!”. Ha! Exactly the same song as the vegetarian finch. Beak width and beak depth differ significantly between the sites, beak length does not. Beaks were larger at El Garrapatero, and this was consistent across years.
So far so good. Now let's listen to the birds when they sing all together. It sounds harmonious, because all suskewiet's are generated by significantly larger beaks at El Garrapatero than at Academy Bay. So, the human-altered environment of Academy Bay seems to affect these bird species in the same way as observed previously for the medium ground finch. This makes sense, because the diets of these bird species partially overlap. A human-induced shift towards smaller and softer food items might thus select for smaller beaks in multiple bird species.
After the game finchers don’t go home early. They go to the local bar to talk about the championship and their finches, and to share their experiences or secrets. Science might benefit from this, so I might go there and ask them for advice. I'll probably start drinking to forget my disappointment in the ground finches, because both the large and the small one did not sing very well. In contrast, my cactus finch, small tree finch and vegetarian finch did a great job. This is changing my perspective. Small ground finches in particular are really keen on human food. Perhaps it does not affect the direction of selection in this species, because its natural diet might already be rather similar to the human food they can find. In contrast, I expected that cactus finches (feeding on cactus flowers), vegetarian finches (feeding on fruits) and small tree finches (mostly feeding on insects) would not bother about the introduction of novel food items by humans, and that their beak morphology would be conserved. However, my state-of-the-art “suskewiet vs. suskeweiih”-based statistical analyses suggested the opposite.
It is about time to bring my finches back to the aviary. It is worrying how rice, bread and potato chips might alter the life of a finch in the wild. In this sense there is sadness in all of the above songs. Let's see how this is going to affect finch championships in the future.
In February, Deep Earth Academy published a video guide to accompany the third episode of the Expedition 342: Newfoundland video series, entitled “Time Machine.” Comprising a student questionnaire and a detailed teachers’ guide, the package will enable students to identify how the JOIDES Resolution helps to study the past. They will also be able to list the steps of the coring process and identify three major scientific properties that are studied from core samples. For more information and to download the video guide, please visit the JOIDES Resolution website.
Program Update: Deep Earth Academy – February 2014
The Model-View-Controller (MVC) architecture traces its roots back to the programming language Smalltalk at Xerox PARC. Since then, many systems have used MVC to describe their architecture. Each system is different, but they all share the goal of separating data access, business logic and user interface code from each other.
Most PHP MVC frameworks share a broadly similar architecture.
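The division of responsibilities can be sketched in a few lines. Python is used here purely for brevity; the class and method names are invented for illustration and do not come from any particular framework:

```python
class Model:
    """Data access layer: owns the data, knows nothing about presentation."""
    def __init__(self):
        self._articles = {1: "Hello MVC"}

    def find(self, article_id):
        return self._articles.get(article_id)


class View:
    """Presentation layer: renders whatever the controller hands it."""
    def render(self, title):
        return f"<h1>{title}</h1>"


class Controller:
    """Business logic: mediates between the model and the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def show(self, article_id):
        title = self.model.find(article_id) or "Not found"
        return self.view.render(title)


app = Controller(Model(), View())
print(app.show(1))  # → <h1>Hello MVC</h1>
```

A real framework wraps routing, templating and database access around this skeleton, but the separation of concerns is the same.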
1 – Basics
This topic comprises approximately 5% of the exam. Questions are drawn randomly from the following objectives.
Fundamentals:
- Describe and apply basic principles and processes of Object Oriented Programming (OOP) and Model-View-Controller (MVC) to build Magento websites
- Identify and describe the principles of Event-Driven Architecture (EDA) …
Originally, as a PHP programmer, if you wanted to gather together a group of related variables you had one choice: the venerable array. Although it shares a name with C's array of memory addresses, a PHP array is a general-purpose, dictionary-like object with the behavior of a numerically indexed array variable.
In other languages, the choice is not so simple. You have several data structures to choose from, each with specific advantages in storage, speed and semantics. The PHP philosophy was to remove that choice from the client programmer and give them a useful data structure that was “good enough”. I have long held that PHP arrays are a major reason for the popularity of the platform. | <urn:uuid:bbe9444f-8bb2-4609-b3fd-9400448384af> | 3.421875 | 301 | Academic Writing | Software Dev. | 33.03549 | 95,499,270 |
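As a rough analogue of the semantics just described, shown in Python for consistency with the other examples in this document (in PHP itself this would be a single `$arr` holding both numeric and string keys):

```python
# One structure that accepts both integer and string keys and preserves
# insertion order, mimicking the described PHP array behavior.
arr = {}
arr[0] = "first"        # numeric index
arr[1] = "second"
arr["name"] = "grace"   # string key in the same structure

print(list(arr.values()))  # → ['first', 'second', 'grace']
```

The point is the lack of choice: one "good enough" structure serves as list, map, and record at once.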
Researchers have detected common plant toxins that affect human health and ecosystems in smoke from forest fires. The results from the new study also suggest that smoldering fires may produce more toxins than wildfires - a reason to keep human exposures to a minimum during controlled burns.
Finding these toxins -- known as alkaloids -- helps researchers understand how they cycle through earth and air. Smoke-related alkaloids in the environment can change aquatic and terrestrial ecosystems, as well as where and when clouds form. The study of ponderosa pines, by scientists at the Department of Energy's Pacific Northwest National Laboratory, will appear June 1 in Environmental Science and Technology.
"Ponderosa pines are widespread in areas that are prone to forest fires," said PNNL physical chemist Julia Laskin, one of the coauthors. "This study shows us which molecules are in smoke so we can better understand smoke's environmental impact."
As trees and underbrush burn, billowing smoke made up of tiny particles drifts away. The tiny particles contain a variety of natural compounds released from the plant matter. Researchers have long suspected the presence of alkaloids in smoke or detected them in air during fire season, but no one had directly measured them coming off a fire. The PNNL researchers had recently developed the technology to pick out alkaloids from the background of similar molecules.
To investigate chemicals given off by fires, the team captured some smoke from test fires organized by Colorado State University researchers. These researchers were doing controlled burns of ponderosa pines, underbrush and other fuels at the Forest Service Fire Science Laboratory in Missoula, Mont.
The scientists collected smoke samples in a device that corrals small particles. Using high-resolution spectrometry instruments in EMSL, DOE's Environmental Molecular Sciences Laboratory on the PNNL campus, they then determined which molecules the smoke contained. At EMSL, the researchers used the new methods to glean highly detailed information about the smoke's composition.
The team found a wide variety of molecules. When they compared their results to other studies, they found that 70 percent of these molecules had not been previously reported in smoke.
"The research significantly expanded the previous observations," said aerosol chemist and coauthor Alexander Laskin.
In addition, 10 to 30 percent of these were alkaloids, common plant molecules that proved to be quite resistant to the high temperatures of fire. Plants often use alkaloids for protection, because they can poison other plants and animals, including humans. Alkaloids also have medicinal value (caffeine and nicotine, for example, are well-known alkaloids that aren't found in pine trees).
A large percentage of the alkaloids were those that carry biologically useful nitrogen through atmospheric, terrestrial and aquatic environments. Because of this, the results suggest smoke might be an important step in this transport. Also, the nitrogen-containing alkaloids have a basic pH, which can make cloud-forming particles less acidic, and in turn impact cloud formation that is critical to global agriculture and water supplies.
The researchers also found that the abundance of alkaloids depends on how vigorously the fire burns. Smoldering fires such as those in controlled burns produce more of the compounds than blazing fires such as those fanned by high winds. Because some plant alkaloids might be harmful, the result could affect planned fires upwind of human populations.
For future studies, the researchers are developing a method to quantify the alkaloids and related compounds in smoke to better understand their chemical composition and prevalence.
A Laskin, J Smith, and J Laskin. 2009. "Molecular Characterization of Nitrogen Containing Organic Compounds in Biomass Burning Aerosols Using High Resolution Mass Spectrometry." Environmental Science and Technology. DOI: 10.1021/es803456n.
This work was funded by the DOE Office of Science through the Office of Basic Energy Sciences, the Office of Biological and Environmental Research, and the Science Undergraduate Laboratory Internship program.
EMSL, the Environmental Molecular Sciences Laboratory (www.emsl.pnl.gov), is a national scientific user facility sponsored by the Department of Energy's Office of Science, Biological and Environmental Research program that is located at Pacific Northwest National Laboratory. EMSL offers an open, collaborative environment for scientific discovery to researchers around the world. EMSL's technical experts and suite of custom and advanced instruments are unmatched. Its integrated computational and experimental capabilities enable researchers to realize fundamental scientific insights and create new technologies.
Pacific Northwest National Laboratory (www.pnl.gov) is a Department of Energy Office of Science national laboratory where interdisciplinary teams advance science and technology and deliver solutions to America's most intractable problems in energy, national security and the environment. PNNL employs 4,250 staff, has a $918 million annual budget, and has been managed by Ohio-based Battelle since the lab's inception in 1965. Follow PNNL on Facebook, Linked In and Twitter.
Summation of Grandi's series
- 1 General considerations
- 2 Cesàro sum
- 3 Abel sum
- 4 Dilution
- 5 Separation of scales
- 6 Euler transform and analytic continuation
- 7 Borel sum
- 8 Spectral asymmetry
- 9 Proof through 1 / x series
- 10 Methods that fail
- 11 Notes
- 12 References
Stability and linearity
The formal manipulations that lead to 1 − 1 + 1 − 1 + · · · being assigned a value of 1⁄2 include:
- Adding or subtracting two series term-by-term,
- Multiplying through by a scalar term-by-term,
- "Shifting" the series with no change in the sum, and
- Increasing the sum by adding a new term to the series' head.
These are all legal manipulations for sums of convergent series, but 1 − 1 + 1 − 1 + · · · is not a convergent series.
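For example, the shifting and subtraction rules alone force the value 1⁄2 on any would-be sum S:

```latex
S = 1 - 1 + 1 - 1 + \cdots, \qquad
1 - S = 1 - (1 - 1 + 1 - 1 + \cdots) = S
\quad\Longrightarrow\quad 2S = 1, \quad S = \tfrac{1}{2}.
```

Since the series does not converge, this computation shows only that any summation method obeying these rules must assign the value 1⁄2; it does not show that the series has a sum in the ordinary sense.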
The first rigorous method for summing divergent series was published by Ernesto Cesàro in 1890. The basic idea is similar to Leibniz's probabilistic approach: essentially, the Cesàro sum of a series is the average of all of its partial sums. Formally one computes, for each n, the average σn of the first n partial sums, and takes the limit of these Cesàro means as n goes to infinity.
For Grandi's series, the sequence of arithmetic means is
- 1, 1⁄2, 2⁄3, 2⁄4, 3⁄5, 3⁄6, 4⁄7, 4⁄8, …
or, more suggestively,
- (1⁄2+1⁄2), 1⁄2, (1⁄2+1⁄6), 1⁄2, (1⁄2+1⁄10), 1⁄2, (1⁄2+1⁄14), 1⁄2, …
- σn = 1⁄2 for even n, and σn = 1⁄2 + 1⁄(2n) for odd n.
This sequence of arithmetic means converges to 1⁄2, so the Cesàro sum of Σak is 1⁄2. Equivalently, one says that the Cesàro limit of the sequence 1, 0, 1, 0, … is 1⁄2.
The Cesàro sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3. So the Cesàro sum of a series can be altered by inserting infinitely many 0s as well as infinitely many brackets.
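Both values are easy to check numerically. A short sketch (illustrative only; the truncation lengths are arbitrary) that averages the partial sums:

```python
def cesaro_means(terms):
    # Running averages of the partial sums s_1, s_2, ..., s_n.
    partial, total, means = 0, 0, []
    for n, a in enumerate(terms, start=1):
        partial += a          # s_n
        total += partial      # s_1 + ... + s_n
        means.append(total / n)
    return means

grandi = [(-1) ** n for n in range(10_000)]   # 1 - 1 + 1 - 1 + ...
diluted = [1, 0, -1] * 4_000                  # 1 + 0 - 1 + 1 + 0 - 1 + ...

print(round(cesaro_means(grandi)[-1], 3))   # → 0.5
print(round(cesaro_means(diluted)[-1], 3))  # → 0.667
```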
The series can also be summed by the more general fractional (C, a) methods.
Abel summation is similar to Euler's attempted definition of sums of divergent series, but it avoids Callet's and N. Bernoulli's objections by precisely constructing the function to use. In fact, Euler likely meant to limit his definition to power series, and in practice he used it almost exclusively in a form now known as Abel's method.
Given a series a₀ + a₁ + a₂ + · · ·, one forms a new series a₀ + a₁x + a₂x² + · · ·. If the latter series converges for 0 < x < 1 to a function with a limit as x tends to 1, then this limit is called the Abel sum of the original series, after Abel's theorem which guarantees that the procedure is consistent with ordinary summation. For Grandi's series one has
- 1 − x + x² − x³ + · · · = 1⁄(1 + x) → 1⁄2 as x → 1−.
The corresponding calculation that the Abel sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3 involves the function (1 + x)/(1 + x + x²).
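The limit x → 1− can be probed numerically. In this sketch (the truncation length and sample points are arbitrary choices), `coeff(n)` returns the nth coefficient of the power series:

```python
def abel_partial(coeff, x, n_terms=20_000):
    # Evaluate sum_n coeff(n) * x**n for 0 < x < 1; the Abel sum is the
    # limit of these values as x approaches 1 from below.
    return sum(coeff(n) * x ** n for n in range(n_terms))

grandi = lambda n: (-1) ** n           # 1 - 1 + 1 - 1 + ...
padded = lambda n: (1, 0, -1)[n % 3]   # 1 + 0 - 1 + 1 + 0 - 1 + ...

for x in (0.9, 0.99, 0.999):
    print(abel_partial(grandi, x), abel_partial(padded, x))
# the two columns approach 1/2 and 2/3 respectively as x -> 1-
```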
Whenever a series is Cesàro summable, it is also Abel summable and has the same sum. On the other hand, taking the Cauchy product of Grandi's series with itself yields a series which is Abel summable but not Cesàro summable:
- 1 − 2 + 3 − 4 + · · ·
has Abel sum 1⁄4.
That the ordinary Abel sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3 can also be phrased as the (A, λ) sum of the original series 1 − 1 + 1 − 1 + · · · where (λn) = (0, 2, 3, 5, 6, …). Likewise the (A, λ) sum of 1 − 1 + 1 − 1 + · · · where (λn) = (0, 1, 3, 4, 6, …) is 1⁄3.
The summability of 1 − 1 + 1 − 1 + · · · can be frustrated by separating its terms with exponentially longer and longer groups of zeros. The simplest example to describe is the series where (−1)ⁿ appears at rank 2ⁿ:
- 0 + 1 − 1 + 0 + 1 + 0 + 0 + 0 − 1 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 1 + 0 + · · ·.
This series is not Cesàro summable. After each nonzero term, the partial sums spend enough time lingering at either 0 or 1 to bring the average partial sum halfway to that point from its previous value. Over the interval 2^(2m−1) ≤ n ≤ 2^(2m) − 1 following a (−1) term, the nth arithmetic means vary over a range from about 2⁄3 down to about 1⁄3.
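The failure of Cesàro summability is visible numerically: building the diluted series up to rank 2¹⁶ and averaging the partial sums, the means keep oscillating between roughly 2⁄3 and 1⁄3 instead of settling down. A sketch (the cut-offs are arbitrary):

```python
# Place (-1)^n at rank 2^n (ranks 1, 2, 4, 8, ...), zeros elsewhere.
N = 1 << 16
terms = [0] * N
rank, sign = 1, 1
while rank < N:
    terms[rank] = sign
    sign, rank = -sign, rank * 2

means, partial, total = [], 0, 0
for n, a in enumerate(terms, start=1):
    partial += a                 # partial sum s_n
    total += partial
    means.append(total / n)      # Cesàro mean after n terms

late = means[1024:]
print(max(late) > 0.6, min(late) < 0.4)  # → True True
```

The late means never trap inside a small interval around any single value, so the Cesàro limit does not exist.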
In fact, the exponentially spaced series is not Abel summable either. Its Abel sum is the limit as x approaches 1 of the function
- F(x) = 0 + x − x² + 0 + x⁴ + 0 + 0 + 0 − x⁸ + 0 + 0 + 0 + 0 + 0 + 0 + 0 + x¹⁶ + 0 + · · ·.
This function satisfies a functional equation:
- F(x) = x − F(x²).
This functional equation implies that F(x) roughly oscillates around 1⁄2 as x approaches 1. To prove that the amplitude of oscillation is nonzero, it helps to separate F into an exactly periodic and an aperiodic part, F(x) = Φ(x) + Ψ(x), where the aperiodic part Φ satisfies the same functional equation as F. This now implies that Ψ(x) = −Ψ(x²) = Ψ(x⁴), so Ψ is a periodic function of log log(1/x). Since F and Φ are different functions, their difference Ψ is not a constant function; it oscillates with a fixed, finite amplitude as x approaches 1. Since the Φ part has a limit of 1⁄2, F oscillates as well.
Separation of scales
Given any function φ(x) such that φ(0) = 1, the limit of φ at +∞ is 0, and the derivative of φ is integrable over (0, +∞), the generalized φ-sum of Grandi's series exists and is equal to 1⁄2:
- limδ→0⁺ Σn (−1)ⁿ φ(δn) = 1⁄2.
The Cesàro or Abel sum is recovered by letting φ be a triangular or exponential function, respectively. If φ is additionally assumed to be continuously differentiable, then the claim can be proved by applying the mean value theorem and converting the sum into an integral. Briefly: pairing consecutive terms gives Σk (φ(2kδ) − φ((2k+1)δ)); by the mean value theorem each pair equals −δφ′(ξk) for some ξk in the pair's interval, and as δ → 0⁺ the sum tends to −1⁄2 ∫₀^∞ φ′(t) dt = 1⁄2 (φ(0) − limt→∞ φ(t)) = 1⁄2.
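With the exponential choice φ(t) = e^(−t), which meets the stated conditions, the φ-sum has the closed form Σ (−1)ⁿ e^(−δn) = 1⁄(1 + e^(−δ)) → 1⁄2, and is easy to check numerically (sketch; the truncation length is arbitrary):

```python
import math

def phi_sum(phi, delta, n_terms=30_000):
    # Generalized phi-sum of Grandi's series: sum of (-1)^n * phi(delta * n).
    return sum((-1) ** n * phi(delta * n) for n in range(n_terms))

phi = lambda t: math.exp(-t)   # phi(0) = 1, phi -> 0 at infinity, phi' integrable
for delta in (0.1, 0.01, 0.001):
    print(phi_sum(phi, delta))
# the printed values approach 1/2 as delta -> 0+
```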
Euler transform and analytic continuation
The Borel sum of Grandi's series is again 1⁄2, since its Borel transform is Σ (−1)ⁿ tⁿ⁄n! = e^(−t), and
- ∫₀^∞ e^(−t) · e^(−t) dt = ∫₀^∞ e^(−2t) dt = 1⁄2.
The series can also be summed by generalized (B, r) methods.
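The Borel sum can also be checked numerically: the Borel transform of Grandi's series is Σ (−1)ⁿ tⁿ⁄n! = e^(−t), so the Borel sum is ∫₀^∞ e^(−t)·e^(−t) dt. A sketch using the trapezoid rule (the cut-off and step count are arbitrary choices):

```python
import math

def borel_sum_grandi(upper=40.0, steps=200_000):
    # Borel sum = integral_0^inf exp(-t) * exp(-t) dt = integral of exp(-2t),
    # approximated by the composite trapezoid rule on [0, upper].
    h = upper / steps
    f = lambda t: math.exp(-2.0 * t)
    return h * ((f(0.0) + f(upper)) / 2 + sum(f(i * h) for i in range(1, steps)))

print(round(borel_sum_grandi(), 6))  # → 0.5
```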
The entries in Grandi's series can be paired to the eigenvalues of an infinite-dimensional operator on Hilbert space. Giving the series this interpretation gives rise to the idea of spectral asymmetry, which occurs widely in physics. The value that the series sums to depends on the asymptotic behaviour of the eigenvalues of the operator. Thus, for example, let {ωn} be a sequence of both positive and negative eigenvalues. Grandi's series corresponds to the formal sum
- Σn sgn(ωn),
where sgn(ωn) is the sign of the eigenvalue. The series can be given concrete values by considering various limits. For example, the heat kernel regulator leads to the sum
- Σn sgn(ωn) e^(−t|ωn|),
which, for many interesting cases, is finite for non-zero t, and converges to a finite value in the limit.
Proof through 1 / x series
The Taylor series
- 1⁄x = 1 − (x − 1) + (x − 1)² − (x − 1)³ + · · ·
is a geometric series in −(x − 1), convergent for |x − 1| < 1. Formally substituting x = 2 gives:
- 1⁄2 = 1 − 1 + 1 − 1 + · · ·.
Methods that fail
The moment constant method with
and k > 0.
- Davis pp.152, 153, 157
- Davis pp.153, 163
- Davis pp.162-163, ex.1-5
- Smail p.131
- Kline 1983 p.313
- Bromwich p.322
- Davis p.159
- Davis p.165
- Hardy p.73
- Hardy p.60
- Hardy (p.77) speaks of "another solution" and "plainly not constant", although technically he does not prove that F and Φ are different.
- Saichev pp.260-262
- Weidlich p.20
- Smail p.128
- Hardy pp.79-81, 85
- Hardy pp.81-86
- Bromwich, T.J. (1926). An Introduction to the Theory of Infinite Series (2nd ed.).
- Davis, Harry F. (May 1989). Fourier Series and Orthogonal Functions. Dover. ISBN 0-486-65973-9.
- Hardy, G.H. (1949). Divergent Series. Clarendon Press. LCC QA295 .H29 1967.
- Kline, Morris (November 1983). "Euler and Infinite Series". Mathematics Magazine. 56 (5): 307–314. doi:10.2307/2690371. JSTOR 2690371.
- Saichev, A.I. & W.A. Woyczyński (1996). Distributions in the physical and engineering sciences, Volume 1. Birkhaüser. ISBN 0-8176-3924-1. LCC QA324.W69 1996.
- Smail, Lloyd (1925). History and Synopsis of the Theory of Summable Infinite Processes. University of Oregon Press. LCC QA295 .S64.
- Weidlich, John E. (June 1950). Summability methods for divergent series. Stanford M.S. theses. | <urn:uuid:8843a275-67cf-4898-b69b-4af6b17f8a2b> | 2.890625 | 2,221 | Knowledge Article | Science & Tech. | 82.716993 | 95,499,315 |
On the Freezing Electrification of Freely Falling Water Droplets
The electrification of freely falling water droplets due to freezing was measured in a laboratory experiment.
When distilled water was used as the specimen, the frequencies of positive and negative electrification were nearly the same, while positive electrification was predominant in the case of water melted from fresh natural snow.
It was concluded that the electrification of natural ice pellets is produced by the ejection of charged splinters, a few micrometers in diameter, at the end of the freezing stage.
Keywords: measuring space, freezing state, cloud droplet, charged droplet, horizontal field
NASA has postponed the launch of its flagship James Webb Space Telescope to early 2021, owing to a range of factors influencing its schedule and performance, including the technical challenges Northrop Grumman must still resolve on the spacecraft’s sunshield and propulsion system before launch.
NASA had previously estimated an earlier launch date but awaited findings from the Independent Review Board (IRB), together with data from Webb’s Standing Review Board, before making a final determination. The telescope’s new total lifecycle cost, to support the revised launch date, is estimated at $9.66 billion; its new development cost estimate is $8.8 billion, NASA's statement read.
From detecting the light of the first stars and galaxies in the distant universe, to probing the atmospheres of exoplanets for possible signs of habitability, Webb’s world-class science not only will shed light on the many mysteries of the universe, it also will complement and further enhance the discoveries of other astrophysics projects.
The first telescope of its kind, and an unprecedented feat of engineering, Webb is at the very leading edge of technological innovation and development. At its conception, challenges were anticipated for such a unique observatory of its size and magnitude. Webb was designed with highly sophisticated instruments to accomplish the ambitious scientific goals outlined in the National Academy of Sciences 2000 Decadal Survey – to answer the most fundamental questions about our cosmic origins.
Webb will be folded, origami-style, for launch inside Arianespace’s Ariane 5 launch vehicle fairing – about 16 feet (5 meters) wide. After its launch, the observatory will complete an intricate and technically-challenging series of deployments – one of the most critical parts of Webb’s journey to its final orbit, about one million miles from Earth. When completely unfurled, Webb’s primary mirror will span more than 21 feet (6.5 meters) and its sunshield will be about the size of a tennis court.
Because of its size and complexity, the process of integrating and testing parts is more complicated than that of an average science mission. Once the spacecraft element has completed its battery of testing, it will be integrated with the telescope and science instrument element, which passed its tests last year. The fully-assembled observatory then will undergo a series of challenging environmental tests and a final deployment test before it is shipped to the launch site in Kourou, French Guiana, the US space agency noted in its statement. | <urn:uuid:9e96dde8-b84e-4929-9dc3-e1f6533b01ed> | 3.140625 | 505 | News Article | Science & Tech. | 29.071701 | 95,499,368 |
Selection.MoveStart(Object, Object) Method
Moves the start position of the specified selection.
public int MoveStart (ref object Unit, ref object Count);
Public Function MoveStart (Optional ByRef Unit As Object, Optional ByRef Count As Object) As Integer
Optional Object (a WdUnits constant). The unit by which the start position of the specified selection is to be moved.
Optional Object. The maximum number of units by which the specified selection is to be moved. If
Count is a positive number, the start position of the selection is moved forward in the document. If it's a negative number, the start position is moved backward. If the start position is moved forward to a position beyond the end position, the selection is collapsed and both the start and end positions are moved together. The default value is 1.
This method returns an integer that indicates the number of units by which the start position of the selection actually moved, or it returns 0 (zero) if the move was unsuccessful.
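The documented behavior — negative counts move backward, a start moved past the end collapses the selection, and the return value is the number of units actually moved — can be sketched with a small stand-alone model. This toy class is illustrative only; it is not the Word object model, and units are reduced to abstract positions:

```python
class SelectionModel:
    """Toy model of the documented MoveStart semantics (illustrative only,
    not the Word object model). Positions are measured in abstract units."""

    def __init__(self, start, end, doc_length):
        self.start, self.end, self.doc_length = start, end, doc_length

    def move_start(self, count=1):
        # Clamp the requested move to the document boundaries.
        target = max(0, min(self.start + count, self.doc_length))
        moved = abs(target - self.start)   # units actually moved (0 = unsuccessful)
        self.start = target
        if self.start > self.end:          # start moved beyond the end position:
            self.end = self.start          # the selection collapses, both positions move together
        return moved
```

Against the real object model, the equivalent call would pass a `WdUnits` constant, e.g. `Selection.MoveStart(Unit=wdWord, Count=2)`.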
New Solar Panels Will Be Built With Common Metals
September 27, 2012 | Editor
Sunlight reaching American households is enough to supply at least 50 percent of the total electricity needs of the United States. Scientists recently concluded that progress in affordable solar-energy technology is fundamental if people are to invest in roof shingles that can generate electricity. Shingles capable of generating electrical power from sunlight can be installed like traditional roofing and are already a commercial reality. New advances in science, however, could allow solar cells to be built from earth-abundant materials, which would make them more affordable and ease the integration of photovoltaics (PV) into different parts of a building.
At the 244th National Meeting & Exposition of the American Chemical Society, the scientists’ report was part of the symposium on sustainability. Below are some of the other abstracts from other presentations.
One of the lecturers, Dr. Harry A. Atwater, said that sustainability means developing technology that can remain fruitful over the long term, which implies using resources that fit today’s needs naturally.
Materials like zinc and copper are abundant and inexpensive, and that is what the new PV technology will use instead of so-called rare materials like gallium and indium. Rare materials are costly and supplied mostly by foreign countries: China produces more than ninety percent of these rare substances, which are needed for magnets, hybrid-vehicle batteries, high-tech products and electronics. Dr. James C. Stevens and Dr. Atwater are leading efforts to replace the high-cost, rare materials in PV cells with low-cost, abundant and more sustainable ones.
Dr. Atwater is a physicist with the California Institute of Technology; Dr. Stevens is a chemist at the Dow Chemical Company; together, they are leading a partnership between both organizations to develop new electronic materials for solar-energy conversion devices. Both said that the development and testing of the new devices, which contain copper oxide and zinc phosphide, broke records for voltage and electrical current. Atwater said these advances demonstrate that materials like copper oxide and zinc phosphide will be capable of reaching high efficiency while producing electrical power at lower cost; according to Atwater, that goal could be reached in as little as 20 years!
Dr. Stevens supported the development of Dow’s PowerHouse Solar Shingle, launched in October 2011, which can generate electrical power while serving as traditional roofing. These shingles use copper indium gallium diselenide (CIGS) PV technology, but the group is now looking to incorporate sustainable, abundant materials into the shingle to make it even more widely accessible.
Dr. Stevens stated that the U.S. possesses 69 billion square feet of residential rooftop, with all the conditions necessary to generate electricity from the sun. In fact, the sunlight falling on those rooftops could generate enough electricity to satisfy at least half (50%) of U.S. energy needs; some estimates run close to 100%. With technology that uses materials widely available on Earth, the electricity produced would also be significantly more environmentally friendly.
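The 50% figure can be sanity-checked with a back-of-envelope calculation. Only the rooftop area comes from the article; the insolation, panel efficiency, and consumption figures below are rough assumptions for illustration:

```python
# Back-of-envelope check of the "50 percent" rooftop-solar claim.
# All inputs except the rooftop area are rough assumptions for illustration.
ROOF_AREA_FT2 = 69e9                 # US residential rooftop area (from the article)
FT2_TO_M2 = 0.0929                   # square feet -> square meters
INSOLATION_KWH_M2_DAY = 4.5          # assumed US-average solar resource
PANEL_EFFICIENCY = 0.15              # assumed module efficiency
US_ELECTRICITY_TWH_YR = 4000.0       # assumed total US annual consumption

area_m2 = ROOF_AREA_FT2 * FT2_TO_M2
generated_twh = (area_m2 * INSOLATION_KWH_M2_DAY * 365 * PANEL_EFFICIENCY) / 1e9
fraction = generated_twh / US_ELECTRICITY_TWH_YR   # roughly 0.4 with these assumptions
```

With these assumed inputs the result lands near 40%, consistent with the article's "at least half" claim being within reach of plausible parameter choices.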
In the symposium, other interesting presentations took place; here are some of them:
• In order to increase the production of rare-earth elements using greener and cheaper technology in the United States, Molycorp (a mining company) is modernizing and expanding its facilities at Mountain Pass, Calif.
• An overview was given of the challenges of maintaining a sustainable supply of “critical materials”, from rare elements such as indium to abundant elements such as copper.
• Oil, mining and gas drilling produce 800 billion gallons of wastewater each year. In the symposium, a new material for recovering rare elements from that wastewater was discussed.
Source: Science Codex
The model is called sCast, short for seasonal forecast model. Atmospheric scientist Judah Cohen of AER, Inc., in Lexington, Mass., and colleagues analyzed seven real-time winter forecasts and 33 winter hindcasts (simulations of winters going back to 1972) to verify sCast.
"sCast works well in accurately predicting winter conditions over much of the eastern United States and Northern Eurasia," said Jay Fein, program director in the National Science Foundation (NSF)'s Division of Atmospheric Sciences, which funded the research. "Dynamical model prediction of winter climate remains a formidable challenge, and statistical approaches such as Cohen's continue to be a valuable alternative."
The results are published this week in the American Meteorological Society's Journal of Climate.
Cohen and colleagues outline the link between October snow cover in Siberia and the Northern Hemisphere's winter temperatures, and snowfall.
October is the month when snow begins to pile up across Siberia. October is also the month that the Siberian high, one of three dominant weather centers across the Northern Hemisphere, forms.
In years when Siberian snow cover is above normal, a strengthened Siberian high and colder surface temperatures across Northern Eurasia develop in the fall.
"The result is a warming in Earth's stratosphere that occurs in January," said Cohen. "This eventually descends from the stratosphere to Earth's surface over a week or two in January, making for a warmer winter in Northern Hemisphere high latitudes. However, in mid-latitudes it turns colder, so winters in the northeastern U.S. and eastern Europe are likely to be colder and snowier than normal. The skill of the sCast model takes us the next step beyond current seasonal forecast models employed worldwide."
Cheryl Dybas | EurekAlert!
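The statistical idea behind sCast — regressing winter anomalies on October Siberian snow-cover anomalies — can be illustrated with a toy least-squares fit. Both series below are synthetic and the slope is invented; the real model uses actual observations and additional predictors:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic October Siberian snow-cover anomalies (predictor) and winter
# temperature anomalies (predictand); both series are invented for illustration.
snow_anom = rng.normal(0.0, 1.0, 40)
true_slope = -0.6                    # assumed: above-normal snow -> colder winter
winter_anom = true_slope * snow_anom + rng.normal(0.0, 0.3, 40)

# Fit the statistical relationship and issue a hindcast-style prediction
# for a hypothetical snowy October (+1.5 standard-deviation anomaly).
slope, intercept = np.polyfit(snow_anom, winter_anom, 1)
forecast = slope * 1.5 + intercept
```

A negative fitted slope reproduces the article's qualitative story: above-normal October snow cover predicts a colder mid-latitude winter.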
A total of 48 polymorphic microsatellite loci were characterized in 13 Drosophila melanogaster populations originating from Europe, America, and Africa. Consistent with previous results, the African D. melanogaster populations were the most differentiated populations and harbored most variation. Despite an overall similarity, American and European populations were significantly differentiated. Interestingly, genetic distances based on the proportion of shared alleles as well as FST values suggested that the American D. melanogaster populations are more closely related to the African populations than European ones are. We also detected a higher proportion of putative African alleles in the American populations, indicating recent admixture of African alleles on the American continent.
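The F_ST values behind these comparisons can be computed from allele frequencies. For a single biallelic locus and equally sized populations, Wright's F_ST = (H_T − H_S)/H_T can be sketched as follows (a simplified textbook estimator, not the one used in the study):

```python
def wright_fst(freqs):
    """Wright's F_ST for one biallelic locus, given the frequency of one
    allele in each population (equal population sizes assumed)."""
    n = len(freqs)
    h_s = sum(2 * p * (1 - p) for p in freqs) / n   # mean within-population heterozygosity
    p_bar = sum(freqs) / n                           # overall allele frequency
    h_t = 2 * p_bar * (1 - p_bar)                    # total expected heterozygosity
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t
```

For example, two populations with allele frequencies 0.2 and 0.8 give F_ST = 0.36, while identical frequencies give 0 and fixed alternative alleles give 1.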
Most people who realize that humans can adapt to digesting new types of food think there are only one or two ways to do it: by acquiring new types of bacteria in our intestines, or by evolving new ways of digesting.
However, there are other ways, as we can sometimes pick up new DNA from other species, as well as pick up the required bacteria to properly digest the new foods.
I suppose that most people would never have considered that a virus could have a positive impact on our health!
Viruses and Other Gene Transfer Mechanisms
The Transfer of DNA Across Species Boundaries
Bacteria trade genes more frantically than a pit full of traders on the floor of the Chicago Mercantile Exchange — Lynn Margulis and Dorion Sagan (3)
While recombination moves whole blocks of genetic instructions within a cell, other processes move whole blocks of genetic information from one bacterium to another bacterium of a different kind. In the analogy between genes and written text, this move is a transfer of paragraphs or pages from one library to another.
Sushi may 'transfer genes' to gut
A traditional Japanese diet could transfer the genes of "sushi-specific" digestive enzymes into the human gut.
This is according to researchers who discovered a substance in marine bacteria that breaks seaweed down into digestible pieces.
How to Be a Successful Pest
Credit: Earlham Institute
UK scientists, in collaboration with groups in Europe and the US, have discovered why the green peach aphid (Myzus persicae) is one of the most destructive pests to many of our most important crops. Their research will inform industry and research programmes to support pest control and aid global food security.
Unlike most plant-colonising insects, which have adapted to live on a small range of closely related plants, green peach aphids can colonise over four hundred plant species. Having developed resistance to over 70 different pesticides, and with a changing climate compounding crop losses in the EU and UK, the pest wreaks havoc on crop yields.
The green peach aphid transmits over a hundred different plant viruses, and this notorious insect feeds on essential crops such as oilseed rape, sugar beet, tomato and potato, as well as wild plant species, which may serve as sources of the plant viruses. One example is Turnip yellows virus (TuYV) and related viruses, which, if left uncontrolled, can reduce the yield of crops such as oilseed rape and sugar beet by up to 30%, rendering some crops unprofitable in the UK.
The aphids spend winter living on host plants such as peach, apricot or plum, but in the summer months can colonise a huge range of vegetables - from potatoes to spinach, squash, parsley and parsnip.
Generally, the insect parasites that live on a certain species are genetically very well adapted to live on just that plant. Yet research led by the Earlham Institute (EI) and the John Innes Centre (JIC) has found that the green peach aphid foregoes this specialisation for a more flexible approach, involving turning gene activity ‘up’ or ‘down’ in response to different plant hosts and environments.
Dr David Swarbreck, Group Leader at the Earlham Institute, said: “Our study has shed light on the genetic plasticity that allows the green peach aphid to survive so well on a multitude of plant species, giving us a greater insight into the survival strategies of one of the most challenging of crop pests.”
More intriguing about the insect's strategy is that aphids can reproduce clonally - i.e. they produce genetically identical lineages. This allows biologists to compare individual aphids with the same genetic background and see precisely what genes are more active than others in aphids living on different plant species.
By growing aphid clones on three different plant species, it was possible for the scientists to find the specific genes that were involved in colonising the different host plants. It appears that the genes responsible for helping aphids adjust to different plants are found in clusters within the genome and are rapidly increased or decreased in two days of transfer to a new host plant species.
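Comparisons of this kind reduce to fold changes in expression between hosts. A minimal sketch — the gene names and counts below are invented for illustration, not data from the study:

```python
import math

def log2_fold_change(count_host_a, count_host_b, pseudocount=1.0):
    """log2 fold change of expression on host A relative to host B.
    The pseudocount avoids division by zero for unexpressed genes."""
    return math.log2((count_host_a + pseudocount) / (count_host_b + pseudocount))

# Invented counts for two genes measured in clonal aphids on two host plants:
expression = {"cathB-like": (630, 78), "housekeeping": (210, 205)}
changes = {gene: log2_fold_change(a, b) for gene, (a, b) in expression.items()}
up_regulated = [gene for gene, fc in changes.items() if fc > 1.0]   # >2-fold up
```

Here the invented "cathB-like" gene shows a strong host-dependent increase, while the control gene barely changes, mirroring the coordinated up/down adjustment described above.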
Dr Yazhou Chen, Postdoctoral Scientist at the John Innes Centre, said: “The genes rapidly turn up or down in single aphids in just two days upon transfer to a new host plant. Given that a single aphid can produce her own offspring, and a lot of it, new aphid infestations may start with just a single aphid.”
The team found that rapid changes in gene expression were vital for the green peach aphid’s generalist lifestyle. Interfering with the expression of one particular gene family, cathepsin B, reduced aphid offspring production, but only on the host plant where the expression of these genes is increased.
Thomas Mathers, Postdoctoral Scientist at the Earlham Institute, said: “Surprisingly, many of the genes involved in host adjustment arose during aphid diversification and are not specific to the green peach aphid. This suggests that it may be the ability to rapidly adjust the expression of key genes in a coordinated fashion that enables generalism, rather than the presence of an expanded genomic toolbox of genes.”
Professor Saskia Hogenhout at the John Innes Centre, added: “Future research is expected to reveal mechanisms involved in the amazing plasticity of the green peach aphid leading to new ways to control this notorious pest. More generally, the research will help understand how some organisms are able to adjust quickly to a broad range of environmental conditions, whereas others are pickier and go extinct more easily, research that is central given our rapidly changing environment due to, for instance, climate change.”
Washington: Researchers claim to have found how whales successfully adapted to the ocean environment.
Whales roam throughout all of the world's oceans, living in the water but breathing air like humans. At the top of the food chain, whales are vital to the health of the marine environment, yet 7 of the 13 great whale species are endangered or vulnerable.
In this study, researchers conducted de novo sequencing on a minke whale with 128x average depth of coverage, and re-sequenced three minke whales, a fin whale (Balaenoptera physalus), a bottlenose dolphin, and a finless porpoise (Neophocaena phocaenoides).
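The quoted "128x average depth of coverage" follows the standard definition: total sequenced bases divided by genome size (the Lander–Waterman average depth). A sketch with invented read counts and an approximate genome size, for illustration only:

```python
def average_coverage(num_reads, read_length_bp, genome_size_bp):
    """Lander-Waterman average sequencing depth: total bases sequenced
    divided by the genome size."""
    return num_reads * read_length_bp / genome_size_bp

# Invented illustration: about 3.1 billion 100-bp reads over a roughly 2.4 Gb
# minke whale genome would give approximately the 128x depth quoted here.
depth = average_coverage(num_reads=3.1e9, read_length_bp=100, genome_size_bp=2.4e9)
```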
The adaptation of whales to ocean life was notably marked by resistance to physiological stresses caused by a lack of oxygen, increased reactive oxygen species, and high salt levels.
Researchers investigated a number of whale-specific genes that were strongly associated with stress resistance, such as the peroxiredoxin (PRDX) family and O-linked N-acetylglucosaminylation (O-GlcNAcylation).
The results revealed that the gene families associated with stress-responsive proteins and anaerobic metabolism were expanded.
In this study, researchers provided evidence that the ratio of reduced glutathione to glutathione disulfide increases under hypoxic or oxidative stress.
Minke whales and other Mysticeti whale species grow baleen instead of teeth. It was previously reported that the genes ENAM, MMP, and AMEL might play a role in tooth enamel formation and biomineralization.
This study showed that these genes may be pseudogenes with early stop codons in the baleen whales.
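A pseudogene call based on early stop codons amounts to scanning the reading frame for a stop codon that occurs before the final one. A minimal sketch, using toy sequences and the standard-code stop codons:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def premature_stop(cds):
    """Return the 0-based codon index of the first stop codon occurring
    before the final codon of the coding sequence, or None if there is none."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    for i, codon in enumerate(codons[:-1]):     # exclude the normal terminal stop
        if codon in STOP_CODONS:
            return i
    return None

# Toy sequences: an intact ORF and a "pseudogenized" copy with an early TGA.
intact = "ATGGCCGAAATTCTGTAA"
pseudo = "ATGGCCTGAATTCTGTAA"
```

On the intact toy ORF the scan finds nothing; on the pseudogenized copy it flags the premature stop at the third codon.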
The study has been published online in journal Nature Genetics. | <urn:uuid:11b454b3-693b-496d-a254-47eef863e079> | 3.453125 | 373 | News Article | Science & Tech. | 20.075291 | 95,499,421 |
Quantization and the Schrödinger Equation
The basic equation of quantum mechanics is the Schrödinger equation, which expresses the wave function Ψ of a quantum system as an eigenfunction of a quantized Hamiltonian operator H: HΨ = λΨ, where the (real) eigenvalue λ is the quantum energy of the system in the state Ψ; see equations (1.3.2), (1.3.4). Embodied already in this equation is the basic quantum mechanical principle that quantum energies cannot take on arbitrary values but are quantized: they are given by a discrete set of eigenvalues of a suitable second-order differential operator. This mathematical phenomenon of the discreteness of eigenvalues explains, for example, the observed discreteness of absorption and emission atomic spectral lines; compare remarks in Sections 1.2 and 1.3 of Chapter 1. The Schrödinger theory, and the equivalent theory of Heisenberg, Born and Jordan, represents a distinct advancement of the Bohr theory. Some early basic papers on quantum mechanics are compiled in the book , which includes a historic introduction by B. van der Waerden. Also see [6, 8, 9, 24, 75, 76].
Keywords: Quantum Mechanics; Zeta Function; Large Eigenvalue; Quantum Energy; Potential Energy Function
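The discreteness of eigenvalues described above can also be seen numerically: discretizing the one-dimensional Hamiltonian H = −d²/dx² on an interval with zero boundary conditions (a particle in a box, in units where ħ = 2m = 1) gives a matrix whose lowest eigenvalues approximate the quantized energies λ_n = (nπ/L)². This numerical illustration is mine, not from the chapter:

```python
import numpy as np

# Finite-difference Hamiltonian H = -d^2/dx^2 on (0, L) with psi(0) = psi(L) = 0
# (particle in a box, units hbar = 2m = 1). Eigenvalues approximate (n*pi/L)^2.
L, N = 1.0, 500
h = L / (N + 1)
main = np.full(N, 2.0 / h**2)
off = np.full(N - 1, -1.0 / h**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)[:3]     # three lowest quantized energies
exact = np.array([(n * np.pi / L)**2 for n in (1, 2, 3)])
```

The computed spectrum is discrete and reproduces the 1 : 4 : 9 spacing of the exact quantized energies.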
Within the subclass Elasmobranchii, the superorder Batoidea comprises the rays, sawfishes and guitarfishes. Fossil representatives are known since the Early Triassic. They are cartilaginous fish, whose remains are very rare to find, since most skeletal parts do not fossilize. Only in very rare and exceptionally well-preserved formations, such as Lagerstätten, can remains be found.
Most fossil Batoidea remains consist of tooth plates and dermal spines. Ray teeth can often be found for example in sandy Neogene marine deposits.
A complete specimen, like this one from Lebanon, is exceptional!
Traffic lights lie on a street after being knocked down, as Hurricane Harvey approaches in Corpus Christi, Texas, US, August 25, 2017.
(photo credit: REUTERS)
Hurricane Harvey has brought unprecedented levels of rainfall to Texas, threatening the lives of thousands or even tens of thousands. Scientists suggest climate change may have a role to play in this deadly storm.
According to Michael E. Mann, professor of atmospheric science at Pennsylvania State University, the first global warming mechanism that may have made the impact of Hurricane Harvey so severe is the rapidly rising sea level in the Houston region, making the area more likely to flood.
The second factor is the rising temperatures in the region which translates to more moisture in the atmosphere, bringing more rain to the region.
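The link between warmer air and more atmospheric moisture is quantified by the Clausius–Clapeyron relation: saturation vapor pressure rises by roughly 6–7% per °C near typical sea-surface temperatures. A sketch using Bolton's (1980) empirical approximation (my illustration; the article itself gives no formula):

```python
import math

def saturation_vapor_pressure_hpa(temp_c):
    """Bolton (1980) empirical approximation to the saturation vapor
    pressure over liquid water, in hPa, for temperature in degrees C."""
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

# Fractional increase in moisture-holding capacity per 1 degC of warming
# at an assumed Gulf-like sea-surface temperature of 28-29 degC:
increase = saturation_vapor_pressure_hpa(29.0) / saturation_vapor_pressure_hpa(28.0) - 1.0
```

With these numbers the increase comes out near 6% per degree, the physical basis for "more moisture in the atmosphere, bringing more rain."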
Human-caused warming penetrating the ocean below the surface has resulted in a deep layer of warm water that fed Hurricane Harvey as it intensified near the coast.
Furthermore, global warming may have contributed to expanded subtropical high-pressure systems, which trapped Hurricane Harvey in the middle, stalling it near the Texas coast.
"In conclusion, while we cannot say climate change "caused" Hurricane Harvey (that is an ill-posed question), we can say is that it exacerbated several characteristics of the storm in a way that greatly increased the risk of damage and loss of life. Climate change worsened the impact of Hurricane Harvey," Professor Mann explained in a Facebook post.
Climate experts said the rain would continue for another two or three days. Southeast Texas could see an additional 15 to 25 inches of rain, while some areas could see as much as 50 inches, the New York Times reported.
Dynamics of Adparticles and Neutron Inelastic Scattering
Our concern in Chapters 2 and 3 has been with static properties of molecules adsorbed on surfaces. In this chapter, the discussion of dynamic properties will be introduced by considering the vibrational frequencies of molecules adsorbed on metal surfaces. There are a number of techniques available that give useful information on these frequencies and will be briefly referred to, but the main attention will be focused on what can be learned from neutron inelastic scattering about molecular vibrational frequencies.
KeywordsNeutron Technique Local Mode Neutron Inelastic Scattering Side Band Raney Nickel
Unable to display preview. Download preview PDF. | <urn:uuid:83147ce9-12bb-40b4-80eb-4c2071ba3d6d> | 2.578125 | 142 | Truncated | Science & Tech. | 13.269154 | 95,499,474 |
Plant components that bend, roll or twist in response to external stimuli such as temperature or moisture are fairly commonplace in nature and often play a role in the dispersal of seeds. Pine cones, for instance, close their scales when wet and open them again once they have dried out. Andre Studart, a professor of complex materials at ETH Zurich’s Department of Materials, and his group have now applied the knowledge of how these movements come about to produce synthetically a composite material with comparable properties.
The secret of the pine cone
Studart and co-workers knew from the literature how pine cone scales work: two firmly connected layers lying on top of each other inside a scale are responsible for the movement. Although the two layers consist of the same swellable material, they expand in different ways under the influence of water because of the rigid fibres enclosed in the layers. In each of the layers, these are specifically aligned, thus determining the direction of expansion.
Therefore, when wet, only one of the two layers expands in the longitudinal direction of the scale, so the scale bends toward the other side.
Inspired by nature, the scientists began to produce a similar moving material in the lab by adding ultrafine aluminium oxide platelets as the rigid component to gelatine – the swellable base material – and pouring it into square moulds. The surface of the aluminium oxide platelets is pre-coated with iron oxide nanoparticles to make them magnetic. This enabled the researchers to align the platelets in the desired direction using a very weak rotating magnetic field. On the cooled and hardened first layer, they poured a second one with the same composition, differing only in the direction of the rigid elements.
The scientists cut this double-layered material into strips. Depending on the direction in which these strips were cut compared to the direction of the rigid elements in the gelatine pieces, the strips bent or twisted differently under the influence of moisture: some coiled lengthwise like a pig’s tail, others turned loosely or very tightly on their own axis to form a helix reminiscent of spiral pastries. “Meanwhile, we can programme the way in which a strip should take shape fairly accurately,” explains Studart.
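The bending itself follows classical bilayer mechanics: for two bonded layers of equal thickness and stiffness, Timoshenko's bimetal-strip result reduces to a curvature κ = 3Δε/(2h), where Δε is the differential swelling strain and h the total thickness. The simplification to identical layers is mine, and the numbers are purely illustrative; the paper's composite is more complex:

```python
def bilayer_curvature(delta_strain, total_thickness_m):
    """Curvature (1/m) of a two-layer strip with equal layer thicknesses and
    moduli; special case of Timoshenko's bimetal-strip formula."""
    return 3.0 * delta_strain / (2.0 * total_thickness_m)

# Illustrative numbers: 1% differential swelling in a 0.5 mm thick strip.
kappa = bilayer_curvature(delta_strain=0.01, total_thickness_m=0.5e-3)
radius_m = 1.0 / kappa            # radius of the resulting curl, about 3.3 cm
```

Even a modest differential strain thus curls a thin strip tightly, which is why the gelatine strips coil so readily on wetting.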
The researchers also produced longer strips that behave differently in different sections – curl in the first section, for instance, then bend in one direction and the other in the final section. Or they created strips that expanded differently length and breadthwise in different sections in water. And they also made strips from another polymer that responded to both temperature and moisture – with rotations in different directions.
After all, it is the rotating movements that interest Studart the most and that evidently were difficult to achieve until now. “Bending movements,” he says, “are relatively straightforward.” Metallic bilayer compounds that bend upon temperature changes are widely used in thermostats, for instance. The new method, however, is largely material-independent, which means that any material that responds to external stimuli – and, according to Studart, there are quite a few – can potentially be rendered self-shaping. “Even the solid component is freely selectable and can be made magnetically responsive through the iron-oxide coating,” he says.
Accordingly, in response to the question as to possible applications, Studart highlights two completely different directions which his group is looking to research further in future: one is the production of ceramic parts that, instead of being pressed into shape as they have been until now, bring themselves into shape. And the ETH-Zurich professor sees another possible use in medicine: using the new method, implants could be produced that only become effective in their definitive location in the body and would fit precisely. “Ideally, these would also be biodegradable,” adds the researcher. | <urn:uuid:c9ec393e-618e-48f1-acbe-015e4b0d3c4a> | 3.8125 | 784 | Truncated | Science & Tech. | 25.082714 | 95,499,489 |
The giant-impact hypothesis, sometimes called the Big Splash, or the Theia Impact suggests that the Moon formed out of the debris left over from a collision between Earth and an astronomical body the size of Mars, approximately 4.5 billion years ago, in the Hadean eon; about 20 to 100 million years after the solar system coalesced. The colliding body is sometimes called Theia, from the name of the mythical Greek Titan who was the mother of Selene, the goddess of the Moon. Analysis of lunar rocks, published in a 2016 report, suggests that the impact may have been a direct hit, causing a thorough mixing of both parent bodies.
- Earth's spin and the Moon's orbit have similar orientations.
- Moon samples indicate that the Moon's surface was once molten.
- The Moon has a relatively small iron core.
- The Moon has a lower density than Earth.
- There is evidence in other star systems of similar collisions, resulting in debris disks.
- Giant collisions are consistent with the leading theories of the formation of the Solar System.
- The stable-isotope ratios of lunar and terrestrial rock are identical, implying a common origin.
There remain several questions concerning the best current models of the giant-impact hypothesis, however. The energy of such a giant impact is predicted to have heated Earth to produce a global magma ocean, and evidence of the resultant planetary differentiation of the heavier material sinking into Earth's mantle has been documented. However, as of 2015[update] there is no self-consistent model that starts with the giant-impact event and follows the evolution of the debris into a single moon. Other remaining questions include when the Moon lost its share of volatile elements and why Venus—which experienced giant impacts during its formation—does not host a similar moon.
- 1 History
- 2 Theia
- 3 Basic model
- 4 Composition
- 5 Evidence
- 6 Difficulties
- 7 Possible origin of Theia
- 8 Modified hypothesis
- 9 Alternative hypotheses
- 10 See also
- 11 References
- 12 External links
In 1898, George Darwin made the suggestion that the Earth and Moon were once a single body. Darwin's hypothesis was that a molten Moon had been spun from the Earth because of centrifugal forces, and this became the dominant academic explanation. Using Newtonian mechanics, he calculated that the Moon had orbited much more closely in the past and was drifting away from the Earth. This drifting was later confirmed by American and Soviet experiments, using laser ranging targets placed on the Moon.
Nonetheless, Darwin's calculations could not resolve the mechanics required to trace the Moon backward to the surface of the Earth. In 1946, Reginald Aldworth Daly of Harvard University challenged Darwin's explanation, adjusting it to postulate that the creation of the Moon was caused by an impact rather than centrifugal forces. Little attention was paid to Professor Daly's challenge until a conference on satellites in 1974, during which the idea was reintroduced and later published and discussed in Icarus in 1975 by Drs. William K. Hartmann and Donald R. Davis. Their models suggested that, at the end of the planet formation period, several satellite-sized bodies had formed that could collide with the planets or be captured. They proposed that one of these objects may have collided with the Earth, ejecting refractory, volatile-poor dust that could coalesce to form the Moon. This collision could potentially explain the unique geological and geochemical properties of the Moon.
A similar approach was taken by Canadian astronomer Alastair G. W. Cameron and American astronomer William R. Ward, who suggested that the Moon was formed by the tangential impact upon Earth of a body the size of Mars. It is hypothesized that most of the outer silicates of the colliding body would be vaporized, whereas a metallic core would not. Hence, most of the collisional material sent into orbit would consist of silicates, leaving the coalescing Moon deficient in iron. The more volatile materials that were emitted during the collision probably would escape the Solar System, whereas silicates would tend to coalesce.
The name of the hypothesized protoplanet is derived from the mythical Greek titan Theia //, who gave birth to the Moon goddess Selene. This designation was proposed initially by the English geochemist Alex N. Halliday in 2000 and has become accepted in the scientific community. According to modern theories of planet formation, Theia was part of a population of Mars-sized bodies that existed in the Solar System 4.5 billion years ago. One of the attractive features of the giant-impact hypothesis is that the formation of the Moon and Earth align; during the course of its formation, the Earth is thought to have experienced dozens of collisions with planet-sized bodies. The Moon-forming collision would have been only one such "giant impact" but certainly the last significant impactor event. The Late Heavy Bombardment by much smaller asteroids occurred later - approximately 3.9 billion years ago.
Astronomers think the collision between Earth and Theia happened at about 4.4 to 4.45 bya; about 0.1 billion years after the Solar System began to form. In astronomical terms, the impact would have been of moderate velocity. Theia is thought to have struck the Earth at an oblique angle when the Earth was nearly fully formed. Computer simulations of this "late-impact" scenario suggest an impact angle of about 45° and an initial impactor velocity below 4 km/s. However, oxygen isotope abundance in lunar rock suggests "vigorous mixing" of Theia and Earth, indicating a steep impact angle. Theia's iron core would have sunk into the young Earth's core, and most of Theia's mantle accreted onto the Earth's mantle. However, a significant portion of the mantle material from both Theia and the Earth would have been ejected into orbit around the Earth (if ejected with velocities between orbital velocity and escape velocity) or into individual orbits around the sun (if ejected at higher velocities). The material in orbits around the Earth quickly coalesced into the Moon (possibly within less than a month, but in no more than a century). The material in orbits around the sun stayed on its Kepler orbits, which are stable in space, and was thus likely to hit the earth-moon system sometime later (because the Earth-Moon system's Kepler orbit around the sun also remains stable). Estimates based on computer simulations of such an event suggest that some twenty percent of the original mass of Theia would have ended up as an orbiting ring of debris around the Earth, and about half of this matter coalesced into the Moon.
The Earth would have gained significant amounts of angular momentum and mass from such a collision. Regardless of the speed and tilt of the Earth's rotation before the impact, it would have experienced a day some five hours long after the impact, and the Earth's equator and the Moon's orbit would have become coplanar.
Not all of the ring material need have been swept up right away: the thickened crust of the Moon's far side suggests the possibility that a second moon about 1,000 km in diameter formed in a Lagrange point of the Moon. The secondary, smaller moon may have remained in orbit for tens of millions of years. As the two moons migrated outward from the Earth, solar tidal effects would have made the Lagrange orbit unstable, resulting in a slow-velocity collision that "pancaked" the smaller moon onto what is now the far side, adding material to the crust. Lunar magma cannot pierce through the thick crust of the far side, causing lesser lunar maria, while the near side has a thin crust displaying the large maria visible from Earth.
In 2001, a team at the Carnegie Institution of Washington reported that the rocks from the Apollo program carried an isotopic signature that was identical with rocks from Earth, and were different from almost all other bodies in the Solar System.
In 2014, a team in Germany reported that the Apollo samples had a slightly different isotopic signature from Earth rocks. The difference was slight, but statistically significant. One possible explanation is that Theia formed near the Earth.
Energetic aftermath theory
In 2007, researchers from the California Institute of Technology showed that the likelihood of Theia having an identical isotopic signature as the Earth was very small (less than 1 percent). They proposed that in the aftermath of the giant impact, while the Earth and the proto-lunar disk were molten and vaporized, the two reservoirs were connected by a common silicate vapour atmosphere, and that the Earth–Moon system became homogenized by convective stirring while the system existed in the form of a continuous fluid. Such an "equilibration" between the post-impact Earth and the proto-lunar disk is the only proposed scenario that explains the isotopic similarities of the Apollo rocks with rocks from the Earth's interior. For this scenario to be viable, however, the proto-lunar disk would have to endure for about 100 years. Work is ongoing to determine whether or not this is possible.
Further modelling of the transient structure has given rise to the concept of a synestia, a doughnut-shaped body that existed for a century before it cooled down and gave birth to the Earth and the moon.
Indirect evidence for the giant impact scenario comes from rocks collected during the Apollo Moon landings, which show oxygen isotope ratios nearly identical to those of Earth. The highly anorthositic composition of the lunar crust, as well as the existence of KREEP-rich samples, suggest that a large portion of the Moon once was molten; and a giant impact scenario could easily have supplied the energy needed to form such a magma ocean. Several lines of evidence show that if the Moon has an iron-rich core, it must be a small one. In particular, the mean density, moment of inertia, rotational signature, and magnetic induction response of the Moon all suggest that the radius of its core is less than about 25% the radius of the Moon, in contrast to about 50% for most of the other terrestrial bodies. Appropriate impact conditions satisfying the angular momentum constraints of the Earth–Moon system yield a Moon formed mostly from the mantles of the Earth and the impactor, while the core of the impactor accretes to the Earth. It is noteworthy that the Earth has the highest density of all the planets in the Solar system ; the absorption of the core of the impactor body explains this observation, given the proposed properties of the early Earth and Theia.
Comparison of the zinc isotopic composition of Lunar samples with that of Earth and Mars rocks provides further evidence for the impact hypothesis. Zinc is strongly fractionated when volatilized in planetary rocks, but not during normal igneous processes, so zinc abundance and isotopic composition can distinguish the two geological processes. Moon rocks contain more heavy isotopes of zinc, and overall less zinc, than corresponding igneous Earth or Mars rocks, which is consistent with zinc being depleted from the Moon through evaporation, as expected for the giant impact origin.
Collisions between ejecta escaping Earth's gravity and asteroids would have left impact heating signatures in stony meteorites; analysis based on assuming the existence of this effect has been used to date the impact event to 4.47 billion years ago, in agreement with the date obtained by other means.
Warm silica-rich dust and abundant SiO gas, products of high velocity (> 10 km/s) impacts between rocky bodies, have been detected by the Spitzer Space Telescope around the nearby (29 pc distant) young (~12 My old) star HD172555 in the Beta Pictoris moving group. A belt of warm dust in a zone between 0.25AU and 2AU from the young star HD 23514 in the Pleiades cluster appears similar to the predicted results of Theia's collision with the embryonic Earth, and has been interpreted as the result of planet-sized objects colliding with each other. A similar belt of warm dust was detected around the star BD +20°307 (HIP 8920, SAO 75016).
This lunar origin hypothesis has some difficulties that have yet to be resolved. For example, the giant-impact hypothesis implies that a surface magma ocean would have formed following the impact. Yet there is no evidence that the Earth ever had such a magma ocean and it is likely there exists material that has never been processed in a magma ocean.
A number of compositional inconsistencies need to be addressed.
- The ratios of the Moon's volatile elements are not explained by the giant-impact hypothesis. If the giant-impact hypothesis is correct, they must be due to some other cause.
- The presence of volatiles such as water trapped in lunar basalts is more difficult to explain if the Moon was caused by a high-temperature impact.
- The iron oxide (FeO) content (13%) of the Moon, intermediate between that of Mars (18%) and the terrestrial mantle (8%), rules out most of the source of the proto-lunar material from the Earth's mantle.
- If the bulk of the proto-lunar material had come from an impactor, the Moon should be enriched in siderophilic elements, when, in fact, it is deficient in those.
- The Moon's oxygen isotopic ratios are essentially identical to those of Earth. Oxygen isotopic ratios, which may be measured very precisely, yield a unique and distinct signature for each solar system body. If a separate proto-planet Theia had existed, it probably would have had a different oxygen isotopic signature than Earth, as would the ejected mixed material.
- The Moon's titanium isotope ratio (50Ti/47Ti) appears so close to the Earth's (within 4 ppm), that little if any of the colliding body's mass could likely have been part of the Moon.
Lack of a Venusian moon
If the Moon was formed by such an impact, it is possible that other inner planets also may have been subjected to comparable impacts. A moon that formed around Venus by this process would have been unlikely to escape. If such a moon-forming event had occurred there, a possible explanation of why the planet does not have such a moon might be that a second collision occurred that countered the angular momentum from the first impact. Another possibility is that the strong tidal forces from the Sun would tend to destabilize the orbits of moons around close-in planets. For this reason, if Venus's slow rotation rate began early in its history, any satellites larger than a few kilometres in diameter would likely have spiralled inwards and collided with Venus.
Simulations of the chaotic period of terrestrial planet formation suggest that impacts like those hypothesized to have formed the Moon were common. For typical terrestrial planets with a mass of 0.5 to 1 Earth masses, such an impact typically results in a single moon containing 4% of the host planet's mass. The inclination of the resulting moon's orbit is random, but this tilt affects the subsequent dynamic evolution of the system. For example, some orbits may cause the moon to spiral back into the planet. Likewise, the proximity of the planet to the star will also affect the orbital evolution. The net effect is that it is more likely for impact-generated moons to survive when they orbit more distant terrestrial planets and are aligned with the planetary orbit.
Possible origin of Theia
In 2004, Princeton University mathematician Edward Belbruno and astrophysicist J. Richard Gott III proposed that Theia coalesced at the L4 or L5 Lagrangian point relative to Earth (in about the same orbit and about 60° ahead or behind), similar to a trojan asteroid. Two-dimensional computer models suggest that the stability of Theia's proposed trojan orbit would have been affected when its growing mass exceeded a threshold of approximately 10% of the Earth's mass (the mass of Mars). In this scenario, gravitational perturbations by planetesimals caused Theia to depart from its stable Lagrangian location, and subsequent interactions with proto-Earth led to a collision between the two bodies.
In 2008, evidence was presented that suggests that the collision may have occurred later than the accepted value of 4.53 Gya, at approximately 4.48 Gya. A 2014 comparison of computer simulations with elemental abundance measurements in the Earth's mantle indicated that the collision occurred approximately 95 My after the formation of the Solar System.
It has been suggested that other significant objects may have been created by the impact, which could have remained in orbit between the Earth and Moon, stuck in Lagrangian points. Such objects may have stayed within the Earth–Moon system for as long as 100 million years, until the gravitational tugs of other planets destabilized the system enough to free the objects. A study published in 2011 suggested that a subsequent collision between the Moon and one of these smaller bodies caused the notable differences in physical characteristics between the two hemispheres of the Moon. This collision, simulations have supported, would have been at a low enough velocity so as not to form a crater; instead, the material from the smaller body would have spread out across the Moon (in what would become its far side), adding a thick layer of highlands crust. The resulting mass irregularities would subsequently produce a gravity gradient that resulted in tidal locking of the Moon so that today, only the near side remains visible from Earth. However, mapping by the GRAIL mission has apparently ruled out this scenario.
The giant-impact hypothesis fails to properly explain the similar composition of Earth and the Moon. Especially, the indistinguishable relation of oxygen isotopes cannot be explained by the classical form of this hypothesis. According to research on the subject that is based on new simulations at the University of Bern by physicist Andreas Reufer and his colleagues, Theia collided directly with Earth instead of barely swiping it. The collision speed may have been higher than originally assumed, and this higher velocity may have totally destroyed Theia. According to this modification, the composition of Theia is not so restricted, making a composition of up to 50% water ice possible.
Other mechanisms that have been suggested at various times for the Moon's origin are that the Moon was spun off from the Earth's molten surface by centrifugal force; that it was formed elsewhere and was subsequently captured by the Earth's gravitational field; or that the Earth and the Moon formed at the same time and place from the same accretion disk. None of these hypotheses can account for the high angular momentum of the Earth–Moon system.
Another hypothesis attributes the formation of the Moon to the impact of a large asteroid with the Earth much later than previously thought, creating the satellite primarily from debris from Earth. In this hypothesis, the formation of the Moon occurs 60–140 million years after the formation of the Solar System. Previously, the age of the Moon had been thought to be 4.527 ± 0.010 billion years. The impact in this scenario would have created a magma ocean on Earth and the proto-Moon with both bodies sharing a common plasma metal vapour atmosphere. The shared metal vapour bridge would have allowed material from the Earth and proto-Moon to exchange and equilibrate into a more common composition.
Yet another hypothesis proposes that the Moon and the Earth have formed together instead of separately like the giant-impact hypothesis suggests. The new model, developed by Robin M. Canup, suggests that the Moon and the Earth have formed as a part of a massive collision of two planetary bodies, each larger than Mars, which then re-collided to form what we now call Earth. After the recollision, Earth was surrounded by a disk of material, which accreted to form the Moon. This hypothesis could explain facts that others do not.
- "Revisiting the Moon". The New York Times. 2014-09-09.
- Halliday, Alex N. (February 28, 2000). "Terrestrial accretion rates and the origin of the Moon". Earth and Planetary Science Letters. 176 (1): 17–30. Bibcode:2000E&PSL.176...17H. doi:10.1016/S0012-821X(99)00317-9.
- Young, Edward D.; Kohl, Issaku E.; Warren, Paul H.; Rubie, David C.; Jacobson, Seth A.; Morbidelli, Alessandro (2016-01-29). "Oxygen isotopic evidence for vigorous mixing during the Moon-forming giant impact". Science. 351 (6272): 493–496. arXiv: . Bibcode:2016Sci...351..493Y. doi:10.1126/science.aad0525. ISSN 0036-8075. PMID 26823426.
- Canup, R.; Asphaug, E. (2001). "Origin of the Moon in a giant impact near the end of the Earth's formation" (PDF). Nature. 412 (6848): 708–712. Bibcode:2001Natur.412..708C. doi:10.1038/35089010. PMID 11507633. Archived from the original (PDF) on 2010-07-30. Retrieved 2011-12-10.
- Mackenzie, Dana (2003). The Big Splat, or How The Moon Came To Be. John Wiley & Sons. ISBN 978-0-471-15057-2.
- Wiechert, U.; et al. (October 2001). "Oxygen Isotopes and the Moon-Forming Giant Impact". Science. 294 (12): 345–348. Bibcode:2001Sci...294..345W. doi:10.1126/science.1063037. PMID 11598294. Retrieved 2009-07-05.
- Daniel Clery (11 October 2013). "Impact Theory Gets Whacked". Science. 342: 183. Bibcode:2013Sci...342..183C. doi:10.1126/science.342.6155.183. PMID 24115419.
- Rubie, D. C.; Nimmo, F.; Melosh, H. J. (2007-01-01). Formation of Earth’s Core A2 - Schubert, Gerald. Amsterdam: Elsevier. pp. 51–90. ISBN 9780444527486.
- Binder, A. B. (1974). "On the origin of the Moon by rotational fission". The Moon. 11 (2): 53–76. Bibcode:1974Moon...11...53B. doi:10.1007/BF01877794.
- Daly, Reginald A. (1946). "Origin of the Moon and Its Topography". PAPS. 90 (2). doi:10.2307/3301051. JSTOR 3301051.
- Hartmann, W. K.; Davis, D. R. (April 1975). "Satellite-sized planetesimals and lunar origin". Icarus. 24 (4): 504–514. Bibcode:1975Icar...24..504H. doi:10.1016/0019-1035(75)90070-6.
- Cameron, A. G. W.; Ward, W. R. (March 1976). "The Origin of the Moon". Abstracts of the Lunar and Planetary Science Conference. 7: 120–122. Bibcode:1976LPI.....7..120C.
- Gray, Denis (December 2003), "Book Review: The big splat or how our moon came to be / John Wiley & Sons, 2003", Journal of the Royal Astronomical Society of Canada, 97 (6): 299, Bibcode:2003JRASC..97..299G
- Freeman, David (2013-09-23). "How Old Is The Moon? 100 Million Years Younger Than Once Thought, New Research Suggests". The Huffington Post. Retrieved 2013-09-25.
- Soderman. "Evidence for Moon-Forming Impact Found Inside Meteorites". NASA-SSERVI. Retrieved 7 July 2016.
- Canup, Robin M. (April 2004), "Simulations of a late lunar-forming impact", Icarus, 168 (2): 433–456, Bibcode:2004Icar..168..433C, doi:10.1016/j.icarus.2003.09.028
- "The Earth and Moon Both Contain Equal Parts of an Ancient Planet". Popular Mechanics. 2016-01-28. Retrieved 2016-04-30.
- Stevenson, D. J. (1987). "Origin of the moon–The collision hypothesis". Annual Review of Earth and Planetary Sciences. 15 (1): 271–315. Bibcode:1987AREPS..15..271S. doi:10.1146/annurev.ea.15.050187.001415.
- Richard Lovett (2011-08-03). "Early Earth may have had two moons". Nature.com. Retrieved 2013-09-25.
- "Was our two-faced moon in a small collision?". Theconversation.edu.au. Retrieved 2013-09-25.
- Phil Plait, Why Do We Have a Two-Faced Moon?, Slate: Bad Astronomy blog, July 1, 2014
- Herwartz, D.; Pack, A.; Friedrichs, B.; Bischoff, A. (2014). "Identification of the giant impactor Theia in lunar rocks". Science. 344 (6188): 1146. Bibcode:2014Sci...344.1146H. doi:10.1126/science.1251117. PMID 24904162.
- "Traces of another world found on the Moon". BBC News. 2014-06-06.
- Pahlevan, Kaveh; Stevenson, David (October 2007). "Equilibration in the Aftermath of the Lunar-forming Giant Impact". Earth and Planetary Science Letters. 262 (3–4): 438–449. arXiv: . Bibcode:2007E&PSL.262..438P. doi:10.1016/j.epsl.2007.07.055.
- Boyle, Rebecca. "Huge impact could have smashed early Earth into a doughnut shape". New Scientist. Retrieved 7 June 2017.
- Lock, Simon J.; Stewart, Sarah T.; Petaev, Michail I.; Leinhardt, Zoe M.; Mace, Mia T.; Jacobsen, Stein B.; Ćuk, Matija (2018). "The origin of the Moon within a terrestrial synestia". Journal of Geophysical Research. arXiv: . Bibcode:2018JGRE..123..910L. doi:10.1002/2017JE005333.
- Paniello, R. C.; Day, J. M. D.; Moynier, F. (2012). "Zinc isotopic evidence for the origin of the Moon". Nature. 490 (7420): 376–379. Bibcode:2012Natur.490..376P. doi:10.1038/nature11507. PMID 23075987.
- Moynier, F.; Albarède, F.; Herzog, G. F. (2006). "Isotopic composition of zinc, copper, and iron in lunar samples". Geochimica et Cosmochimica Acta. 70 (24): 6103. Bibcode:2006GeCoA..70.6103M. doi:10.1016/j.gca.2006.02.030.
- Moynier, F.; Beck, P.; Jourdan, F.; Yin, Q. Z.; Reimold, U.; Koeberl, C. (2009). "Isotopic fractionation of zinc in tektites". Earth and Planetary Science Letters. 277 (3–4): 482. Bibcode:2009E&PSL.277..482M. doi:10.1016/j.epsl.2008.11.020.
- Ben Othman, D.; Luck, J. M.; Bodinier, J. L.; Arndt, N. T.; Albarède, F. (2006). "Cu–Zn isotopic variations in the Earth's mantle". Geochimica et Cosmochimica Acta. 70 (18): A46. Bibcode:2006GeCAS..70...46B. doi:10.1016/j.gca.2006.06.201.
- Bottke, W. F.; Vokrouhlicky, D.; Marchi, S.; Swindle, T.; Scott, E. R. D.; Weirich, J. R.; Levison, H. (2015). "Dating the Moon-forming impact event with asteroidal meteorites". Science. 348 (6232): 321. Bibcode:2015Sci...348..321B. doi:10.1126/science.aaa0602. PMID 25883354.
- Lisse, Carey M.; et al. (2009). "Abundant Circumstellar Silica Dust and SiO Gas Created by a Giant Hypervelocity Collision in the ~12 Myr HD172555 System". Astrophysical Journal. 701 (2): 2019–2032. arXiv: . Bibcode:2009ApJ...701.2019L. doi:10.1088/0004-637X/701/2/2019.
- Rhee, Joseph H.; Song, Inseok; Zuckerman, B. (2007). "Warm dust in the terrestrial planet zone of a sun-like Pleiad: collisions between planetary embryos?". Astrophysical Journal. 675 (1): 777–783. arXiv: . Bibcode:2008ApJ...675..777R. doi:10.1086/524935.
- Song, Inseok; et al. (21 July 2005). "Extreme collisions between planetesimals as the origin of warm dust around a Sun-like star". Nature. 436 (7049): 363–365. Bibcode:2005Natur.436..363S. doi:10.1038/nature03853. PMID 16034411.
- Jones, J. H. (1998). "Tests of the Giant Impact Hypothesis" (PDF). Lunar and Planetary Science. Origin of the Earth and Moon Conference. Monterey, California.
- Saal, Alberto E.; et al. (July 10, 2008). "Volatile content of lunar volcanic glasses and the presence of water in the Moon's interior". Nature. 454 (7201): 192–195. Bibcode:2008Natur.454..192S. doi:10.1038/nature07047. PMID 18615079.
- Taylor, Stuart R. (1997). "The Bulk Composition of the Moon" (PDF). 37. Lunar and Planetary Science. Bibcode:2002M&PSA..37Q.139T. Retrieved 2010-03-21.
- Galimov, E. M.; Krivtsov, A. M. (December 2005). "Origin of the Earth-Moon System" (PDF). Journal of Earth Systems Science. 114 (6): 593–600. Bibcode:2005JESS..114..593G. doi:10.1007/BF02715942. Retrieved 2011-12-10.
- Scott, Edward R. D. (December 3, 2001). "Oxygen Isotopes Give Clues to the Formation of Planets, Moons, and Asteroids". Planetary Science Research Discoveries (PSRD). Bibcode:2001psrd.reptE..55S. Retrieved 2010-03-19.
- Nield, Ted (September 2009). "Moonwalk" (PDF). Geological Society of London. p. 8. Retrieved 2010-03-01.
- Zhang, Junjun; Nicolas Dauphas; Andrew M. Davis; Ingo Leya; Alexei Fedkin (25 March 2012). "The proto-Earth as a significant source of lunar material". Nature Geoscience. 5: 251–255. Bibcode:2012NatGe...5..251Z. doi:10.1038/ngeo1429.
- Koppes, Steve (March 28, 2012). "Titanium paternity test fingers Earth as moon's sole parent". UChicagoNews. Retrieved August 13, 2012.
- Alemi, Alex; Stevenson, D. (September 2006), "Why Venus has No Moon", Bulletin of the American Astronomical Society, 38: 491, Bibcode:2006DPS....38.0703A
- Sheppard, Scott S.; Trujillo, Chadwick A. (July 2009), "A survey for satellites of Venus", Icarus, 202 (1): 12–16, arXiv: , Bibcode:2009Icar..202...12S, doi:10.1016/j.icarus.2009.02.008
- Lewis, K. (February 2011), "Moon formation and orbital evolution in extrasolar planetary systems - A literature review", in Bouchy, F.; Díaz, R.; Moutou, C., Detection and Dynamics of Transiting Exoplanets, EPJ Web of Conferences, 11, p. 04003, Bibcode:2011EPJWC..1104003L, doi:
- Belbruno, E.; Gott III, J. Richard (2005). "Where Did The Moon Come From?". The Astronomical Journal. 129 (3): 1724–1745. arXiv: . Bibcode:2005AJ....129.1724B. doi:10.1086/427539.
- Howard, E. (July 2005), "The effect of Lagrangian L4/L5 on satellite formation", Meteoritics & Planetary Science, 40: 1115, Bibcode:2005M&PS...40.1115H, doi:10.1111/j.1945-5100.2005.tb00176.x
- Halliday, Alex N (November 28, 2008). "A young Moon-forming giant impact at 70–110 million years accompanied by late-stage mixing, core formation and degassing of the Earth". Philosophical Transactions of the Royal Society A. Philosophical Transactions of the Royal Society. 366 (1883): 4163–4181. Bibcode:2008RSPTA.366.4163H. doi:10.1098/rsta.2008.0209. PMID 18826916.
- Jacobson, Seth A. (April 2014), "Highly siderophile elements in Earth's mantle as a clock or the Moon-forming impact", Nature, 508: 84–87, arXiv: , Bibcode:2014Natur.508...84J, doi:10.1038/nature13172
- Than, Ker (May 6, 2008). "Did Earth once have multiple moons?". New Scientist. Reed Business Information Ltd. Retrieved 2011-12-10.
- Jutzi, M.; Asphaug, E. (August 4, 2011), "Forming the lunar farside highlands by accretion of a companion moon", Nature, 476: 69–72, Bibcode:2011Natur.476...69J, doi:10.1038/nature10289, PMID 21814278
- Choi, Charles Q. (August 3, 2011), "Earth Had Two Moons That Crashed to Form One, Study Suggests", Yahoo News, retrieved 2012-02-24
Unlike mammalian red blood cells, those of other vertebrates such as fish, reptiles, and birds retain nuclei, although inactive ones. Losing the nucleus enables the red blood cell to contain more oxygen-carrying hemoglobin, so that more oxygen can be transported in the blood, boosting our metabolism.
Scientists have struggled to understand the mechanism by which maturing red blood cells eject their nuclei. Now, researchers in the lab of Whitehead Member Harvey Lodish have modeled the complete process in vitro in mice, reporting their findings in Nature Cell Biology online on February 10, 2008. The first mechanistic study of how a red blood cell loses its nucleus, the research sheds light on one of the most essential steps in mammalian evolution.
It was known that as a mammalian red blood cell nears maturity, a ring of actin filaments contracts and pinches off a segment of the cell that contains the nucleus, a type of “cell division.” The nucleus is then swallowed by macrophages (one of the immune system’s quick-response troops). The genes and signaling pathways that drive the pinching-off process, however, were a mystery.
“Using a cell-culture system we were actually able to watch the cells divide, go through hemoglobin synthesis and then lose their nuclei,” says Lodish, who is also a professor of biology at Massachusetts Institute of Technology. “We discovered that the proteins Rac 1, Rac 2 and mDia2 are involved in building the ring of actin filaments.”
“We were very interested to find that Rac 1 and Rac 2 were involved in disposing of the nuclei of red blood cells,” says Peng Ji, lead author and postdoctoral researcher in the Lodish lab. “These proteins are known for their role in creating actin fibers in many body cells and are a necessary component of many important cellular functions, including the cell division that supports cell growth.”
His cell-culture system began with red blood cell precursors drawn from an embryonic mouse liver (in mammalian embryos, the liver is the main producer of such cells, rather than bone marrow as in adults). The cultured cells, synchronized to develop together, divided four or five times before losing their nuclei and becoming immature red blood cells. The researchers used simple fluorescence-based assays that enabled them to probe the changes in the red blood cells through the different stages leading up to the loss of the nucleus.
The researchers plan to further investigate the entire process of red blood cell formation, which may lead to insights about genetic alterations that underlie certain red blood cell disorders.
“During normal cell division, each daughter cell receives half the DNA,” comments Lodish. “In this case, when the red blood cell divides, one daughter cell gets all the DNA. What’s fascinating is that in this case, that daughter cell gets eaten by macrophages. Until now, scientists were unable to study these cells because they were unable to see them.”
Cristin Carr | EurekAlert!
Waikato dating laboratory
Along with hydrogen, nitrogen, oxygen, phosphorus, and sulfur, carbon is a building block of biochemical molecules ranging from fats, proteins, and carbohydrates to active substances such as hormones. All carbon atoms have a nucleus containing six protons. Ninety-nine percent of these also contain six neutrons. These 6-proton, 6-neutron atoms are said to have a mass of 12 and are referred to as "carbon-12." The nuclei of the remaining one percent of carbon atoms contain not six but either seven or eight neutrons in addition to the standard six protons.
AMS ¹⁴C dating is a recent advancement that enables us to date samples as small as a grain of rice (and sometimes smaller), with the advantage of minimal damage to artwork and other artefacts. It allows radiocarbon dating of carefully selected organic pigments, beeswax, excavated charcoal and compounds living above or below rock art.
Buckland's immediate successors did a little better. They determined that the Red Lady was in fact a man, and that the ornaments resembled those found at much older sites in continental Europe.
For more information on radiocarbon dating, see the Waikato Radiocarbon Dating Laboratory web page at
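As a sketch of how such measurements become dates: by the standard radiocarbon convention (not specific to any one laboratory), the conventional radiocarbon age is derived from the measured ¹⁴C fraction using the Libby mean-life of 8,033 years.

```python
import math

# Conventional radiocarbon age (standard convention; illustrative only).
# LIBBY_MEAN_LIFE is the agreed constant of 8,033 years
# (Libby half-life of 5,568 years divided by ln 2).
LIBBY_MEAN_LIFE = 8033.0  # years

def radiocarbon_age(c14_fraction: float) -> float:
    """Age in radiocarbon years BP from the measured 14C activity,
    expressed as a fraction of the modern standard."""
    if not 0.0 < c14_fraction <= 1.0:
        raise ValueError("fraction must be in (0, 1]")
    return -LIBBY_MEAN_LIFE * math.log(c14_fraction)

# A sample retaining half its 14C dates to one Libby half-life:
# radiocarbon_age(0.5) ≈ 5568 radiocarbon years BP
```

Such conventional ages are then calibrated against tree-ring and other records to obtain calendar dates.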
Unicellular microalgae smell dissolved minerals in the water, as chemists at the University of Jena demonstrate in the current issue of “Nature Communications”
Diatoms are unicellular algae that are native in many waters. They are a major component of marine phytoplankton and the food base for a large variety of marine organisms. In addition, they produce about one fifth of the oxygen in the atmosphere and are therefore a key factor for our global climate.
However, these algae, which measure only a few micrometers, have yet another amazing ability: they can “smell” stones. “To be more precise, these algae are able to locate dissolved silicate minerals,” Prof. Dr. Georg Pohnert, the chair of Instrumental Analytics at the Friedrich Schiller University in Jena, Germany, explains.
A recent study by Pohnert and his research team published in the current issue of “Nature Communications” demonstrates that diatoms are not only able to trace silicate minerals in the water. Moreover, they can even move actively to areas where the concentration of silicates is especially high (DOI: 10.1038/ncomms10540).
Algae need silicate to build their strong mineral cell walls, which are composed of two overlapping parts like a cardboard shoe box with a lid. During cell division, each of the two new cells receives one half of the box and regenerates the missing lid. “The algae have to search their environment for the building material,” says Pohnert, who is also a Fellow at the Jena Max Planck Institute for Chemical Ecology.
For their study, the researchers from Jena and their colleagues from the University of Ghent, Belgium, observed and filmed Seminavis robusta diatoms under the microscope. The video shows what happens when algae are fed with a single silicate-loaded granule:
The tiny single-cell organisms, which grow in a biofilm on a solid surface, perform back and forth moves to approach the silicate source in the center of the screen and virtually “gobble it up”. The algae are able to cover a distance of two micrometers per second, as shown in fast motion in the video. “It becomes obvious that diatom-dominated biofilms are actually constantly moving,” Pohnert points out.
How the algae achieve this target-oriented movement remains to be elucidated. “We currently do not know which receptors the algae have or which mechanisms mediate the perception,” says Karen Grace Bondoc from Pohnert’s research team. The PhD student, who is a fellow of the International Max Planck Research School “Exploration of Ecological Interactions with Molecular and Chemical Techniques”, is the first author of the publication. In her PhD project she studies complex interactions of the organisms in marine biofilms.
However, the scientists showed that the diatoms were attracted solely by the odor of the silicate. When the researchers replaced the silicate mineral with structurally similar germanium-containing salts, which are toxic to the algae, the algae moved away from the mineral source.
Even though the experiments are pure basic research, the Jena chemists see potential for practical application in the long term. “If we understand the processes that make the algae colonize one particular area or avoid other regions, we could use this information to selectively design surfaces and materials in such a way that they stay free of algae,” Pohnert emphasizes. Such materials could be used for the hulls of ships or water pipes, which are often damaged by algal colonization.
Bondoc, K. G., Heuschele, J., Gillard, J., Vyverman, W., Pohnert, G. (2016). Selective silica-directed motility in diatoms. Nature Communications. DOI: 10.1038/ncomms10540
Prof. Dr. Georg Pohnert
Institute for Inorganic and Analytical Chemistry
Friedrich Schiller University Jena
Lessingstrasse 8, 07743 Jena
Phone: +49 3641 948170
http://www.uni-jena.de/unijenamedia/Bilder/presse/researchnews/Kieselalgen_Pohne... - download the video of diatoms movement (AG Pohnert/FSU)
Dr. Ute Schönfelder | idw - Informationsdienst Wissenschaft
The latest achievement in astronomical accuracy is an atomic clock that, had it been running at the time of the Big Bang, would still be correct to within a second today. This atomic clock, built by the National Institute of Standards and Technology and the University of Colorado, was last year awarded the title of most accurate. Its creators have made it three times more so since then, as a new paper in Nature Communications explains. Now it won’t lose or gain a second over 15 billion years.
Like other atomic clocks, this one observes the predictable oscillations of atoms to judge exactly how much time has passed. In this case, it’s strontium atoms, formed into a column-like structure and excited by a very specific wavelength of red laser light. This light is absorbed and cast off at a rate of some 430 trillion hertz — think of it like a second hand that has to tick 430 trillion times to get around the face of a clock. That makes for a heck of a stopwatch, which is just what scientists use it for.
Knowing exactly how much time has passed is useful for all manner of experimental purposes, as well as for coordinating with satellites for location purposes. This clock is so precise that it can detect the changing effects of gravity on the passage of time, letting it sense differences in altitude as small as an inch. The researchers won’t stop there, though, and are already planning experiments for when the clock’s accuracy is even further improved.
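The headline figure translates into a fractional accuracy that can be checked with one line of arithmetic (the Julian-year length is the only assumption here):

```python
# Fractional frequency error implied by "one second in 15 billion years".
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # Julian year ≈ 3.156e7 s
total_seconds = 15e9 * SECONDS_PER_YEAR  # ≈ 4.7e17 s elapsed
fractional_error = 1.0 / total_seconds   # ≈ 2.1e-18
```

At the clock's roughly 430-trillion-hertz tick rate, that amounts to resolving a drift of about two parts in 10¹⁸.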
"If we can make a clock 1,000 times more accurate, we could hear the symphony of the universe" http://t.co/IfyEmRn4Mp
— Los Angeles Times (@latimes) April 22, 2015
Authors: Antonio Puccini
With neutron decay, a proton and an electron (e-) are emitted. The energy gap, which should be offset by the emission of a 3rd particle, falls randomly between 0.511 and 0.7828 MeV. These values correspond to those of a more or less accelerated electron, but not to those of a neutrino, whose mass is considered to be ≤ 0.01 electron masses. Pauli and Fermi hypothesized that this 3rd particle should be free of electric charge and provided with the same mass and spin as an electron. Such requirements may be fully met by an electron without electric charge: a neutral electron (e°), equally safeguarding all conservation laws. If we analyze the properties of this possible particle, they seem to coincide with those attributed to the Majorana spinor, or fermion: that is, a massive particle, free of electric charge, self-conjugated, i.e. identical to its antiparticle (with the exception of the spin: antiparallel): ↓e° ≡ ē°↑
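The quoted 0.7828 MeV upper bound is the Q-value of free-neutron decay, which can be recovered from standard particle masses (rounded CODATA values; this check is ours, not the paper's):

```python
# Rest masses in MeV/c^2 (rounded CODATA values)
M_NEUTRON = 939.565
M_PROTON = 938.272
M_ELECTRON = 0.511

# Energy released when a neutron decays into a proton and an electron;
# this is the maximum energy left over to share with a third particle.
q_value = M_NEUTRON - M_PROTON - M_ELECTRON  # ≈ 0.782 MeV
```

The lower figure, 0.511 MeV, is simply the electron rest mass itself.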
Comments: 12 Pages.
[v1] 2017-07-05 05:52:44
Summary: Researchers have identified a new set of matching barrel structures two layers deeper in the cortex than whisker barrels.
Source: Brown University.
Because they provide an exemplary physiological model of how the mammalian brain receives sensory information, neural structures called “mouse whisker barrels” have been the subject of study by neuroscientists around the world for nearly 50 years. A new study shows that despite all that prior scrutiny, significant discoveries remain to be made. Specifically, researchers at Brown University have found a previously unknown set of matching barrel structures two layers deeper into the cortex than the whisker barrels, providing a more complete picture of the circuitry involved in handling sensory information.
While humans don’t use whiskers to perceive their surroundings, the new findings still serve to deepen scientists’ understanding of how two brain regions — the cortex, which processes information, and the thalamus, which feeds that information to the cortex — communicate, said study senior author Barry Connors, a professor of neuroscience at Brown. The barrels discovered in 1970 in the fourth layer of the cortex were presumed to handle much of this cross-talk, and while that may still be true, the finding of parallel “infrabarrels” in the sixth layer suggests that the layer closest to the thalamus may play a greater role than previously realized.
“There is a good chance that these infrabarrels reveal a kind of circuit architecture that wasn’t appreciated before,” Connors said.
It’s vital to understand these circuits given how important cortex-governed behaviors such as sensory processing and attention are, added lead author Shane Crandall, a former Brown postdoctoral scholar who is now an assistant professor at Michigan State University. This has proven difficult — even in the well-studied whisker barrel system — because of the immense diversity of cells in cortical layers.
“Our study focused on the deepest and perhaps the most mysterious layer of the neocortex, layer six,” Crandall said. “Our results reveal discrete neocortical circuit modules specialized for linking long-distance inputs with specific outputs, thus providing a framework for understanding the functional organization of layer six.”
What makes the brain science behind mouse whiskers so compelling to scientists is that it is so well organized, Connors said. Each whisker connects into a distinct circuit that can be cleanly traced through the brainstem, into the thalamus, and into an exact mapping in which each barrel in layer four corresponds to single whisker. But more generally in humans and mice alike, there is a map of neurons in the cortex for sensations from all over the body. The cells in the somatosensory cortex that process touch in a person’s right pinky, for example, are specific and different than the cells that process sensation in the left elbow. Meanwhile, just as these cells take in information from the thalamus, many more cells send messages from the cortex back to the thalamus, Connors said, perhaps to manage the cortex’s attention to all this incoming sensory information.
“The cortex is not only receiving information about the world through the thalamus, it is also regulating the information it receives,” Connors said. “The brain is constantly making decisions about what kind of information to pay attention to and what not to pay attention to.”
A new set of barrels
In the new study, Connors, Crandall and co-authors sought to understand how cells in layer six were organized and whether they had a role in this cross-talk. To do that, they used mice that were genetically engineered to express different fluorescing proteins in very specific cell types. For example, they were able to get “corticothalamic” (CT) neurons — ones that project from the cortex to the thalamus — in layer six to light up yellow.
When they did that, they saw that those CT neurons were tightly grouped into structures that were not only very similar to the famous barrels in layer four (which they made to glow red) but also directly below them in columns of circuitry that extend across all the layers of the cortex.
They continued their investigation of layer six neurons using the technique of optogenetics — in which specific neurons can be engineered to become activated or suppressed by pulses of visible light. The technology allowed them to stimulate different cells in the thalamus to see which neurons those cells might excite in layer six. They found that while the thalamic neurons did not have much effect at all on the CT neurons, they excited many of the neurons bunched into the spaces between the layer six infrabarrels. These neurons communicate within the cortex and are therefore called corticocortical (CC) neurons.
The findings suggest that at least some of the cortex’s communications to the thalamus are relayed by layer six CT cells in infrabarrels that are arranged just like the barrels in layer four, Connors said. They also suggest that the thalamus not only sends input for the cortex to process in layer four, but also to CC neurons between the infrabarrels in layer six.
One of the next questions Crandall now wants to ask is which other cells in the cortex these CC cells that receive direct thalamic input may be “talking” to.
In addition to Crandall and Connors, the paper’s other authors are Saundra Patrick and Scott Cruikshank.
Funding: The National Institutes of Health (grants K99-NS096108, F32-NS084763, P20-GM103645, R01-NS050434, R01-NS100016) and the National Science Foundation (award 1632738) funded the research.
Source: Kevin Stacey – Brown University
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is credited to Crandall et al.
Original Research: Full open access research for “Infrabarrels Are Layer 6 Circuit Modules in the Barrel Cortex that Link Long-Range Inputs and Outputs” by Shane R. Crandall, Saundra L. Patrick, Scott J. Cruikshank, and Barry W. Connors in Cell Reports. Published online December 12 2017 doi:10.1016/j.celrep.2017.11.049
Infrabarrels Are Layer 6 Circuit Modules in the Barrel Cortex that Link Long-Range Inputs and Outputs
•Layer 6a of the somatosensory cortex has barrel-like structures called infrabarrels
•Corticothalamic cells group within and corticocortical cells between infrabarrels
•Inputs from somatosensory thalamus selectively target these two neuron types
•Synaptic and intrinsic properties control corticothalamic responses to thalamic input
The rodent somatosensory cortex includes well-defined examples of cortical columns—the barrel columns—that extend throughout the cortical depth and are defined by discrete clusters of neurons in layer 4 (L4) called barrels. Using the cell-type-specific Ntsr1-Cre mouse line, we found that L6 contains infrabarrels, readily identifiable units that align with the L4 barrels. Corticothalamic (CT) neurons and their local axons cluster within the infrabarrels, whereas corticocortical (CC) neurons are densest between infrabarrels. Optogenetic experiments showed that CC cells received robust input from somatosensory thalamic nuclei, whereas CT cells received much weaker thalamic inputs. We also found that CT neurons are intrinsically less excitable, revealing that both synaptic and intrinsic mechanisms contribute to the low firing rates of CT neurons often reported in vivo. In summary, infrabarrels are discrete cortical circuit modules containing two partially separated excitatory networks that link long-distance thalamic inputs with specific outputs.
The Water Cycle
The lithosphere, oceans and the atmosphere form the largest reservoirs on earth. The main link between these reservoirs is the hydrological cycle, which provides fresh water for humans, continental ecosystem functions, weathering, and sediment transport, and is co-responsible for the temperature equilibration on earth. The driving force of the water cycle is solar radiation, contributing with an average of 10²⁴ J/year to water evaporation; for comparison, the European Community consumes about 7×10¹⁹ J/year of energy.
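The two energy figures put the scale of the cycle in perspective; a one-line check, taking the solar evaporation figure as 10²⁴ J/year as stated:

```python
# Annual energy figures from the text, in joules per year
solar_evaporation = 1e24  # solar energy driving water evaporation
eu_consumption = 7e19     # European Community energy consumption

ratio = solar_evaporation / eu_consumption  # ≈ 14,000
```

Solar-driven evaporation thus moves roughly fourteen thousand times more energy each year than the stated European consumption.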
Keywords: Hydraulic Conductivity · Groundwater Recharge · Unsaturated Zone · Water Cycle · Vadose Zone
Evolution of the Solar System
by Hannes Alfven, Gustaf Arrhenius
Publisher: NASA 1976
A realistic attempt to reconstruct the early history of the solar system. The authors chose a procedure which reduces speculation as much as possible and connects the evolutionary models as closely as possible to experiment and observation.
The Solar System comprises the Sun and its planetary system, as well as a number of dwarf planets, satellites, and other objects that orbit the Sun. It formed 4.6 billion years ago from the gravitational collapse of a giant molecular cloud.
by Philip J. Armitage - arXiv
An introduction to the theory of the formation and early evolution of planetary systems. Topics covered: the structure, evolution and dispersal of protoplanetary disks; the formation of planetesimals, terrestrial and gas giant planets; etc.
- Pergamon Press
The theme of this book is the study of basaltic volcanism on the terrestrial planets as a stage in planetary evolution: to use the eruption of lava from the interior of a planet as evidence of the thermal and chemical processes of the planet.
Mars Science Laboratory is a robotic space probe mission to Mars launched by NASA, which landed a rover Curiosity. The objectives include investigating Mars' habitability, studying its climate and geology, and collecting data for a manned mission. | <urn:uuid:8bbe2ff6-b2dc-4001-91d5-d9bfc3edf9ce> | 3.09375 | 289 | Content Listing | Science & Tech. | 26.273636 | 95,499,596 |
A supervolcano is a large volcano that has had an eruption of magnitude 8, which is the largest value on the Volcanic Explosivity Index (VEI). This means the volume of deposits for that eruption is greater than 1,000 cubic kilometers (240 cubic miles).
Supervolcanoes occur when magma in the mantle rises into the crust but is unable to break through it, and pressure builds in a large and growing magma pool until the crust can no longer contain it. This can occur at hotspots (for example, Yellowstone Caldera) or at subduction zones (for example, Toba). Another setting for the eruption of very large amounts of volcanic material is in large igneous provinces, which can cover huge areas with lava and volcanic ash, causing long-lasting climate change (such as the triggering of a small ice age) that can threaten species with extinction. The Oruanui eruption of New Zealand's Taupo Volcano (about 26,500 years ago) was the world's most recent VEI-8 super eruption.
The origin of the term "supervolcano" is linked to an early 20th-century scientific debate about the geological history and features of the Three Sisters volcanic region of Oregon in the United States. In 1925, Edwin T. Hodge suggested that a very large volcano, which he named Mount Multnomah, had existed in that region. He believed that several peaks in the Three Sisters area are the remnants of Mount Multnomah after it had been largely destroyed by violent volcanic explosions, similar to Mount Mazama. In 1948, the possible existence of Mount Multnomah was ignored by volcanologist Howel Williams in his book The Ancient Volcanoes of Oregon. The book was reviewed in 1949 by another volcanologist, F. M. Byers Jr. In the review, Byers refers to Mount Multnomah as a supervolcano. Although Hodge's suggestion that Mount Multnomah is a supervolcano was rejected long ago, the term supervolcano was popularised by the BBC popular science television program Horizon in 2000 to refer to eruptions that produce extremely large amounts of ejecta.
The term megacaldera is sometimes used for caldera supervolcanoes, such as the Blake River Megacaldera Complex in the Abitibi greenstone belt of Ontario and Quebec, Canada. Eruptions that rate VEI 8 are termed "super eruptions". Though there is no well-defined minimum explosive size for a "supervolcano", there are at least two types of volcanic eruptions that have been identified as supervolcanoes: large igneous provinces and massive eruptions.
Large igneous provinces
Large igneous provinces, such as Iceland, the Siberian Traps, the Deccan Traps, and the Ontong Java Plateau, are extensive regions of basalts on a continental scale resulting from flood basalt eruptions. When created, these regions often occupy several thousand square kilometres and have volumes on the order of millions of cubic kilometres. In most cases, the lavas are laid down over several million years. They release large amounts of gases.
The Réunion hotspot produced the Deccan Traps about 66 million years ago, coincident with the Cretaceous–Paleogene extinction event. The scientific consensus is that a meteor impact was the cause of the extinction event, but the volcanic activity may have caused environmental stresses on extant species up to the Cretaceous–Paleogene boundary. Additionally, the largest flood basalt event (the Siberian Traps) occurred around 250 million years ago and was coincident with the largest mass extinction in history, the Permian–Triassic extinction event, although it is unknown whether it was solely responsible for the extinction event.
Such outpourings are not explosive, though lava fountains may occur. Many volcanologists consider that Iceland may be a large igneous province that is currently being formed. The last major outpouring occurred in 1783–84 from the Laki fissure which is approximately 40 km (25 mi) long. An estimated 14 km3 (3.4 cu mi) of basaltic lava was poured out during the eruption.
Massive explosive eruptions
Volcanic eruptions are classified using the Volcanic Explosivity Index, or VEI. It is a logarithmic scale, which means that an increase of one in VEI number is equivalent to a tenfold increase in volume of erupted material. VEI 7 or VEI 8 eruptions are so powerful that they often form circular calderas rather than cones because the downward withdrawal of magma causes the overlying rock mass to collapse into the empty magma chamber beneath it.
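Because the scale is logarithmic, the volume ratio implied by two VEI numbers can be computed directly. A minimal sketch (the helper name is ours, not part of any standard):

```python
def volume_ratio(vei_a, vei_b):
    """Approximate ratio of erupted volumes implied by two VEI numbers.

    Each whole VEI step corresponds to roughly a tenfold increase in
    the volume of erupted material.
    """
    return 10 ** (vei_a - vei_b)

# A VEI-8 eruption ejects roughly ten times the material of a VEI-7,
# and a thousand times that of a VEI-5.
print(volume_ratio(8, 7))  # 10
print(volume_ratio(8, 5))  # 1000
```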
Known super eruptions
VEI 7 eruptions, less colossal but still massive, have occurred in historical times. The only ones in the past 2,000 years are Taupo Volcano's Hatepe eruption c. 232, Baekdu Mountain in 946–947, the eruption of Mount Samalas in 1257, and Tambora in 1815.
| Name | Zone | Location | Event / notes | Years ago before 1950 (approx.) | Ejecta volume (approx.) |
| --- | --- | --- | --- | --- | --- |
| Mount Tambora | Sumbawa Island, West Nusa Tenggara | Indonesia | This eruption took place in 1815; 1816 became known as the Year Without a Summer. | 135 | 120 km³ |
| Mount Samalas | Lombok Island, West Nusa Tenggara | Indonesia | 1257 Samalas eruption; possible trigger of the Little Ice Age. | 693 | 130 km³ |
| Baekdu Mountain | Baikal Rift Zone | China/North Korea | 946 eruption of Paektu Mountain (Millennium Eruption), one of the largest volcanic eruptions in the past 2,000 years. | 1,004 | 100–120 km³ |
| Taupo Volcano (Lake Taupo) | Taupo Volcanic Zone | New Zealand, North Island | Hatepe eruption, AD 232 | 1,718 | 120 km³ |
| Thera (Santorini caldera) | South Aegean Volcanic Arc | Greece, Santorini | Minoan eruption, c. 1641 BC (±12) | 3,591 | 100 km³ |
| Kikai Caldera | | Japan, Ryukyu Islands | Kikai Caldera eruption, c. 4,300 BC | 6,300 | 150 km³ |
| Macauley Island | Kermadec Islands | New Zealand | Eruption 8,300 to 6,300 years ago | 6,300 | 100 km³ |
| Mount Mazama (Crater Lake) | Cascade Volcanic Arc | U.S., Oregon | Partially responsible for the formation of Crater Lake. | 6,578 | 100 km³ |
| Kurile Lake | Kamchatka Peninsula | Russia | Kurile Lake caldera eruption | 10,500 | 140–170 km³ |
| Aira Caldera | | Japan, Kyūshū | Aira Caldera eruption | 22,000 | 450 km³ |
| Campanian Ignimbrite eruption | Campi Flegrei (Phlegraean Fields) | Italy, Naples | | 39,280 | 300 km³* |
| Rotoiti Ignimbrite | Taupo Volcanic Zone | New Zealand, North Island | Rotoiti Ignimbrite | 50,000 | 240 km³ |
| Lake Maninjau | Lake Maninjau, West Sumatra | Indonesia | | 52,000 | 220–250 km³ |
| Reporoa Caldera | Taupo Volcanic Zone | New Zealand, North Island | | 230,000 | 100 km³ |
| Mamaku Ignimbrite | Taupo Volcanic Zone | New Zealand, North Island | Rotorua Caldera | 240,000 | 280 km³ |
| Matahina Ignimbrite | Taupo Volcanic Zone | New Zealand, North Island | Haroharo Caldera | 280,000 | 120 km³ |
| Mount Aso | | Japan, Kyūshū | Four large eruptions between 300,000 and 80,000 years ago. | 300,000 | 600 km³ |
| Long Valley Caldera | Bishop Tuff | U.S., California | | 760,000 | 600 km³ |
| Mangakino | Taupo Volcanic Zone | New Zealand, North Island | Three eruptions from 0.97 to 1.23 million years ago | 970,000 | 300 km³ |
| Valles Caldera | Jemez volcanic field | U.S., New Mexico | Two eruptions at 1.25 and 1.61 million years ago | 1,250,000 | 600 km³ |
| Henry's Fork Caldera | Yellowstone hotspot (Mesa Falls Tuff) | U.S., Idaho | Yellowstone hotspot | 1,300,000 | 280 km³ |
| | | | | | 825 km³ |
| Pastos Grandes Ignimbrite | Pastos Grandes Caldera | Bolivia | | 2,900,000 | 820 km³ |
| Heise Volcanic Field | Yellowstone hotspot | U.S., Idaho | Yellowstone hotspot | 6,400,000 | 750 km³ |
| Bruneau-Jarbidge caldera | Yellowstone hotspot | U.S., Idaho | Yellowstone hotspot; responsible for the Ashfall Fossil Beds 1,600 km to the east | | 950 km³ |
| Cerro Panizos | Altiplano-Puna volcanic complex | Argentina, Bolivia | | 12,000,000 | 250 km³ |
| Bennett Lake Volcanic Complex | Skukum Group | Canada, British Columbia/Yukon | | 50,000,000 | 850 km³ |
* means DRE (dense rock equivalent).
- In 2004, the TV show Naked Science covered supervolcanoes on the National Geographic Channel.
- In 2005, a two-part television disaster film called Supervolcano aired on BBC One, the Discovery Channel, and other television networks worldwide.
- Nova featured an episode "Mystery of the Megavolcano" in September 2006 examining such eruptions in the last 100,000 years.
- In 2006, the Sci-Fi Channel aired the documentary Countdown to Doomsday which featured a segment called "Supervolcano". The same year, ABC News aired the documentary Last Days on Earth, which featured a segment called "Supervolcano".
- In 2009, the apocalypse-themed film 2012 featured the super-eruption of the massive Yellowstone Caldera, a result of the Earth's core heating up. This made most of the United States uninhabitable.
- The Siberian Traps and Lake Toba were each featured in an episode of Animal Planet's prehistoric documentary Animal Armageddon, which speculated about the life forms they affected.
Netscape 1.0 (and above) and Microsoft's Internet Explorer support different-sized fonts within HTML documents. This should be distinguished from Headings.
The element is
<FONT SIZE=value>. Valid values range from 1-7. The default
FONT size is 3. The value given to size can optionally have a '+' or '-' character in front of it to specify that it is relative to the document
BASEFONT. The default
BASEFONT is 3, and can be changed with the
<BASEFONT SIZE ...> element.
<FONT SIZE=4>changes the font size to 4</FONT>
<FONT SIZE=+2>changes the font size to BASEFONT SIZE ... +2</FONT>
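How a relative SIZE value resolves against the BASEFONT can be modelled as simple arithmetic with clamping to the valid 1-7 range. This Python sketch is illustrative only; the function name is ours and exact browser behaviour may vary:

```python
def effective_size(size, basefont=3):
    """Resolve a FONT SIZE value against a BASEFONT setting.

    `size` may be an absolute value (e.g. "4") or a relative one
    (e.g. "+2" or "-1"); the result is clamped to the valid 1-7 range.
    """
    if size.startswith(("+", "-")):
        resolved = basefont + int(size)   # int("+2") == 2, int("-1") == -1
    else:
        resolved = int(size)
    return max(1, min(7, resolved))

print(effective_size("4"))   # 4
print(effective_size("+2"))  # 5  (default BASEFONT 3, plus 2)
print(effective_size("+7"))  # 7  (clamped to the maximum)
```

For instance, with the default BASEFONT of 3, SIZE=+2 yields an effective size of 5.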
Internet Explorer and Netscape support the ability to change the font colour as well as the face type, using COLOR and FACE attributes on the <FONT> element.
COLOR = #rrggbb or COLOR = color
The COLOR attribute sets the colour in which text will appear on the screen. #rrggbb is a hexadecimal value denoting an RGB colour. Alternatively, the colour can be set to one of the available pre-defined colours (see
<BODY BGCOLOR=...>). These colour names can be used for the
BGCOLOR, TEXT, LINK, and
VLINK attributes of the
<BODY> element as well. NOTE : The use of names for colouring text is currently only supported by the Internet Explorer and Netscape. Also, it should be noted that HTML attributes of this kind (that format the presentation of the content) can also be controlled via the use of style sheets.
<FONT COLOR="#FF0000">This text is red.</FONT>
<FONT COLOR="Red">This text is also red.</FONT>
would render as:
This text is red
FACE=name [,name] [,name]
The FACE attribute sets the typeface that will be used to display the text on the screen. The typeface must already be installed on the user's computer. Substitute typefaces can be specified in case the chosen typeface is not installed. If no match is found, the text will be displayed in the default type that the browser uses for displaying 'normal' text.
<FONT FACE="Courier New, Comic Sans MS">This text will be displayed in either Courier New or Comic Sans MS, depending on which fonts are installed on the browser's system.</FONT>
NOTE: When using this element, care should be taken to use font types that are likely to be installed on the user's computer if you want the text to appear as desired. Changing the font face is only supported by Netscape and Internet Explorer, while Internet Explorer can also set font sizes/colours/faces within a style sheet. Also note that in a document whose layout, colouring, etc. is defined in a style sheet, using the
<FONT> element will have no effect.
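The substitution behaviour described for FACE — take the first listed typeface that is installed, otherwise fall back to the browser default — can be sketched as follows (the helper name and the default face are ours, for illustration only):

```python
def pick_face(requested, installed, default="Times New Roman"):
    """Return the first requested typeface that is installed.

    `requested` is the comma-separated FACE attribute value; matching
    is case-insensitive. `default` stands in for the browser's own
    'normal' text face (assumed here, not specified by the reference).
    """
    installed_lower = {name.lower() for name in installed}
    for face in (f.strip() for f in requested.split(",")):
        if face.lower() in installed_lower:
            return face
    return default

faces = "Courier New, Comic Sans MS"
print(pick_face(faces, ["Arial", "Comic Sans MS"]))  # Comic Sans MS
print(pick_face(faces, ["Arial"]))                   # Times New Roman (fallback)
```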
The Internet Explorer 4.0 (and above) specific
TITLE attribute is used for informational purposes. If present, the value of the
TITLE attribute is presented as a ToolTip when the user's mouse hovers over the element.
The LANG attribute can be used to specify what language the
<FONT> element is using. It accepts any valid ISO standard language abbreviation (for example
"en" for English,
"de" for German, etc.). See the Document Localisation section for more details.
The LANGUAGE attribute can be used to specify expressly which scripting language Internet Explorer 4.0 uses to interpret any scripting information used in the
<FONT> element. The browser's default scripting language is assumed if no
LANGUAGE attribute is set.
CLASS="Style Sheet class name"
The CLASS attribute is used to specify the
<FONT> element as using a particular style sheet class. See the Style Sheets topic for details.
STYLE="In line style setting"
As well as using previously defined style sheet settings, the
<FONT> element can have in-line stylings attached to it. See the Style Sheets topic for details.
ID="Unique element identifier"
The ID attribute can be used to either reference a unique style sheet identifier, or to provide a unique name for the
<FONT> element for scripting purposes. Any
<FONT> element with an
ID attribute can be directly manipulated in script by referencing its
ID attribute, rather than working through the All collection to determine the element. See the Scripting introduction topic for more information.
POINT-SIZE="Font pt. size"
The POINT-SIZE attribute is Netscape specific and allows total sizing control over the contents of the element. For example:
<FONT POINT-SIZE="24">This is 24pt. font</FONT>
Note that Internet Explorer supports font sizings primarily through Style Sheets (as does Netscape 4.0).
This Netscape specific
<FONT> attribute can be used to specify a relative font weight (boldness). It accepts values ranging between 100 and 900 (in steps of 100, i.e. 100, 200, 300, etc.). It allows finer control of the font's boldness, rather than using
<STRONG>, or the
font-weight style sheet attribute.
Each <FONT> element in a document is an object that can be manipulated through scripting. Note that scripting of the
<FONT> element/object is only supported by Internet Explorer 4.0 in its Dynamic HTML object model. Netscape does not support direct scripting of the
<FONT> element at all.
The <FONT...> element/object supports all of the standard Dynamic HTML properties (i.e. className, document, id, innerHTML, innerText, isTextEdit, lang, language, offsetHeight, offsetLeft, offsetParent, offsetTop, offsetWidth, outerHTML, outerText, parentElement, parentTextEdit, sourceIndex, style, tagName and title). Details of these can be found in the standard Dynamic HTML properties topics.
In addition, the
<FONT> element also supports the color, face and size properties, which directly reflect their attribute settings (see above).
The <FONT...> element/object supports all of the standard Dynamic HTML methods (i.e. click, contains, getAttribute, insertAdjacentHTML, insertAdjacentText, removeAttribute, scrollIntoView and setAttribute). Details of these can be found in the standard Dynamic HTML Methods topics.
The <FONT...> element/object supports all of the standard Dynamic HTML events (i.e. onclick, ondblclick, ondragstart, onfilterchange, onhelp, onkeydown, onkeypress, onkeyup, onmousedown, onmousemove, onmouseout, onmouseover, onmouseup and onselectstart). Details of these can be found in the standard Dynamic HTML events topics.
© 1995-1998, Stephen Le Hunte
In mathematics and physics, a vector is a quantity having both magnitude and direction; it may be represented by a directed line segment. Many physical quantities are vectors, e.g., force, velocity, and momentum. Thus, in specifying a force, one must state not only how large it is but also in what direction it acts.
The simplest representation of a vector is as an arrow connecting two points. Thus, the symbol AB (written with an arrow above it) is used to designate the vector represented by an arrow from point A to point B, while BA designates a vector of equal magnitude in the opposite direction, from B to A. In order to compare vectors and to operate on them mathematically, however, it is necessary to have some reference system that determines scale and direction. Cartesian coordinates are often used for this purpose. In the plane, two axes and unit lengths along each axis serve to determine magnitude and direction throughout the plane. For example, if the point A mentioned above has coordinates (2,3) and the point B coordinates (5,7), the size and position of the vector are thus determined. The size of the vector in the x-direction is found by projecting the vector onto the x-axis, i.e., by dropping perpendicular line segments to the x-axis. The length of this projection is simply the difference between the x-coordinates of the two points A and B, or 5-2=3. This is called the x-component of the vector. Similarly, the y-component of the vector is found to be 7-3=4. A vector is frequently expressed by giving its components with respect to the coordinate axes; thus, our vector becomes [3,4].
Knowledge of the components of a vector enables one to compute its magnitude—in this case, 5, from the Pythagorean theorem [(3² + 4²)^1/2 = 5]—and its direction from trigonometry, once the lengths of the sides of the right triangle formed by the vector and its components are known. (Trigonometry can also be used to find the component of the vector as projected in some direction other than the x-axis or y-axis.) Since the vector points from A to B, both its components are positive; if it pointed from B to A, its components would be [-3,-4], its magnitude would be the same, and its direction would be opposite.
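The worked example above (the vector from A(2,3) to B(5,7)) can be checked with a few lines of Python using only the standard library:

```python
import math

# Vector from A(2, 3) to B(5, 7), as in the text.
ax, ay = 2, 3
bx, by = 5, 7

x_comp = bx - ax                                      # 3
y_comp = by - ay                                      # 4
magnitude = math.hypot(x_comp, y_comp)                # Pythagorean theorem: 5.0
direction = math.degrees(math.atan2(y_comp, x_comp))  # angle above the x-axis

print([x_comp, y_comp], magnitude)  # [3, 4] 5.0
```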
It is obvious that an infinite number of vectors can have the same components [3,4], since there are an infinite number of pairs of points in the plane with x- and y-coordinates whose respective differences are 3 and 4. All these vectors have the same magnitude and direction, being parallel to one another, and are considered equal. Thus, any vector with components a and b can be considered as equal to the vector [a,b] directed from the origin (0,0) to the point (a,b). The concept of a vector can be extended to three or more dimensions.
The addition, or composition, of two vectors can be accomplished either algebraically or graphically. For example, to add the two vectors U [-3,1] and V [5,2], one can add their corresponding components to find the resultant vector R [2,3], or one can graph U and V on a set of coordinate axes and complete the parallelogram formed with U and V as adjacent sides to obtain R as the diagonal from the common vertex of U and V.
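The algebraic route, adding corresponding components, takes one line per step:

```python
def add_vectors(u, v):
    """Componentwise vector addition (equivalent to the parallelogram rule)."""
    return [a + b for a, b in zip(u, v)]

U = [-3, 1]
V = [5, 2]
R = add_vectors(U, V)
print(R)  # [2, 3] -- the diagonal of the parallelogram with sides U and V
```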
Two different kinds of multiplication are defined for vectors in three dimensions. The scalar, or dot, product of two vectors, A and B, is a scalar, or quantity that has a magnitude but no direction, rather than a vector, and is equal to the product of the magnitudes of A and B and the cosine of the angle θ between them, or A ⋅ B=|A| |B| cos θ. The vector, or cross, product of A and B is a vector, A×B, whose magnitude is equal to |A| |B| sin θ and whose orientation is perpendicular to both A and B and pointing in the direction in which a right-hand screw would advance if turned from A to B through the angle θ. The vector product is an example of a kind of multiplication that does not follow the commutative law, since A×B=-B×A.
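Both products are easy to verify numerically; the unit vectors along the x- and y-axes make the sign flip of the cross product explicit:

```python
def dot(a, b):
    """Scalar (dot) product: |A||B|cos(theta)."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Vector (cross) product of two 3-D vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

A = [1, 0, 0]
B = [0, 1, 0]
print(dot(A, B))    # 0  (perpendicular vectors: cos 90 deg = 0)
print(cross(A, B))  # [0, 0, 1]
print(cross(B, A))  # [0, 0, -1] -- A x B = -(B x A)
```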
The components of a vector need not be constants but can also be variables and functions of variables. For example, the position of a body moving through space can be described by a vector whose x, y, and z components are each functions of time. The methods of the calculus may be applied to such vector functions, leading to the branch of mathematics known as vector analysis.
The more general extension of vectors leads to the concept of a vector space. A vector space is a set of elements, A, B, C,…, called vectors, for which the operations of addition of vectors and multiplication of a vector by a scalar are defined and which satisfies ten axioms relating to such properties as closure under both operations, associativity, commutativity, and existence of a zero vector, an additive inverse (negative of a vector), and a unit scalar.
The MySQL server maintains a host cache in memory that
contains information about clients: IP address, host name, and
error information. The host_cache
Performance Schema table exposes the contents of the host
cache so that it can be examined using
SELECT statements. This may
help you diagnose the causes of connection problems. See
the section "The host_cache Table".
The server uses the host cache for several purposes:
By caching the results of IP-to-host name lookups, the server avoids doing a DNS lookup for each client connection. Instead, for a given host, it needs to perform a lookup only for the first connection from that host.
The cache contains information about errors that occur during the connection process. Some errors are considered “blocking.” If too many of these occur successively from a given host without a successful connection, the server blocks further connections from that host. The
max_connect_errors system variable determines the number of permitted errors before blocking occurs. See Section B.5.2.5, “Host 'host_name' is blocked”.
The server uses the host cache for nonlocal TCP connections.
It does not use the cache for TCP connections established
using a loopback interface address (for example, 127.0.0.1 or ::1), or
for connections established using a Unix socket file, named
pipe, or shared memory.
For each new client connection, the server uses the client IP address to check whether the client host name is in the host cache. If not, the server attempts to resolve the host name. First, it resolves the IP address to a host name and resolves that host name back to an IP address. Then it compares the result to the original IP address to ensure that they are the same. The server stores information about the result of this operation in the host cache. If the cache is full, the least recently used entry is discarded.
The server handles entries in the host cache like this:
When the first TCP client connection reaches the server from a given IP address, a new cache entry is created to record the client IP, host name, and client lookup validation flag. Initially, the host name is set to
NULLand the flag is false. This entry is also used for subsequent client connections from the same originating IP.
If the validation flag for the client IP entry is false, the server attempts an IP-to-host name DNS resolution. If that is successful, the host name is updated with the resolved host name and the validation flag is set to true. If resolution is unsuccessful, the action taken depends on whether the error is permanent or transient. For permanent failures, the host name remains
NULLand the validation flag is set to true. For transient failures, the host name and validation flag remain unchanged. (In this case, another DNS resolution attempt occurs the next time a client connects from this IP.)
If an error occurs while processing an incoming client connection from a given IP address, the server updates the corresponding error counters in the entry for that IP. For a description of the errors recorded, see Section 18.104.22.168, “The host_cache Table”.
The server performs host name resolution using the
gethostbyname() system calls.
It is possible for a blocked host to become unblocked even
FLUSH HOSTS if activity
from other hosts has occurred since the last connection
attempt from the blocked host. This can occur because the
server discards the least recently used cache entry to make
room for a new entry if the cache is full when a connection
arrives from a client IP not in the cache. If the discarded
entry is for a blocked host, that host becomes unblocked.
The host cache is enabled by default. To disable it, set the
variable to 0, either at server startup or at runtime.
To disable DNS host name lookups, start the server with the
--skip-name-resolve option. In
this case, the server uses only IP addresses and not host
names to match connecting hosts to rows in the MySQL grant
tables. Only accounts specified in those tables using IP
addresses can be used. (A client may not be able to connect if
no account exists that specifies the client IP address.)
If you have a very slow DNS and many hosts, you might be able
to improve performance either by disabling DNS lookups with
--skip-name-resolve or by
increasing the value of
host_cache_size to make the
host cache larger.
To disallow TCP/IP connections entirely, start the server with
Some connection errors are not associated with TCP
connections, occur very early in the connection process (even
before an IP address is known), or are not specific to any
particular IP address (such as out-of-memory conditions). For
information about these errors, check the
status variables (see
Section 5.1.9, “Server Status Variables”). | <urn:uuid:af8a2ba5-510b-4fe7-a396-1e2b024e8504> | 3.09375 | 1,048 | Documentation | Software Dev. | 54.440367 | 95,499,659 |
Bacterioplankton and Carbon Turnover in a Dense Macrophyte Canopy
Studies on cascading trophic interactions in lakes have shown that planktonic food web changes may take place to the level of protozoans (reviewed by Carpenter and Kitchell, 1993; Riemann and Christoffersen, 1993). It is more unclear if and how cascading might influence bacterioplankton (Jeppesen et al., 1992; Christoffersen et al, 1993; Pace, 1993). From studies in oligo-mesotrophic temperate lakes, Pace (1993) concluded “that bacteria responded to changes in phytoplankton and increases in nutrients, but not to changes in Zooplankton.“ More generally, it was suggested that “trophic cascades do not have immediately obvious consequences for microbial processes in lakes” (Kitchell and Carpenter, 1993). In accordance, Jeppesen et al. (1992) found that a trophic cascade with high grazing by clado-cerans and a four- to sixfold reduction in phytoplankton biomass only slightly altered bacterioplankton production in two fish-manipulated shallow and eu-trophic Danish lakes.
KeywordsOrbital Period Radial Velocity Trophic Cascade Common Envelope Radial Velocity Data
Unable to display preview. Download preview PDF.
- Carpenter, S.R.; Kitchell, J.F., eds. The trophic cascade in lakes. Cambridge: Cambridge University Press; 1993.Google Scholar
- Jeppesen, E.; Sortkjaer, O.; Søndergaard, M.; Erlandsen, M. Impact of a trophic cascade on heterotrophic bacterioplankton production in two shallow fish-manipulated lakes. Arch. Hydrobiol. Beih. Ergebn. Limnol. 37:219–231; 1992.Google Scholar
- Kitchell, J.F.; Carpenter, S.R. Synthesis and new directions. In: Carpenter, S.R.; Kitchell, J.F., eds. The trophic cascade in lakes. Cambridge: Cambridge University Press; 1993: 332–350.Google Scholar
- Norland, S. The relationship between biomass and volume of bacteria. In: Kemp, P.F.; Sherr, B.F.; Sherr, E.B.; Cole, J.J., eds. Handbook of methods in aquatic microbial ecology. Boca Raton, FL: Lewis Publ.; 1993: 303–307.Google Scholar
- Pace, M.L. Heterotrophic microbial processes. In: Carpenter, S.R.; Kitchell, J.F., eds. The trophic cascade in lakes. Cambridge: Cambridge University Press; 1993: 252–277.Google Scholar
- Riemann, B.; Christoffersen, K. Microbial trophodynamics in temperate lakes. Mar. Microb. Food Webs 7:69–100; 1993.Google Scholar
- Søndergaard, M.; Middelboe, M. Measurements of paniculate organic carbon: a note on the use of glass fiber (GF/F) and Anodisc filters. Arch. Hydrobiol. 127: 73–85; 1993.Google Scholar
- Wright, R.T. Methods for evaluating the interaction of substrate and grazing as factors controlling planktonic bacteria. Arch. Hydrobiol. Beih. Ergebn. Limnol. 31: 229–242; 1988.Google Scholar | <urn:uuid:a4cc217e-8356-46f1-8ed2-7fb3fa971e11> | 2.71875 | 757 | Truncated | Science & Tech. | 57.18944 | 95,499,661 |
Deuterium, (D, or 2H), also called heavy hydrogen, isotope of hydrogen with a nucleus consisting of one proton and one neutron, which is double the mass of the nucleus of ordinary hydrogen (one proton). Deuterium has an atomic weight of 2.014. It is a stable atomic species found in natural hydrogen compounds to the extent of about 0.0156 percent.
…Urey and two collaborators detected deuterium by its atomic spectrum in the residue of a distillation of liquid hydrogen. Deuterium was first prepared in pure form by the electrolytic method of concentration: when a water solution of an electrolyte, such as sodium hydroxide, is electrolyzed, the hydrogen formed at the…
Deuterium was discovered (1931) by the American chemist Harold C. Urey (for which he was awarded the Nobel Prize for Chemistry in 1934) and his associates Ferdinand G. Brickwedde and George M. Murphy. Urey predicted a difference between the vapour pressures of molecular hydrogen (H2) and of a corresponding molecule with one hydrogen atom replaced by deuterium (HD) and, thus, the possibility of separating these substances by distillation of liquid hydrogen. The deuterium was detected (by its atomic spectrum) in the residue of a distillation of liquid hydrogen. Deuterium was first prepared in pure form in 1933 by Gilbert N. Lewis, using the electrolytic method of concentration discovered by Edward Wight Washburn. When water is electrolyzed—i.e., decomposed by an electric current (actually a water solution of an electrolyte, usually sodium hydroxide, is used)—the hydrogen gas produced contains a smaller fraction of deuterium than the remaining water, and, hence, deuterium is concentrated in the water. Very nearly pure deuterium oxide (D2O; heavy water) is secured when the amount of water has been reduced to about one hundred-thousandth of its original volume by continued electrolysis.
Deuterium enters into all chemical reactions characteristic of ordinary hydrogen, forming equivalent compounds. Deuterium, however, reacts more slowly than ordinary hydrogen, a criterion that distinguishes the two forms of hydrogen. Because of this property, among others, deuterium is extensively used as an isotopic tracer in investigations of chemical and biochemical reactions involving hydrogen.
The nuclear fusion of deuterium atoms or of deuterium and the heavier hydrogen isotope, tritium, at high temperature is accompanied by release of an enormous amount of energy; such reactions have been used in thermonuclear weapons. Since 1953, the stable solid substance lithium deuteride (LiD) has been used in place of both deuterium and tritium.
The physical properties of the molecular form of the isotope deuterium (D2) and the molecules of hydrogen deuteride (HD) are compared with those of the molecules of ordinary hydrogen (H2) in the Table.
|Comparison of the physical properties of molecular forms of hydrogen|
|*At 20.39 K.|
**At 22.54 K.
***At 23.67 K.
|ordinary hydrogen||hydrogen deuteride||deuterium|
|gram molecular volume of the solid at the triple point (cu cm)||23.25||21.84||20.48|
|triple point (K)||13.96||16.60||18.73|
|vapour pressure at triple point (mmHg)||54.0||92.8||128.6|
|boiling point (K)||20.39||22.13||23.67|
|heat of fusion at triple point (cal/mole)||28.0||38.1||47.0|
|heat of vaporization (cal/mole)||216*||257**||293***| | <urn:uuid:576e4f9b-f1d2-4bfa-ab73-393910763559> | 3.828125 | 813 | Knowledge Article | Science & Tech. | 45.432536 | 95,499,664 |
Drier conditions at the edges of forest patches slow down the decay of dead wood and significantly alter the cycling of carbon and nutrients in woodland ecosystems, according to a new study.
Forests around the world have become increasingly fragmented, and in the UK three quarters of woodland area lie within 100 metres of the forest edge. It has long been known that so-called 'edge effects' influence temperature and moisture (the 'microclimate') in woodlands, but the influence on the carbon cycle is largely unknown.
Researchers from the University of Exeter and Earthwatch in the UK combined experiments with mathematical modelling to fill this knowledge gap. Wood blocks were placed in Wytham Woods near Oxford at various distances from the forest edge, and left to decay over two years.
The measured decay rates were applied to a model of the surrounding landscape, to allow comparison between the current fragmented woodland cover and decay rates in continuous forest.
The research, published today in the journal Global Change Biology, shows that wood decay rates in the southern UK are reduced by around one quarter due to fragmentation. This effect is much larger than expected due to variation in temperatures and rainfall among years.
Dr Dan Bebber of the University of Exeter said: "We were surprised by the strength of the edge effect on wood decay, which we believe was driven by reduced moisture at the forest edge impairing the activity of saprotrophic fungi – those that live and feed on dead organic matter".
Wood decay, and the recycling of other biological matter like leaf litter, is driven by fungi and other microbes that are sensitive to temperature and moisture. The difference between the absorption of carbon dioxide via photosynthesis by trees, and the release of carbon by microbes, determines the overall carbon balance of the forest.
Dr Martha Crockatt of Earthwatch said: "Saprotrophic fungi control the cycling of carbon and nutrients from wood in forests, and their responses to changes in microclimate driven by fragmentation, and also climate change, will influence whether forests are a carbon source or sink".
The southern UK has a temperate climate with moderate temperatures and rainfall. Similar studies in different parts of the world, from the warm tropics to the cooler boreal regions, are needed to understand how edge effects on decomposition vary globally.
"Edge effects on moisture reduce wood decomposition rate in a temperate forest" by M. E. Crockatt & D. P. Bebber, is published in Global Change Biology.
Eleanor Gaskarth | Eurek Alert!
Innovative genetic tests for children with developmental disorders and epilepsy
11.07.2018 | Christian-Albrechts-Universität zu Kiel
Oxygen loss in the coastal Baltic Sea is “unprecedentedly severe”
05.07.2018 | European Geosciences Union
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:9677ee57-be26-474a-8e48-ae60dd27fabc> | 3.71875 | 1,101 | Content Listing | Science & Tech. | 36.301648 | 95,499,696 |
I've not really been educated on the applications of hyperbolic trig functions (i.e. cosh(x), sinh(x)). Could anyone help me out?
Turn on thread page Beta
Applications of Hyperbolic Trig Functions watch
- Thread Starter
- 09-02-2010 15:50
- Community Assistant
- 09-02-2010 17:18
Here are a few applications:
The curve is a catenary. Google it to find out about hanging chains and bridge archs.
The hyperbolic functions are connected to the trigonometric functions via complex numbers.
Alternatively, Google the Gudermannian function to find a direct connection between trigonometric and hyperbolic numbers. There is an interesting application here in the Mercator projection of the world map.
Vincenzo Riccati introduced the hyperbolic functions to solve cubic equations so there is another application there.
The rapidity of an object in special relativity is given by the formula .
The inverse sinh function is used to calculate the Roche Limit, the minimum distance at which a natural satellite may orbit a planet without being torn apart by tidal forces.
Many expressions involving the sum or difference of two squares can be integrated by making a hyperbolic substitution.
The parametric equations of the right hand branch of the unit rectangular hyperbola are and . | <urn:uuid:6c45d364-54f7-441a-96df-e574ef46e211> | 2.515625 | 284 | Comment Section | Science & Tech. | 37.920456 | 95,499,706 |
The gravitational constant (also known as the "universal gravitational constant", the "Newtonian constant of gravitation", or the "Cavendish gravitational constant"), denoted by the letter G, is an empirical physical constant involved in the calculation of gravitational effects in Sir Isaac Newton's law of universal gravitation and in Albert Einstein's general theory of relativity.
In Newton's law, it is the proportionality constant connecting the gravitational force between two bodies with the product of their masses and the inverse square of their distance. In the Einstein field equations, it quantifies the relation between the geometry of spacetime and energy–momentum.
The measured value of the constant is known with some certainty to four significant digits. In SI units its value is approximately 6.674×10−11 N⋅m2⋅kg−2.
The modern notation of Newton's law involving G was introduced in the 1890s by C. V. Boys. The first implicit measurement with an accuracy within about 1% is due to Henry Cavendish in a 1798 experiment.
According to Newton's law of universal gravitation, the attractive force (F) between two point-like bodies is directly proportional to the product of their masses (m1 and m2), and inversely proportional to the square of the distance, r (inverse-square law), between them:

F = G m1m2/r2
The constant of proportionality, G, is the gravitational constant. Colloquially, the gravitational constant is also called "Big G", for disambiguation with "small g" (g), which is the local gravitational field of Earth (equivalent to the free-fall acceleration). The two quantities are related by g = GM⊕/r⊕2 (where M⊕ is the mass of the Earth and r⊕ is the radius of the Earth).
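The two relations above can be checked numerically. The sketch below is illustrative only, using rounded values for G, M⊕ and r⊕ (not authoritative figures): it evaluates the force between two 1 kg point masses 1 m apart (numerically equal to G) and the implied surface gravity.

```python
# Illustrative, rounded constants (not authoritative values)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def gravitational_force(m1, m2, r, g_const=G):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r**2."""
    return g_const * m1 * m2 / r**2

# "Big G": the force between two 1 kg masses 1 m apart equals G numerically.
f_unit = gravitational_force(1.0, 1.0, 1.0)

# "small g": local free-fall acceleration, g = G * M_earth / r_earth**2
g_small = G * M_EARTH / R_EARTH**2
```

The round numbers recover the familiar g ≈ 9.8 m/s2, which is the sense in which "small g" is derived from "Big G".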
Newton's constant appears in the proportionality between the spacetime curvature and the energy density component of the stress–energy tensor. The scaled gravitational constant κ = 8πG/c4 ≈ 2.077×10−43 s2⋅m−1⋅kg−1 (depending on the choice of definition of the stress–energy tensor it can also be normalized as κ = 8πG/c2 ≈ 1.866×10−26 m⋅kg−1) is also known as Einstein's constant.
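Both normalizations of Einstein's constant follow directly from G and the speed of light; the following consistency sketch (rounded G, exact c) reproduces their orders of magnitude.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (rounded)
C = 2.99792458e8   # speed of light, m/s (exact by definition)

# Einstein's constant, kappa = 8*pi*G/c^4, in s^2 m^-1 kg^-1
kappa_c4 = 8 * math.pi * G / C**4

# Alternative normalization, 8*pi*G/c^2, in m kg^-1
kappa_c2 = 8 * math.pi * G / C**2
```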
Value and dimensions
The gravitational constant is a physical constant that is difficult to measure with high accuracy. This is because the gravitational force is extremely weak compared with other fundamental forces.[a]
In cgs units, G can be written as G ≈ 6.674×10−8 cm3⋅g−1⋅s−2.
In other words, in Planck units, G has the numerical value of 1.
Thus, in Planck units, and other natural units taking G as their basis, the gravitational constant cannot be measured as it is set to its value by definition. Depending on the choice of units, variation in a physical constant in one system of units shows up as variation of another constant in another system of units; variation in dimensionless physical constants is preserved independently of the choice of units; in the case of the gravitational constant, such a dimensionless value is the gravitational coupling constant,

αG = Gme2/(ħc) ≈ 1.75×10−45

where me is the electron mass.
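The dimensionless gravitational coupling constant for a pair of electrons can be evaluated from G, the electron mass, ħ and c. A minimal sketch with rounded, illustrative constant values:

```python
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2 (rounded)
M_E = 9.1093837e-31     # electron mass, kg (rounded)
HBAR = 1.054571817e-34  # reduced Planck constant, J s (rounded)
C = 2.99792458e8        # speed of light, m/s (exact)

# alpha_G = G * m_e^2 / (hbar * c): dimensionless, so its value is
# independent of the unit system chosen.
alpha_g = G * M_E**2 / (HBAR * C)
```

The result, about 1.75×10−45, illustrates just how weak gravity is between elementary particles.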
For situations where tides are important, the relevant length scales are solar radii rather than parsecs. In these units, the gravitational constant is:

G ≈ 1.908×105 R☉ (km/s)2 M☉−1
In orbital mechanics, the period P of an object in circular orbit around a spherical object obeys

GM = 3πV/P2

where V is the volume inside the radius of the orbit. It follows that

P2 = 3πV/(GM)

This way of expressing G shows the relationship between the average density of a planet and the period of a satellite orbiting just above its surface: for such an orbit, V/M = 1/ρ, so P2 = 3π/(Gρ).
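For a satellite skimming the surface of a uniform sphere, the period P = sqrt(3π/(Gρ)) depends only on the density ρ, not on the sphere's size. A quick numerical check using Earth's mean density (rounded, illustrative values):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2 (rounded)
RHO_EARTH = 5514.0  # mean density of the Earth, kg/m^3 (rounded)

def surface_orbit_period(rho, g_const=G):
    """Period of a circular orbit just above the surface: P = sqrt(3*pi/(G*rho))."""
    return math.sqrt(3 * math.pi / (g_const * rho))

p_seconds = surface_orbit_period(RHO_EARTH)
p_minutes = p_seconds / 60.0   # roughly 84 minutes for Earth
```

Any body with Earth's mean density, whatever its radius, yields the same ~84-minute grazing orbit.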
The above equation is exact only within the approximation of the Earth's orbit around the Sun as a two-body problem in Newtonian mechanics; the measured quantities contain corrections from the perturbations from other bodies in the solar system and from general relativity. From 1964 until 2012, however, it was used as the definition of the astronomical unit and thus held by definition. Since 2012, the AU is defined as 1.495978707×1011 m exactly, and the equation can no longer be taken as holding precisely.
The quantity GM—the product of the gravitational constant and the mass of a given astronomical body such as the Sun or Earth—is known as the standard gravitational parameter (also denoted μ). The standard gravitational parameter GM appears as above in Newton's law of universal gravitation, as well as in formulas for the deflection of light caused by gravitational lensing, in Kepler's laws of planetary motion, and in the formula for escape velocity.
This quantity gives a convenient simplification of various gravity-related formulas. For the Sun, GM☉ is known to ten digits' accuracy, 1.32712440018(9)×1020 m3s−2; for the Earth, GM⊕ is known to nine digits, 3.986004418(8)×1014 m3s−2, i.e. much more accurately than each factor independently.
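Because GM⊕ is known far more precisely than G or M⊕ separately, orbital formulas are best evaluated with μ directly. As an example, escape velocity from the Earth's surface, v = sqrt(2μ/r), using the GM⊕ value quoted above and a rounded mean radius (illustrative input):

```python
import math

MU_EARTH = 3.986004418e14  # standard gravitational parameter GM for Earth, m^3 s^-2
R_EARTH = 6.371e6          # mean radius of the Earth, m (rounded)

# Escape velocity depends only on the product GM, never on G and M separately.
v_escape = math.sqrt(2 * MU_EARTH / R_EARTH)   # ~11.2 km/s
```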
Calculations in celestial mechanics can also be carried out using the units of solar masses, mean solar days and astronomical units rather than standard SI units. For this purpose, the Gaussian gravitational constant was historically in widespread use, k = 0.01720209895, expressing the mean angular velocity of the Sun–Earth system measured in radians per day. The use of this constant, and the implied definition of the astronomical unit discussed above, was deprecated by the IAU in 2012.
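Since k is the mean angular velocity of the Sun–Earth system in radians per day, 2π/k recovers the length of the (Gaussian) year in days. A one-line check:

```python
import math

K_GAUSS = 0.01720209895  # Gaussian gravitational constant, rad/day

# One full revolution (2*pi radians) at angular rate k takes ~365.2569 days.
year_days = 2 * math.pi / K_GAUSS
```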
History of measurement
The gravitational constant appears in Newton's law of universal gravitation in its modern notation, but it is not named or discussed in Newton's Philosophiæ Naturalis Principia Mathematica, which merely postulates the inverse-square law of gravitation without attempting to estimate the absolute mass of planets.
Newton in Principia considered the possibility of measuring the strength of gravitational attraction by measuring the deflection of a pendulum in the vicinity of a large hill, but thought that the effect would be too small to be measurable. Nevertheless, Newton implicitly estimated the order of magnitude of the constant when he surmised that "the mean density of the earth might be five or six times as great as the density of water", which is equivalent to a gravitational constant of the order G ≈ (7±1)×10−11 m3⋅kg−1⋅s−2.
A measurement was first attempted in 1738 by Pierre Bouguer and Charles Marie de La Condamine, in their "Peruvian expedition" (French Geodesic Mission). Bouguer in 1740 downplayed the significance of their results, suggesting that the experiment had at least proved that the Earth could not be a hollow shell, as some thinkers of the day, including Edmond Halley, had suggested.
The Schiehallion experiment, proposed in 1772 and completed in 1776, was the first successful measurement of the mean density of the Earth, and thus indirectly of the gravitational constant. The result reported by Charles Hutton (1778) suggested a density of 4.5 g/cm3 (41⁄2 times the density of water), about 20% below the modern value. This immediately led to estimates on the densities and masses of the Sun, Moon and planets, sent by Hutton to Jérôme Lalande for inclusion in his planetary tables. As discussed above, establishing the average density of Earth is equivalent to measuring the gravitational constant, given Earth's mean radius and the mean gravitational acceleration at Earth's surface, by setting

G = gR⊕2/M⊕ = 3g/(4πR⊕ρ⊕)
Based on this, Hutton's 1778 result is equivalent to G ≈ 8×10−11 m3⋅kg−1⋅s−2.
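Using the uniform-sphere relation G = 3g/(4πR⊕ρ⊕) with modern rounded values for g and R⊕ (illustrative inputs that Hutton himself did not have in this form), his density of 4.5 g/cm3 indeed implies G of order 8×10−11:

```python
import math

G_SURFACE = 9.81   # mean surface gravity, m/s^2 (modern, rounded)
R_EARTH = 6.371e6  # mean radius of the Earth, m (modern, rounded)

def g_const_from_density(rho):
    """Invert the uniform-sphere relation g = (4/3)*pi*G*R*rho for G."""
    return 3 * G_SURFACE / (4 * math.pi * R_EARTH * rho)

g_hutton = g_const_from_density(4500.0)  # Hutton (1778): 4.5 g/cm^3 = 4500 kg/m^3
```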
The first direct measurement of gravitational attraction between two bodies in the laboratory was performed in 1798, seventy-one years after Newton's death, by Henry Cavendish. He determined a value for G implicitly, using a torsion balance invented by the geologist Rev. John Michell (1753). He used a horizontal torsion beam with lead balls whose inertia (in relation to the torsion constant) he could tell by timing the beam's oscillation. Their faint attraction to other balls placed alongside the beam was detectable by the deflection it caused. In spite of the experimental design being due to Michell, the experiment is now known as the Cavendish experiment for its first successful execution by Cavendish.
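A highly simplified model of the torsion balance (point masses, small deflection, attraction from the nearest large ball only) relates G to measurable quantities as G = 2π2Lr2θ/(MT2), where L is the beam length, r the ball separation, θ the equilibrium deflection angle, M the large-ball mass and T the torsional oscillation period. The apparatus numbers below are hypothetical round figures, not Cavendish's actual data; the sketch is a round-trip consistency check of the idealized formula.

```python
import math

def g_from_torsion_balance(L, r, theta, M, T):
    """Idealized Cavendish relation: G = 2*pi^2 * L * r^2 * theta / (M * T^2)."""
    return 2 * math.pi**2 * L * r**2 * theta / (M * T**2)

G_MODERN = 6.674e-11                  # rounded modern value, m^3 kg^-1 s^-2
L, r, M, T = 1.8, 0.23, 158.0, 900.0  # m, m, kg, s (hypothetical apparatus)

# Deflection the modern G would produce, then recover G from that deflection.
theta = G_MODERN * M * T**2 / (2 * math.pi**2 * L * r**2)  # a few milliradians
g_recovered = g_from_torsion_balance(L, r, theta, M, T)
```

The tiny deflection angle (a few milliradians here) is why the measurement was, and remains, so delicate.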
Cavendish's stated aim was the "weighing of Earth", that is, determining the average density of Earth and the Earth's mass. His result, ρ⊕ = 5.448(33) g·cm−3, corresponds to a value of G = 6.74(4)×10−11 m3⋅kg−1⋅s−2.
Cavendish's result is surprisingly accurate, perhaps by chance, about 1% above the modern value (and thus still outside the cited standard uncertainty of 0.6%).
The accuracy of the measured value of G has increased only modestly since the original Cavendish experiment. G is quite difficult to measure because gravity is much weaker than other fundamental forces, and an experimental apparatus cannot be separated from the gravitational influence of other bodies. Furthermore, gravity has no established relation to other fundamental forces, so it does not appear possible to calculate it indirectly from other constants that can be measured more accurately, as is done in some other areas of physics.
Cavendish's experiment was first repeated by Ferdinand Reich (1838, 1842, 1853), who found a value of 5.5832(149) g·cm−3, which is actually worse than Cavendish's result, differing from the modern value by 1.5%. Cornu and Baille (1873) found 5.56 g·cm−3.
Cavendish's experiment proved to result in more reliable measurements than pendulum experiments of the "Schiehallion" (deflection) type or "Peruvian" (period as a function of altitude) type. Pendulum experiments still continued to be performed, by Robert von Sterneck (1883, results between 5.0 and 6.3 g/cm3) and Thomas Corwin Mendenhall (1880, 5.77 g/cm3).
Cavendish's result was first improved upon by John Henry Poynting (1891), who published a value of 5.49(3) g·cm−3, differing from the modern value by 0.2%, but compatible with the modern value within the cited standard uncertainty of 0.55%. In addition to Poynting, measurements were made by C. V. Boys (1895) and Carl Braun (1897), with compatible results suggesting G = 6.66(1)×10−11 m3⋅kg−1⋅s−2. The modern notation involving the constant G was introduced by Boys in 1894 and became standard by the end of the 1890s, with values usually cited in the cgs system. Richarz and Krigar-Menzel (1898) attempted a repetition of the Cavendish experiment using 100,000 kg of lead for the attracting mass. The precision of their result of 6.683(11)×10−11 m3⋅kg−1⋅s−2 was, however, of the same order of magnitude as the other results at the time.
Arthur Stanley Mackenzie in The Laws of Gravitation (1899) reviews the work done in the 19th century. Poynting is the author of the article "Gravitation" in the Encyclopædia Britannica Eleventh Edition (1911). Here, he cites a value of G = 6.66×10−11 m3⋅kg−1⋅s−2 with an uncertainty of 0.2%.
Published values of G derived from high-precision measurements since the 1950s have remained compatible with Heyl (1930), but within the relative uncertainty of about 0.1% (or 1,000 ppm) have varied rather broadly, and it is not entirely clear if the uncertainty has been reduced at all since the 1942 measurement. Some measurements published in the 1980s to 2000s were, in fact, mutually exclusive. Establishing a standard value for G with a standard uncertainty better than 0.1% has therefore remained rather speculative.
By 1969, the value recommended by the National Institute of Standards and Technology (NIST) was cited with a standard uncertainty of 0.046% (460 ppm), lowered to 0.012% (120 ppm) by 1986. But the continued publication of conflicting measurements led NIST to radically increase the standard uncertainty in the 1998 recommended value, by a factor of 12, to a standard uncertainty of 0.15%, larger than the one given by Heyl (1930).
The value was again lowered in 2002 and 2006, but once again raised, by a more conservative 20%, in 2010, matching the standard uncertainty of 120 ppm published in 1986. For the 2014 update, CODATA reduced the uncertainty to 46 ppm, less than half the 2010 value, and one order of magnitude below the 1969 recommendation.
The following table shows the NIST recommended values published since 1969:

|year||G (10−11 m3⋅kg−1⋅s−2)||standard relative uncertainty|
|1969||6.6732(31)||460 ppm|
|1986||6.67259(85)||120 ppm|
|1998||6.673(10)||1,500 ppm|
|2002||6.6742(10)||150 ppm|
|2006||6.67428(67)||100 ppm|
|2010||6.67384(80)||120 ppm|
|2014||6.67408(31)||46 ppm|
In the January 2007 issue of Science, Fixler et al. described a new measurement of the gravitational constant by atom interferometry, reporting a value of G = 6.693(34)×10−11 m3⋅kg−1⋅s−2. An improved cold atom measurement by Rosi et al. was published in 2014 of G = 6.67191(99)×10−11 m3 kg−1 s−2, about 325 ppm below the recommended 2014 CODATA value, with non-overlapping standard uncertainty intervals.
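The size of that discrepancy can be verified directly from the two 2014 values; the sketch below computes the relative deviation of the Rosi et al. result from the CODATA 2014 recommendation (taken here as 6.67408(31)×10−11) and checks that the one-standard-uncertainty intervals do not overlap.

```python
# CODATA 2014 recommended value and the Rosi et al. (2014) cold-atom result
G_CODATA_2014, U_CODATA_2014 = 6.67408e-11, 0.00031e-11
G_ROSI_2014, U_ROSI_2014 = 6.67191e-11, 0.00099e-11

# Relative deviation in parts per million
ppm_below = (G_CODATA_2014 - G_ROSI_2014) / G_CODATA_2014 * 1e6

# Do the 1-sigma intervals fail to overlap?
intervals_disjoint = (G_ROSI_2014 + U_ROSI_2014) < (G_CODATA_2014 - U_CODATA_2014)
```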
As of 2018, efforts to re-evaluate the conflicting results of measurements are underway, coordinated by NIST, notably a repetition of the experiments reported by Quinn et al. (2013).
A controversial 2015 study of some previous measurements of G, by Anderson et al., suggested that most of the mutually exclusive values in high-precision measurements of G can be explained by a periodic variation. The variation was measured as having a period of 5.9 years, similar to that observed in length-of-day (LOD) measurements, hinting at a common physical cause which is not necessarily a variation in G. A response was produced by some of the original authors of the G measurements used in Anderson et al. This response notes that Anderson et al. not only omitted measurements but also used the time of publication rather than the time the experiments were performed. A plot with estimated times of measurement, obtained by contacting the original authors, seriously degrades the length-of-day correlation. Likewise, the data collected over a decade by Karagioz and Izmailov show no correlation with length-of-day measurements. As such, the variations in G most likely arise from systematic measurement errors which have not properly been accounted for. Under the assumption that the physics of type Ia supernovae are universal, analysis of observations of 580 type Ia supernovae has shown that the gravitational constant has varied by less than one part in ten billion per year over the last nine billion years according to Mould et al. (2014).
- Gravity of Earth
- Standard gravity
- Standard gravitational parameter
- Gaussian gravitational constant
- Orbital mechanics
- Escape velocity
- Gravitational coupling constant
- Gravitational potential
- Gravitational wave
- Strong gravitational constant
- Dirac large numbers hypothesis
- Accelerating universe
- Lunar Laser Ranging experiment
- Cosmological constant
- For example, the gravitational force between an electron and proton one meter apart is approximately 10−67 N, whereas the electromagnetic force between the same two particles is approximately 10−28 N. The electromagnetic force in this example is some 39 orders of magnitude (i.e. 1039) greater than the force of gravity—roughly the same ratio as the mass of the Sun to a microgram.
- "Newtonian constant of gravitation" is the name introduced for G by Boys (1894). Use of the term by T.E. Stern (1928) was misquoted as "Newton's constant of gravitation" in Pure Science Reviewed for Profound and Unsophisticated Students (1930), in what is apparently the first use of that term. Use of "Newton's constant" (without specifying "gravitation" or "gravity") is more recent, as "Newton's constant" was also used for the heat transfer coefficient in Newton's law of cooling, but has by now become quite common, e.g. Calmet et al, Quantum Black Holes (2013), p. 93; P. de Aquino, Beyond Standard Model Phenomenology at the LHC (2013), p. 3. The name "Cavendish gravitational constant", sometimes "Newton-Cavendish gravitational constant", appears to have been common in the 1970s to 1980s, especially in (translations from) Soviet-era Russian literature, e.g. Sagitov (1970 ), Soviet Physics: Uspekhi 30 (1987), Issues 1-6, p. 342 [etc.]. "Cavendish constant" and "Cavendish gravitational constant" is also used in Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, "Gravitation", (1973), 1126f. Colloquial use of "Big G", as opposed to "little g" for gravitational acceleration dates to the 1960s (R.W. Fairbridge, The encyclopedia of atmospheric sciences and astrogeology, 1967, p. 436; note use of "Big G's" vs. "little g's" as early as the 1940s of the Einstein tensor Gμν vs. the metric tensor gμν, Scientific, medical, and technical books published in the United States of America: a selected list of titles in print with annotations : supplement of books published 1945-1948, Committee on American Scientific and Technical Bibliography National Research Council, 1950, p. 26).
- Cavendish determined the value of G indirectly, by reporting a value for the Earth's mass, or the average density of Earth, as 5.448 g·cm−3
- Gundlach, Jens H.; Merkowitz, Stephen M. (2002-12-23). "University of Washington Big G Measurement". Astrophysics Science Division. Goddard Space Flight Center.
Since Cavendish first measured Newton's Gravitational constant 200 years ago, "Big G" remains one of the most elusive constants in physics
- Halliday, David; Resnick, Robert; Walker, Jearl. Fundamentals of Physics (8th ed.). p. 336. ISBN 978-0-470-04618-0.
- Grøn, Øyvind; Hervik, Sigbjorn (2007). Einstein's General Theory of Relativity: With Modern Applications in Cosmology (illustrated ed.). Springer Science & Business Media. p. 180. ISBN 978-0-387-69200-5.
- Einstein, Albert (1916). "The Foundation of the General Theory of Relativity". Annalen der Physik. 354 (7): 769. Bibcode:1916AnP...354..769E. doi:10.1002/andp.19163540702. Archived from the original (PDF) on 2012-02-06.
- Adler, Ronald; Bazin, Maurice; Schiffer, Menahem (1975). Introduction to General Relativity (2nd ed.). New York: McGraw-Hill. p. 345. ISBN 0-07-000423-4.
- Gillies, George T. (1997). "The Newtonian gravitational constant: recent measurements and related studies". Reports on Progress in Physics. 60 (2): 151–225. Bibcode:1997RPPh...60..151G. doi:10.1088/0034-4885/60/2/001.. A lengthy, detailed review. See Figure 1 and Table 2 in particular.
- Mohr, Peter J.; Newell, David B.; Taylor, Barry N. (2015-07-21). "CODATA Recommended Values of the Fundamental Physical Constants: 2014". Reviews of Modern Physics. 88. arXiv: [physics.atom-ph]. Bibcode:2016RvMP...88c5009M. doi:10.1103/RevModPhys.88.035009.
- "Newtonian constant of gravitation G". CODATA, NIST.
- M ≈ 1.000003040433 M☉, so that M = M☉ can be used for accuracies of five or fewer significant digits.
- "Astrodynamic Constants". NASA/JPL. 27 February 2009. Retrieved 27 July 2009.
- "Numerical Standards for Fundamental Astronomy". maia.usno.navy.mil. IAU Working Group. Retrieved 31 October 2017., citing Ries, J. C., Eanes, R. J., Shum, C. K., and Watkins, M. M., 1992, "Progress in the Determination of the Gravitational Coefficient of the Earth," Geophys. Res. Lett., 19(6), pp. 529-531. Ries, J. C.; Eanes, R. J.; Shum, C. K.; Watkins, M. M. (20 March 1992). "Progress in the determination of the gravitational coefficient of the Earth". Geophysical Research Letters. 19 (6): 529–531. Bibcode:1992GeoRL..19..529R. doi:10.1029/92GL00259. Retrieved 5 February 2016.
- Davies, R.D. (1985). "A Commemoration of Maskelyne at Schiehallion". Quarterly Journal of the Royal Astronomical Society. 26 (3): 289–294. Bibcode:1985QJRAS..26..289D.
- "Sir Isaac Newton thought it probable, that the mean density of the earth might be five or six times as great as the density of water; and we have now found, by experiment, that it is very little less than what he had thought it to be: so much justness was even in the surmises of this wonderful man!" Hutton (1778), p. 783
- Poynting, J.H. (1913). The Earth: its shape, size, weight and spin. Cambridge. pp. 50–56.
- Hutton, C. (1778). "An Account of the Calculations Made from the Survey and Measures Taken at Schehallien". Philosophical Transactions of the Royal Society. 68 (0). doi:10.1098/rstl.1778.0034.
- Boys 1894, p. 330. In this lecture before the Royal Society, Boys introduces G and argues for its acceptance. See: Poynting 1894, p. 4; MacKenzie 1900, p. vi.
- Published in Philosophical Transactions of the Royal Society (1798); reprint: Cavendish, Henry (1798). "Experiments to Determine the Density of the Earth". In MacKenzie, A. S., Scientific Memoirs Vol. 9: The Laws of Gravitation. American Book Co. (1900), pp. 59–105.
- 2014 CODATA value: 6.674×10−11 m3⋅kg−1⋅s−2.
- Brush, Stephen G.; Holton, Gerald James (2001). Physics, the human adventure: from Copernicus to Einstein and beyond. New Brunswick, NJ: Rutgers University Press. p. 137. ISBN 0-8135-2908-5. Lee, Jennifer Lauren (November 16, 2016). "Big G Redux: Solving the Mystery of a Perplexing Result". NIST.
- Poynting, John Henry (1894). The Mean Density of the Earth. London: Charles Griffin. pp. 22–24.
- F. Reich, "On the Repetition of the Cavendish Experiments for Determining the mean density of the Earth", Philosophical Magazine 12: 283-284.
- Mackenzie (1899), p. 125.
- A.S. Mackenzie, The Laws of Gravitation (1899), 127f.
- C.V. Boys, Phil. Trans. Roy. Soc. A. Pt. 1. (1895).
- Carl Braun, Denkschriften der k. Akad. d. Wiss. (Wien), math. u. naturwiss. Classe, 64 (1897). Braun (1897) quoted an optimistic standard uncertainty of 0.03%, 6.649(2)×10−11 m3⋅kg−1⋅s−2, but his result was significantly worse than the 0.2% feasible at the time.
- Sagitov, M. U., "Current Status of Determinations of the Gravitational Constant and the Mass of the Earth", Soviet Astronomy, Vol. 13 (1970), 712-718, translated from Astronomicheskii Zhurnal Vol. 46, No. 4 (July-August 1969), 907-915 (table of historical experiments p. 715).
- Mackenzie, A. Stanley, The laws of gravitation; memoirs by Newton, Bouguer and Cavendish, together with abstracts of other important memoirs, American Book Company (1900).
- P. R. Heyl, A redetermination of the constant of gravitation, National Bureau of Standards Journal of Research 5 (1930), 1243–1290.
- P. R. Heyl and P. Chrzanowski (1942), cited after Sagitov (1969:715).
- Mohr, Peter J.; Taylor, Barry N. (January 2005). "CODATA recommended values of the fundamental physical constants: 2002" (PDF). Reviews of Modern Physics. 77 (1): 1–107. Bibcode:2005RvMP...77....1M. doi:10.1103/RevModPhys.77.1. Retrieved 2006-07-01.. Section Q (pp. 42–47) describes the mutually inconsistent measurement experiments from which the CODATA value for G was derived.
- "CODATA recommended values of the fundamental physical constants: 2010" (PDF). Reviews of Modern Physics. 84: 1527–1605. 13 November 2012. arXiv: . Bibcode:2012RvMP...84.1527M. doi:10.1103/RevModPhys.84.1527.
- B. N. Taylor, W. H. Parker, and D. N. Langenberg, Rev. Mod. Phys. 41(3), 375-496 (1969)
- E. R. Cohen and B. N. Taylor, J. Phys. Chem. Ref. Data 2(4) 663-734 (1973), p. 699.
- E. R. Cohen and B. N. Taylor, Rev. Mod. Phys. 59(4) 1121-1148 (1987)
- P. J. Mohr and B. N. Taylor, Rev. Mod. Phys. 72(2), 351-495 (2000)
- P. J. Mohr and B. N. Taylor, Rev. Mod. Phys. 77(1), 1-107 (2005)
- P. J. Mohr, B. N. Taylor, and D. B. Newell, J. Phys. Chem. Ref. Data 37(3), 1187-1284 (2008)
- P. J. Mohr, B. N. Taylor, and D. B. Newell, J. Phys. Chem. Ref. Data 41 (2012)
- P. J. Mohr, D. B. Newell, and B. N. Taylor, J. Phys. Chem. Ref. Data 45 (2016)
- Fixler, J. B.; Foster, G. T.; McGuirk, J. M.; Kasevich, M. A. (2007-01-05). "Atom Interferometer Measurement of the Newtonian Constant of Gravity". Science. 315 (5808): 74–77. Bibcode:2007Sci...315...74F. doi:10.1126/science.1135459. PMID 17204644.
- Rosi, G., Sorrentino, F., Cacciapuoti, L., Prevedelli, M. & Tino, G. M., "Precision measurement of the Newtonian gravitational constant using cold atoms ", Nature 510 (2014), 518–521. Schlamminger, Stephan (18 June 2014). "Fundamental constants: A cool way to measure big G". Nature. 510: 478–480. Bibcode:2014Natur.510..478S. doi:10.1038/nature13507. PMID 24965646.
- C. Rothleitner, S. Schlamminger, "Invited Review Article: Measurements of the Newtonian constant of gravitation, G", Review of Scientific Instruments 88, 111101 (2017) doi:10.1063/1.4994619. "However, re-evaluating or repeating experiments that have already been performed may provide insights into hidden biases or dark uncertainty. NIST has the unique opportunity to repeat the experiment of Quinn et al. with an almost identical setup. By mid-2018, NIST researchers will publish their results and assign a number as well as an uncertainty to their value." (referencing T. Quinn, H. Parks, C. Speake, and R. Davis, "Improved determination of G using two methods," Phys. Rev. Lett. 111, 101102 (2013).) The 2018 experiment was described by C. Rothleitner, "Newton's Gravitational Constant 'Big' G – A proposed Free-fall Measurement", CODATA Fundamental Constants Meeting, Eltville, 5 February 2015.
- Anderson, J. D.; Schubert, G.; Trimble, V.; Feldman, M. R. (April 2015). "Measurements of Newton's gravitational constant and the length of day" (PDF). EPL. 110: 10002. arXiv: . Bibcode:2015EL....11010002A. doi:10.1209/0295-5075/110/10002.
- Schlamminger, S.; Gundlach, J. H.; Newman, R. D. (2015). "Recent measurements of the gravitational constant as a function of time". Physical Review D. 91 (12). arXiv: . Bibcode:2015PhRvD..91l1101S. doi:10.1103/PhysRevD.91.121101. ISSN 1550-7998.
- Karagioz, O. V.; Izmailov, V. P. (1996). "Measurement of the gravitational constant with a torsion balance". Measurement Techniques. 39 (10): 979–987. doi:10.1007/BF02377461. ISSN 0543-1972.
- Mould, J.; Uddin, S. A. (2014-04-10). "Constraining a Possible Variation of G with Type Ia Supernovae". Publications of the Astronomical Society of Australia. 31: e015. arXiv: . Bibcode:2014PASA...31...15M. doi:10.1017/pasa.2014.9.
- Standish., E. Myles (1995). "Report of the IAU WGAS Sub-group on Numerical Standards". In Appenzeller, I. Highlights of Astronomy. Dordrecht: Kluwer Academic Publishers. (Complete report available online: PostScript; PDF. Tables from the report also available: Astrodynamic Constants and Parameters)
- Gundlach, Jens H.; Merkowitz, Stephen M. (2000). "Measurement of Newton's Constant Using a Torsion Balance with Angular Acceleration Feedback". Physical Review Letters. 85 (14): 2869–2872. arXiv: . Bibcode:2000PhRvL..85.2869G. doi:10.1103/PhysRevLett.85.2869. PMID 11005956.
- Newtonian constant of gravitation G at the National Institute of Standards and Technology References on Constants, Units, and Uncertainty
- The Controversy over Newton's Gravitational Constant — additional commentary on measurement problems | <urn:uuid:f8bbfc2e-9bac-4fa2-bf64-15045e8d00bc> | 4 | 6,938 | Knowledge Article | Science & Tech. | 71.234188 | 95,499,717 |
Or do record-breaking temperatures prove global warming is happening? No, they don’t. But that’s what the alarmists will claim every time there are record-breaking temperatures. And it doesn’t have to be in the summer, either. Funny, though: record-breaking cold days don’t count against global warming. Besides that, record-breaking high temps don’t mean anything other than that we have not been recording temperatures for long enough.
What is the definition of a heat wave? There really isn’t a formal one, but Environment Canada says a heat wave typically means more than two days in a row above 32°C. Thus a “heat wave” consists of a prolonged stretch of days when the summer TMax is at its highest.
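Taken at face value, that rule of thumb turns heat-wave detection into a simple run-length scan over a daily maximum-temperature series. A sketch (the 32°C threshold and the three-day minimum run are the assumptions quoted above, not an official standard):

```python
def heat_wave_days(tmax, threshold=32.0, min_run=3):
    """Return indices of days belonging to a heat wave, defined here as a
    run of more than two consecutive days with TMax above the threshold."""
    waves, run = [], []
    for i, t in enumerate(tmax + [float("-inf")]):  # sentinel flushes the last run
        if t > threshold:
            run.append(i)
        else:
            if len(run) >= min_run:   # only runs of 3+ days count
                waves.extend(run)
            run = []
    return waves

# Example: days 2-4 form a heat wave; the isolated hot day 7 does not.
temps = [30, 31, 33, 34, 35, 29, 28, 36, 30]
```

Here `heat_wave_days(temps)` returns `[2, 3, 4]`: the single hot day is excluded because it does not form a long enough run.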
In 2003 there was a heat wave in Europe. The alarmists went nuts. Even New Scientist said this:
At least 35,000 people died as a result of the record heatwave that scorched Europe in August 2003, says…
Phosphorylation is a chemical process in which a phosphate (PO4) group is added to a protein or other organic molecule such as glucose; the phosphorylation of glucose, for example, produces glucose monophosphate. Phosphorylation activates or deactivates many protein enzymes, triggering or preventing the mechanisms of diseases such as cancer and diabetes.
Protein phosphorylation in particular plays a significant role in a wide range of cellular processes. Its prominent role in biochemistry is the subject of a very large body of research. Phosphorylation is carried out through the action of enzymes known as phosphotransferases or kinases. | <urn:uuid:9215cd6f-f94c-4d61-b4a7-a2a02705f24e> | 3.1875 | 140 | Knowledge Article | Science & Tech. | 13.010553 | 95,499,734 |
Study Connects Weight to Local Weather Conditions
Adélie penguins are an indigenous species of the West Antarctic Peninsula (WAP), one of the most rapidly warming areas on Earth. Since 1950, the average annual temperature in the Antarctic Peninsula has increased 2 degrees Celsius on average, and 6 degrees Celsius during winter.
As the WAP climate warms, it is changing from a dry, polar system to a warmer, sub-polar system with more rain.
University of Delaware oceanographers recently reported a connection between local weather conditions and the weight of Adélie penguin chicks in an article in Marine Ecology Progress Series, a top marine ecology journal.
Penguin chick weight at the time of fledging, when chicks leave the nest, is considered an important indicator of food availability, parental care and environmental conditions at a penguin colony. A higher chick mass gives the chick a better likelihood of surviving and propagating future generations.
In the study, Megan Cimino, a UD doctoral student in the College of Earth, Ocean, and Environment and the paper’s lead author, compared data from 1987 to 2011 related to the penguin’s diet, the weather and the large-scale climate indices to see if they could correlate year-to-year penguin chick weight with a particular factor. She also evaluated samples from the penguin’s diet to determine what they were eating.
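In code, that kind of screening amounts to correlating one yearly series against several candidate explanatory variables and ranking them. A minimal numpy sketch (the years, values, and variable names below are invented placeholders, not the study's data):

```python
import numpy as np

# Hypothetical yearly series: mean fledgling chick mass alongside
# candidate explanatory variables (illustrative numbers only).
years = np.arange(1987, 1992)
chick_mass_g = np.array([3150, 3020, 3230, 2980, 3100], dtype=float)
candidates = {
    "wind_speed":   np.array([6.1, 7.4, 5.8, 8.0, 6.5]),
    "air_temp":     np.array([-1.2, -2.0, -0.8, -2.5, -1.5]),
    "krill_length": np.array([42.0, 40.5, 43.1, 41.0, 42.2]),
}

# Screen each candidate by its Pearson correlation with chick mass.
corrs = {name: float(np.corrcoef(x, chick_mass_g)[0, 1])
         for name, x in candidates.items()}

# The variable with the largest absolute correlation is the lead suspect.
best = max(corrs, key=lambda k: abs(corrs[k]))
```

In practice a study like this would also control for confounding between variables; a raw correlation screen is only the first step.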
“The ability of a penguin species to progress is dependent on the adults’ investment in their chicks,” said Matthew Oliver, an associate professor of marine science and policy and principal investigator on the project. “Penguins do a remarkable job of finding food for their chicks in the ocean’s dynamic environment, so we thought that the type and size distribution of food sources would impact chick weight.”
Impact of weather and climate
Instead, the study revealed that weather and overall atmospheric climate seemed to affect weights the most. In particular, local weather — including high winds, cold temperatures and precipitation, such as rain or humidity — had the largest impact on penguin chick weight variations over time. For example, westerly wind and air temperature can cause a 7-ounce change in average chick weights, as compared to a 3.5-ounce change caused by wind speed and precipitation. A 7-ounce decrease in chick weight could be the difference between a surviving and non-surviving chick.
Cimino explained that while penguins do build nests, they have no way of building nests that protect the chicks from the elements. This leaves penguin chicks unprotected and exposed while adult penguins are away from the nest. Precipitation, while not considered a key variable, can cause chick plumage to become damp or wet and is generally a major factor in egg and chick mortality and slow growth.
“It’s likely that weather variations are increasing the chicks’ thermoregulatory costs; and when they are cold and wet, they have to expend more energy to keep warm,” she said.
The wind can also affect the marine environment, she continued, mixing up the water column and dispersing the krill, a penguin’s main source of food, which may cause parent penguins to remain at sea for longer periods of time and cause chicks to be fed less frequently.
“This is an interesting study, because it calls into question what happens to an ecosystem when you change climate quickly: Is it just large-scale averages that change the ecosystem or do particular daily interactions also contribute to the change,” Oliver said.
Other co-authors on the paper include William Fraser and Donna Patterson-Fraser, from the Polar Oceans Research Group, and Vincent Saba, from NOAA National Marine Fisheries Service. Fraser and Patterson have been collecting data on Adélie penguins since the late 1970s, creating a strong fundamental data set that includes statistics collected over decades, even before rapid warming was observed.
By correlating the relevant environmental variables through analysis of data from sources such as satellite observations and weather stations, the researchers were able to scientifically validate a potential cause for chick weight variation over time. Using big data analyses to statistically sift through the possible causes allowed the researchers to take a forensic approach to understanding the problem.
“Climate change strikes at the weak point in the cycle or life history for each different species,” Oliver said. “The Adélie penguin is incredibly adaptive to the marine environment, but climate ends up wreaking havoc on the terrestrial element of the species’ history, an important lesson for thinking about how we, even other species, are connected to the environment.”
Cimino will return to Antarctica next month to begin working with physical oceanographers from University of Alaska and Rutgers, through funding from the National Science Foundation. Using robotics, she will investigate what parent penguins actually do in the ocean in order to gain a broader perspective on how the penguins use the marine environment. In particular, she hopes to explore other possible contributing factors to chick weight variation such as parental foraging components that were not part of this study.
“It’s important for us to understand what’s going on, especially as conditions are getting warmer and wetter, because it may give us an idea of what may happen to these penguins in the future,” Cimino said.
The work reported here is supported in part through funds from the National Marine Fisheries Service, NASA and the National Science Foundation.
Donna O'Brien | newswise
We present a study of the galaxy population predicted by hydrodynamical simulations of galaxy clusters. These simulations, which are based on the GADGET-2 TREE + SPH code, include gas cooling, star formation, a detailed treatment of stellar evolution and chemical enrichment, as well as supernova energy feedback in the form of galactic winds. As such, they can be used to extract the spectrophotometric properties of the simulated galaxies, which are identified as clumps in the distribution of star particles. Simulations have been carried out for a representative set of 19 cluster-sized haloes, having mass M200 in the range 5 × 10^13 to 1.8 × 10^15 h^−1 Msolar. All simulations have been performed for two choices of the stellar initial mass function (IMF), namely using a standard Salpeter IMF with power-law index x = 1.35, and a top-heavy IMF with x = 0.95. In general, we find that several of the observational properties of the galaxy population in nearby clusters are reproduced fairly well by simulations. A Salpeter IMF is successful in accounting for the slope and the normalization of the colour-magnitude relation for the bulk of the galaxy population. In contrast, the top-heavy IMF produces galaxies that are too red, as a consequence of their exceedingly large metallicity. Simulated clusters have a relation between mass and optical luminosity which generally agrees with observations, both in normalization and in slope. Also in keeping with observational results, galaxies are generally bluer, younger and more star-forming in the cluster outskirts. However, we find that our simulated clusters have a total number of galaxies which is significantly smaller than the observed one, falling short by about a factor of 2-3. We have verified that this problem does not have an obvious numerical origin, such as lack of mass and force resolution. Finally, the brightest cluster galaxies are always predicted to be too massive and too blue, when compared to observations.
This is due to gas overcooling, which takes place in the core regions of simulated clusters, even in the presence of the rather efficient supernova feedback used in our simulations.
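The enrichment difference between the two IMFs can be illustrated with a back-of-the-envelope integral: for an IMF dN/dm ∝ m^−(1+x), a flatter slope (x = 0.95) locks a much larger fraction of the stellar mass into stars above ~8 Msolar, the usual progenitors of core-collapse supernovae and hence of metals. A sketch (the 0.1–100 Msolar mass limits and the 8 Msolar cut are illustrative assumptions, not values from the paper):

```python
def mass_fraction_above(m_cut, x, m_lo=0.1, m_hi=100.0):
    """Fraction of stellar mass above m_cut for an IMF dN/dm ∝ m^-(1+x).
    The mass-weighted integrand is m * dN/dm ∝ m^-x."""
    def integral(a, b):          # ∫ m^-x dm from a to b (valid for x != 1)
        p = 1.0 - x
        return (b**p - a**p) / p
    return integral(m_cut, m_hi) / integral(m_lo, m_hi)

f_salpeter = mass_fraction_above(8.0, x=1.35)   # Salpeter slope: ~14%
f_topheavy = mass_fraction_above(8.0, x=0.95)   # top-heavy slope: ~41%
```

Roughly three times more mass ends up in supernova progenitors with the top-heavy slope, consistent with the over-enrichment the abstract describes.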
Boffins claim breakthrough in hovering robothopter experiments
Paper 'bugs' used subwoofer-powered wing flaps
Brainboxes in New York say they have made progress on one of the knottiest conundrums facing the technology of humankind today: that of constructing mechanical ornithopters able to fly – and specifically to hover – as well as insects can.
This wouldn't be of use at the scale of today's manned aircraft: nobody's that interested in building mighty 'thopter-craft to replace choppers or jets. Where flapping wings are demonstrably more efficient than airscrews or turbines is at the small scale, where the viscous forces in aerodynamics become more significant compared to inertial ones. Little insects, bats and some kinds of birds, despite the fact that they have very limited power output, can perform feats that aircraft of similar size – small unmanned choppers, for instance – struggle to mimic. In particular they can hover for long periods with great precision.
Military boffins have long sought to develop robotic aircraft which could operate in this low-Reynolds-number flight regime: there is in fact one such mini-ornithopter test vehicle flying. Nonetheless this kind of aerodynamics remains poorly understood.
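The regime split the article alludes to is captured by the Reynolds number, Re = vL/ν (flow speed times characteristic length over kinematic viscosity): low Re means viscous forces dominate, high Re means inertial forces do. A rough comparison, using illustrative sizes and speeds that are assumptions rather than figures from the article:

```python
def reynolds(length_m, speed_ms, kinematic_viscosity=1.5e-5):
    """Re = v * L / nu, with nu ~ 1.5e-5 m^2/s for air at room temperature."""
    return speed_ms * length_m / kinematic_viscosity

re_insect = reynolds(0.005, 1.0)     # ~3e2: viscous effects are significant
re_minicopter = reynolds(0.5, 10.0)  # ~3e5: firmly inertia-dominated
```

The three-orders-of-magnitude gap is why aerodynamic intuition from small helicopters transfers poorly to insect-scale ornithopters.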
Thus it was that Professor Jun Zhang and his colleagues found themselves experimenting on the dynamics of flapping-wing flight. Rather than building actual ornithopters (an expensive business: the military test minithopter has cost the Pentagon better than $4m so far), they used pyramid-shaped paper "bugs", kept airborne in a stream of blown air. The "bugs" were made to flap their wings rather cunningly, by using an ordinary audio subwoofer to create oscillations in the air column.
This led to several discoveries: one of them, counterintuitively, was that bugs weighted to be top-heavy actually flew more stably than bottom-heavy ones.
“It works somewhat like balancing a broomstick in your hand,” explains Zhang, boffin at New York uni's Courant Institute. “If it begins to fall to one side, you need to apply a force in this same direction to keep it upright.”
With the top-heavy paper flapcopters, this force is generated naturally as the bug tilts.
The new boffinry is set out in a paper published in the journal Physical Review Letters. We learn from the synopsis:
While the “bug” is far from a realistic analogy for an insect, the unsteady flow mechanisms revealed through these experiments can help address current disagreements among models that assess the intrinsic stability of flying insects. The next step could be to replace the pyramids with a mobile robot for a better simulation.
The full paper can be found here. ® | <urn:uuid:260f50e7-2517-4843-afe3-bcf8f2163bb1> | 3.421875 | 578 | News Article | Science & Tech. | 34.816667 | 95,499,764 |
There is increasing evidence that chemical cues play a pivotal role in host selection by the natural enemies of aphids. We use Vinson’s (1976) division of the host selection process into habitat location, host location and host acceptance for both parasitoids and predators and review what is known about the role of semiochemicals in aphid selection by natural enemies. For habitat location (i.e. detection of the host plant), volatiles emitted by plants after aphid attack have been described for a number of plant-aphid interactions. These synomones indicate not only the presence of an aphid host plant to the predator or parasitoid, but also the presence of aphids. Volatiles emitted from undamaged host plants are often attractive to aphid parasitoids, but less so for predators. Host location by the natural enemy on the food plant is guided by semiochemicals that mostly originate from the aphids, in particular aphid alarm pheromone, honeydew, or the smell of the aphid itself. Host acceptance is guided by contact chemicals for both predators and parasitoids. In parasitoids, host recognition may be based on visual cues or on contact chemicals on the aphid’s cuticle, whereas host acceptance is ultimately based on as yet unknown substances within the aphid’s hemolymph. While it appears that many predators and parasitoids are attracted to the same semiochemicals, synergistic and antagonistic interactions among chemical substances have only rarely been investigated. More research into model systems is needed, not only to identify important semiochemicals, but also to determine their range of attraction. Recent progress in the development of analytical techniques has created new opportunities to improve our understanding of the chemical ecology of aphid-natural enemy interactions in the coming years.
Why was March so cold? Blame Greenland.
You're not imagining it: March 2013 was chilly — the second-coldest March since 2000. The culprit is a stubborn mass of warm air over Greenland that blocked the jet stream.
Weather Underground / AP
Last month was a chilly one, ranking as the second-coldest March in the continental United States since 2000, according to the National Weather Service (NWS). The average temperature across the United States this March was also 13 degrees Fahrenheit (7.2 degrees Celsius) lower than in March 2012, and a late-winter blizzard broke snowfall records in many areas.
So, why has it been so cold?
The culprit is a stubborn, stationary mass of warm air over Greenland and the North Atlantic that has blocked the normal flow of air from west to east and south to north, said Greg Carbin, a meteorologist with the NWS' Storm Prediction Center. This flow of air, known as the jet stream, usually brings more warm air from the South as the Northern Hemisphere begins to heat up in the spring.
Obstinate air masses
This March, however, the mass of warm air — a high-pressure system that repels incoming weather systems — has redirected air currents and created a pattern of winds coming from the Northwest, blasting the eastern two-thirds of the United States with Arctic air, Carbin said.
"This obstinate mass of warm air over Greenland has redirected air currents like a rock in a stream," Carbin said.
However, the spring season hasn't been cold everywhere. In fact, the southwestern United States has been warmer than average, as the region has been unaffected by the blocking system in the North Atlantic, said Bob Henson, a meteorologist and science writer with the University Corporation for Atmospheric Research in Boulder, Colo.
Due, in part, to the cold, there have been fewer than 20 tornadoes in the United States this March, Carbin said. On average, March will see 76 twisters across the United States. Tornadoes depend on warm, moist air, which was scarce this past March, Carbin added.
Some research has suggested a link between a retreat of Arctic sea ice in a warming world and these high-pressure blocking systems, Carbin said.
As cold as March seemed, it was only the 59th coldest March since 1871, according to the Washington Post's Capital Weather Gang blog. In other words, the month's frigid temperatures do not disprove the observation that the world is heating up — and climate change could be playing a role in the development of the high-pressure system that has fueled the unusually cool month, Carbin said.
This March’s temperatures contrasted sharply with those seen in March 2012, which was the warmest March on record. In 2012, a mass of hot air developed over the middle of the country, causing unusually high temperatures and fueling an outbreak of tornadoes.
"They were almost like — no pun intended — polar opposites," Henson said.
The cold will not stick around forever, though, Carbin said. The high-pressure blocking system over Greenland is already beginning to weaken and looks likely to dissipate by the beginning of next week (April 7). This will bring temperatures back into the average range, he said — which means it may finally start to feel like spring.
Copyright 2013 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. | <urn:uuid:a669aa87-af67-4b23-ba96-cda453ac5681> | 3.375 | 758 | News Article | Science & Tech. | 53.120664 | 95,499,786 |
The research, to be published Friday in the journal Science, demonstrates for the first time that the reproductive success of steelhead trout, an important salmonid species, can drop by close to 40 percent per captive-reared generation. The study reflects data from experiments in Oregon’s Hood River.
“For fish to so quickly lose their ability to reproduce is stunning, it’s just remarkable,” said Michael Blouin, an OSU associate professor of zoology. “We were not surprised at the type of effect but at the speed. We thought it would be more gradual. If it weren’t our own data I would have difficulty believing the results.”
Fish reared in a hatchery for two generations had around half the reproductive fitness of fish reared for a single generation. The effects appear to be genetic, scientists said, and probably result from evolutionary pressures that quickly select for characteristics that are favored in the safe, placid world of the hatchery, but not in the comparatively hostile natural environment.
“Among other things, this study proves with no doubt that wild fish and hatchery fish are not the same, despite their appearances,” Blouin said. “Some have suggested that hatchery and wild fish are equivalent, but these data really put the final nail in the coffin of that argument.”
Even a few generations of domestication may have significant negative effects, and repeated use of captive-reared parents to supplement wild populations “should be carefully reconsidered,” the scientists said in their report.
Traditionally, salmon and steelhead hatcheries obtained their brood stock and eggs from fish that were repeatedly bred in hatcheries – they tended to be more docile, adapted well to surface feeding, and they thrived and survived at an 85-95 percent level in the safe hatchery environment.
More recently, some “supplementation” hatchery operations have moved to the use of wild fish for their brood stock, on the theory that their offspring would retain more ability to survive and reproduce in the wild, and perhaps help rebuild threatened populations.
“What happens to wild populations when they interbreed with hatchery fish still remains an open question,” Blouin said. “But there is good reason to be worried.”
Earlier work by researchers from OSU and the Oregon Department of Fish and Wildlife had suggested that first-generation hatchery fish from wild brood stock probably were not a concern, and indeed could provide a short-term boost to a wild population. But the newest findings call even that conclusion into question, he said.
“The problem is in the second and subsequent generations,” Blouin said. “There is now no question that using fish of hatchery ancestry to produce more hatchery fish quickly results in stocks that perform poorly in nature.”
Evolution can rapidly select for fish of certain types, experts say, because of the huge numbers of eggs and smolts produced and the relatively few fish that survive to adulthood. About 10,000 eggs can eventually turn into fewer than 100 adults, Blouin said, and these are genetically selected for whatever characteristics favored their survival. Offspring that inherit traits favored in hatchery fish can be at a serious disadvantage in the wild where they face risks such as an uncertain food supply and many predators eager to eat them.
Because of the intense pressures of natural selection, Blouin said, salmon and steelhead populations would probably quickly revert to their natural state once hatchery fish were removed.
However, just removing hatchery fish may not ensure the survival of wild populations. Studies such as this consider only the genetic background of fish and the effects of hatchery selection on those genetics, and not other issues that may also affect salmon or steelhead fisheries, such as pollution, stream degradation or climate change.
Blouin cautioned that these data should not be used as an indictment of all hatchery programs.
“Hatcheries can have a place in fisheries management,” he said. “The key issue is how to minimize their impacts on wild populations.”
Michael Blouin | EurekAlert!
Designing OOP Solutions: A Case Study
Now that you have analyzed the domain model of an OOP application, you are ready to transform the design into an actual implementation. The next part of this book will introduce you to the Visual Basic language. You will look at the .NET Framework and see how Visual Basic applications are built on top of the framework. You will be introduced to working in the Visual Studio IDE and become familiar with the syntax of the Visual Basic language. The next section will also demonstrate the process of implementing OOP constructs such as class structures, object instantiation, inheritance, and polymorphism in the Visual Basic .NET language. You will revisit the case study introduced in this chapter in Chapter 10, at which time you will look at transforming the application design into actual implementation code.
Keywords: Noun Phrase, Class Diagram, Sequence Diagram, Activity Diagram, Department Manager
Photographing the Phases of the Moon
Part of the Patrick Moore's Practical Astronomy Series book series (PATRICKMOORE)
Galileo, with his first telescope, saw that the Moon was not the perfect ball assumed by many. He noted, through his less than perfect lenses, dark and light areas and some large craters.
Keywords: Solar Eclipse, Direct Sunlight, Lunar Cycle, Large Crater, Lunar Eclipse
© Springer Science+Business Media, LLC 2011 | <urn:uuid:3ffcf603-f526-4dae-ae81-dcbb2868977e> | 2.765625 | 128 | Truncated | Science & Tech. | 36.778185 | 95,499,808 |
From the Labs: Biotechnology
New publications, experiments and breakthroughs in biotechnology–and what they mean.
Blind Mice See the Light
Researchers engineer sight into a broken visual circuit
Source: “Light-Activated Channels Targeted to ON Bipolar Cells Restore Visual Function in Retinal Degeneration,” Botond Roska et al., Nature Neuroscience 11: 667–675
Results: Blind mice that had been genetically engineered to produce a light-sensitive protein in their retinas developed a rudimentary sense of vision. The mice responded to moving patterns, displaying an ability to resolve fine visual details about half as well as normal mice.
Why it matters: People with macular degeneration or retinitis pigmentosa, two leading causes of blindness in the United States, lose vision when photoreceptor cells degenerate. The new results raise the possibility of a therapy that would enable their eyes to detect and respond to light even in the absence of photoreceptors, partially restoring sight.
Methods: Researchers inserted a gene for a light-sensitive protein found in algae into the retinas of mice that lacked photoreceptors. Embedded in the membranes of retinal cells that normally relay signals from photoreceptors to the brain, the protein acts as a channel that opens when hit with light. That allows positively charged ions to flood into the cells, triggering a signal that ultimately reaches the brain.
Next steps: The cells engineered to produce the light-sensitive protein normally turn on in response to light. The researchers would like to apply their approach to cells that shut off in the presence of light, adding another layer of complexity to the restored visual system. But they must first find a way to deliver a second light-sensitive protein specifically to those cells.
Cells Show Promise for Parkinson's
Brain cells developed from skin cells help alleviate symptoms in rats
Source: “Neurons Derived from Reprogrammed Fibroblasts Functionally Integrate into the Fetal Brain and Improve Symptoms of Rats with Parkinson's Disease,” Rudolf Jaenisch et al., Proceedings of the National Academy of Sciences 105: 5856–5861; published online April 7, 2008
Results: Skin cells reprogrammed to act as stem cells differentiated in culture into neural stem cells. Transplanted into the brains of rodents, they were integrated into the existing brain circuitry and became functioning neurons. The reprogrammed cells, which are known as induced pluripotent stem cells, also improved symptoms in rats modeling Parkinson’s disease.
Why it matters: Animal and human studies suggest that replacing the dopamine-producing neurons damaged in Parkinson’s can treat the disease. But finding a source of such cells in humans has been problematic. Embryonic stem cells, which can give rise to neurons, are one potential source. But taking cells from human embryos is controversial, and embryonic stem cells are difficult to obtain. Working with reprogrammed cells might prove easier than working with embryonic stem cells.
Methods: In a dish, researchers transformed mouse skin cells into undifferentiated cells by inducing them to express four genes; previous studies had shown that those genes were able to reset the cell to its embryonic state. Then they used a previously identified set of chemicals to prompt those cells to differentiate into neurons. The cells were labeled with a fluorescent marker and transplanted into the brains of fetal mice, where they appeared to integrate into the brain as the mice grew to adulthood.
Researchers also transplanted reprogrammed cells into the brains of rats given a chemical toxin to knock out their dopamine-producing cells. The transplants repaired a motor dysfunction evident in these animals.
Next steps: The scientists are now trying to repeat the experiments with human cells. Once they develop human dopamine neurons, they will transplant them into rodents to see if they behave like the reprogrammed mouse cells. The researchers also aim to determine whether neurons derived from induced pluripotent stem cells are as stable as those derived from embryonic stem cells.
In computer science, a deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. Deterministic algorithms are by far the most studied and familiar kind of algorithm, as well as one of the most practical, since they can be run on real machines efficiently.
Formally, a deterministic algorithm computes a mathematical function; a function has a unique value for any input in its domain, and the algorithm is a process that produces this particular value as output.
Deterministic algorithms can be defined in terms of a state machine: a state describes what a machine is doing at a particular instant in time. State machines pass in a discrete manner from one state to another. Just after we enter the input, the machine is in its initial state or start state. If the machine is deterministic, this means that from this point onwards, its current state determines what its next state will be; its course through the set of states is predetermined. Note that a machine can be deterministic and still never stop or finish, and therefore fail to deliver a result.
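This state-machine view can be made concrete with a minimal Python sketch (the parity machine and its transition table below are invented for illustration, not taken from the text): because each (state, input) pair has exactly one successor, the machine's entire course through its states is fixed by the input.

```python
def run_machine(transitions, start, inputs):
    """Run a deterministic finite-state machine.

    `transitions` maps (state, symbol) -> next state; since each pair
    has exactly one successor, the path through the states is fully
    predetermined by the input sequence.
    """
    state = start
    trace = [state]
    for symbol in inputs:
        state = transitions[(state, symbol)]
        trace.append(state)
    return trace

# Toy machine: tracks the parity of the number of 1s seen so far.
parity = {
    ("even", 0): "even", ("even", 1): "odd",
    ("odd", 0): "odd",   ("odd", 1): "even",
}

# Running twice on the same input always yields the same trace.
assert run_machine(parity, "even", [1, 0, 1]) == run_machine(parity, "even", [1, 0, 1])
```

Note that nothing here guarantees termination in general; determinism only says that whatever the machine does, it does identically on every run with the same input.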
What makes algorithms non-deterministic?
A variety of factors can cause an algorithm to behave in a way which is not deterministic, or non-deterministic:
- If it uses external state other than the input, such as user input, a global variable, a hardware timer value, a random value, or stored disk data.
- If it operates in a way that is timing-sensitive, for example if it has multiple processors writing to the same data at the same time. In this case, the precise order in which each processor writes its data will affect the result.
- If a hardware error causes its state to change in an unexpected way.
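The first factor in the list above is easy to demonstrate in Python (the function names are invented for illustration): a function whose output depends only on its arguments is deterministic, while one that also consults external state, here a random value, is not.

```python
import random

def pure_double(x):
    # Output depends only on the input: deterministic.
    return 2 * x

def noisy_double(x, rng=random):
    # Output also depends on external state (a random draw),
    # so repeated calls with the same x may disagree.
    return 2 * x + rng.random()

assert pure_double(21) == pure_double(21)  # always holds
# noisy_double(21) == noisy_double(21) will almost surely be False.
```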
Although real programs are rarely purely deterministic, it is easier for humans as well as other programs to reason about programs that are. For this reason, most programming languages and especially functional programming languages make an effort to prevent the above events from happening except under controlled conditions.
The prevalence of multi-core processors has resulted in a surge of interest in determinism in parallel programming, and the challenges of non-determinism have been well documented. A number of tools have been proposed to help deal with the resulting deadlocks and race conditions.
Disadvantages of Determinism
It is advantageous, in some cases, for a program to exhibit nondeterministic behavior. The behavior of a card shuffling program used in a game of blackjack, for example, should not be predictable by players — even if the source code of the program is visible. The use of a pseudorandom number generator is often not sufficient to ensure that players are unable to predict the outcome of a shuffle. A clever gambler might guess precisely the numbers the generator will choose and so determine the entire contents of the deck ahead of time, allowing him to cheat; for example, the Software Security Group at Reliable Software Technologies was able to do this for an implementation of Texas Hold 'em Poker that is distributed by ASF Software, Inc, allowing them to consistently predict the outcome of hands ahead of time. These problems can be avoided, in part, through the use of a cryptographically secure pseudo-random number generator, but it is still necessary for an unpredictable random seed to be used to initialize the generator. For this purpose a source of nondeterminism is required, such as that provided by a hardware random number generator.
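The shuffling pitfall can be sketched in Python; the seed value below is an invented stand-in for something guessable like a server timestamp. Seeding an ordinary pseudorandom generator with a guessable value lets an attacker replay the whole shuffle, whereas random.SystemRandom draws from the operating system's entropy source and has no seed to recover.

```python
import random

deck = list(range(52))

# Predictable: if the seed can be guessed, the entire shuffle can be replayed.
guessable_seed = 1_234_567  # stand-in for e.g. a server start-up timestamp
dealer = random.Random(guessable_seed)
shuffled = dealer.sample(deck, k=52)

attacker = random.Random(guessable_seed)        # attacker guesses the seed...
assert attacker.sample(deck, k=52) == shuffled  # ...and predicts every card

# Unpredictable: SystemRandom uses the OS entropy pool (os.urandom),
# so there is no seed state for an attacker to reconstruct.
secure_dealer = random.SystemRandom()
secure_shuffle = secure_dealer.sample(deck, k=52)
assert sorted(secure_shuffle) == deck  # still a permutation of the deck
```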
Note that a negative answer to the P=NP problem would not imply that programs with nondeterministic output are theoretically more powerful than those with deterministic output. The complexity class NP (complexity) can be defined without any reference to nondeterminism using the verifier-based definition.
Determinism categories in languages
Haskell provides several mechanisms:
- non-determinism, or a notion of failure:
  - the Maybe and Either types include the notion of success in the result;
  - the fail method of the class Monad may be used to signal failure as an exception;
  - the Maybe monad and MaybeT monad transformer provide for failed computations (stop the computation sequence and return Nothing).
- determinism/non-determinism with multiple solutions:
  - all possible outcomes of a multiple-result computation can be retrieved by wrapping its result type in a MonadPlus monad (its method mzero makes an outcome fail and mplus collects the successful results).
- The option type includes the notion of success.
- The null reference value may represent an unsuccessful (out-of-domain) result.
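A rough Python analogue of the Maybe-style failure handling described above (the function names are illustrative, not from a real library): returning None for an out-of-domain input plays the role of Nothing, and a small bind helper stops the chain at the first failure.

```python
from typing import Callable, Optional

def safe_sqrt(x: float) -> Optional[float]:
    # Out-of-domain input yields None (the analogue of Nothing).
    return x ** 0.5 if x >= 0 else None

def safe_recip(x: float) -> Optional[float]:
    return 1.0 / x if x != 0 else None

def bind(value: Optional[float],
         f: Callable[[float], Optional[float]]) -> Optional[float]:
    # Maybe-style sequencing: once a step fails, later steps are skipped.
    return None if value is None else f(value)

assert bind(safe_sqrt(4.0), safe_recip) == 0.5    # both steps succeed
assert bind(safe_sqrt(-1.0), safe_recip) is None  # failure propagates
```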
Seismological observations show that, in some regions of the lower mantle, an increase in bulk sound velocity, interestingly, occurs in the same volume where there is a decrease in shear velocity. We show that this anti-correlated behavior occurs on cation substitution in bridgmanite by making single crystal elasticity measurements of MgSiO3 and (Mg,Fe,Al)(Si,Al)O3 using inelastic x-ray scattering in the ambient conditions. Cation substitution of ferrous iron and aluminum may explain large low shear velocity provinces in the lower mantle.
Bridgmanite, or Pbnm-type magnesium-silicate perovskite, is the dominant mineral in the Earth’s lower mantle. Materials with perovskite or related structures also attract broad attention since they can display novel physical properties such as colossal magnetoresistance1, multiferroicity2, and high-temperature superconductivity3. At pressures over 125 GPa (corresponding to depths more than ~2700 km) and at temperatures greater than 2500 K, bridgmanite transforms to a post-perovskite (pPv) phase4 with the Cmcm-type CaIrO3 structure. It is widely believed that pPv is the main component of the D″ layer at the bottom of the lower mantle, which is 200 km thick just above the core mantle boundary (~2900 km depth).
In the deep mantle, between 2000 and 2891 km in depth, some regions show an increase in bulk sound velocity (VB = √(KS/ρ)) and a decrease in shear wave velocity (VS = √(G/ρ)): ΔVB > 0 > ΔVS, and others show a decrease in VB and an increase in VS: ΔVB < 0 < ΔVS5,6 (KS, G, and ρ are the adiabatic bulk modulus, shear modulus, and density, respectively). This feature is called an anti-correlated seismic velocity anomaly. It is reported that the phase transformation of (Mg,Fe,Al)(Si,Al)O3 from Pbnm-type to Cmcm-type can explain the increase in VS and decrease in VB from the average (ΔVB < 0 < ΔVS) in some deeper regions7. However, this cannot explain the anomaly in the shallower part of the mantle where the pPv phase is not stable. More importantly, it is difficult to interpret the anti-correlated nature of the anomaly where ΔVB and ΔVS have opposite signs. The regions showing this anomaly, which are beneath Africa and the central Pacific, attract attention as large low shear velocity provinces (LLSVPs).
The origin of the LLSVPs is under debate. Thermal heterogeneity has been considered8, but exclusively thermal effects are insufficient to explain the LLSVPs because usually both VB and VS decrease with temperature. It is thus suggested that the LLSVPs have very different chemical composition from that of the average mantle9 due to accumulations of subducted oceanic slabs10, remnants of Earth’s early magma ocean11, or even chemical reactions with the core12. Recently primordial metallic melt trapped in the mantle was suggested as the nature of LLSVPs13. A complicated model2, including multiple chemical and thermal effects, can reproduce the distribution of the LLSVPs. But this model requires rather a specific distribution of effects that are not internally well correlated. Houser14 suggested that slow VS might be correlated with temperature and chemical anomaly using the parameter set for bridgmanite15, as was used in ref. 6, but did not discuss the anti-correlated anomaly between VB and VS. The theoretical result15 used in both seismological studies6,14 shows anti-correlation only in elastic moduli but not in velocities, and, more importantly, has not yet been experimentally verified.
In order to address these issues, we investigated the elastic properties of single-crystal bridgmanite at ambient conditions. Although Brillouin light scattering (BLS) is frequently used to determine elastic properties of high-pressure minerals, the elasticity of iron-bearing bridgmanite has not been determined by BLS due to its opacity and its instability against strong optical laser irradiation. We used the inelastic x-ray scattering (IXS) technique in this study. We prepared two types of bridgmanite: MgSiO3 (Mg-Bdg, hereafter) and Mg0.943Fe0.045Al0.023Si0.988O3 ((Fe,Al)-Bdg, hereafter). Iron in (Fe,Al)-Bdg was confirmed by synchrotron Mössbauer spectroscopy to be in the high-spin ferrous state and to occupy the large A site of the perovskite structure. The sample characterization and more details of the IXS measurements are given in the Methods section. The elastic stiffness tensors (Cij) for Mg-Bdg and for (Fe,Al)-Bdg were obtained from analysis of IXS spectra based on Christoffel’s equation16. A typical set of IXS spectra is shown in Fig. 1. The elastic moduli obtained are listed in Table 1 together with literature values15,17,18,19,20,21,22,23. The velocity surface plots of Mg-Bdg from the present Cij determined from two sets of IXS measurements are shown in Fig. 2 together with those calculated from BLS results17,18,19. The patterns of the velocity surfaces are similar to each other: the longitudinal velocity is fastest approximately along the b axis and minimum along the c axis, etc. The absolute values determined using IXS are generally smaller than those from BLS.
The pattern of the velocity surface of (Fe,Al)-Bdg is basically similar to that of Mg-Bdg (red and blue lines in Fig. 2). The present cation substitution affected the velocity surface as follows: 1. VP along the b and c axes is increased; 2. the average VS along the b axis is decreased; 3. the difference of VS along the a and c axes is increased and decreased, respectively. Crystallographic studies24 report that iron substitution enlarges the a axis more than other axes, which is consistent with the present result (see Method section). The large elongation of the a axis probably results in the least change in VP along a axis. Thus qualitatively the velocity surfaces indicate that elastic anisotropy in bridgmanite increases with the present cation substitution. More quantitatively, the acoustic anisotropy defined by 2 × (Vmax − Vmin)/(Vmax + Vmin) increases from 8.81% to 8.92% for longitudinal waves, and from 12.4% to 13.4% for transverse waves. The present results experimentally demonstrate that the degree of anisotropy is increased by the present cation substitution.
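The acoustic-anisotropy measure defined above is simple to evaluate; the velocities below are invented round numbers chosen only to exercise the formula, not values from the paper.

```python
def acoustic_anisotropy(v_max, v_min):
    """Anisotropy as defined in the text: 2 * (Vmax - Vmin) / (Vmax + Vmin)."""
    return 2.0 * (v_max - v_min) / (v_max + v_min)

# Hypothetical longitudinal velocities in km/s.
a = acoustic_anisotropy(11.0, 10.0)
print(f"{100 * a:.2f}%")  # roughly 9.5% for this invented pair
```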
The Voigt-Reuss-Hill averages of the bulk and shear moduli calculated from Cij are listed in Table 1. We determine KS and G to be 236 and 166 GPa, respectively, for Mg-Bdg. The value of KS in the present study is consistent with that determined from RUS20 and that from a calculational study22, but is lower than the other values by ~15 GPa (6%). G is also smaller by ~10 GPa (also 6%) than the previous results. These differences correspond to 3% in velocity. The origin of the differences in KS and G between the two techniques should be further investigated. Nevertheless, this study experimentally demonstrated that the present cation substitution in bridgmanite increases KS and VB and decreases G and VS: an anti-correlated behavior.
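As a quick sanity check, the velocities implied by the Mg-Bdg aggregate moduli reported here (KS = 236 GPa, G = 166 GPa) follow from the standard relations VB = √(KS/ρ) and VS = √(G/ρ), using the density given in the Methods section (taken as 4103.3 kg/m3). The helper function below is illustrative.

```python
import math

def velocity_km_s(modulus_gpa, density_kg_m3):
    # V = sqrt(M / rho); convert GPa -> Pa, then m/s -> km/s.
    return math.sqrt(modulus_gpa * 1e9 / density_kg_m3) / 1e3

K_S, G, rho = 236.0, 166.0, 4103.3  # GPa, GPa, kg/m^3 (Mg-Bdg, this study)

v_b = velocity_km_s(K_S, rho)  # bulk sound velocity, ~7.6 km/s
v_s = velocity_km_s(G, rho)    # shear velocity, ~6.4 km/s
print(f"VB = {v_b:.2f} km/s, VS = {v_s:.2f} km/s")
```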
This anti-correlated behavior in elastic moduli and velocities by cation substitution has not been reported. Bulk and shear moduli and velocities are summarized in Fig. 3 together with previous results. Previously, the effect of Fe substitution was investigated using ultrasonic interferometry21 (UI) and calculations15,22. The sample used in the UI study contained not only Fe2+ but also Fe3+. The results of the UI study disagree with one calculation15, where Fe2+ substituted for Mg2+, but rather agree with another22, where Fe3+ and Al3+ substituted for Mg2+ and Si4+. These results21,22 imply that Fe3+ substituting for Mg2+ degreases both KS and G. The effect of aluminum substitution was reported using BLS25 and theoretical calculation21. An experimental study25 reported that the substitution of only Al decreases both elastic moduli and slightly increases VB. A theoretical study22 showed that the substitution of only Al decrease VB and VS as well as KS and G. That study22 also investigated the effect of coupled substitution of Fe3+ and Al, demonstrating that the effect of this pair substitution is qualitatively the same as that of the substitution of aluminum only.
Water content sometimes reduces elastic moduli. The present samples contain a certain amount of water (140 and 460 ppm). However, it is not known how much water content affects the elasticity of bridgmanite. If water content decreased shear modulus for bridgmanite, e.g. by 0.3 GPa/100 ppm or shear velocity by 0.02 km/s/100 ppm, the present cation substitution for dry bridgmanite would show a positive correlated behavior, or increase both KS and G. The anti-correlated behavior observed in this study may be due to a combination of the cation substitution and the water content.
We simply consider the effects of iron and aluminum separately, though these substitutions may be coupled. Many investigations have been made of the effect of cation substitution on the isothermal bulk modulus of bridgmanite by measuring compression curves. It is well known that KT0 and K′ derived from a compression-curve fitting are strongly correlated. KT0 also depends on the pressure range of the measurement, sample conditions, etc. Nevertheless, the relative change in KT0 determined by the same technique is reliable. The effect of a small amount of Fe2+ substitution on KT0 is reported to be positive26,27,28,29. This is qualitatively consistent with the theoretical study15. In contrast, the effect of aluminum on KT0 is still controversial; a positive effect (increasing KT0) is reported in some studies26,30 and a negative effect (decreasing KT0) in others30,31,32. Based on the BLS studies17,18,19,25 and the theoretical one21, the effect of aluminum substitution on KS can be considered negative. The effect of only Fe2+ on the velocities can be calculated from the present study by subtracting the effect of Al from the BLS results17,18,19,25, assuming that the effects of Fe2+ and Al are independent. This analysis suggests that Fe2+ substitution increases both VB and VS (Table 2).
We apply the present results to a geochemical and geothermal model to estimate whether this effect is sufficient to explain the LLSVPs. We assume a perovskitic lower mantle33 for simplicity. The seismic anomaly observed in the LLSVPs (+1 and −1% of ΔVB/VB and ΔVS/VS, respectively)5,6 may then be explained by variation of Fe2+ and Al substitution into bridgmanite at temperature conditions for 2000–2891 km depth (2250–2450 K34). The temperature effects on VB and VS were assumed to be independent of pressure and composition (Table 2). The observed anomaly of +/−1% for VB and −/+1% for VS corresponds to the compositional variation between MgSiO3 and Mg0.959Fe0.027Al0.028Si0.986O3, i.e., +/−2.0 atom% of ΔFe/(Mg + Fe + Si + Al) and +/−1.3 atom% of ΔAl/(Mg + Fe + Si + Al), in the temperature range of 2250–2450 K34. This compositional heterogeneity of bridgmanite then explains the anti-correlated seismic anomaly (Figs 4 and 5AB). This model indicates that cation substitution of a few atomic percent causes an anti-correlated anomaly comparable to that observed in the LLSVPs.
We now include the effect of temperature, since the LLSVPs may correlate with local temperature changes. We assume a temperature difference ΔT between the regions with the highest VB and the average value, i.e. ΔT = T(ΔVB/VB = 1%) − T(ΔVB/VB = 0%). The chemical inhomogeneity, ΔX/(Mg + Fe + Si + Al) (X = Fe or Al), needed to explain the velocity anomaly is then shown in Fig. 4. In particular, when ΔT is about 113 K, the LLSVPs can be explained by only 2.7 atom% of Fe2+ substitution without Al variation (Fig. 5AC). More detailed modeling requires including ferropericlase and taking the effect of the spin transition in these two materials into account.
We have experimentally demonstrated that cation substitutions in bridgmanite enhances elastic anisotropy and causes anti-correlated behavior in elastic wave velocities. This result indicates that seismic anomalies observed in the lower mantle could be explained by chemical heterogeneity in bridgmanite.
Sample synthesis and characterizations
The single crystals examined in this study were synthesized at 24 GPa and 1500 °C using a Kawai-type multi anvil press (USSA-5000) installed at ISEI, Okayama University35. The isotope ratios of chemical reagents were at natural abundance.
The chemical compositions are confirmed as MgSiO3 and Mg0.943Fe0.045Al0.023Si0.988O3 by an electron microprobe analyzer. The number ratios of Fe/(Mg + Fe + Si + Al) and Al/(Mg + Fe + Al + Si) of this sample are 0.023 and 0.012, respectively. Assuming all Fe is bivalent, the sum of the charge estimated from the EPMA results is −0.003. This is negligible, taking the uncertainty of the chemical analysis into account. A typical amount of water content of single crystals in the run product was 140 ± 52 and 460 ± 45 ppm according to synchrotron IR absorption analyses35.
The Fe3+/ΣFe ratio of (Fe,Al)-Bdg was evaluated with synchrotron Mössbauer spectroscopy at BL10XU of SPring-836. The obtained spectrum was analyzed using the program MossA37. Without any prejudice, the spectrum seems to consist of two absorption lines with different intensities (Fig. 6). They can be interpreted either as two singlets, as an asymmetric doublet, or as a combination of a doublet and a singlet. If the spectrum consisted of two singlets, an isomer shift of 1.96(9) mm/s would correspond to that of monovalent high-spin iron. Considering the charge neutrality of the system, it is difficult for Fe+ to substitute for Mg or Si in the perovskite structure. Therefore, the absorption line at 1.96 mm/s is the higher-velocity component of a doublet. Analysis based on an asymmetric doublet gives an isomer shift of 1.05(6) mm/s and a quadrupole splitting of 1.8(1) mm/s. These values indicate that iron in this sample was in a divalent high-spin state38 and substituted for magnesium39. The higher intensity on the lower-velocity side is attributed either to the sample being a single crystal or to iron existing in another state. The former case is more plausible than the latter for the following reasons: 1. the linewidths determined using two singlets (0.97(16) and 0.76(30) mm/s for the lower- and higher-velocity lines) are consistent within the fitting uncertainty; 2. the line shape of the lower-velocity signal looks symmetric, and adding a singlet/doublet to the lower-velocity signal did not improve the fitting quality at all. We of course paid much attention to the possible existence of iron in a trivalent high-spin state, which should give a doublet with an isomer shift of ~0.5 mm/s38 and a quadrupole splitting of 0.5–1.0 mm/s39. Since parameter fitting assuming two doublets (one for HS Fe2+ and another for HS Fe3+) did not converge, we were not able to determine the amount of ferric iron, if any existed.
The asymmetric doublet is probably attributable to the angle between the principal axis of the electric field gradient at the Fe site and the incident X-ray beam direction, because the sample was a single crystal. The intensity ratio (2.7:1) indicates that the sign of the quadrupole splitting is negative. The linewidth assuming one doublet is 0.93(13) mm/s, much broader than the typical energy resolution of the Mössbauer spectrometer at BL10XU (0.43 mm/s). This is probably due to variation in the local environment around Fe in this sample, caused by the Mg/Al/Si distribution in neighboring sites, hydrogen, and/or oxygen vacancies. The synchrotron Mössbauer measurements thus indicate that most iron atoms were in a divalent high-spin state and occupied a magnesium site. Consequently, the simplest substitution model, in which iron substitutes for magnesium and aluminum substitutes for both magnesium and silicon in the perovskite structure, is consistent with the results of these analyses.
The investigated grains were confirmed to be single domains using a four-circle diffractometer with a laboratory X-ray source at room temperature. The lattice constants a, b, and c were 4.7784(3), 4.9306(4), 6.9005(8) Å and 4.787(1), 4.934(1), 6.904(1) Å for the Mg-Bdg and (Fe,Al)-Bdg, respectively. It was reported that the unit cell volume of MgSiO3 bridgmanite does not change even with 100 ppm water content40. An analytical curve drawn by fitting a linear function to literature values19,20,21,24,26,27,28,30,31,32,41,42,43,44,45,46,47,48,49,50,51,52,53 is shown in Fig. 7. The obtained analytical line for iron substitution is consistent with the literature29. The unit cell volume of the Mg-Bdg is larger than the present analytical line by only 0.05%. Since this difference is comparable to the experimental error, the water content of 140 ppm seems to have a negligible effect. In contrast, the unit cell volume of the (Fe,Al)-Bdg is larger than that of the analytical line by 0.29%. This excess volume may be explained by the effects of the aluminum and water contents. It is known that aluminum incorporation increases the unit cell volume26,30,31,32. Estimating from the previous results, the value of 0.012 for Al/(Mg + Fe + Al + Si) makes the unit cell volume larger by 0.09%. Although the degree of the water effect on the unit cell volume of magnesium silicate perovskite is uncertain, the water content of 460 ppm probably made the unit cell volume larger by 0.20%. The densities of the Mg-Bdg and (Fe,Al)-Bdg are 4103.3 and 4139.5 kg/m3, respectively.
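As a rough cross-check, the density implied by the Mg-Bdg lattice constants can be computed directly. This is a sketch under the assumption of Z = 4 MgSiO3 formula units per orthorhombic cell and standard atomic masses, so agreement with the quoted density is only expected to ~0.1%:

```python
# Rough cross-check of the Mg-Bdg density from its quoted lattice constants.
# Assumptions (not stated explicitly in the text): Z = 4 MgSiO3 formula
# units per orthorhombic cell and standard atomic masses.

N_A = 6.02214076e23                        # Avogadro constant, 1/mol
M_MGSIO3 = 24.305 + 28.085 + 3 * 15.999    # molar mass of MgSiO3, g/mol

def density_g_cm3(a, b, c, z=4, molar_mass=M_MGSIO3):
    """Density of an orthorhombic cell; a, b, c in angstroms."""
    volume_cm3 = a * b * c * 1e-24         # 1 A^3 = 1e-24 cm^3
    return z * molar_mass / (N_A * volume_cm3)

rho = density_g_cm3(4.7784, 4.9306, 6.9005)
print(f"{rho:.3f} g/cm^3")  # 4.101 g/cm^3, i.e. ~4101 kg/m^3
```

The result, ~4101 kg/m3, agrees with the quoted 4103.3 kg/m3 to better than 0.1%.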
IXS measurement and data analysis
Inelastic X-ray scattering (IXS) with a single-crystal sample, in conjunction with an analysis based on Christoffel's equation, has recently been used for accurate determination of elastic moduli16,54,55,56. This technique has been applied to data along high-symmetry directions for samples at high-pressure and high-temperature conditions55,56. In this study, we did not limit data collection to high-symmetry directions, but measured deliberately redundant data at off-symmetry positions to determine Cij precisely and to utilize all data measured with the analyzer array16,54 (see Fig. 1). We performed IXS measurements at BL35XU of SPring-857 at 21.747 and 17.794 keV, with typical energy resolutions of 1.5 and 3.0 meV full-width at half-maximum (FWHM), respectively. The 21.747 keV X-rays were used for Mg-Bdg and the 17.794 keV X-rays for (Fe,Al)-Bdg. The incident X-ray beam was ~70 μm in diameter. To ensure the quality of our results, we performed another measurement for Mg-Bdg on a second grain from the same sample growth run at BL43LXU of SPring-858, using an X-ray beam ~20 μm in size at an energy of 17.794 keV, with an energy resolution of 3.0 meV (FWHM).
For each observed phonon mode, the elastic wave velocity was calculated assuming a linear relationship between phonon energy and momentum. Single-crystal elasticity at ambient conditions was determined by least-squares fitting to the observed velocities using the measured densities. Details of the fitting are given in ref. 16. Phonons with momentum transfers |q| from 1 to 3 nm−1 away from Bragg peaks were used for the analysis.
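The velocity extraction can be illustrated with a toy linear fit of phonon energy versus momentum transfer under the linear-dispersion assumption E = ħv|q|. The data below are synthetic, generated from an assumed 8000 m/s velocity, not measured values from this study:

```python
# Toy illustration of extracting an acoustic velocity from IXS data under
# the linear-dispersion assumption E = hbar * v * |q|. The "measurements"
# are synthetic, generated from an assumed velocity of 8000 m/s.
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
MEV = 1.602176634e-22    # joules per meV

q = np.array([1.0, 1.5, 2.0, 2.5, 3.0]) * 1e9   # momentum transfers, 1/m
v_true = 8000.0                                 # assumed sound velocity, m/s
energy_mev = HBAR * v_true * q / MEV            # synthetic phonon energies

slope = np.polyfit(q, energy_mev, 1)[0]         # meV per (1/m)
v_fit = slope * MEV / HBAR                      # convert slope back to m/s
print(round(v_fit))  # 8000
```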
The elastic moduli determined from the data at BL35XU and BL43LXU are mutually consistent, in contrast to the published BLS studies, which disagree with one another. The individual results are listed in Tables 1 and 3. The two IXS measurements agree to better than 14% (maximum deviation), whereas the three BLS studies spread more widely (26% maximum deviation). These two sets of IXS data for Mg-Bdg were therefore analyzed as one set (giving 461 modes) to obtain more reliable elastic properties. For (Fe,Al)-Bdg, 319 modes were used. The residuals of the fitting are shown in Fig. 8. A slight linear trend between ΔE and |q| is observed in the residuals, probably meaning that the assumed linear relationship between ΔE and |q| is not completely valid over this q range.
How to cite this article: Fukui, H. et al. Effect of cation substitution on bridgmanite elasticity: A key to interpret seismic anomalies in the lower mantle. Sci. Rep. 6, 33337; doi: 10.1038/srep33337 (2016).
We appreciate the help of Longjian Xie, Ryo Watanabe, and Tatsuya Hiratoko during the IXS experiments. This work was performed using joint-use facilities of the Institute for Study of the Earth's Interior, Okayama University, and supported in part by Grants-in-Aid for Scientific Research (Grant Nos 22224008 and 15H021128 awarded to AY and Nos 22000002 and 15H05748 to EO) from the Japan Society for the Promotion of Science. Inelastic X-ray scattering measurements at BL35XU57 were done with the approval of JASRI (Proposal Nos 2012B1196, 2013A1047, 2013B1054, and 2014B1290). Measurements at RIKEN BL43LXU58 were made during commissioning time. The synchrotron Mössbauer spectroscopy was performed on BL10XU at SPring-8 (Proposal No. 2014A0104).
Part of what you should learn from this book is a sense of good Perl style. Style is, of course, a matter of preference and debate. I won't pretend to know or demonstrate The One True Style, but I hope to show readers one example of contemporary, efficient, "effective" Perl style.
The fact that the code appears in a book affects its style somewhat. Examples can't be too verbose or boring; each one has to make one or two specific points without unnecessary clutter. Therefore, you will find the following:
I don't use English. It's just too verbose for this little book. Furthermore, English is not common practice among Perl programmers, and scripts that use English suffer a speed penalty. This is not to say that English is not useful, just that you won't see it here.
Not everything runs cleanly under -w or use strict (see Item 36). I advise all Perl programmers to make use of both -w and use strict regularly. However, starting off all the examples with my($this, $that) isn't going to make them more readable, and it's readability that's important here.
I generally minimize punctuation (see Item 10). Veteran Perl 4 programmers may find the lack of parentheses unnerving, but it grows on you.
Finally, I try to make the examples meaningful. Not every example can be a useful snippet, but I've tried to include as many pieces of real-world code as possible. | <urn:uuid:d35c1942-92c1-43ca-bb32-053e1a02f4d3> | 2.578125 | 302 | Truncated | Software Dev. | 58.062177 | 95,499,835 |
What are the main sources of nutrient inputs to Ireland's aquatic environment?
Title: What are the main sources of nutrient inputs to Ireland's aquatic environment?
Authors: Mockler, Eva M.; Archbold, Marie A.
Permanent link: http://hdl.handle.net/10197/8562
Date: 26-Apr-2017
Abstract: Where rivers and lakes are impacted by excess nutrients, we need to understand the sources of those nutrients before mitigation measures can be selected. In these areas, modelling can be used in conjunction with knowledge from local authorities and information gained from investigative assessments to identify significant pressures that contribute excessive nutrients to surface waters. Where surface waters are impacted by excess nutrients, understanding the sources of those nutrients is key to the development of effective, targeted mitigation measures. In Ireland, nutrient emissions are the main reason that surface waters are not achieving the required Good Status, as defined by the Water Framework Directive (WFD). A model has been developed in order to predict the sources of nutrients contributing to these emissions and to assess future pressures and the likely effectiveness of targeted mitigation scenarios. This Source Load Apportionment Model (SLAM) supports catchment managers by providing scientifically robust evidence to back up decision-making in relation to reducing nutrient pollution. The SLAM is a source-oriented model that calculates the nitrogen and phosphorus exported from each sector (e.g. pasture, forestry, wastewater discharges) that contribute to nutrient loads in a river. Model output is presented as maps and tables showing the proportions of nutrient emissions to water attributed to each sector in each sub-catchment. The EPA has incorporated these model results into the multiple lines of evidence used for the WFD characterisation process for Irish catchments.
Funding Details: Environmental Protection Agency
Type of material: Conference Publication
Keywords: Source load apportionment model; Suir catchment; Phosphorus load
Language: en
Status of Item: Peer reviewed
Conference Details: International Association of Hydrogeologists (IAH) (Irish Group) Conference, 25-26 April 2017, Tullamore, Offaly, Ireland
Appears in Collections: Centre for Water Resources Research Collection; Civil Engineering Research Collection
This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland. No item may be reproduced for commercial purposes. For other possible restrictions on use please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply. | <urn:uuid:ff9d3d50-dd49-45b9-9362-bfe776e65f14> | 3.03125 | 542 | Academic Writing | Science & Tech. | 21.175039 | 95,499,836 |
The study says 60 out of 68 U.S. species, or 88 percent of fish species found exclusively in large-river ecosystems like the Mississippi, Missouri and Ohio rivers, are of state, federal or international conservation concern. The report is in the April issue of the journal Frontiers in Ecology and the Environment.
On the other hand, says lead author Brenda Pracheil, a postdoctoral researcher in the UW’s Center for Limnology, the study offers some good news, too.
Traditionally, the conservation emphasis has been on restoring original habitat. This task proves impossible for ecosystems like the main trunk of the Mississippi River — the nation’s shipping, power production, and flood control backbone. While the locks, dams and levees that make the Mississippi a mighty economic force have destroyed fish habitat by blocking off migration pathways and changing annual flood cycles species need to spawn, removing them is not a realistic conservation option.
But, says Pracheil, we’re underestimating the importance of tributaries. Her study found that, for large-river specialist fish, it’s not all or nothing. Some rivers are just big enough to be a haven.
For any river in the Mississippi Basin with a flow rate of less than 166 cubic meters of water per second, virtually no large-river specialist fishes are present. But in any river that even slightly exceeds that rate, 80 percent or more of the large-river species call it home.
That means Mississippi tributaries about the size of the Wisconsin River and larger are providing crucial habitat for large-river fishes. When coupled with current efforts in the large rivers themselves, these rivers may present important opportunities for saving species.
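The reported flow-rate cutoff lends itself to a one-line classifier. The 166 m³/s figure is from the study; the example rivers and their flow values are hypothetical:

```python
# Illustrative classifier based on the flow threshold reported in the study:
# below ~166 m^3/s virtually no large-river specialists occur, while rivers
# even slightly above it host 80% or more of the species. The threshold is
# from the text; the example rivers and flow values are hypothetical.

THRESHOLD_M3_S = 166.0

def likely_specialist_habitat(mean_flow_m3_s):
    """True if a river's mean flow suggests large-river specialist habitat."""
    return mean_flow_m3_s > THRESHOLD_M3_S

rivers = {"small creek": 12.0, "mid-size tributary": 170.0, "mainstem": 5000.0}
print({name: likely_specialist_habitat(q) for name, q in rivers.items()})
# {'small creek': False, 'mid-size tributary': True, 'mainstem': True}
```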
“Talk to any large-river fish biologist, and they will tell you how important tributaries are to big river fish," says Pracheil. "But, until now, we’ve not really understood which rivers are most important. Our study tackles that and shows which tributaries in the Mississippi River Basin show the most promise for conservation of large-river fishes.”
Current policies governing large river restoration projects are funded largely through the U.S. Army Corps of Engineers, which requires that funds be spent on mainstems — or the big rivers themselves. Pracheil's study suggests spending some of that money on tributary restoration projects might do more conservation good for fish, while also letting agencies get more bang for their habitat restoration buck.
“Tributaries may be one of our last chances to preserve large-river fish habitat,” Pracheil says. “Even though the dam building era is all but over in this country, it’s just starting on rivers like the Mekong and Amazon — places that are hotspots for freshwater fish diversity. While tributaries cannot offer a one-to-one replacement of main river habitats, our work suggests that [they] provide important refuges for large-river fishes and that both main rivers and their tributaries should be considered in conservation plans.”
Brenda Pracheil, 402-613-0315, email@example.com
Brenda Pracheil | Newswise
As Charles Darwin showed nearly 150 years ago, bird beaks are exquisitely adapted to the birds' feeding strategy. A team of MIT mathematicians and engineers has now explained exactly how some shorebirds use their long, thin beaks to defy gravity and transport food into their mouths.
The phalarope, commonly found in western North America, takes advantage of surface interactions between its beak and water droplets to propel bits of food from the tip of its long beak to its mouth, the research team reports in the May 16 issue of Science.
These surface interactions depend on the chemical properties of the liquid involved, so phalaropes and about 20 other birds species that use this mechanism are extremely sensitive to anything that contaminates the water surface, especially detergents or oil.
“Some species rely exclusively on this feeding mechanism, and so are extremely vulnerable to oil spills,” said John Bush, MIT associate professor of applied mathematics and senior author of the paper.
Wildlife biologists have long noted the unusual feeding behavior of phalaropes, which spin in circles on the water, creating a vortex that sweeps small crustaceans up to the surface, just like tea leaves in a swirling tea cup.
The birds peck at the surface, picking up millimetric droplets of water with their prey trapped inside. Since the birds point their beaks downward during the feeding process, gravity must be overcome to get those droplets from the tip of the bird's long beak to its mouth. Until now, scientists have been puzzled as to how that happens.
Scientists speculated that the feeding strategy depended on the drop's surface tension. Surface tension normally dominates fluid systems that are small relative to raindrops (for example, the world of insects), but it was not clear how it could benefit shorebirds. A key observation was that in order to propel the drop, the birds open and close their beaks in a tweezering motion.
To unravel the mystery, Bush teamed up with Manu Prakash, a graduate student in MIT's Center for Bits and Atoms, and David Quere, of the Ecole Polytechnique in Paris, a visiting professor in MIT's math department at the time of the study. They built a mechanical model of the phalarope beak that allowed them to study the process in slow motion.
The process depends on a surface interaction known as contact angle hysteresis, typically an impediment to drop motion on solids. For example, raindrops stick to window panes due to contact angle hysteresis. In the case of the bird beak, the time-dependent beak geometry couples with contact angle hysteresis to propel the drop upward.
“This may be the first known example where droplet motion is enabled rather than resisted by contact angle hysteresis,” Bush said.
As the beak scissors open and shut, each movement propels the water droplet one step closer to the bird's mouth. Specifically, when the beak closes, the drop's leading edge proceeds toward the mouth, while the trailing edge stays put. When the beak opens, the leading edge stays in place while the trailing edge recedes toward the mouth.
In this stepwise ratcheting fashion, the drop travels along the beak at a speed of about 1 meter per second.
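A toy kinematic sketch of this stepwise motion, assuming an idealized fixed advance per half-cycle (step size and positions are in arbitrary units, illustrative only, not measured values):

```python
# Toy kinematics of the capillary ratchet: closing the beak advances the
# drop's leading edge by one step while the trailing edge stays pinned;
# opening lets the trailing edge catch up. Step size and positions are in
# arbitrary units and are illustrative assumptions, not measured values.

def ratchet(n_cycles, step):
    """Return (trailing, leading) edge positions after n open-close cycles."""
    trailing, leading = 0.0, 1.0   # initial drop spanning [0, 1]
    for _ in range(n_cycles):
        leading += step            # beak closes: leading edge advances
        trailing += step           # beak opens: trailing edge recedes mouthward
    return trailing, leading

print(ratchet(4, 0.5))  # (2.0, 3.0): the drop moved 2 units, length unchanged
```

The drop advances one full step per open-close cycle while its length is preserved, which is the essence of the ratchet.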
The efficiency of the process, which the authors dub the “capillary ratchet,” depends on the beak shape: Long, narrow beaks are best suited to this mode of feeding. The study highlights the sensitivity of this mechanism to the opening and closing angles of the beak: “Varying these angles by a few degrees can change the drop speed by a factor of 10,” Quere said.
The capillary ratchet also depends critically on the beak's wettability--a measure of a liquid's tendency to bead up into droplets or spread out to wet its surface. Oil is much more “wetting” than water, so if the beak is soaked in oil from a spill, this process won't work.
The researchers note a potential application of nature's design: “We are currently exploring microfluidic devices in which this mechanism could be exploited for directed droplet transport, allowing for controlled stepwise motion of microliter droplets,” Prakash said.
The research was funded by the National Science Foundation, the Centre National de la Recherche Scientifique (France) and MIT's Center for Bits and Atoms.
Written by Anne Trafton, MIT News Office
Elizabeth A. Thomson | MIT News Office
Bismuth(III) oxide (Bi2O3) is perhaps the most industrially important compound of bismuth. It is also a common starting point for bismuth chemistry. It is found naturally as the mineral bismite (monoclinic) and sphaerobismoite (tetragonal, much more rare), but it is usually obtained as a by-product of the smelting of copper ores. Bismuth trioxide is commonly used to produce the "Dragon's eggs" effect in fireworks, as a replacement for red lead.

The structures adopted by Bi2O3 differ substantially from those of arsenic(III) oxide, As2O3, and antimony(III) oxide, Sb2O3.
Bismuth oxide, Bi2O3, has five crystallographic polymorphs. The room-temperature phase, α-Bi2O3, has a monoclinic crystal structure. There are four high-temperature phases: a tetragonal β-phase, a body-centred cubic γ-phase, a cubic δ-Bi2O3 phase, and an ε-phase. The room-temperature α-phase has a complex structure with layers of oxygen atoms and layers of bismuth atoms between them. The bismuth atoms are in two different environments, which can be described as distorted six- and five-coordinate, respectively.
β-Bi2O3 has a structure related to fluorite.
γ-Bi2O3 has a structure related to that of Bi12SiO20 (a sillenite), where a fraction of the Bi atoms occupy the position occupied by SiIV, and may be written as Bi12Bi0.8O19.2.
δ-Bi2O3 has a defective fluorite-type crystal structure in which two of the eight oxygen sites in the unit cell are vacant. ε-Bi2O3 has a structure related to the α- and β-phases, but as the structure is fully ordered it is an ionic insulator. It can be prepared by hydrothermal means and transforms to the α-phase at 400 °C.
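The defective-fluorite description can be checked with a quick stoichiometry count: a fluorite (MO2) unit cell contains 4 cations and 8 anion sites, so leaving 2 of the 8 oxygen sites vacant should recover the Bi2O3 composition. A minimal sketch:

```python
# Stoichiometry check for the defective-fluorite description of delta-Bi2O3:
# a fluorite (MO2) cell has 4 cation and 8 anion sites; with 2 of the 8
# oxygen sites vacant, the composition should reduce to Bi2O3.
from math import gcd

cations, anion_sites, vacancies = 4, 8, 2
bi, o = cations, anion_sites - vacancies   # 4 Bi and 6 O per unit cell
g = gcd(bi, o)
print(f"Bi{bi // g}O{o // g}")  # Bi2O3
```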
This page was last edited on 30 June 2018, at 22:59 (UTC), under CC BY-SA license.
What do you mean by Data Locality?
A major drawback in early Hadoop deployments was cross-switch network traffic caused by the huge volume of data. Data locality was introduced to overcome this: it refers to moving the computation close to where the actual data resides on a node, instead of moving large amounts of data to the computation, which increases the overall throughput of the system. In Hadoop, HDFS stores the datasets, divided into blocks spread across the datanodes of the cluster. When a user runs a MapReduce job, the NameNode supplies the block locations, and the framework schedules the MapReduce tasks on the datanodes holding the relevant blocks wherever possible. Data locality has three categories:
• Data local – the data is on the same node as the mapper working on it, so the data is closest to the computation. This is the most preferred scenario.
• Intra-rack – the mapper runs on a different node but on the same rack, since it is not always possible to execute the mapper on the same datanode due to resource constraints.
• Inter-rack – the mapper runs on a different rack, when it is not possible to execute it on any node in the same rack due to resource constraints.
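The three categories can be sketched as a toy classifier. The node and rack identifiers are hypothetical, and real Hadoop schedulers rely on topology scripts and delay scheduling rather than this simplified comparison:

```python
# Toy classifier for the three locality levels described above. Node/rack
# identifiers are hypothetical; this is a sketch, not Hadoop's scheduler.

def locality(task_node, task_rack, block_node, block_rack):
    """Classify where a map task runs relative to an HDFS block replica."""
    if task_node == block_node:
        return "data-local"      # best case: computation on the data's node
    if task_rack == block_rack:
        return "intra-rack"      # same rack, different node
    return "inter-rack"          # data must cross the rack switch

print(locality("node1", "rack1", "node1", "rack1"))  # data-local
print(locality("node2", "rack1", "node1", "rack1"))  # intra-rack
print(locality("node3", "rack2", "node1", "rack1"))  # inter-rack
```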
Data locality means that the work being performed is local to where the data is being stored.
The whole point of Map/Reduce is that it's far cheaper to ship the map/reduce process to where the data is stored than to ship the data around the cluster.
Note the following:
The importance of data locality diminishes based on the network speed and the level of effort of the work being performed within a single cycle of a Mapper.map() method. See: Uncovering mysteries of InputFormat: Providing better control for your Map Reduce execution.
Here we found that because the level of effort to process the data within a single Mapper.map() was longer than it took to transport the data between the storage node and the node where the work was being performed, it was possible to leverage the cluster and complete the job faster.
In more modern architecture models, you will start to see a compute/storage (or storage/compute) model where you can have multiple clusters: one for storing the data (HDFS or MapRFS) and clusters or single containers to process the data. This is why containers and container management (Mesosphere, Kubernetes, etc.) are important to the future of Hadoop.
The present study used DNA barcodes to identify individual cyprids to species. This enables accurate quantification of larvae of potential fouling species in the plankton. In addition, it explains the settlement patterns of barnacles and serves as an early warning system of unwanted immigrant species. Sequences from a total of 540 individual cypris larvae from Taiwanese waters formed 36 monophyletic clades (species) in a phylogenetic tree. Of these clades, 26 were identified to species, but 10 unknown monophyletic clades represented non-native species. Cyprids of the invasive barnacle, Megabalanus cocopoma, were identified. Multivariate analysis of antennular morphometric characters revealed three significant clusters in a nMDS plot, viz. a bell-shaped attachment organ (most species), a shoe-shaped attachment organ (some species), and a spear-shaped attachment organ (coral barnacles only). These differences in attachment organ structure indicate that antennular structures interact directly with the diverse substrata involved in cirripede settlement.
Why Do Nuclear Vacuoles Appear in the Prophasic Nucleus of Pollen Mother Cells? Facts and Hypotheses
The presence of nuclear vacuoles during meiotic prophase has been shown in both animal (Rasmussen, 1976) and plant cells, including ferns (Sheffield and Bell, 1979; Sheffield, Laird and Bell, 1983), gymnosperms and angiosperms (Sheffield et al., 1979; Karasawa and Ueda, 1983a; Karasawa and Ueda, 1983b; Sangwan, 1986). An intimate relationship between the appearance of nuclear vacuoles and meiotic processes seems to be a common feature of all cells. Although there is no lack of hypotheses put forth by different authors, the whys and wherefores of these structures have yet to be satisfactorily explained.
Keywords: Nuclear Envelope; Pollen Mother Cell; Meiotic Prophase; Nuclear Volume; Small Vacuole
- Karasawa R, Ueda K (1983b) Occurrence of nuclear vacuoles in meiotic prophase nuclei in Compositae. Caryologia 36: 145–153
- Sangwan RS (1986) Formation and cytochemistry of nuclear vacuoles during meiosis in Datura. Eur J Cell Biol 40: 210–218
- Sheffield E, Bell PR (1979) Ultrastructural aspects of sporogenesis in a fern, Pteridium aquilinum (L.) Kuhn. Ann Bot 44: 393–405
Organic acids are organic compounds that contain carboxyl groups. They can be classified by the type of carbon chain (aliphatic, alicyclic, aromatic, or heterocyclic), by saturation, by substitution, and by the number of functional groups.

Organic acids are usually carboxylic acids and generally have acidic properties. They are commonly weak acids and do not fully dissociate in media such as water, unlike strong mineral acids.

The simplest organic acids, such as formic and acetic acid, are commonly used in stimulation treatments for gas and oil wells where corrosion is a concern, since they are less reactive than hydrochloric acid and other strong acids.
The acidity of an organic compound is determined by its pKa value: the smaller the pKa, the stronger the acid. Organic acids are widely used in the food, chemical, and pharmaceutical industries, serving as acidulants, antimicrobial additives, flavor developers in beer and whiskey, and preservatives. Small amounts of organic acids are also present in plants as intermediate compounds in metabolism.
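The pKa/acidity relationship can be illustrated with the Henderson-Hasselbalch relation: the dissociated fraction of a weak acid at a given pH is 1 / (1 + 10^(pKa − pH)). The pKa values below are approximate literature figures for the first dissociation step, not values taken from this article:

```python
# Illustration of the pKa/acidity relationship via Henderson-Hasselbalch.
# The pKa values are approximate literature figures (first dissociation),
# not values from this article.

def fraction_dissociated(pka, ph):
    """Fraction of a weak acid present in dissociated form at the given pH."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for name, pka in [("citric acid", 3.13), ("lactic acid", 3.86), ("acetic acid", 4.76)]:
    print(f"{name} (pKa {pka}): {fraction_dissociated(pka, 4.0):.2f} dissociated at pH 4")
# citric acid (pKa 3.13): 0.88 dissociated at pH 4
# lactic acid (pKa 3.86): 0.58 dissociated at pH 4
# acetic acid (pKa 4.76): 0.15 dissociated at pH 4
```

As expected, the acid with the smallest pKa is the most dissociated at a given pH.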
Examples of organic acids found in food include malic acid, lactic acid, fumaric acid, pyroglutamic acid, oxalic acid, ascorbic acid, citric acid, and tartaric acid. Organic acids can be analyzed by titration, but titration determines only the total organic acid content, whereas fruits contain many different organic acids. A better method for organic acid analysis is chromatography.

Chromatography is an analytical method that uses two phases (a stationary phase and a mobile phase) to separate a mixture of chemical compounds. The technique can be carried out as thin-layer chromatography, liquid chromatography, or gas chromatography. Its advantages over titration are that less sample is needed and that the individual organic acids can be distinguished.

Many HPLC methods have been reported for organic acid analysis. HPLC techniques are distinguished as normal phase (using nonpolar solvents as the mobile phase), reversed phase (using polar solvents as the mobile phase), and ion exchange (using an ion concentration gradient in the mobile phase).
Organic acids are acids derived from living things; they can be obtained naturally from animals and plants. As chemistry developed, chemists learned to prepare various acids from mineral materials. Acids derived from minerals are called mineral acids, or inorganic acids; characteristically, they contain no carbon atoms.
In animal feed, organic acids (a group of fatty acids with pH below 7, i.e. acidic) serve two main functions: preserving the feed (feed preservation) and controlling the pH of the digestive tract (as acidifiers).
The acids found in fruits and vegetables are not harmful to the body. Many other acids, however, are very dangerous on contact because they can damage the skin. Therefore, never taste an acid unless it is known to be safe.
Tropical cyclones and subtropical cyclones are named by various warning centers to provide ease of communication between forecasters and the general public regarding forecasts, watches, and warnings. The names are intended to reduce confusion in the event of concurrent storms in the same basin. Generally once storms produce sustained wind speeds of more than 33 knots (61 km/h; 38 mph), names are assigned in order from predetermined lists depending on which basin they originate. However, standards vary from basin to basin: some tropical depressions are named in the Western Pacific, while tropical cyclones must have a significant amount of gale-force winds occurring around the centre before they are named in the Southern Hemisphere.
Before the formal start of naming, tropical cyclones were named after places, objects, or saints' feast days on which they occurred. The credit for the first usage of personal names for weather systems is generally given to the Queensland Government Meteorologist Clement Wragge, who named systems between 1887 and 1907. This system of naming weather systems subsequently fell into disuse for several years after Wragge retired, until it was revived in the latter part of World War II for the Western Pacific. Formal naming schemes and naming lists have subsequently been introduced and developed for the Eastern, Central, Western and Southern Pacific basins, as well as the Australian region, Atlantic Ocean and Indian Ocean.
Tropical cyclone naming institutions

| Basin | Institution | Area of responsibility |
| --- | --- | --- |
| North Atlantic | United States National Hurricane Center | Equator northward, African coast – 140°W |
| Central Pacific | United States Central Pacific Hurricane Center | Equator northward, 140°W – 180° |
| Western Pacific | Japan Meteorological Agency / PAGASA | Equator – 60°N, 180° – 100°E / 5°N – 21°N, 115°E – 135°E |
| North Indian Ocean | India Meteorological Department | Equator northward, 100°E – 45°E |
| South-West Indian Ocean | Mauritius Meteorological Services / Météo Madagascar | Equator – 40°S, 55°E – 90°E / Equator – 40°S, African coast – 55°E |
| Australian region | Indonesian Agency for Meteorology, Climatology and Geophysics / Papua New Guinea National Weather Service / Australian Bureau of Meteorology | Equator – 10°S, 90°E – 141°E / Equator – 10°S, 141°E – 160°E / 10°S – 36°S, 90°E – 160°E |
| Southern Pacific | Fiji Meteorological Service / Meteorological Service of New Zealand | Equator – 25°S, 160°E – 120°W / 25°S – 40°S, 160°E – 120°W |
| South Atlantic | Brazilian Navy Hydrographic Center (unofficial) | Equator – 35°S, Brazilian coast – 20°W |
At present, tropical cyclones are officially named by one of eleven warning centers and retain their names throughout their lifetimes to facilitate the effective communication of forecasts and storm-related hazards to the general public. This is especially important when multiple storms are occurring simultaneously in the same ocean basin. Names are generally assigned in order from predetermined lists once systems produce one-, three-, or ten-minute sustained wind speeds of more than 65 km/h (40 mph). However, standards vary from basin to basin: some systems in the Western Pacific are named when they develop into tropical depressions or enter PAGASA's area of responsibility, while within the Southern Hemisphere, systems must have a significant amount of gale-force winds occurring around the center before they are named.
Any member of the World Meteorological Organization's hurricane, typhoon and tropical cyclone committees can request that the name of a tropical cyclone be retired or withdrawn from the various tropical cyclone naming lists. A name is retired or withdrawn if a consensus or majority of members agree that the system has acquired a special notoriety, such as causing a large number of deaths and amounts of damage, impact, or for other special reasons. A replacement name is then submitted to the committee concerned and voted upon, but these names can be rejected and replaced with another name for various reasons: these reasons include the spelling and pronunciation of the name, the similarity to the name of a recent tropical cyclone or on another list of names, and the length of the name for modern communication channels such as social media. PAGASA also retires the names of significant tropical cyclones when they have caused at least ₱1 billion in damage or have caused at least 300 deaths.
Within the North Atlantic Ocean, tropical or subtropical cyclones are named by the National Hurricane Center (NHC/RSMC Miami) when they are judged to have intensified into a tropical storm with winds of at least 34 kn (39 mph; 63 km/h). There are six lists of names, rotated every six years, that use names beginning with the letters A–W (skipping Q and U) and alternate between male and female names. The names of significant tropical cyclones are retired from the lists, with a replacement name selected at the next World Meteorological Organization Hurricane Committee meeting. If all of the names on a list are used, storms are named after the letters of the Greek alphabet.
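The rotate-then-overflow scheme described above can be sketched in a few lines. The three-name lists below are illustrative stand-ins (the real Atlantic lists hold 21 names each), and the base-year alignment is an assumption made for the example:

```python
# Illustrative stand-ins for the six rotating Atlantic lists
# (real lists hold 21 names each, A-W skipping Q and U).
LISTS = [
    ["Ana", "Bill", "Claudette"],
    ["Alex", "Bonnie", "Colin"],
    ["Arlene", "Bret", "Cindy"],
    ["Alberto", "Beryl", "Chris"],
    ["Andrea", "Barry", "Chantal"],
    ["Arthur", "Bertha", "Cristobal"],
]
GREEK = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon", "Zeta"]

def atlantic_name(year, storm_index, base_year=2015):
    """Name of the storm_index-th (0-based) named storm of a season:
    the six lists rotate annually, and Greek letters take over once
    the season's list is exhausted."""
    names = LISTS[(year - base_year) % len(LISTS)]
    if storm_index < len(names):
        return names[storm_index]
    return GREEK[storm_index - len(names)]

print(atlantic_name(2021, 0))  # first storm of the season -> "Ana"
print(atlantic_name(2021, 3))  # list exhausted -> "Alpha"
```

Retirement is the one step this sketch omits: a retired name is replaced in place on its list, so the rotation itself never changes.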
The current naming scheme began with the 1979 season. It uses alternating women's and men's names, and also includes some Spanish and a few French names. Before then, only women's names were used.
Within the Eastern Pacific Ocean, there are two warning centers that assign names to tropical cyclones on behalf of the World Meteorological Organization when they are judged to have intensified into a tropical storm with winds of at least 34 kn (39 mph; 63 km/h). Tropical cyclones that intensify into tropical storms between the coast of Americas and 140°W are named by the National Hurricane Center (NHC/RSMC Miami), while tropical cyclones intensifying into tropical storms between 140°W and 180° are named by the Central Pacific Hurricane Center (CPHC/RSMC Honolulu). Significant tropical cyclones have their names retired from the lists and a replacement name selected at the next World Meteorological Organization Hurricane Committee.
The current naming scheme began with the 1978 season, one year before the Atlantic basin (and which anomalously used the list that will be used next in 2018, rather than the one for 2020). As with the Atlantic basin, it uses alternating women's and men's names, and also includes some Spanish and a few French names. Before then, only women's names were used. Because Eastern Pacific hurricanes mainly threaten western Mexico and Central America, the lists contain more Spanish names than the Atlantic lists.
When a tropical depression intensifies into a tropical storm to the north of the Equator between the coastline of the Americas and 140°W, it will be named by the NHC. There are six lists of names, rotated every six years, that use names beginning with the letters A–Z (skipping Q and U), with each name alternating between male and female. The names of significant tropical cyclones are retired from the lists, with a replacement name selected at the next World Meteorological Organization Hurricane Committee meeting. If all of the names on a list are used, storms are named using the letters of the Greek alphabet.
When a tropical depression intensifies into a tropical storm to the north of the Equator between 140°W and 180°, it is named by the CPHC. Four lists of Hawaiian names are maintained by the World Meteorological Organization's hurricane committee, rotating without regard to year, with the first name for a new year being the next name in sequence that was not used the previous year. Significant tropical cyclones have their names retired from the lists, with a replacement name selected at the next Hurricane Committee meeting.
Tropical cyclones that occur within the Northern Hemisphere between the anti-meridian and 100°E are officially named by the Japan Meteorological Agency when they become tropical storms. However, PAGASA also names tropical cyclones that occur or develop into tropical depressions within their self-defined area of responsibility between 5°N–25°N and 115°E-135°E. This often results in tropical cyclones in the region having two names.
Tropical cyclones within the Western Pacific are assigned international names by the JMA when they become a tropical storm with 10-minute sustained winds of at least 34 kn (39 mph; 63 km/h). The names are used sequentially without regard to year and are taken from five lists of names that were prepared by the ESCAP/WMO Typhoon Committee, after each of the 14 members submitted 10 names in 1998. The order of the names to be used was determined by placing the English name of the members in alphabetical order. Members of the committee are allowed to request the retirement or replacement of a system's name if it causes extensive destruction or for other reasons such as number of deaths. Unlike other basins, storms are also named after plants, animals, objects, and mythological beings.
[Table of names contributed by the 14 Typhoon Committee members omitted; the first entries include Damrey, Haikui, Kirogi, Kai-tak, Tembin, Bolaven, Sanba, Jelawat, Ewiniar, Maliksi, Gaemi, Prapiroon, Maria, and Son-Tinh.]
Since 1963, PAGASA has independently operated its own naming scheme for tropical cyclones that occur within its self-defined Philippine Area of Responsibility. The names are taken from four different lists of 25 names and are assigned when a system moves into or develops into a tropical depression within PAGASA's jurisdiction. The four lists of names are rotated every four years, with the names of significant tropical cyclones retired should they have caused at least ₱1 billion in damage and/or at least 300 deaths within the Philippines. Should the list of names for a given year be exhausted, names are taken from an auxiliary list, the first ten of which are published every year.
[Partial list of PAGASA names: Nando, Odette, Paolo, Quedan, Ramil, Salome, Tino, Urduja, Vinta, Wilma, Yasmin, Zoraida.]
Within the North Indian Ocean between 45°E – 100°E, tropical cyclones are named by the India Meteorological Department (IMD/RSMC New Delhi) when they are judged to have intensified into a cyclonic storm with 3-minute sustained wind speeds of at least 34 kn (39 mph; 63 km/h). There are eight lists of names which are used in sequence and are not rotated every few years; however, the names of significant tropical cyclones are retired.
Within the South-West Indian Ocean in the Southern Hemisphere between Africa and 90°E, a tropical or subtropical disturbance is named when it is judged to have intensified into a tropical storm with winds of at least 34 kn (39 mph; 63 km/h). This is defined as being when gales are either observed or estimated to be present near a significant portion of the system's center. Systems are named in conjunction with Météo-France Reunion by either Météo Madagascar or the Mauritius Meteorological Service. If a disturbance reaches the naming stage between Africa and 55°E, then Météo Madagascar names it; if it reaches the naming stage between 55°E and 90°E, then the Mauritius Meteorological Service names it. The names are taken from three pre-determined lists of names, which rotate on a triennial basis, with any names that have been used automatically removed. The names that are going to be used during a season are selected in advance by the World Meteorological Organization's RA I Tropical Cyclone Committee from names submitted by member countries.
Within the Australian region in the Southern Hemisphere between 90°E – 160°E, a tropical cyclone is named when observations or Dvorak intensity analysis indicate that a system has gale force or stronger winds near the center which are forecast to continue. The Indonesian Badan Meteorologi, Klimatologi, dan Geofisika names systems that develop between the Equator and 10°S and 90°E and 141°E, while Papua New Guinea's National Weather Service names systems that develop between the Equator and 10°S and 141°E and 160°E. Outside of these areas, the Australian Bureau of Meteorology names systems that develop into tropical cyclones. If a name is assigned to a tropical cyclone that causes loss of life or significant damage and disruption to the way of life of a community, then the name assigned to that storm is retired from the list of names for the region. A replacement name is then submitted to the next World Meteorological Organization's RA V Tropical Cyclone Committee meeting.
If a system intensifies into a tropical cyclone between the Equator-10°S and 90°E-141°E, it will be named by the Badan Meteorologi, Klimatologi, dan Geofisika (BMKG/TCWC Jakarta). Names are assigned in sequence from list A, while list B details names that will replace names on list A that are retired or removed for other reasons.
If a system intensifies into a tropical cyclone between the Equator – 10°S and 141°E – 160°E, then it will be named by Papua New Guinea National Weather Service (NWS, TCWC Port Moresby). Names are assigned in sequence from list A and are automatically retired after being used regardless of any damage caused. List B contains names that will replace names on list A that are retired or removed for other reasons.
When a system develops into a tropical cyclone below 10°S between 90°E and 160°E, then it will be named by the Australian Bureau of Meteorology (BoM) which operates three Tropical Cyclone Warning Centres in Perth, Darwin, and Brisbane. The names are assigned in alphabetical order and used in rotating order without regard to year.
Within the Southern Pacific basin in the Southern Hemisphere between 160°E – 120°W, a tropical cyclone is named when observations or Dvorak intensity analysis indicate that a system has gale force or stronger winds near the center which are forecast to continue. The Fiji Meteorological Service (FMS) names systems that are located between the Equator and 25°S, while the New Zealand MetService names systems (in conjunction with the FMS) that develop to the south of 25°S. If a tropical cyclone causes loss of life or significant damage and disruption to the way of life of a community, then the name assigned to that cyclone is retired from the list of names for the region. A replacement name is then submitted to the next World Meteorological Organization's RA V Tropical Cyclone Committee meeting. The name of a tropical cyclone is determined by using Lists A — D in order, without regard to year before restarting with List A. List E contains names that will replace names on A-D when needed.
When a significant tropical or subtropical cyclone exists in the South Atlantic Ocean, the Brazilian Navy Hydrographic Center's Serviço Meteorológico Marinho names the system using a predetermined list of names.
Common name: Common Shiner
Taxonomy: available through www.itis.gov
Identification: Gilbert (1964); Scott and Crossman (1973); Becker (1983); Page and Burr (1991); Jenkins and Burkhead (1994); Pflieger (1997). Another commonly used name is Notropis cornutus.
Size: 18 cm.
Native Range: Atlantic, Great Lakes, Hudson Bay, and Mississippi River basins, from Nova Scotia to southeastern Saskatchewan, and south to the James River drainage, Virginia, northern Ohio, central Missouri, and Wyoming (Page and Burr 1991).
Interactive maps: Point Distribution Maps
Native range data for this species provided in part by NatureServe
Table 1. States with nonindigenous occurrences, the earliest and latest observations in each state, and the tally and names of HUCs with observations†. Names and dates are hyperlinked to their relevant specimen records. The list of references for all nonindigenous occurrences of Luxilus cornutus are found here.
Table last updated 5/25/2018
† Populations may not be currently present.
Means of Introduction: Probable stocking for forage in Utah. Unknown in West Virginia; probable bait bucket release.
Status: Extirpated in Utah (Sigler and Sigler 1996). Established in West Virginia.
Impact of Introduction: Unknown.
References:
Tilmant, J.T. 1999. Management of nonindigenous aquatic fish in the U.S. National Park System. National Park Service. 50 pp.
Revision Date: 8/5/2004
Peer Review Date: 4/1/2016
Fuller, P., 2018, Luxilus cornutus (Mitchill, 1817): U.S. Geological Survey, Nonindigenous Aquatic Species Database, Gainesville, FL, https://nas.er.usgs.gov/queries/FactSheet.aspx?SpeciesID=563, Revision Date: 8/5/2004, Peer Review Date: 4/1/2016, Access Date: 7/17/2018
This information is preliminary or provisional and is subject to revision. It is being provided to meet the need for timely best science. The information has not received final approval by the U.S. Geological Survey (USGS) and is provided on the condition that neither the USGS nor the U.S. Government shall be held liable for any damages resulting from the authorized or unauthorized use of the information. | <urn:uuid:6373c4c1-9732-4bc5-9155-a24e4560a7a8> | 2.953125 | 536 | Structured Data | Science & Tech. | 51.03007 | 95,499,974 |
Results from an expedition to the sea floor near the Hawaiian Islands show evidence that the deep Earth is more unsettled than geologists have long believed. A new University of Rochester study suggests that the long chain of islands and seamounts, which is deemed a "textbook" example of tectonic plate motion, was formed in part by a moving plume of magma, upsetting the prevailing theory that plumes have been unmoving fixtures in Earth's history. The research will be published in the August 22 issue of Science.
"Mobile magma plumes force us to reassess some of our most basic assumptions about the way the mantle operates," says John Tarduno, professor of earth and environmental sciences at the University. "Weve relied on them for a long time as unwavering markers, but now well have to redefine our understanding of global geography."
Traditionally, the islands were thought to have formed as the massive Pacific plate, the largest single section of Earth's crust, moved sluggishly between the Americas and Asia. A plume, or "hot spot," brought super-heated magma from deep in the Earth to close to the crust, resulting in concentrated areas of volcanic activity. As the Pacific plate moved across this hot spot, the plume created a long series of islands and subsurface mountains. Though this chain of seamounts seemed like a perfect record of Pacific plate movement, a strange bend in the chain, dated at about 47 million years ago, troubled some geologists. To most, however, this bend was taken as the classic example of how plates can change their motion. In fact, a figure of the bend can be found in nearly all introductory textbooks on geology and geophysics.
Jonathan Sherwood | University of Rochester
We report an experimental demonstration of controlling plasma flow direction with a magnetic nozzle consisting of multiple coils. Four coils are controlled separately to form an asymmetric magnetic field that changes the direction of laser-produced plasma flow. The ablation plasma deforms the topology of the external magnetic field, forming a magnetic cavity inside and compressing the field outside. The compressed magnetic field pushes the plasma via the Lorentz force on a diamagnetic current, j × B, in a direction that depends on the magnetic field configuration. Plasma and magnetic field structure formations depending on the initial magnetic field were simultaneously measured with a self-emission gated optical imager and a B-dot probe, respectively, and the probe measurement clearly shows the difference in plasma expansion direction between symmetric and asymmetric initial magnetic fields. The combination of two-dimensional radiation hydrodynamic and three-dimensional hybrid simulations shows the control of the deflection angle with different numbers of coils, forming a plasma structure similar to that observed in the experiment.
Plasma control with an external magnetic field can be practically applied to various areas such as the magnetic control of arc plasma1; magnetic confinement of fusion plasma2; and the creation of a magnetic nozzle for electric propulsion in a magnetoplasmadynamic (MPD) thruster3, helicon plasma thruster4, or future propulsion systems such as magnetoplasma sails5 and laser fusion propulsion6, 7. Thrust vector control (TVC) is a useful technique in thrusters because attitude and/or flight path control are possibly achieved by thrusters themselves. In chemical propulsion, TVC has been successfully achieved by deflecting an exhaust jet by mechanical deflection of a nozzle or thrust chamber, insertion of heat-resistant movable bodies into the exhaust jet, and injection of fuel fluid into the side of the nozzle, or by using separate thrusters8. TVC is possibly achieved by deflecting a magnetic nozzle in the case of plasma thrusters. In previous research, especially in the research on laser fusion propulsion, various techniques for thrust deflection have been proposed and studied using numerical simulations: the positioning of the fuel plasma in a magnetic nozzle9, the deflection of a coil9, magnetic nozzle with multiple coils10, and off-axis coils10. A system using multiple coils does not need any mechanism of actuating coils, and the magnetic field structure is controlled only by changing the number of coils to drive. Despite this advantage, this technique has never been demonstrated.
In this study, we report the first experimental demonstration of the technique of controlling unsteady plasma flow from laser-ablation with a multiple-coil magnetic-nozzle system. The multiple-coil system forms symmetric or asymmetric magnetic nozzle by driving different numbers of coils. Plasmas produced by laser irradiation expand forming a diamagnetic cavity in the plasma and compressing the field outside, which changes the field topology, and are pushed out via the Lorentz force between the magnetic field and diamagnetic current.
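A rough scale for the diamagnetic cavity can be estimated by equating the plasma energy to the magnetic energy it excludes from the bubble. This back-of-the-envelope sketch is ours, not the paper's; the 5 J kinetic-energy figure is an assumed fraction of the 7.5 J laser pulse, and the 0.3 T field is the value quoted for the target position:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [T*m/A]

def bubble_radius(E_kin, B0):
    """Radius at which the plasma energy equals the magnetic energy it
    excludes: E = (B0**2 / (2*MU0)) * (4/3) * pi * R**3, solved for R."""
    return (3.0 * MU0 * E_kin / (2.0 * math.pi * B0 ** 2)) ** (1.0 / 3.0)

R = bubble_radius(5.0, 0.3)  # assumed 5 J of plasma kinetic energy in 0.3 T
print(f"R_b ~ {R * 100:.1f} cm")
```

The result is a centimetre-scale cavity, comparable to the coil system itself, which is consistent with the strong field deformation the experiment relies on.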
A numerical simulation is performed to analyse the control of plasma flow and the structure formation in this system. The simulation suggests that the structure formed outside the coil is identical to that observed in the experiment, despite the time-evolution being slower than in the experiment. Additionally, the simulation shows the possibility of controlling the plasma flow by changing the number of coils as demonstrated in the experiment.
The experiment was performed with an extreme-ultraviolet database laser (DB laser) at Institute of Laser Engineering, Osaka University. A DB laser is an amplified Nd:YAG laser with an energy of 7.5 ± 0.2 J, wavelength of 1064 nm, and pulse duration of 9.4 ± 0.1 ns. A 500 μm diameter spherical CH target was irradiated by the DB laser as shown in Fig. 1(a), through a multiple-coil system. The target was supported by a thin carbon fiber (thickness of approximately 10 μm) attached to a glass stalk with a diameter of 0.9 mm.
This system comprises four eight-turn square coils (30 mm and 38 mm inner and outer side lengths, respectively) located at x or y = ±21.5 mm. Each coil was individually driven by a pulse-powered circuit consisting of four capacitors (4 × 3 mF), each charged with a voltage of 500 V and driven by laser–triggered gap switches11, resulting in a maximum current of approximately 3 kA and a field strength of 0.3 T at the initial target position (32 mm from the center of the coil) by operating all four coils. The duration of the field (approximately 500 μs) is much longer than the plasma expansion time (<10 μs), and the field is quasi-static in the time-scale of plasma expansion. A B-dot probe (single-turn, 2 mm in diameter) was placed 20 mm below the target to measure the time-evolution of the external magnetic field. The inductance of the probe was 10 nH and the impedance of this measurement was 50 Ω, resulting in a time constant of ~0.2 ns, which is much smaller than the time scale of the plasma expansion. The laser-produced plasma initially expands in the −z-direction and interacts with the magnetic field to change its direction depending on the field structure. The plasma structure was measured simultaneously by a gated optical imager using an intensified charge-coupled device (ICCD) camera with a minimum exposure time of 5 ns, observing a thermal bremsstrahlung emission at the wavelength of 450 nm (width of 10 nm in FWHM).
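The quoted probe response follows directly from the L/R time constant of the probe-plus-line circuit; a quick check of the arithmetic, using the inductance and impedance given above:

```python
L_probe = 10e-9  # probe inductance [H]
R_line = 50.0    # measurement impedance [ohm]
tau = L_probe / R_line
print(f"tau = {tau * 1e9:.2f} ns")  # 0.20 ns, far below the plasma expansion timescale
```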
The upper and lower panels of Fig. 1 show the magnetic field lines on the planes x = 0 and z = 0, respectively, with the contours of the field strength, and left and right figures in each panel correspond to the magnetic field structures with all four coils and the lower three coils, respectively. The magnetic field inside the four coil system is cancelled by the anti-direction magnetic field from the counter coils, and it forms a cusp magnetic field at the center of four coils. With the three-coil operation, the field lines diverge at the top while they concentrate at the bottom as shown in Fig. 1(c).
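The cusp cancellation described above can be reproduced with a direct Biot-Savart superposition. The sketch below is ours, not from the paper: each eight-turn coil is idealized as a single square loop of 34 mm mean side (between the 30 mm and 38 mm dimensions quoted earlier) carrying 8 × 3 kA ampere-turns, with its magnetic moment pointing toward the system center:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def square_loop_segments(center, u, v, half_side, n=20):
    """Discretize a square loop (spanned by orthonormal u, v) into
    straight segments, traversed counterclockwise about u x v."""
    c = np.asarray(center, float)
    corners = [c + half_side * (su * u + sv * v)
               for su, sv in ((1, 1), (-1, 1), (-1, -1), (1, -1), (1, 1))]
    segs = []
    for a, b in zip(corners[:-1], corners[1:]):
        pts = np.linspace(a, b, n + 1)
        segs.extend(zip(pts[:-1], pts[1:]))
    return segs

def field(segments, current, point):
    """Biot-Savart field at `point`, midpoint rule per segment."""
    B = np.zeros(3)
    p = np.asarray(point, float)
    for a, b in segments:
        dl = b - a
        r = p - 0.5 * (a + b)
        B += MU0 * current * np.cross(dl, r) / (4 * np.pi * np.linalg.norm(r) ** 3)
    return B

# Four coils at x, y = +/-21.5 mm, each moment pointing inward, so
# counter coils cancel at the centre (cusp field).
half, amp_turns = 0.017, 8 * 3e3
ez = np.array([0.0, 0.0, 1.0])
coils = []
for pos, normal in (((0.0215, 0, 0), (-1, 0, 0)), ((-0.0215, 0, 0), (1, 0, 0)),
                    ((0, 0.0215, 0), (0, -1, 0)), ((0, -0.0215, 0), (0, 1, 0))):
    n = np.array(normal, float)
    u = np.cross(n, ez)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)  # u x v = n, so the loop's moment lies along n
    coils.append(square_loop_segments(pos, u, v, half))

B_centre = sum(field(s, amp_turns, (0, 0, 0)) for s in coils)
B_target = sum(field(s, amp_turns, (0, 0, -0.024)) for s in coils)
print(np.linalg.norm(B_centre), np.linalg.norm(B_target))  # cusp null vs. finite off-axis field
```

Driving only three of the four loops in this model breaks the symmetry in exactly the way Fig. 1(c) shows: the null moves off-centre and the field lines concentrate toward the remaining coil pair.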
Figure 2 shows the time-evolution of the z-component of the magnetic field measured with the B-dot probe below the initial target position, as illustrated in Fig. 1(a), in different initial magnetic fields: with the operations of four coils (solid line), the lower three coils (dashed line), and the upper three coils (dotted line). The field increase and decrease in Fig. 2 correspond to the plasma expansion and cavity formation, respectively. As the plasma expands, the field is amplified at t ~ 0.1–0.15 μs under all three operating conditions. The field strength suddenly decreases after its peak and becomes even smaller than the initial values with the operation of all four coils and the upper three coils. The field, however, does not decrease in the case of the lower three coils, suggesting that the plasma expands upwards and does not reach the probe.
Figure 3 shows the plasma emission at different times. Panels (a)–(c) and (d)–(f) show, respectively, the emission with four-coil and lower-three-coil operations, at t = 0.1 μs [panels (a) and (d)], 0.2 μs [(b) and (e)], and 0.5 μs [(c) and (f)]. The surface of the supporting frame of the coil is depicted with white lines. The DB laser irradiates the CH sphere at t = 0, producing a plasma that expands leftwards, as shown in Fig. 3(a) and (d). A glass stalk and the surface of the frame are also ablated by radiation from the plasma, and bright emissions are observed at z ~ 40 mm and 20 mm, respectively. The plasma is decelerated by the magnetic field, and high-density plasma exists at z = 30–40 mm, as shown in Fig. 3(b) and (e) at t = 0.2 μs and in Fig. 3(c) and (f) at t = 0.5 μs, forming a cone-like structure. Additionally, a part of the expanding plasma enters the coil system, as shown at t > 0.2 μs. Later in time, as shown in Fig. 3(g) (t = 2 μs) and 3(h) (t = 5 μs), plasmas inside the coil system continue emitting for longer than 5 μs.
The plasma expansion with the multiple-coil system was also analysed with numerical simulations. The laser absorption and plasma generation were simulated with a two-dimensional radiation hydrodynamic code Star2D12 for 33 ns (until 8 ns after the laser peak) without considering an external magnetic field. This code uses a one-fluid and two-temperature model, and ions are treated as the average of carbon and hydrogen. The plasma expansion in the external magnetic field was simulated with a three-dimensional hybrid code7, in which ions and electrons are treated as individual super-particles and electromagnetic fluid, respectively, in the magnetic field with four [Fig. 1(b) and (d)], three [Fig. 1(c) and (e)], and two (not shown) adjacent coils. The total number of super-particles was 106 and they were distributed according to the ion density calculated by the radiation hydrodynamic simulation.
Figure 4(a–c) show the ion density distributions estimated from the super-particle distributions with the four, lower three, and lower two coils, respectively, at t = 0.5 μs. The coil positions are illustrated as dashed lines in the figures; in the case of two coils, the coils are positioned rotated 45 degrees about the z-axis to show the maximum deflection, as depicted in Fig. 4(c). The laser irradiates the target from the left side and the laser-produced plasma initially expands leftwards. The plasma interacts with the magnetic field and is directed rightwards, as shown for z > 20 mm in the three figures, while a part of the plasma enters the coil system and flows out through the coils, as shown in the top and/or bottom regions of Fig. 4(a–c). In the cases of two and three coils, high-density plasmas are observed near the cusp region at the bottom coil where the magnetic field becomes stronger [Fig. 4(b) and (c)], as observed in the experiment [Fig. 3(c) and (f)], and low-density plasma expands upwards.
The emission structure observed in the experiment is estimated from the simulation to understand the time-evolution and structure formation of the expanding plasma. In general, the thermal bremsstrahlung emission depends strongly on the ion number density and charge state, and only weakly on the temperature13, 14, as shown in equation (1). Here, artificial camera images are generated as the line-of-sight integration of the emission energy in arbitrary units, I ∝ ∫ n_i^2 Z^3 dl, where I, n_i, and Z are the brightness, ion number density, and charge state, respectively; the images are shown in Fig. 5(a–f) as the time-evolution from t = 0.2 μs to 0.7 μs. Figure 5(g) and (h) show the self-emission data near the target position [see Fig. 3(a) and (c)]. The laser-produced plasma, which is expanding leftwards, stagnates at z ~ 25 mm, as shown in Fig. 5(b) (t = 0.4 μs) and (c) (t = 0.5 μs), resulting from the interaction with the magnetic field. Later in time, the plasma is pushed by the j × B force and forms a cone-like structure, as shown in Fig. 5(d) (t = 0.6 μs), (e) (t = 0.7 μs), and (f) (t = 0.8 μs).
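The line-of-sight integration behind such artificial camera images can be sketched as follows. The emissivity n_i^2 Z^3 used here is a common approximation for thermal bremsstrahlung (taking n_e = Z n_i), and the tiny density grid and charge state are invented for illustration; this is not necessarily the paper's exact expression.

```python
# Build a 2-D "artificial camera" image by integrating n_i**2 * Z**3
# along one axis of a small 3-D ion-density grid (pure Python sketch).

def artificial_image(n_i, Z=2.0):
    # n_i[x][y][z]: ion density; integrate along x (the line of sight)
    ny, nz = len(n_i[0]), len(n_i[0][0])
    img = [[0.0] * nz for _ in range(ny)]
    for plane in n_i:                       # step along the line of sight
        for j in range(ny):
            for k in range(nz):
                img[j][k] += plane[j][k] ** 2 * Z ** 3
    return img
```

For example, two cells of density 1 and 2 along the line of sight with Z = 1 sum to a brightness of 1 + 4 = 5.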
The magnetic skin depth of the laser-produced plasma, estimated from a similar experiment with the same laser parameters15, is of the order of micrometres and is much smaller than the typical plasma size of a few tens of millimetres. Moreover, the diffusion time of the magnetic field15 is τ ~ l^2/η ~ 50 μs for a plasma size of 10 mm, meaning that the plasma in this experiment is highly conductive; therefore, the plasma expands, producing a magnetic cavity inside itself and amplifying the field outside.
As the magnetic field measurement shows, under all conditions the cavity depth is small at t ~ 0.2 μs, as shown in Fig. 2; that is, the magnetic field is not fully expelled. The magnetic cavity size in an expanding plasma can be estimated by equating the total excluded magnetic energy to the plasma kinetic energy inside16,17,18, giving R_b = (3 μ_0 E_k / 2π B_0^2)^(1/3), where μ_0 is the magnetic permeability of vacuum, E_k is the kinetic energy of the laser-produced plasma, and B_0 is the external magnetic field. In the present experiment, the maximum R_b, obtained by assuming that 100% of the laser energy is converted to plasma kinetic energy19, is R_b ~ 48 mm, which is twice as large as the distance between the initial target position and the probe. In previous research18, 20, 21, the plasma expansion radius R_p was found to be comparable to the distance of the probe. Moreover, as discussed in ref. 21, the cavity depth increases with decreasing external magnetic field. This suggests that the plasma kinetic energy is small compared with the magnetic field energy.
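The cavity-radius estimate can be checked numerically by equating the excluded magnetic energy in a sphere to the plasma kinetic energy, (B_0^2/2μ_0)(4π/3)R_b^3 = E_k. The kinetic-energy value used below (~17 J) is back-inferred from the quoted R_b ~ 48 mm at B_0 = 0.3 T and is an assumption, since the laser energy itself is not stated here.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def bubble_radius(E_k, B0):
    """Cavity radius from (B0**2 / (2*MU0)) * (4/3)*pi*R**3 = E_k."""
    return (3.0 * MU0 * E_k / (2.0 * math.pi * B0**2)) ** (1.0 / 3.0)

# An assumed ~16.6 J of plasma kinetic energy in a 0.3 T field
R_b = bubble_radius(16.6, 0.3)
print(R_b * 1e3, "mm")  # ~48 mm
```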
The emission intensity at the cusp region in the bottom coil (z ~ 0 and y ~ −15 mm) is stronger than that at the center in both cases, as indicated at t = 0.2 μs [Fig. 3(b) and (e)] and t = 0.5 μs [Fig. 3(c) and (f)]. Additionally, the cusp region with three coils shows stronger emission than that with four coils. In the case of three coils, the plasma inside is guided downwards along the magnetic field lines [see Fig. 1(c)] and the density at the bottom is higher than in the case of four coils. This difference in plasma density is also observed in the numerical simulations, as shown in Fig. 4(a–c). As time passes, although the density decreases, the plasma propagates rightwards for more than 5 μs for z > 25 mm as a result of the Lorentz force. Comparing the plasma structures in the simulation and experiment early in time [Fig. 5(a) and (g)] and later in time [Fig. 5(e) and (h)], the simulation replicates the plasma structure well, including the stagnation position of z ~ 25 mm, the plasma expansion scale, and the cone-like shape later in time, even though the time-evolution in the simulation is approximately 0.1–0.2 μs slower than in the experiment. This difference in time may be caused by assumptions in our simulation: the average ion mass (6.5 m_p from carbon and hydrogen, where m_p is the proton mass), the neglect of ionization and recombination, and the cold-plasma assumption (T_e = T_i = 0) in the hybrid simulation; and/or by physics not included in our simulation codes: the external magnetic field in the radiation hydrodynamic simulation, ablation of the supporting frame, and interaction between plasma flows.
Deflection angles are estimated from the simulations by summing over the super-particles in the flow outside of the coil system, and the deflection relative to that in four-coil operation is shown in Fig. 4(d) for two cases: the operations of three and two coils. Here, maximum angles of 56.7 degrees and 20 degrees are obtained for the operations of two adjacent coils and three coils, respectively. However, as discussed in previous numerical simulations9, 10, a large deflection may result in a low momentum efficiency because the plasma diverges in weaker magnetic fields and the perpendicular component of the velocity distribution broadens, with plasma colliding with the coils, as Fig. 4(b) and (c) suggest. Here, the momentum efficiency η is defined from the velocity component along the deflection angle θ, where θ = 20° and 56.7° with three and two coils, respectively. Figure 4(e) shows the time-evolution of the momentum efficiency in three cases: four-, three-, and two-coil operations. Initially, the plasma expands in the −z-direction, meaning that the efficiency η is negative. As the plasma is pushed by the magnetic field, η increases and becomes positive, as shown for t > 0.3 μs. With a large deflection of 56.7 degrees (two coils), the efficiency (~0.14) is much smaller than in the other two cases, and the configuration of multiple coils should be optimized to increase the efficiency and avoid damage to the coils from the collision. As the present simulation suggests, the deflection is difficult to measure with imaging diagnostics, as shown in Fig. 3. Further diagnostics, such as measurements of the ion density distribution and plasma flow velocity and the direct measurement of the repulsive force on a coil along two axes, will be required.
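A plausible implementation of a momentum efficiency of this kind is sketched below: the summed velocity component along the deflection direction divided by the summed speeds. The precise definition and the geometry (vectors taken in the x–z plane, θ measured from +z) are assumptions for illustration.

```python
import math

def momentum_efficiency(velocities, theta_deg):
    """eta = sum of velocity components along the deflection direction
    over the summed speeds. velocities: list of (vx, vz) pairs."""
    th = math.radians(theta_deg)
    ux, uz = math.sin(th), math.cos(th)  # assumed unit vector, theta from +z
    num = sum(vx * ux + vz * uz for vx, vz in velocities)
    den = sum(math.hypot(vx, vz) for vx, vz in velocities)
    return num / den

# Flow purely in -z (initial expansion) gives eta = -1; flow aligned
# with the deflection direction gives eta -> +1.
print(momentum_efficiency([(0.0, -1.0)], 0.0))
```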
In summary, we demonstrated a multiple-coil magnetic nozzle system to control an unsteady laser-produced plasma flow. The plasma flow was deflected in a given direction depending on the initial magnetic field structure, which could be altered by changing the number of coils being operated. The magnetic cavity and field compression were observed with a B-dot probe, and the field structure changed depending on the flow direction. A combined numerical simulation using two-dimensional radiation hydrodynamic and three-dimensional hybrid codes is consistent with the experiment in terms of the density structure inside and outside the coil system. The structure outside the system was evaluated using artificial camera images from the simulation and compared with that observed in the experiment. Though the time-evolution in the simulation is ~0.1–0.2 μs slower than in the experiment, the plasma structures from the experiment and the simulation are essentially identical. The simulation suggests that maximum deflection angles of 56.7 degrees and 20 degrees are obtained with the operations of two coils and three coils, respectively, in the same setup as in the experiment.
The DB laser is an amplified single-pulse Nd:YAG laser, and the experimental data with different magnetic configurations were obtained in different laser shots. The shot-to-shot reproducibilities of the energy and pulse width were within approximately 3% and 1%, respectively, and reproducible plasma expansion and structure formation were obtained, as shown in Fig. 3.
The plasma structure was measured with a gated optical imager at a wavelength of 450 nm with a width of 10 nm in FWHM, in which no strong emission lines exist for carbon and hydrogen atoms. The plasma density in the present experiment is normally optically thin, and the emission is evaluated as thermal bremsstrahlung emission.
where e is the elementary charge, ε_0 is the permittivity of free space, c is the speed of light, m is the electron mass, λ is the wavelength, h is the Planck constant, and ḡ is a velocity-averaged Gaunt factor22. The emission intensity, therefore, strongly depends on the electron density and charge state, and the experimental and simulation results shown in Fig. 5 can be directly compared.
To simulate the laser-produced plasma, we conducted the two-dimensional radiation hydrodynamic simulation12 without an external magnetic field in an early stage of the plasma expansion. The laser pulse is in Gaussian shape in both time and space, and is propagated with the ray-tracing technique. The laser peak is set at t = 25 ns and the simulation ends at t = 33 ns.
The density distribution was converted to ion particle and electron fluid density and given as input to the three-dimensional hybrid simulation7, in which the equations of motion for electrons and ions, the equation of internal energy of electron fluids, and Maxwell equations are self-consistently calculated in the magnetic field as shown in Fig. 1(b–e) with four–, three–, and two–coil operations. In this calculation, the cold plasma condition (T e = T i = 0) is assumed for simplicity.
The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
We would like to thank Mr. E. Sato for his exceptional support during the experiment. This research was partially supported by JSPS KAKENHI grant number JP15K18283 and JP17K14876, and by the joint research project of Institute of Laser Engineering, Osaka University. | <urn:uuid:f5b3491a-fee3-47c3-ba7a-149ebb5ae3a2> | 3.03125 | 4,389 | Academic Writing | Science & Tech. | 51.375369 | 95,499,976 |
Model View Controller (MVC) has been around for a while, and it's a good design pattern because it separates the data, user interface and business logic. MVC enables a development team to collaborate, and each concern can be developed discretely with a clear way of content negotiation. Content negotiation could be handled via an API or objects. In our engineering sprints, we learned that to make the software development process easier, we could adapt MVC into something we refer to as "Base MVC". Basically, it means pushing CRUD functions to a base object and having light children objects. Below are three hurdles that we came across as we iterated to come up with the Base MVC.
#Hurdle 1: Repetition
In the course of our software development, we noted there was repetition of code performing the same actions, especially the CRUD actions, i.e. Create, Read, Update, Delete. I'll use two objects for explanation: Student and Class.
In MVC, for Student you have:
Also for Class you have:
If we wanted to perform CRUD actions, we'd have the following functions in both StudentController and ClassController:
The Rule of Three postulates that code that is copied more than once should be replaced by a procedure to make it easier to maintain. We noted that the above functions are declaratively the same and technically perform the same action. The postCreate function in both controllers does the same thing: save data in the respective table. Why repeat?
Using object inheritance, we created a BaseController that both the StudentController and ClassController inherit from. In the BaseController, we added the five functions. So we have:
This reduces repetition, which makes maintenance simpler and improvements easy like Sunday morning.
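As a concrete sketch of this inheritance (the post's own snippets are PHP-style; this illustration uses Python, and all class and method names are hypothetical stand-ins):

```python
# Base controller owning the five CRUD functions; children only
# declare which model they operate on.

class BaseController:
    model = None  # children set this to their model name

    def get_create(self):
        return f"render create form for {self.model}"

    def post_create(self, data):
        return f"save {data} into {self.model} table"

    def get_read(self, obj_id):
        return f"load {self.model} #{obj_id}"

    def post_update(self, obj_id, data):
        return f"update {self.model} #{obj_id} with {data}"

    def post_delete(self, obj_id):
        return f"delete {self.model} #{obj_id}"

class StudentController(BaseController):
    model = "Student"

class ClassController(BaseController):
    model = "Class"
```

Both controllers now share one implementation: StudentController().post_create({...}) and ClassController().post_create({...}) run the same base code against different tables.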
#Hurdle 2: Uniqueness
Soon we noted that objects are unique and we needed to perform object-specific actions. For example, when creating a student, you might want to calculate their age before saving the data, and also send an email to the parent after creating the student. To solve this we use "hooks" that are executed before or after the CRUD action.
These functions are public inner functions that are executed within the postCreate function but also the individual objects can override. The ability to override is very powerful since it enables all children controllers to enjoy the core interfaces of the BaseController but at the same time can inject object specific business logic.
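A minimal sketch of how such hooks can wrap postCreate, in Python for illustration (the hook names, the stand-in save step, and the derived-age rule are all assumptions, not the post's actual code):

```python
# Base controller runs the hooks; children override only what they need.

class BaseController:
    def before_create(self, data):
        return data          # default: pass data through unchanged

    def after_create(self, record):
        pass                 # default: do nothing

    def post_create(self, data):
        data = self.before_create(data)   # pre-save hook
        record = dict(data, id=1)         # stand-in for the DB insert
        self.after_create(record)         # post-save hook
        return record

class StudentController(BaseController):
    def before_create(self, data):
        data = dict(data)
        data["age"] = 2024 - data["birth_year"]  # derive age before saving
        return data

    def after_create(self, record):
        # e.g. queue an email to the parent here (omitted)
        record["parent_notified"] = True
```

The base flow stays identical for every child, while each child injects its own business logic at well-defined points.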
Another aspect of uniqueness is the different table names and table fields. Different fields require not only different validation rules but also different fields to be presented in the browser. To solve this, we defined a structured way to set the table names, fields, validation rules and columns to present in the Model. We found multi-dimensional arrays easy to use, as shown below:
public $viewFields = array(
    'name'   => array(1, 'text',   'like', 1),
    'age'    => array(1, 'text',   'like', 0),
    'gender' => array(1, 'select', 'like', 0),
);
public $createRules = array(
    'name'   => 'required',
    'age'    => 'required',
    'gender' => 'required',
);
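One way such declarative rules can drive validation, sketched in Python for illustration (the rule vocabulary is reduced to just 'required' here; the original arrays are PHP-style):

```python
# Declarative rules, interpreted by one generic validator in the base.

CREATE_RULES = {"name": "required", "age": "required", "gender": "required"}

def validate(data, rules):
    """Return a dict of field -> error for every rule that fails."""
    errors = {}
    for field, rule in rules.items():
        if rule == "required" and not data.get(field):
            errors[field] = "required"
    return errors
```

Each model only declares its rules; the base code interprets them, so no child controller needs its own validation loop.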
#Hurdle 3: Which object to call
Now that we have the StudentController and ClassController inheriting from the BaseController, we needed a clear way to call each of the five functions. We settled on RESTful endpoints that map to each of the functions. For example, for the StudentController we have the following routes:
Having a consistent way of naming your endpoints makes it easier for debugging and also enabled us to have the same object name in different modules within the same application.
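A sketch of how consistently named endpoints can dispatch to the five base functions (Python for illustration; the exact paths and naming convention are assumptions, not the post's actual routes):

```python
# Method + path -> base-controller function name.

ROUTES = {
    ("GET",  "/student/create"): "get_create",
    ("POST", "/student/create"): "post_create",
    ("GET",  "/student/read"):   "get_read",
    ("POST", "/student/update"): "post_update",
    ("POST", "/student/delete"): "post_delete",
}

def dispatch(method, path):
    """Resolve a request to the function it should invoke."""
    return ROUTES.get((method, path), "404")
```

Because every object follows the same naming scheme, the same table can be generated for /class/..., /teacher/..., and so on without new routing logic.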
The Base MVC pattern has enabled us to offload most of the heavy lifting of common tasks and actions to a base object, thus reducing development time and maintenance headaches, and making it easier to add functionality that propagates to all the children objects.
That said, the main risk of this pattern is that the base code becomes a single point of failure if something goes wrong. We mitigate this by ensuring the BaseController is thoroughly tested before moving to production.
In the next part, I'll share how we simplified the generation of the forms used for creating and updating objects.
POZNAN, Poland (Reuters) - Soot is darkening ice in the Arctic and speeding a melt that could make the ocean around the North Pole ice-free in summer well before 2050, experts said on Tuesday.
The experts said the fight against warming in the Arctic should be re-directed to focus more on cutting the industrial pollution from soot, ozone and methane in Europe, North America and Russia to try to prevent the ice disappearing.
Soot, or black carbon, darkens the ice and makes it soak up more heat, accelerating the melt compared to reflective snow and ice. Methane comes from sources including oil and gas production and agriculture, while ozone is formed from industrial pollutants.
“Reductions in these pollutants would have a greater impact” in the next two decades than curbing emissions of the main greenhouse gas — carbon dioxide — according to scientists on the sidelines of 187-nation U.N. climate talks in Poland.
The Arctic is warming at twice the rate of the rest of the world and ice shrank to a record low in 2007, leading to worries that it could pass a point of no return.
“The Arctic sea ice may already have passed a ‘tipping point’,” said Pam Pearson, an Arctic pollution expert at the Climate Policy Center who presented the findings. “An ice-free summer Arctic is now possible well before 2050”.
“Some scientists are arguing that it (the Arctic Ocean) could be (ice free) in summer within the next 10 to 20 years,” said Bob Watson, a former head of the U.N. Climate Panel who chaired a presentation of the research in Poznan.
The three pollutants — soot, ozone and methane — linger in the atmosphere far less time than carbon dioxide, meaning cuts in emissions would have a quicker impact in cleaning the air.
The U.N. panel projected last year that it could be clear of ice by the end of the century. A thaw would threaten indigenous peoples and wildlife such as polar bears and seals.
“The question is: is all of the rapid melt of the Arctic ice in summer all due to human induced climate change or is part of it some natural cycle? We clearly have to understand it,” Watson, now chief scientific advisor to the British Environment Ministry, told Reuters.
“This is not just a climate issue for the Arctic but for the globe as a whole,” said Hanne Bjurstroem, the head of Norway’s delegation, at the Dec. 1-12 climate talks on a new climate treaty.
A melt of the Arctic ice would warm the top of the globe and lead to warming further south. An ice-free Arctic would also make the region more accessible to oil and gas exploration and shipping. | <urn:uuid:bdd19752-4404-46a4-bc84-3caca8e21d23> | 3.328125 | 577 | Truncated | Science & Tech. | 56.136907 | 95,499,991 |
NAU Scientist: Dry Climates Diminish Microbial Diversity
Deserts like the American Southwest are expected to get drier as the climate warms. That’s bad news for soil microbes, according to a global study co-authored by researchers at Northern Arizona University.
This study is the first to look at soil microbes in drylands all over the world. The researchers collected soil samples from 80 dryland ecosystems, on every continent except Antarctica. It found bacteria and fungi were less numerous and less diverse in drier climates.
“Not only are plants and animals going to respond to climate change, but it seems that also microbes that live in the soil would too,” said NAU soil ecologist Matthew Bowker, one of the study’s authors. “You would expect it, but here it is, loud and clear.”
Bowker said drylands cover 40 percent of the Earth’s landmass, and climate models predict they’ll expand. If microbial diversity diminishes in these regions, soil will become less fertile, and the effects will ripple up the food chain.
Bowker’s next project will be to study livestock grazing in drylands with the international team. | <urn:uuid:4f4b7007-150c-4e32-b38e-fb23974d091e> | 2.8125 | 266 | Truncated | Science & Tech. | 46.512095 | 95,500,000 |
Red snapper is among the most ecologically and economically important reef fishes in the northern Gulf of Mexico. Fisheries management for the species also happens to be among the most controversial in the U.S. Gulf red snapper has been estimated to be overfished and undergoing overfishing since at least the late 1980s. Management is complicated, however, because the greatest source of mortality for red snapper is believed to come from shrimp trawl bycatch, not the directed fisheries. Despite all efforts to solve the bycatch problem and otherwise recover red snapper, the stock remains significantly overfished.
Few other species or assemblages have had as many financial resources contributed to improve knowledge of basic population biology, engineer solutions to management issues such as shrimp trawl bycatch, develop state-of-the-art assessment techniques, and implement novel management approaches as has Gulf red snapper. This volume provides the state of knowledge for research on red snapper ecology and fisheries.
The cephalothorax, also called prosoma in some groups, is a tagma of various arthropods, comprising the head and the thorax fused together, as distinct from the abdomen behind. (The terms prosoma and opisthosoma are equivalent to cephalothorax and abdomen in some groups.) The word cephalothorax is derived from the Greek words for head (κεφαλή, kephalé) and thorax (θώραξ, thórax). This fusion of the head and thorax is seen in chelicerates and crustaceans; in other groups, such as the Hexapoda (including insects), the head remains free of the thorax. In horseshoe crabs and many crustaceans, a hard shell called the carapace covers the cephalothorax.
The fovea is the centre of the cephalothorax and is located behind the head (only in spiders). It is often important in identification. It can be transverse or procurved and can, in some tarantulas (e.g. Ceratogyrus darlingi) have a "horn".
The clypeus is the space between the anterior of the cephalothorax and the ocularium. It is found in most arachnids.
The trident is a small group of (usually three) spines found in harvestmen exclusively. It is located in front of the ocularium. It varies in size amongst species; in some it is completely absent, and in others it is enlarged considerably.
Most scientists think that tachyons do not exist. Einstein's theory of special relativity says nothing can accelerate past the speed of light; the idea is that these particles would always be traveling faster than the speed of light. If a tachyon did exist, it would have an imaginary number as its mass.
Many scientists believe that if even one tachyon existed in the universe at any time, the universe would be overrun by more and more tachyons. A tachyon is a particle that gains speed as it loses energy: the more energy it gives up, the faster it moves, and it can never slow down to the speed of light.
This is because, as they slow down, they increase in energy. However, some scientists still believe that they could exist if they did not interact with normal matter.
Since tachyons are taken to move faster than the speed of light, Einstein's special relativity would give such a particle an imaginary momentum and energy if its mass were real; because of this, a real mass cannot be assigned to a tachyon particle.
Some astrophysicists still argue that, according to Einstein's relativity, a faster-than-light particle would see the contraction of its path and the ordering of events reversed, and, since every frame of reference is valid, it has been speculated that such particles might travel into other dimensions. This idea is also related to the concept of a parallel universe.
Another argument is that a travelling tachyon can never be stopped: slowing it down to the speed of light would take an infinite amount of energy, so such particles can never be decelerated to, or accelerated from, the speed of light.
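The "loses energy as it gains speed" behaviour follows from the relativistic energy formula with an imaginary mass m = iμ: E = μc²/√(v²/c² − 1) is real for v > c and shrinks as v grows. A small sketch, in units with c = 1 and an arbitrary μ = 1:

```python
import math

def tachyon_energy(v_over_c, mu=1.0, c=1.0):
    """Energy of a tachyon of (imaginary-mass magnitude) mu at speed v > c."""
    if v_over_c <= 1:
        raise ValueError("tachyonic formula only applies for v > c")
    return mu * c**2 / math.sqrt(v_over_c**2 - 1)

# Energy drops as the tachyon speeds up, and diverges as v -> c from above.
slow, fast = tachyon_energy(1.1), tachyon_energy(2.0)
```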
Claims about such particles have been made based on Hindu holy books such as the Ramayana and the Vedas. These particles have also been related to the supreme lord Shiva, who, according to Hindu mythology, was never created and can never be destroyed.
Possible sightings[change | change source]
It was once thought that neutrinos in an experiment using a beam from CERN (the OPERA experiment) might have moved faster than the speed of light, which was a problem because neutrinos do have a positive, real mass. This led to further investigations into more unusual theories of, for example, how particles with normal mass might move faster than light, and some claimed that this result would disprove relativity. However, it was later revealed that the abnormal results were almost certainly the consequence of instrument failure, leading to a wide dismissal of tachyonic behaviour in this instance, although some have remained confident that the theories used to explain the faster-than-light case would still be found accurate. Claims that these observations would disprove relativity, which were reported, probably unhelpfully, widely in the press, now also seem unlikely.
References[change | change source] | <urn:uuid:be67f936-a719-43a6-b229-692eba6742d7> | 3.296875 | 656 | Knowledge Article | Science & Tech. | 33.350924 | 95,500,053 |
The Tropical Rainfall Measuring Mission (TRMM) satellite viewed an area of thunderstorms associated with System 94B near the east coast of India in the Bay of Bengal on December 7 at 0123 UTC. Data from TRMM's Precipitation Radar (PR) and Microwave Imager (TMI) showed that some severe thunderstorms in this area off the Indian coast were producing very heavy intense rainfall of over 50mm/hr (~2 inches/hour).
NASA's TRMM satellite captured rainfall rates within System 94B near India's east coast on Dec. 7 at 0123 UTC. The yellow and green areas indicate moderate rainfall between 0.78 and 1.57 inches per hour. Red areas indicate heavy rainfall at almost 2 inches per hour. Credit: NASA/SSAI, Hal Pierce
The TRMM satellite's main purpose is to measure rainfall over the tropics but it has also proven very valuable for monitoring development of tropical cyclones. TRMM is a joint mission between NASA and the Japanese space agency JAXA.
On Dec. 7 the center of System 94B was located about 240 nautical miles east-southeast of Chennai, India near 11.4 North latitude and 84.0 East longitude.
The AIRS infrared image did show that there were some strong thunderstorms along the immediate southeastern coast of India, where heavy rain was falling in the state of Tamil Nadu, India.
Tamil Nadu is one of the 28 states and lies in the southernmost part of the Indian Peninsula. Its capital city is Chennai located in the northeastern part of the state.
The Joint Typhoon Warning Center (JTWC) maintains forecast responsibility for this storm. The JTWC noted that maximum sustained winds at the surface are estimated between 20 to 25 knots (23 to 28 mph) and minimum sea level pressure is near 1004 millibars.
Today's JWTC forecast said, "Based on the sheared convection and relatively high vertical wind shear, the potential for the development of a significant tropical cyclone within the next 24 hours remains poor."
So far this year five tropical cyclones have spawned in the Bay of Bengal. Tropical cyclones often form in the Bay of Bengal during the month of November but this area of low pressure isn't expected to intensify to tropical storm strength.
Rob Gutro | EurekAlert!
QSound - Access to the platform audio facilities
QSound ( const QString & filename, QObject * parent = 0, const char * name = 0 )
int loops () const
int loopsRemaining () const
void setLoops ( int l )
QString fileName () const
bool isFinished () const
void play ()
void stop ()
Static Public Members
bool isAvailable ()
void play ( const QString & filename )
bool available ()
The QSound class provides access to the platform audio facilities.
Qt provides the most commonly required audio operation in GUI applications: asynchronously
playing a sound file. This is most easily accomplished with a single call:
A second API is provided in which a QSound object is created from a sound file and is
Sounds played using the second model may use more memory but play more immediately than
sounds played using the first model, depending on the underlying platform audio
On Microsoft Windows the underlying multimedia system is used; only WAVE format sound
files are supported.
On X11 the Network Audio System is used if available, otherwise all operations work
silently. NAS supports WAVE and AU files.
On Macintosh, ironically, we use QT (QuickTime) for sound, this means all QuickTime
formats are supported by Qt/Mac.
On Qt/Embedded, a built-in mixing sound server is used, which accesses /dev/dsp directly.
Only the WAVE format is supported.
The availability of sound can be tested with QSound::isAvailable().
See also Multimedia Classes.
MEMBER FUNCTION DOCUMENTATION
QSound::QSound ( const QString & filename, QObject * parent = 0, const char * name = 0 )
Constructs a QSound that can quickly play the sound in a file named filename.
This may use more memory than the static play function.
The parent and name arguments (default 0) are passed on to the QObject constructor.
Destroys the sound object. If the sound is not finished playing stop() is called on it.
See also stop() and isFinished().
bool QSound::available () [static]
Returns TRUE if sound support is available; otherwise returns FALSE.
QString QSound::fileName () const
Returns the filename associated with the sound.
bool QSound::isAvailable () [static]
Returns TRUE if sound facilities exist on the platform; otherwise returns FALSE. An
application may choose either to notify the user if sound is crucial to the application or
to operate silently without bothering the user.
If no sound is available, all QSound operations work silently and quickly.
bool QSound::isFinished () const
Returns TRUE if the sound has finished playing; otherwise returns FALSE.
Warning: On Windows this function always returns TRUE for unlooped sounds.
int QSound::loops () const
Returns the number of times the sound will play.
int QSound::loopsRemaining () const
Returns the number of times the sound will loop. This value decreases each time the sound
void QSound::play ( const QString & filename ) [static]
Plays the sound in a file called filename.
void QSound::play () [slot]
This is an overloaded member function, provided for convenience. It behaves essentially
like the above function.
Starts the sound playing. The function returns immediately. Depending on the platform
audio facilities, other sounds may stop or may be mixed with the new sound.
The sound can be played again at any time, possibly mixing or replacing previous plays of
void QSound::setLoops ( int l )
Sets the sound to repeat l times when it is played. Passing the value -1 will cause the
sound to loop indefinitely.
See also loops().
void QSound::stop () [slot]
Stops the sound playing.
On Windows the current loop will finish if a sound is played in a loop.
See also play().
Copyright 1992-2007 Trolltech ASA, http://www.trolltech.com. See the license file
included in the distribution for a complete license statement.
Generated automatically from the source code.
If you find a bug in Qt, please report it as described in
http://doc.trolltech.com/bughowto.html. Good bug reports help us to help you. Thank you.
The definitive Qt documentation is provided in HTML format; it is located at
$QTDIR/doc/html and can be read using Qt Assistant or with a web browser. This man page is
provided as a convenience for those users who prefer man pages, although this format is
not officially supported by Trolltech.
If you find errors in this manual page, please report them to firstname.lastname@example.org.
Please include the name of the manual page (qsound.3qt) and the Qt version (3.3.8).
Trolltech AS 2 February 2007 QSound(3qt) | <urn:uuid:2cb3077c-452b-43b0-8071-2c0e9a794a88> | 2.5625 | 1,068 | Documentation | Software Dev. | 56.793899 | 95,500,101 |
I have read that if a smaller number is to the left of a larger number means that the smaller number has to be subtracted from the larger number.
Ok I can understand quickly for below Roman Numbers :
IX = X - I = 10 - 1 = 9
But I have difficulty in understanding Roman Numbers that have odd Roman Numerals – say :
So if we go from Left to right we get
XIX = 10 + 1 + 10 = 21
But if we go from right to left we get
XIX = 10 + 10 - 1 = 19
So which direction we should consider before applying the rule of smaller followed by bigger Roman Numerals – left to right or right to left?
When reading roman numerals, I prefer to think in the following way:
Read from left to right, and if at any point the value of a character decreases, put a comma between the decrease. Then, add each block together.
MCMXCVI $\mapsto$ M,CM,XC,V,I $\mapsto$ $1000+900+90+5+1=1996$
MDCCCLXXIV $\mapsto$ M,D,CCC,L,XX,IV $\mapsto$ $1000+500+300+50+20+4=1874$
MCMXCIX $\mapsto$ M,CM,XC,IX $\mapsto$ $1000+900+90+9 = 1999$
Note that 1999 was not written as IMM or MIM. Converting from arabic numbers to roman numerals, one only uses powers of ten one apart to denote subtraction.
When you are reading Roman numerals, start from the left-most character. Read rightward until the value of the character increases. Then, section those two characters off, and repeat.
That sounds really complicated, and I wrote it somewhat poorly, so here are some examples.
In $XIX$, we start with the left $X$ which is $10$. Then we move to the $I$ which is $1$. We decreased from $10$ to $1$, so we’ll move on. Then we get to the right $X$ which is $10$. We increased from $1$ to $10$, so we need to section off the $IX$. It would look like this:
$$XIX = X + IX = 10 + 9 = 19$$
Here’s a more complicated example:
$$MCMXXIX = M + CM + XX + IX = 1000 + 900 + 20 + 9 = 1929$$
Here we had to section off the $CM$ since we increased in value from $C$ to the second $M$. We had to section off the $IX$ since we increased in value from the $I$ to the last $X$.
One last really complicated example:
$$MMCDXLIV = MM + CD + XL + IV = 2000 + 400 + 40 + 4 = 2444$$
You can apply the following logics:
the units are denoted I, II, III, IV, V, VI, VII, VIII$^*$, IX; the tenths are denoted X, XX, XXX, XL, L, LX, LXX, LXXX, XC; the hundredths C, CC, CCC, CD, D, DC, DCC, DCCC, CM; the thousands, M, MM, MMM.
numbers are written in thousands, hundredths, tenths and units from left to right.
The Roman Empire wasn’t designed to last longer than until 3999 AD 🙂
$^*$Sometimes IIX is used for VIII.
XIX is read left to right, the “I” is always applied to the final X.
XIX = X + IX = 10 + 9
XXI = X + XI = 10 + 11
To avoid confusion , the only pairs of Roman numerals that follow the “subtraction” rule are IV, IX, XL, XC, CD, CM. So if your shirt size followed Roman numerals ( which it does not) L would be 50, XL=50-10=40 and XXL = 10+40=50 . Therefore L would be equal to XXL size in Roman numerals !! Coming onto XIX , this equals 19 . Here we must keep in mind the other rules of Roman numerals also and not just addition subtraction. 20 as a digit as a whole is represented as XX and therefore 21 would be XXI. In XIX reading from left to right take IX as a complete digit ie 9 and therefor XIX =19
I have designed following algorithm to convert roman to decimal.
Replace string contents of length = 4 with appropriate content
VIII = 8
Replace string contents of length = 3 with appropriate content
MMM = 3000,
XXX = 30,
VII = 7,
III = 3
MM = 2000,
XX = 20,
XI = 11,
IX = 9,
VI = 6,
IV = 4,
II = 2
Ican be subtracted (absolute) from
Xcan be subtracted (absolute) from
Ccan be subtracted (absolute) from | <urn:uuid:a25e5fda-c9ea-48b3-b9d4-f34f0ce3f7fd> | 3.703125 | 1,113 | Personal Blog | Science & Tech. | 81.421016 | 95,500,102 |
UML for C#
C# is a modern object-oriented language for application development. In addition to object-oriented constructs, C# supports component-oriented programming with properties, methods and events. UML defines graphical notations for describing and designing object-oriented software systems. It’s an open standard controlled by the Object Management Group (OMG). Although UML has many diagramtypes, we’ll focus on class models that show static class structure and relationships. WinA&D is a complete UML modeling tool enriched with C# language specific details used to generate source code. WinTranslator is a reverse engineering tool that scans code to extract design information into WinA&D models. Diagrams created in WinA&D are used to illustrated C# programs represented in the UMLnotation. This paper assumes a working knowledge of C# and UML. It briefly describes how C# constructs are represented by UML for forward and reverse engineering.
In WinA&D, a class model is drawn from a palette of tools. As each class instance is placed on the diagram, it’s named in the Class Properties dialog. Each class has a corresponding dictionary entry of the same name inthe data dictionary. Many diagrams within the class model and in other types of diagram documents share data that is stored in the same global data dictionary. For a selected class object on the diagram, the Details button presents the Class Attributes & Operations dialog. This dialog is used to define members of the class. In WinA&D terminology, a class can have Attribute, Operation, Property andEvent members. Behind the scenes, WinA&D adds a dictionary entry for each class member with a name of the form Class’Attribute, Class.Operation, Class$Property and Class-Event. Each class member has a details dialog for defining language specific information for that class member. WinA&D supports many programming languages for code generation including C#. Depending on which language is currentlyselected, the Attribute Details, Operation Details, Property Details and Event Details dialog will vary slightly based on specific characteristics of the selected language. WinA&D can concurrently store language specific details for multiple languages for each modeling element. When instances of a class are presented on different diagrams, detailed information is stored once in the globaldictionary. WinA&D uses this information to generate source code.
A UML class diagram shows the static class structure of a C# application. Different types of the objects on the class diagram represent C# classes, interfaces, structs and their generic counterparts.
UML Class Diagram with C# Classes
Each class on a diagram is represented as a named box with dashedsections showing a list of class members. Classes are connected with relationship lines. In the diagram above, the Constant, VariableReference and Operation classes inherit from the abstract Expression class. The presentation of a class diagram can vary widely based on user-specified criteria. In the diagram above, class attributes (C# fields) and operations (C# methods) are shown with access, datatype and signature details. WinA&D gives the user a lot of flexibility to control how classes are presented. Class members can be shown or hidden based on member type or specific conditions based on access or modifiers on each class member. Members of a class instance on a diagram can show various levels of detail like its access type, data type or arguments. Presentation options can be easilyapplied across all diagrams, to specific diagrams or to individual instances of a class. Information about classes, class member and C# details are entered into detail dialogs when drawing diagrams and stored in the global dictionary. Instances of the same class can be shown on many diagrams with different presentations.
Class and Interface Relationships
An interface defines...
Leer documento completo
Regístrate para leer el documento completo. | <urn:uuid:74030333-119d-4fee-aeb9-8e6702ed54dd> | 3.078125 | 795 | Documentation | Software Dev. | 36.340632 | 95,500,120 |
William E. Holt, Ph.D., a professor in the Geosciences Department at Stony Brook University, and Attreyee Ghosh, Ph.D., a post doctoral associate, used their model to help explain the stresses that act on the Earth’s tectonic plates. Those stresses result in earthquakes not only at the boundaries between tectonic plates, where most earthquakes occur, but also in the plate interiors, where the forces are less understood, according to their paper, "Plate Motions and Stresses from Global Dynamic Models."
“If you take into account the effects of topography and all density variations within the plates – the earth’s crust varies in thickness depending on where you are – if you take all that into account, together with the mantle convection system, you can do a good job explaining what is going on at the surface,” said Dr. Holt.
Their research focused on the system of plates that float on the Earth’s fluid-like mantle, which acts as a convection system on geologic time scales, carrying them and the continents that rest upon them. These plates bump and grind past one another, diverge from one another, or collide or sink (subduct) along the plate boundary zones of the world. Collisions between the continents have produced spectacular mountain ranges and powerful earthquakes. But the constant stress to which the plates are subjected also results in earthquakes within the interior of those plates.
“Predicting plate motions correctly, along with stresses within the plates, has been a challenge for global dynamic models,” the researchers wrote. “Accurate predictions of these is vitally important for understanding the forces responsible for the movement of plates, mountain building, rifting of continents, and strain accumulation released in earthquakes.”
Data for their global computer model came from Global Positioning System (GPS) measurements, which track the movements of the Earth’s crust within the deforming plate boundary zones; measurements on the orientation of the Earth’s stress field gleaned from earthquake faults; and a network of global seismometers that provided a picture of the Earth’s interior density variations. They compared output from their model with these measurements from the Earth’s surface.
“These observations – GPS, faults – allow one to test the completeness of the model,” Dr. Holt said.
Drs. Ghosh and Holt found that plate tectonics is an integrated system, driven by density variations found between the surface of the Earth all the way to the Earth’s core-mantle boundary. A surprising find was the variation in influence between relatively shallow features (topography and crustal thickness variations) and deeper large-scale mantle flow patterns that assist and, in some places, resist plate motions. Ghosh and Holt also found that it is the large-scale mantle flow patterns, set up by the long history of sinking plates, that are important for influencing the stresses within, and motions of, the plates.
Topography also has a major influence on the plate tectonic system, the researchers found. That result suggests a powerful feedback between the forces that make the topography and the ‘push-back’ on the system exerted by the topography, they explained.
While their model cannot accurately predict when and where earthquakes will occur in the short-term, “it can help at better understanding or forecasting earthquakes over longer time spans,” Dr. Holt said. “Nobody can yet predict, but ultimately given a better understanding of the forces within the system, one can develop better forecast models.”
William E. Holt | Newswise Science News
Global study of world's beaches shows threat to protected areas
19.07.2018 | NASA/Goddard Space Flight Center
NSF-supported researchers to present new results on hurricanes and other extreme events
19.07.2018 | National Science Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:1ab33371-21dc-4bbd-b10c-4f9727585eea> | 4.1875 | 1,330 | Content Listing | Science & Tech. | 41.737835 | 95,500,122 |
When a body is charged, it can attract an oppositely charged body and can repulse a similar charged body. That means, the charged body has ability of doing work. That ability of doing work of a charged body is defined as electrical potential of that body.
If two electrically charged bodies are connected by a conductor, the electrons starts flowing from lower potential body to higher potential body, that means current starts flowing from higher potential body to lower potential body depending upon the potential difference of the bodies and resistance of the connecting conductor. So, electric potential of a body is its charged condition which determines whether it will take from or give up electric charge to other body. Electric potential is graded as electrical level, and difference of two such levels, causes current to flow between them. This level must be measured from a reference zero level. The earth potential is taken as zero level. Electric potential above the earth potential is taken as positive potential and the electric potential below the earth potential is negative.
You may also be interested on
The unit of electric potential is volt. To bring a unit charge from one point to another, if one joule work is done, then the potential difference between the points is said to be one volt. So, we can say, If one point has electric potential 5 volt, then we can say to bring one coulomb charge from infinity to that point, 5 joule work has to be done. If one point has potential 5 volt and another point has potential 8 volt, then 8 – 5 or 3 joules work to be done to move one coulomb from first point to second. | <urn:uuid:adfaf4ef-9e5b-4630-a373-74b950404a6f> | 4.125 | 330 | Knowledge Article | Science & Tech. | 41.927378 | 95,500,155 |
|The National Institute of Meteorological Sciences conducts a cloud-seeding test for artificial snow over Daegwallyeong mountain pass in Gangwon Province in February 2009. / Courtesy of National Institute of Meteorological Sciences|
By Ko Dong-hwan
Korea is looking to cloud-seeding to help combat the dangerous fine dusts originating inside and outside the country.
The initiative comes late on the world stage, with about 40 countries, most rigorously the U.S. and China, having already employed the method of controlling the weather.
Gyeonggi provincial government announced this month it will hold a pilot test as part of its long-term plan to reduce the increasing volumes of harmful micro particles drifting in from deserts and industrial plants in China.
The National Institute of Meteorological Sciences (NIMR), Korea's leading climate modification research organization, based in Seogwipo, Jeju Island, joined in the test.
Researchers plan to conduct cloud-seeding tests over the west coast, which faces China across the West Sea. The Gyeonggi government will receive the test data, analyze the feasibility of cloud-seeding in the maritime region, and determine whether to use it officially.
"There are countries that have already accumulated substantial cloud-seeding data," Bae Hyun-sub from the Gyeonggi government Climate and Air Quality Management Division told The Korea Times.
"One American weather modification company is so qualified that it receives orders from other countries to conduct tests over their skies.
"Meanwhile, Korea has just started the tests at a governmental level. We will find if there is a correlation between cloud seeding and fine dust reduction.
"If we do find this, we will use the evidence to persuade the central government to support cloud seeding."
|The National Institute of Meteorological Sciences rented this Cessna 206 from a Korean company for cloud-seeding tests, which have been conducted since 2008.|
First tested in upstate New York in 1946, cloud seeding is a weather modification technology, in which chemical substances are seeded inside clouds by aircraft, rockets or on-ground incinerators to create artificial rain and/or snow or to clear away fog.
The Gyeonggi government will carry out tests until July next year with a 2 billion won ($178,000) budget.
With the NIMR scheduled to start tests using the fund as early as September, the government has recruited four climate and atmospheric experts from the Gyeonggi Research Institute to analyze the data.
The NIMR plans to hold three tests, monitoring each operation using an automatic weather system and micro rain radar equipment.
The devices will allow researchers to read atmospheric conditions in terms of wind direction, wind speed, temperature, humidity and pressure.
The researchers will also be able to check the changing sizes of cloud condensation or ice nuclei _ silver iodine or dry ice dispersed by cloud seeding that alters microphysical processes within the cloud.
The team plans to acquire research aircraft from the U.S. to disperse the chemical substances, or "seeds," into the clouds.
Researchers will track the seeds, the duration of any artificial rain and its possible ecological effects.
"Cloud seeding has mostly been conducted over the east coast because mountainous regions along the Taebaek Mountain Range flanking the coast offer good geographic conditions for cloud seeding, especially for snow enhancement," said a NIMR researcher leading the cloud-seeding tests, but who asked to remain anonymous.
"Although it remains doubtful that the tests will successfully result in artificial rain over the flat terrain of the west coast, there is still a possibility. In similar regions around Moscow, cloud seeding resulted in manmade precipitation."
The NIMR's cloud-seeding team said in a journal research paper "Estimation for the Economic Benefit of Weather Modification (Precipitation Enhancement and Fog Dissipation)" (2010) that air pollution caused by fine dusts could be partially controlled by enhancing rain or snow.
The Korea Meteorological Administration (KMA) conducted a cloud-seeding test in 2010, which resulted in two millimeters of rain at Anseong, Gyeonggi Province, evidence of the technology's possible cleansing effect.
|A research aircraft disperses chemical substances, usually silver iodine or dry ice, inside clouds during a cloud-seeding test in June 2014.|
The correlation between fine dusts and rain was also shown in research by the Gyeonggi Institute of Health Environment.
In November 2015, researchers recorded an average of 35 micrograms per cubic meter of fine dusts in the atmosphere over Gyomun-dong in Guri, Gyeonggi Province. In one year, the level jumped to 69. The researchers concluded that decreasing precipitation was behind the worsening pollution.
The first cloud-seeding test in Korea was conducted in 1963. A Dongguk University professor and his team used ground incinerators and aircraft to disperse dry ice over clouds.
Because of a shortage of money, the tests were halted until 2001, when severe drought brought the idea to the fore again.
In 2003, the KMA and the Gangwon Regional Office of Meteorology built the Cloud Physics Observation Center above Daegwallyeong, a mountain pass close to the east coast in Gangwon Province, to analyze particles in clouds or fog before and after cloud-seeding tests.
"At the observation center, we have been testing artificial snow for the past 10 years, flying research aircraft 36 times, to prepare for the Pyeongchang Winter Olympics in 2018. It has been our main project so far," the NIMR researcher said.
"But with the latest tests, we will start compiling our data on artificial rain." | <urn:uuid:50ad63dd-bb2b-479b-92c7-6f49fae4f4a1> | 3.046875 | 1,192 | News Article | Science & Tech. | 35.57702 | 95,500,162 |
Enchanted Learning: All About Astronomy
Astronomy Glossary: K
KBO
KBO is short for Kuiper Belt Object.
KECK OBSERVATORY
The W. M. Keck Observatory is located at the top of the dormant volcano Mauna Kea in Hawaii. The Keck Observatory has the world's largest infrared and optical telescopes, called the Keck 1 and the Keck 2. Each telescope has a 10-meter (33 ft) primary mirror made up of 36 hexagonal segments (each of which is 1.8 meters (6 feet) wide and weighs 880 pounds). The Keck 1 telescope opened in 1993; the Keck 2 telescope opened in 1996. Both telescopes are 8 stories tall. The Keck Observatory is run by the California Institute of Technology, the University of California, and NASA.
LORD KELVIN
Lord Kelvin (William Thomson, 1824 - 1907) designed the Kelvin scale, in which 0 K is defined as absolute zero and the size of one degree is the same as the size of one degree Celsius. Water freezes at 273.15 K; water boils at 373.15 K.
KELVIN TEMPERATURE SCALE
Kelvin is a temperature scale designed so that 0 K is defined as absolute zero and the size of one unit is the same as the size of one degree Celsius. Water freezes at 273.15 K; water boils at 373.15 K. This temperature scale was designed by Lord Kelvin (William Thomson, 1824 - 1907). [K = °C + 273.15, °F = (9/5)°C + 32]
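The conversion rules above can be sketched in a few lines of Python (the function names here are illustrative, not from any standard library):

```python
def celsius_to_kelvin(c):
    # K = °C + 273.15: same step size as Celsius, shifted zero point
    return c + 273.15

def celsius_to_fahrenheit(c):
    # °F = (9/5)·°C + 32
    return 9 / 5 * c + 32

print(celsius_to_kelvin(0))       # freezing point of water: 273.15 K
print(celsius_to_kelvin(100))     # boiling point of water: 373.15 K
print(celsius_to_fahrenheit(37))  # about 98.6 °F
```

Because one kelvin and one degree Celsius are the same size, the conversion is a simple shift; only Fahrenheit needs a scale factor.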
KELVIN WAVE
The Kelvin wave is a gentle but huge swell of warm water in the Pacific Ocean. This mass of water is a few degrees warmer than the surrounding water, is only 5-10 cm high, but is hundreds of kilometers wide.
JOHANNES KEPLER
Johannes Kepler (1571-1630) was a German mathematician who realized that the planets go around the sun in elliptical orbits. He formulated what we now call "Kepler's Three Laws" of planetary motion that mathematically describe the elliptical orbits of celestial objects. For a few years he worked with Tycho Brahe.
KEPLER'S FIRST LAW OF PLANETARY MOTION
Kepler's First Law of Planetary Motion states that the orbits of the planets are ellipses with the sun at one focus of the ellipse.
KEPLER'S SECOND LAW OF PLANETARY MOTION
Kepler's Second Law of Planetary Motion states that a line from a planet to the sun will sweep out equal areas in equal times. The planet moves more slowly when it is farther from the sun and faster when it is near it. (This is equivalent to the conservation of angular momentum.)
KEPLER'S THIRD LAW OF PLANETARY MOTION
Kepler's Third Law of Planetary Motion states that T² is proportional to a³, where T is the orbital period of a planet (its year) and a is the semi-major axis of the ellipse.
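For objects orbiting the Sun, with T measured in Earth years and a in astronomical units, the proportionality becomes the simple equality T² = a³. A short Python sketch (semi-major axes are rounded values, for illustration):

```python
# T**2 = a**3 when T is in years and a is in AU (orbits around the Sun)
planets = {
    "Venus":   0.723,   # semi-major axis in AU, rounded
    "Earth":   1.000,
    "Mars":    1.524,
    "Jupiter": 5.203,
}

for name, a in planets.items():
    period = a ** 1.5   # solve T² = a³ for T
    print(f"{name}: a = {a:.3f} AU -> T = {period:.2f} years")
```

This reproduces, for example, Mars's orbital period of about 1.88 years and Jupiter's of about 11.9 years.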
KILOGRAM
A kilogram (kg) is a unit of mass, originally defined as the mass of one liter of water. One kilogram is equivalent to 1,000 grams or about 2.2 pounds.
KILOPARSEC
A kiloparsec is a unit of distance equal to 1,000 parsecs or about 3,260 light-years. The disk of the Milky Way Galaxy is roughly 30 kiloparsecs (about 100,000 light-years) in diameter.
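The unit arithmetic is straightforward; here is a minimal Python sketch using the rounded 3.26 light-years per parsec figure (the function name is ours):

```python
LIGHT_YEARS_PER_PARSEC = 3.26   # rounded value, as used in this glossary

def kiloparsecs_to_light_years(kpc):
    # 1 kiloparsec = 1,000 parsecs
    return kpc * 1000 * LIGHT_YEARS_PER_PARSEC

print(kiloparsecs_to_light_years(1))   # about 3,260 light-years
print(kiloparsecs_to_light_years(8))   # ~26,000 ly: roughly the Sun's distance from the galactic center
```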
KINETIC ENERGY
Kinetic energy is the energy that an object has because of its motion. An object's kinetic energy is equal to 0.5 times its mass times its velocity squared (KE = ½mv²). In the metric system, kinetic energy is measured in joules, or kg·m²/s².
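The definition above, as a minimal Python sketch (the function name is illustrative):

```python
def kinetic_energy(mass_kg, speed_m_per_s):
    # KE = 0.5 * m * v**2, result in joules (kg·m²/s²)
    return 0.5 * mass_kg * speed_m_per_s ** 2

# a 2 kg object moving at 3 m/s carries 9 joules of kinetic energy
print(kinetic_energy(2, 3))   # 9.0
# doubling the speed quadruples the energy, since v is squared
print(kinetic_energy(2, 6))   # 36.0
```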
GUSTAV KIRCHHOFF
Gustav Kirchhoff (1824-1887) was a German physicist who realized that each element gave off a characteristic color of light when heated to incandescence. When separated by a prism, the light from each element had a specific pattern of wavelengths. Kirchhoff, together with Bunsen, used his techniques to discover two new elements, cesium (1860) and rubidium (1861). Kirchhoff found that when light shines through a gas, the gas absorbs some of the light, the same wavelengths of light that it would emit when heated. He applied his techniques to the Sun, explaining Fraunhofer lines. He also found that incandescent solids, liquids, and compressed gases emit a continuous spectrum.
The Kirkwood gaps are radial gaps in the asteroid belt. These gaps are orbital radii where the gravitational forces from Jupiter do not let asteroids orbit (they would be pulled into Jupiter). For example, an orbit in which an asteroid orbited the Sun exactly three times for each Jovian orbit would experience great gravitational forces each orbit, and would soon be pulled out of that orbit. There is a gaps at 3.28 AU (which corresponds to 1/2 of Jupiter's period), another at 2.50 AU (which corresponds to 1/3 of Jupiter's period), etc. The Kirkwood gaps are named for Daniel Kirkwood who discovered them in 1866.
Daniel Kirkwood (1814-1895) was an American astronomer who discovered the radial gaps in the asteroid belt in 1866 (now known as the Kirkwood gaps). Kirkwood also hypothesized that Saturn's moon Enceladus creates the Cassini division with its gravitational attraction (but astronomers today think that Mimas causes it).
KITT PEAK NATIONAL OBSERVATORY
Kitt Peak National Observatory is an astronomical observatory in Tucson, Arizona, USA. It has over fifteen telescopes, including a 158 inch (4 m) reflecting telescope.
216 Kleopatra is a bone-shaped asteroid. Kleopatra is about 135 miles (217 kilometers) long and about 58 miles (94 kilometers) wide. This unusual asteroid was discovered and named in 1880, but its shape was only discovered in 2000, using radar images from the Arecibo telescope.
The Klet Observatory is a state supported research institution located in the Czech Republic (Southern Bohemia). This astronomical observatory is located on Klet mountain (at 1070 meters altitude), southwest of the town of Ceske Budejovice. Telescopes include a 0.57-m f/5.2 reflector, a 0.63-m Maksutov telescope, and a 1.02-m telescope.
Km is short for kilometer or kilometers.
Saturn's outermost ring, the subdivided "F" ring, has many visible knots or clumps of matter. These knots may be clumps of particulate ring material or tiny orbiting moons of Saturn.
The Kuiper belt is a region beyond Neptune in which at least 70,000 small objects (KBO's) orbit, including Quaoar and Sedna. This belt is located from 30 to 50 (?) A.U.'s and was discovered in 1992. The Kuiper belt may be the source of the short-period comets (like Halley's comet). The Kuiper belt was named for the Dutch-American astronomer Gerard P. Kuiper, who predicted its existence in 1951.
KUIPER, G. P.
Gerard Peter Kuiper (1905-1973) was a Dutch-American astronomer who predicted the existence of the Kuiper belt in 1951. In 1948, Kuiper discovered and named Miranda, a moon of Uranus and Neptune's second moon, Nereid. Kuiper did much pioneering research on moons, planetary atmospheres, and planet and moon formation.
Over 35,000 Web Pages
Sample Pages for Prospective Subscribers, or click below
Overview of Site|
Enchanted Learning Home
Monthly Activity Calendar
Books to Print
Parts of Speech
The Test of Time
TapQuiz Maps - free iPhone Geography Game
Biology Label Printouts
Physical Sciences: K-12
Art and Artists
Label Me! Printouts
|Search the Enchanted Learning website for:| | <urn:uuid:6b748c29-3795-4735-acc3-ebfa8113dec8> | 3.578125 | 1,771 | Structured Data | Science & Tech. | 56.823286 | 95,500,165 |
Flow phenomena on solid surfaces: Boundary layer velocity plays key role
This was demonstrated by studying the behaviour of droplets on surfaces with different coatings as they evolved into the equilibrium state. The results could prove useful in optimizing industrial processes, such as the extrusion of plastics. The study has been published in the respected academic journal PNAS (Proceedings of the National Academy of Sciences of the United States of America).
When liquids flow over solid surfaces, their flow velocity in the immediate vicinity of the surface is zero. 'By specially coating the surface, the boundary layer velocity can be increased. This also has the effect of reducing the shear forces within the liquid and of increasing its mean flow velocity. In the extreme case, the liquid behaves almost like a solid, but without exhibiting any change in its viscosity,' explains Karin Jacobs, Professor of Experimental Physics at Saarland University. Her research group has been conducting experiments on polystyrene droplets with the aim of uncovering the details of how different surfaces affect boundary layer velocities and the slip behaviour of liquid films. 'Polystyrene is an important polymer that is used, for instance, to manufacture CD jewel cases,' says Dr. Joshua D. McGraw. The former postdoc in Jacobs' research group headed the study, which was a collaborative effort with members of the research team led by Ralf Seemann, Professor of Experimental Physics at Saarland University, and researchers at ESPCI Paris Tech.
McGraw placed individual droplets of polystyrene (PS) onto thin mica substrates, where the droplets assumed a flattened form and sizes more than thousand times smaller than a typical rain drop. They were then frozen in this state and subsequently transferred onto two new 'less PS-friendly' substrates which differed from one another not in their chemical composition, but only in the spatial arrangement of their atoms. On both substrates the droplets adopted an almost hemispherical form. 'Droplets always show a tendency to adopt an equilibrium form in which they exhibit a definite contact angle to the surface. This equilibrium state is determined by the boundary layer conditions,' explains Karin Jacobs. Although the polystyrene droplets showed the same equilibrium contact angle on both substrates, droplet profile measurements made using an atomic force microscope indicated significant differences in the manner in which the droplets contracted, transforming from their original shape with the smaller contact angle to the new hemispherical form. 'This can only mean that the molecules in the droplets move in two different ways on the two different surfaces, which in turn means that the velocity profiles in the two drops must be different,' say Dr. Martin Brinkmann and Dr. Tak Shing Chan from Ralf Seemann's group. 'However, the required resolution is not available experimentally. That's why we needed the support of the theoreticians in Paris.'
The researchers in Saarbrücken concluded that the velocity of the liquid at the solid surface is a key factor influencing the flow behaviour of such small droplets. The research colleagues at ESPCI Paris have managed to incorporate this into a theoretical model of the fluid dynamics. Martin Brinkmann and Tak Shing Chan were then able to use this theoretical description to conduct computer simulations that yield the molecular velocity field within a liquid drop. 'This enabled us to demonstrate that even atomic-scale modifications to a solid surface can alter the speeds with which molecules move in liquid systems that are many orders of magnitude thicker than the surface coating itself,' says Professor Jacobs, summarizing the results of the experiments.
The research results may help to optimize industrial processes, such as the extrusion of polymeric materials,' says Karin Jacobs. Extrusion involves pressing a plastic through a forming die similar to the way that pasta dough is pushed through a pasta press when making fresh spaghetti or macaroni. 'Once the dough has passed through the forming die, the strand expands as the material is now flowing more slowly,' explains Jacobs. 'This expansion of the material as it exits the die (known as "die swell") is usually undesirable in industrial applications, but it may be possible to suppress die swell by suitably coating the inner surfaces of the die.'
Link to publication: http://www.pnas.org/content/early/2016/01/15/1513565113.abstract
An explanatory illustration is available at: http://www.uni-saarland.de/pressefotos
Prof. Dr. Karin Jacobs
Department of Experimental Physics
Tel.: +49 (0)681 302-71788
E-mail: [email protected]
Dr. Joshua D. McGraw
Département de Physique
Ecole Normale Supérieure
Tel.: +33 1 44 32 35 50
Email: Joshua[email protected]
Dr. Karin Jacobs | <urn:uuid:a45a0980-2744-4f9c-b6b8-2f2f3db04049> | 3.3125 | 994 | Academic Writing | Science & Tech. | 36.745894 | 95,500,171 |
American air mass properties
Willett, Hurd C.
MetadataShow full item record
In this paper the term Air Mass is applied to an extensive portion of the earth's atmosphere which approximates horizontal homogeneity. The formation of an air mass in this sense takes place on the earth's surface wherever the atmosphere remains at rest over an extensive area of uniform surface properties for a suffciently long time so that the properties of the atmosphere (vertical distribution of temperature and moisture) reach equilibrium with respect to the surface beneath. Such a region on the earth's surface is referred to as a source region of air masses. As examples of source regions we might cite the uniformly snow and ice covered northern portion of the continent of North America in winter, or the uniformly warm waters of the Gulf of Mexico and Caribbean Sea. Obviously the properties of an air mass in the source region will depend entirely upon the nature of the source region. The concept of the air mass is of importance not only in the source regions. Sooner or later a general movement of the air mass from the source region is certain to occur, as one of the large-scale air currents which we find continually moving across the synoptic charts. Because of the great extent of such currents and the conservatism of the air mass properties, it is usually easy to trace the movement of the air mass from day to day, while at the same time any modification of its properties by its new environment can be carefully noted. Since this modification is not likely to be uniform throughout the entire air mass, it may to a certain degree destroy the horizontal homogeneity of the mass. However, the horizontal differences produced within an air mass in this manner are small and continuous in comparison to the abrupt and discontinuous transition zones, or fronts, which mark the boundaries between air masses. Frontal discontinuities are intensified wherever there is found in the atmosphere convergent movement of air masses of different properties. 
Since the air masses from particular sources are found to possess at any season certain characteristic properties which undergo rather definite modification depending upon the trajectory of the air mass after leaving its source region, the investigation of the characteristic properties of the principal air mass types can be of great assistance to the synoptic meteorologist and forecaster. We owe this method of attack on the problems of synoptic meteorology to the Norwegian school of meteorologists, notably to T. Bergeron. Investigation of the properties of the principal air masses appearing in western Europe has been made in particular by O. Moese and G. Schinze. The purpose of this paper is to give the results of a similar investigation of the properties of the principal air masses of North America, and to comment on some of the striking differences which appear between conditions here and in Europe.
Suggested CitationBook: Willett, Hurd C., "American air mass properties", Papers in Physical Oceanography and Meteorology, v.2, no.2, 1933-06, DOI:10.1575/1912/1142, https://hdl.handle.net/1912/1142
Showing items related by title, author, creator and subject.
Eickstedt, Donald Patrick (Massachusetts Institute of Technology and Woods Hole Oceanographic Institution, 2006-06)In this thesis, an innovative architecture for real-time adaptive and cooperative control of autonomous sensor platforms in a marine sensor network is described in the context of the autonomous oceanographic network scenario. ...
Magnell, Bruce Arthur (Massachusetts Institute of Technology and Woods Hole Oceanographic Institution, 1973-06)Many hypotheses have been advanced to explain the formation of mixed layers in the ocean; the salt finger type of double-diffusive convection, in particular, has received much attention. Because of their uniquely ordered ...
Van Leer, John Cloud (Massachusetts Institute of Technology and Woods Hole Oceanographic Institution, 1971-01)Two shear experiments performed in the permanent thermocline are described and analyzed in this thesis. The first employed dye streak techniques to gain fractional meter vertical resolution. Shears with small vertical ... | <urn:uuid:33d8c7e7-20b2-4456-a982-a9fbc7ddb498> | 3.625 | 831 | Academic Writing | Science & Tech. | 33.444558 | 95,500,196 |
Numerical Analysis Hardback
This well-respected book introduces readers to the theory and application of modern numerical approximation techniques.
Providing an accessible treatment that only requires a calculus prerequisite, the authors explain how, why, and when approximation techniques can be expected to work-and why, in some situations, they fail.
A wealth of examples and exercises develop readers' intuition, and demonstrate the subject's practical applications to important everyday problems in math, computing, engineering, and physical science disciplines.
Three decades after it was first published, Burden, Faires, and Burden's NUMERICAL ANALYSIS remains the definitive introduction to a vital and practical subject.
- Format: Hardback
- Pages: 912 pages
- Publisher: Cengage Learning, Inc
- Publication Date: 01/01/2015
- Category: Numerical analysis
- ISBN: 9781305253667 | <urn:uuid:674ef04a-464b-44a4-9034-89aeb9970d84> | 2.859375 | 187 | Product Page | Science & Tech. | -0.759659 | 95,500,214 |
by Florida State University, July 18, 2018 in ScienceDaily
Deep in the ocean’s twilight zone, swarms of ravenous single-celled organisms may be altering Earth’s carbon cycle in ways scientists never expected, according to a new study from Florida State University researchers.
In the area 100 to 1,000 meters below the ocean’s surface — dubbed the twilight zone because of its largely impenetrable darkness — scientists found that tiny organisms called phaeodarians are consuming sinking, carbon-rich particles before they settle on the seabed, where they would otherwise be stored and sequestered from the atmosphere for millennia.
This discovery, researchers suggest, could indicate the need for a re-evaluation of how carbon circulates throughout the ocean, and a new appraisal of the role these microorganisms might play in Earth’s shifting climate.
The findings were published in the journal Limnology and Oceanography.
by K. Richard, July 12, 2018 in NoTricksZone
Unearthed new evidence (Mangerud and Svendsen, 2018) reveals that during the Early Holocene, when CO2 concentrations hovered around 260 ppm, “warmth-demanding species” were living in locations 1,000 km farther north of where they exist today in Arctic Svalbard, indicating that summer temperatures must have been about “6°C warmer than at present”.
Proxy evidence from two other new papers suggests Svalbard’s Hinlopen Strait may have reached about 5 – 9°C warmer than 1955-2012 during the Early Holocene (Bartels et al., 2018), and Greenland may have been “4.0 to 7.0 °C warmer than modern [1952-2014]” between 10,000 and 8,000 years ago according to evidence found in rock formations at the bottom of ancient lakes (McFarlin et al., 2018).
In these 3 new papers, none of the scientists connect the “pronounced” and “exceptional” Early Holocene warmth to CO2 concentrations.
by Ulli Kulke, June 29, 2018 in GWPF
Henrik Svensmark, head of solar research at Denmark’s Technical University in Copenhagen, is one of them. And he ventures far ahead in the climate debate, the research with perhaps the greatest significance of our time. His research is contested, of course. Nevertheless, Svensmark and his critics agree that the topic “sun” deserves more attention in climate research. The participants are particularly interested in the complex interplay between our central star and ionizing emissaries from the depths of the galaxy – “cosmic radiation”.
Svensmark says: “The climate is influenced more by changes in cosmic radiation than by carbon dioxide”. CO2 has an effect, of course, “but it is far less than most current climate models assume, and also less than the influence of cosmic radiation”. In his opinion, a doubling of the greenhouse gas in the atmosphere would cause an increase in global temperature of at most one degree, and not two degrees, as is now generally accepted.
In other words, the “climate sensitivity” of carbon dioxide is only half as high as assumed (…)
by David Middleton, June 19, 2018 in WUWT
From ARS Technica, one of the most incoherent things I’ve ever read…
The shocking thing is that Howard Lee has a degree in geology. The fact that he makes his living as an “Earth Science writer” and not as a geologist might just be relevant.
Can the Miocene tell our future? I’ll let Bubba’s mom answer that question:
by Prof. F. Vahrenholt, June 12, 2018 in NoTricksZone
Only Europe and Canada exiting coal
Another reason the Paris Accord is collapsing is because it’s not going to do anything we were promised it would.
When it comes to coal, Vahrenholt notes, so far only Europe and Canada have expressed some sort of a commitment to exit coal, and then he reminds us China, India and all developing countries will still be permitted to continue “massively” expanding their use of coal. He writes : (…)
by Anthony Watts, June 15, 2018 in WUWT
We covered this yesterday, but today the official press release came out, so worth covering again. Via Eurekalert
Land-based portion of massive East Antarctic ice sheet retreated little during past eight million years
But increases in atmospheric carbon dioxide levels could affect stability and potential for sea level rise
Large parts of the massive East Antarctic Ice Sheet did not retreat significantly during a time when atmospheric carbon dioxide concentrations were similar to today’s levels, according to a team of researchers funded by the National Science Foundation (NSF). The finding could have significant implications for global sea level rise.
by Ross C.L. et al., 2017, June 10, 2018 in CO2Science
The global increase in the atmosphere’s CO2 content has been hypothesized to possess the potential to harm coral reefs directly. By inducing changes in ocean water chemistry that can lead to reductions in the calcium carbonate saturation state of seawater (Ω), it has been predicted that elevated levels of atmospheric CO2 may reduce rates of coral calcification, possibly leading to slower-growing — and, therefore, weaker — coral skeletons, and in some cases even death.
As we have previously pointed out on our website, however (see The End of the Ocean Acidification Scare for Corals and A Coral’s Biological Control of its Calcifying Medium to Favor Skeletal Growth), such projections often fail to account for the fact that coral calcification is a biologically mediated process, and that out in the real world, living organisms tend to find ways to meet and overcome the many challenges they face; and coral calcification in response to ocean acidification is no exception.
See also in French
by Kenneth Richard, June 7, 2018 in NoTricksZone
It has long been established in the scientific literature (and affirmed by the IPCC) that CO2 concentration changes followed Antarctic temperature changes by about 600 to 1000 years during glacial-interglacial transitions throughout the last ~800,000 years (Fischer et al., 1999; Monnin et al., 2001; Caillon et al., 2003; Stott et al., 2007; Kawamura et al., 2007).
In contrast, two new papers cite evidence that the timing of the lagged CO2 response to temperature changes may have ranged between 1300 and 6500 years in some cases. It would appear that a millennial-scale lagged response to temperature undermines the claim that CO2 concentration changes were a driver of climate in the ancient past.
by Anthony Watts, June 6, 2018 in WUWT
We have mentioned countless times on this blog that the warming oceans are evidence that CO2 is not the cause of global warming. To understand the climate you must first understand the oceans. The oceans control the global climate. As the oceans warm, they warm and alter the humidity of the atmosphere above them. The problem is, as we have pointed out countless times, CO2’s only defined mechanism by which to affect climate change is through the thermalization of LWIR between 13 and 18µ.
LWIR between 13 and 18µ doesn’t penetrate or warm the oceans. Visible radiation, mainly from the high energy blue end of the spectrum does. CO2 is transparent to incoming visible radiation. The energy stored in the atmosphere and land is insignificant when compared to the oceans. The oceans contain 2,000x the energy of the atmosphere, so small changes to the oceans can mean big changes in the atmospheric temperature. The oceans also produce vast amounts of CO2 (20 x the amount man produces), and the most abundant and potent greenhouse gas, water vapor.
by Prof. Dr. P. Berth, 5 juin 2018, in ScienceClimatEnergie.be
Voici quelques réflexions sur la théorie de l’acidification des océans. Selon cette théorie, le pH des océans diminuerait inlassablement, en raison du CO2 qui ne cesse de s’accumuler dans l’atmosphère.
• Les mesures directes de pH sont récentes et nous n’avons aucun recul. Selon les médias et les ONG écologistes, qui se basent sur le GIEC et sur certaines publications (e.g., Caldeira & Wickett 2003), le pH des océans aurait été de 8.25 en 1750. Cependant, il faut savoir que personne n’a jamais mesuré le pH des océans en 1750, puisque le concept de pH n’a été inventé qu’en 1909 (par le danois Søren P.L. Sørensen), et que les premiers appareils fiables pour mesurer le pH ne sont apparus qu’en 1924… Nous ne sommes donc pas certains de cette valeur de 8.25 pour 1750… La valeur de 8.25 est donc obtenue par des mesures indirectes et n’est donc pas certaine.
• A l’heure d’aujourd’hui, tous les pH sont possibles. Lorsqu’on dit que les océans actuels sont à un pH de 8.1, de quel océan parle-t-on? S’agit-il du pH moyen global? Si c’est de cela qu’on parle, quelle est l’incertitude sur la mesure? (i.e., l’écart-type?). Ceci n’est jamais indiqué. Il faut savoir que si l’on prend un jour de la semaine, tous les pH sont possibles dans les océans, comme l’illustre très bien la figure suivante.
by David Middleton, June 5, 2018 in WUWT
The Fable of Chicken Little of the Sea
Guest essay by David Middleton,
When if comes to debunking Gorebal Warming, Chicken Little of the Sea (“ocean acidification”) and other Warmunist myths, my favorite starting points are my old college textbooks.
Way back in the Pleistocene (spring semester 1979) in Marine Science I, our professor, Robert Radulski, assigned us The Oceans by Sverdrup (yes, that Sverdrup), Johnson and Fleming. Even though it was published in 1942, it was (and may still be) considered the definitive oceanography textbook. I looked up “ocean acidification” in the index… It wasn’t there.
The notion that CO2 partial pressure influences the pH of seawater isn’t a new concept, *surely* ocean acidification must have been mentioned in at least one of my college textbooks.
by K. Richard, June 4, 2018 in NoTricksZone
Dr. Boris M. Smirnov, a prominent atomic physicist, has authored 20 physics textbooks during the last two decades. His latest scientific paper suggests that the traditional “absorption band” model for calculating the effect of atmospheric CO2 during the radiative transfer process is flawed. New calculations reveal that the climate’s sensitivity to a doubling of the CO2 concentration is just 0.4 K, and the human contribution to that value is a negligible 0.02 K.
by Willis Essenbach, May 29, 2018 in WUWT
Inspired by Richard Keen’s interesting WUWT post on using eclipses to determine the clarity of the atmosphere, I went to the website of the Hawaiian Mauna Loa Observatory. They have some very fascinating datasets. One of them is a measurement of direct solar radiation, minute by minute, since about 1980.
I thought that I could use that dataset to determine the clarity of the atmosphere by looking at the maximum downwelling solar energy on a month by month basis. I’ve described my method of extracting the maximum solar energy from the minute by minute data in the appendix for those interested.
by John Harz, May 5, 2018 in SkepticalScience
A chronological listing of news articles posted on the Skeptical Science Facebook Page during the past week.
Recent CO2 measurements at Mauna Loa Observatory in Hawaii. (Scripps Institution of Oceanography)
by Fred Singer, May 15, 2018 in TheWallStreetJournal
It is generally thought that sea-level rise accelerates mainly by thermal expansion of sea water, the so-called steric component. But by studying a very short time interval, it is possible to sidestep most of the complications, like “isostatic adjustment” of the shoreline (as continents rise after the overlying ice has melted) and “subsidence” of the shoreline (as ground water and minerals are extracted).
I chose to assess the sea-level trend from 1915-45, when a genuine, independently confirmed warming of approximately 0.5 degree Celsius occurred. I note particularly that sea-level rise is not affected by the warming; it continues at the same rate, 1.8 millimeters a year, according to a 1990 review by Andrew S. Trupin and John Wahr. I therefore conclude—contrary to the general wisdom—that the temperature of sea water has no direct effect on sea-level rise. That means neither does the atmospheric content of carbon dioxide. | <urn:uuid:5b87fa9a-49e5-4d01-aac3-ca2ac906217b> | 3.53125 | 2,938 | Content Listing | Science & Tech. | 49.406997 | 95,500,218 |
Astronomers believed that early star-forming galaxies could have provided enough of the right kind of radiation to evaporate the fog, or turn the neutral hydrogen intergalactic medium into the charged hydrogen plasma that remains today. But they couldn't figure out how that radiation could escape a galaxy. Until now.
Jordan Zastrow, a doctoral astronomy student, and Sally Oey, a U-M astronomy professor, observed and imaged the relatively nearby NGC 5253, a dwarf starburst galaxy in the southern constellation Centaurus. Starburst galaxies, as their name implies, are undergoing a burst of intense star formation. While rare today, scientists believe they were very common in the early universe.
The researchers used special filters to see where and how the galaxy's extreme ultraviolet radiation, or UV light, was interacting with nearby gas. They found that the UV light is, indeed, evaporating gas in the interstellar medium. And it is doing so along a narrow cone emanating from the galaxy.
A paper on their work is published today (Oct. 12) in Astrophysical Journal Letters.
"We are not directly seeing the ultraviolet light. We are seeing its signature in the gas around the galaxy," Zastrow said.
In starburst galaxies, a superwind from these massive stars can clear a passageway through the gas in the galaxy, allowing the radiation to escape, the researchers said.
The shape of the cone they observed could help explain why similar processes in other galaxies have been difficult to detect.
"This feature is relatively narrow. The opening that is letting the UV light out is small, which makes this light challenging to detect. We can think of it as a lighthouse. If the lamp is pointed toward you, you can see the light. If it's pointed away from you, you can't see it," Zastrow said. "We believe the orientation of the galaxy is important as to whether we can detect escaping UV radiation."
The findings could help astronomers understand how the earliest galaxies affected the universe around them.
The paper is titled "An ionization cone in the dwarf starburst galaxy NGC 5253." Also contributing were researchers from the University of Maryland, MIT's Kavli Institute for Astrophysics and Space Research, and the University of California, Berkeley. The research is funded by the National Science Foundation. Observations were conducted with the Magellan Telescopes at Las Campanas Observatory in Chile.Contact: Nicole Casal Moore
Nicole Casal Moore | EurekAlert!
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:73ec0027-45cd-4719-8b3d-912b5fd52a5b> | 3.984375 | 1,089 | Content Listing | Science & Tech. | 39.488896 | 95,500,234 |
Acid Rain. What is it?. Acid rain is rain, snow or fog that is polluted by acid in the atmosphere and damages the environment. Two common air pollutants acidify rain: sulphur dioxide (SO 2 ) and nitrogen oxide (NO).
PowerPoint Slideshow about 'Acid Rain' - francis-vaughan
An Image/Link below is provided (as is) to download presentation
Download Policy: Content on the Website is provided to you AS IS for your information and personal use and may not be sold / licensed / shared on other websites without getting consent from its author.While downloading, if for some reason you are not able to download a presentation, the publisher may have deleted the file from their server.
Sulfur dioxide and nitrogen oxides are the main pollutants that cause acid rain. These pollutants are given off largely by the combustion of fossil fuels.
Reducing the use of fossil fuels therefore, including the use of electricity generated by coal- and oil-fired power plants, will help reduce acid rain-causing emissions. The following are some more specific suggestions on what you, as an individual, can do: | <urn:uuid:cc0cc9d8-a7a1-4435-a8cf-ab7a5bce2c57> | 3.484375 | 235 | Truncated | Science & Tech. | 37.024826 | 95,500,265 |
The Scalar Range Operator
I have been playing with the scalar range operator, and it confused me so much that I started experimenting and reading about it, and here is what I wrote up.
Numeric Values

This operator, in scalar context, has two forms that act as a bistable or flip-flop. The first form is an if condition that looks like this: if ($left .. $right) { ... }
This if statement works as follows: The condition evaluates false until $left evaluates true. Then the $left condition is ignored, and the condition continues to evaluate true until $right evaluates true, at which point the condition evaluates false, and it goes back to check $left. In this way, it flip-flops between waiting for the left side to evaluate true, and then waiting for the right side to evaluate true. Very strange, until you see it operating in a program:
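The program itself did not survive extraction; here is a minimal reconstruction consistent with the description (the sample data is my own). Reading from an in-memory filehandle keeps $. set, which is what a numeric flip-flop operand is compared against:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Four lines of sample data; an in-memory filehandle updates $. as it is read.
my $data = "line one\nline two\nline three\nline four\n";
open my $fh, '<', \$data or die $!;

my @out;
while (<$fh>) {
    # In scalar context, the literal constants 2 and 3 are each compared
    # against $. (the current line number): the condition flips true on
    # line 2 and flops back to false after line 3.
    push @out, $_ if 2 .. 3;
}
print @out;    # prints "line two" and "line three"
```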
This program prints out the second and third line of the data. A numeric value in the scalar range operator is therefore compared to $..
Regular Expressions

This example shows the use of two regular expressions in the scalar range operator:
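The original example was stripped in extraction; this reconstruction uses assumed sample data containing two start/end blocks:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @lines = qw(skip1 start alpha end skip2 start beta end skip3);

my @out;
for (@lines) {
    # True from a line matching /start/ through the next line matching
    # /end/, inclusive; lines outside those ranges are ignored.
    push @out, $_ if /start/ .. /end/;
}
print "$_\n" for @out;
```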
It prints out lines in the data beginning with the line that first evaluates true (start), until the line that next evaluates true (end). All the lines that are not bracketed by start/end pairs are ignored. Note that this data contains two blocks of lines that are between start and end markers, and the lines outside those ranges are ignored.
Numeric and Regular Expressions

Combining a numeric and a regex in the range operator also works as expected. In this example, the lines from $. == 1 until $_ =~ /end/ are printed.
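A reconstruction of that mixed example (sample data assumed), again reading from an in-memory filehandle so that $. is defined:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $data = "first\nsecond\nend here\nignored\n";
open my $fh, '<', \$data or die $!;

my @out;
while (<$fh>) {
    # True from $. == 1 until the line matching /end/, inclusive.
    push @out, $_ if 1 .. /end/;
}
print @out;
```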
Excluding Markers

In order to exclude the lines that contain start and end themselves, a further condition is required: the value returned by the scalar range operator must be neither 1 (the value on the line that starts the range) nor contain an E (the flag appended on the line that ends it). When the operator encounters the line that evaluates true for the right-hand side, its return value is (in the example below) 5E0. This number evaluates to 5, but contains the E that marks this line as terminating the right-hand side of the operator. This code prints all the lines between the start and end lines:
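A reconstruction of that code (the sample data is chosen so the end line's value is 5E0, matching the description above):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @lines = qw(skip start a b c end skip);

my @out;
for (@lines) {
    if (my $r = /start/ .. /end/) {
        # $r is 1 on the start line and ends in E0 on the end line;
        # skip both so only the lines in between are kept.
        push @out, $_ unless $r == 1 || $r =~ /E0$/;
    }
}
print "$_\n" for @out;    # a, b, c
```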
To better illustrate that, this program prints out that value, for all lines between start and end:
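A reconstruction of that illustration program (same assumed data): it collects the operator's sequence value for every line inside the range.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @lines = qw(skip start a b c end skip);

my @vals;
for (@lines) {
    if (my $r = /start/ .. /end/) {
        # The sequence value: 1 on the start line, incrementing on each
        # following line, with E0 appended on the end line.
        push @vals, "$r";
    }
}
print "@vals\n";    # 1 2 3 4 5E0
```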
Markers on Same Line

This example places both the start and end tokens on the same line. From the output, it can be seen that the combined line has a value of 1E0, which satisfies the test as both the first and last line of the desired input.
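A reconstruction with both markers on one (assumed) line of data:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @lines = ('before', 'start middle end', 'after');

my @vals;
for (@lines) {
    if (my $r = /start/ .. /end/) {
        # With .., the right-hand side is tested on the same line, so a
        # line holding both markers opens and closes the range at once,
        # yielding the value 1E0.
        push @vals, "$r: $_";
    }
}
print "$_\n" for @vals;    # 1E0: start middle end
```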
The Scalar ... Operator

The other form of the scalar range operator is .... This operator performs as the .. operator does, except that a line which satisfies one condition is not also evaluated against the other. So a line that contains both start and end is only evaluated once, in this case for the start condition; the data is then considered to have a start but no end, and is not properly handled in this example.
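A reconstruction demonstrating the difference (same assumed data as the previous example):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @lines = ('before', 'start middle end', 'after');

my @out;
for (@lines) {
    # With ..., /end/ is NOT tested on the line that matched /start/,
    # so the flip-flop never turns off: everything from the start line
    # to the end of the data is selected.
    push @out, $_ if /start/ ... /end/;
}
print "$_\n" for @out;    # 'start middle end' and 'after'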
This form of the scalar range operator is more efficient only if it is known that both conditions can never be true on the same line.
Update: Changed examples to not include so many edge cases. Thanks to hossman. | <urn:uuid:55d97d22-6f0c-4af9-b10e-498408f9ed0d> | 3.140625 | 694 | Comment Section | Software Dev. | 53.312079 | 95,500,277 |
Two issues that worry many geologists are global warming and the greenhouse effect. The greenhouse effect is a natural process that keeps the earth at livable temperatures. What does the greenhouse effect have to do with global warming? When humans release greenhouse gases into the air, the greenhouse effect alters the temperature of the earth: more of these gases in the atmosphere mean the earth starts to get warmer, and the result is global warming. On the other hand, if there were no greenhouse effect, the earth would be too cold for humans to comfortably exist.
In order to talk about global warming, we must first learn what causes the greenhouse effect. The three most common greenhouse gases are water vapor, carbon dioxide, and methane. Many of the sun's rays are absorbed by water vapor. Water vapor is a natural atmospheric gas and it accounts for "80 percent of natural greenhouse warming; the remaining 20 percent is due to other gasses that are present in very small amounts" (Murck, Skinner, and Porter 488). A greenhouse gas known as carbon dioxide is the second biggest absorber of the sun's heat rays. Humans affect the amount of carbon dioxide in the atmosphere in many ways. Every time fossil fuels are burned, more carbon dioxide is released into the air. Car exhaust emissions also increase the amount of carbon dioxide in the air, and more carbon dioxide means more heat rays being absorbed. This will cause the earth's temperature to warm. Another greenhouse gas is methane. "Methane absorbs infrared radiation 25 times more effectively than carbon dioxide, making it an important greenhouse gas despite its relatively low concentration" (Murck, Skinner, and Porter 490). Many studies have been performed on how methane is released into the atmosphere. Results have shown that methane is "generated by biological activity related to rice cultivation, leaks in domestic and industrial gas lines, and the digestive process of domestic livestock,...
Cited: "Fast Facts." Environmental Media Services. 10 July 2001. 23 Nov. 2001
Murck, Barbara W., Brian J. Skinner, and Stephen C. Porter. Environmental Geology. New York: John Wiley & Sons, 1996.
"The Planet Speaks." The Wilson Quarterly 25.4 (Autumn 2001): 124.
C3S participates in two European conferences on climate change communication
Freja Vamborg, senior climate scientist at C3S at IWF (Credit: International Weather and Climate Forum)
The ECMWF-run Copernicus Climate Change Service (C3S) presented state-of-the art climate information at two European events in June aimed at increasing climate change awareness with the public. The events focused on strategies to communicate the science more effectively and to boost cooperation between broadcast meteorologists and research scientists.
On 4-5 June, the International Weather and Climate Forum (IWF) ran its annual media workshop for about 50 participants, including TV weather presenters from dozens of nations as well as representatives from various international organisations and companies. Held in the UNESCO building in Paris, the meeting featured discussion groups on communicating extreme weather events; presentations and round-table talks by climate experts; and an interactive role-playing game designed to demonstrate the difficulties of communicating climate change to people with no formal education in the subject.
“The first day was partly dedicated to games looking at various ways to engage people on the question of climate change and risk, highlighting how our decisions would vary in a world with and without climate change,” says Freja Vamborg, a senior climate scientist at C3S.
C3S presented its monthly products and the annual European State of the Climate report to demonstrate how they are used by various media outlets and to garner feedback on the products’ potential value for weather broadcasting. There was also an emphasis on the latest methods used to obtain Earth system observations from satellites, presented by representatives from the European Space Agency and EUMETSAT. In general, this showed the amount of climate and meteorological information available, which needs to be communicated to the public more effectively, according to Vamborg.
“The presentation part of the workshop mainly focused on the potential sources of data, and the rest was about communicating on climate change as well as the barriers to this, such as relatability to lay people,” she says.
Copernicus data is already used by the pan-European service Euronews, which airs exclusive 24-hour air quality forecasts based on Copernicus Atmosphere Monitoring Service (CAMS) data on a daily basis. The channel also broadcasts a monthly climate update using C3S maps on surface air temperature, sea ice and hydrological variables in 156 countries. Other European broadcasters in countries such as Belgium, Germany, Greece, the Netherlands and Switzerland also present this information in their weather forecasts.
Four days after the IWF event, C3S promoted its data and services at a seminar in Madrid, where scientists and broadcast meteorologists met to discuss climate change communication, focusing on attribution science. Presented by the Spanish Association of Meteorology Broadcasters (ACOMET), the event featured sessions on the social impact, business prospects and communication challenges of climate change in Spain. The aim was to present current attribution studies, promote new ones and improve communication methods.
Attribution science is an emerging field that aims to assess how human-induced climate change affects local weather. While the evidence for climate change is overwhelming, the extent to which it is caused by human activity is difficult to determine. An increasing body of research has raised hopes in the scientific community that isolated cases of extreme weather will soon be attributable to climate change with a higher degree of certainty. This may have legal implications for governments and companies that fail to implement measures to restrict greenhouse-gas emissions.
“In recent years, for example, Spain and Portugal have suffered from severe droughts followed by heavy rains,” says Joaquin Muñoz Sabater, a C3S research scientist. “The only way to find links between these events and climate change is to compare them through climatology and reanalysis. It is very difficult to prove conclusively that a single event is caused by climate change, but the frequency of extreme events might establish a link.”
C3S is in the initial stages of setting up a pilot scheme that aims to perform the necessary climate attribution calculations within a week of specific weather events. The pilot scheme is scheduled to begin in 2019.
“The main objective of the C3S presentation at the ACOMET seminar was to raise awareness in Spain about how our monthly climate charts and data can help the media, particularly weather forecasters,” Muñoz says. “Our catalogue of products was very well received and I believe Copernicus is going to play a very supportive role for communicators in the future.”
In the Nevada desert, geologists and engineers are beginning a task that will take decades, preparing an underground repository meant to isolate radioactive wastes for at least 10,000 years.
But how can scientists predict what conditions will be like 150 lifetimes from now? The problem is complicated by the fact that the people doing the predicting - the engineers and the geologists - don't think much of each other's opinions.
About 100 miles northwest of Las Vegas, the Department of Energy is supervising the $2 billion effort to determine whether Yucca Mountain is suitable as a disposal site for the most radioactive of the waste from 50 years of nuclear bomb production and civilian power plant operation.
The Energy Department's track record for anticipating the future is poor; in the last four decades it has allowed radioactive waste to spread at its weapons plants to a degree it once estimated would take centuries.
The Nuclear Regulatory Commission, which must approve the Yucca Mountain project, is accustomed to licensing civilian reactors with an expected lifetime of 30 or 40 years. The repository project is ''pushing the state of the art in making projections,'' said Hugh L. Thompson Jr., who has overseen the commission's licensing effort for the Nevada site.
Though the wastes that would go into the site would be hazardous for millions of years, predictions are limited to 10,000 years. Beyond that time some questions are simply unanswerable. For example, over the millenniums changing rainfall patterns could make analyses of ground and surface water irrelevant.
Even 10,000-year predictions seem far-fetched to Robert R. Loux, executive director of Nevada's Nuclear Waste Project Office, which is trying to stop the repository. He pointed out that Congress forced the repository on Nevada because, among other things, the Yucca Mountain area has a low population density. But, he said, ''Over 10,000 years, is the population relevant?'' It would have been hard to predict 150 years ago that the Las Vegas area would have more than 600,000 people.
But in the end geologic predictions, not demographic ones, will carry the most weight. The assumption is that no ''engineered features,'' or man-made parts, will be effective after even 1,000 years. Beyond that date, containment of the wastes must be left to nature. That leaves the most important predictions to geologists, who acknowledge that their field relies heavily on interpreting history, not predicting the future.
''It's not an experimental science,'' said Richard L. Meehan, an engineer who consults on geologic questions. ''You don't set up mountains in laboratories and make them work for 10 million years, and then say that 'four out of five of them do this.' '' Mr. Meehan is the author of a book on an ill-fated nuclear reactor in Vallecitos, Calif. In that case, some geologists said a fault near the reactor made operation hazardous, but engineers generally disagreed. The debate went on for years until the builder, General Electric, gave up and closed the plant. Geologists have raised questions about volcanic activity and possible water leaks at the Yucca Mountain site but have not yet reached any conclusions.
Geology is more like history, Mr. Meehan said. ''A historian can talk about the Roman Empire or Attila the Hun, but can a historian predict whether there's going to be a World War III?'' he said.
While engineers complain that the geologists' predictions can overreach the bounds of their expertise, geologists contend that the engineers sacrifice scientific rigor for stubborn adherence to procedures used to insure good engineering. At Yucca Mountain the two groups have clashed over the engineers' insistence that all work proceed according to a system called ''quality assurance.'' Engineers plan and supervise a job, then check to see that the plan was followed in every detail.
Scientists usually do not follow so structured a process. Last summer 16 geologists in the Denver office of the United States Geological Survey, which is doing much of the work at Yucca Mountain, complained that the quality controls were ''counterproductive to the needs of good scientific investigations.''
''There is no facility for trial and error, for genuine research, for innovation, or for creativity,'' their report said. After the engineers drilled temporary holes in the mountain, for example, the geologists wanted to sample the gases coming from them. Gas circulation is critical because the nuclear waste would produce radioactive carbon-14 gas.
But the engineers ordered the geologists to stop, because they did not have an approved quality assurance program. The geologists said that by the time all the paperwork could be approved the holes probably would be filled in and the opportunity to gather the data - which might reflect poorly on the suitability of the site - would be lost.
In his book, Mr. Meehan described geologists as ''cracker-barrel philosophers, with their flair for contentious rhetoric, their epistemologically confusing 'historical' science, their preference for contemplation and debate over decision and action.'' They are, he said, ''perfect for confounding the engineers.'' At Yucca Mountain this clash of intellectual subcultures has confounded the already difficult process of seeing 100 centuries into the future.
LOS ANGELES – Call it space grave robbery for a cause: Imagine scavenging defunct communication satellites for their valuable parts and recycling them to build brand new ones for cheap.
It's the latest pet project from the Pentagon's research wing known for its quirky and sometimes out-there ideas. The Defense Advanced Research Projects Agency is spending $180 million to test technologies that could make this possible.
When satellites retire, certain parts -- such as antennas and solar panels -- often still work. There's currently no routine effort to salvage and reuse satellite parts once they're launched into space.
DARPA thinks it can save money by repurposing in orbit.
"We're attempting to essentially increase the return on investment ... and try to find a way to really change the economics so that we can lower the cost" of military space missions, said DARPA program manager David Barnhart.
Work on DARPA's Phoenix program -- named after the mythical bird that rose from its own ashes -- is already under way. The agency awarded contracts to several companies to develop new technologies, and it is seeking fresh proposals from interested parties next month.
A key test will come in 2016 when it launches a demonstration mission that seeks to breathe new life to an antenna from a yet-to-be-determined decommissioned satellite. DARPA has identified about 140 retired satellites that it can choose from for its first test.
Here's the vision: Launch a robotic mechanic outfitted with a toolkit that can rendezvous with defunct satellites and mine them for parts. The plan also calls for the separate launch of mini-satellites. The robotic mechanic would then string together the mini-satellites and old satellite parts to create a new communication system.
It's like doing robotic surgery in zero gravity.
DARPA officials said one way to keep costs down is for the mini-satellites to hitch a ride aboard available space on commercial rockets.
Harvard astrophysicist Jonathan McDowell, who tracks the world's space launches and satellites, called it "an interesting idea" that may reduce costs in the long-term.
"The first few times you do this, it'll definitely be more expensive than just building the new antenna on your satellite from scratch. But in the long run, it might work out," he said in an email.
McDowell said the biggest challenge in the upcoming demo test is separating the antenna from the retired satellite without breaking it and then successfully integrating it with the mini-satellites.
DARPA is used to funding blue-sky research and a few projects are slowly becoming reality.
In 2011, it dangled seed money to jumpstart a way to rocket people to a star within a century in what's known as the 100-year Starship program.
Long before Google tested self-driving cars, DARPA sponsored a robotic road race in which university-designed autonomous cars eyed for the finish line without human help. | <urn:uuid:346794f8-93dc-434d-8e65-0bb68ad2b61d> | 2.8125 | 602 | News Article | Science & Tech. | 44.23146 | 95,500,312 |