Canada and Australia are home to two of the world's largest intact ecological regions, the boreal forests of subarctic Canada and the tropical savannah and deserts of the Outback of Australia. The steps being taken by the two countries' Indigenous peoples to protect the natural environment offer models for conservation of other parts of the world.
What is being achieved in these stunning and globally significant ecological regions represents a new frontier for conservation globally. Steven Kallick, Director, International Lands Conservation, The Pew Charitable Trusts
The experiences and actions taken in both countries to conserve their ecologically different but globally important boreal and Outback regions are the focus of an important five-day conference in Baltimore. The Pew Charitable Trusts is sponsoring an opening symposium at the International Congress for Conservation Biology, which starts Monday, July 22, and will be attended by hundreds of international scientists.
A new report, “Conserving the World's Last Great Forest Is Possible,” co-authored by the International Boreal Conservation Science Panel and other well-known academics, will be released at the symposium. The report highlights not only new science about why at least half of large intact eco-regions should be maintained in protected status but also how the Indigenous peoples of Canada are developing world-leading land-use plans for large portions of Canada's boreal forest landscapes.
Similarly, Australia now has 58 Indigenous Protected Areas covering almost 50 million hectares (more than 120 million acres)—an area larger than California—and nearly 700 Indigenous people employed as part of its Indigenous Ranger Program. In addition to conservation, these areas maintain traditional culture, bring Aboriginal owners back to their land, and allow skills development and employment.
The intact nature of the Canadian boreal and Australian Outback presents a unique opportunity to proactively maintain and conserve large-scale functioning ecosystems and biodiversity. Yet these areas are also seen by some as the last frontiers for unbridled natural resource extraction. The resulting pressure to balance ecological integrity and biodiversity with economic needs has led to innovative new ideas and collaborations that have already produced dramatic results—raising the bar for large landscape-conservation initiatives around the globe.
In Canada, for example, more than 526,000 square kilometers (130 million acres) of protected areas are in place in the boreal forest region. What's more, the provincial governments of two of the largest provinces, Ontario and Quebec, have committed in recent years to establishing an additional 800,000 square kilometers (almost 200 million acres) of new protected areas.
Australia recently established four large Indigenous Protected Areas in the rugged and remote Kimberley region of Western Australia, creating the largest Indigenous-owned conservation corridor in the country. These areas protect 69,139 square kilometers (17 million acres) of unspoiled coastline and tropical savannah and represent the latest chapter in the story of successful Aboriginal conservation in Australia.
"What is being achieved in these stunning and globally significant ecological regions represents a new frontier for conservation globally. The achievements to date offer inspiration for anyone committed to finding practical and effective solutions to the ongoing challenge of marrying conservation with economic needs," says Steven Kallick, director of Pew's global wilderness programs, who will speak at a session on conservation during the Baltimore conference.
"What is clear from the experiences in both Canada and Australia is that Indigenous rights and leadership are key to real, long-term conservation success," adds conference presenter Barry Traill, who directs Pew's work in Australia.
Other presenters on the Canadian-Australian experience are Fritz Reid of Ducks Unlimited; Valerie Courtois, Canadian Boreal Initiative; Jeff Wells, International Boreal Conservation Campaign; Aran O'Carroll, Canadian Boreal Forest Agreement Secretariat; and James Levitt, Program on Conservation Innovation, Harvard Forest, Harvard University.
A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group).
These two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. Symmetries are frequently amenable to mathematical formulations such as group representations and can, in addition, be exploited to simplify many problems.
Arguably the most important example of a symmetry in physics is that the speed of light has the same value in all frames of reference, which is described in mathematical terms by the Poincaré group, the symmetry group of special relativity. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity.
- 1 Symmetry as a kind of invariance
- 2 Local and global symmetries
- 3 Continuous symmetries
- 4 Discrete symmetries
- 5 Mathematics of physical symmetry
- 6 Mathematics
- 7 See also
- 8 References
- 9 External links
Symmetry as a kind of invariance
Invariance is specified mathematically by transformations that leave some property (e.g. quantity) unchanged. This idea can apply to basic real-world observations. For example, temperature may be homogeneous throughout a room. Since the temperature does not depend on the position of an observer within the room, we say that the temperature is invariant under a shift in an observer's position within the room.
Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve how the sphere "looks".
Invariance in force
The above ideas lead to the useful idea of invariance when discussing observed physical symmetry; this can be applied to symmetries in forces as well.
For example, an electric field due to an electrically charged wire of infinite length is said to exhibit cylindrical symmetry, because the electric field strength at a given distance r from the wire will have the same magnitude at each point on the surface of a cylinder (whose axis is the wire) with radius r. Rotating the wire about its own axis does not change its position or charge density, hence it will preserve the field. The field strength at a rotated position is the same. This is not true in general for an arbitrary system of charges.
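The cylindrical symmetry of the wire's field can be sketched numerically. The following is a minimal illustration of our own (the helper names and physical values are assumptions, not from the article): a long straight wire with uniform linear charge density is modelled as many point charges, and the field magnitude is compared at several points on a cylinder of radius r around the wire, and against the textbook infinite-wire result E = 2Kλ/r.

```python
import math

K = 8.9875517923e9   # Coulomb constant, N m^2 C^-2
LAM = 1e-9           # linear charge density, C/m (illustrative value)

def field_at(point, half_length=100.0, n_segments=40001):
    """Electric field of a discretised finite line charge on the z-axis."""
    ex = ey = ez = 0.0
    dz = 2.0 * half_length / n_segments
    for i in range(n_segments):
        z = -half_length + (i + 0.5) * dz   # midpoint of segment i
        rx, ry, rz = point[0], point[1], point[2] - z
        r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
        q = LAM * dz
        ex += K * q * rx / r3
        ey += K * q * ry / r3
        ez += K * q * rz / r3
    return ex, ey, ez

def magnitude(e):
    return math.sqrt(e[0] ** 2 + e[1] ** 2 + e[2] ** 2)

r = 0.05  # 5 cm from the wire
# Three points at the same radius but different azimuthal angles
points = [(r * math.cos(a), r * math.sin(a), 0.0) for a in (0.0, 1.1, 2.7)]
mags = [magnitude(field_at(p)) for p in points]

analytic = 2 * K * LAM / r  # infinite-wire result E = 2K*lambda/r
```

The three magnitudes agree with each other (rotational symmetry) and, because the wire is long compared with r, with the analytic infinite-wire value.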
In Newton's theory of mechanics, given two bodies, each with mass m, starting at the origin and moving along the x-axis in opposite directions, one with speed v1 and the other with speed v2 the total kinetic energy of the system (as calculated from an observer at the origin) is 1⁄2m(v12 + v22) and remains the same if the velocities are interchanged. The total kinetic energy is preserved under a reflection in the y-axis.
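The interchange symmetry just described takes two lines to check (the numbers are illustrative, not from the text):

```python
# The total kinetic energy (1/2)m(v1^2 + v2^2) is unchanged when the two
# velocities are interchanged -- the reflection symmetry described above.

def total_kinetic_energy(m, v1, v2):
    return 0.5 * m * (v1 ** 2 + v2 ** 2)

m = 2.0                                          # both bodies share mass m
ke = total_kinetic_energy(m, 3.0, -5.0)          # one body at +3, one at -5
ke_swapped = total_kinetic_energy(m, -5.0, 3.0)  # velocities interchanged
# ke == ke_swapped == 34.0
```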
The last example above illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system. The above example shows that the total kinetic energy will be the same if v1 and v2 are interchanged.
Local and global symmetries
Symmetries may be broadly classified as global or local. A global symmetry is one that holds at all points of spacetime, whereas a local symmetry is one that has a different symmetry transformation at different points of spacetime; specifically a local symmetry transformation is parameterised by the spacetime co-ordinates. Local symmetries play an important role in physics as they form the basis for gauge theories.
Continuous symmetries

The two examples of rotational symmetry described above - spherical and cylindrical - are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system. For example, the wire may be rotated through any angle about its axis and the field strength will be the same on a given cylinder. Mathematically, continuous symmetries are described by continuous or smooth functions. An important subclass of continuous symmetries in physics are spacetime symmetries.
Continuous spacetime symmetries are symmetries involving transformations of space and time. These may be further classified as spatial symmetries, involving only the spatial geometry associated with a physical system; temporal symmetries, involving only changes in time; or spatio-temporal symmetries, involving changes in both space and time.
- Time translation: A physical system may have the same features over a certain interval of time; this is expressed mathematically as invariance under the transformation t → t + a for any real numbers t and a in the interval. For example, in classical mechanics, a particle solely acted upon by gravity will have gravitational potential energy mgh when suspended from a height h above the Earth's surface. Assuming no change in the height of the particle, this will be the total gravitational potential energy of the particle at all times. In other words, by considering the state of the particle at some time t0 and again at any later time t0 + a within the interval, the particle's total gravitational potential energy will be preserved.
- Spatial translation: These spatial symmetries are represented by transformations of the form r → r + a and describe those situations where a property of the system does not change with a continuous change in location. For example, the temperature in a room may be independent of where the thermometer is located in the room.
- Spatial rotation: These spatial symmetries are classified as proper rotations and improper rotations. The former are just the 'ordinary' rotations; mathematically, they are represented by square matrices with unit determinant. The latter are represented by square matrices with determinant −1 and consist of a proper rotation combined with a spatial reflection (inversion). For example, a sphere has proper rotational symmetry. Other types of spatial rotations are described in the article Rotation symmetry.
- Poincaré transformations: These are spatio-temporal symmetries which preserve distances in Minkowski spacetime, i.e. they are isometries of Minkowski space. They are studied primarily in special relativity. Those isometries that leave the origin fixed are called Lorentz transformations and give rise to the symmetry known as Lorentz covariance.
- Projective symmetries: These are spatio-temporal symmetries which preserve the geodesic structure of spacetime. They may be defined on any smooth manifold, but find many applications in the study of exact solutions in general relativity.
- Inversion transformations: These are spatio-temporal symmetries which generalise Poincaré transformations to include other conformal one-to-one transformations on the space-time coordinates. Lengths are not invariant under inversion transformations but there is a cross-ratio on four points that is invariant.
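The determinant criterion in the spatial-rotation item above can be verified directly. This is a sketch of our own in plain Python (the helper names are assumptions): a proper rotation has determinant +1, and composing it with a spatial reflection gives determinant −1.

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matmul3(a, b):
    """Product of two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_z(theta):
    """Proper rotation by angle theta about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# Reflection through the xy-plane (z -> -z)
mirror_xy = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]]

proper_det = det3(rot_z(0.7))                        # +1: proper rotation
improper_det = det3(matmul3(mirror_xy, rot_z(0.7)))  # -1: rotation + reflection
```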
Mathematically, spacetime symmetries are usually described by smooth vector fields on a smooth manifold. The underlying local diffeomorphisms associated with the vector fields correspond more directly to the physical symmetries, but the vector fields themselves are more often used when classifying the symmetries of the physical system.
Some of the most important vector fields are Killing vector fields which are those spacetime symmetries that preserve the underlying metric structure of a manifold. In rough terms, Killing vector fields preserve the distance between any two points of the manifold and often go by the name of isometries.
Discrete symmetries

A discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping'; these swaps are usually called reflections or interchanges.
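The square example can be made concrete with a small sketch of our own (names and tolerances are assumptions): rotations by multiples of 90 degrees map the square's vertex set to itself, while other angles, such as 45 degrees, do not.

```python
import math

SQUARE = {(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)}

def rotate(point, theta):
    """Rotate a point about the origin by theta radians."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    # Round to suppress floating-point noise such as cos(pi/2) != 0 exactly
    return (round(c * x - s * y, 9), round(s * x + c * y, 9))

def is_symmetry(theta):
    """True if rotating the vertex set by theta leaves it unchanged."""
    return {rotate(p, theta) for p in SQUARE} == SQUARE

quarter_turn = is_symmetry(math.pi / 2)  # True
half_turn = is_symmetry(math.pi)         # True
eighth_turn = is_symmetry(math.pi / 4)   # False: vertices land on edge midpoints
```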
- Time reversal: Many laws of physics describe real phenomena when the direction of time is reversed. Mathematically, this is represented by the transformation t → −t. For example, Newton's second law of motion still holds if, in the equation F = m d²x/dt², t is replaced by −t. This may be illustrated by recording the motion of an object thrown up vertically (neglecting air resistance) and then playing it back. The object will follow the same parabolic trajectory through the air, whether the recording is played normally or in reverse. Thus, position is symmetric with respect to the instant that the object is at its maximum height.
- Spatial inversion: These are represented by transformations of the form r → −r and indicate an invariance property of a system when the coordinates are 'inverted'. Said another way, these are symmetries between a certain object and its mirror image.
- Glide reflection: These are represented by a composition of a translation and a reflection. These symmetries occur in some crystals and in some planar symmetries, known as wallpaper symmetries.
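The vertical-throw illustration in the time-reversal item above can be checked numerically. This sketch uses values of our own choosing: the height of a projectile is symmetric about the apex time t_apex = v0/g, so the trajectory looks the same played forwards or backwards about that instant.

```python
import math

g, v0 = 9.81, 20.0       # gravitational acceleration and launch speed (illustrative)
t_apex = v0 / g          # time of maximum height

def height(t):
    """Height of a projectile thrown straight up, neglecting air resistance."""
    return v0 * t - 0.5 * g * t * t

dt = 0.37
before = height(t_apex - dt)   # height dt seconds before the apex
after = height(t_apex + dt)    # height dt seconds after the apex
# before and after agree up to floating-point rounding
```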
C, P, and T symmetries
The Standard model of particle physics has three related natural near-symmetries. These state that the universe in which we live should be indistinguishable from one where a certain type of change is introduced.
- C-symmetry (charge symmetry), a universe where every particle is replaced with its antiparticle
- P-symmetry (parity symmetry), a universe where everything is mirrored along the three physical axes
- T-symmetry (time reversal symmetry), a universe where the direction of time is reversed. T-symmetry is counterintuitive (surely the future and the past are not symmetrical) but explained by the fact that the Standard model describes local properties, not global ones like entropy. To properly reverse the direction of time, one would have to put the big bang and the resulting low-entropy state in the "future." Since we perceive the "past" ("future") as having lower (higher) entropy than the present (see perception of time), the inhabitants of this hypothetical time-reversed universe would perceive the future in the same way as we perceive the past.
These symmetries are near-symmetries because each is broken in the present-day universe. However, the Standard Model predicts that the combination of the three (that is, the simultaneous application of all three transformations) must be a symmetry, called CPT symmetry. CP violation, the violation of the combination of C- and P-symmetry, is necessary for the presence of significant amounts of baryonic matter in the universe. CP violation is a fruitful area of current research in particle physics.
A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the Standard Model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the Standard Model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. Currently, the LHC is preparing for a run that will test supersymmetry.
Mathematics of physical symmetry
Continuous symmetries are specified mathematically by continuous groups (called Lie groups). Many physical symmetries are isometries and are specified by symmetry groups. Sometimes this term is used for more general types of symmetries. The set of all proper rotations (about any angle) through any axis of a sphere form a Lie group called the special orthogonal group SO(3). (The 3 refers to the three-dimensional space of an ordinary sphere.) Thus, the symmetry group of the sphere with proper rotations is SO(3). Any rotation preserves distances on the surface of the ball. The set of all Lorentz transformations form a group called the Lorentz group (this may be generalised to the Poincaré group).
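Two defining properties of SO(3) can be checked numerically with a sketch of our own (all helper names are assumptions): the composition of two rotations is again a rotation (closure, verified via orthogonality), and any rotation preserves lengths, i.e. it is an isometry of the sphere.

```python
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, v):
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))

def norm(v):
    return math.sqrt(sum(c * c for c in v))

r = matmul3(rot_z(0.4), rot_x(1.3))  # composition of two rotations
v = (0.2, -1.5, 3.0)
len_before = norm(v)
len_after = norm(apply(r, v))        # unchanged: rotations are isometries

# Closure check: r^T r = I, so the composed map is itself orthogonal
r_t = [list(col) for col in zip(*r)]
rt_r = matmul3(r_t, r)
```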
Discrete groups describe discrete symmetries. For example, the symmetries of an equilateral triangle are characterized by the symmetric group S3.
An important type of physical theory based on local symmetries is called a gauge theory and the symmetries natural to such a theory are called gauge symmetries. Gauge symmetries in the Standard model, used to describe three of the fundamental interactions, are based on the SU(3) × SU(2) × U(1) group. (Roughly speaking, the symmetries of the SU(3) group describe the strong force, the SU(2) group describes the weak interaction and the U(1) group describes the electromagnetic force.)
Also, the reduction by symmetry of the energy functional under the action by a group and spontaneous symmetry breaking of transformations of symmetric groups appear to elucidate topics in particle physics (for example, the unification of electromagnetism and the weak force in physical cosmology).
Conservation laws and symmetry
The symmetry properties of a physical system are intimately related to the conservation laws characterizing that system. Noether's theorem gives a precise description of this relation. The theorem states that each continuous symmetry of a physical system implies that some physical property of that system is conserved. Conversely, each conserved quantity has a corresponding symmetry. For example, the isometry of space gives rise to conservation of (linear) momentum, and isometry of time gives rise to conservation of energy.
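Noether's theorem can be illustrated with a toy example of our own (the masses, spring constant and step sizes are assumptions): a two-particle system whose potential depends only on the separation x2 − x1 is invariant under spatial translation, so its total linear momentum should be conserved even though the individual momenta change.

```python
m1, m2, k = 1.0, 3.0, 2.0    # masses and spring constant (illustrative)
x1, x2 = 0.0, 1.5            # initial positions
v1, v2 = 0.7, -0.2           # initial velocities
dt, steps = 1e-3, 10000

def forces(x1, x2):
    """Spring potential V = (k/2)(x2 - x1 - 1)^2 depends only on separation,
    so the forces on the two particles are equal and opposite."""
    f = k * (x2 - x1 - 1.0)  # force on particle 1
    return f, -f

p_initial = m1 * v1 + m2 * v2
for _ in range(steps):
    f1, f2 = forces(x1, x2)
    v1 += f1 / m1 * dt       # symplectic Euler update
    v2 += f2 / m2 * dt
    x1 += v1 * dt
    x2 += v2 * dt
p_final = m1 * v1 + m2 * v2
# p_final stays equal to p_initial (about 0.1 here) up to floating-point
# rounding, as translation invariance of the potential predicts.
```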
The following table summarizes some fundamental symmetries and the associated conserved quantity.
| Class | Symmetry | Conserved quantity |
|---|---|---|
| Continuous spacetime symmetry | translation in time | energy |
| Continuous spacetime symmetry | translation in space | linear momentum |
| Continuous spacetime symmetry | rotation in space | angular momentum |
| Discrete symmetry | P, coordinate inversion | spatial parity |
| Discrete symmetry | C, charge conjugation | charge parity |
| Discrete symmetry | T, time reversal | time parity |
| Discrete symmetry | CPT | product of parities |
| Internal symmetry (independent of spacetime coordinates) | U(1) gauge transformation | electric charge |
| Internal symmetry | U(1) gauge transformation | lepton generation number |
| Internal symmetry | U(1) gauge transformation | hypercharge |
| Internal symmetry | U(1)Y gauge transformation | weak hypercharge |
| Internal symmetry | U(2) [ U(1) × SU(2) ] | electroweak force |
| Internal symmetry | SU(2) gauge transformation | isospin |
| Internal symmetry | SU(2)L gauge transformation | weak isospin |
| Internal symmetry | P × SU(2) | G-parity |
| Internal symmetry | SU(3) "winding number" | baryon number |
| Internal symmetry | SU(3) gauge transformation | quark color |
| Internal symmetry | SU(3) (approximate) | quark flavor |
| Internal symmetry | S(U(2) × U(3)) [ U(1) × SU(2) × SU(3) ] | — |
Mathematics

Continuous symmetries in physics preserve transformations. One can specify a symmetry by showing how a very small transformation affects various particle fields. The commutator of two of these infinitesimal transformations is equivalent to a third infinitesimal transformation of the same kind; hence they form a Lie algebra.
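The closure property just stated can be checked for the familiar rotation generators. This is a sketch with our own notation: the commutator of the infinitesimal x- and y-rotation generators is exactly the z-rotation generator, [Lx, Ly] = Lz, so the set closes into a Lie algebra.

```python
# Infinitesimal rotation generators about the x-, y- and z-axes.
Lx = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
Ly = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
Lz = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]

def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def commutator(a, b):
    """[a, b] = ab - ba."""
    ab, ba = matmul3(a, b), matmul3(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(3)] for i in range(3)]

# The commutator of two infinitesimal transformations is a third one of the
# same kind -- exactly the Lie-algebra property described in the text.
closure = commutator(Lx, Ly)
# closure == Lz
```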
An infinitesimal coordinate transformation acts on a field φ(x) as

    δφ(x) = h^μ(x) ∂_μ φ(x)

for a general field h(x). Without gravity only the Poincaré symmetries are preserved, which restricts h(x) to be of the form:

    h^μ(x) = M^μ_ν x^ν + P^μ
where M is an antisymmetric matrix (giving the Lorentz and rotational symmetries) and P is a general vector (giving the translational symmetries). Other symmetries affect multiple fields simultaneously. For example, local gauge transformations apply to both a vector and a spinor field:

    δψ(x) = λ(x)·τ ψ(x),    δA_μ(x) = ∂_μ λ(x)

where τ are generators of a particular Lie group. So far the transformations on the right have only included fields of the same type. Supersymmetries are defined according to how they mix fields of different types.
Another symmetry which is part of some theories of physics and not in others is scale invariance, which involves Weyl transformations of the following kind:

    δφ(x) = Ω(x) φ(x)
If the fields have this symmetry then it can be shown that the field theory is almost certainly conformally invariant also. This means that in the absence of gravity h(x) would be restricted to the form:

    h^μ(x) = M^μ_ν x^ν + P^μ + D x^μ + K^μ x² − 2 x^μ K_ν x^ν
with D generating scale transformations and K generating special conformal transformations. For example, N=4 super-Yang–Mills theory has this symmetry while General Relativity does not, although other theories of gravity, such as conformal gravity, do. The 'action' of a field theory is an invariant under all the symmetries of the theory. Much of modern theoretical physics involves speculating on the various symmetries the Universe may have and finding the invariants needed to construct field theories as models.
In string theories, since a string can be decomposed into an infinite number of particle fields, the symmetries on the string world sheet are equivalent to special transformations that mix an infinite number of fields.
- Conservation law
- Conserved current
- Covariance and contravariance
- Fictitious force
- Galilean invariance
- Gauge theory
- General covariance
- Harmonic coordinate condition
- Inertial frame of reference
- Lie group
- List of mathematical topics in relativity
- Lorentz covariance
- Noether's theorem
- Poincaré group
- Special relativity
- Spontaneous symmetry breaking
- Standard model
- Standard model (mathematical formulation)
- Symmetry breaking
- Wheeler–Feynman Time-Symmetric Theory
- Leon Lederman and Christopher T. Hill (2005) Symmetry and the Beautiful Universe. Amherst NY: Prometheus Books.
- Schumm, Bruce (2004) Deep Down Things. Johns Hopkins Univ. Press.
- Victor J. Stenger (2000) Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpt. 12 is a gentle introduction to symmetry, invariance, and conservation laws.
- Anthony Zee (2007) Fearful Symmetry: The search for beauty in modern physics, 2nd ed. Princeton University Press. ISBN 978-0-691-00946-9. 1986 1st ed. published by Macmillan.
- Brading, K., and Castellani, E., eds. (2003) Symmetries in Physics: Philosophical Reflections. Cambridge Univ. Press.
- -------- (2007) "Symmetries and Invariances in Classical Physics" in Butterfield, J., and John Earman, eds., Philosophy of Physic Part B. North Holland: 1331-68.
- Debs, T. and Redhead, M. (2007) Objectivity, Invariance, and Convention: Symmetry in Physical Science. Harvard Univ. Press.
- John Earman (2002) "Laws, Symmetry, and Symmetry Breaking: Invariance, Conservations Principles, and Objectivity." Address to the 2002 meeting of the Philosophy of Science Association.
- G. Kalmbach H.E.: Quantum Mathematics: WIGRIS. RGN Publications, Delhi, 2014
- Mainzer, K. (1996) Symmetries of nature. Berlin: De Gruyter.
- Mouchet, A. "Reflections on the four facets of symmetry: how physics exemplifies rational thinking". European Physical Journal H 38 (2013) 661 hal.archives-ouvertes.fr:hal-00637572
- Thompson, William J. (1994) Angular Momentum: An Illustrated Guide to Rotational Symmetries for Physical Systems. Wiley. ISBN 0-471-55264-X.
- Bas Van Fraassen (1989) Laws and symmetry. Oxford Univ. Press.
- Eugene Wigner (1967) Symmetries and Reflections. Indiana Univ. Press. | <urn:uuid:b1d487fc-35ac-4bd0-943e-4856ae1889b9> | 4.09375 | 4,195 | Knowledge Article | Science & Tech. | 30.613648 | 95,544,447 |
1. Species of higher trophic levels are predicted to be more vulnerable to disturbances (e.g. by forestry) than their prey because of low population densities, extreme specialisation and reliance on intact trophic chains.
2. The aim of this study was to acquire some much-needed basic information on saproxylic parasitoids in boreal forest landscapes. To obtain reliable estimates of species richness, abundance, assemblage composition and host associations of saproxylic parasitoids in different stand types (clear-cuts, mature managed forests and old-growth reserves), we used two different methods (emergence traps and window traps).
3. Window traps caught more species and gave a better measure of the species pool in different stand types, while emergence traps were more suitable for detailed analyses concerning substrate requirements, hatching periods and, to some extent, host choice.
4. The general distribution pattern revealed no significant differences in species richness among stand types, but parasitoid assemblages were affected by forest successional stage. Idiobionts, dominated by Ontsira antica and Bracon obscurator, preferred clear-cuts over forested sites, while koinobionts, especially Cosmophorus regius, were more common in mature forests and reserves. We conclude that the stand types studied were complementary in assemblage composition, but that none held a complete assemblage of saproxylic parasitoids, and we suggest that a range of successional stages be retained to help conserve the entire parasitoid community.
In the face of changes in precipitation variability and climatic extremes
What do Kansas weather and life have in common? In the words of Forrest Gump, both are like a box of chocolates: "You're never sure what you're going to get." Rain or drought. Drought or rain.
Concerns about future climate changes resulting from human activities often focus on the effects of increases in average air temperatures or changes in average precipitation amounts. But climate models also predict increases in climate extremes such as more frequent large rainfall events or more severe droughts. This aspect of climate change can lead to an increase in climatic variability without accompanying changes in average temperatures or total precipitation amounts, according to a report by a team of researchers at Kansas State University.
The team, headed by Alan Knapp, a university distinguished professor of biology, and John Blair and Phil Fay, professors of biology, has been studying how grasslands respond to increases in the variability of rainfall patterns to better understand how rapidly and to what extent ecosystems might respond to a future with a more extreme climate. Their findings appear in the latest issue of Science.
Alan Knapp | EurekAlert!
The Newcastle University team, led by Michael North, Professor of Organic Chemistry, has developed a highly energy-efficient method of converting waste carbon dioxide (CO2) into chemical compounds known as cyclic carbonates.
The team estimates that the technology has the potential to use up to 48 million tonnes of waste CO2 per year, reducing the UK's emissions by about four per cent.
Cyclic carbonates are widely used in the manufacture of products including solvents, paint-strippers, biodegradable packaging, as well as having applications in the chemical industry. Cyclic carbonates also have potential for use in the manufacture of a new class of efficient anti-knocking agents in petrol. Anti-knocking agents make petrol burn better, increasing fuel efficiency and reducing CO2 emissions.
The conversion technique relies upon the use of a catalyst to force a chemical reaction between CO2 and an epoxide, converting waste CO2 into this cyclic carbonate, a chemical for which there is significant commercial demand.
The reaction between CO2 and epoxides is well known, but one which, until now, required a lot of energy, needing high temperatures and high pressures to work successfully. The current process also requires the use of ultra-pure CO2 , which is costly to produce.
The Newcastle team has succeeded in developing an exceptionally active catalyst, derived from aluminium, which can drive the reaction necessary to turn waste carbon dioxide into cyclic carbonates at room temperature and atmospheric pressure, vastly reducing the energy input required.
Professor North said: 'One of the main scientific challenges facing the human race in the 21st century is controlling global warming that results from increasing levels of carbon dioxide in the atmosphere.
'One solution to this problem, currently being given serious consideration, is carbon capture and storage, which involves concentrating and compressing CO2 and then storing it,' he said. 'However, long-term storage remains to be demonstrated'.
To date, alternative solutions for converting CO2 emissions into a useful product has required a process so energy intensive that they generate more CO2 than they consume.
Professor North compares the process developed by his team to that of a catalytic converter fitted to a car. 'If our catalyst could be employed at the source of high-concentration CO2 production, for example in the exhaust stream of a fossil-fuel power station, we could take out the carbon dioxide, turn it into a commercially-valuable product and at the same time eliminate the need to store waste CO2', he said.
Professor North believes that, once it is fully developed, the technology has the potential to utilise a significant amount of the UK's CO2 emissions every year.
'To satisfy the current market for cyclic carbonates, we estimate that our technology could use up to 18 million tonnes of waste CO2 per year, and a further 30 million tonnes if it is used as an anti-knocking agent.
'Using 48 million tonnes of waste CO2 would account for about four per cent* of the UK's CO2 emissions, which is a pretty good contribution from one technology,' commented Professor North.
The technique has been proven to work successfully in the lab. Professor North and his team are currently carrying out further lab-based work to optimise the efficiency of the technology, following which they plan to scale up to a pilot plant.
* Based on 2004 figures from the UN. Source: Wikipedia http://en.wikipedia.org/wiki/List_of_countries_by_carbon_dioxide_emissions
Melanie Reed | alfa
Continental flood basalt (CFB) provinces have been attributed to a variety of sources. Among these are: (1) ancient continental lithosphere (CL); (2) newly arrived plume heads from the core-mantle boundary; (3) steady-state plumes coincident with CL rifts; (4) fossil plume heads; and (5) contamination of deep upwellings (passive or active) by enriched sublithospheric shallow mantle. The criticism that CL is too cold to provide extensive magmatism has been countered by the proposal that the CL is wet, thereby lowering the melting point. This also lowers the viscosity and increases the local Rayleigh number. The calculated seismic velocities and viscosities in this proposed CL source show that it has a low velocity and is weak and has asthenosphere-like physical properties. It is apparent that what has been called 'continental lithosphere' is actually asthenosphere or the lower part of the thermal boundary layer (TBL) and is unlikely to be a long-lived part of the plate. However, a low density and low viscosity region of the sublithospheric mantle (the perisphere) is a suitable reservoir for the enriched component of CFB. It helps to isolate the deeper depleted reservoir from contamination due to recycling at subduction zones. Lithospheric pull-apart at cratonic boundaries, rather than stretching of uniform lithosphere, is suggested as the trigger for extensive continental magmatism. © 1994.
When I made that estimate, I made a pretty big assumption, one that some of you touched on in the YouTube comments on the last video: it depends on the amount of carbon-14 in the atmosphere having been roughly constant from when this bone was living to now.
And so the question is, is the amount of carbon-14 in the atmosphere and in the water, and in living plants and animals, is it constant?
Radiometric dating relies on radioactive decay, or the rate of other cumulative changes in atoms resulting from radioactivity. The various isotopes of the same element differ in atomic mass but have the same atomic number.
One half-life is the amount of time required for half of the original atoms in a sample to decay.
Following death, however, no new carbon is consumed.
Progressively through time, the carbon-14 atoms decay and once again become nitrogen-14.
Each original isotope, called the parent, gradually decays to form a new isotope, called the daughter.
Isotopes are important to geologists because each radioactive element decays at a constant rate, which is unique to that element. Geologists often need to know the age of material that they find. They use absolute dating methods, sometimes called numerical dating, to give rocks an actual date, or date range, in number of years. Over the second half-life, half of the atoms remaining decay, which leaves one-quarter of the original quantity, and so on. In other words, the change in the number of atoms follows a geometric scale, as illustrated by the graph below. Living organisms take in carbon-14 and other carbon isotopes in the same ratio as exists in the atmosphere.
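The geometric decay described above can be written down directly. The sketch below assumes the commonly quoted 5,730-year half-life of carbon-14 and, as discussed earlier, a constant atmospheric ratio; the function names are illustrative.

```python
import math

HALF_LIFE_C14 = 5730  # years, commonly quoted value

def remaining_fraction(years: float) -> float:
    """Fraction of the original carbon-14 left after `years`."""
    return 0.5 ** (years / HALF_LIFE_C14)

def age_from_fraction(fraction: float) -> float:
    """Years since death, given the fraction of carbon-14 remaining."""
    return HALF_LIFE_C14 * math.log2(1.0 / fraction)

print(remaining_fraction(5730))  # one half-life -> 0.5
print(age_from_fraction(0.25))   # two half-lives -> 11460.0 years
```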
International panel of 300 scientists report that global warming is still under way.
Scientists from around the world are providing even more evidence of global warming, one day after President Barack Obama renewed his call for climate legislation.
“A comprehensive review of key climate indicators confirms the world is warming and the past decade was the warmest on record,” the annual State of the Climate report declares.
Compiled by more than 300 scientists from 48 countries, the report said its analysis of 10 indicators that are “clearly and directly related to surface temperatures, all tell the same story: Global warming is undeniable.”
Concern about rising temperatures has been growing in recent years as atmospheric scientists report rising temperatures associated with greenhouse gases released into the air by industrial and other human processes. At the same time, some skeptics have questioned the conclusions.
The new report, the 20th in a series, focuses only on global warming and does not specify a cause.
“The evidence in this report would say unequivocally yes, there is no doubt,” that the Earth is warming, said Tom Karl, the transitional director of the planned NOAA Climate Service.
Deke Arndt, chief of the Climate Monitoring Branch at the National Climatic Data Center, noted that the 1980s was the warmest decade up to that point, but each year in the 1990s was warmer than the ’80s average.
That makes the ’90s the warmest decade, he said.
But each year in the 2000s has been warmer than the ’90s average, so the first 10 years of the 2000s is now the warmest decade on record.
The new report noted that continuing warming will threaten coastal cities, infrastructure, water supply, health and agriculture.
“At first glance, the amount of increase each decade (about a fifth of a degree Fahrenheit) may seem small,” the report said.
“But,” it adds, “the temperature increase of about 1 degree Fahrenheit experienced during the past 50 years has already altered the planet. Glaciers and sea ice are melting, heavy rainfall is intensifying and heat waves are becoming more common and more intense.”
Last month was the warmest June on record and this year has had the warmest average temperature for January-June since record keeping began, NOAA reported last week.
And a study by Princeton University researchers released Monday suggested that continued warming could cause as many as 6.7 million more Mexicans to move to the United States because of drought affecting crops in their country.
The new climate report, released by the National Oceanic and Atmospheric Administration and published as a supplement to the Bulletin of the American Meteorological Society, focused on 10 indicators of a warming world, seven which are increasing and three declining.
Rising over decades are average air temperature, the ratio of water vapor to air, ocean heat content, sea surface temperature, sea level, air temperature over the ocean and air temperature over land.
Indicators that are declining are snow cover, glaciers and sea ice.
The 10 were selected “because they were the most obviously related indicators of global temperature,” explained Peter Thorne of the Cooperative Institute for Climate and Satellites, who helped develop the list when at the British weather service, known as the Met Office.
“What this data is doing is, it is screaming that the world is warming,” Thorne concluded.
Source: AP News
In chemistry, an alkali (from Arabic: al-qaly, “ashes of the saltwort”) is a basic, ionic salt of an alkali metal or alkaline earth metal chemical element. An alkali also can be defined as a base that dissolves in water. A solution of a soluble base has a pH greater than 7.0. The adjective alkaline is commonly, and alkalescent less often, used in English as a synonym for basic, especially for bases soluble in water. This broad use of the term is likely to have come about because alkalis were the first bases known to obey the Arrhenius definition of a base, and they are still among the most common bases.
The word "alkali" is derived from Arabic al qalīy (or alkali), meaning the calcined ashes (see calcination), referring to the original source of alkaline substances. A water-extract of burned plant ashes, called potash and composed mostly of potassium carbonate, was mildly basic. After heating this substance with calcium hydroxide (slaked lime), a far more strongly basic substance known as caustic potash (potassium hydroxide) was produced. Caustic potash was traditionally used in conjunction with animal fats to produce soft soaps, one of the caustic processes that rendered soaps from fats in the process of saponification, one known since antiquity. Plant potash lent the name to the element potassium, which was first derived from caustic potash, and also gave potassium its chemical symbol K (from the German name Kalium), which ultimately derived from alkali.
Common properties of alkalis and bases
Alkalis are all Arrhenius bases, ones which form hydroxide ions (OH−) when dissolved in water. Common properties of alkaline aqueous solutions include:
- Moderately concentrated solutions (over 10⁻³ M) have a pH of 7.1 or greater. This means that they will turn phenolphthalein from colorless to pink.
- Concentrated solutions are caustic (causing chemical burns).
- Alkaline solutions are slippery or soapy to the touch, due to the saponification of the fatty substances on the surface of the skin.
- Alkalis are normally water-soluble, although some like barium carbonate are only soluble when reacting with an acidic aqueous solution.
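As a rough numerical sketch of the first property above, the pH of a dilute strong base can be estimated from its concentration, assuming complete dissociation, Kw = 10⁻¹⁴ at 25 °C, and no activity corrections:

```python
import math

def ph_strong_base(concentration_molar: float) -> float:
    """Approximate pH of a dilute, fully dissociated strong base at 25 C.

    Assumes Kw = 1e-14 and ignores activity effects, so this is only a
    sketch valid for moderately dilute solutions.
    """
    poh = -math.log10(concentration_molar)
    return 14.0 - poh

print(ph_strong_base(1e-3))  # 0.001 M NaOH -> pH ~11, enough to turn
                             # phenolphthalein pink
```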
Difference between alkali and base
There are various more specific definitions for the concept of an alkali. Alkalis are usually defined as a subset of the bases. One of two subsets is commonly chosen.
- A basic salt of an alkali metal or alkaline earth metal (This includes Mg(OH)2 but excludes NH3.)
- Any base that is soluble in water and forms hydroxide ions or the solution of a base in water. (This includes Mg(OH)2 and NH3.)
The second subset of bases is also called an "Arrhenius base".
- Sodium hydroxide – often called "caustic soda"
- Potassium hydroxide – commonly called "caustic potash"
- Lye – generic term for either of the previous two or even for a mixture
- Calcium hydroxide – saturated solution known as "limewater"
- Magnesium hydroxide – an atypical alkali since it has low solubility in water (although the dissolved portion is considered a strong base due to complete dissociation of its ions)
Soils with pH values that are higher than 7.3 are usually defined as being alkaline. These soils can occur naturally, due to the presence of alkali salts. Although many plants do prefer slightly basic soil (including vegetables like cabbage and fodder like buffalograss), most plants prefer a mildly acidic soil (with pHs between 6.0 and 6.8), and alkaline soils can cause problems.
In alkali lakes (also called soda lakes), evaporation concentrates the naturally occurring carbonate salts, giving rise to an alkalic and often saline lake.
Examples of alkali lakes:
- Alkali Lake, Lake County, Oregon
- Baldwin Lake, San Bernardino County, California
- Mono Lake, near Owens Valley in California
- Redberry Lake, Saskatchewan
- Summer Lake, Lake County, Oregon
- Tramping Lake, Saskatchewan
- Lake Magadi in Kenya
- Lake Turkana in Kenya (the largest alkali lake in the world)
- There are also alkali lakes in the outback of Australia.
- Bear Lake on the Utah–Idaho border
Industrial melanism is widespread in the Lepidoptera (butterflies and moths), involving over 70 species such as Odontopera bidentata (scalloped hazel) and Lymantria monacha (dark arches), but the most studied is the evolution of the peppered moth, Biston betularia. It is also seen in a beetle, Adalia bipunctata (two-spot ladybird), where camouflage is not involved as the insect has conspicuous warning coloration, and in the seasnake Emydocephalus annulatus where the melanism may help in excretion of trace elements through sloughing of the skin. The rapid decline of melanism that has accompanied the reduction of pollution, in effect a natural experiment, makes natural selection for camouflage "the only credible explanation".
Other explanations for the observed correlation with industrial pollution have been proposed, including strengthening the immune system in a polluted environment, absorbing heat more rapidly when sunlight is reduced by air pollution, and the ability to excrete trace elements into melanic scales and feathers.
In 1906, the geneticist Leonard Doncaster described the increase in frequency of the melanic forms of several moth species from about 1800 to 1850 in the heavily industrialised north-west region of England.
In 1924, the evolutionary biologist J. B. S. Haldane constructed a mathematical argument showing that the rapid growth in frequency of the carbonaria form of the peppered moth, Biston betularia, implied selective pressure.
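Haldane's argument can be illustrated with a standard one-locus selection model: a dominant melanic allele under Hardy-Weinberg assumptions, with fitness 1 for melanic moths and 1 - s for the pale form. This is a textbook reconstruction in the spirit of his calculation, not his original one; the selection coefficient and starting frequency below are illustrative.

```python
def melanic_frequencies(s: float, q0: float, generations: int) -> list:
    """Frequency of the melanic *phenotype* in each generation.

    q is the frequency of the dominant melanic allele; selection of
    strength s acts against the recessive (pale) phenotype.
    """
    q = q0
    history = []
    for _ in range(generations):
        pale = (1.0 - q) ** 2          # Hardy-Weinberg recessive homozygotes
        mean_fitness = 1.0 - s * pale  # melanics: fitness 1; pale: 1 - s
        q = q / mean_fitness           # standard recursion, dominant allele
        history.append(1.0 - (1.0 - q) ** 2)
    return history

# Starting from ~1% melanic moths, moderate selection (s = 0.2, illustrative)
# carries the melanic form to a majority within a few dozen generations.
freqs = melanic_frequencies(s=0.2, q0=0.005, generations=60)
```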
Guest Post By Renee Hannon
In the mid-1900s many scientists were suggesting the Earth was cooling. Now scientists are forecasting global warming. Indeed, instrumental data show global temperatures warmed by approximately 1 degree C during the past 165+ years. With warming rates of 0.5 to over 1.3 degrees C per century, this has caused considerable alarm for many. This recent warming is commonly attributed to increasing greenhouse gases, primarily CO2.
This post examines natural paleoclimate trends and simple characteristics of past and present climate cycles at different time scales. Data suggests distinct differences between short-term climate variability and longer-term climate change. This is important because short-term climate variability can be misinterpreted as underlying climate change resulting in poor science and potentially worse policy decisions. This post compares modern instrumental trends to paleoclimate trends. This comparison reveals modern warming has characteristics of natural short-term climate variability and not long-term climate change.
Comparing modern instrumental measurements to long-term paleoclimate data is not a simple task. They are vastly different types of datasets. Ice core paleoclimate isotope data are indirect indications of temperature (proxies) over millions of years compared to instrumental temperature measurements with high resolution of hours, days and decades. However, paleoclimate data cannot be ignored or dismissed when trying to understand present-day temperature trends. Paleoclimate characteristics and trends provide the overarching framework and climate history to better understand centennial temperature fluctuations and potential future global temperature tipping points. Earth’s natural baseline of historical climate must be established prior to any attempt to assign potential human impacts.
“Weekly or daily weather patterns tell you nothing about longer-term climate change (and that goes for the warm days too). Climate is defined as the statistical properties of the atmosphere: averages, extremes, frequency of occurrence, deviations from normal, and so forth.” Shepherd.
Climate timeframes and relative scales observed in paleoclimate data are illustrated in Figure 1. The glacial cycle repeats approximately every 100,000 years and consists of an interglacial and glacial period. Cold glacial conditions predominate 70% of the time while warmer interglacial conditions occur about 30% of the 100,000 years. This entire cycle has repeated four times over the past 400,000 years. The glacial cycle and occurrence of interglacial warm periods are commonly accepted as being influenced by the Milankovitch astronomical processes.
Figure 1: The climate framework using EPICA Dome C ice core temperature proxies. a) a glacial cycle over 100,000 years with warm interglacial periods in red and the long glacial period in between. Termination events and onset of the interglacial period are labeled. GM refers to glacial maximum. b) zoom in of the interglacial period (MIS 5e). Each interglacial period consists of a warming, plateau, and cooling segment, medium term events such as climate optimums and intervening cool events, and the smallest detectable climate variations in black that last only hundreds of years.
The glacial and interglacial periods are very different climate cycles and are composed of different patterns and events. The interglacial framework and short-term events superimposed on all segments of the interglacial period are examined below.
Interglacial Periods Define the Long-Term Underlying Trends
There are approximately five interglacial periods ranging in duration from ten to thirty thousand years during the past 500,000 years. The marine isotope stage (MIS) terminology is used in this post. Figure 2 shows the correlation of four of the interglacial warm periods. MIS 7e was omitted from this correlation due to its unique and unusual character of several short interglacial periods. Further discussion of MIS 7e can be found here.
The entire cycle for the interglacial warm period is defined from the glacial maximum to the next significant low temperature minimum. This cycle is subdivided into warming onset, plateau, and cooling segments. Figure 2 shows the systematic and repeatable sequence of these three segments.
Figure 2: Correlation of interglacial warm periods over the past 400,000 years. Present day HadCrut data is shown in red on the Holocene MIS 1 temperature proxy curve. Three key segments are highlighted; red is the onset warming, yellow is the interglacial plateau, and blue is interglacial cooling. Multi-millennial events are labeled as optimum, Younger Dryas (YD), 8.2 kyr event, and corresponding intervening cool events in past interglacial periods. Dansgaard-Oeschger (D-O) events are labeled and are mostly associated with the glacial period. Note the high frequency temperature fluctuations superimposed on the various segments of all interglacial periods.
Amplitude and Duration: The most significant events are terminations of the glacial period and rapid onset of global warming of the interglacial period. These events are frequently referred to as Terminations I-V. The interglacial warming onset shows the largest temperature increases of 5-7 degrees C globally and up to 12 degrees C in the Antarctic dome C data. This dramatic increase in temperature occurs rapidly over 5,000 to 7,000 years as glacial sheets begin to decrease in size, sea levels rise and greenhouse gases increase. This warming process eventually reaches an interglacial optimum and plateau.
The interglacial plateau shows variations in temperatures of approximately 1-4 degrees C and lasts from 10,000 to greater than 20,000 years. The MIS 7e plateau was an exception, lasting only 6,000 years. Earth is currently within the Holocene interglacial plateau, which has lasted for more than 11,000 years so far. During this time, sea level is greater than minus 20 meters relative to present day and the Northern Hemisphere (NH) is predominantly ice free except for the Greenland ice sheet (Berger et al.).
The interglacial plateau is followed by global cooling of 4-6 degrees C and up to 8 degrees C in the Antarctic data that takes 7,000 to 13,000 years to re-enter the next glacial period. Many scientists propose that decreasing obliquity, or Earth’s tilt, is responsible for initiating the cooling tipping point. The initiation and growth of ice sheets occurs, ocean temperatures begin to cool, sea level falls and greenhouse gases gradually decline during global cooling.
Rate of Change: The temperature rate of change was determined for the warming onset, plateau and cooling segments of the interglacial period. Trendlines are calculated by linear regression using the least-squares method, as shown in Figure 3.
Figure 3: MIS 5e as an example for establishing trends and rate of change for the interglacial period segments. The warming and cooling trends have a strong correlation coefficient (R2), whereas the plateau tends to have a lower R2.
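The trendline construction in Figure 3 amounts to an ordinary least-squares fit. A minimal sketch on synthetic data (not the Dome C series; real proxy records would first need resampling onto a common time axis):

```python
def least_squares_trend(times, values):
    """Slope and R^2 of an ordinary least-squares line through the data."""
    n = len(times)
    t_mean = sum(times) / n
    v_mean = sum(values) / n
    ss_tv = sum((t - t_mean) * (v - v_mean) for t, v in zip(times, values))
    ss_tt = sum((t - t_mean) ** 2 for t in times)
    slope = ss_tv / ss_tt
    intercept = v_mean - slope * t_mean
    ss_res = sum((v - (intercept + slope * t)) ** 2
                 for t, v in zip(times, values))
    ss_tot = sum((v - v_mean) ** 2 for v in values)
    r2 = 1.0 - ss_res / ss_tot
    return slope, r2

# Synthetic warming onset: 2 degrees C per millennium, sampled every 500 years
years = [0, 500, 1000, 1500, 2000, 2500, 3000]
temps = [-10.0 + 0.002 * y for y in years]
slope, r2 = least_squares_trend(years, temps)
print(slope * 1000, r2)  # ~2.0 degrees C/millennium, R^2 ~ 1.0
```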
Figure 4 compares trendlines for each warming onset, plateau and cooling segment of the five interglacial periods. Note duration is in years. The starting points are pinned at zero, except for the global warming phase which was pinned at minus 10 degrees C. These simple trendlines provide visualization of the underlying long-term interglacial trends and removes medium and short-term internal variations as well as noise.
Figure 4: Rate of change or trendlines for the past five interglacial segments from EPICA Dome C temperature proxies. Initial starting points are pinned at zero except for the global warming trendline which is pinned at -10 degrees C. The length of the trendline approximates the duration of the interglacial segment. a) rate of change for the interglacial plateau segments. Marcott and May’s Holocene global reconstructions are included. b) rate of change for warm onsets also referred to as Terminations and c) rate of change for interglacial cooling segments.
Interglacial trends over the past 400,000 years exhibit steep warming onsets, slower cooling rates and nearly flat plateaus. Average warming onset rate of change is approximately 2.0 degrees C/millennium in the Antarctic with exceptionally strong correlation coefficients of 0.98. Average plateau rate of change is minus 0.01 degrees C/millennium (excludes MIS 7e) with weaker correlation coefficient of about 0.5. Interglacial cooling is less than 1.0 degree C/millennium with strong correlation coefficients of 0.95. Since the warming rate is twice as fast as the interglacial cooling rate, the typical interglacial period has an asymmetrical pattern suggesting Earth heats up due to natural processes more rapidly than when it cools.
During the interglacial plateau, trendlines are relatively flat. MIS 5e, 7e, and 9 have an early and strong climate optimum which resulted in an overall cooling trend for the plateau. The Holocene MIS 1 global temperature reconstructions over the past 11,000 years by Marcott and May show a slight overall cooling trend compared to the Antarctic Dome C MIS 1 temperature proxy. MIS 11 which has the longest plateau shows a slight warming trend due to a later second climate optimum.
The warming onsets are very consistent for MIS 5e and 9 as the trendlines practically overlie each other. MIS 7e has a rapid onset and a very short plateau, if any. MIS 11 has the slowest warming onset rate. The Holocene’s warming onset started out like MIS 5e and 9 but was interrupted by the Younger Dryas (YD) cooling event.
Interestingly, the most consistent trendlines are the interglacial cooling rates which demonstrate a narrow deviation for the past four interglacial periods. MIS 9 cooling trend was measured up to a stadial event which is why it appears shorter. Interglacial cooling rates demonstrate consistent, predictable trendlines suggesting Earth’s climate follows similar, repeatable processes such as ice growth rates and oceanic/atmospheric process interactions as it cools.
Holocene Past Millennia shows Cooling Trends
If Earth continues harmoniously in step with natural processes, the next significant tipping point will be the Holocene interglacial cooling. Scientists generally agree the Earth has been cooling over the past several thousand years at an average global rate of -0.20 degrees C/millennium.
An interesting paper by Stenni et al. incorporated seven different Antarctic ice core regions consisting of 112 records and used four different reconstruction methods. They observed a Holocene cooling trend in the Antarctic of -0.26 to -0.40 degrees C/millennium for the past 1900 years prior to present-day warming of the most recent 200 years. Figure 5 compares these recent trends to the past interglacial cooling trends.
Figure 5: Comparison of the rate of change or trend for the past 1900 years bracketed by the Holocene plateau trend and average interglacial cooling trend. The Holocene plateau and cooling trends are from EPICA Dome C temperature proxies and the past 1900-year trends are from Stenni’s Antarctic region reconstructions. Initial starting points are pinned at zero and projected out in time. Marcott and May’s Holocene global reconstructions for the interglacial plateaus trendlines are included for comparison.
It appears the past millennia Holocene cooling trends in the Antarctic are approximately half way between the 11,000-year Holocene plateau trend and approaching global cooling trends of the past four interglacial cycles. The average interglacial cooling trend from the Dome C data is approximately 0.7 degrees C/millennium and represents the next climate change tipping point.
Interglacial Short-term Events Display Steep Trendlines
Understanding short-term cycles is important. Unfortunately, they are not well defined on the ice core temperature proxies due to resolution difficulties and local latitude differences. Short-term events within the Holocene interglacial period include the Medieval Warm Period (MWP), Roman Warm Period (RWP), Little Ice Age (LIA), and other cool events such as 4.2, 5.9, 7.2 and 8.2 kyr events. Some of these events are not as obvious on the Dome C data. However, they are more pronounced on northern latitude ice core temperature proxies. Global temperature reconstructions also tend to dampen the amplitude of these smaller events.
Davis et al. conducted an extensive study on the Vostok ice core data examining centennial events. This study concluded that the sample resolution of the Vostok ice core data can detect centennial-scale cycles; however, it is inadequate to detect decadal-scale cycles. Spectrum frequency analyses over the past 12,000 years show centennial-scale events on four different Antarctic temperature proxy records at 193, 318, 379 and 493 years. Several figures from Davis’ study are compiled in Figure 6.
Figure 6: a) spectral power density periodogram of Vostok temperature-proxy records over the Holocene for 12,000 years showing six peaks. b) example correlation of the centennial peaks and troughs over four isotope records; Vostok, EPICA Dome C (EDC), EPICA Dronning Maud Land (EDML), and Talos Dome (TD). c) table showing the statistics for the TOc350 cycles with the present day HadCrut and UAHv6 statistics in red.
Interestingly, Holocene temperature peaks in the above periodogram are similar to Holocene solar peaks in the Lomb-Scargle periodogram shown in Javier’s figure 62a. The de Vries cycle is approximately 208 years and the Eddy cycle is approximately 970 years. Davis has attributed the broad 1000-year peak to the Bond cycle.
Amplitude and Duration: Davis identified 650 individual cycles of Temperature-proxy Oscillation (TO-c350) cycles in the Vostok data over the past 220,000 years. At least 60 occurred within the Holocene that are correlative over four Antarctic ice records with an example shown in Figure 6b. The TO-c350 events show temperature amplitude averages of 0.7 degrees C and an average cycle duration of 350 years. Davis speculates these cycles are the result of oceanic oscillations which frequently operate on a centennial scale bases.
Rate of Change: These centennial cycles tend to have steep warming trends, or segments, followed by steep cooling segments. Warming and cooling rates are rapid averaging 0.43 degrees or more per century. They have a high frequency of occurrence, rise and fall within hundreds of years and are highly variable. These short-term events have a rapid peak turnaround in temperatures and lack a plateau or have an extremely brief plateau lasting several decades at most.
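The quoted average of 0.43 degrees C or more per century is consistent with the cycle geometry above. Treating a TO-c350 cycle as a symmetric rise and fall (an assumption made here purely for illustration), the 0.7 degree C average amplitude traversed in half the 350-year average duration gives:

```python
# Illustrative consistency check for the centennial TO-c350 cycles
amplitude_c = 0.7     # average temperature swing, degrees C
duration_years = 350  # average full-cycle duration

# Assume a symmetric cycle: the full swing occurs over half the duration
rate_per_century = amplitude_c / (duration_years / 2) * 100
print(rate_per_century)  # ~0.4 degrees C per century
```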
Modern instrumental temperatures and warming rates are added to the table in Figure 6c. Their significance is discussed below.
Modern Warming Displays Steep Trendlines
Recent temperature measurements over the past 165+ years, based on satellite, marine and land instruments obtained and analyzed by HadCrut, GISS, and Berkeley, indicate global temperatures have increased by approximately 1 degree C, as shown in Figure 7. This corresponds to warming rates of 0.5 to 0.7 degrees C per century. The southern hemisphere HadCrut record shows a slightly lower trend of 0.47 degrees C per century. In the past 38 years, global average lower tropospheric temperature (TLT) anomalies show trends up to 1.3 degrees C per century from UAH datasets compiled by Spencer and Christy. This steeper trend can also be seen in the HadCrut, GISS and Berkeley plots from 1976 to present.
Figure 7: a, b, d) plots of global temperature in degrees C since 1850 from HadCrut, GISS, and Berkeley combined land and ocean datasets. Rate of change per century and correlation coefficient are shown on each plot. c) UAH global TLT temperatures from 1976 to present day, a much shorter period, which shows the highest rate of change.
The instrumental data are a good example of how “…trends for short periods are uncertain and very sensitive to the start and end years…” as noted by the IPCC WG1AR5. For example, from 1980 to the present, instrumental temperatures show an overall warming trend of 1.8 degrees C per century (Berkeley).
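The IPCC's point about short-period trends can be illustrated with purely synthetic data. The sketch below (illustrative numbers only, not real temperature data) fits a least-squares line to a slow background trend with a multi-decadal oscillation superimposed; a 30-year window that happens to land on the rising limb of the oscillation reports a trend several times steeper than the true long-term rate.

```python
import math

def lsq_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic record: 0.5 C/century background trend plus a 60-year
# oscillation of 0.3 C amplitude (illustrative, not real data).
years = list(range(601))
temps = [0.005 * t + 0.3 * math.sin(2 * math.pi * t / 60) for t in years]

# Full 600-year record vs. a 30-year window on a rising limb.
long_slope = lsq_slope(years, temps)
short = [(t, y) for t, y in zip(years, temps) if 105 <= t <= 135]
short_slope = lsq_slope([t for t, _ in short], [y for _, y in short])

print(f"600-yr trend: {long_slope * 100:.2f} C/century")   # ~0.5
print(f"30-yr trend:  {short_slope * 100:.2f} C/century")  # several times steeper
```

Shifting the 30-year window by half an oscillation period flips the reported trend to steep cooling, which is exactly the start/end-year sensitivity the IPCC quote describes.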
The timeframe of instrumental records needs to be mentioned. Recent instrumental data span 165+ years of the past 11,000+ years of the Holocene interglacial warm period, as shown in Figure 2. Instrumental records represent a very small subset (1.4%) of the Holocene interglacial plateau and a tiny blip in geologic time.
Classification of Climate End-Members
By establishing natural trendlines for different levels of climate temperature oscillations it is possible to improve our understanding of how the past 165 years have deviated from the natural baseline. While ice core proxies and instrumental temperature measurements are different datasets, general characteristics and trends of climate oscillations do exist. Comparisons may not be quantitative, however qualitative trends and observations are frequently used with imperfect data to develop scientific hypotheses that can be tested.
Once a natural short-term climate variation from the natural paleoclimate baseline is established, only then can the potential Anthropogenic influence be further investigated.
Rate of Change: Comparing the instrumental rate of change to the interglacial plateaus shown in Figure 8a reveals a significant difference in trendlines. Instrumental trends have much steeper slopes (0.6 degrees C/century) than the flat longer-term interglacial plateau (0.001 degrees C/century, or 0.01 degrees C/millennium). Projecting the 165-year instrumental trends suggests that within 500 years temperatures would reach 2.5 to 3.5 degrees C warmer than present day. Figure 8a demonstrates and amplifies the conundrum of the recent warming trend compared to long-term trends and to reconstructions. This was illustrated quite nicely by the “Mann hockey stick”. Reconstructions tend to dampen the short-term amplitudes. Marcott states that his reconstruction preserves variability for periods longer than 2000 years, only 50% at 1000-year periods, and no variability at less than 300 years.
Figure 8: Comparison of modern warming trends with interglacial long-term trends. Starting points are pinned at zero temperature and zero time for the duration. Instrumental trends are projected for 500 years. Interglacial trendline projections approximate their duration. a) present day warming trends and interglacial plateau trends. MIS 7e is omitted from the plot. May and Marcott’s global reconstruction rates are included. b) Comparison of instrumental trends with past Termination warming onset trends.
Comparing instrumental warming trends to the Termination onset warming slopes shown in Figure 8b is highly informative. Terminations I through V are significant paleoclimate events in which the glacial state terminates and Earth begins to change into a warm interglacial state. These long-term trends show the largest increase in temperatures of any paleoclimate event during the past several million years. Importantly, present day slopes are much steeper than the interglacial warming onsets, even in the polar Antarctic datasets.
Here, modern warming trend is compared to both shorter-term events as well as long-term interglacial trends.
Figure 9 compares absolute warming and cooling trends against their durations for various scales. Included in the plot are the interglacial-only events from the Vostok TOc350 dataset by Davis, slopes for the past 100 and 1900 years from the Antarctic region by Stenni et al., and trends calculated for Dome C interglacial segments discussed earlier in this post.
Figure 9: Plot of rate of change in degrees C/century versus duration for individual warming, cooling and plateau segments. Black dots are instrumental Berkeley and UAHv6 rates of change. Blue dots are interglacial segments calculated by this author, orange dots are Holocene warming rates from Davis (TOc350), and green octagons are 100- and 1900-year durations from the Antarctic region by Stenni.
Decadal data cannot be extracted from ice core temperature proxies and is missing from this plot. Instrumental data (black dots) are plotted using both the 38-year rate of change of 1.3 degrees C/century and the 165-year rate of change of 0.6 degrees C/century.
Figure 9 shows two distinct classifications: one group with durations less than +/-500 years and another with durations greater than +/-700 years.
The group with durations of less than 500+/- years consists predominantly of short-term warming and cooling segments. They typically don’t have a plateau long enough to establish a trend. Warming and cooling rates range from 0.01 to >3.0 degrees C per century. Interestingly, the small-scale events with the shortest duration and smallest temperature amplitude tend to have the steepest trendlines. This is also visible in Figure 1b, where the black short-term oscillations are superimposed on all segments of the underlying long-term interglacial cycle shown in red.
The long-term interglacial segments form a clear and distinct second group. Warming, cooling and plateau segments that last longer than 700+/- years consistently have rates of less than approximately 0.25 degrees C per century. This includes the geologically significant Termination onsets. These long-term events also exhibit the largest temperature ranges, up to 12 degrees C, in the Antarctic and Greenland data.
Distinguishing Climate Variability from Underlying Climate Change
The IPCC has the following definitions for climate variability and climate change:
“Climate variability refers to variations in the mean state and other statistics (such as standard deviations, the occurrence of extremes, etc.) of the climate on all spatial and temporal scales beyond that of individual weather events.”
“Climate change refers to a change in the state of the climate identified (e.g., by using statistical tests) by changes in the mean and/or the variability of its properties and that persists for an extended period, typically decades or longer.”
Although the above definitions are qualitative in nature, Figure 9 distinguishes climate change from climate variability. As more data on multi-centennial events become available in the future, they will help narrow the range. For now, multi-centennial warming and cooling trends less than +/-500 years in duration can have a wide variance in rates of change. However, warming and cooling trends greater than +/-700 years in duration, which are the underlying long-term trends, have rates of change less than approximately 0.25 degrees C per century, based on Antarctic data. These are true climate change events. Climate variability best describes the shorter multi-centennial trends, which are merely internal oscillations overprinted on longer-term climate change.
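The duration/rate thresholds stated above can be written out as a small decision rule. This is only an illustrative sketch of the post's own criteria (the ~500-year and ~700-year cutoffs and the 0.25 degrees C/century rate), not an established classification scheme; segments between the two duration cutoffs are left unclassified.

```python
# Illustrative classifier using the thresholds quoted in the text:
# < ~500 years  -> climate variability (overprinting oscillation)
# > ~700 years with rate < ~0.25 C/century -> underlying climate change
def classify_segment(duration_years, rate_c_per_century):
    if duration_years < 500:
        return "climate variability"
    if duration_years > 700 and abs(rate_c_per_century) < 0.25:
        return "climate change"
    return "unclassified"

print(classify_segment(350, 0.7))    # a typical TOc350 event
print(classify_segment(5000, 0.1))   # a Termination-style onset
print(classify_segment(600, 0.5))    # falls between the two cutoffs
```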
Currently, instrumental temperature characteristics are consistent with natural climate variability of short-term events and steep warming trendlines.
Scientists studying climate change should entertain multiple working hypotheses. We’re all familiar with the CO2 hockey stick and its ever-increasing global temperature projections into the 21st century. Past natural climate characteristics suggest another hypothesis: that Modern Global Warming is part of a natural warming segment of multi-centennial climate variability and that the impact of CO2 is overestimated. The modern high rates of change are certainly in line with past natural short-duration events. And in a couple of centuries, there may be a quick turnaround at its peak (no or only a minor plateau), after which a cooling multi-centennial segment will ensue.
Every several centuries Earth will experience these false alarms of high rates of temperature changes both positive and negative. Recognition of short-term cycles and resulting adaptation strategies should be different than for longer-term climate change. These multi-centennial cycles of climate variability will continue over the next several millennia until the true underlying Holocene interglacial long-term cooling begins to take Earth back into the next glacial period.
Characteristics in paleoclimate data occurring prior to industrial revolution cannot, by definition, be attributed to anthropogenic forcing. Paleoclimate data can establish a natural baseline for long, medium, and short-term climate cycles. The Holocene and past interglacial plateaus are characterized by high frequency multi-centennial climate variability that has different characteristics than the underlying longer-term multi-millennial climate change.
Climate Change tipping points and underlying long-term framework are the interglacial onset, plateau, and eventual cooling. These cycles demonstrate dramatic temperature changes of up to 12 degrees C. The next natural long-term tipping point for Earth will be a cooling at the end of the Holocene interglacial period. It will take thousands of years for Earth to enter and progress through this phase, providing Earth along with its ecosystems and inhabitants time to adapt as necessary.
Climate variability has a high frequency of occurrence, short durations of less than about 500 years, and can demonstrate warming and cooling rates greater than 0.25 degrees C/century. Currently, instrumental and satellite temperature data exhibit rates of 0.5 to 1.8 degrees C per century that are well within the assemblage of natural centennial events. Therefore, present modern warming is likely a natural multi-centennial warming segment that will soon begin a temperature turnaround.
This post compared modern instrumental trends to paleoclimate trends. The comparison reveals present day warming is not consistent with rates of change (temperature) observed in long-term climate change. The rate of change for modern warming is consistent with climate variability.
Acknowledgements: Special thanks to Andy May and Donald Ince for reviewing and editing this article.
via Watts Up With That?
March 28, 2018 at 07:07AM | <urn:uuid:3ab850cd-7145-4313-ad3f-fd77566ff6f6> | 3.671875 | 5,289 | Academic Writing | Science & Tech. | 37.688705 | 95,544,669 |
The bee is 86.344 by 81.673 pixels. With a density of 1, this calculates to a mass of 141 physics units.
The square is 50 by 50 pixels. With a density of 1, this calculates to a mass of 50 physics units.
A force (when continuously applied) produces an acceleration:
acceleration = Force / Mass (because Newton's second law says: F = m * a)
Conclusion: the bigger the mass (for a constant force), the less you see it move.
Or: the same force has less effect on heavier masses.
The bee has more mass than the square, so it is not moved as much as the square is by the same force. | <urn:uuid:4a137199-2ce5-4141-bf68-d891a4f2f651> | 3.53125 | 149 | Q&A Forum | Science & Tech. | 81.310773 | 95,544,675 |
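The comparison can be checked numerically. The masses (141 and 50 physics units) come from the text above; the force value is an arbitrary example, since any constant force gives the same ratio of accelerations.

```python
# Same force applied to both objects; masses from the text above.
FORCE = 100.0       # arbitrary example force, physics units

bee_mass = 141.0
square_mass = 50.0

bee_accel = FORCE / bee_mass        # a = F / m (Newton's second law)
square_accel = FORCE / square_mass

print(f"bee:    a = {bee_accel:.2f} units/s^2")
print(f"square: a = {square_accel:.2f} units/s^2")
# The lighter square accelerates 141/50 = 2.82 times as much as the bee.
```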
Carbon dioxide emissions are set to rise this year after a three-year pause, scientists said at UN climate talks Monday, warning that "time is running out", even as White House officials used the occasion to champion the fossil fuels that drive global warming.
CO2 emissions, flat since 2014, were forecast to rise two percent in 2017, dashing hopes they had peaked, scientists reported at 12-day negotiations in the German city of Bonn ending Friday.
"The news that emissions are rising after a three-year hiatus is a giant leap backward for humankind," said Amy Luers, a climate policy advisor to Barack Obama and executive director of Future Earth, which co-sponsored the research.
Global CO2 emissions for 2017 were estimated at a record 41 billion tonnes.
"Time is running out on our ability to keep warming below two degrees Celsius (3.6 degrees Fahrenheit), let alone 1.5 C," said lead author Corinne Le Quere, director of the Tyndall Centre for Climate Change Research at the University of East Anglia.
The 196-nation Paris Agreement, adopted in 2015, calls for capping global warming at 2 C below pre-industrial levels.
With the planet out of kilter after only one degree of warming—enough to amplify deadly heatwaves, droughts, and superstorms—the treaty also vows to explore the feasibility of holding the line at 1.5 C.
"As each year ticks by, the chances of avoiding 2 C of warming continue to diminish," said co-author Glen Peters, research director at Center for International Climate Research in Oslo, Norway.
"Given that 2 C is extremely unlikely based on current progress, then 1.5 C is a distant dream," he told AFP.
The study identified China as the single largest cause of resurgent fossil fuel emissions in 2017, with the country's coal, oil and natural gas use up three, five and 12 percent, respectively.
Earth is overheating due to the burning of oil, gas and especially coal to power the global economy.
That did not discourage US officials from the administration of President Donald Trump from making a case at the UN negotiations for "The Role of Cleaner and More Efficient Fossil Fuels and Nuclear Power in Climate Mitigation."
"Without a question, fossil fuels will continue to be used," George David Banks, a special energy and environment assistant to the US president told a standing-room only audience, citing projections from the International Energy Agency (IEA).
Faced with this reality, "we would argue that it's in the global interest to make sure that when fossil fuels are used, that it's as clean and efficient as possible."
Flanked by Francis Brooke from the office of Vice President Mike Pence, and senior representatives of American energy companies, Banks addressed a packed room where protesters shouted "you're liars!" and "there's no clean coal!".
Former New York mayor Michael Bloomberg, UN special envoy for cities and climate change, tweeted: "Promoting coal at a climate summit is like promoting tobacco at a cancer summit."
The US is the only country in the world that has opted to remain outside the Paris Agreement.
More than 15,000 scientists meanwhile warned that carbon emissions, human population growth, and consumption-driven lifestyles were poisoning the planet and depleting its resources.
"We are jeopardising our future," they wrote in a comment entitled "World Scientists' Warning to Humanity: A Second Notice," echoing a similar open letter from 1992.
It is "especially troubling" that the world continues on a path toward "potentially catastrophic climate change due to rising greenhouse gases from burning fossil fuels," they said.
Rainforest into savanna
"We have unleashed a mass extinction event, the sixth in roughly 540 million years."
Another group of scientists cautioned that rising global temperatures were bringing Earth ever closer to dangerous thresholds that could accelerate global warming beyond our capacity to rein it in.
"In the last two years, evidence has accumulated that we are now on a collision course with tipping points in the Earth system," Johan Rockstrom, executive director of the Stockholm Resilience Centre.
Some scientists, for example, have concluded that the planet's surface has already warmed enough—1.1 degrees Celsius (2.0 degrees Fahrenheit) on average—in the last 150 years to lock in the disintegration of the West Antarctic ice sheet, which holds enough frozen water to lift global oceans by six or seven metres.
It may take 1,000 years, but—if they are right—the ice sheet will melt no matter how quickly humanity draws down the greenhouse gases that continue to drive global warming.
Rockstrom and colleagues identified a dozen such natural processes that could tip into abrupt and irreversible change.
An increase of 1-3 C, for example, would likely provoke the loss of Arctic summer sea ice, warm-water coral reefs, and mountain glaciers.
A degree or two more would see large swathes of the Amazon rainforest turn into savanna, and slow a deep-sea current that regulates weather on both sides of the northern Atlantic.
The International Union for Conservation of Nature (IUCN), meanwhile, released a report Monday showing that climate change now imperils one in four natural World Heritage sites, including coral reefs, glaciers, and wetlands—nearly double the number from just three years ago.
Explore further: US to defend fossil fuels at UN climate meeting | <urn:uuid:34faa540-6a7e-4b04-b052-32c1caf91691> | 3.03125 | 1,114 | News Article | Science & Tech. | 39.683612 | 95,544,706 |
Nancy Neal-Jones / Bill SteigerwaldGoddard Space Flight Center, Greenbelt, Md.
WASHINGTON -- New images from NASA's Lunar Reconnaissance Orbiter (LRO) spacecraft show the moon's crust is being stretched, forming minute valleys in a few small areas on the lunar surface. Scientists propose this geologic activity occurred less than 50 million years ago, which is considered recent compared to the moon's age of more than 4.5 billion years.
A team of researchers analyzing high-resolution images obtained by the Lunar Reconnaissance Orbiter Camera (LROC) found small, narrow trenches typically much longer than they are wide. This indicates the lunar crust is being pulled apart at these locations. These linear valleys, known as graben, form when the moon's crust stretches, breaks and drops down along two bounding faults. A handful of these graben systems have been found across the lunar surface.
"We think the moon is in a general state of global contraction because of cooling of a still hot interior," said Thomas Watters of the Center for Earth and Planetary Studies at the Smithsonian's National Air and Space Museum in Washington, and lead author of a paper on this research appearing in the March issue of the journal Nature Geoscience. "The graben tell us forces acting to shrink the moon were overcome in places by forces acting to pull it apart. This means the contractional forces shrinking the moon cannot be large, or the small graben might never form."
The weak contraction suggests that the moon, unlike the terrestrial planets, did not completely melt in the very early stages of its evolution. Rather, observations support an alternative view that only the moon's exterior initially melted forming an ocean of molten rock.
In August 2010, the team used LROC images to identify physical signs of contraction on the lunar surface, in the form of lobe-shaped cliffs known as lobate scarps. The scarps are evidence the moon shrank globally in the geologically recent past and might still be shrinking today. The team saw these scarps widely distributed across the moon and concluded it was shrinking as the interior slowly cooled.
Based on the size of the scarps, it is estimated that the distance between the moon's center and its surface shrank by approximately 300 feet. The graben were an unexpected discovery, and the images provide evidence that, in contrast, some regions of the lunar crust are being pulled apart.
"This pulling apart tells us the moon is still active," said Richard Vondrak, LRO Project Scientist at NASA's Goddard Space Flight Center in Greenbelt, Md. "LRO gives us a detailed look at that process."
As the LRO mission progresses and coverage increases, scientists will have a better picture of how common these young graben are and what other types of tectonic features are nearby. The graben systems the team finds may help scientists refine the state of stress in the lunar crust.
"It was a big surprise when I spotted graben in the far side highlands," said co-author Mark Robinson of the School of Earth and Space Exploration at Arizona State University, principal investigator of LROC. "I immediately targeted the area for high-resolution stereo images so we could create a three-dimensional view of the graben. It's exciting when you discover something totally unexpected and only about half the lunar surface has been imaged in high resolution. There is much more of the moon to be explored."
The research was funded by the LRO mission, currently under NASA's Science Mission Directorate at NASA Headquarters in Washington. LRO is managed by NASA's Goddard Space Flight Center in Greenbelt, Md.
For more information about LRO and related images on the finding, visit http://www.nasa.gov/mission_pages/LRO/news/lunar-graben.html.
- end - | <urn:uuid:f8a0ba88-420d-40a6-8460-e709a519c76b> | 3.90625 | 786 | News Article | Science & Tech. | 45.239486 | 95,544,722 |
What is Hybridization?
The valence bond theory, introduced by Heitler and London, is based on the concept of covalent bond formation; the theory was later improved by Linus Pauling, who introduced the concept of hybridization. Hybridization is defined as the mixing of atomic orbitals of similar energy to give new, degenerate hybrid orbitals. This intermixing is based on quantum mechanics.
Only atomic orbitals of similar energy can take part in hybridization, and both fully filled and half-filled orbitals can participate, provided they have comparable energy. During the process, atomic orbitals of similar energy are mixed together, such as the mixing of two ‘s’ orbitals, two ‘p’ orbitals, an ‘s’ orbital with a ‘p’ orbital, or an ‘s’ orbital with a ‘d’ orbital.
Types of hybridization:
- This type involves mixing of one ‘s’ orbital and one ‘p’ orbital of equal energy to give new hybrid orbitals known as sp hybrid orbitals.
- The two sp hybrid orbitals are arranged linearly, with a bond angle of 180°.
- Example of sp hybridization: BeCl2
- This kind involves mixing of one ‘s’ orbital and two ‘p’ orbitals of equal energy to give new hybrid orbitals known as sp2.
- The sp2 hybrid orbitals are arranged in trigonal planar symmetry, with bond angles of 120°.
- Example of sp2 hybridization: ethylene (C2H4)
- This type involves mixing of one ‘s’ orbital and three ‘p’ orbitals of equal energy to give new hybrid orbitals known as sp3.
- The sp3 hybrid orbitals are arranged in tetrahedral symmetry, with bond angles of 109°28′ (approximately 109.5°).
- Example of sp3 hybridization: ethane (C2H6)
- It involves mixing of one ‘s’ orbital, three ‘p’ orbitals and one ‘d’ orbital of equal energy to give new hybrid orbitals known as sp3d.
- The sp3d hybrid orbitals are arranged in trigonal bipyramidal symmetry.
- Example of sp3d hybridization: phosphorus pentachloride (PCl5)
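The four types above can be summarized in a small lookup table keyed by the number of orbitals mixed (the steric number). This is only a reference sketch; the steric-number keys and the 109.5° approximation for the tetrahedral angle are standard textbook values rather than quotes from the bullets above.

```python
# Hybridization summary: orbitals mixed -> (name, geometry, ideal bond angle).
# sp3d has two distinct angles (90° axial-equatorial, 120° equatorial), so
# its single-angle entry is None.
HYBRIDIZATION = {
    2: ("sp",   "linear",               180.0),
    3: ("sp2",  "trigonal planar",      120.0),
    4: ("sp3",  "tetrahedral",          109.5),
    5: ("sp3d", "trigonal bipyramidal", None),
}

for n, (name, geometry, angle) in HYBRIDIZATION.items():
    angle_txt = f"{angle} degrees" if angle is not None else "90/120 degrees"
    print(f"{name}: {n} orbitals mixed, {geometry}, {angle_txt}")
```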
1. If you turn on a faucet so that a thin stream of water flows from it, you may demonstrate to yourself that a charged object (such as a plastic comb) brought near it will deflect the water. The force is always attractive. Explain why this happens, in terms of the polarization of water. Draw a diagram to illustrate.
2. Suppose you have an uncharged metal sphere insulated from the ground. You also have in your possession a plastic rod that is negatively charged. Describe how you could put a positive charge on the metal sphere.
3. A negative charge of 2.0 micro coulombs is held at a distance of 0.30 m from a negative charge of 1.2 micro coulombs. Find the electrical force on the second charge due to the first charge, giving magnitude and direction.
4. An electron starts from rest in a uniform field of 30 N/C. (A) Determine its speed after a time of 1.0 microsecond. (B) Use the work-kinetic energy principle to find how far it traveled during this time.
5. Describe in your own words what capacitance is. What are its units?
6. What is the resulting capacitance of a 4.0 micro farad capacitor and a 10.0 micro farad capacitor hooked up in parallel? What is it if they are hooked in series?

© BrainMass Inc., brainmass.com
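Problems 3, 4, and 6 are purely numerical, so their arithmetic can be checked with a short script. This is a sketch of the calculations, not the assigned solution write-up; the constants are standard values (Coulomb constant, elementary charge, electron mass).

```python
K = 8.99e9             # Coulomb constant, N m^2 / C^2
E_CHARGE = 1.602e-19   # elementary charge, C
M_ELECTRON = 9.109e-31 # electron mass, kg

# Problem 3: force between -2.0 uC and -1.2 uC charges 0.30 m apart.
# Both charges are negative, so the force is repulsive, not attractive.
F = K * 2.0e-6 * 1.2e-6 / 0.30**2
print(f"P3: F = {F:.2f} N, repulsive")          # ~0.24 N

# Problem 4: electron from rest in a uniform 30 N/C field for 1.0 us.
a = E_CHARGE * 30 / M_ELECTRON                  # a = qE/m
t = 1.0e-6
v = a * t                                       # (A) final speed
d = 0.5 * a * t**2                              # (B) distance; W = qEd = dK
print(f"P4: v = {v:.2e} m/s, d = {d:.2f} m")

# Problem 6: 4.0 uF and 10.0 uF capacitors.
c1, c2 = 4.0e-6, 10.0e-6
c_parallel = c1 + c2                            # capacitances add in parallel
c_series = 1 / (1 / c1 + 1 / c2)                # reciprocals add in series
print(f"P6: parallel = {c_parallel*1e6:.1f} uF, series = {c_series*1e6:.2f} uF")
```

Note that problem 3 as worded ("Find the electrical force... giving magnitude and direction") is answered by the magnitude above plus the observation that like charges repel.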
6 problems related to Electrostatic induction, polarization, capacitance, force, field and potential. | <urn:uuid:5fb415be-9b23-4970-b48a-a13c18e7a66d> | 4.25 | 332 | Content Listing | Science & Tech. | 70.395916 | 95,544,758 |
Leonid Meteors Yield Rich Astrobiology Research ResultsNovember 27, 2000 / Posted by: Shige Abe
Text based on a NASA/Ames Press Release
A team of NASA researchers and their collaborators report their findings from last year’s Leonid meteor storm in a special issue of the journal “Earth, Moon and Planets.”
The scientists – all members of the NASA and U.S. Air Force-sponsored Leonid Multi-Instrument Aircraft Campaign – discussed their results in a series of astrobiology-related papers in the peer-reviewed journal. While their findings covered a range of areas, the key results reported have implications for the existence and survival of life’s precursors in comet materials that reach Earth.
“Last year’s Leonid meteor storm yielded rich research results for NASA astrobiologists,” said Dr. Peter Jenniskens, a NASA astronomer based at Ames Research Center and principal investigator for the airborne research mission. “Findings to date indicate that the chemical precursors to life — found in comet dust — may well have survived a plunge into early Earth’s atmosphere.”
Jenniskens and his international cadre of researchers think that much of the organic matter in comet dust somehow survived the rapid heating of Earth’s atmospheric entry. “Organic molecules in the meteoroid didn’t seem to burn up in the atmosphere,” he explained. They may have cooled rapidly before breaking apart, he concluded.
Another manner in which organic matter can somehow survive the fiery plunge into Earth’s atmosphere was discovered by a team from the Aerospace Corporation, Los Angeles, who detected the fingerprint of complex organic matter, identical to space-borne cometary dust, in the path of a bright Leonid fireball. This “fingerprint” is still under investigation to ensure that trace-air compounds are not contributing to the detection.
Another finding with potentially important implications for astrobiology is that meteors are not as hot as researchers had previously believed. “We discovered that most of the visible light of meteors comes from a warm wake just behind the meteor, not from the hot meteoroid’s head,” said Jenniskens. This warm wake has just the right temperature for the creation of life’s chemical precursors, he said.
Utah State University researchers found that, during the meteors’ demise in the atmosphere, their rapid spinning caused small fragments to be ejected in all directions, quite far from the meteoroid’s head. This is an important finding for astrobiology, because it means that meteors may be able to chemically alter large amounts of atmosphere.
Advanced Stellar Propulsion Systems
Copyright © 1995, 1998-2007, 2012-2018
by Brian Fraser (Scottsdale, Arizona, USA)
last modified 6-26-18a
I was wandering in a bookstore one day and saw one of those StarTrek engineering manuals. I leafed through it and saw all kinds of detailed and interesting drawings. I wondered if the book explained how the "warp drive" worked. I found intriguing terms like "dilithium crystals," "phase inducers," and "inertial dampers." But the book did not explain how this interstellar propulsion system operated, except that it was based on the warping of space. I was disappointed and did not buy the book. But the idea of such a propulsion system fired my imagination. Was it possible? Could I design one?
But I never gave the problem much thought after that. I enjoyed StarTrek because it had good psychological themes, not because it was high-tech. And I really had not the slightest idea how I would design a warp drive.
Eventually though, my mind came back around to confront this problem. I seem to like solving "impossible" problems. I also am intrigued about learning how the human mind solves such problems and creates new concepts and insights seemingly from nothing. I tend to solve problems intuitively, and so the process is not obvious to me.
And so this article is about gravitation and a possible basis for an advanced propulsion system. If you are a regular reader of science magazines you will find the ideas presented here reasonably clear. They are not inherently hard to understand, but they are very different and will require quite a bit of patient reflection. Some background in physics would help you with the terminology.Also, this article is more concerned with finding the right questions, and the right principles, than with finding the right answers per se. I also think you will find it to be a good example of a useful problem solving attitude I call "creative arrogance" —a viewpoint that is quite in contrast to that of our Pavlovian educational system. (For more about creativity see: "Creativity in Science and Engineering", Ronald B. Standler,1998, http://www.rbs0.com/create.htm ; "The Creativity Crisis", Po Bronson, Ashley Merryman, http://www.newsweek.com/2010/07/10/the-creativity-crisis.html ; http://www.newsweek.com/2010/07/12/forget-brainstorming.html ; "Opinion: Academia Suppresses Creativity", Fred Southwick (2012) http://www.the-scientist.com/?articles.view/articleNo/32077/title/Opinion--Academia-Suppresses-Creativity/ ; "Managing Breakthrough Research", Marc G. Millis, http://gcep.stanford.edu/pdfs/lh-ivzYPrcfEnjOxV0q59g/7_11_millis_breakthrough.pdf )
The Key Premise
A previous article, An Atom or a Nucleus? refuted the commonly held belief that atomic matter is made up of fundamental particles. Instead, the evidence from physics points to the idea that matter is actually some sort of relationship between space and time. Matter gravitates, and in order to design an advanced propulsion system, an understanding of gravity is very necessary. It follows that we need to know a lot more about how these space-time relationships operate.
Two lines of evidence suggest that matter is comprised of ratios of space and time. The first is the requirement of consistency in the units of measurement in the mathematical equations describing physical phenomena. In common terms this means that if an equation has the units of measurement of apples, oranges, and pineapple on one side of the "=" sign, then it must have fruit, or fruit cocktail units of measurements on the other side; it cannot equate to typewriters or airplanes.
Certain equations in physics describe fundamental phenomena, and space/time ratios appear in these equations. E = cB and E = mc^2 are two well-known examples. The "c" term stands for a constant that appeared in Maxwell’s equations pertaining to electromagnetic phenomena. That constant turned out to be the speed of light, and so physicists have continued to use the letter "c" to denote that speed. It is quite high, about 186,000 miles per second, and it is clearly a space/time ratio. But note that it is used with E (energy), m (mass), and B (magnetic flux density). In order for these equations to be consistent in their units of measurement, E, m, and B must also be space/time ratios. And if mass and energy are space/time ratios, then every entity in the physical universe must be ultimately reducible to a space/time relationship.
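The units-consistency argument can be checked mechanically. The sketch below is a minimal dimension-bookkeeping exercise in plain Python; the exponent tables are the standard SI dimensions of each quantity (not something from this article), and it confirms that E = cB and E = mc^2 balance dimensionally.

```python
def mul(a, b):
    """Multiply two dimension maps (exponents of M, L, T, I) by adding exponents."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

# SI base dimensions: M = mass, L = length, T = time, I = electric current
E_field = {'M': 1, 'L': 1, 'T': -3, 'I': -1}   # electric field, V/m
B_field = {'M': 1, 'T': -2, 'I': -1}           # magnetic flux density, tesla
speed   = {'L': 1, 'T': -1}                    # c, m/s: a space/time ratio
mass    = {'M': 1}
energy  = {'M': 1, 'L': 2, 'T': -2}            # joule

assert mul(speed, B_field) == E_field               # E = cB balances
assert mul(mass, mul(speed, speed)) == energy       # E = mc^2 balances
```

The same bookkeeping shows why an inconsistent pairing (say, equating a speed to a mass) cannot occur: the exponent maps simply fail to match.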
The second line of evidence is that certain physical entities can be inherently described in terms of space/time ratios. The most prominent feature of the photon (light), for example, is a property called "frequency", which has the units of "cycles per second." Frequency is a special sort of speed and can be treated as a space/time ratio.
This is also true of particles having mass. Electrons have a property called "intrinsic spin." This is not the ordinary type of spin you could visualize on a spinning toy; in fact, it can be demonstrated in one-dimensional systems where ordinary angular momentum cannot even be defined. Atoms also possess this kind of spin ("intrinsic angular momentum"), as the Stern-Gerlach experiment demonstrated in 1922. Intrinsic spin and intrinsic angular momentum are also akin to speed, but imply rotational space/time relationships instead of linear or oscillatory ones.
In other words, photons (light), atoms, and subatomic particles all seem to possess inherent space/time ratios of various sorts. As far as we know the entire universe is made up of these three classes of entities and so this again amounts to saying that everything in the universe is a space/time ratio.
This would actually resolve the fundamental particle dilemma. As was pointed out in An Atom or a Nucleus?, there can be no such thing as a fundamental particle. Particles can be converted into radiation and radiation into particles, but that which is truly fundamental cannot change into something else. If radiation and matter are composed of space/time ratios, then their interconvertibility is understandable. When an electron annihilates a positron, for example, a rotational space/time relationship simply converts into the form represented by radiation. Both are still fundamentally space/time ratios.
We cannot explain what space and time really are. Such an explanation could only occur from a viewpoint that is outside of the physical universe, and therefore outside of the scope of science. We humans have an intuitive feel for space and time, but they are both inherently unanalyzable. The function of space and time, on the other hand, seems to clearly involve the concept of separation. It is as though God created them so he could say, "I am here, and you are there. I am me, and you are you." This "separability" is fundamental to concepts like "locality," "identity," and "existence." These are all very important to the physicist and to our understanding of the universe.
Apparent Properties of Space and Time
While we cannot explain what space and time are, it will be fruitful to note their key properties. These seem to be as follows:
1. Space is three-dimensional. Space has a property we could call "extensionality," and it manifests this property in three independent ways; a position in space is typically described by three independent numbers. This is just a technical way of saying space is "three-dimensional."
2. Time progresses. The most obvious effect of time is to order events in time and to separate such events in time. Time seems to progress only in one direction.
Note that the ordering of events in time seems to be independent of their ordering in space. I can lay out cards on a table, but their spatial order says nothing about which card was laid down first or last.
3. Time is three-dimensional in the same sense that space is three-dimensional.
This peculiar conclusion is forced upon us when we try to account for the observed properties of light. The Michelson-Morley experiment, Bradley’s telescopic stellar aberration, and de Sitter’s problem, all suggest that the speed of light is constant in all unaccelerated reference systems in a vacuum. The measured speed does not depend on the speed of the emitter, or upon the physical reference system used for measurement. This actually facilitates our understanding of the universe, but is also counterintuitive. An illustration will help clarify the meaning of these statements.
Suppose two automobiles are moving directly away from each other, one traveling north and the other south. Suppose that their speedometers each show 50 miles per hour as the rate of travel over the ground. Simple intuition tells us that the separation rate of the two automobiles relative to each other is 100 miles per hour.
But now suppose we repeated this experiment using two photons instead of automobiles. Photons move at speed "c" the speed of light. The rate at which the two photons separate—the total spatial separation divided by the total time separation—is expected to be 2c. Simple experiments do in fact show a speed of 2c, but the more fundamental evidence mentioned above suggests that this speed is only an artifact of the reference system. The photons physically separate at the actual rate of c, not 2c.
The simplest way around this uncomfortable conclusion is to claim that time is three-dimensional like space, and that photons travel simultaneously in both space and time. As the accompanying illustration shows, the two photons have moved one spatial unit away from the source and are separated from each other by two spatial units. But if light travels through time, the total temporal separation would also be two units. This keeps the ratio constant and so the speed of the photon, whether relative to the source, or relative to the other photon, would always be c. In other words it is both constant and independent of the reference system.
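The arithmetic of the two-photon example can be laid out in a few lines. Note that the second assertion encodes this article's proposal (simultaneous travel through coordinate time), not standard physics; it simply shows that the bookkeeping is self-consistent.

```python
c = 299_792_458.0   # speed of light, m/s
t = 1.0             # seconds since emission

# In a single spatial reference system the photons sit at +ct and -ct,
# so the coordinate separation rate is 2c, as simple experiments show:
assert (c * t - (-c * t)) / t == 2 * c

# The article's proposal: the photons also separate by 2t in coordinate time,
# so the ratio of total spatial to total temporal separation stays at c:
assert (2 * c * t) / (2 * t) == c
```

Whatever the interval t, the ratio in the second line is constant; that is the sense in which the measured 2c would be only an artifact of the reference system.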
This seems like a good explanation except that temporal positions cannot be depicted in a spatial reference system. So how do we know this is really happening? Is there any evidence that photons have a physical position in three-dimensional (or "coordinate") time? Yes, there is.
Modify the above illustration by requiring that the two photons be emitted in the same event. For example, there is a device that can convert a single violet photon into two red photons; the total energy remains the same and the requirement that they originate in the same event is satisfied. These photons can then separate as shown in the illustration. It can be shown that if something alters the polarization of the photon moving to the right, something will also happen to the polarization of the one moving to the left. The two photons could be widely separated, even miles from each other, and this will still happen. How can one photon "know" what has happened to the other? Could some effect be propagated across space at twice the speed of light? Or is this what Einstein called "spooky action at a distance"—a concept that makes physicists very uncomfortable?
The underlying explanation seems to be simple. The photons are moving in both space and time. In space they are separating, but because they originated in the same event, they remain in the same temporal location, and that location moves away from the source and carries the two photons. It follows that if I disturb one photon, the other one becomes disturbed because they are both in the same temporal location, even though they are not in the same spatial location. Our spatial reference system is incapable of depicting temporal locations, and so the effect looks like the incomprehensible "action at a distance."
This effect, though not the explanation, is actually well known to physicists. It was first described in a scientific paper written by Einstein, Podolsky, and Rosen in 1935, and it later came to be known as the "EPR paradox." It was originally a "thought experiment" with no experimental basis. But in the 1960s a mathematical theorem by John S. Bell allowed the paradox to be tested experimentally. Several experiments of widely differing designs have been performed since then, and the physical reality of the EPR paradox has been thoroughly confirmed. Physicists still have not found a plausible explanation for this effect, and articles about it continue to appear regularly in the scientific and engineering journals. It is a fascinating and classical problem in quantum physics. (For further discussion see: The Problem of Quantum Locality)
4. Space progresses in the same sense that time progresses.
We readily sense the progression of time, but space seems to "stay put" and not progress. If space did progress, it would manifest itself as an expansion. Everything in our environment would be moving away from everything else. I know of only two instances where this seems to be the case: photons always move outward and away from the source of emission, and galaxies are moving away from us as well as each other (the "expanding universe"). Both effects involve what could be called "free space." But if both space and time are three-dimensional, and if both progress, why do we sense the progression of time but not space?
To humans, one second of time is a readily comprehensible quantity. But physically a natural quantity of time is more likely on the order of the Rydberg fundamental frequency, or about 10^-16 seconds. That means that we humans apprehend an enormous amount of time at one glance. What if our view of space could be similarly enlarged? What would we see if our desktop unit of space were 10^16 light-seconds? That is roughly 300 million light years. Our galaxy is about 100,000 light years in diameter. On this scale we would need a microscope just to see a galaxy! If our desktop were large enough to hold, say, 100 of these units of measurement, we would be looking at 30 billion light years of space in one glance. The evidence from astronomy indicates that under these circumstances we would definitely sense the expansion (progression) of space. But we would not have any obvious clue that it is three-dimensional.
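The scale comparison in the paragraph above is ordinary unit arithmetic and can be checked directly:

```python
# 1 light-year = the number of light-seconds in a year
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 seconds

unit_ly = 1e16 / SECONDS_PER_YEAR       # 10^16 light-seconds expressed in light-years
assert 3.0e8 < unit_ly < 3.3e8          # ~317 million light-years: "roughly 300 million"
assert 3.0e10 < 100 * unit_ly < 3.3e10  # 100 desktop units span ~30 billion light-years
```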
This idea, incidentally, is consistent with statements in the Bible about the "stretching out of the heavens." (Job 26:7, 9:8, 37:18, Psalm 104:2, Isaiah 40:22, 42:5, 44:24, 45:12, 48:13, 51:13, Jeremiah 10:12, 51:15, Zechariah 12:1) It is also consistent with recent discoveries in astronomy pertaining to Einstein's cosmological constant, dark energy, etc: "REPULSIVE FORCE IN THE UNIVERSE", Physics News Update, March 4, 1998, http://www.aip.org/pnu/1998/split/pnu361-1.htm ; http://www.wired.com/wiredscience/2008/12/dark-energy-ein/
Summary of Key Space/Time Concepts
- Both space and time progress in three dimensions. This means that space expands and time expands. We cannot see the expansion of time in our reference system but we can see the effects of the expansion of space ("clock space") with appropriate combinations of telescopes and spectrographs.
- Both space and time depict separation and the depiction of separation makes use of both extensionality and expansion.
- Space and time have identical properties and are fundamentally indistinguishable. There is no way to tell which is which. For now, we might simply think of them as two "realms of separation."
- Evidence from quantum physics suggests that space and time exist in very small but discrete, indivisible units.
Space and time are apparently always coupled into ratios. The master ratio is apparently the speed of light, c, which must represent the basic speed of space/time itself. This appears to be the "nothing datum" for the physical universe. (Other ratios are "not nothings," in other words, particles or radiation.) The temporal portion of this ratio is the "time" that we humans perceive as progressing. The progression of the spatial portion explains why the universe expands (the "expansion of the universe"). This expansion is not due to the "Big Bang" that scientists claim blew the stars and galaxies apart during the birth of the universe; it is due to the progression of space itself.
Space and time are not the backgrounds or settings in which events take place. They are the events themselves. I know that readers will have a great deal of difficulty with this and I have had to sacrifice some technical accuracy to keep things understandable. For now, the reader should try to work with both viewpoints.
"Nothing happens until something moves." —Albert Einstein
Gravitation and Space/Time ratios
If matter reduces to a space/time ratio, then matter is on the move. But where is it going? We know matter gravitates towards other matter and so the obvious answer is "towards all other matter." Gravitation would be a nice property to incorporate into a stellar propulsion system because it inherently causes motion towards other things. If we want to visit another star system, we must move towards it, not away from it!
The summary section above left us with a completely empty universe that was expanding at the speed of light in both time and space. If we could mentally stand outside of this universe and throw some things into it, what would happen? There are three major cases:
Let’s say I throw in a handful of special fluff powder. This special powder has no mass and when it engages the physical universe, all of its component particles are swept outward and away from the original location at the speed of light. The original locations of each of the particles are swept along in the expansion of space. If we called these particles "photons," we would realize that photons are actually stationary; they have no motion relative to space or to time.
That nicely solves a major dilemma in physics: the need for a medium in which to propagate the wave motion of light. In this scenario, light is stationary; it is not propagated, does not go anywhere, and has no need for a medium. It would be swept outward and away from its source at the speed of light, and that is exactly what is observed experimentally!
Now suppose I create some stuff that has a property I will call "antimotion." This antimotion, in effect, figures out which way space and time are expanding and moves in the opposite direction. We will suppose that the speed of the antimotion is equal to the speed of light. What happens if I place a handful of this stuff into the physical universe? From my viewpoint it will remain stationary, because the antimotion exactly opposes the outward expansion of space/time. Note carefully that it appears to be stationary because it is actually moving at the speed of light relative to the space and time in which it is located!
This nicely solves a major dilemma in astrophysics: the need for an explanation of the stability of globular clusters. A globular cluster is a roughly spherical blob of tens of thousands of stars. The cluster does not rotate and so astronomers would expect gravitation to draw the stars together, causing the whole cluster to collapse. But they are manifestly very stable structures. What apparently happens is that the inward gravitational motion of an individual star is exactly balanced out by the outward expansion of space/time. Gravitation moves things "towards" and the expansion moves things "away." The result is an equilibrium and the structure is stable. ( http://arxiv.org/abs/0707.2459 )
Now suppose I create some more of this antimotion stuff but this time make it so it moves anti to the expansion of space/time at twice the speed of light. When I throw a handful of this stuff into the universe, the space/time expansion tries to move it apart, but the antimotion has twice the intensity and causes it to move together. We would say the particles "gravitate" together. Note carefully that they come together, not because they are exerting "gravitational forces" on each other, but because they are on their own independent course, and that course is "anti to outward" in every case for each individual particle.
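The three thought-experiment cases reduce to adding signed speeds. The toy model below merely restates the article's own bookkeeping (all speeds in units of c, with "+" meaning outward); it is an illustration of the argument, not established physics.

```python
EXPANSION = +1  # outward progression of space/time, in units of c

def net_motion(intrinsic):
    """Net coordinate motion: positive = swept outward, negative = moves 'towards'."""
    return EXPANSION + intrinsic

assert net_motion(0)  == +1   # case 1: massless "fluff" (photons), swept outward at c
assert net_motion(-1) ==  0   # case 2: antimotion at c; appears stationary (globular-cluster balance)
assert net_motion(-2) == -1   # case 3: antimotion at 2c; net inward motion, i.e. gravitation
```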
But how would matter acquire this antimotion? You have already read the answer. Matter is an intrinsic space/time ratio. It is already in motion (or "is motion"). The motion only needs to be opposite that of the space/time expansion.
A deeper analysis shows that this motion can be completely described by one number (like +1 for the expansion or -2 for the antimotion). In other words, its sole distinguishing characteristic is just a magnitude. This is unusual because we usually think of "motion" as having both a magnitude and a direction. So this motion literally has no direction except to say that it is "towards" or "away." To oppose the outward expansion, the intrinsic antimotion does not need a direction, just a magnitude with the correct + or - sign.
It can now be seen that gravitation is not a force. It is more properly treated as a motion. Picture an apple dropping to the earth. The earth has far more mass than the apple, and therefore far more "intrinsic motion." So it is the earth that rushes to meet the apple. The apple itself is relatively stationary in space because it has the least mass and therefore the least intrinsic motion. (See Principle of Equivalence below; this is also consistent with the current view of gravitation. See Spacetime Physics, E. F. Taylor, J.A. Wheeler, 2nd ed., p.26-29. For an explanation of the fundamental origin of gravitation, see Origin of Intrinsic Spin )
The expansion is centerless and is everywhere the same. Gravitation, however, is bound to a center and has a distinct spatial distribution of its intensity. A unit of mass has a definite quantity of motion and this motion is distributed equally in all directions. If this mass were surrounded by a spherical surface, all points on the surface would receive an equal amount of motion. The same would be true if the sphere were made larger, except that this same definite quantity of motion would now be spread out over a larger surface and would therefore be less intense. The surface area of a sphere is directly proportional to the square of its radius. Hence, the intensity of the motion towards a unit area on the surface of the sphere will be proportional to the inverse of the radius squared. If we think of the gravitational motion as being caused by forces, then this type of (motional) gravitation would have an inverse square force distribution just as expressed in Newton’s law of (conventional) gravitation. (See also Feynman Lectures on Physics Vol 2, p 1-5,4-7)
It can be seen that if there were two units of mass instead of one, this unit area would receive twice the motion. Hence, the total gravitational motion will be directly proportional to the total amount of mass. Note, however, that this will be the case only if the gravitational force is measured by its effects on a one unit test mass. If the masses have more than one unit of mass, they will give the appearance of attracting each other in direct proportion to the product of their masses, as the accompanying illustration shows.
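The geometric argument of the last two paragraphs, a fixed quantity of motion spread evenly over a sphere, reproduces the familiar inverse-square and mass-product dependences. The function names below are my own illustrative inventions:

```python
import math

def intensity_per_unit_area(total_motion, r):
    """A fixed total quantity of motion spread evenly over a sphere of radius r."""
    return total_motion / (4 * math.pi * r**2)

# Doubling the radius quarters the intensity: the inverse-square distribution.
assert math.isclose(
    intensity_per_unit_area(1.0, 1.0) / intensity_per_unit_area(1.0, 2.0), 4.0)

# m1 units of mass source the motion; each of m2's unit test masses samples it,
# so the apparent mutual effect scales with the product m1*m2.
def apparent_effect(m1, m2, r):
    return m2 * intensity_per_unit_area(m1, r)

assert math.isclose(apparent_effect(5, 3, 2), 15 * apparent_effect(1, 1, 2))
```

Multiplying `apparent_effect` by a suitable constant recovers the form of Newton's law, which is the point of the next few paragraphs.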
The only other item in Newton’s equation that needs an explanation is the proportionality constant, G. This constant presents us with two perplexing questions. G does not represent a "thing" and therefore cannot legitimately have units of measurement. So why isn’t it just a pure number? And why is a proportionality constant needed in such a fundamental equation as that for gravitation? Its presence suggests that the units of length and mass do not match the physical reality.
The explanation is that Newton’s equation is a good mathematical description of gravitation but it tells us nothing of the concepts underlying the phenomena. It says "this is what is happening" instead of "this is why and how it is happening." Another of Newton’s own equations, F=ma, states that force is proportional to mass times acceleration. But his gravitational equation, in contrast, states that force is proportional to the product of two masses, divided by the square of the distance between the two masses. This creates problems with the consistency of the units of measurement and so in conventional physics, G is arbitrarily assigned units of measurement such that the units on both sides of the "=" sign are the same.
This is an unnecessary contrivance, however. The r^2 term in the denominator does not really have the units of "length squared," for example. In reality the r^2 term represents the ratio of two areas: the ratio of the total spherical surface to a single unit of area (as shown in the illustration). The term is therefore just a pure, unitless number. Similarly, for reasons explained above, the units of m1·m2 are just "mass," not "mass squared." These are the only insights needed for the "intrinsic motion" explanation of gravitation, but most of us still like to think of motion as being caused by an external force. If so, an acceleration term must be introduced into the gravitational equation. It has a magnitude of one unit, and its sole effect is to convert the equation from the motional representation into a force representation. The gravitational equation then reduces to F = ma, and G becomes unitless, just as it should be.
The development here is thus consistent with Newton’s equation for gravitation.
Because gravitation based on intrinsic motion is not a force, it will seem to act instantly, and not have a finite propagation delay like light. This would apparently also be true for magnetic and electric "force field" phenomena because their equations take the same form as that for gravitation. This conclusion is consistent both with Newton’s equation itself (which says that the force is not dependent on time) and with the modern technical use of Newton’s equation in orbital mechanics. Although this conclusion is in disagreement with the prevailing views of the scientific community, it could be tested experimentally. (See also The Speed of Gravity in the Addendum below)
Other misconceptions about gravitation have to do with how the intrinsic gravitational motion appears in the commonly used three-dimensional "spatial reference system." Gravitation is a net inward motion at the speed of light in all three spatial dimensions simultaneously (however, see "Beyond Einstein: non-local physics" for a clarification). It seems paradoxical that gravitational motion is inherently three-dimensional, yet it can be completely described by one number (a signed magnitude); this property seems to make it, in some sense, one-dimensional ("scalar"). Just as peculiar, our "three dimensional" spatial reference system is inherently capable of portraying only one dimension of the gravitational motion. This means that the full gravitational effect cannot be seen in the reference system—only that portion of the motion parallel to the (arbitrary) alignment of the reference system can be measured by our instruments. And, although the gravitating material also has a three-dimensional time coordinate, our instruments can only see the motion that takes place in space.
Scientists have good factual reasons to believe that the gravitational force is weaker than the electrostatic force by a factor of about 4 × 10^42. I would not expect the gravitational force to be that weak, however; the low value may actually be an artifact of the reference system.
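The factor of 4 × 10^42 can be reproduced from standard constants. For two electrons, the ratio of the Coulomb attraction to the Newtonian attraction is independent of separation because the r^2 factors cancel:

```python
# CODATA values
e   = 1.602176634e-19    # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
k_e = 8.9875517923e9     # Coulomb constant, N*m^2/C^2
G   = 6.67430e-11        # gravitational constant, N*m^2/kg^2

# F_coulomb / F_gravity between two electrons; the r^2 factors cancel.
ratio = (k_e * e**2) / (G * m_e**2)
assert 4.0e42 < ratio < 4.4e42   # ~4.2e42, the "weakness" factor cited above
```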
Also, the above discussion of antimotion supposed that the intrinsic motion of the atom is an integral multiple (two in this case) of the speed of light, and that this causes a net "towards" motion of the atoms. We recognize this effect as gravitation. What is not so obvious is that, according to this viewpoint, what we commonly call the speed of light is actually the speed of the gravitational system. My own belief is that c itself is a constant. But the c we measure from our position in this galaxy will not necessarily have the same numerical value as the c that represents the speed of space/time. This has all sorts of implications for physical systems that move at high speeds and for equations like E = cB and E = mc^2. (See also Shapiro Time Delay ; Other links: "Experimental evidence that the gravitational constant varies with orientation" , http://arxiv.org/pdf/physics/0202058.pdf , http://www.hindu.com/thehindu/seta/2002/05/16/stories/2002051600120200.htm ; http://en.wikipedia.org/wiki/Gravitational_constant , http://cip.physik.uni-wuerzburg.de/~rkritzer/grav.pdf ; Update 8-30-2010: In retrospect the arguments about the speed of light actually seem to be more applicable to the Hubble "constant". Refs: http://en.wikipedia.org/wiki/Cosmological_constant ; http://www.astro.ucla.edu/~wright/cosmo_constant.html ; http://phys.org/news/2015-04-gravitational-constant-vary.html ; http://transpower.files.wordpress.com/2008/07/mathcad-hubble_local_group.pdf )
An Advanced Interstellar Propulsion System
A spacecraft and everything in it is made of atoms. The atoms move "towards" all other atoms in three dimensions. Suppose this inward intrinsic motion could be canceled in all three dimensions simultaneously. If this could actually be done, every atom would become locked into a space/time location that would move outward and away from its original location in a direction that is entirely arbitrary. From our viewpoint the spacecraft would simply explode outward at the speed of light. (The motion that each atom would acquire would be exactly like that of the photon.)
This example shows that the principle of "antigravity," strictly speaking, is not what is desired in an advanced propulsion system for spacecraft. It would have two obvious problems: it blows things apart, and the motion it produces is not steerable in any manner.
However, my impression from the Periodic table is that mass somehow involves a spin that is intrinsically two-dimensional, not three-dimensional. If this is the case, then an "extra spin" is required to distribute the intrinsic two-dimensional spin such that it opposes all three apparent dimensions of the expansion of space. In other words, it is the distribution of the effects of intrinsic spin that we recognize as gravitational mass. The mass of our everyday encounter has the full distribution and "stays put" in our reference system. But suppose, by technical means, we were able to alter or "align" the "superficial" spin of a group of atoms. This would leave the intrinsic spins of the atoms unchanged (no "cancellation of atoms" to worry about). But such atoms would now "stay put" in only two dimensions, and take off at the speed of light in the remaining one. I believe this could be the foundational principle of an advanced, interstellar propulsion system. I hope to pursue the details in future articles. (See Spin Polarization and "Motion Cancellers" in the Addendum below for some related information.)
For now, let's suppose that the technical means employed could produce an effect that is only one-tenth of one percent of the theoretical value. What sort of capabilities would a spacecraft have which utilized this system?
1. Such a spacecraft would be capable of moving at 186 miles per second.
2. This rapid acceleration/speed capability would make such a spacecraft hard to detect. In less time than it takes to blink an eye, such a craft could go from stationary to a position 18 miles away*. If we were looking right at it, it would just seem to disappear. If we got a glimpse of it while in flight it would be gone before we could focus our eyes on it. It would undoubtedly be atomic powered, use a "field propulsion technology" (the NASA term is "propellantless propulsion") and would therefore leave no contrail in the sky. Motion at this speed in the atmosphere would create a powerful sonic boom. However, electroaerodynamic technology can, even today, prevent the development of sonic shock waves on "conventional" aircraft. Hence, such a craft might not produce a sonic boom. (See "Northrop Studying Sonic Boom Remedy", AW&ST, Scott, W. B., 1/22/68, pg 21; "Experiments Indicate Electric Charge Could Quiet Sonic Boom", Product Engineering magazine, 3/11/68, pgs 35-36; "Electroaerodynamics in Supersonic Flow", Cahn, Andrew, AIAA 68-24; "Recent Experiments In Supersonic Regime With Electrostatic Charges", Cahn, Andrew, Anderson, AIAA 70-759; "'Air Spike' Could Ease Flight Problems", AW&ST, May 15, 1995, pages 66-67; "Black world engineers, scientists, encourage using highly classified technology for civil applications", AW&ST, March 9, 1992, p. 66-67; http://members.nbci.com/082499/aviation/nws001/ai014.htm ; http://home.swipnet.se/~w-34966/propuls/propulse.htm ; "Air flow control with electrohydrodynamic actuators", Guillermo Artana, Juan D’Adamo, Gastón Desimone, Guillermo DiPrimio, May 2000, http://laboratorios.fi.uba.ar/lfd/web%20publi/electroaero/Paper2.pdf ; "An Experimental Study Of An Electroaerodynamic Actuator", R. Mestiri, R. Hadaji and S. Ben Nasrallah, 2010, http://www.techscience.com/doi/10.3970/fdmp.2010.006.409.pdf ; Bluff-Body Wake Control, http://mae.osu.edu/labs/afcad/research/bluff-body-wake-control ) See Addendum below for related information. (* possible example at 2:06 http://www.youtube.com/watch?v=KKuYNtg7M0s&feature=player_embedded )
Such a craft would have a destabilizing effect on world security. It would, however, be a great boon to business. It could fly from New York to Tokyo in less time than it takes for passengers to get on board.
3. If this technology could operate on the atoms of the occupants just as effectively as the spacecraft, then the occupants would experience the same motion as the spacecraft and would move right with it, experiencing no acceleration. From their viewpoint the spacecraft would seem stationary. (In other words they would not be smashed against the walls when the spacecraft takes off at 186 miles per second). From the viewpoint of the occupants, commanding the spacecraft to move away from the earth would only seem to make the earth recede rather than make the spacecraft move.
4. Such a spacecraft would be independent of the medium in which it operates. It does not operate by "antigravity" or by repulsion, so it needs nothing to push against. It would work in interstellar space, in the atmosphere, even in the ocean.
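The numbers in points 1 and 2 above follow from simple arithmetic. The blink duration is my own assumption (roughly a tenth of a second):

```python
c_miles_per_s = 186_000            # speed of light, approx. miles per second
v = c_miles_per_s / 1000           # one-tenth of one percent of c
assert v == 186.0                  # point 1: 186 miles per second

blink = 0.1                        # assumed eye-blink duration, seconds
assert abs(v * blink - 18.6) < 1   # point 2: roughly 18 miles covered in a blink
```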
Is it possible? Are there indications that such an effect can be produced? And are there related effects that could give us some insights on this subject? I hope to write more about these topics in the future.
The Social Realizations
Articles like this one are intended to appeal to technically-minded people who enjoy scientific topics. But they are not intended to merely publish interesting insights about physics. Instead they are intended to be a subtle type of "value practice" which should have socially positive effects. I try to comment on a few of these "social realizations" in each article:
1. A factual, but unexpected and radically different viewpoint can be powerfully productive.
This article has given simple, logical answers to such questions as: What causes gravitation? How could the EPR paradox be explained? Is there such a thing as a fundamental particle? What is matter made of? What accounts for the constancy of the speed of light? How can light move from one place to another without an interconnecting medium? These and other "incomprehensible problems" were suddenly solved by a simple insight that had several unexpected consequences.
That is not to say that these ideas will be readily understood or quickly adopted by the scientific community. Radically new ideas still tend to be pictured in terms of old familiar ones and a lot of time and effort is required to break free from these conceptual restraints. Another problem is that the scientific community frequently confuses fact with theory (and even uses theory to "correct" the facts more frequently than it cares to admit). A radical departure from what is commonly accepted is viewed with a sense of betrayal—as though unclean and unwashed infidels were trespassing on the holy turf of the experts—as though someone were making a bomb threat instead of offering a new window into the universe.
Radically new ideas also have qualities that engineers and scientists don’t like: they are vague, imprecise, incomplete, sloppy, have a non-rigorous development, and "raise more questions than answers." They produce the usual false leads, misconceptions and dead ends, as well as attract attention from fringe groups and mass media who invariably spread inaccuracies and get everything blown out of proportion. (In this case follow the science, and ignore the psychological and social pollution.) New ideas are also very fragile and vulnerable to criticism by those who say "it can’t be done" and who can easily point out all kinds of "reasons" the new idea can fail and make everyone look stupid. And not everybody wants an antidote for arrogance either.
I cannot even guess whether the propulsion system described above will ultimately be developed. But I am sure there will be immense and varied benefits from efforts to learn more about the properties of space/time ratios.
2. Finding the right question is more important than finding the right answer.
Solving a problem is often just a matter of straightforward time and effort. But how do you find the right problem to solve? One principle I must emphasize strongly is to think in terms of what needs to be done rather than what can be done. Think with the final result in view, not the tools you will use to get there.
Suppose you want a better understanding of the structure of the atom. If you thought in terms of tools—using experts in nuclear physics and particle accelerators costing billions of dollars—would you have come up with the idea that the atom does not "have" a nucleus or that matter is not fundamentally made of particles? Probably not. And what if you tried to do something for which there were no tools at all, such as design some kind of "antigravity" propulsion system? If you were accustomed to thinking in terms of tools, you would not have a clue where to start.
You would actually make faster progress if you were able to cast aside your preconceived ideas, your "safe, proven truth," and depend on ordinary fact-centered perception. The problem will speak to you with a weak little voice, calmly telling you it does not want to be a problem, and even telling you how it can be solved. But its solution demands that you take the plunge into terra incognita—the land of the unknown. This land will not mistreat you, but it is a land of strange and bewildering things, where few people feel comfortable. Yet it is a land where many can be productive in spite of their fears.
Besides avoiding this key fallacy, an approach I find helpful in getting to the root cause of the problem is to keep asking questions until no further question of the same sort can be asked: "Why is the software so hard to maintain?" It is because we wrote it without standards. "Why didn’t the management support the use of standards?" They did not realize how important standards were. "Why didn’t they realize how important standards were?" Probably because they saw their task as that of managing projects instead of people. (etc.)
I make a mental graph of this that looks like a "Λ" (an upside-down Vee). The left side represents the questions and goes upwards and ends at the point where no further question of the same sort can be asked. In this example, I would eventually get to the CEO (Chief Executive Officer) and have to ask a question like "Why did the CEO do that?" This is outside the scope of the company, and is not a question of the same sort. So at this point I start looking for answers, and list them down the right side of the "Λ", generally corresponding one-to-one with the questions. I often end up with most of the questions, most of the answers, and many very useful insights.
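The Λ walk described above can be sketched as a small data structure. This is only an illustrative sketch; the question/answer chain is the hypothetical software-maintenance example from the text.

```python
# Sketch of the "Lambda" analysis: climb the left side by asking
# questions of the same sort, then read the matching answers back
# down the right side (deepest question answered first).

def lambda_walk(chain):
    """chain: list of (question, answer) tuples, root cause last.
    Returns the full walk: questions up the left side, then the
    corresponding answers down the right side."""
    questions = [q for q, _ in chain]           # ask upward
    answers = [a for _, a in reversed(chain)]   # answer downward
    return questions + answers

chain = [
    ("Why is the software so hard to maintain?",
     "We wrote it without standards."),
    ("Why didn't management support the use of standards?",
     "They did not realize how important standards were."),
    ("Why didn't they realize how important standards were?",
     "They saw their task as managing projects instead of people."),
]

for step in lambda_walk(chain):
    print(step)
```

The stopping rule from the text (quit when the next question is no longer "of the same sort") is what bounds the chain.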
Unfortunately, corporate management is usually not interested in insights of this depth. When they ask "What went wrong? Why did this fail?" they are all too often looking for someone to blame. They brag about "management for results" but this usually turns into "management of the results." Results managers can only assign blame, limit the damage, and try to clean up the mess. It is simply too late to do anything else. Real management has to manage the process that leads to the result, and must be based on finding the right thing to do, and on supporting the people who do it. Demanding results without supplying the tools and support to get those results will be ineffectual.
Once you find the questions and the answers, it is important to do something with them. I worked for an engineering firm that believed that 90% of its "technical problems" actually had their origins in the company culture. But even with this realization—which was a pretty good one for socially dense engineers—they still did not do anything to fix their culture. The company missed out on the great benefits this could have produced within just a few years.
3. The historical response to discovery now has another opportunity to repeat itself.
The script goes like this: First they say "You are crazy and we can prove it!" Then they say, "Well, you are not crazy, but it is unimportant." Finally they say: "Well, it is important, and we knew it all along!"
"All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident." (Arthur Schopenhauer, German philosopher, 1788-1860)
I hope this article will better empower you to imagine things that were formerly inconceivable. You now have real insights into perplexing problems that scientists have not been able to solve after centuries of research and lifetimes of studies. Hopefully you can now muster the courage to punch through similar conceptual barriers and ‘boldly go where you have never gone before.’ And, again, all it took was an example, in this case a simple article to point the way. (And not a cent of taxpayer’s money was spent producing it!)
See also Addendum below.
Return to Home Page
Mitigation/Elimination of Sonic Shock Waves
The Speed of Gravity: not less than 2 x 10^10 c
The Speed of Electric Fields
Variations in Speed of Light
How to Construct a Sensitive Gravity Meter
Renewed Interest in the Eötvös Experiments
What the Neutron Interferometer Reveals about Gravitational and Inertial Mass
Spin Polarization of Atoms and Photons
Some Related Links about "Gravity Modification Experiments"
Gravitational Lensing and Deflection of Photons by Gravity
The Gravitational Redshift and the Principle of Equivalence
The Shapiro Time Delay
Lack of Recoil in Railguns
The Relativistic Correction Factor, Gamma (γ)
In Search of the Geometry of Space, Time and Motion
Why is gravitation an accelerated motion? What powers gravity?
The Kinematic Time Shift, Gravitational Time Shift (2-22-03, edited 7-17-07)
The Biefeld-Brown effect
Motion Couplers and Momentum Converters
Space/time dimensions for some electromagnetic quantities
Various electrogravity, magnetogravity, and gravomechanical effects
Poynting vector insights (electromagnetic momentum)
Speculation on Potential Uses of Antigravity
What is a UFO?
How can UFOs make right-angle turns at high speed?
What did those guys know back then?
Mitigation/Elimination of Sonic Shock Waves
An article in Aviation Week & Space Technology (AW&ST, May 15, 1995, "'Air Spike' Could Ease Flight Problems," pages 66-67) shows that research in electroaerodynamic technology is alive and well.
The article says that the aerospike technology "could reduce the drag and heat transfer problems associated with hypersonic flight." It mentions that vehicles so designed could travel at Mach 25 (orbital velocity) but be subject to Mach 3 conditions in the region behind the shock wave. The ultimate goal is to build earth-to-orbit vehicles that reduce transportation costs by a factor of 100 to 1000. Such a vehicle might be "blunt bodied, lens-shaped or saucer-shaped" and would fly blunt face forward (like an Apollo heat shield). The electric energy drives the air radially away from the craft and transforms the traditional conical shock wave into a weaker parabolic one. The air behind the shock is very low in density and this reduces the heat transfer effects. The article also mentions a magnetohydrodynamic fan engine and how it could eliminate sonic booms so that a lens shaped craft "is silent but very bright in hypersonic operation." One photo and a drawing are shown.
An article from Meridian International Research has this note (in part) about electroaerodynamic technology:
Tests were further carried out in a supersonic windtunnel of 1.5 by 3 inch test section using Schlieren photography.
In one test at Mach 1.5, an 8 degree double wedge airfoil model 1.5 inches in span and 0.375 inches in chord was used. When a charge of 70 kV at 0.01 milliamperes was applied to the leading edge, the shock wave disappeared. The power used was 0.7 watts.
For a 20 metre span straight wing, this would equate to less than 400W of electrical power. (Electroaerodynamic Sonic Boom Elimination, Meridian International Research, http://www.meridian-int-res.com/Aeronautics/SSonic.htm )
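The span-scaling arithmetic in the note can be checked quickly, assuming (as the note implies) that the required power scales linearly with span:

```python
# Rough check of the Meridian note's scaling claim: 0.7 W for a
# 1.5 inch span extrapolated linearly to a 20 metre span.
INCH = 0.0254  # metres per inch

model_span_m = 1.5 * INCH          # 1.5 inch model span
model_power_w = 70e3 * 0.01e-3     # 70 kV at 0.01 mA = 0.7 W

full_span_m = 20.0
scale = full_span_m / model_span_m
full_power_w = model_power_w * scale

print(round(model_power_w, 2))  # 0.7 W for the wind-tunnel model
print(round(full_power_w, 1))   # ~367 W, consistent with "less than 400 W"
```

The linear-with-span assumption is mine, taken from how the note itself extrapolates; the real scaling could well differ.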
This technology could probably also be used with the railgun method of launching vehicles into orbit. In this scheme the vehicle is accelerated on earth and shot through the atmosphere into a highly eccentric orbit. But atmospheric drag and heating effects on the vehicle during launch are serious problems. The use of an electroaerodynamic shield may circumvent these effects. Such a relatively inexpensive launch method could be used for supplies and fuel.
A suborbital option might be to use a cleverly designed combination of an electroaerodynamic shield with an atmospheric ramjet. The shield would be used on the main airframe to reduce drag, and the ramjet, outside the shield, could produce efficient thrust up to about Mach 3 to Mach 6.
Elimination of sonic shock waves is normally difficult to do in open atmosphere. However, the waves can be suppressed in closed containers by clever techniques. This leaves me with the impression that the techniques used in open atmosphere have just not gotten clever enough (at least not on civilian aircraft). http://www.aip.org/physnews/graphics/html/macroson.htm , http://www.macrosonix.com/pdf files/Physics Today.pdf , www.aiaa.org/events/aners/Presentations/ANERS-Henne.pdf
All these studies have had an aerospace focus. But if electroaerodynamics can reduce drag, why not apply the technology to the automobile to increase gas mileage? The drag reduction would not be as dramatic as in aerospace applications, but even an improvement of a few percent would be worth looking at. And what about its use for reducing aerodynamic drag in long-distance trucking? This might be a way to cut fuel costs with only relatively minor modifications. (For some ideas, see http://www.amazing1.com/hv-dc-power-supplies.htm )
"AFRL Develops Plasma Actuator Computational Model", Air Force Print News Today, May 1, 2006
"Plasma Actuators for Bluff Body Flow Control", Alexey V. Kozlov (2007) http://www.nd.edu/~akozlov/Publications/Kozlov_A_candidacy.pdf
"Drag-resistant aerospike", http://en.wikipedia.org/wiki/Drag-resistant_aerospike
"The Northrop shock wave reduction experiment", http://jnaudin.free.fr/html/ehdaero.htm
"Sliding discharge in air at atmospheric pressure: electrical properties", Christophe Louste, Guillermo Artana, Eric Moreau, Gérard Touchard (2005) http://laboratorios.fi.uba.ar/lfd/web%20publi/electroaero/sliding.pdf
"Electric wind induced by sliding discharge in air at atmospheric pressure", E. Moreau, C. Louste, G. Touchard, http://www.sciencedirect.com/science/article/pii/S0304388607001131
"Electrical modeling of a trielectrode sliding discharge", F. O. Minotti, D. Grondona, P. Allen, and H. Kelly (2009)
"High repetition rate excimer laser directly pumped by a sliding discharge", V.K. Bashkin and A. B. Treshchalov
"Validation of Plasma Injection for Hypersonic Blunt-Body Drag Reduction", J.S. Shang (2002) http://ftp.rta.nato.int/public//PubFullText/RTO/MP/RTO-MP-089///MP-089-38.pdf
"Airfoil fluid flow control system", John R. Boyd (1960) http://www.freepatentsonline.com/2946541.pdf
"Air resistance reducer", Everett M. Hadley (1937) http://www.freepatentsonline.com/210257.pdf
"Apparatus for the promotion and control of vehicular flight", H.C. Dudley (1963) http://www.freepatentsonline.com/3095167.pdf
"Let Caesar's things belong to Caesar"
The Speed of Gravity: not less than 2 x 10^10 c
"The most amazing thing I was taught as a graduate student of celestial mechanics at Yale in the 1960s was that all gravitational interactions between bodies in all dynamical systems had to be taken as instantaneous. . . .Indeed, as astronomers we were taught to calculate orbits using instantaneous forces; then extract the position of some body along its orbit at a time of interest, and calculate where that position would appear as seen from Earth by allowing for the finite propagation speed of light from there to here. . . . That was the required procedure to get the correct answers." And thus begins an article by astronomer Dr. Tom Van Flandern about the speed of gravity. ("The Speed of Gravity - What the Experiments Say" , Tom Van Flandern, Physics Letters A, 250 (1-3) (1998) pp. 1-11; (http://www.usc.edu/isd/elecresources/gateways/physlet_A.html ). The article was reprinted in Infinite Energy, Issue 27, 1999, pages 50-58.
My own article about the nature of gravity shows that it can be treated as an intrinsic motion that is not propagated. Thus, the gravitational effect of a change in the position of a celestial body is felt instantaneously—everywhere in the Universe. This is in accord with Newton's Universal Law of Gravitation, where the speed of gravity is unconditionally infinite. Although Van Flandern does not believe that the speed of gravity is infinite, he does discuss experimental evidence that sets a lower limit on the speed. "Standard experimental techniques exist to determine the propagation speed of forces. When we apply these techniques to gravity, they all yield propagation speeds too great to measure, substantially faster than light speed." The speed of gravity, "if it is a force of nature propagating in flat space-time [is] not less than 2 x 10^10 c." (That is, not less than 20 billion times the speed of light.)
The most obvious and incontrovertible experimental evidence for an extremely high speed of gravity is that gravity has no aberration.
To understand this effect imagine that you are standing out in a light rain storm and that the raindrops are falling straight down. You have a straight piece of plastic pipe in your hand that is about four inches in diameter and about four feet long. You want to align the pipe so that the raindrops fall down the pipe without touching its inside wall. Not surprisingly you find that the pipe has to be aligned straight up and down, exactly parallel to the falling rain drops. But now suppose you begin walking. You still want the raindrops to fall down the pipe without touching the sides. You find that you now have to tilt the pipe in the direction of your motion, otherwise the raindrops will collide with the inside walls of the pipe. If you were moving very fast (compared to the speed of the falling raindrop) you would have to point the pipe almost horizontally in the direction of your motion in order for the raindrops to "fall" straight down the center line of the pipe.
An effect like this was found for starlight and telescopes. It was discovered by an astronomer named Bradley in 1728. It arises because the Earth is moving around the Sun at a speed that is significant (i.e., not ignorable) compared to the speed of light. The effect is well-known and is called "stellar aberration". It requires that telescopes be "misaimed" slightly so that the light will travel directly down the center-line of the telescope. The magnitude of the effect is dependent on the Earth's motion around the Sun relative to the starlight. It can displace the apparent position of the stars by up to 20 seconds of arc. Likewise, the apparent position of the Sun in the sky is displaced 20 arc seconds from its true position.
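The ~20 arc-second figure quoted above follows from a one-line small-angle calculation, since for speeds far below c the aberration angle is roughly v/c radians:

```python
import math

# Back-of-envelope check of Bradley's stellar aberration figure:
# aberration angle ~ v/c for Earth's orbital speed.
v_earth = 29.78e3      # Earth's mean orbital speed, m/s
c = 299_792_458.0      # speed of light, m/s

angle_rad = v_earth / c
angle_arcsec = math.degrees(angle_rad) * 3600

print(round(angle_arcsec, 1))  # ~20.5 arc seconds
```

This matches the "up to 20 seconds of arc" displacement the text attributes to stellar aberration.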
When photons are emitted from the Sun, they take about 8.3 minutes to reach Earth. By that time the Earth has moved significantly in its orbit. The incoming photons are no longer on a strictly radial path, "straight down the tube" as it were. Instead, they have a very small tangential component. Because light has momentum, the effect would tend to slow the Earth in its orbit. The effect is known as the Poynting-Robertson effect; it causes dust particles in orbit about the Sun to spiral inward.
Now what about gravity? Let's suppose that the Sun "emits gravity" just like it emits photons. Do we see an aberration effect for gravity as we do for photons? Radiation pressure is repulsive but the effect of gravity is attractive. If there were such an effect, it would tend to speed the Earth up in its orbit rather than slow it down. "The net effect of such a force would be to double the Earth's distance from the Sun in 1200 years. There can be no doubt from astronomical observations that no such force is acting," notes Van Flandern. "From the absence of such an effect, Laplace set a lower limit to the speed of propagation of classical gravity of about 10^8 c, where c is the speed of light." (Laplace, P., Mécanique Céleste, volumes published 1799-1825; reprinted 1966.) Astronomer Sir Arthur Eddington noted this effect too. (Eddington, A. S., Space, Time and Gravitation. Originally printed in 1920, reprinted by Cambridge University Press, 1987.)
If gravity and light propagate at the same speed, then the angle between the acceleration vector for the Earth-Sun system and the incoming photons from the Sun should be zero. Precise measurements, however, show that the Earth accelerates toward a position that is 20 seconds of arc in front of the visible Sun (that is, the Earth is accelerating toward where the Sun actually is, not toward where its light shows up in the sky 8.3 minutes later). This again shows that light and gravity cannot have the same propagation speed.
A third manifestation of the difference in propagation speeds comes from solar eclipses. The Sun has an aberration of 20 arc seconds. The Moon, however, has an aberration of only 0.7 arc seconds due to its slower motion around the Earth. The Moon requires 38 seconds of time to move 20 seconds of arc in the sky relative to the Sun. During an eclipse the time of gravitational maximum can be compared with the time of light minimum. If there is no difference in propagation speed, the two times should coincide. But as Van Flandern notes: "We find that the maximum eclipse occurs roughly 38 +/- 1.9 seconds of time, on average, before the time of gravity maximum. If gravity is a propagating force, this three-body (Sun-Moon-Earth) test implies that gravity propagates at least twenty times faster than light."
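The 38-second figure can be sanity-checked from the Moon's average synodic rate (a rough mean-motion estimate; the quoted value comes from detailed ephemerides):

```python
# How long does the Moon take to move 20 arc seconds across the sky
# relative to the Sun? Use its mean synodic (new moon to new moon) rate.
synodic_month_days = 29.53
deg_per_sec = 360.0 / (synodic_month_days * 86400)  # mean relative rate
arcsec_per_sec = deg_per_sec * 3600                  # ~0.51 arcsec/s

t = 20.0 / arcsec_per_sec
print(round(t, 1))  # ~39 s, close to the quoted 38 +/- 1.9 s
```

The small difference from 38 s comes from using the mean rate rather than the actual lunar rate at eclipse time.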
The article discusses other evidence from radar ranging and spacecraft data. These set a lower limit on the speed of gravity of 10^9 c. Evidence using data from binary pulsar PSR1534+12 suggests an even more stringent lower limit of 2 x 10^10 c.
The article also discusses Lorentzian relativity, Special Relativity, and General relativity, gravitational waves, gravitational radiation, supernova explosions, and other very interesting topics. It is clearly written and has several useful tables, illustrations, formulas, and a bibliography.
6-14-03 Update: (Note that the propagation velocity of a gravitational pulse is "at least several thousand times the speed of light, perhaps faster!" and that the intensity of the beam is apparently limited only by geometry, not diffraction.)
“The Cosmic Ether: Introduction to Subquantum Kinetics” Paul A. LaViolette (2012) http://www.sciencedirect.com/science/article/pii/S1875389212025205
"The notion of an ether, or of an absolute reference frame in space, necessarily conflicts with the postulate of special relativity that all frames should be relative and that the velocity of light should be a universal constant. However, experiments by Sagnac (1913), Graneau (1983), Silvertooth (1987, 1989), Pappas and Vaughan (1990), Lafforgue (1991), and Cornille (1998), to name just a few, have established that the idea of relative frames is untenable and should be replaced with the notion of an absolute ether frame. Also a moderately simple experiment performed by Alexis Guy Obolensky has clocked speeds as high as 5c for Coulomb shocks traveling across his laboratory (LaViolette, 2008a). Furthermore Podkletnov and Modanese (2011) report having measured a speed of 64c for a collimated gravity impulse wave produced by a high voltage discharge emitted from a superconducting anode. These experiments not only soundly refute the special theory of relativity, but also indicate that information can be communicated at superluminal speeds."
"Measurement of the Speed of Gravity", Yin Zhu (2013) http://arxiv.org/ftp/arxiv/papers/1108/1108.3761.pdf
Appendix B: The Speed of Gravity: An Observation on Satellite Motions
"The radius of orbit of the geosynchronous satellite can be observed at the precision of less than 8 cm. And, a force about ~10^-9 m/s^2 can make the orbit of satellite shifted. Here, the gravitational forces of the Sun acting on the satellite from the present and retarded positions are calculated respectively, assuming that the retarded position is determined with that the speed of the gravitational force is equal to the speed of light. It is shown that the difference of the force between the present and retarded positions of the Sun acting on a geosynchronous satellite can be larger than 1×10^-7 m/s^2. And, the difference of the radius of the orbit of the satellite perturbed by the gravitational force of the Sun from the present and retarded positions in 3000 s can be larger than 8.2 m. It indicates that the gravitational force of the Sun acting on the satellite is from the present position of the Sun and that the speed of the gravitational force is much larger than the speed of light in a vacuum."
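The ~1×10^-7 m/s^2 scale in the abstract above is plausible on dimensional grounds: if the Sun's pull pointed toward its light-delayed position, the force direction would be off by roughly v/c radians, giving a spurious tangential acceleration of about a_Sun × (v/c). A rough check:

```python
# Order-of-magnitude check of the retarded-position force difference.
GM_sun = 1.327e20     # Sun's gravitational parameter, m^3/s^2
r = 1.496e11          # 1 AU, m (geosynchronous altitude is negligible here)
v = 29.78e3           # Earth's orbital speed, m/s
c = 2.998e8           # speed of light, m/s

a_sun = GM_sun / r**2         # Sun's pull at Earth's distance, ~5.9e-3 m/s^2
delta_a = a_sun * (v / c)     # misalignment-induced component, ~5.9e-7 m/s^2

print(f"{delta_a:.1e}")       # comfortably above the quoted 1e-7 m/s^2
```

This is only a scale estimate; the paper's figure comes from computing the two force vectors explicitly.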
"Measuring Propagation Speed of Coulomb Fields", R. de Sangro, G. Finocchiaro, P.Patteri, M. Piccolo, G. Pizzella (2012) https://arxiv.org/abs/1211.2913
6-1-2017 Update: The observational effects of the speed of gravity differing from the speed of light have been presented above. If gravity propagated at the speed of light instead of instantaneously, the effects on orbits of satellites and solar system planets would be very obvious. However, these are relatively small systems. Instead of a solar system, consider the effects on something the size of a galaxy:
We know, the solar system and other stars are orbiting around the center of the Milky Way and the radius of the Milky Way is larger than 5×10^4 light-years. . . . But, we know, the Milky Way is moving with a speed on the level of 5×10^2 km/s. Therefore, the distance between the retarded position and present position of the center of the Milky Way is . . . 25 light-years. And, a galaxy is usually older than . . . 1×10^10 years. . . . The distance between the retarded and present positions of this center should become larger than 5×10^6 ly. In this case, a spiral galaxy could not maintain the form of a disc. Instead, it would be a very long strip along the direction of the galaxy's motion. However, no galaxy has become such a long strip. ("The speed of gravity: An observation on galaxy motions", Yin Zhu (September 2016) DOI: 10.13140/RG.2.2.30917.45287 https://www.researchgate.net/publication/308409482_The_speed_of_gravity_An_observation_on_galaxy_motions ) (12-26-2017 Update)
"A Critical Analysis of LIGO's Recent Detection of Gravitational Waves Caused by Merging Black Holes", Stephen J. Crothers (4 March 2016) http://vixra.org/pdf/1603.0127v4.pdf
Abstract: The LIGO Scientific Collaboration and the Virgo Collaboration have announced that on 14 September 2015, LIGO detected an Einstein gravitational wave directly for the first time, with the first observation of a binary black hole merger. The announcement was made with much media attention. Not so long ago similar media excitement surrounded the announcement by the BICEP2 Team of detection of primordial gravitational waves imprinted in B-mode polarisations of a Cosmic Microwave Background, which proved to be naught. . . . The insurmountable problem for the credibility of LIGO's claims is the questionable character of the theoretical assumptions upon which they are based. In this paper various arguments are presented according to which the basic theoretical assumptions, and the consequential claims of detecting gravitational waves, are proven false. The apparent detection by the LIGO-Virgo Collaborations is not related to gravitational waves or to the collision and merger of black holes.
. . .
However, the crucial point of the foregoing mathematical development is that Einstein's gravitational waves do not have a unique speed of propagation. The speed of the waves is coordinate dependent, as the condition at Eq.(A.6) attests. It is the constraint at Eq.(A.6) that selects a set of coordinates to produce the propagation speed c. A different set of coordinates yields a different speed of propagation, as Eq.(A.3) does not have to be constrained by Eq.(A.6). Einstein deliberately chose a set of coordinates that yields the desired speed of propagation at that of light in vacuum (i.e. c = 2.998×10^8 m/s) in order to satisfy the presupposition that propagation is at speed c. There is no a priori reason why this particular set of coordinates is better than any other. The sole purpose for the choice is to obtain the desired and presupposed result.
All the coordinate-systems differ from Galilean coordinates by small quantities of the first order. The potentials gμν pertain not only to the gravitational influence which has objective reality, but also to the coordinate-system which we select arbitrarily. We can ‘propagate’ coordinate-changes with the speed of thought, and these may be mixed up at will with the more dilatory propagation discussed above. There does not seem to be any way of distinguishing a physical and a conventional part in the changes of gμν. “The statement that in the relativity theory gravitational waves are propagated with the speed of light has, I believe, been based entirely upon the foregoing investigation; but it will be seen that it is only true in a very conventional sense. If coordinates are chosen so as to satisfy a certain condition which has no very clear geometrical importance, the speed is that of light; if the coordinates are slightly different the speed is altogether different from that of light. The result stands or falls by the choice of coordinates and, so far as can be judged, the coordinates here used were purposely introduced in order to obtain the simplification which results from representing the propagation as occurring with the speed of light. The argument thus follows a vicious circle.” Eddington [38 §57]
Eddington, A.S., The Mathematical Theory of Relativity, Cambridge University Press, Cambridge, (1963). (If you search the reprint of this book using Amazon's Look Inside feature, use "vicious circle" for the search text. In the book published by Forgotten Books, the quote is on page 131.)
https://www.researchgate.net/publication/2175473_Relativity_and_wavy_motions
. . . There is a widespread and erroneous conviction (see e.g. Fock, p. 194) according to which in GR gravitation is propagated with the speed of light in vacuo, i.e. with the speed of light in empty space of SR. The supporters of this false opinion claim that it follows, e.g., from eqs. (4) and (5), when interpreted as differential equations of wave fronts and rays of GW's. Now, this is trivially wrong even from the viewpoint of the believers in the physical existence of GW's, because eqs. (4) and (5) – quite independently of their interpretation – affirm in reality that the concerned wave fronts and rays have a propagation velocity that depends on the metric tensor gjk(x), even if this tensor has the form of a mathematical undulation. The non-existence of physical GW's has the following consequence: if we displace a mass, its gravitational field and the related curvature of the interested manifold displace themselves along with the mass: under this respect Einstein field and Newton field behave in an identical way.
. . . It is regrettable that various physicists insist on publishing useless considerations and computations on hjk–waves . It is time that astrophysical community desist from beating the air – and from squandering the money of the taxpayers.
"On the Signal Processing Operations in LIGO signals", Akhila Raman (Nov 2017) https://arxiv.org/pdf/1711.07421.pdf
Abstract: This article analyzes the data for the five gravitational wave (GW) events detected in the Hanford (H1), Livingston (L1) and Virgo (V1) detectors by the LIGO collaboration. It is shown that GW170814, GW170817, GW151226 and GW170104 are very weak signals whose amplitude does not rise significantly during the GW event, and they are indistinguishable from non-stationary detector noise.
"The Speed of Gravity - What the Experiments Say"
"Experiments indicate that gravity and electrodynamic forces both propagate far in excess of lightspeed." (from abstract) http://www.metaresearch.org/cosmology/gravity/speed_limit.asp
Meaning of the "speed of gravity"
"Kopeikin and the Speed of Gravity"
"French Nobel Laureate turns back clock: Marshall's global experiment, von Braun memories evoked during August 11 solar eclipse" http://science.nasa.gov/science-news/science-at-nasa/1999/ast12oct99_1/ (includes a list of various gravitational anomalies)
"Beyond Einstein: non-local physics" Brian Fraser (2015) BeyondEinstein.html
The Speed of Electric Fields
Newton's law of gravity has no time dependence, and no velocity dependence. According to Newton's formula (F = Gm1m2/r^2), the gravitational force acts instantaneously. If the Sun, for instance, were to suddenly disappear from existence, the light that had been emitted from it would still continue flowing toward Earth for about 8.3 minutes, but the gravitational effect would disappear instantly.
This also means that the gravitational force from a moving body will show no aberration due to its motion. The force from a "source" of gravity will point directly (that is, radially) to a "detector" of gravity with no displacement due to time, or motion, and with no need to calculate "retarded positions", etc.
Most of us are familiar with the concept of aberration even though we do not use the term. Next time you hear a high-altitude jet aircraft in the sky, look up and see where it is. You'll find that it is far ahead of the sound that it makes. This is because sound in the atmosphere travels approximately 1 mile every 5 seconds. For a jet directly overhead at 30,000 feet the sound won't reach you for about 30 seconds. During that time the jet travels an additional 4 miles or so. Hence, the sound and the source of the sound seem to be in two different positions. This difference is the "aberration" and the position directly overhead is the "retarded position."
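The jet-overhead arithmetic works out as follows (the 500 mph cruise speed is an assumed round figure, not stated in the text):

```python
# Sound-aberration arithmetic for a jet directly overhead.
altitude_ft = 30_000
altitude_mi = altitude_ft / 5280        # ~5.7 miles up

sound_mi_per_s = 1 / 5.0                # "1 mile every 5 seconds"
delay_s = altitude_mi / sound_mi_per_s  # ~28 s for the sound to arrive

jet_speed_mph = 500                     # assumed cruise speed
miles_moved = jet_speed_mph / 3600 * delay_s

print(round(delay_s), round(miles_moved, 1))  # ~28 s, ~3.9 miles ahead
```

The result agrees with the text's "about 30 seconds" and "an additional 4 miles or so."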
Other equations in physics, such as that for the Coulomb force (F = kq1q2/r^2), have the same form as that for the gravitational force. This raises obvious questions: Does the force between electric charges act instantaneously? Is the force free of aberration if one of the charges is moving?
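Because the two laws share the same inverse-square form, one function covers both; only the constant and the "charges" differ. A minimal illustration with standard constants:

```python
# Both force laws are k * a * b / r^2 with different constants.
def inverse_square(k, a, b, r):
    """Generic inverse-square force magnitude."""
    return k * a * b / r**2

G = 6.674e-11    # gravitational constant, N m^2/kg^2
k_e = 8.988e9    # Coulomb constant, N m^2/C^2

# Sun-Earth gravitational force:
F_grav = inverse_square(G, 1.989e30, 5.972e24, 1.496e11)
# Force between two 1 C charges 1 m apart:
F_coul = inverse_square(k_e, 1.0, 1.0, 1.0)

print(f"{F_grav:.2e}")  # ~3.5e22 N
print(f"{F_coul:.2e}")  # ~9.0e9 N
```

The structural identity of the two formulas is what motivates the article's question of whether the Coulomb force is also aberration-free.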
The previous article (The Speed of Gravity: not less than 2 x 10^10 c) presented evidence that the propagation speed of such forces is extremely fast, far in excess of that of light:
"Experiments indicate that gravity and electrodynamic forces both propagate far in excess of lightspeed." (from abstract) http://www.metaresearch.org/cosmology/gravity/speed_limit.asp
We would now like to find evidence that is more directly in the realm of electrical science, instead of astronomy. In astronomy, "all gravitational interactions between bodies in all dynamical systems had to be taken as instantaneous." But will this hold true for electrical forces? How do physicists design particle accelerators, where the speed of the particle is comparable to the supposed "speed of the electric field"? Does the speed of the field seem to be instantaneous, or do the designs have to allow for an aberration effect and "retarded positions"?
Professor of physics A. P. French has a relevant note in his very informative book Special Relativity (1968), p. 242-243, 267:
"Now the electric field due to a stationary source charge is radial and, of course, spherically symmetrical; that is, it is the same in all directions. It is simply the Coulomb field . . . . If the source charge is moving uniformly, the electric field is no longer spherically symmetrical. Its strength is different in different directions. But, at each instant, the direction of the electric field is still radial with respect to the position of the source charge at that same instant.
If you think about this last result a bit—that at each instant the electric field due to a uniformly moving source charge is directed radially away from the position of the source charge at that same instant—you may begin to realize that this is a very surprising result."
To see why this is so surprising, consider the following illustration:
The electric field from a moving electric charge has no aberration.
Electric charge, q1, is moving at high speed in a particle accelerator from X1 to X2. A charge detector is located at P and can detect both the intensity and direction of the field associated with q1. Hypothetically, q1 emits an electric field which propagates at the speed of light. As q1 passes through location X1, the field is on its way to P, but takes a finite time to get there. By the time the field reaches P, q1 has actually moved to X2. From what direction, then, does the detector at P see the electric field as q1 arrives at X2? Does it see the field as though it were at the "retarded position" of X1? Or does it see it as emanating from X2, where q1 is presently located?
"Nevertheless, the field at P points away from the present position of q1. Nature behaves in such a way that, for a uniformly moving source charge, even though the field produced at some point P originated from the location and behavior of the source charge at an earlier time, nevertheless the field points away from the position of the source charge at the present time. It is as though nature calculates where the source charge should be at the present time and acts accordingly. . . . Thus a result which at first glance may seem rather obvious is seen, upon closer examination, to be quite surprising—but nevertheless true."
But it is surprising only if, as French says, we believe "that no effect—no mass, no energy, no force—can be transmitted with a speed greater than c". If the electric field propagates instantaneously, then the lack of aberration is no surprise at all. We simply have a different problem requiring a different explanation, namely: how can electric fields propagate instantaneously?
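For scale, here is a sketch of the aberration angle a light-speed-propagated field would naively be expected to show for a uniformly moving charge (simple geometry, hypothetical speeds; the point of French's discussion is that no such aberration is actually observed for uniform motion):

```python
import math

C = 299_792_458.0                 # speed of light, m/s

def naive_aberration_deg(v):
    """Angle between the 'retarded position' X1 and the present position X2,
    as seen by a detector broadside to the charge's track, if the field
    really propagated at speed c (simple geometric estimate)."""
    return math.degrees(math.atan2(v, C))

for frac in (0.01, 0.5, 0.9):     # charge speed as a fraction of c
    v = frac * C
    print(f"v = {frac:.2f} c -> naive aberration = {naive_aberration_deg(v):6.2f} deg")
```

At accelerator speeds near c the naive displacement would be tens of degrees, which would be impossible to miss in accelerator design if it existed.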
The answer to that problem is simple. Electric fields don't propagate. They are "non-local" in a spatial reference system, much like the concept of time, which is not affected by spatial position.
The concept of "non-local" effects is hard to grasp for most people. So consider a few illustrations. Suppose I have a cloth doll and I stick a pin in it. The pin leaves a hole in the doll. We could say that the effect of my action was "local", that is, cause and effect are clearly related, and they are related spatially.
Now suppose I go to the local Voodoo-Dolls-R-Us store, and get a voodoo doll that has been "correlated" with some evil criminal in Haiti. I stick pins in the head of the doll, and the guy in Haiti instantly gets a headache. This is an example of "non-local" action. Cause and effect are (spatially) separated. I would have a hard time proving that the guy's headache is actually due to my actions with the doll.
Let's say I go back to the store and get their deluxe model, the Universal Voodoo Doll. I stick pins in its head, and all humans on Earth (including me) get a headache at that same instant. This is an even stronger version of "non-locality". The effect simply does not care about "where" or "there". The only "connection" the headache events share is the instant of time, which is the same for all victims.
Electric, magnetic, and gravitational fields act this way. They have "non-local" effects. It is as though they produce instantaneous "action-at-a-distance" without any intervening medium or "connection" in space. Physicists are uncomfortable with this concept. They get such a headache thinking about it, they even call it "voodoo physics" occasionally. They would much rather believe that the fields are propagated at the speed of light, despite the evidence to the contrary.
The source of these non-local effects is temporal motion. Instead of being the everyday "space divided by time" type of motion, it is just the inverse: "time divided by space". It is a motion in three-dimensional time instead of three-dimensional space. It does not have a spatial starting point, nor a spatial end point, nor a spatial trajectory connecting the two. It is inherently a "when" type of motion that does not know or care about "where". It is non-directional (knowing only "towards" or "away"), has instantaneous effects, and is unlimited in spatial extent.
Read The Origin of Intrinsic Spin to learn more about temporal motion. Read The Problem of Quantum Locality for more about the non-locality concept. The notes following the article on the Shapiro time delay (below) might also be helpful.
Special Relativity, A.P. French, 1968, Chapter 8, "Relativity and electricity", p. 242-243;267. All italics in the citations are from the book.
"Maxwell’s Objection to Lorenz’ Retarded Potentials" Kirk T. McDonald (2009, 2012)
I should add a note specifically about magnetic fields. Consider Faraday's law of induction in integral form: emf = ∮ E·dl = −dΦB/dt, where the flux ΦB = ∫ B·dA is taken over any surface bounded by the circuit.
Physicist Thomas E. Phipps, Jr. notes that this equation:
"defines "flux" as an integral. This implies that any circuit senses instantly via its emf —e.g., by a set of voltmeters placed everywhere around the perhaps infinitely spatially extended circuit —any change of a global (integral) property. (If not, please tell me which voltmeter measures the flux change first. And feel free to place yourself in any inertial system!) This can only betoken instant and simultaneous actions at-a-distance —supposedly forbidden by the very term of reference of field theory, not to mention SRT. . . . The bones of quantum mechanics move perceptibly beneath the skin of the Maxwell field —instant action-at-a-distance being an integral aspect of quantum theory. The only known exception in the entire range of physical experience to the rule of instant action is radiation (locally completed quantum processes) —the tail that hitherto has wagged the dog." (Old Physics for New, Thomas E. Phipps, Jr. (2006) p.15 ) http://www.angelfire.com/sc3/elmag/ http://www.angelfire.com/sc3/elmag/files/EM05FL.pdf
In other words, a change in a magnetic flux is "felt" instantaneously everywhere by a wire loop enclosing the flux, even if the loop is extremely large. There is no propagation delay.
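Phipps's point can be illustrated numerically: in the integral form of Faraday's law the emf of a loop follows the instantaneous rate of change of the total flux, with no propagation term anywhere in the equation. A minimal sketch for a circular loop in a spatially uniform field:

```python
import math

def emf(loop_radius_m, dB_dt):
    """Integral form of Faraday's law: emf = -d(flux)/dt. For a spatially
    uniform field the flux is just B * area, so the emf tracks dB/dt at the
    same instant, with no delay term, regardless of how large the loop is."""
    area = math.pi * loop_radius_m ** 2
    return -area * dB_dt

# the formalism assigns the emf at the same instant for any loop size
for r in (0.1, 10.0, 1000.0):
    print(f"loop radius {r:7.1f} m -> emf = {emf(r, 1e-3):.3e} V")
```

Whether a kilometre-scale loop really responds with zero delay is exactly the question at issue; the equation itself contains no retardation.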
"The Sherwin-Rawcliffe Experiment – Evidence for Instant Action-at-a-distance" , Thomas E. Phipps, Jr., Apeiron Vol. 16, No. 4, October 2009 ( http://www.dtic.mil/dtic/tr/fulltext/u2/625706.pdf ) http://redshift.vif.com/JournalFiles/V16NO4PDF/V16N4PHI.pdf
"Since the nineteenth century physical theorists have considered that electromagnetic mass must exhibit tensor properties if causal delays characterize the interactions of electric charges. In 1960 Chalmers W. Sherwin and Robert D. Rawcliffe enlisted the help of mentors of the A. O. Nier high-resolution mass spectrograph to test this hypothesis, using the predicted mass line-splitting of a football-shaped Lu175 nucleus of spin 7/2 (a highly asymmetrical charge distribution). No line-splitting was observed. This null result showed that mass behaves in just the way Newton thought, as a scalar, never as a tensor. What, then, went wrong with the theory? We argue that the basic assumption of retardation of distant action was at fault, and that the null result in fact provides strong inferential evidence of instant action-at-a-distance of a Coulomb field."
"In Memory: Chalmers W. Sherwin", Thomas E. Phipps (1998) http://www.worldnpa.org/pdf/abstracts/abstracts_1276.pdf
“While at Illinois he conceived and caused to be performed the Sherwin-Rawcliffe experiment (“Electromagnetic Mass & the Inertial Properties of Nuclei,” Report 1-92, March 14, 1960, Coordinated Science Laboratory, University of Illinois, Urbana, Illinois), an experiment establishing the lack of tensor properties of nuclear mass that I personally consider to rank in significance with Michelson-Morley, as one of the great, all-encompassing null results of our time. It is a commentary on the prevailing state of the scientific literature that this experiment was never reported in the regular journals.”
For those who are interested, A Student's Guide to Vectors and Tensors by Daniel Fleisch (2012) gives an excellent introductory treatment of tensors. It is the best single introductory book that I have read on this topic. Other outstanding works include "An Introduction to Tensors for Students of Physics and Engineering", Joseph C. Kolecki (Glenn Research Center, Cleveland, Ohio) (2002) http://www.grc.nasa.gov/WWW/k-12/Numbers/Math/documents/Tensors_TM2002211716 ; and Mathematical Tools for Physics, James Nearing (2010), "Tensors", chapter 12 (p. 327-359), ISBN-10: 0-486-48212-X. This book is a pleasure to read and is offered at a very reasonable price. The online version is also available at http://www.physics.miami.edu/~nearing/mathmethods/ .
Experimental Evidence on Non-Applicability of the Standard Retardation Condition to Bound Magnetic Fields and on New Generalized Biot-Savart Law, A.L. Kholmetskii, O.V. Missevitch, R. Smirnov-Rueda, R.I. Tzontchev, A.E. Chubykalo, I. Moreno (2006) https://arxiv.org/abs/physics/0601084
"Finally, we effected numerical calculations taking into account particular experimental settings and compared them with experimentally obtained data that unambiguously indicate on the non-applicability of the standard retardation condition to bound magnetic fields. In addition, experimental observations show a striking coincidence with the predictions of a new generalized Biot-Savart law which implies the spreading velocity of bound fields highly exceeding the velocity of light."
Instantaneous Actions at a Distance Defended , Wolfgang G. Gasser
"Electrostatic effects are instantaneous actions at a distance. There is a very simple experiment which can refute the whole scientific world view. This view is based on the validity of the equations of Maxwell and on the premise that all electromagnetic effects propagate at the speed of light.
It is quite possible that this experiment has been executed without publishing the results. It is not even necessary to carry it out. One must only read carefully the works of Heinrich Hertz, who was the first to prove the existence of electromagnetic transversal radiation. Hertz was an honest person and did not keep quiet about all the results which were in contradiction with his own beliefs (as unfortunately most scientists do).
Hertz clearly found by means of interference effects that electrostatic effects propagate at infinite speed." (a point-and-counterpoint discussion follows)
Charles Wheatstone: Velocity of electricity https://en.wikipedia.org/wiki/Charles_Wheatstone
"He achieved renown by a great experiment made in 1834 – the measurement of the velocity of electricity in a wire. He cut the wire at the middle, to form a gap which a spark might leap across, and connected its ends to the poles of a Leyden jar filled with electricity. Three sparks were thus produced, one at each end of the wire, and another at the middle. He mounted a tiny mirror on the works of a watch, so that it revolved at a high velocity, and observed the reflections of his three sparks in it. The points of the wire were so arranged that if the sparks were instantaneous, their reflections would appear in one straight line; but the middle one was seen to lag behind the others, because it was an instant later. The electricity had taken a certain time to travel from the ends of the wire to the middle. This time was found by measuring the amount of lag, and comparing it with the known velocity of the mirror. Having got the time, he had only to compare that with the length of half the wire, and he could find the velocity of electricity. His results gave a calculated velocity of 288,000 miles per second, i.e. faster than what we now know to be the speed of light (299,792.458 kilometres per second (186,000 mi/s)), but were nonetheless an interesting approximation."
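The geometry of Wheatstone's rotating-mirror method can be sketched with hypothetical numbers (his actual wire length, mirror speed, and measured lag are not given above; the values below are chosen merely to reproduce his reported 288,000 mi/s figure):

```python
def signal_velocity(half_wire_mi, mirror_rev_per_s, lag_deg):
    """Rotating-mirror estimate: a spark delayed by dt appears angularly
    displaced by 2*omega*dt, since reflection doubles the mirror's rotation."""
    omega_deg_per_s = 360.0 * mirror_rev_per_s
    dt = lag_deg / (2.0 * omega_deg_per_s)    # delay of the middle spark, seconds
    return half_wire_mi / dt                  # miles per second

# hypothetical figures chosen to land on Wheatstone's reported result
v = signal_velocity(half_wire_mi=0.125, mirror_rev_per_s=800, lag_deg=0.25)
print(f"inferred velocity: {v:,.0f} miles per second")   # 288,000 mi/s
```

The method's sensitivity comes entirely from the mirror speed: the faster it spins, the smaller the time interval a given angular lag represents.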
Variations in the speed of light
"On some days it seemed to travel faster than others, by as much as twelve miles a second. Its speed seemed to vary with the season and also in a mysterious shorter cycle lasting about two weeks. Finally the scientists ended by taking an average of all the readings, which has just been announced as 186,271 miles a second." (Monthly, March 1934, p. 25)
The measured speed of light in a vacuum is really the speed of the gravitational system, as I mentioned previously. I am still trying to sort out the implications of this and find supporting documentation. The claim seems to imply that the measured speed of light will experience variations due to changes in configuration of the local gravitational system (distance from the Sun, Moon, etc). It also implies that the Hubble constant and the Gravitational constant may not actually be constant.
In Search of the Geometry of Space, Time and Motion
"Fundamental constants are not constant—or maybe they are, we don't really know. Researchers use quasars to map the value of the fine structure constant", Chris Lee (Nov 30, 2011) http://arstechnica.com/science/2011/11/fundamental-constants-are-not-constant-or-maybe-they-are-we-dont-know-really/
"Speed of Light May Not Be Constant, Physicists Say", Jesse Emspak (2013) http://news.yahoo.com/speed-light-may-not-constant-phycisists-133539398.html
"Physicist suggests speed of light might be slower than thought", Bob Yirka, Jun 26, 2014, http://phys.org/news/2014-06-physicist-slower-thought.html#ajTabs
"Albert Einstein Wrong: Speed of Light Calculation May Be Wrong", http://www.inquisitr.com/1325574/albert-einstein-wrong-speed-of-light-calculation-may-be-wrong/
"Do physical constants fluctuate?", Rupert Sheldrake http://www.sheldrake.org/experiments/constants/
"Inconstant Speed Of Light May Debunk Einstein", Michael Christie (8-7-2002) http://www.rense.com/general28/erin.htm
"Variable speed of light" http://en.wikipedia.org/wiki/Variable_speed_of_light (theories)
"Speed of light not so constant after all Pulse structure can slow photons, even in a vacuum", Andrew Grant (January 17, 2015; February 21, 2015) http://www.sciencenews.org/article/speed-light-not-so-constant-after-all
"Physicists propose method to measure variations in the speed of light", Lisa Zyga (Apr 06, 2015) http://phys.org/news/2015-04-physicists-method-variations.html
Einstein's Lost Key: How We Overlooked the Best Idea of the 20th Century, Alexander Unzicker (2016)
I have also proposed that photons are actually stationary with respect to space and time. Although physicists in Academia are shocked when I suggest this, it is apparently not a new idea:
"There is no physical phenomenon whatever by which light may be detected apart from the phenomena of the source and the sink . . . Hence from the point of view of operations it is meaningless or trivial to ascribe physical reality to light in intermediate space, and light as a thing travelling must be recognized to be a pure invention." (The Logic of Modern Physics, P. W. Bridgman (1960) p. 153 )
"According to special relativity the photon is stationary in time and the inertial mass is stationary in space; . . . Since a photon is bereft of rest mass and it is stationary in time it cannot be a projectile and it cannot have a trajectory;" http://www.einsteinsmethod.com/Nonlocality.html (Einstein's Method: A Fresh Approach to Quantum Mechanics and Relativity by Paul A. Klevgard (2008) )
My own thought on Klevgard's comment is that the photon is stationary not only in time, but in space as well. The perceived "trajectory" possessed by any photon must therefore be a property of a gravitationally bound reference system, like the one we ordinarily use. But for this concept to work, gravitation must be a multidimensional (non-vectorial) motion. See Gravitational motion has multiple dimensions, 8-10-02 Note, and 11-9-03 Note. The photon speed then becomes a property of the gravitational reference system. (This could really mess up our current views on the Age of the Universe.)
Possibly also relevant are Miller's experiments which attempted to detect "ether drift" but may have instead detected variations in the speed of light that are dependent on a gravitational reference system:
"It is also notable that this was the second time Michelson's work had significantly detected an ether, though in the first instance of Michelson and Gale (1925) the apparatus could only measure light-speed variations along the rotational axis of the Earth. These papers by Michelson and also by Kennedy-Thorndike have conveniently been forgotten by modern physics, or misinterpreted as being totally negative in result, even though all were undertaken with far more precision, with a more tangible positive result, than the celebrated Michelson-Morley experiment of 1887. Michelson went to his grave convinced that light speed was inconstant in different directions, and also convinced of the existence of the ether. The modern versions of science history have rarely discussed these facts." ("Dayton Miller's Ether-Drift Experiments: A Fresh Look" by James DeMeo, Ph.D., http://www.orgonelab.org/miller.htm )
How to Construct a Sensitive Gravity Meter
The January 2000 edition of Scientific American has an interesting article about constructing a simple, inexpensive, but very sensitive gravity meter ("Detecting Extra Terrestrial Gravity", pages 94-96). More details, including photos, can be found at http://www.eden.com/~rcbaker/ (titled "The Hi-Q Gravimeter/Seismometer") Also, if you need some coaching in the construction details or application, you might refer to http://earth.thesphere.com/sas/WebX.cgi
Renewed Interest in the Eötvös Experiments
Baron Roland von Eötvös (Eötvös is pronounced somewhat like "ut vush"; rhymes with "brush") was a Hungarian scientist who ran a series of precision gravitational measurements from the late 1880s to 1922. He used a special torsion balance with two weights; one weight was mounted horizontally and the other vertically. In this scheme one weight would be affected by the acceleration of gravity, and the other by the "centrifugal force" caused by the rotation of the earth. In effect, this allowed him to compare gravitational and inertial mass and see, with great precision, if they were equivalent. He also tested weights of differing composition (copper, water, platinum) to determine if "mass" was in some way dependent on composition. His experiments were very carefully and patiently done. They were state of the art until the early 1960s.
You can review various diagrams of his torsion balance at:
Chapter XI. Gravity Measurements with the Eotvos Torsion Balance (at http://www.nap.edu/books/ARC000033/html/167.html)
The results of his experiments showed that gravitational and inertial mass were equivalent to within a few parts per billion and were independent of composition. Later researchers, using a somewhat different approach, improved the precision by a factor of about 1000 over what Eötvös had obtained. But in 1986 some new interest developed in these experiments:
"Ironically, a re-examination in 1986 of Eötvös' definitive paper of 1922 sparked a lively controversy when the examiners concluded that, contrary to the long-held interpretation, the data in the paper actually provided evidence for a composition dependence of the gravitational acceleration. The origin of this effect (the establishment of which is far from certain) has been attributed to an attractive 'fifth force', . . . that depends not just on total mass, but on certain properties of the 'heavy' elementary particles (the baryons) of which a mass is composed. . . . the baryon number per unit of mass is not necessarily the same for dissimilar materials, since the packing of the baryons can be different." (And Yet it Moves: Strange Systems and Subtle Questions in Physics, Mark P. Silverman, 1993, p. 190)
"neither the concept of baryon number, nor the mass defect existed at that time. Without these concepts, Eötvös could have spent considerable time and effort in a fruitless attempt to find out why the scatter in his data points was larger than his error estimates. We can easily sympathize and imagine the gnawing feeling that something was wrong, or that something very important was being missed." ( http://www.kfki.hu/(hu)/~tudtor/eotvos1/onehund.html)
Back then (1922) the concept of intrinsic spin had not been developed either. And as the reader may know from reading the Advanced Propulsion article above, understanding intrinsic spin is crucial to understanding gravitation. Thus:
"To date [early 1990s] these experiments have not confirmed the original suggestion of a fifth force, as inferred from the Eötvös data by Fischbach and co-workers . . . . However, neither has any group pinpointed an error in the Eötvös experiment which could be the source of their suggestive data. Since all of the recent experiments differ from the original Eötvös experiment in various ways, the possibility remains that there is some theoretical model in which a subtle aspect of the original experiment which we have heretofore overlooked could explain why those authors saw an effect while the more recent ones do not. The significance of the Eötvös experiment is that it will continue to be a stimulus for new ideas, such as the recent suggestion . . . that spin may have played a role in the original work. However the search for new gravity-like forces turns out, it is clear that the Eötvös experiment has played a fundamental role in shaping our understanding of gravity and other possible forces in Nature." (See http://www.kfki.hu/(hu)/~tudtor/eotvos1/onehund.html for the complete references)
See Some Thoughts about Intrinsic Spin and The Origin of Intrinsic Spin in the article Intuitive Concepts in Quantum Mechanics.
"Gyroscope's unexplained acceleration may be due to modified inertia", Lisa Zyga (July 26, 2011)
What the Neutron Interferometer Reveals about Gravitational and Inertial Mass
Another interesting hint that gravitational and inertial mass might not be equivalent comes from experiments with neutron interferometers using the Mach-Zehnder configuration.
First a bit of background about this type of interferometer. The Mach-Zehnder interferometer is one of several types of optical interferometers. Schematically it looks much like the illustration below, except of course it uses light instead of neutrons. Light comes in from the left and is split into a reference beam and a test section beam. The upper horizontal segment will have some sort of test apparatus inserted into it. It might be a simple tube (large or small, long or short) which has windows on each end. The test section is commonly used to study the flow of gases and is often a section of a wind tunnel, or a shock tube. The reference beam and test beam are recombined and form an interference pattern at the detector, which, in the case of an optical interferometer, could be a viewing screen or a photographic plate. Interferometers are very sensitive to minute changes in path length differences between the reference and test sections. The differences are caused by density variations in the gas due to flow patterns in the test section. What the observer will see is a series of fringes—a pattern of fuzzy dark lines that may look like curves or nested circles—that correspond to the flow contours of the gas.
For instance, this type of interferometer has been used to study the behavior of plasma in a tube. The tube is something like a common fluorescent light tube with clear windows at each end, and with a magnetic coil wound along the length. It is placed in the test section. The interferogram with the plasma and magnetic field off, is a series of parallel lines. When the plasma and magnetic field are turned on, the pattern of parallel lines then shows a series of fine, nested concentric rings embedded in it, which represent the "pinch" confinement of the plasma. (See Optics, Eugene Hecht, 2nd ed. 1987, p358-359).
For the case at hand, neutrons are used instead of light. Neutrons, like all particles, also have wave characteristics. The neutron wave function can be computed for an interferometer and used to predict the relative number of neutrons that will appear at the detector (a counter) for a specified circumstance. Neutrons have mass, and in this case we want to see how the presence of a gravitational field affects the neutron when it moves horizontally in the field. Classical physics predicts that it will not be affected. Quantum physics predicts that it will be, because the wave function has a potential energy term dependent on the height of the neutron in the field. The apparatus depicted schematically below compares the behavior of two neutrons following paths that have a height difference in the gravitational field.
When the experiment is actually done, the neutron intensity is found to vary periodically with the height of the upper horizontal section. This can be seen in the following diagram:
This result is relevant to studies of gravitation:
"The observation of this neutron interference phenomenon . . . demonstrates convincingly that the Earth's gravity can affect the motion of elementary particles under circumstances where it is not the gravitational force itself, but the difference in gravitational potential energy, that has direct physical significance. Interestingly, it illustrates as well that the equivalence principle [of gravitational and inertial mass] may be of questionable validity in the realm of quantum mechanics." (For a discussion of the particulars, see And Yet It Moves: Strange Systems and Subtle Questions in Physics, Mark P. Silverman, 1993, p. 195-198)
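The size of the effect can be estimated from the standard quantum-mechanical phase formula for this experiment (the Colella-Overhauser-Werner form). The interferometer area and tilt below are hypothetical; the 1.4 Angstrom wavelength is the thermal-neutron value quoted later in this article:

```python
# Standard QM estimate of the gravitationally induced phase shift in a
# neutron interferometer:
#   dphi = 2*pi * m^2 * g * A * lam * sin(alpha) / h^2
# where A is the area enclosed by the two beam paths and alpha the tilt
# of that area relative to horizontal.
import math

H = 6.626e-34        # Planck constant, J*s
M_N = 1.675e-27      # neutron mass, kg
G = 9.81             # gravitational acceleration, m/s^2

def cow_phase(area_m2, wavelength_m, tilt_rad):
    return 2 * math.pi * M_N**2 * G * area_m2 * wavelength_m * math.sin(tilt_rad) / H**2

# hypothetical 10 cm^2 enclosed area, fully tilted
phi = cow_phase(area_m2=1e-3, wavelength_m=1.4e-10, tilt_rad=math.pi / 2)
print(f"predicted phase shift: {phi:.0f} rad")
```

The predicted shift is tens of radians, i.e. several full fringes, which is why the oscillation with tilt angle is so readily observed.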
The effect is as though a gravitational field has a kind of "index of refraction for mass" in addition to manifesting a gravitational force. This effect might remind us of the interference effect that occurs when light is reflected from a pane of clear glass (see the third illustration in Counterintuitive Quantum Mysteries). As the glass is made thicker and thicker the reflectivity cycles from 0% to 16% then back to 0% then back to 16% and so on. Similarly, as the neutron interferometer is tilted about the axis of the incoming beam so as to change the height of the upper horizontal beam in the gravitational field, the number of neutrons detected by the counter cycles from a maximum to a minimum, then back to maximum, then to minimum, and so forth. It is as though the path length in the upper section were changing as the apparatus is rotated.
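The 0%-to-16% reflectivity cycle mentioned above follows from adding the two surface reflection amplitudes of the glass (a simplified model that treats each surface as reflecting ~4% in intensity and ignores multiple internal reflections):

```python
import math

R_SURFACE = 0.04                 # ~4% intensity reflectance per glass surface
r = math.sqrt(R_SURFACE)         # amplitude reflection coefficient, ~0.2

def reflectivity(phase_rad):
    """Two equal amplitudes r differing by a round-trip phase:
    R = |r + r*e^{i*phase}|^2, cycling between 0 and (2r)^2 = 16%."""
    amp = complex(r, 0) + r * complex(math.cos(phase_rad), math.sin(phase_rad))
    return abs(amp) ** 2

print(f"surfaces in phase:     {reflectivity(0.0):.2f}")      # 0.16 -> 16%
print(f"surfaces out of phase: {reflectivity(math.pi):.2f}")  # 0.00 -> 0%
```

Changing the glass thickness sweeps the round-trip phase, producing the same maximum-minimum cycling that the tilted neutron interferometer shows.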
This is pretty hard to understand if gravitation is viewed simply as a vectorial force. The force concept is mathematically convenient but it does not depict the real situation and can be conceptually misleading. On the other hand, if the space/time ratio interpretation of gravitation is used, then gravitation is seen as a coupled motion, and the motion is operative in all three linear dimensions simultaneously (see discussion of scalar motion). The neutron has mass and therefore participates in this motion. The height in the field will therefore affect the speed of the neutron's horizontal motion and this is equivalent to a change in path length that can in turn be detected by the interferometer. As the change in equivalent path length cycles through multiples of the wavelength, the number of neutrons counted goes through maxima and minima.
(There is a related mystery, incidentally, the perplexing Aharonov-Bohm effect. Its difficulties are likewise caused by misconceptions about what scalar and vector potentials really represent. Also, the photon redshift in a gravitational potential ("Gravitational Redshift") as shown by the Pound & Rebka experiment (which used a Mössbauer detector instead of an interferometer) is a scalar potential effect very similar to that described above for the neutron interferometer. See http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/gratim.html#c2 , http://www.rsc.org/membership/networking/interestgroups/mossbauerspect/intropart1.asp for an overview and http://www.quantum.univie.ac.at/research/thesis/gvdzdiss.pdf ("Gravitational and Aharonov-Bohm Phases in Neutron Interferometry", Gerbrand van der Zouw, PhD-Thesis, University of Vienna, 2000) for a specific article. See also The Gravitational Redshift article below.)
Normally we would not be concerned about this effect. With ordinary massive objects the effect cannot be seen because the wavelength is too small. The wavelength of the neutron in this experiment was 1.4 Angstroms (essentially that of a thermal neutron at 300 K). This is comparable to interatomic distances in a crystal lattice, which in turn makes such crystals usable for neutron mirrors. In contrast, a one-micron speck of dust with a mass of 10⁻¹⁵ kg and moving at a velocity of one mm/sec has a wavelength of 6.6 × 10⁻⁶ Angstroms. This is about a million times smaller than the interatomic distance. For something with the mass of a bullet, the effect would be utterly undetectable. Gravitational and inertial mass would therefore be equivalent "for all practical purposes." In other words, the trajectories of cannon balls would be independent of their mass.
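The wavelength comparison in this paragraph is easy to verify from the de Broglie relation λ = h/(mv) (the neutron speed below is an assumed thermal value chosen to match the quoted 1.4 Angstroms):

```python
# De Broglie wavelength comparison: thermal neutron vs. a one-micron dust speck.
H = 6.626e-34                     # Planck constant, J*s

def de_broglie_angstrom(mass_kg, speed_m_s):
    return H / (mass_kg * speed_m_s) * 1e10    # metres -> Angstroms

neutron = de_broglie_angstrom(1.675e-27, 2825.0)   # ~thermal neutron speed
dust = de_broglie_angstrom(1e-15, 1e-3)            # 10^-15 kg speck at 1 mm/s

print(f"thermal neutron: {neutron:.2f} Angstrom")  # ~1.40
print(f"dust speck:      {dust:.1e} Angstrom")     # ~6.6e-06
```

The dust speck's wavelength is indeed several orders of magnitude below interatomic spacing, which is why interference effects vanish for everyday objects.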
However, when "practical purposes" start to include finding the design principles for asymmetric gravitational propulsion systems, this effect becomes highly relevant. This kind of experiment needs to be repeated with neutrons and atoms (such as hydrogen and helium) that have been spin polarized. This may introduce some asymmetries and even get rid of the fringe shift under certain conditions.
Quantum Gravitational States http://www.aip.org/pnu/2002/573.html , http://physicsweb.org/articles/news/6/1/9
Spin Polarization of Atoms and Photons
The concept of atomic spin and spin polarization is a bit abstract for some of my readers. I intend to cover this topic more fully in an article on quantum mechanics at this site. For now, to get a better intuitive feel for this topic, the reader might explore some of the following links:
An article explaining how spin angular momentum can be efficiently transferred from photons to atoms can be found at:
Some practical applications of spin polarized atoms in the medical field can be found at:
"Hyperpolarized Helium" (J.R. MacFall, H.C. Charles, J. Smith) at http://camrd4.mc.duke.edu/camrdprojects (select Hyperpolarized Helium).
"Hyperpolarized helium technique joins doctors imaging arsenal" (By David Nigro in "The Chronicle online") http://www.chronicle.duke.edu/chronicle/09/01/03HyperpolarizedHelium.html
"A Novel Lung-Imaging Method Using Magnetic Resonance Imaging With Hyperpolarized Helium-3" http://spider.cso.uiuc.edu/cnrs/Cnrspresse/en359a2.htm
(An MRI image of a human lung that used optically polarized Helium-3) http://www.physics.princeton.edu/~benlev/atomic.html
"Head Full of Xenon?" http://www.bric.postech.ac.kr/science/97now/99_3now/990323c.html
"Tiny Bubbles Help Researchers See Inside of Blood Vessels" (by Karyn Hede George) http://www2.mc.duke.edu/news/inside/980914/6.html (click Cancel on the password dialog box)
As can be inferred from the above articles and applications, an atom can retain a particular spin polarization for a substantial amount of time. The "relaxation times" of spin polarized atoms are affected by the environment. "If the inside walls of the cell are suitably coated, collisions with the walls have little effect on the spin state of the atoms. . . . For example, for hydrogen atoms bouncing off teflon walls, tens of thousands of collisions are required for the magnetic moment of the hydrogen atom to become disoriented." (Quantum Mechanics, C.Cohen-Tannoudji, et al., 1977, p. 452)
The reader will note, of course, that none of these articles have anything to do with "antigravity."
Some Related Links about "Gravity Modification Experiments"
There are plenty of articles about "antigravity" on the Internet. A very few are listed below. Peruse them and their many links at your leisure:
http://xxx.lanl.gov/abs/physics/0108005 "Impulse Gravity Generator Based on Charged YBa2Cu3O7-y Superconductor with Composite Crystal Structure", Evgeny Podkletnov, Giovanni Modanese. See "Motion Cancellers" below for more details.
As explained above in the note about the Eötvös experiments, there is a possibility that gravitational mass might have a composition dependence. This may seem to go against everything you have been told about THE LAW OF GRAVITY (!). If you need help breaking out of the mental cage, consider a related phenomenon, magnetism. We know magnetic effects can be created by electric currents. But they can be created in other ways too. So-called "permanent magnets" do not need any electric current to create a powerful magnetic field. And some permanent magnet compositions, the Heusler alloys, do not even use ferromagnetic materials. One Heusler alloy has a composition of 65% copper, 25% manganese and 10% aluminum. (See http://www.newi.ac.uk/buckleyc/magnet.htm ) Would you have suspected such an alloy (chiefly copper) to be magnetic? There is also the Barnett effect, whereby a weak magnetic field can be produced by rotating an unmagnetized iron cylinder at high speed about its long axis. Would you have suspected that rotating something that is non-magnetic and non-electrical would produce a magnetic field (the phenomenon is known as "gyromagnetism")? So keep an open mind. Someday, "antigravitic" materials, schemes and phenomena may be just as common and ordinary as the magnetic ones are today. History shows that today's science fiction is tomorrow's technology. (See also "THE WALLACE INVENTIONS, SPIN ALIGNED NUCLEI, THE GRAVITOMAGNETIC FIELD, AND THE TAMPERE EXPERIMENT: IS THERE A CONNECTION?" by Robert Stirniman, http://www.rexresearch.com/wallace/wallaceinventions.pdf )
Links pertaining to technology that was ahead of its time:
http://www.tuc.nrao.edu/~demerson/bose/bose.html microwave experiments prior to 1900
http://en.wikipedia.org/wiki/Semmelweis Ignaz P. Semmelweis and childbed fever
http://rinkworks.com/said/predictions.shtml famous bad predictions
Gravitational Lensing and Deflection of Photons by Gravity
According to what has been presented in these pages, one would expect that photons would not be deflected by a gravitational field. Simply put, photons have no mass, and their path of travel should therefore not be affected by the presence of a massive body. Yet we see statements like the following in physics and astronomy textbooks:
". . . A beam of light will accelerate in a gravitational field in the same way as do more massive objects. For example, near the surface of the earth, light will fall with acceleration 9.8 m/sec2. This is difficult to observe because of the enormous speed of light. For example, in a distance of 3000 km, which takes about 0.01 sec to cover, a beam of light should fall about 0.5 mm. Einstein pointed out that the deflection of a light beam in a gravitational field might be observed when light from a distant star passes close to the sun . . . . Because of the brightness of the sun, such a star cannot be ordinarily be seen. Such a deflection was first observed in 1919 during an eclipse of the sun." (Modern Physics, Paul A. Tipler, 1978, p. 41)
According to this effect, light from stars will be bent slightly when moving past a dense, massive body. Light from the background stars that grazes the surface of the sun would be deflected 1.75 seconds of arc. For a white dwarf, the effect would be about 1 minute of arc. For a so-called neutron star, the effect would be 30 degrees. The observational effect would be much like looking at a field of black dots on a sheet of paper through a magnifying glass. Light from the dots is bent inward by the magnifying glass, but the effect to the observer is that the dots seem to move apart (become "magnified"). Of course, we have to ask the question: Is this predicted effect real, or is it just theoretical?
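Both of the numbers quoted above are easy to reproduce. A minimal sketch, using standard constants and the general-relativistic grazing-deflection formula θ = 4GM/(c²b), where b is the impact parameter of the light ray:

```python
# Sketch: reproduce the textbook figures quoted above.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

# Fall of a light beam near the earth's surface over 3000 km (Tipler's example)
t = 3.0e6 / c                      # travel time for 3000 km, about 0.01 s
drop = 0.5 * 9.8 * t**2            # parabolic fall during that time
print(f"drop over 3000 km: {drop*1000:.2f} mm")          # about 0.49 mm

# Deflection of a ray grazing the solar limb: theta = 4GM/(c^2 b), b = R_sun
theta_rad = 4 * G * M_sun / (c**2 * R_sun)
theta_arcsec = theta_rad * 206265  # radians to seconds of arc
print(f"grazing deflection: {theta_arcsec:.2f} arcsec")  # about 1.75 arcsec
```

The white dwarf and neutron star figures follow from the same formula by swapping in the appropriate mass and radius.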
The existence of such an effect for the sun was supposedly proven observationally by professor/astronomer Arthur Stanley Eddington during a total solar eclipse on May 29, 1919. The eclipse blotted out the Sun's disk (and the bright effects in earth's atmosphere) thereby allowing the positions of the "fixed stars" very near it to be recorded on photographic plates. These star positions could then be compared with the same star positions on other photographic plates taken at night during a different time of the year.
Einstein's prediction of deflection was 1.75 seconds of arc right at the edge (or "limb") of the Sun. Unfortunately, the stars that were actually observed were all outside of two solar radii from the center of the Sun, and the maximum predicted deflection for that location was 0.8 arc seconds. As any amateur astronomer knows, the "seeing" at night at ordinary locations is 2 or 3 arc seconds or worse, due to instabilities in the atmosphere. Hence, this experiment (performed during the day!) was done under less than minimum acceptable conditions, well into the noise level. Furthermore, one would want to measure stars distributed all around the Sun, but nature did not cooperate. Only a couple of the plates had "fairly good images of five stars, which were suitable for a determination". And these few, unfortunately, were all on one side of a line that could be drawn through the Sun's center. Other sources of error could have been significant too. The lensing effect, as seen by a telescope with a 343 cm focal length, would amount to a change in star position of only 0.01 mm on the photographic plate. Distortions of the optical system because of temperature changes during the eclipse could be another source of error. A change in focal length of only 0.1 mm could produce scaling errors between the eclipse plates and the reference plates of about the same order of magnitude as that predicted by Einstein. The observations were also done in the field, not at a regular observatory. Later experiments showed even wider scatter of data, differing from Einstein's prediction by as much as 60 percent.
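The plate-scale arithmetic behind the 0.01 mm figure can be sketched directly (linear shift on the plate = focal length × angle in radians):

```python
# Sketch: image displacement on the eclipse photographic plate.
f = 3.43                  # telescope focal length, m (343 cm, from the text)
theta = 0.8 / 206265      # 0.8 arcsec maximum predicted deflection, in radians

shift = f * theta         # linear displacement of the star image on the plate
print(f"image shift on plate: {shift*1000:.3f} mm")   # about 0.013 mm
```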
Hence, I am not convinced that these observations were "proof" of gravitational lensing. The experiment required very delicate measurements, which had large margins of error and uncertainty inherent in them. (For more technical details, see Infinite Energy, Vol. 7, Issue 38, 2001, p. 19, "Anomalies in the History of Relativity", Ian McCausland, reprinted from Journal of Scientific Exploration, Vol. 13, No. 2, Summer 1999; also "The Eclipse that Revealed the Universe", Dennis Overbye (July 2017), https://www.nytimes.com/2017/07/31/science/eclipse-einstein-general-relativity.html )
Other proofs of gravitational lensing involve, not star deflections, but duplications of quasar images:
Another example of the influence of curved spacetime is a gravitational lens. Very distant galaxies, called quasars, sometimes lie almost directly behind a massive galaxy. The result . . . is that we see what appear to be two identical quasars instead of just one. (Understanding the Universe, Phillip Flower, 1990, p. 591)
I cannot accept this as a proof either. It presumes the existence of an unobservable massive galaxy. Also, quasars, as clearly shown by their redshifts, involve motion that is greater than the speed of light. According to what I have presented in this and other articles, such speeds are temporal. That means that quasars have, at a minimum, one dimension of motion in time, with the other two dimensions remaining in space. How this kind of phenomenon maps into a spatial reference system is not well understood. Apparently, the missing dimension can make the phenomenon look as though space were split in two, much as if we were seeing the object along with its mirror image. In fact, images of other energetic astronomical objects show a mirror effect much like that claimed for some quasars. Some examples:
Hourglass nebula: http://www.seds.org/hst/Hourgls.html , http://oposite.stsci.edu/pubinfo/jpeg/Hourgls.jpg
Eta carinae: http://www.seds.org/hst/96-23a.html , http://www.seds.org/hst/WFPCEtaCar.html .
NGC 7009 Saturn Nebula http://antwrp.gsfc.nasa.gov/apod/ap971230.html
NGC 7027 http://hubblesite.org/newscenter/archive/1996/05/
NGC 6826 http://imgsrc.hubblesite.org/hu/db/1997/38/images/d/formats/full_jpg.jpg
CRL 2688 Egg Nebula http://hubblesite.org/newscenter/archive/1996/03/
M2-9 Wings of a Butterfly Nebula http://antwrp.gsfc.nasa.gov/apod/ap020106.html
NGC 6302 Bug Nebula
NGC 5307: A Symmetric Planetary Nebula http://antwrp.gsfc.nasa.gov/apod/ap971231.html
quasar HE 1104-1805 http://upload.wikimedia.org/wikipedia/commons/thumb/0/0c/Quasar_HE_1104-1805.jpg/220px-Quasar_HE_1104-1805.jpg
(three of these appear to be the hourglass type seen "top down")
http://en.wikipedia.org/wiki/Red_Rectangle_Nebula , http://en.wikipedia.org/wiki/Red_Square_Nebula , http://www.newscientist.com/article/dn11577-red-square-nebula-displays-exquisite-symmetry/
Hence, the apparent duplication of some quasar images may be due to an effect that is completely different from that causing the supposed deflection of star light by gravitation.
All is not lost however:
"Eclipse observations to test the relativity effect have continued over the years, but the measures are very difficult to make and the precision of the confirmation is not high. Far higher accuracy has been obtained recently at radio wavelengths. Simultaneous observations of the same source with two radio telescopes far apart can pinpoint the direction of the source very precisely. The United States National Radio Astronomy Observatory at Greenbank, West Virginia, with radio telescopes 35 km apart, observed several remote astronomical radio sources . . . when the sun was nearly in front of them. The apparent directions of the quasars showed shifts similar to those of stars seen near the sun. The accuracy of these observations is high enough to confirm the Einstein prediction to within 1 percent." (Exploration of the Universe, G. O. Abel, D. Morrison, S. C. Wolff, 1987, 5th edition, p. 584)
"Recent improvements in very long baseline interferometry (VLBI) have made it necessary to take the deflection of light into account over the entire celestial sphere. For a source at 900 from the Sun, for instance, the deflection is only a milliarcsecond, but it is still detectable." ( The New Physics, Paul Davies (editor), 1989, p.13)
This kind of experiment appears to have an acceptable design. It is like the one performed by Eddington, except that it uses radio telescopes and quasi-stellar ("starlike") radio sources. The radio telescopes are in effect the photographic plate, and the "plate" has to be large because radio wavelengths are much longer than optical wavelengths. The method is also capable of high precision: the interferometric techniques used can detect angular separations, and changes thereof, as small as a few hundred microarcseconds. Hence, I accept the claim as factual, and conclude that starlight is in fact deflected as it passes through a gravitational field. (See The New Physics, Paul Davies (editor), 1989, p.13)
But according to what I have presented at this website, photons do not gravitate and space is not curved. So how can the path of photons be bent as they pass near the Sun?
I think the answer is simple. It has to do with gravity being a three-dimensional scalar motion (having a magnitude, but no direction except that it is simply "towards" all locations in space, whether occupied or not). The star light is simply not deflected by the gravitational field of the Sun, nor is the space around the Sun curved. The Sun is a gravitating object and therefore possesses this type of scalar motion. In the context of the reference system, however, which seems to insist on assigning vectorial directions to motions that are inherently directionless, we do not see the Sun as moving "towards" the star positions. Rather, we invert the picture and claim that the Sun is stationary, and that the starlight is being deflected inward "towards" the Sun. This certainly makes sense in the context of everyday experience, but the alternative interpretation is still consistent with what I have presented here, and will produce the same observational facts.
The effect is as though we had drawn a bunch of dots on the surface of a balloon, then taken one big dot as a stationary reference, and then deflated the balloon. The fabric of the balloon represents space. The dots occupy a fixed position on the surface of the balloon and do not move relative to it, just like photons occupy a fixed position in space and move with it rather than through it. We will say that the big dot (the Sun) is really a little piece of paper, and that as the balloon contracts, the fabric of the balloon is actually in motion underneath it. In other words, the Sun is what is actually moving relative to space. Yet it is very easy to view it as stationary, and to attribute the motion that it actually has to the star positions, which in fact have no motion.
It is quite natural, incidentally, for physicists and astronomers to talk about curved space in this situation. When I was a kid, I went to a school that had a miniature merry-go-round. We kids would sometimes play "catch" on this rotating merry-go-round by throwing a ball straight across the center to another kid. To an observer on the ground, the ball traveled a straight path once it left our hands. But to us kids on a rotating platform the ball's path was strongly curved, and was very difficult to catch. The same effect could be produced by a kid on the stationary ground throwing a ball to a kid on the merry-go-round. We understood these effects because the mechanics of the situation could be clearly seen. But if we did not know the merry-go-round was rotating, we would have had to invent some other explanation. It probably would have been something like "Space becomes curved in the vicinity of merry-go-rounds".
In other cases physicists introduce unrealities as a matter of convenience. Suppose a missile is launched towards New York from the North Pole. As the missile travels, the earth rotates underneath, until, we shall say, Chicago has moved into position underneath the missile instead of New York. To an observer on the ground, the missile has taken a curved path. Curved paths are normally caused by forces acting perpendicular to the line of travel. If we wish to preserve the illusion that the earth is stationary, we can introduce a "fake force" (the Coriolis force) into the calculations to make the calculated path and the actual path coincide. Such calculations are very important in figuring the paths of artillery shells in flight. (The British found this out the hard way once, when they calibrated their tables for Coriolis effects in the northern hemisphere, and then fought a war in the southern hemisphere, where the effect is just the opposite. The shells initially missed their intended targets by a wide margin.)
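The size of the Coriolis deflection can be sketched with the flat-earth approximation d ≈ Ωvt²·sin(latitude); the shell speed and flight time below are illustrative assumptions, not figures from any historical engagement:

```python
import math

# Illustrative sketch (numbers assumed, not from the text): horizontal
# Coriolis drift of a long-range shell, flat-earth approximation
#   d = omega * v * t^2 * sin(latitude)
omega = 7.292e-5          # earth's rotation rate, rad/s
v = 800.0                 # average shell speed, m/s (assumed)
t = 40.0                  # time of flight, s (assumed)

drifts = []
for lat in (50.0, -50.0): # northern vs. southern hemisphere
    drift = omega * v * t**2 * math.sin(math.radians(lat))
    drifts.append(drift)
    side = "right" if drift > 0 else "left"   # sign convention assumed here
    print(f"latitude {lat:+.0f} deg: drift of about {abs(drift):.0f} m to the {side}")
```

The drift is tens of meters either way, with the sign reversing across the equator, which is the point of the artillery anecdote.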
You can see another common example of reference system effects by watching the moon rise (day or night). In the east it may initially appear like an upside-down bowl. As it rises in the sky, it appears to "stand up" on edge. As it sets in the west, it becomes rightside-up like a normal bowl. You might conclude that the moon rotates as it travels across the sky (making half a turn in twelve hours). But this is only an appearance. An observer at the North Pole would not see this behavior (or at least not so obviously).
These examples all involve rotations, and have no direct connection to gravitational effects like the bending of light. They are intended only to illustrate how it is possible for one motion to couple with another motion —often an unnoticed motion— and give the appearance of motion of another sort, or even no motion at all. Such "reference system effects" are often very subtle and can cause a lot of confusion when we are trying to investigate fundamental phenomena like the behavior of light and gravity.
I think this section, incidentally, is an example of how college textbooks are often vague and sloppy with their facts, but with a lot of digging through several of them, you can usually resolve the discrepancies and distinguish between fact and theory-presented-as-fact. The texts often have good, if very general, information and the math is very useful too. But the conceptual interpretations are often badly flawed and considerable effort and thought are required to sort things out.
The Gravitational Redshift and the Principle of Equivalence
The gravitational redshift is an effect predicted by Einstein's Principle of Equivalence (1907) which was later incorporated into General Relativity (1916). The Principle could be stated as:
A homogeneous gravitational field is completely equivalent to a uniformly accelerated reference frame.
What that means is customarily illustrated with the "Einstein elevator". It is a "thought experiment" that uses an ordinary elevator and a beam of light shining from a side wall to show the consequences of the Principle. There are four cases:
Case #1: The elevator is at rest on the earth. The horizontal light beam coming out of the wall is seen by an observer in the elevator to bend downward. The light falls in the gravitational field on a parabolic path exactly like a ball thrown horizontally, or a stream of water from a garden hose. (We call this a "thought experiment" because the deflection of the light beam is actually far too small to be seen in the elevator. But it is predicted by the Principle of Equivalence.)
Case #2: We move the elevator to remote outer space, far away from any massive body. A small rocket engine on the bottom of the elevator accelerates it "upward" at 9.8 m/sec2 (equivalent to the gravitational acceleration on earth). We find that the light beam deflects downward and that its behavior, to an observer inside the elevator, is indistinguishable from case #1.
Case #3: While we are in remote outer space, we shut off the rocket engine. We now find that the light beam goes straight across the elevator without being deflected.
Case #4: We come back to earth and put the elevator in orbit around the earth. The path of the elevator is curved as it falls around the earth just like the Space Shuttle or a satellite. The curved path causes an acceleration which exactly balances out the effect of the gravitational field of the earth. The path of the light beam is again straight.
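The reason case #1 is only a "thought experiment" is easy to quantify. A minimal sketch, assuming an elevator a few meters wide:

```python
# Sketch: predicted sag of the light beam across an ordinary elevator.
c = 2.998e8           # speed of light, m/s
g = 9.8               # gravitational acceleration, m/s^2
w = 3.0               # elevator width, m (assumed)

t = w / c             # time for the beam to cross, about 1e-8 s
drop = 0.5 * g * t**2 # parabolic fall during the crossing
print(f"beam drop across the elevator: {drop:.1e} m")   # about 5e-16 m
```

The predicted drop is far smaller than an atom, which is why the deflection cannot actually be seen in the elevator.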
The idea that gravity could deflect a light beam, incidentally, is not a recent development. Newton predicted such an effect with his particle model of light, but the effect predicted by Einstein was twice as great. Einstein's version proved to be correct.
So gravity is equivalent to an accelerated reference frame. This insight is fortunate and helpful. From the standpoint of conventional physics the nature of gravity is mysterious and non-intuitive. If we set up an experiment and try to predict how it will be affected by a gravitational field, we may have difficulty visualizing the outcome. But if so, we can just put the experiment into an accelerated box. Understanding motion is much easier than understanding the actions of mysterious forces. (See also: Why is gravitation an accelerated motion? What powers gravity? )
In this "motional" interpretation of Relativity, the photons are not attracted downward to the earth. Rather, the earth is accelerating upwards into the photon path (case #1). It is exactly like the elevator accelerating upwards into the photon path (case #2). Because our reference frame is attached to the earth or the inside of the elevator, the photon's path appears to curve.
Relativity treats light as a form of energy that can be attracted by gravity, and so another trick is possible with light. We could place a light source on the floor of the elevator (case #1) and shine it upwards. If light moves upwards "against gravity" it will lose a very small amount of energy and become redshifted. Or we could place the light source on the ceiling and the detector on the floor. In this case light falls in the gravitational field, and it will gain energy and become slightly blueshifted.
This is again indistinguishable from what would happen within the Einstein elevator which is accelerating upwards (case #2). If the light is on the floor, the detector on the ceiling is accelerating away from the photon. It therefore sees the photon as redshifted. If the light is on the ceiling, the detector on the floor is accelerating towards the photon. It therefore sees the photon as blueshifted. If the elevator is in free fall around the earth (case #4) there will be no redshift or blueshift because the gravitational acceleration is balanced out by an accelerated motion.
In the "motional" interpretation of Relativity, gravity actually is accelerated motion, not merely equivalent to it. The redshift/blueshift that is caused by the accelerated motion of the elevator is the same type of phenomena caused by an accelerating earth. The only difference is dimensional: the elevator accelerates in one dimension; the earth accelerates outwards in three dimensions simultaneously (scalar motion—motion that has only a magnitude and no inherent vectorial direction). Both the elevator and the earth qualify as an "accelerated reference frame" and yet both appear to be stationary when viewed from within each system. In the actual situation, the photons are stationary and the earth, or elevator, accelerates into them (gravitational motion). The "towards" motion applies regardless of whether the earth is moving across the photon path (giving the appearance of a bent path) or whether it moves parallel or directly into it (colliding with it, so to speak, at the speed of light).
The experiments by Pound, Rebka, and Snider at the Jefferson Physical Laboratory at Harvard circa 1960 verified the existence of the gravitational redshift/blueshift effect to within one percent of the theoretical value. Those fascinating experiments were done with an extremely high resolution energy spectrometer that utilized the Mössbauer effect in iron 57. Corrections for relativistic effects are also built into the Global Positioning System (GPS). For additional details, visit some of the following links:
Mössbauer spectroscopy, http://en.wikipedia.org/wiki/M%C3%B6ssbauer_spectroscopy
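The one-percent agreement mentioned above can be put in perspective with the first-order formula for the shift, Δν/ν = gh/c², which holds equally for the accelerating elevator. The 22.5 m tower height is the commonly cited Harvard figure, not taken from the text:

```python
# Sketch: size of the fractional frequency shift in the Pound-Rebka geometry.
g = 9.8               # gravitational acceleration, m/s^2
h = 22.5              # height of the tower shaft, m (commonly cited figure)
c = 2.998e8           # speed of light, m/s

shift = g * h / c**2  # first-order gravitational redshift, delta_nu / nu
print(f"fractional frequency shift: {shift:.2e}")   # about 2.5e-15
```

A shift of a few parts in 10^15 is why the experiment needed the extraordinary energy resolution of the Mössbauer effect.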
For more insights on multi-dimensional motion, see the various discussions that are scattered around at this website such as those in:
Some Thoughts on Intrinsic Spin
Energy from Massless Particles?
In conclusion it can be seen that the gravitational deflection of starlight ("lensing"), the gravitational redshift/blueshift, the instantaneous "action-at-a-distance" and "inverse square force" characteristics of gravitation, and the Shapiro time delay (see below) all have a common origin. They can all be understood in a simple, intuitively satisfying way only if gravitation is treated as an intrinsic motion, not as a force or warps in space.
The Shapiro Time Delay
In the 1960s Irwin I. Shapiro predicted that there would be a time delay introduced into the round trip time of radar signals as they reflected off a planet passing behind a massive body like the Sun. The delay would be caused by the warpage of space due to the presence of the Sun's mass. (Shapiro, Irwin I., 1964, Physical Review Letters 13, 789; Shapiro, Irwin I., et al., 1971, Physical Review Letters 26, 1132.) This was another good test of General Relativity, and the effect does indeed appear to be factual:
"In the two decades following Shapiro's discovery of this effect, several high-precision measurements have been made using radar-ranging techniques that evolved from the Venus echo work of 1959-60. Three types of targets were employed: planets such as Mercury and Venus, used as passive reflectors of the radar signals; spacecraft such as Mariners 6 and 7, used as active retransmitters of the signals; and combinations of planets and spacecraft, known as 'anchored spacecraft', such as the Mariner 9 Mars orbiter and the 1976 Viking Mars landers and orbiters. The Viking experiments produced dramatic improvements in the determination of the time delay, because anchoring the spacecraft reduced errors due to random fluctuations in their orbits (planets are very imperturbable), and because noise introduced into the tracking signal by the rough planetary topography and poor planetary reflectivity is removed by the use of transponding spacecraft." (The New Physics, Paul Davies, ed., 1989, p.14)
See also "Delay of Light in a Gravitational Field" http://www.whfreeman.com/modphysics/PDF/2-2bw.pdf and others: http://www.geocities.com/newastronomy/animate.htm , http://www.si.edu/opa/researchreports/9892/saoside.htm , http://renshaw.teleinc.com/papers/timedela/timedela.stm
The maximum delay, about 200 microseconds, is the radar-distance equivalent of about 40 miles (round trip). So this is like saying that the spacecraft, with a planet attached to it, jumped 20 miles out of its normal orbit as it passed behind the Sun. The observations are "explained" by claiming that the Sun's mass causes a warp in space, and consequently the path of a radar beam passing near the Sun has to go through space that is stretched out, and this causes the additional time delay.
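For context, the standard round-trip formula for the delay, Δt = (4GM/c³)·ln(4·r_E·r_P/b²), lands in the same ballpark as the 200-microsecond figure. The Earth and Mars orbital radii below are assumed round numbers, not values from the text:

```python
import math

# Sketch: round-trip Shapiro delay for a radar ray grazing the Sun,
# dt = (4GM/c^3) * ln(4 * r_E * r_P / b^2), with b = R_sun.
G, c = 6.674e-11, 2.998e8
M_sun, R_sun = 1.989e30, 6.957e8
r_earth, r_mars = 1.496e11, 2.28e11    # orbital radii, m (assumed)

prefac = 4 * G * M_sun / c**3          # about 20 microseconds
dt = prefac * math.log(4 * r_earth * r_mars / R_sun**2)
print(f"round-trip delay: {dt*1e6:.0f} microseconds")   # roughly 250

# Equivalent extra radar path, the "tens of miles" scale quoted above:
print(f"extra path: {dt*c/1609:.0f} miles round trip")
```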
You have probably seen the illustrations of this effect. They show a rubber sheet stretched out across a hoop (like the top edge of a garbage can). Straight lines are then drawn on the sheet and some lines pass near the center of the sheet, and others are closer to the edge. A weight is then placed in the center of the sheet. The sheet deforms downward, with the greatest deformation being at the center. The lines are still at their same positions on the sheet, but the ones near the center are stretched out longer than the ones near the edges. The time delay for a radar beam is thus due to a change in the geometry of space itself, not to fluctuations in the orbital path, and is greatest for signal paths grazing the Sun.
Unfortunately, no one has given a conceptual explanation of how the mass grabs hold of the fabric of space and warps it, and so the "explanation" is not very satisfying. It is like explaining a mystery with an enigma. (I used to be amazed that people actually regarded this as an explanation.)
I would like to offer an alternative explanation. Consider the following illustration:
This situation is quite a bit like that with the Einstein elevator. In the elevator (remember) the path of the light beam is actually straight, but the acceleration of the elevator and the observer within it, causes the path to appear curved in exactly the same way a stream of water or a ball thrown horizontally appears to curve downward here on earth (except of course, that light travels very much faster than a stream of water and its path cannot really be seen to curve). If the elevator were accelerating in the opposite direction, the curvature would likewise be in the opposite direction.
In the above illustration, the ball is constrained by a straight metal track and is analogous to the light beam. The paper is what we think is flat, stationary space-time. The motion of the paper is analogous to the accelerated gravitational motion near the Sun. However, if we are residing on the paper like tiny bugs, we have the same motion that the paper has, and do not realize that the paper gets yanked up and down (we feel an acceleration, but we remain "stationary" at the same spot on the paper). We bugs know that the ball is constrained to follow a straight path, but it actually traces out a curved (parabolic) path. We realize that this could be caused by a warp in the fabric of the paper, or it could be caused by motion of the paper, neither of which is observable to us bugs.
So how do we choose between the two alternatives? Equations like E=mc2 suggest that, if the equation is to be dimensionally consistent, then m must be some kind of space/time ratio, just like the speed of light (the c term) is a space/time ratio. If this is the case, then mass must be a form of what we call motion or speed. Moreover, Einstein's Principle of Equivalence states that gravitation is equivalent to an accelerated reference frame. We could take this one step further and say that gravitation is accelerated motion, not just equivalent to it. The premise of Scriptural Physics also requires an understandable, plainly evident universe (no inherent mysteries). Motion is much easier to understand and more plainly evident than invisible warps in space. Hence, the "motional" interpretation of gravitation seems to be the best one. ( See also: Why is gravitation an accelerated motion? )
There are some superficial difficulties, however, and we must educate our intuition a little bit. Consider this problem: A man jumps upwards from the earth. According to the "motional" interpretation of gravity, the man is floating momentarily in free space, but the earth has motion and rushes out to collide with him, accelerating him thereafter so that he has the sensation of weight. Meanwhile, another man on the opposite side of the planet does the same thing, and experiences the same result. How can the earth be moving outward to meet both men? How can the earth be moving simultaneously in diametrically opposite directions? This must be an unusual kind of motion!
Actually, scientists have the same sort of problem. To explain the expansion of the universe they use an analogy of an explosion (the "Big Bang"). The explosion blows everything apart in a directionless fashion. The motion is simply "away" from the original location. You have probably also heard the analogy of the expanding balloon. Points on the balloon's surface move away from each other as the balloon is inflated. This is another kind of directionless expansion.
Scientists also distinguish between "force vectors" and "force fields". A force pushing a rocket is in the "force vector" category. But the force around a charged particle is in the "force field" category. Forces are apparent in both situations, but the latter has a kind of "doesn't care" attitude about direction. Its essential "direction" seems to be only "towards" or "away".
The motional interpretation of gravity requires a similar kind of "directionless motion". Mathematicians would call it "scalar motion" instead of "vectorial motion." It is either "towards" or "away" (from everything) and has no property but a signed magnitude. Note that this is simply a description. It is not a theory or an explanation about what causes this type of motion (see spin). Instead of describing the situation with the term "force field", we just use the term "scalar motion". Again, motion is much easier to understand. The "force field" concept requires action-at-distance, and that is an idea that makes scientists uncomfortable.
Because scalar motion is towards or away from everything, it is necessarily a multidimensional motion in the context of the usual reference system. Instead of using the balloon analogy, let's use a picture on a TV screen. As the camera zooms in on a scene, the points on the picture move outward and away from each other. The picture enlarges or expands. The expansion takes place in both the horizontal and vertical dimensions of the picture. Yet this is just one motion, not two. It is one motion of the two-dimensional type.
Another analogy uses Microsoft windows on a computer display. Let's say you want to expand a window. You put the cursor on an edge and then do a click-and-drag. This expands the window in one dimension. You can also click-and-drag the other edge, and expand the other dimension. Note that these are two separate applications of one-dimensional motion. But there is an even simpler way to expand a window. Do a click-and-drag on a corner. This is one application of a two-dimensional motion. Conceptually, you could generalize this even further. If you could click-and-drag on the corner of a cube, you would have one motion of the three-dimensional type.
This multidimensional motion is exactly what we need for gravitation. Are you intuitively more comfortable with it now? Or when you watched the picture on the TV, did you find yourself thinking "The camera is warping the space on my TV screen"? Or maybe "The camera is exerting a force field on the picture"? Hopefully, your mind simply said "The camera is in motion and that explains what I am seeing." Actually, your brain does the same sort of image processing as you walk down the street or drive a car. Your visual system has a built-in "scalar motion processor", and you cannot get more intuitive than that!
It is this gravitational motion of the Sun then, not warps in space, that introduces the equivalent of "more space" and thus the Shapiro time delay.
8-10-02 Note: The view that gravitation is one multidimensional motion requires that its "propagation velocity" be instantaneous. Because it is one motion, like the moving points on the TV picture, there is nothing that is propagated, and the action between all points is necessarily instantaneous. See The Speed of Gravity above. Note that this "action-at-a-distance" has a different character than the non-local action of the EPR paradox. In that situation, if two photons originate in the same event, their Schrödinger waves become "phase entangled", and even though they separate spatially, it can be demonstrated experimentally that they are still connected somehow, and that an action on one has instantaneous effects on the other, regardless of the spatial separation. (See The Problem of Quantum Locality ). In this case there are two objects (photons) but there are also two kinds of location, a three-dimensional spatial location and a three-dimensional temporal location. The latter is "non-local" to the spatial system and is responsible for the appearance of instantaneous action-at-a-distance. In the case of gravitation, there are also two objects (say, the Earth and Moon), but only the spatial motion is considered. They are connected by one multidimensional gravitational motion. There is nothing that is propagated, and so actual measurements of the "speed of gravity" give speeds that are so far in excess of the speed of light that only a lower limit on the speed can be given.
11-9-03 Note: You may suspect multidimensional motion is involved somehow when physicists use words like "fields", "potentials", and the "Aharonov-Bohm effect" to describe the phenomena:
"It is possible to interpret the Aharonov-Bohm effect without supposing that the potentials are real by letting the electromagnetic interaction be nonlocal—that is, by permitting action at a distance. Although physicists have traditionally resisted nonlocal theories, it turns out that nonlocal effects may be built into the quantum-mechanical description of nature. There are experiments for which the most natural explanation seems to require that an action at one location produce an instantaneous result at a distant location. This phenomenon is a subtle one in which the principle that signals cannot travel faster than light is not violated . . . and it is surprising and poorly understood. It is a different kind of nonlocality from that suggested by the Aharonov-Bohm effect, but each situation hints that the quantum-mechanical universe, in some strange unexpected way, may be a nonlocal one.
The Aharonov-Bohm effect is a rich phenomenon with numerous implications. As only one example, it suggests that in quantum mechanics the concept of force is no longer useful. The equations of quantum mechanics never involve forces, only potentials. . . . the effect does seem to demonstrate that potentials are more fundamental than forces in the microscopic world." (Classical & Modern Physics, F.J. Keller, W. E. Gettys, M.J. Skove (1993), pp. 915-917.) See also The Problem of Quantum Locality.
6-21-07 Note: Physicist Mark P. Silverman explains a bit about the Aharonov-Bohm effect in a book review (http://www.trincoll.edu/~silverma/reviews_commentary/neutron_interferometry.html):
The Aharonov-Bohm (AB) effect is a quantum interference effect that depends on spatial topology and can be manifested only by particles endowed with electric charge. A split electron beam, for example, made to pass in field-free space around (and not through) a region of space within which is a confined magnetic flux, will, upon recombination, exhibit a flux-dependent pattern of fringes. Thus, by a judicious adjustment of the magnetic flux, one can produce an interference minimum in the forward direction, even though the optical path length difference of the two beam components is null. The electrons do not experience a magnetic field locally, and therefore are not acted upon by a classical Lorentz force.
There is also the Aharonov-Casher effect:
As neutral particles, neutrons do not exhibit what is traditionally regarded as the AB effect. However, neutrons have a magnetic moment and give rise to a companion topological phenomenon known as the Aharonov-Casher (AC) effect. In the latter, a split neutron beam is made to pass around a region of space within which is a confined electric charge and, upon recombination, gives rise to a charge-dependent interference pattern. The experimental confirmation of this effect, which may be interpreted as an example of spin-orbit coupling, was performed at the University of Missouri Research Reactor in 1991.
And the Colella-Overhauser-Werner effect:
In their book, the authors describe the so-called COW experiments (for Colella-Overhauser-Werner) in which a beam of neutrons, coherently split into two components moving parallel, but displaced vertically from one another, are recombined to yield an interference pattern that depends on the gravitational potential difference of the two beams.
For my thoughts on the latter see "What the Neutron Interferometer Reveals about Gravitational and Inertial Mass" above. See also The Shapiro Time Delay and Feynman's disk paradox.
I believe that these experiments show how a multidimensional motion manifests itself when interacting with another multidimensional motion of different dimensions. Gravitational motion inherently has three motional dimensions, only one of which can be manifested in our reference system, which uses three dimensions of spatial displacement and one dimension of time progression displacement. Therefore, one dimension of the gravitational motion acts "Newtonian" or as a "force" or as "gravitational potential energy". Moreover, the motional dimensions are inverted relative to our common reference system (t/s instead of s/t). This inversion makes the motion non-local, non-directional, and with no spatial trajectory. We use the words "potentials" and "fields" to describe the tendency for this type of motion. The reference system can likewise depict one Newtonian potential, but the other two are not normally manifest. Their presence, however, can be revealed by clever experimental techniques like those used in the AB, AC, and COW experiments.
All this implies that we can expect yet another mystery to appear on the physics scene someday: a reactionless force generator. By using field technology, the generator would create a beam of mechanical force, but the generator would not experience an "equal and opposite" (Newtonian) reaction. Instead, the reaction would be "equal and radial" (perpendicular) to the beam generated, and would seem to cancel itself out within the generator. The system would act like a cannon but with no Newtonian kick-back.
Lack of Recoil in Railguns
Apparently, an effect similar to that described in the paragraph above has been seen in rail guns. This effect is magnetic, instead of gravitational, but the similarities are intriguing:
"The rails need to withstand enormous repulsive forces during firing, and these forces will tend to push them apart and away from the projectile." http://en.wikipedia.org/wiki/Railgun
That, in and of itself, is not unexpected, as it is predicted by Faraday's law of induction. What is surprising to investigators is the lack of a reaction force:
“An interesting debate in railgun research circles is the location, magnitude, and cause of recoil forces, equal and opposite to the launched projectile. The various claims do not appear to be supported by direct experimental observation. . . . The research is ongoing but we have observed that the magnitude of the force on the armature is at least seventy times greater than any predicted equal and opposite reaction force on the rails.” (AN INVESTIGATION OF THE STATIC FORCE BALANCE OF A MODEL RAILGUN by Matthew K. Schroeder, June 2007 (thesis paper); http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA473387&Location=U2&doc=GetTRDoc.pdf)
In other words, there seems to be some "missing recoil" in connection with radial electromagnetic forces. Investigating, I found this comment (quoted in part) on the Internet (http://sci.tech-archive.net/Archive/sci.physics.research/2008-12/msg00010.html):
"There is very little room for skepticism about the paper. Large scale tests performed by the US Navy of a prototype rail gun involved a 3.35 Kg projectile with a muzzle velocity of 2520 meters/sec. This gives a momentum in excess of 8000 Kg-meters/sec, enough to send a 200 Kg rail gun backward at over 40 meters per second. A conventional gun with similar performance would require a massive and extensive recoil absorption apparatus. There is none needed with a rail gun. . . .
The lack of recoil in rail guns has disastrous consequences for physics; it is a direct and unequivocal demonstration that the law of conservation of momentum is incorrect." ("Rail Guns don't recoil", Canup, Robert E., December 2008)
The lack of recoil is, shall we say, "non-intuitive". But it is certainly not "disastrous" for physics. The momentum is still there, just not where we expect it to be, or acting in the expected (Newtonian) manner.
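For context, the Newtonian recoil implied by the quoted Navy test numbers is easy to work out (a quick sketch; the 200 kg launcher mass is the figure used in the quoted comment):

```python
# Newtonian momentum bookkeeping for the quoted railgun test figures
m_projectile = 3.35            # kg
v_muzzle = 2520.0              # m/s
p = m_projectile * v_muzzle    # momentum, ~8442 kg*m/s ("in excess of 8000")

m_gun = 200.0                  # kg, the launcher mass assumed in the comment
v_recoil_expected = p / m_gun  # ~42 m/s if a Newtonian kick-back occurred
```

This is the conventional calculation the commenter is relying on; the reported absence of any such kick-back is the anomaly under discussion.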
"Motion Cancellers" (below).
"The Origin of Intrinsic Spin"
"An Overview of the Nature of Time" by Brian Fraser (has relevant comments about gravitation)
The Faraday Paradox at http://en.wikipedia.org/wiki/Faraday_paradox
"Video: Railgun Blasts an Aerodynamic Round Seven Kilometers Through A Steel Plate", http://www.popsci.com/future-war-new-ships-will-determine-control-contested-waters
Popular Science, July 2015, p. 49, states that a rail gun being tested by the Navy accelerates a shell, which weighs about 35 pounds, from zero to 5,000 miles per hour in 1/100 of a second. "It can strike with 32 megajoules of energy, roughly equal to the force of a locomotive smashing into a wall."
The Relativistic Correction Factor, Gamma (γ)
If you study Special or General Relativity you will soon encounter the "relativistic correction factor", gamma, which is usually given as:

γ = 1/√(1 − v²/c²)
The velocity term, v, is the conventional speed of the object in motion and c is the speed of light. Gamma itself is just a dimensionless (pure) number. As per the formula, γ is approximately 1 at ordinary terrestrial speeds. At 99.9% of the speed of light, γ becomes about 22. It is a correction factor, not something that stands alone, and applies to speeds in space. It is used to compute relativistic momentum, relativistic energy, length contraction, and time dilation at high speeds.
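As a quick numeric check of those magnitudes, here is a minimal Python sketch (the helper name `gamma` is mine):

```python
import math

def gamma(v_over_c):
    """Relativistic correction factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

# An airliner at ~250 m/s is roughly 8.3e-7 of the speed of light
g_jet = gamma(8.3e-7)    # indistinguishable from 1

# 99.9% of the speed of light
g_fast = gamma(0.999)    # about 22
```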
Physics textbooks have all sorts of examples about how and why it is used. Here is one concerning muons:
Both the length contraction and time dilation are easy to observe for objects moving at velocities whose magnitudes are an appreciable fraction of that of light. A particularly convincing example is found in the behavior of particles called muons. These are known to be formed at an elevation of around 10,000 m, near the top of the atmosphere, as a byproduct of collisions of rapidly moving cosmic rays with the molecular constituents of the atmosphere. The muons are projected toward the surface of the earth at velocities of about 0.999c. They are unstable particles; on the average each lives for 2.2 × 10⁻⁶ sec, as measured in a reference frame in which the muons are stationary, before decaying into other particles. Now a particle moving at essentially 3.0 × 10⁸ m/sec for 2.2 × 10⁻⁶ sec will travel only 660 m. Hence it might seem that all muons would have decayed long before they are able to reach the ground, since they must travel around 10,000 m to do so. But, in fact, observations show that nearly all the muons formed at the top of the atmosphere reach ground level.
Time dilation explains the observations. A prediction as to whether or not a muon can traverse the thickness of the atmosphere before it decays should not use 2.2 × 10⁻⁶ sec for the time available. This value is the proper time the particles live, on the average, because it is measured in a reference frame in which they are at rest. Instead, the corresponding dilated time should be used since the observations are made in a reference frame in which the muons are moving at a very high velocity. For v/c = 0.999, the time dilation factor has the value γ = 1/√(1 − v²/c²) = 1/0.045 ≈ 22. Hence the dilated lifetime has the value 22 × 2.2 × 10⁻⁶ sec = 4.9 × 10⁻⁵ sec. A particle moving at 3.0 × 10⁸ m/sec for this time will travel a distance of 14,000 m, more than enough to reach ground level before decaying. —Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles, R. Eisberg, R. Resnick, Second Edition, 1985, p. A-9
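The muon numbers in the quote can be reproduced directly (a sketch using the textbook's round values):

```python
import math

c = 3.0e8                # m/s
v = 0.999 * c            # muon speed
tau = 2.2e-6             # proper lifetime, s (muon rest frame)

g = 1.0 / math.sqrt(1.0 - (v / c)**2)   # time dilation factor, ~22
dilated_lifetime = g * tau              # ~4.9e-5 s in the earth frame
distance = v * dilated_lifetime         # ~14,000 m: enough to reach the ground
```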
Relativity can even predict the actual numbers of muons expected at sea level, not just the expectation that most will arrive there:
It is easy to distinguish experimentally between the classical and relativistic predictions for the number of muons detected at sea level. Suppose that we observe with a muon detector 10⁸ muons in some time interval at an altitude of 9000 m. How many would we expect to observe at sea level in the same time interval? According to the nonrelativistic prediction, the time taken for these muons to travel 9000 m is (9000 m)/0.998c ≈ 30 µsec, which is 15 lifetimes. Inserting N₀ = 10⁸ and t = 15T into Equation 1-0 [N(t) = N₀e^(−t/T)], we obtain
N = 10⁸e^(−15) = 30.6
We would thus expect all but about 31 of the original 100 million muons to decay before reaching sea level.
According to the relativistic prediction, the earth must travel only the contracted distance of 600 m in the rest frame of the muon. This takes only 2 µsec = T. Thus the number expected at sea level is
N = 10⁸e^(−1) = 3.68 × 10⁷
Relativity predicts that we should observe 36.8 million muons. Experiments of this type have confirmed the relativistic predictions. —Modern Physics, Paul Tipler, 1978, p. 13
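Both predictions follow from the same decay law; only the elapsed time differs (a sketch using the example's round numbers):

```python
import math

N0 = 1e8          # muons counted at 9000 m altitude
T = 2.0e-6        # muon lifetime used in the example, s

# Classical: 9000 m at 0.998c takes ~30 microseconds = 15 lifetimes
N_classical = N0 * math.exp(-15.0)      # only ~31 muons survive

# Relativistic: the contracted 600 m takes ~2 microseconds = 1 lifetime
N_relativistic = N0 * math.exp(-1.0)    # ~3.68e7 muons survive
```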
(An alternative interpretation is that time progression remains completely uniform, and that motion affects the decay "constant".)
Modern physics does not clearly explain why the Universe acts this way. Consequently, gamma becomes a type of sophisticated "fudge factor" that is used in the equations to make the answers agree with experiment. Hopefully we can educate our intuition by seeking some additional insights into what the gamma equation is trying to tell us.
With some elementary math, we can rearrange it into a different form:
1 − v²/c² = 1/γ²
1 = 1/γ² + v²/c²
c² = c²/γ² + v²
The last equation with the sum of squares suggests a Pythagorean relationship or a "Euclidean distance" relationship with the speed of light. The relationship could also be written in terms of orthogonal functions (sine and cosine, complex numbers, vectors, etc.). My own term for this kind of math is "orthogonal sum".
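The Pythagorean character of the last equation is easy to verify numerically (a quick sketch in units where c = 1):

```python
import math

c = 1.0   # natural units
for v in (0.0, 0.3, 0.6, 0.999):
    g = 1.0 / math.sqrt(1.0 - v**2)
    # c^2 = c^2/g^2 + v^2 : the two "legs" sum (in squares) to c^2
    assert abs(c**2 - (c**2 / g**2 + v**2)) < 1e-12
```

The identity holds at every speed because it is just the gamma definition rearranged, which is what makes the "Euclidean distance" reading possible.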
A slide from my presentation "The Quest for the Stardrive"
Gamma applies only to speeds in space. However, motion at speeds comparable to that of light involves temporal speeds (motion in three-dimensional time) as well as spatial speeds (motion in three-dimensional space). If we want to combine a temporal speed with a spatial speed, we have to use an orthogonal sum—exactly what this equation is using. Hence, we could replace the c²/γ² term with a term that represents a temporal speed, which when stated in terms of our spatial reference system would be an inverse speed. When written in s/t dimensions, such an equation would look like the following:
(s/t)² [=] (1/(t/s))² + (s/t)²
The [=] means that only the dimensions are being considered, not numeric magnitudes.
These two forms of the gamma equation tell us:
- A gravitational reference system, be it a planet, a spacecraft, or an atom, always moves at the speed of light. In fact, everything moves at the speed of light.
- The complete speed relative to such a system actually has two orthogonal components: a temporal speed and a spatial speed. When the spatial speed is zero (v=0), the speed of the system is entirely in time. Time flows, but space "stays put" or "is stationary". On the other hand, if the system is moving at the (spatial) speed of light (v=c), then space flows, and time "stays put" or "stops".
In this interpretation, we can immediately see the reason for time dilation at speeds comparable to that of light. At c, the phenomenon remains in the same time unit and does not experience the flow of time. At speed c, clocks would have an indefinitely long tick. Unstable particles would have indefinitely long lifetimes. At speeds slightly less than light, time flows a little bit, but not nearly as fast as we normally experience it. The muons in the example above have their lifetimes stretched out by a slow passage through the time units.
I prefer to illustrate the relationship of the two speeds with this kind of diagram:
2-17-14 Note: A review of this diagram suggests there is a problem with the last sentence in the next-to-last paragraph. The speed s/t = 1/∞ from the standpoint of a gravitational reference system is zero speed in space, c speed in time (ordinary "local physics"). At the speed of light, which is actually intermediate on the speed spectrum from our standpoint, the speed would be c speed in space, and c speed in time (which reduces to 1/1). At the other extreme is zero speed in time, and infinite speed in space (s/t = ∞/1), which is instantaneous action-at-a-distance from our gravitational observational standpoint (fully "non-local physics").
In the true physical situation the "natural" or real "zero" is c (or 1/1), and spatial and temporal speeds are displacements away from the origin. The math is similar to the familiar r² = (r sin θ)² + (r cos θ)² of high school trigonometry, where r = 1. But trying to explain this from the perspective of a gravitational reference system, using separate space and time dimensions instead of "motional dimensions", introduces some conceptual difficulties with the math.
And so I don't know if this note clarifies or only adds to the confusion. Pythagoras encountered a similar problem, and my comments on that might be helpful. See also Gravitational motion has multiple dimensions
3-9-14 Note: Here is more food for thought on the c² = c²/γ² + v² equation. I apparently have had a blind spot regarding the "units" in this equation. The left side can be time/space or space/time. These are identical at the speed of light and so it does not matter which is used.
As for the right side, the least strained interpretation is that the first term is truly a temporal speed and the second is truly a spatial speed. In physics terms a "non-local speed" is being combined with a "local speed". The first is more like energy and the second is, of course, a velocity.
My blind spot appears to be that an "orthogonal sum" can sum not only independent things, but also things of a completely different character.
Consider these examples. The Pythagorean Theorem sums independent x and y lengths, but the sum (the hypotenuse) is still a length. In physics class we would add x and y velocities, but the result was still a velocity. Or we could combine various weights of red, green, and blue independent color dimensions, but the result was still color. But this trait of "independence with sameness" is not really necessary. In familiar terms, we could add (combine) 3 pounds of carrots and 4 pounds of potatoes in a pot of water. The resultant is neither carrots nor potatoes, but is something else that includes the character of both. In this case we can call it "soup", or more specifically a "soup taste vector". The length of 5 shows how much of this specific soup taste we have to distribute.
Similarly we could add old refrigerators, junked cars, rotting garbage, dirt etc. and call it a “landfill vector”. We could even devise a vector “inner product space” and take dot products of the "landfill state vector" with a unit vector representing a particular component, to find out how much stuff of a particular kind we have in the landfill (something that might interest the EPA, for instance). Such methods are well known to mathematicians and quantum physicists.
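The landfill illustration can be made concrete with a small sketch (the component names and tonnages here are invented for illustration):

```python
# Basis: (refrigerators, junked cars, garbage), each measured in tons
landfill = [12.0, 30.0, 7.5]

def dot(a, b):
    """Inner product of two component lists."""
    return sum(x * y for x, y in zip(a, b))

# A unit vector picks out one component of the "landfill state vector"
e_cars = [0.0, 1.0, 0.0]
tons_of_cars = dot(landfill, e_cars)          # tons of junked cars

# The Euclidean length combines the dissimilar components into one magnitude
total_magnitude = dot(landfill, landfill) ** 0.5
```

The dot product with a unit vector is the "how much of this kind of stuff" question; the length is the combined magnitude, just as with the soup.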
Hence, there is no need to force the units on the right side of the equation to agree in character, as long as the concept of Euclidean distance still makes sense (it would not make sense, say, in a pressure-temperature-volume diagram). But what is needed is an adjustment in magnitude for the temporal term as seen from a spatial reference system. Hence, gamma (γ) needs no units and can still be a pure number.
6-8-14 Sooner or later I will get this right. I now think all the terms have the dimensions of space/time (ordinary velocity) and that gamma can remain dimensionless (a pure number). The units really have to be the same to preserve the concept of Euclidean distance. In the soup example, the dimensions are NOT potatoes or carrots, but pounds. In the landfill, the dimensions could be tons or cubic yards but NOT refrigerators, cars, etc. The gamma term simply changes the magnitude of a temporal motion into the magnitude that would be seen in a spatial reference system as an ordinary spatial velocity.
6-23-16 Another twist on the "summation" concept comes from vector and Geometric Algebra:
"How can we add, e.g., a scalar and a vector? Are we not adding apples and oranges? Yes, but there is a sense in which we can add apples and oranges: put them together in a bag . . . . The apples and oranges retain their separate identities, but there are "apples + oranges" in the bag." (Linear and Geometric Algebra, Alan MacDonald (2010), p. 81)
This is the sense that a quaternion is the "sum" of a scalar and a vector, or a complex number is the sum of a real part and an imaginary part. In Geometric Algebra "the vector space G3 consists of objects of the form M = s + v + B + T, where s is a scalar, v is a vector, B is a bivector, and T is a trivector." Each part retains its own identity and can be "summed" (in the usual sense of the word) only with another part of like kind in a different object. But the object itself may be the "sum" of distinctly different parts in the sense of being in the same bag.
To get a better intuitive feel for this, consider this rather contrived illustration. Imagine you are on a boat in a river with some extraordinarily ignorant boatmen. The boatmen do not know what a river is. The river you are on is, to them, a long lake. When the boat is rowed out to a spot in the middle of the long lake, the "magic wood" in the boat's hull makes the land flow by. You point out that the land seems to move because the boat is in a river of moving water and is being carried along by the motion of the water, and that is why the land seems to flow by. But the boatmen are unconvinced. They are on a lake, and the lake water is stationary. They throw a cork overboard and say "See, the cork stays exactly in the same place as the boat, exactly as it does on land, where we did the same test. We are not moving. It is the land that moves, not us."
Later, you discover that the boat has a motor. And so you propose an experiment. You drive the boat upstream with the motor on. The boatmen remark that "The land has stopped flowing, but now the water moves." They throw another cork overboard, and it rapidly moves away. They seem disappointed that you do not believe in the powers of the "magic wood" in the boat's hull. It is as though you have cheated by using the motor.
One thing you realize in this situation is that no matter what you do, the boat is always moving. If you turn off the motor, the boat moves with respect to the land. If you drive upstream with the motor on, the boat moves with respect to the water. At intermediate speeds, you are moving with respect to both land and water. If you wrote physics equations to describe the situation, you would always have an extra "speed factor" popping up in the equations somewhere.
And that is how things are in a gravitationally bound reference system. Space stays put, but time flows past us. But if we get into a spaceship and move at the speed of light, we find that space flows past us, but time becomes stationary. No matter what we do, something is still moving! And a speed factor, c, keeps showing up in fundamental physics equations like E=mc², E=pc, and E=cB. If we try to measure the relationship between a magnetic field and an electric field, we find that different observers with different speeds will see different magnitudes of the magnetic and electric components (see the example in the Motion Cancellers article below). And unlike the situation with the boat, where the speeds are purely spatial and of the same basic nature, the speed of an object in the context of a gravitational system is a combination of two speeds of a dissimilar nature. And so the total speed has to be computed by orthogonal addition of the two terms. This means that they are inextricably intertwined with each other, and that our simple concepts of space and time must be augmented with some really obnoxious "relativistic" relationships.
At ordinary everyday speeds these complex relationships can be ignored. But they are still present, and can be detected with high-precision instruments, even at low speeds. An experiment with ultra-precise atomic clocks flown on commercial airline flights in 1971 demonstrated the kinematic time shift (Special Relativity) and the gravitational time shift (General Relativity). And lately there have also been hints of the Lense-Thirring "frame drag" caused by rotation of a gravitational body like the Earth.
"The Tajmar Effect from Quantised Inertia", M.E. McCulloch (June 17, 2011)
"Guidelines to Antigravity", Robert L. Forward, American Journal of Physics, Vol. 31, No. 3, 166-170, March 1963 (received 12 September 1962) http://www.academia.edu/3336384/Antigravity_-_by_Robert_L.Forward
"It has been found experimentally by [1-3] that when rings of niobium, aluminium, stainless steel and other materials are cooled to 5K and spun, then accelerometers and laser gyroscopes, not in frictional contact, show a small unexplained acceleration in the same direction as the ring, with a size 3 ± 1.2 × 10⁻⁸ times the acceleration of the ring for clockwise rotations, and about half that value for anticlockwise ones. This is called the Tajmar effect and is similar to the Lense-Thirring effect (frame-dragging) predicted by General Relativity, but is 20 orders of magnitude larger and shows the added parity violation. The effect has not yet been reproduced in another laboratory." http://arxiv.org/pdf/1106.3266.pdf
"EINSTEIN'S general theory of relativity provides a number of ways to generate non-Newtonian gravitational forces. Theoretically, all of these forces could be used to counteract the gravitational field of the earth, thus acting as a form of antigravity. The three outlined here were probably known by Einstein before he published his paper on the principle of general relativity in 1916. They were first specifically derived by Thirring in 1918, and since then have been contained in nearly every text on general relativity.
The equations of general relativity not only predict the usual radial Newtonian gravitational force behavior of a stationary mass on a stationary test body, but they also predict that a moving mass can create forces on a test body which are similar to the usual centrifugal and Coriolis forces, although much smaller. In addition, when the general relativity field equations are linearized, they result in a set of dynamic gravitational field relations similar to the Maxwell relations. Thus one can use intuitive pictures from electromagnetic theory to design theoretical models. Whether the effects predicted by the linearized theory really exist, will, of course, have to be checked by repeating the calculations with the nonlinearized field equations."
Three cases are covered:
1. A massive rotating ring with a test body below it and centered on the rotation axis. The result: "the rotating mass not only forces the test body away from the axis in an imitation of centrifugal force, but also pulls it upward into the plane of rotation"
2. A rotating massive spherical shell, with a test body moving inside the shell.
3. A large accelerated mass near a small test body. It is found that the accelerated body drags the test body along with it. "In addition to the usual Newtonian attraction, the test body experiences forces in the direction of the acceleration and the velocity of the large body . . . "
Update 5-19-2003: Buried in this interpretation somewhere is a suggestion that gravitation is necessarily non-directional. Our planet is "moving through time" or "time is passing by us". In other words, our Earth is moving relative to time. The real motion must be some dimensional version of a t/s ratio (three-dimensional time per unit of clock space). Temporal motion, however, has no direction in space. The gravitational motion can therefore have a signed magnitude, but vectorial direction is fundamentally meaningless here. It follows that gravity must necessarily have the 1/r² or "inverse square" force (motion) distribution explained earlier. Other inverse square forces will likely have an analogous structure (t³/s³ for mass, t²/s² for magnetic "charge", t¹/s¹ for electric charge). Also implied is: 1.) the motion of a spacecraft within the spatial system can make the passage of time seem to slow down to zero, but cannot make time speed up, and 2.) the fundamental motion of a spacecraft will always oppose gravitational motion; it must be "towards c" and not "away from c and towards more gravity". See also: http://fqxi.org/data/essay-contest-files/Fraser_NatureOfTime.pdf ; "Luxon Hypothesis" http://www.tardyon.de/other.htm
I need to comment on the practical implications of this interpretation of gamma. Here is what I see:
1. Mass does not increase with increasing speed. Instead, a directly measurable spatial motion is converted to a non-directly measurable temporal motion. Temporal motion (t/s) is the inverted form of spatial motion (s/t). At speeds less than that of light, the spatial motion predominates. At "speeds" above that of light, all space and time relationships invert (from our perspective), and the temporal motion predominates. (Temporal motion is equivalent to what physicists call "non-local" motion; spatial motion is called "local" motion. The former is non-directional in a spatial reference system, and has only a magnitude, like energy; the latter is ordinary velocity.)
2. Energy is a better measure of the "amount of motion" at relativistic speed than spatial velocity. The natural, fundamental pattern for speed is c, the speed of light, and its inverse is 1/c, which has the dimensions of energy. This explains the behavior of particles in a particle accelerator. An electron with a speed of 0.995 that of light has an energy of about 15 MeV. At a speed of 0.99999995 that of light, it has an energy of 5 GeV. Note that the speed has increased by a factor of only about 1.005 but the energy has increased by a factor of 300. How can there be such a huge increase in energy with only a tiny (5 parts in a thousand) speed increase? It is because the measure of the "amount of motion", speed, is misaligned with the problem. The "amount of motion" instead goes mostly into the t/s term (energy). (In a more normal circumstance the non-directional momentum would be seen as "thermal motion".)
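The speed-versus-energy mismatch in point 2 can be checked with the standard gamma factor (a sketch; taking energy proportional to γ, as in the conventional E = γmc², the energy ratio comes out near 300 while the speed ratio stays close to 1):

```python
import math

def gamma(v_over_c):
    """Relativistic factor; energy scales with gamma at fixed rest mass."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

v1, v2 = 0.995, 0.99999995             # the two electron speeds from the text
speed_ratio = v2 / v1                  # ~1.005: speed barely changes
energy_ratio = gamma(v2) / gamma(v1)   # ~316: energy grows ~300-fold
```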
3. If "temporal speeds" are technologically accessible, we could develop a completely new kind of space propulsion system that is not based on the production of ordinary velocity. It would have to be a "field propulsion" technology based on the non-local characteristics of electric, magnetic, and gravitational fields. It would be capable of non-local motion: an object could go from "here" to "there" without traversing the intervening space (that is, it has no trajectory. An object would appear, then disappear, then reappear somewhere else). Ordinary spatial motion is also possible. And it seems possible that the two could mix, depending on how the type and dimensions of the momentum map into our reference system; spatial dimensions could overlap, and an object could appear to be semi-transparent, seemingly "materializing" out of thin air, and even occupying the same space with something else. Such an object may also manifest side-effects of powerful electric and magnetic fields.
Such a propulsion system would NOT have a Newtonian reaction, like a rocket ship. The reaction in such a system would be radial and symmetric and cancel itself out. The reaction is like the Poynting vectors in a charging capacitor of cylindrical construction. The vectors point radially inward and cancel out to yield no net momentum (unless the capacitor is asymmetric). The action itself is perpendicular to the plane formed by the reaction vectors. That means a spacecraft could be entirely self-contained and "bootstrap" itself (and its occupants) to high spatial velocities or even non-local motion. It would be like a railgun accelerating itself with no recoil. The structure of the ship, however, must be strong enough to withstand the radial reaction. (Newtonian reaction forces are conceptualized as "equal and opposite". Electromagnetic reaction forces can be conceptualized as "equal but orthogonal". Think "dimension", not "direction".)
Special and General Relativity and the gamma correction factor work fine for reference system effects. But remember that these theories are "local" by intention and design. They assume causality in space, that all speeds must be spatial, and that speeds must be less than c. They are out of scope when applied to "non-local" phenomena. See the article below: In Search of the Geometry of Space, Time, and Motion and "Beyond Einstein: non-local physics".
"Call to me and I will answer you and tell you great and unsearchable things you do not know." —Jeremiah 33:3, NIV
In Search of the Geometry of Space, Time, and Motion
Author's note: An article of this title exists in my notes in a fragmented outline form. I never wrote it up because it was lengthy and seemed to lack "personal relevance" to my readers. But many of you might enjoy the fragment below. Maybe someday I'll write up the whole thing, but it would be much too long to include in Advanced Stellar Propulsion. (The blue text means that it is still being edited/reviewed.)
The problem of metrics
The Euclidean metric worked fine for thousands of years, and still works fine today for ordinary purposes. But in the last two hundred years or so, questions have been raised about the physical applicability and scope of the Euclidean metric:
Tensor Analysis: Theory and Applications to Geometry and Mechanics of Continua, I. S. Sokolnikoff, 2nd ed. (1964), pp. 105-106:
There is no branch of mathematics in which the tyranny of authority has been felt more strongly than in geometry. The traditional Euclidean geometry, based on a set of "self-evident truths" and created largely by the Alexandrian School of mathematicians (around 300 B. C.), dominated the thought and shaped the development of physics and astronomy for over 2000 years. There were a few bold souls, even among the ancient mathematicians, to whom "self-evident truths" contained in Euclid's axioms did not seem convincing, but the prestige of logical structure of Euclid's Elements was so high and the hand of authority so heavy that they hindered the development of mathematics for centuries.
In 1621, Sir Henry Savile raised some questions concerning what he called "two blemishes" in geometry, the theory of proportion and the theory of parallels. . . . In 1826, a Russian mathematician, Nicolai Lobachevski, presented to the mathematicians faculty of the University of Kazan a paper based on an assumption that it is possible to draw through any point in the plane two lines parallel to a given line. The geometry developed by Lobachevski proved just as devoid of inner inconsistencies as Euclidean geometry. Indeed, it contained the latter as a special case and implied the arbitrariness of the concept of length adopted in Euclidean geometry.
In 1831, a Hungarian mathematician, John Bolyai, published results of his independent investigations which conceptually differ little from those of Lobachevski, but which perhaps contain a deeper appreciation of the metric properties of space. Bolyai pointed out, just as Lobachevski did, that his geometry in the small is approximately Euclidean and only a physical experiment can decide whether Euclidean or non-Euclidean geometry should be adopted for the purpose of physical measurement. Thus it appears that there are no a priori reasons for preferring one geometry to another. However, it was only after Riemann's profound dissertation on the hypotheses underlying the foundations of geometry appeared in print (published posthumously in 1867) that the mathematical world recognized fully the role played by the metric concepts in geometry.
Riemann appears to have been unaware of the work of Lobachevski and Bolyai, although it was well known to Gauss. Later, Beltrami published his classical paper on the interpretation of non-Euclidean geometries (1868) in which he analyzed the work of Lobachevski, Bolyai, and Riemann and stressed the fact that the metric properties of space are mere definitions. . . .
The reason this is important today is because non-local effects must be considered in the more general physical picture of space, time, and motion. In a non-local situation, events and entities are demonstrably intimately and immediately connected, but not by spatial proximity or spatial contact, and are therefore free of the limitations normally imposed by spatial distance. Consider the EPR effect. This effect implies that two photons can be spatially separated by light years and yet still be "together" in some way, such that an action on one affects the other instantaneously. In other words, it implies that it is possible to set up instant Star Trek-like communications between spacecraft that could be hundreds of light years apart in space.
And so what is your metric for "distance" in this situation? What is a realistic measure of "separation"? It is certainly not Euclidean. But the Euclidean notion of distance is 'a mere definition'. Might another definition be more appropriate? And could this have physical applications, say, for space travel? Might things actually be closer than we think they are, just not in space? (See also "Teleportation Is Real – But Don't Try It at Home", Danielle Dowling, Jan. 29, 2009 , http://www.time.com/time/health/article/0,8599,1874760,00.html ; also http://newsfeed.time.com/2012/05/15/beam-them-up-scotty-chinese-physicists-reportedly-break-teleportation-record/ )
When you sit in your chair and read this article, you are at equilibrium with Earth's gravitational force. Nevertheless, you are experiencing an acceleration of about 9.8 m/sec2. But you are not moving to a new "where". Gravitation is a non-local motion. It moves you to a new "when". If you don't believe me, just look at your watch. It is ticking off the seconds while you are in the same place. Still don't believe me? Remember, General Relativity teaches that clock rates are affected by gravitation. Clocks slow down in a high gravity environment, and experiments have demonstrated this effect. You have both a "when" and a "where" location. And so does your chair. Acceleration can affect the locations of both. So how do you write an expression specifying the true "physical distance" between you and the chair? And will it still be valid for interatomic distance (discussed below)? Or for stars in ultracompact galaxies (discussed below)? And at very high speeds, motion acquires more of a non-local character. What will you see when you look out the window of your spacecraft? How will you measure "distance" and do navigation?
Our notions about motion are in need of adjustment too. As per Einstein's Special Relativity, physicists believe that nothing can exceed the speed of light in a vacuum. But today this needs to be interpreted as "nothing can exceed the spatial speed of light in a vacuum." There may be other kinds of speeds, that is, other kinds of motions. Consider astronomical redshifts:
"The most distant observed gamma ray burst was GRB 090423, which had a redshift of z = 8.2. The most distant known quasar, ULAS J1120+0641, is at z = 7.1 . The highest known redshift radio galaxy (TN J0924-2201) is at a redshift z = 5.2 and the highest known redshift molecular material is the detection of emission from the CO molecule from the quasar SDSS J1148+5251 at z = 6.42." ( http://en.wikipedia.org/wiki/Redshift )
z is the redshift that the telescope sees compared to the laboratory reference value. It is just a number; the interpretation is left up to the astronomer. A z greater than one implies a speed greater than that of light. Simplistically, a z of 5.2 would imply a speed of over 5 times that of light. But because of the acceptance of Special Relativity, physicists and astronomers find this interpretation hard to accept, and so they use Special Relativity theory to "correct" these speeds to sublight values. In other words, they map the speeds into a system of purely spatial motion, so that the result is always less than the speed of light.
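The mapping described here can be made concrete. A sketch comparing the naive reading (v/c = z, superluminal for z > 1) with the standard special-relativistic Doppler formula that astronomers apply to keep every recession speed below c (this simply illustrates the "correction", it does not endorse either reading):

```python
def naive_beta(z):
    """Naive interpretation: v/c = z (superluminal for z > 1)."""
    return z

def sr_beta(z):
    """Special-relativistic Doppler mapping: always below 1."""
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

# Redshifts quoted in the Wikipedia excerpt above.
for z in (5.2, 6.42, 7.1, 8.2):
    print(z, naive_beta(z), round(sr_beta(z), 4))
```

For z = 5.2, the naive speed is 5.2c, while the relativistic mapping yields about 0.949c; no matter how large z grows, the mapped speed only creeps asymptotically toward c.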
We see the same "corrections" applied in other situations. Experiments show that gravity and electric fields act instantaneously. But today's theories win out over fact. The speed of gravity gets "corrected" down to that of light, even though NASA cannot use this "correction" in its orbital calculations. Only an instantaneous speed for gravity gives the correct answers.
Clearly, we need a more comprehensive metric for motion. Our notions about space and time are derived from motion. Motion is not "made of" a relation between space and time. Motion comes first. Think of how you make a box. Do you start with an "inside" and an "outside"? Or do you start with the box itself first, and then define what is meant by an "inside" and an "outside"? Motion is the primary concept, space and time are secondary, derived concepts.
Problems with time
Our view of motion affects our view of time. In physics, time is generally treated as a parameter, not as a variable. Action occurs "in space", not "in time". Time is used as a descriptor, not a participant; it is "external" to the motion. Because of this, some physicists have proposed eliminating the concept of time as fundamental. Consider Amrit S. Sorli's paper, "Time is Derived from Motion through Timeless Space":
"A growing number of modern researchers are challenging the view that space-time is the fundamental arena of the universe. They point out that it does not correspond to physical reality, and propose “timeless space” as the arena instead. . . . Time and clocks are man-made inventions. Motion is primary, time is secondary. Time is an artifice of measurement, a useful tool that permits us to build mental and mathematical models for our daily lives as well as for our physics and cosmology. But time as a fundamental entity has no role in physics.
[Conclusion] When physical objects move, they move through space, not through space-time, and not through time. Time is derived from this motion through space, and space itself is timeless. Whilst the speed of light is considered to be a maximum rate of motion, this varies with the local environment, the photon is an extended entity that experiences no time, and some atomic-scale physical phenomena appear to be timeless. Clocks are macroscopic measuring devices which accumulate local internal motion, and we can record a sequencing of that motion and the changes that occur in space. But we can find no evidence to support the existence of space-time as a fundamental entity. Accordingly we must conclude that we live in a timeless atemporal universe of space and motion, where the past and future only exist in the human mind, and the only eternity is now." ("Time is derived from motion through timeless space", Amrit S. Sorli)
And this from Carlo Rovelli's paper “Forget time” (2008):
"Following a line of research that I have developed for several years, I argue that the best strategy for understanding quantum gravity is to build a picture of the physical world where the notion of time plays no role at all. I summarize here this point of view, explaining why I think that in a fundamental description of nature we must “forget time”, and how this can be done in the classical and in the quantum theory. The idea is to develop a formalism that treats dependent and independent variables on the same footing. In short, I propose to interpret mechanics as a theory of relations between variables, rather than the theory of the evolution of variables in time. " ("Forget Time", Carlo Rovelli, 2008, http://www.fqxi.org/data/essay-contest-files/Rovelli_Time.pdf )
And this from "The Nature of Time" by Julian Barbour:
"I will not claim that time can definitely be banished from physics . . . . Nevertheless, I think it is entirely possible—indeed likely—that time as such plays no role in the universe." ("The Nature of Time", Julian Barbour, http://www.fqxi.org/data/essay-contest-files/Barbour_The_Nature_of_Time.pdf )
And this from " 'Space Travel is Utter Bilge' ", a 1956 quote from astronomer Sir Richard Woolley, used as the title of an article by Donald Yeomans (2002), a JPL senior research scientist, wherein he states:
"We must re-examine the physical properties of space itself if we are to understand the relation between electromagnetic and gravitational forces. We must also re-examine our concept of time. It is possible that time is more than one-dimensional." http://greyfalcon.us/restored/Secrets%20of%20the%20Saucer%20Scientists.htm
And this from "Physical Principles of Advanced Space Propulsion Based on Heim's Field Theory", Walter Dröscher, Jochem Häuser (2002) http://www.hpcc-space.com/publications/documents/PrinciplesOfAdvancedSpacePropulsionAIAA-paper-2002-4094.pdf
"In this context, space and time are not the container for things, but are, due to their dynamic (cyclic) nature, the things themselves. This is an entirely different physical picture from the approach of simply adding the stress-energy-momentum tensor of the electromagnetic field to the right-hand side of Einstein's field equations . . "
For additional articles about time see: http://www.fqxi.org/community/essay/winners/2008.1 And http://milesmathis.com/time.html
Problems with "space"
Similar arguments could just as validly be applied to space. We might need to "forget space" too, at least as a fundamental concept. I have asserted that the quantum mechanical world is a world that is limited to one unit of space. There is no "inside" to this space. It is non-metrizable. We therefore cannot specify trajectories or velocities in the quantum world. The "happenings" are in three-dimensional time, not space. Only a non-local (and therefore non-directional and probabilistic) description can be given.
Clearly, a choice of metric will be affected by quantization boundaries: phenomena that involve one unit of space, one unit of time, or one unit of their ratio (space/time or time/space) may appear/behave/measure in a strangely non-intuitive manner from the viewpoint of humans who are accustomed to a reference system that is quite "distant" :-) from these boundaries. According to current views in physics, the photon, for example, experiences no time flow at all. Now it is appropriate to ask: Does it even experience space flow? Like a leaf in a river, it might be stationary with respect to what is really moving.
A choice of a distance metric also affects interatomic distance measurements, and we know something weird is going on with that. When certain salts are melted, the volume of the melt increases compared to the volume of the unmelted solid. This would lead us to expect that the interatomic distances in the melt would increase slightly. But in fact the distance decreases:
"There is another important fact about the melting process. When many ion lattices are melted, there is a 10 to 25% increase in the volume of the system (Table 5.10). This volume increase is of fundamental importance to someone who wishes to conceptualize models for ionic liquids because one is faced with an apparent contradiction. From the increase in volume, one would think that the mean distance apart of the ions in a liquid electrolyte would be greater than in its parent crystal. On the other hand, from the fact that the ions in a fused salt are slightly closer together than in the solid lattice, one would think that there should be a small volume decrease upon fusion. How is this emptiness—which evidently gets introduced into the solid lattice on melting—to be conceptualized?" (Modern Electrochemistry: Ionics, John O'M. Bockris, Amulya K. N. Reddy, 2nd ed., 1998, pp. 611-612)
"Such "volumes of nothingness" must be present to account for the large increase in volume upon fusion while at the same time the internuclear distance decreases (see Tables 5.9 and 5.10)" (Bockris, ibid., p. 619)
". . . this space is counterintuitive to the internuclear distances given by X-ray or neutron diffraction. The internuclear distances found in molten salts are smaller, not bigger, as might be thought from the increase in volume." (Bockris, ibid., p. 620)
(For more on this see Melted volume increases, but internuclear distance decreases. Why? and Natural Quantities . . . )
Still more trouble with the interatomic distance metric is suggested by the ultra high density of matter inside white dwarf stars:
"the average density of matter in a white dwarf must therefore be, very roughly, 1,000,000 times greater than the average density of the Sun, or approximately 10^6 grams (1 tonne) per cubic centimeter." (http://en.wikipedia.org/wiki/White_dwarf )
Scientists try to explain this fantastically high density with very contrived "explanations" like "electron degenerate matter" and "neutron stars". But again the whole problem may result from some misconceptions about the appropriate metric for interatomic distance. It is important for us to understand what is going on here, and it has implications for space travel.
The density of matter in a white dwarf is greater than that of ordinary water by a factor of 10^6. In a so-called neutron star it is 10^14. What if "space" could, by technical means, be shortened somehow by a factor of 10^14? The Andromeda galaxy is approximately 2 x 10^6 light years from Earth. If by artificial means we could "shrink" the distance by a factor of 10^14, then Andromeda would be only 2 x 10^-8 light years distant. That is about 0.6 light seconds, closer than the Moon is to Earth. Distances in the universe would become trivial from a space travel standpoint. That may seem far-fetched and hard to visualize. But if motion is the real metric, as suggested above, our concepts of what we call space or time are quite artificial. Motion is a ratio between space and time (s/t). Suppose we could somehow put more time between atoms. That would decrease the effect of the spatial unit, seemingly shrinking it. Ultrahigh density matter could be made in the laboratory. Nature does it somehow. Why can't we do the same? And if we could do it in the laboratory, why not in open space? The "inverseness" of the space/time relationship in motion implies that spatially distant objects might be close temporally.
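The arithmetic in this paragraph is easy to verify. A sketch, where the 10^14 "shrink factor" is of course the hypothetical element:

```python
# One light year expressed in light-travel seconds (~3.156e7).
LY_IN_SECONDS = 365.25 * 24 * 3600

andromeda_ly = 2.0e6   # approximate distance to M31, in light years
shrink = 1.0e14        # hypothetical factor, borrowed from neutron-star density

shrunk_ly = andromeda_ly / shrink                  # 2e-8 light years
shrunk_light_seconds = shrunk_ly * LY_IN_SECONDS   # ~0.63 light seconds

moon_light_seconds = 1.28  # light-travel time from Earth to the Moon
print(round(shrunk_light_seconds, 2), shrunk_light_seconds < moon_light_seconds)
```

So under the stated assumption, the "shrunk" Andromeda would indeed sit at roughly half the light-travel distance of the Moon.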
Related: Is it possible to have an "inverted star"? That is, a star where the heavy elements "ungravitate" to the surface and the lighter elements gravitate to the core? Here is a note from Science News, "Odd white dwarf offers peek at core", Christopher Crockett (April 30, 2016), pp. 12-13:
White dwarfs . . . are the last place astronomers expected to find a nearly pure oxygen atmosphere. . . . a newly discovered white dwarf . . . has no hydrogen or helium at its surface. Its atmosphere is dominated by oxygen. . . . While oxygen dominates this white dwarf's atmosphere, neon and magnesium come in second and third . . . . In 2007, Dufour and colleagues reported a similar strange sighting: several white dwarfs whose atmospheres were loaded with carbon instead of hydrogen and helium. . . . "This white dwarf might only be a freak. . . . Although often in science, it's the exception that makes you understand a great deal later on."
And there are other astronomical objects that suggest problems with the distance metric. But instead of space between atoms, the problem is space between stars. One example pertains to 'ultra-compact dwarf galaxies' :
"UCDs were discovered in 1999. Although they are still enormous by everyday standards, at about 60 light years across, they are less than 1/1000th the diameter of our own Galaxy, the Milky Way." http://www.sciencedaily.com/releases/2009/02/090212093900.htm
Another pertains to the internal structure of quasars:
"Some quasars display changes in luminosity which are rapid in the optical range and even more rapid in the X-rays. Because these changes occur very rapidly they define an upper limit on the volume of a quasar; quasars are not much larger than the Solar System. This implies an astonishingly high energy density." ( http://en.wikipedia.org/wiki/Quasar )
Quasars are apparently super-compact galaxies. They seem to be an extreme example of the UCDs.
Another conundrum is that quasars, thought to be the most distant objects in the universe, are associated with nearby galaxies:
"The apparent distance of quasars may be illusionary, and they could be nearby. In fact, a good deal of evidence demonstrates that redshifts cannot be trusted as indicators of distance when it comes to quasars." http://www.livingcosmos.com/quasar.htm
Apparently, large galaxies can eject compact objects that expand. Those "knots" in the M87 jets could each be a highly 'compressed' collection of stars that eventually expand back out into small galaxies after ejection:
"To the unconventional astronomer, especially to Halton Arp, who has been the primary collector of these discrepant observations, it looks as if the primary galaxy is ejecting "babies" that grow up into companion galaxies." http://www.thunderbolts.info/tpod/2005/arch05/050106universe-arp.htm
The idea of compressed structures expanding back out into normal density structures reminds me of the novae associated with white dwarf stars. As already noted above, these stars are composed of extremely dense material. Novae could be a manifestation of a process that causes an ultradense star to adjust its density back to normal. Exactly what is going on here is not at all clear, but it probably involves a quantization boundary, which in turn requires a "motional metric" (discussed below) to be properly understood.
Following this line of thought, there is even a several decades old theory that the Earth itself is physically expanding:
"Global Expansion Tectonics a More Rational Explanation", James Maxlow http://tmgnow.com/repository/global/expanding_earth.html
"The Expanding/Growing Earth", David Bressan (2011):
"A much stranger idea to explain the assumed phenomena was proposed by the German physicist Pascual Jordan in 1966 - the increase of earth was imputable to the general dilatation of the space-time continuum."
"In 1966, Jordan published the 182 page work Die Expansion der Erde. Folgerungen aus der Diracschen Gravitationshypothese (The expansion of the Earth. Conclusions from the Dirac gravitation hypothesis) in which he developed his theory that, according to Paul Dirac's hypothesis of a steady weakening of gravitation throughout the history of the universe, the Earth may have swollen to its current size, from an initial ball of a diameter of only about 7,000 kilometres (4,300 mi)." http://en.wikipedia.org/wiki/Pascual_Jordan
An even stranger claim is in a German patent by Karl Nowak ("Verfahren und Einrichtung zur Änderung von Stoffeigenschaften oder Herstellung von stark expansionsfähigen Stoffen"; in English: "Method and Arrangement for Changing Material Properties or Producing Strongly Expansion-Capable Materials"), German patent No. 905 847, Class 12g, Group 101 (filed 1943, published 1954; DE0905847C). Henry Stevens offers these comments:
According to Karl Nowak's 1954 German patent, patent number 905847, Class 12g, Group 101, by a process of extreme cooling coupled with pressure, the basic atomic structure of material can be changed. It is reduced, narrowed and confined in terms of atomic, crystalline structure. . . . Admittedly, at first the idea of compression cooling as a means to change atomic structure sounds a lot like junk science.
At this point Dr. Gordon Freeman weighs in with some remarkable scientific insight. According to Dr. Freeman, an elements [sic] behavior is determined by its arrangement of electrons orbiting the nucleus of that elemental atom. Seven electron shells are present around the core. Under high pressure electrons are shifted to lower orbits and new orbital overlappings are formed. This changes the whole behavior of the element concerning color, boiling temperature, density, and so forth.
The trick seems to be to cool and compress the material and then gradually release the pressure. The material will retain its new properties at least for several months. (Hitler's Suppressed and Still-Secret Weapons, Henry Stevens (2007), p. 127)
Such a claim is both hard to believe and hard to ignore. Certainly there are strong suggestions from several sources that we still have a lot to learn about interatomic distance and related effects. (See also "Scientists Fabricate Room Temperature Superconducting Material" http://www.nextenergynews.com/news1/next-energy-news3.19a.html )
Here is another little tidbit to consider. Cryogenic processing of ferrous metals is used to transform austenite into martensite even after the usual heat tempering treatment:
Factually, if you were to examine mass heat treated items like many available drill bits, saw blades, etc., you would find many that show only 50% to 60% transformation. This is the area in which cryogenics can really strut its stuff. The reason is that cryogenics is the only method known that can complete the transformation to 99.8% to 100% martensite, or come at all close to it. Martensite, as you recall is the fine hardened grain structure that you strive for in the heat treat process. . . .
Deep freezing of metals has been around for many years. It has been in use for at least 30 to 35 years to stress relieve cast iron gears and weldments. This is the reason you will find dry ice at a welding supply store. Welders discovered many years ago that they could rely on dry ice to stress relieve welds. . . . The Chinese . . . are now selling end mills that have been cryogenically frozen.
Cryogenic processing has also been used to reclaim "overcooked steel". This kind of steel has a high percentage of "retained austenite", which greatly reduces hardenability. Its magnetic properties have been so severely altered, a magnetic chuck might not be able to hold it in position for machining. This kind of steel may actually shrink during heat treatment. The internal structure of this metal is so messed up that reheat treating the part usually does not remedy the problem. However, it can usually be completely restored by cryogenic processing. (Heat Treatment: Selection and Application of Tool Steels, William E. Bryson, 2009, pp. 107, 114, 170-171)
The point here is that even in a metal soaked to liquid nitrogen temperatures, there are still plenty of things happening. The metal may look inert and inactive, but it is not.
These are examples of instances where space itself seems to have 'shrunk', or at least is not behaving in the way we expect it to. Certainly it does not behave in the manner implied by a simple Euclidean metric.
This is totally off the subject but I just could not resist:
"Under the influence of the magnetic field, the number of internal defects decreases as a result of their self-elimination under the action of the Lorentz force. These changes lead to a reduction in the barriers to dislocation movement and thereby increase the material plasticity." ("Hyperplasticity effect under magnetic pulse straightening of dual phase steel", AP Falaleev, VV Meshkov, and A Shymchenko, IOP Conf. Series: Materials Science and Engineering 153 (2016) 012014, doi:10.1088/1757-899X/153/1/012014) This is something like annealing, but it does not use heat and is much faster.
This probably reminds us of the Special Relativity "paradoxes" where one dimension of an object seems to shrink in the dimension of its high speed motion. However, this seems to be only a reference system effect, not an actual physical effect (one that would result in high densities, high temperatures, etc). As noted above (Sorli), there are strong doubts that the Universe actually uses this particular metric (the so-called "Minkowski space"; and because of the ict term, it is obviously non-Euclidean, in case anyone is wondering).
Examples of Special Relativity effects are usually presented as something with high speed motion as measured from a gravitational reference system (Earth). In this situation there are "relativistic effects" that have to be taken into account. But one thing that I have never seen discussed in the literature is the "distance" metric for two spacecraft both moving at speeds comparable to light (relative to Earth). In view of the increasingly non-local character of motion at high speeds, what is the "relative motion" or "relative distance" applicable to just the spacecraft themselves? To me, this is the essence of the claim of Relativity that "all motion is relative". But that claim only seems to take into account the characteristics of space (and time) as seen from an Earthlike (gravitational) reference system. In other words, physicists would have trouble with this question: "Two spaceships with identical, initially synchronized clocks are moving at 50% of the speed of light. Which spacecraft has the slow clock?" (See also "Herbert Dingle Was Correct! Part VIII: The Twins Paradox And Dingle's Apostasy From Orthodox Relativity" by Harry H. Ricker III, http://www.gsjournal.net/old/science/ricker30.pdf and http://en.wikipedia.org/wiki/Herbert_Dingle#Controversies )
Here is another one that probably appears in the literature somewhere: Two identical twins of the same height walk away from each other. Each sees the other as "shrinking in the distance". Which twin does the real shrinking? Is this an actual effect (a change in physical dimensions)? Or is it just a reference system effect (a matter of appearances only)? What happens when the twins come back together?
Special Relativity seems to have limited applicability (as the name implies). Rotational motion, for instance, is generally regarded as absolute. If I spin around in my chair once per second, I am rotating relative to the rest of the Universe. Or is the Universe violently whipping around me? Mathematically both pictures are equivalent, but only one is physically realizable. I think it is clear that rotational motion is indeed absolute. (See the Sagnac effect.) Special Relativity cannot apply. Says Feynman: "There is no 'relativity of rotation.' A rotating system is not an inertial frame, and the laws of physics are different. We must be sure to use equations of electromagnetism with respect to inertial coordinate systems." (The Feynman Lectures on Physics, Richard P. Feynman (1964), Vol. 2, p. 14-7)
But . . . what do you do with linear acceleration? Linear acceleration can be detected absolutely too. Does absolute acceleration result in absolute motion or only relative motion? (See Sagnac effect in translational motion)
Another problem is implied by the compensation given to clocks in the Global Positioning System. The net effect of orbit (gravitational blueshift outweighing the kinematic slowing) is that clocks in orbit run fast compared to a clock on the ground. Hence, the orbital clocks are precalibrated to run slightly slow while they are on the ground so that they will manifest the same clock rate as the ground clock when in orbit. It is clear that this compensation cannot be symmetric. That is, the same compensation cannot be applied to either set of clocks. That means that the motion is not "purely relative" as claimed by Special Relativity.
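For concreteness, the two standard corrections can be sketched with rounded constants and the usual weak-field approximations (the orbital radius and constants here are illustrative assumptions):

```python
# Sketch of the two standard relativistic corrections for a GPS-like orbit.
# Weak-field approximations; constants rounded for illustration.
import math

GM = 3.986e14        # Earth's gravitational parameter, m^3/sec^2
c = 2.998e8          # speed of light, m/sec
R_EARTH = 6.371e6    # ground clock radius, m
R_ORBIT = 2.656e7    # GPS-like orbital radius (~20,200 km altitude), m
DAY = 86400.0        # seconds per day

# Gravitational shift: higher clock runs fast relative to the ground clock.
grav = (GM / c**2) * (1.0 / R_EARTH - 1.0 / R_ORBIT)

# Kinematic shift: orbital speed makes the moving clock run slow.
v = math.sqrt(GM / R_ORBIT)          # circular orbital speed
kin = -v**2 / (2.0 * c**2)

print(f"gravitational: {grav * DAY * 1e6:+.1f} microsec/day")
print(f"kinematic:     {kin * DAY * 1e6:+.1f} microsec/day")
print(f"net:           {(grav + kin) * DAY * 1e6:+.1f} microsec/day")
```

The net figure comes out to roughly +38 microseconds per day, which is the amount the factory frequency offset is chosen to cancel; the two sets of clocks plainly cannot receive the same compensation.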
You have probably read through the Einstein train example in the textbooks. There are two lightning strikes, one at either end of the train. Both are simultaneous to an observer on the ground at the midpoint next to the train. But they are not simultaneous to the observer riding at the middle of the moving train (at least that is what we think, even though no one actually asks). The math is simple. The logic is self-consistent. Some would call the whole thing "beautiful and elegant" (despite the messy physics). The train example seems intuitive, ironclad, and irrefutable. But does nature really work this way? We have to be careful. Remember quantum mechanics? It is illogical, non-intuitive, even weirdly perverse, until you take into account the ("non-local") effects of three-dimensional time. Then it becomes substantially more intuitive. Photons are very quantum mechanical, even those used by Einstein's train. If you add in the effect of temporal motion to the train problem, you will preserve simultaneity of distant events. But if you do that, you are effectively working the problem in "motional dimensions" instead of 4-dimensional space-time, and again Special Relativity does not apply.
Keep in mind here that Special Relativity and General Relativity are local theories. They artificially (but usefully) map temporal motion into a spatial reference system:
In 1905 Albert Einstein's Special Theory of Relativity postulated that no material or energy can travel faster than the speed of light, and Einstein thereby sought to reformulate physical laws in a way which obeyed the principle of locality. He later succeeded in producing an alternative theory of gravitation, General Relativity, which obeys the principle of locality. ("Principle of locality", http://en.wikipedia.org/wiki/Principle_of_locality )
In General Relativity, the "locality" arises by treating space as a connecting medium, rather than as something that separates. It is much like the Faraday/Maxwell field concept where the field was "action through a medium from one portion to the contiguous portion". The idea of being "spatially connected" is virtually the definition of "locality".
The Universe is both local and non-local in its fundamental nature. It is a mistake to try (in general) to map non-local phenomena into a local reference system. This realization was not around in 1905. The only well-known non-local phenomena back then were the action-at-a-distance "fields" of gravitation, magnetism, and electrostatics. The field concept was an attempt to make their non-local behavior more like local behavior, and thus more compatible with human intuition. Arguably, the first "hard-core" contact with non-locality came with Quantum Mechanics in the 1920s. Later came the EPR "paradox" (1935), at Einstein's own hand; he again argued for a "local" interpretation. The Aharonov–Bohm effect emerged in 1949-1959. Then came the Chalmers W. Sherwin and Robert D. Rawcliffe experiment in 1960, Bell's inequality theorem in 1964, and the experiments of John Clauser and Stuart Freedman (1972) and Alain Aspect (1981). These experiments (and others) demonstrated non-local behaviors at a fundamental level. Out-of-scope application of Relativity to non-local phenomena at the insistence (tyranny?) of the scientific community has resulted in a lot of misunderstandings (and animosity) and has held back the advancement of physics for over 100 years. Scientists still insist that the speeds of gravitational, magnetic, and electric fields are limited to the speed of light. (A major misconception: see the speed of gravity and the speed of electric fields )
The so-called Twin Paradox has a similar standing. This is where one twin stays on Earth and the other goes away in a rocket ship at some significant fraction of the speed of light. Upon his return, he has aged less than his twin on Earth. But this is not the official paradox; this is just a simple prediction of Special Relativity. The paradox is that either twin can be viewed as being younger than the other, because the motion can only be "purely relative". The fact that the cause of one type of motion can be distinguished from the other is irrelevant to the paradox. That physicists so readily accept this paradox as science (which itself has not been demonstrated) says some really awful things about our science institutions. (My "take" on this is that the twin moving at high speed, ages relative to a "flow-of-space clock", with time not progressing, and the twin on Earth, ages relative to a "flow-of-time clock" with space not progressing. Both age at the same rate (but on different types of clocks), and have the same final age on Earth. There is no paradox if the ages are referred to the progression of an "orthogonal sum" clock that incorporates both time and space progression effects. The effects of both local and non-local behaviors need to be taken into account.)
Another hint that motion is not "purely relative" is implied by Faraday's rule of induction. Says Feynman:
"So the "flux rule"—that the emf in a circuit is equal to the rate of change of the magnetic flux through the circuit—applies whether the flux changes because the field changes or because the circuit moves (or both). The two possibilities—"circuit moves" or "field changes"—are not distinguished in the statement of the rule. Yet in our explanation of the rule we have used two completely distinct laws for the two cases— v x B for "circuit moves" and del x E = -¶B/¶t for "field changes".
We know of no other place in physics where such a simple and accurate general principle requires for its real understanding an analysis in terms of two different phenomena. Usually such a beautiful generalization is found to stem from a single deep underlying principle. Nevertheless, in this case there does not appear to be any such profound implication. We have to understand the "rule" as the combined effects of two quite separate phenomena."—Richard P. Feynman, The Feynman Lectures on Physics, Vol. II, p. 17-2
Note the asymmetry in the behavior. This seems to imply some sort of absolute motion. We probably are indeed missing a "single deep underlying principle" with a "profound implication". See also http://en.wikipedia.org/wiki/Faraday_Paradox
Similar experiments with a charged disk and a B field detector also give analogous paradoxical results. See ../qm/RadiationCircularChargeMotion.html#FeynmanFaradayFluxRuleParadox and SpeedMagneticField. Einstein's Special and General Relativity theories are, as the names suggest, theories of relative motion. You cannot expect such theories to deliver deep insights about absolute motion, because that kind of motion is simply out of scope.
Another "paradox" is becoming evident in sunspot observations:"This first solar image from NuSTAR demonstrates that the telescope can in fact gather data about sun. And it gives insight into questions about the remarkably high temperatures that are found above sunspots—cool, dark patches on the sun. Future images will provide even better data as the sun winds down in its solar cycle." http://phys.org/news/2014-12-sun-sizzles-high-energy-x-rays.html
A sunspot is roughly 4000 K, versus 5800 K for the photosphere. High temperatures, X-rays, and magnetic fields suggest that sunspot activity has a non-local character. Its relationship with our reference system would become inverted: hot stuff will appear cooler. This could mean that sunspots could be far hotter than we might imagine. The gas could be fully ionized, and being therefore unable to absorb radiation, would become transparent. They should also expand (not contract) with time as they cool down. ( Apparently, something similar can happen on a galactic scale: "Mystery Galactic Gamma-ray 'Bubbles' Defy Explanation", Ian O'Neill (Aug 1, 2014) http://news.discovery.com/space/galaxies/mystery-galactic-gamma-ray-bubbles-still-defy-explanation-140801.htm Hypothetically, the hot stuff would appear as microwaves, but when the observational situation re-inverts back to "local", the microwaves then look like gamma rays. If the inversion point is the Rydberg frequency, the math is roughly (1/(10^9/10^16))(10^16) = 10^23 Hertz. The jet in the M87 galaxy (shown above) may likewise be an example of re-localization behavior. As gravitation reduces the speed of the ejected material, it becomes more "local" and begins to expand as a spatial object. It is astonishing to realize that M87 is probably ejecting galaxies in these jets (as per Dr. Halton Arp), and that our own Milky Way could have been one of them!)
Effects of the reference system are not limited to Special Relativity. When you do physics experiments, you get two effects that become combined. One effect is the "pure physics" part, and the other is reference system effects that are combined in with the results, often in insidious, covert, almost perverse ways. This is true even of the commonly used Euclidean metric.
A classical example is the one in which an object is dropped high from the mast of a moving boat. The object will fall straight down to the bottom of the mast, at least as seen by people on the boat. But a person on land will see a different picture. The object has both forward motion due to the boat and downward motion due to gravity. When viewed against the background of a mountain, the object actually falls on a parabolic path, like a bomb dropped from an airplane. Of course, the observer must either have very keen observation skills, or some good photographic equipment to actually see this. Physicists understand this one, and can easily sort out the two, even though people will still ask, "But what did it really do? Did it fall down straight or curved? It cannot be both . . ." But physics will only tell us what we see from what viewpoint. If we could see things from "God's perspective", we would know what it "really" does. Alas, most of us think we are still human.
The trouble really starts when the reference system effects are not understood. Astronomers realize that the Universe is expanding. Far galaxies have a recession velocity, implied by the observed redshifts of spectral lines. Most galaxies are moving away from us in all directions at various speeds. But unless you believe that we occupy a privileged observational position, our galaxy is also participating in the same expansion. Some of that redshift belongs to us. But astronomers take our position as "stationary" and assign the full redshift (or velocity) value to the observed galaxy. The galaxy is assigned a velocity that it does not really have, and our position is regarded as having zero velocity, something that it does not really have either (and I am only referring to the recession, not all the other known motions).
Another related mess concerns what I call the gravipause: "The belief from decades ago was that the (cosmological) redshift was caused by the Big Bang that blew everything apart, resulting in the observational redshift. But the Cosmological Principle points out a problem with that. If everything is supposed to look statistically the same from all viewpoints, then observers in other galaxies must be seeing the same kind of redshift behavior. In other words the redshift must result from a CENTERLESS expansion of space, not from an explosion.
The view that is gaining currency now is that space itself expands or is "emergent" (new spatial units are being generated by some unknown process). It is like time, in that it progresses. But it progresses in three dimensions, and we call that an expansion.
Opposing the expansion is gravitation, which is centered on an object (planet, star, galaxy). We interpret the resulting motions in terms of forces, the cosmological expansion force, which is not affected by distance, and the gravitational force, which has a 1/d^2 dependence. Because of this, there is necessarily a distance where the forces are at equilibrium, a distance I call the "gravipause" (which, in this definition, involves only ONE body, and space itself). For stars it is apparently a few light years, and for galaxies it is apparently a few million light years. Inside this distance, objects come together, and outside this distance, objects move apart (and faster the farther apart, because of the lessening influence of gravitation, and because there are more lengthening units of space in between, like links in a chain).
Astronomers surely understand these things. But they don't seem to recognize the implications. What happens to the Big Bang theory if the redshift did not come from an explosion? They know about Einstein's cosmological constant and gravitational force, but they do not recognize that the two imply a gravipause. Also, the calculated "Hubble constant" would be dependent on the location from which the observations are made (a large versus small galaxy), and they don't recognize that either. And why are stars separated by light years, but not by light weeks? Why don't globular clusters collapse? And so forth." http://intjforum.com/showthread.php?t=69831 (Related: "Lemaître’s Limit", Ian Steer, http://arxiv.org/ftp/arxiv/papers/1212/1212.6566.pdf ; http://en.wikipedia.org/wiki/Hill_sphere )
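The balance the quoted passage describes can be put into rough numbers. A minimal sketch, assuming the "expansion force" per unit mass can be modeled as H0^2 × r (one common heuristic, not necessarily the formulation intended above) and standard constants:

```python
# Distance at which an assumed expansion "force" per unit mass, H0^2 * r,
# balances Newtonian gravity G*M/r^2.  The force model and constants are
# illustrative assumptions.
G = 6.674e-11        # m^3 kg^-1 sec^-2
H0 = 2.27e-18        # Hubble parameter in 1/sec (~70 km/sec/Mpc)
LY = 9.461e15        # metres per light year
M_SUN = 1.989e30     # kg

def gravipause_radius(mass_kg):
    """Solve G*M/r^2 = H0^2 * r, i.e. r = (G*M / H0^2)^(1/3)."""
    return (G * mass_kg / H0**2) ** (1.0 / 3.0)

r_star = gravipause_radius(M_SUN) / LY           # one solar mass
r_gal = gravipause_radius(1e12 * M_SUN) / LY     # Milky Way-sized mass
print(f"star:   {r_star:.0f} light years")
print(f"galaxy: {r_gal / 1e6:.1f} million light years")
```

With these assumptions a galaxy-sized mass gives a balance radius of a few million light years, matching the passage; a single solar mass comes out at a few hundred light years, so the single-star figure depends strongly on how the opposing expansion is modeled.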
Here we see an applicability problem, even when the common metric for spatial distance is used. Gravitation seems to have three regions. Gravitational force near a star starts out strong but declines rapidly with distance (the 1/d^2 region). At the gravipause, gravitation is still present, but falls off less rapidly (the 1/d^1 region, or "Hubble space" as it could be called). Beyond that, quantization causes the gravitation to disappear completely (the 1/d^0 region, where it does not decrease at all, because there isn't any). At this juncture, the only effective "force" involved is the expansion of space, which hypothetically is proceeding at the speed of light. Hence, all very distant galaxies should be receding at the speed of light (there is no gravitational effect, applicable to our observational position, that would decrease the observed speed). This would give a redshift of z = 1.
Additionally, Einstein and Lemaître recognized that, according to theory, the structures (galaxies) in the Universe could not be inherently stable. Gravitation would eventually pull everything together. If there were some kind of opposing force, then there would be yet another kind of instability that would cause these structures to fly apart. Yet observations of globular clusters and galaxies imply that these structures are definitely stable. They are very old, yet do not expand and do not collapse. Neither instability is being observed. And buried in this is yet another problem. The outer stars on the rim of a galaxy are moving so rapidly that they should be flung away from the galaxy, like water off a spinning bicycle wheel. Yet that is not the case. Astronomers invent "dark matter" to glue galactic stars together into a collection that does not fly apart. The amount of dark matter in the Universe is estimated to be about five times the mass that is visible. With precisely the right distribution, the galaxies can be made stable again in this theoretical picture.
But where is the observational evidence for dark matter? If it is there, what sort of conceptual mutilations are required for the stability of globular clusters? The problem with them is not that they fly apart, but that they do not collapse. Many of these clusters show no appreciable rotation—no "centrifugal force" to keep the stars separated. Adding more matter—dark matter by a factor of five—should cause them to collapse rapidly. One is left with the impression that modern astronomers are studying their own mythical creations and wasting astronomical amounts of taxpayer monies.
This comment from Wikipedia seems appropriate here:
The accelerating universe is the observation that the universe appears to be expanding at an increasing rate. In formal terms, this means that the cosmic scale factor a(t) has a positive second derivative, so that the velocity at which a distant galaxy is receding from us should be continually increasing with time. The first suggestion of an accelerating universe from observed data came in 1992, by Paál et al. In 1998, observations of type Ia supernovae also suggested that the expansion of the universe has been accelerating since around redshift of z~0.5. (http://en.wikipedia.org/wiki/Accelerating_expansion_of_the_cosmos )
Note the phrase "since around redshift of z~0.5." I would expect this to be around redshift of z = 1 instead. But remember, half of that redshift is due to our own galactic recession motion (which is zero from our standpoint), and half of it is due to the recession affecting the observed galaxy. Have the astronomers unwittingly included a reference system effect here? What is the true physical redshift? I think this is a good question.
A more fundamental metric
We presently use spatial displacement and time progression displacement as our reference system. It is based on differences of location, not on true physical units of space and time, and it creates an arbitrary zero datum. The construct is useful, but not fundamental. Of course, physicists will complain that there isn't any such thing as a "physical unit" of space or time. But that is ok. As was quoted above, their comrades are trying to get rid of the concept of space and time as being fundamental anyway. They are claiming that motion is primary, and that space and time are derived concepts. In other words, we really need a "motional metric" and need to work some of our physics problems in "motional dimensions", not space or time displacement dimensions.
This notion does indeed have a basis in fundamental physical equations. We are all familiar with E = mc^2. Note that there is no separate time term. E = cB (in electromagnetics) is another one. Again, note that there is no separate time term. And Newton's gravitation: F = Gm1m2/r^2. No time term there either. Time shows up only when connected with space, as in c, the speed of light. Its appearance in Newton's gravitation is concealed as a "motional potential" (expressed as force), and motion is, again, a relationship between space and time. Even in quantum mechanics, time is merely a parameter. The implication is that space and time are not truly fundamental, and that motion should be a more useful and fundamental concept. But if motion is the fundamental concept, then both space/time ("velocity") and time/space ("inverse velocity") are legitimate concepts. The former is "local" and the latter is "non-local". (The implications of this are mind-boggling.)
There will be resistance to this kind of thinking, the likes of which have occurred before. Remember our troubles with numbers? First, there were the "counting integers", which made perfect sense. Then someone came up with the concept of a "zero"—a number to represent nothing (unknown to the Romans). Then negative numbers came along (how could you have a number that was less than nothing?!). Then along came Pythagoras and "irrational numbers", whose geometric representation could be constructed with an ordinary compass and straight-edge (scandalous!). Still later, "imaginary numbers" came on the scene. At first, this baffled even the most brilliant mathematicians. But the need for them arose naturally in fairly ordinary mathematics, and the concept is now well accepted and very useful. I think the same will happen with "inverse velocity" (the term is actually self-contradictory because there is no trajectory and the effect is instantaneous).
Fundamentally, space and time seem to be progressing. They are not static. They are "emergent" as some physicists are claiming. This is no surprise, really, if motion is the fundamental entity for the physical universe. Space and time could be identical twins that are always linked together in a ratio called motion. This requires them to progress, for example, as 1/1, 2/2, 3/3 etc. The individual units are always changing, progressing, but the ratio remains constant. The ratio is "stationary" even though it has "moving parts", progressing at the speed of light (we will suppose). Yeegads! The "rest frame" is not resting! The speed of light thus becomes the new "zero" (actually 1/1), the datum for no activity. This realization will allow physicists to develop a new metric, one that actually applies fundamentally to the physical universe. (See also UnitEnergy)
This would also answer common questions that appear in the popular media. Example: "Where Is The Center of the Universe?" by Rose Pastore (Popular Science 4-20-2012, http://www.popsci.com/technology/article/2012-04/fyi-where-center-universe )
"First, it’s important to know that the big bang wasn’t an explosion of matter into empty space--it was the rapid expansion of space itself. This means that every single point in the universe appears to be at the center. . . . In the beginning, the universe was a single point. Where was that? It was, and still is, everywhere."
This anywhere/everywhere location of a "center" clearly has a non-local character. Said differently, it is simply a centerless expansion. And following that line of thought leads to the conclusion that it is also edgeless. The edge must be everywhere too. (perhaps the diffuse microwave background, and the diffuse gamma ray background, and the diffuse X-ray background, and the diffuse Far UltraViolet background, and the diffuse cosmic ray background, are trying to tell us something*). See also “The Mystery of the Cosmic Diffuse Ultraviolet Background Radiation”, http://arxiv.org/abs/1404.5714 ; http://phys.org/news/2015-08-cosmic-mystery-deepens-discovery-ultra-high-energy.html#nRlv ;
*"What we know as the universe could actually be just one of a pair that exists in the same space but at different times." (Science News, July 25, 2015, p. 17 "Times Arrow". My thoughts: There may indeed be two "parts" or "sectors" to our Universe. One has matter that gravitates in three-dimensional space and is localized in space. The other has "inverse matter" that gravitates in three-dimensional time and is "non-local" (spatially diffuse) from our standpoint. They operate by the same physical laws and would be statistically indistinguishable to an observer within each system. )
The use of "motional dimensions" as a fundamental unit implies that where (or when) there is no (fundamental) motion, there is no "physical" universe. There is no "where" there, and no "there" there either. If there is no "box", there is no inside or outside either. (The same arguments apply to time.)
There is no reason a spatial viewpoint has to be preferred in the ultimate reference system. Motion can be in space or in time (s/t or t/s) See article. If we could view things from the standpoint of the speed of light (in three dimensions), space would not be expanding. The progression of time cancels the progression (expansion) of space, given the supposition that they are always paired into a ratio. Photons would be stationary. They go no-where and no-when. Mass would be what has actual motion (gravitation) relative to the 1/1 motional datum "fabric" or "ether". Gravitation would make mass move "towards" other masses and those masses would be colliding with the stationary photons in the process. (Photons "collide" or "separate" only when space or time locations are considered individually; this is the reverse of the EPR effect; See also Variations in the speed of light )
The null result of the Michelson-Morley experiment needs to incorporate this insight. This experiment intended to measure the effect of the Earth's relative motion through the ether, the "Aether Wind". But the fundamental motions of both the Earth and the ether are non-directional (i.e., scalar, like the progression of time). They cannot be added vectorially. The design of the experiment did not take this into account. The "ether wind" could not be detected, and this was taken as evidence for the non-existence of the ether itself—a conclusion that is not really justified. There may indeed still be an "ether" (a specific structure of space and time), but it is not the old, mechanical, "wavable medium" type of the 1800s, nor is it "empty space". The "new ether" must be a dynamic one (progressing and non-directional), something quite different from the static ether of the nineteenth century. (See also Gravitational motion has multiple dimensions )
All this is exactly backwards to the way we think the Universe "obviously" works. Physicists and astronomers seem to have little trouble believing that "space exploded" and caused the Universe to come into existence. But the views presented here will seem even weirder, and so don't expect classes in hyperspace navigation to be offered at your local university anytime soon :-).
"Universe boundary in Einstein 1931 same as Lemaître 1927" ( http://adsabs.harvard.edu/abs/2015AAS...22521504S ) a snippet: ". . . universe in balance, changing but always steady, eternal but ever-reborn, is exactly what we observe.")
"Einstein’s aborted attempt at a dynamic steady-state universe", http://arxiv.org/ftp/arxiv/papers/1402/1402.4099.pdf
Proof by Paradox method
a future topic?
Why is gravitation an accelerated motion? What powers gravity?
Acceleration normally causes an increase in speed and change of position. When you accelerate your car on the freeway, you are changing your position and your speed. An engine is required to power this acceleration. If gravitation is equivalent to accelerated motion, then what powers the gravitational engine? And when I stand on Earth, I am being accelerated by it. So where am I going? What is my current velocity after many years of acceleration at 9.8 m/sec^2? How far have I gone during my lifetime? Why am I still in the same old solar system that I was in years ago?
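The arithmetic behind these rhetorical questions is easy to run. A naive Newtonian tally (ignoring relativity entirely, purely to show the absurd totals the questions point at):

```python
# If standing on Earth really meant accumulating velocity at g, what would
# ~70 years of it produce?  Naive Newtonian arithmetic, for illustration only.
g = 9.8                        # m/sec^2
c = 2.998e8                    # m/sec
LY = 9.461e15                  # metres per light year
t = 70 * 365.25 * 86400        # seconds in roughly 70 years

v = g * t                      # "accumulated" velocity
d = 0.5 * g * t**2             # "accumulated" distance

print(f"v = {v:.2e} m/sec  (about {v / c:.0f} times the speed of light)")
print(f"d = {d:.2e} m      (about {d / LY:.0f} light years)")
```

The naive answer is dozens of times the speed of light and thousands of light years of travel, which is exactly the point: nothing of the sort is observed, so gravitational "acceleration" cannot be acceleration in the ordinary freeway sense.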
To answer this, we need to know what other kinds of motion can cause acceleration. Acceleration can cause a change of speed or a change of direction (or both). In the car we think of acceleration as changing our speed. But we would also be accelerated if our direction were changing (through use of the steering wheel) even though our miles-per-hour were constant.
Let's suppose a mysterious thing happened to my house. While I slept, the entire house began quietly rotating. I wake up the next morning and pour myself some coffee. In the kitchen the coffee goes straight into the cup, just as I would expect. Then I wander into the living room. I pour myself another cup, but the coffee stream goes somewhat sideways instead of straight down. I begin thinking, "The house is settling . . . must be on the edge of a sink hole." But I pour another cup in the kitchen and it again goes straight down. The stream only deviates when I get near the outer walls in other parts of the house. It is as though there is a kind of "bent gravity" or "wall magnetism" or something. I have no idea what causes it. It is just a mysterious force that was not there yesterday. I know forces result in acceleration. So I start asking myself the same questions: "Where am I going? What is my current velocity? . . ."
To an observer outside the house, there is no such mysterious force. The effect is caused by rotational motion. A physicist describes it this way:
"Another example of pseudo force is what is often called "centrifugal force." An observer in a rotating coordinate system, e.g. in a rotating box, will find mysterious forces, not accounted for by any known origin of force, throwing things outward toward the walls. These forces are due merely to the fact that the observer does not have Newton's coordinate system . . ." (The Feynman Lectures on Physics, Vol. I, p. 12-11)
Hmmm . . . That reminds us of the Einstein elevator. We gave the elevator a linear acceleration of 9.8 m/sec^2 by powering it with a small rocket engine, and the result was indistinguishable from normal gravity. But here we see an alternative. We could put the Einstein elevator in a centrifuge and whirl it around with increasing speed until the occupant experiences the same acceleration. But there is an obvious difference. After the centrifuge gets up to speed, we can turn off the power. The occupant will still experience acceleration even though nothing is powering it. (In a normal elevator, the acceleration would stop immediately, though the speed would continue at its last value if deceleration due to Earth's gravity is ignored.) So here we have the equivalent of "gravity", but there is nothing that powers it. The effect results from uniform, unchanging motion. But it has to be motion of a special sort: rotational motion.
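As a side calculation, the spin rate needed for the centrifuge version is modest. A small sketch (the radii are arbitrary assumptions for illustration):

```python
# Spin rate for a centrifuge "Einstein elevator" to produce 1 g at radius r.
# From a = omega^2 * r, we get omega = sqrt(g / r).
import math

g = 9.8                                    # m/sec^2
for r in (2.0, 10.0, 100.0):               # arbitrary example radii, metres
    omega = math.sqrt(g / r)               # angular speed, rad/sec
    rpm = omega * 60.0 / (2.0 * math.pi)   # revolutions per minute
    print(f"r = {r:6.1f} m  ->  {rpm:5.1f} rpm")
```

Once up to speed, nothing in this table requires power to maintain; the 1 g acceleration comes from uniform rotation alone.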
So now we must ask, Could gravitation be a pseudo force? Physicists have asked the same question:
"One very important feature of pseudo forces is that they are always proportional to the masses; the same is true of gravity. The possibility exists, therefore, that gravity is itself a pseudo force. Is it not possible that perhaps gravitation is due simply to the fact that we do not have the right coordinate system?" (The Feynman Lectures on Physics, Vol. I, p. 12-11)
In their ultimate character, we could say:
rotational motion is a uniform change of direction with no change of position
temporal motion is a change of position with no inherent direction
In other words it could be that this kind of temporal motion (i.e., gravitational motion) is a completely uniform unaccelerated motion when seen from a more complete, true-to-all-facts reference system. It needs nothing to power it. But because we are in a spatial reference system, we experience it as accelerated motion. This is just an idea of course, and needs further investigation and exposition. (Update: See Beyond Einstein: non-local Physics .)
"And what is hidden he brings out to the light" —Job 28:11
The Kinematic and Gravitational Time Shifts
The existence of kinematic and gravitational time shifts was confirmed by the Hafele and Keating experiment in 1971. (Kinematics pertains to times, lengths, speeds, etc. Essentially, it is concerned only with the space and time coordinates, and has nothing to do with masses and gravitation.) The Georgia State University physics/astronomy web site offers us this summary:
Hafele and Keating Experiment
"During October, 1971, four cesium atomic beam clocks were flown on regularly scheduled commercial jet flights around the world twice, once eastward and once westward, to test Einstein's theory of relativity with macroscopic clocks. From the actual flight paths of each trip, the theory predicted that the flyng clocks, compared with reference clocks at the U.S. Naval Observatory, should have lost 40+/-23 nanoseconds during the eastward trip and should have gained 275+/-21 nanoseconds during the westward trip ... Relative to the atomic time scale of the U.S. Naval Observatory, the flying clocks lost 59+/-10 nanoseconds during the eastward trip and gained 273+/-7 nanosecond during the westward trip, where the errors are the corresponding standard deviations. These results provide an unambiguous empirical resolution of the famous clock "paradox" with macroscopic clocks." J.C. Hafele and R. E. Keating, Science 177, 166 (1972) See http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/airtim.html
Around-the-World Atomic Clocks
In October 1971, Hafele and Keating flew cesium beam atomic clocks around the world twice on regularly scheduled commercial airline flights, once to the East and once to the West. In this experiment, both gravitational time dilation and kinematic time dilation are significant - and are in fact of comparable magnitude. Their predicted and measured time dilation effects were as follows:
Predicted and observed time differences, in nanoseconds:

                  Eastward     Westward
Gravitational     144 ± 14     179 ± 18
Kinematic        -184 ± 18      96 ± 10
Net effect        -40 ± 23     275 ± 21
Observed          -59 ± 10     273 ± 7
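The net predictions are simply the sums of the gravitational and kinematic contributions, with the quoted uncertainties combining in quadrature. A quick Python check of that arithmetic, using the values from the table above:

```python
import math

# Predicted contributions in nanoseconds, from the table above:
# (value, uncertainty) for each trip direction.
grav = {"east": (144, 14), "west": (179, 18)}
kin = {"east": (-184, 18), "west": (96, 10)}

for trip in ("east", "west"):
    g, dg = grav[trip]
    k, dk = kin[trip]
    net = g + k                 # the contributions simply add
    err = math.hypot(dg, dk)    # independent errors add in quadrature
    print(f"{trip}ward net effect: {net} ± {err:.0f} ns")
```

This reproduces the quoted net predictions of -40 ± 23 ns (eastward) and 275 ± 21 ns (westward).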
The kinematic time shift should be understandable in view of what is presented above about the Relativistic Correction Factor, gamma. But we seem to be left with the question of "Why would a clock slow down when it is immersed in a gravitational field?" Most people's reaction is that if a clock acts that way, then it is not a very good clock. Or if it is a good clock, then it must be measuring something, but it is not measuring time. Although this behavior looks rather enigmatic, an explanation can be offered that is simple and intuitive.
The time shift formula is:
(TA − TE)/TE = gh/c²

where TA is the elapsed time on the clock at altitude, TE is the elapsed time on the clock on Earth, g is the acceleration of gravity, h is the height difference in meters, and c is the speed of light. (Note the similarity to the gravitational redshift/blueshift formula: v/c = gh/c².)
Let's mentally estimate how small of an effect we are looking for on the right side of the equation. Taking g as 9.8 m/sec², h as one meter, and c as 3 × 10⁸ m/sec, we can readily see the ratio is about 1 part in 10¹⁶—an extremely small effect.
Now plug in some numbers from the Hafele and Keating experiment on the left side of the equation:
(179 × 10⁻⁹ sec) / [(48.6 hours)(3600 sec/hour)]

That gives 1.02 × 10⁻¹² for a height difference of 9400 meters. Dividing out the 9400 gives 1.085 × 10⁻¹⁶, which agrees well with our mental estimate for a one meter height difference.
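The arithmetic above is easy to verify numerically. A short Python sketch, using the same rounded constants (g = 9.8 m/sec², h = 9400 m, c = 3 × 10⁸ m/sec):

```python
g = 9.8       # m/sec^2, acceleration of gravity
h = 9400.0    # m, approximate flight altitude in the experiment
c = 3.0e8     # m/sec, speed of light

# Right side of the time shift formula: predicted fractional shift
predicted = g * h / c**2

# Left side: measured 179 ns gravitational shift over 48.6 hours of flight
measured = 179e-9 / (48.6 * 3600)

print(f"predicted gh/c^2   : {predicted:.3e}")
print(f"measured ratio     : {measured:.3e}")
print(f"per meter of height: {measured / h:.3e}")
```

Both sides agree to about 0.1%, and the per-meter value matches the 1-part-in-10¹⁶ mental estimate.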
Consider the conventional explanation for this effect from The Feynman Lectures on Physics (Vol. 2, Section 42-6, "Speed of clocks in a gravitational field"):
"Suppose we put a clock at the "head" of the rocket ship—that is, at the "front" end—and we put another identical clock at the "tail" . . . . If we compare these two clocks when the ship is accelerating, the clock at the head seems to run fast relative to the one at the tail." (p. 42-9)
The critical thing to understand here is where the clocks are located relative to the motion of the rocket ship. In my elevator example, they would have to be mounted on the ceiling and on the floor, not on the side walls. Understanding the effect is straightforward and is exactly like the explanation for gravitational redshift/blueshift. Suppose the upper clock is used to control a device that emits pulses of light. The light pulses are emitted once every second and shine downward to a detector on the floor, which compares their timing upon arrival with an identical clock on the floor. During the transit interval of the pulse from ceiling to floor, the elevator is accelerating and the detector is therefore moving faster than it was when the pulse was first emitted. The detector is moving towards the emitter and sees the pulses as "crammed together slightly". (This is just like a Doppler shift with an extremely low frequency source—something we call a "clock".) The time separation between the pulses is now less than a second. We could say that the equipment on the floor wonders why the pulses are coming in faster than expected. It concludes that the clock on the ceiling that controls the emitter pulse stream is running fast (or that the clock on the floor is running slow).
We will reach the very same conclusion if we put the emitter on the floor and the detector on the ceiling. In this case the detector is moving away from the light pulse at a speed slightly faster than when the pulse was emitted. It sees the incoming pulses "stretched out". The interval between the pulses is now more than a second. And so we conclude that the clock on the floor must be running slow (or the one on the ceiling is running fast). Even though we have reversed the positions of the equipment, we still reach the same conclusion.
Now consider some variations that could be introduced.
1. We leave the emitter and detector on the floor (or ceiling) so that the light path is aligned in the same direction as the motion of the elevator. But we set the elevator to a constant speed (no acceleration). In this case no time shift will be detected. Similarly, no redshift/blueshift would be detected either. The speeds of emitter and detector remain the same and there is no Doppler shift detectable within the elevator.
2. We relocate the emitter and detector (and their clocks) so that they are on the side walls of the elevator and the light path is now perpendicular (transverse) to the motion of the elevator. In this case there will again be no time shift, nor redshift/blueshift. There will be no effect detectable within the elevator regardless of whether it is moving at constant speed or accelerating. The path of the light is slightly elongated due to the combination of the motions (a straight diagonal line for constant speed, or a slight curve for accelerated speed). The detector, however, can detect only the timing between pulses, and once they start arriving, the pulse rate is the same.
3. We put the emitter on a rocket ship and the detector on earth. In this case we will see the conventional Doppler shift. We can tell whether the rocket is moving towards us or away from us. We can also detect whether its speed relative to Earth is constant, accelerating, or even zero. But only the "radial component of the speed" (the portion directly towards or away from Earth) is detectable. This principle is widely used by astronomers.
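For speeds well below c, the radial component follows from the ordinary Doppler formula, v ≈ c·Δλ/λ. A minimal Python sketch; the observed wavelength here is a hypothetical value chosen only for illustration:

```python
c = 3.0e8  # m/s, speed of light

def radial_velocity(lam_observed, lam_rest):
    """Non-relativistic Doppler: positive means receding (redshift)."""
    return c * (lam_observed - lam_rest) / lam_rest

# Hydrogen H-alpha line: 656.3 nm at rest, observed (hypothetically) at 656.5 nm
v = radial_velocity(656.5e-9, 656.3e-9)
print(f"radial velocity: {v / 1000:.1f} km/s (receding)")
```

Only this radial component is recoverable from the shift; transverse motion produces no first-order Doppler shift, which is exactly the point of variation 2 above.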
So is there a problem with the clocks or not? No, the problem is with our intuition and the reference system. Gravity is an ordinary everyday thing. We simply do not expect, offhand, that gravity would have any effect on a time measurement. In contrast, hardly any of us have to measure precise time intervals within a vehicle that is changing speed or direction (accelerating). But if that were an everyday task, we would be quite comfortable with the gravitational time shift too, because an accelerating reference system has the same effects on time interval measurement as gravity.
"You do not know the activity of God who makes all things." —Ecclesiastes 11:5
(If you are just jumping into this section, you might need some background from: Apparent Properties of Space and Time )
A motion canceller (my own term) is a scheme that can be used to cancel (or counterbalance) one motion of a multidimensional motion so that the other motions, which are usually not apparent, become manifest. The resultant motions are perpendicular to the motion used for cancellation.
As applied to gravitation, it means that you can apply a canceling motion (or "force" if you prefer the term) to a stationary object, and it will begin moving (or exerting a force), not in the direction of the canceling motion, but in a direction perpendicular to it.
To get a better intuitive feel for this, consider a non-technical example. It consists of an ordinary spool of thread, a pin, and a card (a business card will do) assembled as shown in the illustration below. Hold the card on the bottom of the spool (using the pin to center it in the hole) and then blow air down the shaft with your mouth. While you are blowing, move your hand away from the card. What do you think will happen?
As every kid who has tried this in an elementary science class knows, the card will not be blown off the spool. It will remain attracted to the bottom as long as air is blown through the hollow shaft of the spool. This little experiment is used to illustrate the Bernoulli and Coanda effects of moving fluids. The principle has widespread applications in industry. A few obvious ones are carburetors in cars, steam jet ejectors used for refrigeration, perfume atomizers, and Bernoulli wands used by the semiconductor industry to lift and move silicon wafers without touching the circuit side (not to be confused with vacuum wands, which are used on the backside).
How does it work? The card is normally bombarded by air molecules coming from all directions and having every orientation. Each ricocheting air molecule has a momentum component that is perpendicular to the face of the card. All these components add up to produce a pressure on each face of the card. As long as the card is fully immersed in air and the bombardment is random, the pressures will be equal, and the card does not move.
But when the card is placed near the spool, and air is blown through the shaft, the pressures become unbalanced. The air flow bends parallel to the surface of the card, and the perpendicular component on the spool side is literally "blown away" (partially). The perpendicular component on the other side of the card is thus unopposed, and an unbalanced pressure develops which moves the card towards the spool. The harder you blow, the more firmly the card moves towards the spool. (The pin simply keeps the card from sliding sideways.)
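A rough estimate shows why the attraction is strong enough to hold the card. Assuming (generously) that the full Bernoulli drop of ½ρv² acts over the whole card face, with an airflow speed and card size that are assumed values for illustration:

```python
rho = 1.2              # kg/m^3, density of air at room temperature
v = 10.0               # m/s, assumed airflow speed across the card face
area = 0.089 * 0.051   # m^2, a standard 89 mm x 51 mm business card

dp = 0.5 * rho * v**2  # Bernoulli pressure reduction on the spool side, Pa
force = dp * area      # unbalanced force pressing the card toward the spool

print(f"pressure drop: {dp:.0f} Pa")
print(f"holding force: {force * 1000:.0f} mN")
```

Even this crude overestimate (roughly 60 Pa, a few hundred millinewtons) dwarfs the weight of a ~2 g card (about 20 mN), so the card stays put.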
A slide from my presentation "The Quest for the Stardrive"
A slide from my presentation "The Quest for the Stardrive"
The motion canceller idea can also give us insights into physical concepts that otherwise seem counterintuitive. One class of problems of this sort involves the Poynting vector. This vector, S = ε₀c² E × B, tells us how electromagnetic energy flows in space. It is often encountered in discussions about the properties of light, but it applies to other things too, like electric current in capacitors, a resistance wire, magnets with static charges, and so on. It often implies some surprising, and seemingly awkward things. Here is a textbook example from Feynman Lectures on Physics:
"Now we take another example. Here is a rather curious one. We look at the energy flow in a capacitor that we are charging slowly. . . . There is a nearly uniform electric field inside which is changing with time. . . . So there must be a flow of energy into that volume from somewhere. Of course, you know that it must come in on the charging wires—not at all! It can't enter the space between the plates from that direction, because E is perpendicular to the plates; E × B must be parallel to the plates.
You remember, of course, that there is a magnetic field that circles around the axis when the capacitor is charging. . . . Its direction is shown in [the figure]. So there is an energy flow proportional to E × B that comes in all around the edges as shown in the figure. The energy isn't actually coming down the wires, but from the space surrounding the capacitor." (Feynman Lectures on Physics, Vol II, p. 27-7)
"Our programme of measurement of forces related to electromagnetic momentum at low frequencies in matter has culminated in the first direct observation of free electromagnetic angular momentum created by quasistatic and independent electromagnetic fields E and B in the vacuum gap of a cylindrical capacitor. A resonant suspension is used to detect its motion. The observed changes in angular momentum agree with the classical theory within the error ~ 20%. This implies that the vacuum is the seat of something in motion whenever static fields are set up with non-vanishing Poynting vector, as Maxwell and Poynting foresaw." ("Observation of static electromagnetic angular momentum in vacuo", M. Graham, D. G. Lahoz. Nature, 285, 154, 1980. http://www.tts.lt/~nara/introduc/introduc.htm )
(This also brings to mind another topic of popular interest: the Biefeld-Brown effect. Suppose the capacitor is asymmetric in that it has plates with very different areas. The electric field will be shaped somewhat like a cone, instead of a cylinder, and will be highly divergent. The "lifters" constructed with such principles are usually "leaky", due to corona effects, and require electric current to keep them charged. The current is of course accompanied by a magnetic field. The resultant Poynting vector is directed inward toward the central axis, but now also has a vertical component. Could this flow of energy/momentum be related to the source of lift claimed for these devices? And does the electric gradient between the ionosphere and the earth (about 100 volts per meter) have anything to do with lift generation? Refs: http://jnaudin.free.fr/lifters/main.htm , http://jnaudin.free.fr/html/nasarep.htm , http://www.americanantigravity.com/about.html , http://www.meridian-int-res.com/Aeronautics/APS.htm The asymmetric construction may be a way of dealing with the gravitational symmetry problem.)
(Yet another thought on this involves a moving dielectric. The lifters use an air dielectric, which, due to the ion wind effect, is moving through the capacitor plates. This means it is always charging (because it is getting "new" unpolarized dielectric) and therefore developing a Poynting vector. The thrust might be the sum of the ion wind momentum transfer and the electromagnetic momentum denoted by the Poynting vector. This suggests a couple of other variations. Make a dielectric disk out of barium titanate and rotate it. A pair of (asymmetric) electrodes charges a portion of the disk as it rotates, and another pair of electrodes discharges the portion as it rotates under the second pair (either discarding the energy or recycling it). Another extremely simple proof-of-principle configuration uses oil in a wide-based U-tube. Asymmetric plates are mounted radially on the glass base of the U-tube. When the voltage is turned on, the oil should move, and a momentary pressure differential should cause a difference in the height of the oil in the vertical sections of the U-tube; this would be intended only as a demonstration of an effect that does not involve ion wind.)
See Update 4-4-11 on the Biefeld-Brown effect. The section that was here previously has been moved to: Poynting vector insights (electromagnetic momentum)
Let's now try a more technical example involving gravitational motion. We run electrons through a metal bar as shown in the illustration below:
(The idea that an electron is equivalent to rotational space ("spin space" as contrasted to extension space) is discussed more thoroughly in the first three articles of Some Thought Provoking Issues. It is also illustrative to compare the space/time dimensions of mv² and Li². Both must reduce to the dimensions of energy. According to the discussion of the Hamiltonian, energy is t/s and mass is t³/s³. If electron current is space per time, then the dimensions of L (inductance) must be t³/s³, which is the same as that for mass. This makes perfect sense: the nature of the bar is not changed by moving it through space, nor is it changed by moving space through the bar. See also Feynman, Lectures, Vol 2, p. 17-12)
In this example, the bar is moving in all three dimensions of extension space simultaneously. (This multidimensional motion of one object is somewhat difficult to visualize, and you might need to review the above two sections about Gravitational Lensing and Gravitational Redshift.) The motion of the electron space through the bar "cancels" the spatial motion of the bar in one dimension. The other two dimensions of the gravitational motion are still active and act perpendicularly (radially) to the long axis of the bar. This resultant is still a scalar motion and will become manifest with another object possessing the same type of motion. Hence, two wires so treated will be moving "towards" each other. This is an effect that we call "magnetic". Also, because it is two-dimensional, the resulting motion is "orientable" in the context of a gravitationally bound reference system.
Possibly, this could be related to a motion cancelling effect. The experiments used monopolar, high voltage, high current, pulsed electrical discharges. Alternating current could not produce this effect. Mechanical pressure and heating effects were also observed. The phrase "stood straight out of the line" is consistent with a motion cancelling effect. Possibly, the applied pulse caused something that was balanced and not observable to become unbalanced and therefore observable. The mechanical effects suggest some sort of momentum density change (Poynting vector) due to the fast changing electrical field. Maxwell's equations apparently are not inclusive of a d(E)/dt effect. Note that this discovery occurred after Maxwell's equations had been published.
For more technical examples, see Motion Couplers and Momentum Converters and Weyl Fermion links.
A similar effect can be produced by moving a wire through a magnetic field, or by moving electrons in free space through a magnetic field:
Dimensional relationships like this involve a factor of the speed of light. In this case the electric field and the magnetic field are related by the equation E = cB, where c is the speed of light. Hence, this "electromagnetic effect" is definitely nothing weak or subtle. It is used in motors, for example, that run everything from simple floor fans to gigantic pumps for municipal water supplies, as well as many other types of devices.
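The everyday strength of the effect can be illustrated with the standard force on a straight current-carrying wire, F = BIL. The field, current, and length below are assumed motor-scale values, not taken from any specific device:

```python
B = 0.5   # T, assumed permanent-magnet field inside a small motor
I = 2.0   # A, assumed winding current
L = 0.05  # m, length of one wire segment lying in the field

F = B * I * L  # newtons, force on the wire (perpendicular to both I and B)
print(f"force on one segment: {F:.3f} N")

# A winding with 100 such turns multiplies this force a hundredfold,
# which is why even modest fields drive useful motors.
```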
Atoms of the metal bar possess gravitational motion and, again, are moving in all three dimensions of extension space simultaneously. How would an atom act if one of these dimensions of motion could be cancelled by a "motion canceller"? We can get a clue from the behavior of massless particles. In contrast to massive particles, massless particles lack one dimension of the gravitational motion. They possess only momentum, not mass. The space/time relationships are shown in the table below. (The table is copied from the article Energy from Massless Particles?, which has more information on mass, inertia, and massless particles.)
(Table: C Factor and Energy Term relationships; see Energy from Massless Particles? for the full table.)
Whereas massive particles are moving "anti" to the outward progression of space and time in three dimensions (t³/s³), massless particles, like the neutrino, have this anti-motion only in two dimensions (t²/s²). This means that massless particles cannot fully participate in the motion that is characteristic of a gravitationally bound reference system. Hence, massless particles will move at the speed of light relative to such a system. This missing dimension of motion can, of course, have any orientation relative to the reference system.
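The conclusion that massless particles must move at the speed of light also falls out of conventional relativistic kinematics (stated here in standard notation rather than the space/time-ratio form used above): from E² = (pc)² + (mc²)² and v = pc²/E, setting m = 0 forces v = c for any momentum.

```python
import math

c = 3.0e8  # m/s, speed of light

def v_over_c(m, p):
    """Speed as a fraction of c, from E^2 = (pc)^2 + (mc^2)^2 and v = pc^2/E."""
    E = math.sqrt((p * c)**2 + (m * c**2)**2)
    return p * c / E

# Massless particle: v/c = 1 exactly, regardless of its momentum.
print(v_over_c(0.0, 1e-27))

# Massive particle (electron mass) with momentum p = mc: v/c = 1/sqrt(2)
m_e = 9.11e-31  # kg
print(v_over_c(m_e, m_e * c))
```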
Now we begin to see what might be required of an antigravity device. Atoms are built from the 4p and 2p intrinsic spins as explained in The Atomic Spin System. There is no way to get rid of these intrinsic spins and still have intact atoms, because the spins are the source of chemical properties —a defining characteristic of atoms— as well as the gravitational motion. Instead, the most likely approach would be to use a "motion canceller" to cancel out one dimension of the gravitational motion. If this could be done, the device could be made to move at speeds up to that of light. In essence, the device would act like a macroscopic analog of a massless particle. Of course, the influence of the "motion canceller" needs to be fully controllable.
See also: "United States gravity control propulsion research", http://en.wikipedia.org/wiki/United_States_gravity_control_propulsion_initiative
With these insights, you might want to review the gravity modification experiments performed a few years ago at Tampere, Finland and more recently by NASA. Some of many links:
"Finnish researcher reportedly discovers gravity-change effect"
"Superconductive Components, Inc. awarded phase II contract by NASA on gravity modification" http://www.superconductivecomp.com/nasap2award.htm
"Breakthrough as scientists beat gravity."
"Tampere Anti-Gravity Report"
http://xxx.lanl.gov/abs/physics/0108005 "Impulse Gravity Generator Based on Charged YBa2Cu3O7-y Superconductor with Composite Crystal Structure", arXiv:physics/0108005 v2 30 Aug 2001, Evgeny Podkletnov, Giovanni Modanese, (32 pages, 7 figures).
From the abstract: "An apparatus has been constructed and tested in which the superconductor is subjected to peak currents in excess of 10⁴ A, surface potentials in excess of 1 MV, trapped magnetic field up to 1 T, and temperature down to 40 K." The apparatus produces a "focused beam" which propagates "without noticeable attenuation through different materials and exerts a short repulsive force on small movable objects and independent of their composition. It therefore resembles a gravitational impulse. The observed phenomenon appears to be absolutely new and unprecedented in the literature." (p. 1)
From the article: The repulsive force "on pendulums made of different materials does not depend on the material but is only proportional to the mass of the sample. Pendulums of different mass demonstrated equal deflection at constant voltage. This was proved by a large number of measurements using spherical samples of different mass and diameter. The range of the employed test masses was between 10 and 50 grams. . . . Measurement of the impulse taken at close distance (3-6 m) from the installation and at the distance of 150 m gave identical results, within the experimental errors. As these two points of measurements were separated by a thick brick wall and by air, it is possible to admit that the gravity impulse was not absorbed by the media, or the losses were negligible. . . . This work indicates that a kind of artificial gravity can be generated using the unique properties of superconducting ceramic materials and a combination of electric and magnetic forces." (p. 8-9, 27)
http://lanl.arxiv.org/ftp/physics/papers/0209/0209051.pdf (illustrations after references)
Illustrations: (these links keep changing; you might have to do some Googling):
http://lanl.arxiv.org/PS_cache/physics/ps/0108/0108005v2.figure1and2.jpg (32.3 kB, current as of Sept 2012)
http://lanl.arxiv.org/PS_cache/physics/ps/0108/0108005v2.figure3.jpg (40.0 kB discharge chamber; current as of Sept 2012)
http://lanl.arxiv.org/PS_cache/physics/ps/0108/0108005v2.figure5.jpg (14.0 kB, current as of Sept 2012)
http://lanl.arxiv.org/PS_cache/physics/ps/0108/0108005v2.figure4.jpg (22.4 kB, current as of Sept 2012)
http://lanl.arxiv.org/PS_cache/physics/ps/0108/0108005v2.figure6.gif (5.1 kB, current as of Sept 2012)
http://lanl.arxiv.org/PS_cache/physics/ps/0108/0108005v2.figure7.gif (3418 bytes, current as of Sept 2012)
http://lanl.arxiv.org/PS_cache/physics/ps/0108/0108005v2.figure8.gif (3125 bytes, current as of Sept 2012)
Interview Dr. Eugene Podkletnov - Full Length Uncut Fixed (2004) http://www.youtube.com/watch?v=AgyAFElQZcU&feature=related
Similarity to "dark rays" of Tesla:
Analysis of this situation proved that electrical energy or electrically productive energies were being projected from the impulse device as rays, not waves. Tesla was amazed to find these rays absolutely longitudinal in their action through space, describing them in a patent as “light-like rays”. These observations conformed with theoretical expectations described in 1854 by Kelvin.
In another article Tesla calls them “dark-rays”, and “rays which are more light-like in character”. The rays neither diminished with the inverse square of the distance nor the inverse of the distance from their source. They seemed to stretch out in a progressive shock-shell to great distances without any apparent loss.( http://journal.borderlands.com/2010/the-broadcast-power-of-nikola-tesla-part-1/ , Gerry Vassilatos )
Lost Science, Gerry Vassilatos, p. 87+ ( http://www.tuks.nl/pdf/Reference_Material/Aetherforce_Libary/Lost%20Science/Gerry%20Vassilatos%20-Lost-Science-Complete-Edition.pdf )
The Free Energy Secrets of Cold Electricity , Peter A. Lindemann, D.Sc , http://www.teslasociety.ch/info/NTV_2011/free.pdf
http://www.freepatentsonline.com/0685957.pdf , http://www.freepatentsonline.com/0685958.pdf ,
( http://www.freepatentsonline.com/6417597.pdf "Gravitational wave generator" , Robert M. L. Baker, July 9, 2002)
Note: The terminology used to describe these effects is misleading. A "gravity wave" is neither longitudinal nor transverse in character. It is not a wave either. It is a mechanical effect ("pressure wave") that is instantaneous and not propagated (i.e., a "non-local" effect). The beam is formed by the geometry of the generating device and is not diffraction limited in the manner of light waves.
Antigravity Replication Experiments:
Tampere Replication -- How to
"Demonstration of transient weak gravitational shielding by a YBCO LEVHEX at the superconducting transition", John Schnurer
"Improved apparatus and method for gravitational modification", John Schnurer
BUSINESS WEEK ONLINE NEWS FLASH! September 25, 1996 ONE STEP CLOSER TO AN ANTIGRAVITY MACHINE
"A possibility of gravitational force shielding by bulk YBa2Cu3O7-x superconductor", E. Podkletnov and R. Nieminen
"Weak gravitation shielding properties of composite bulk YBa2Cu3O7-x superconductor below 70 K under e.m. field", E. Podkletnov
"A theory of the Podkletnov effect based on general relativity: anti-gravity force due to the perturbed non-holonomic background of space", by Dmitri Rabounski and Larissa Borissova; Progress in Physics, July 1, 2007
"803-page Collection of Papers on Anti-Gravity Research "
Other Antigravity Patent claims:
Technical and Theoretical Specifications for Warp Drive Technology
Andrew Peter Worsley, Peter John Twist, June 19, 2003
(http://www.uspto.gov/patft/index.html ) United States patent database
(http://ep.espacenet.com/espacenet/ep/en/e_net.htm http://worldwide.espacenet.com/ ) European patent database
(http://www.freepatentsonline.com ) James Ryley's site
(Some words of caution: a patent attorney once told me that devices do not actually have to work to be granted a patent. Patents can be granted on "plausibility" without an actual demonstration of a working device. Also, many offbeat technology patents seem to have a lot of highly technical circumlocution and obfuscatory nonsense in their "Theory of Operation" section. Apparently, this is just fluff to impress patent examiners or investors. Some of these patents clearly merit skepticism.)
http://www.rexresearch.com/hooper/3610971.htm "All-Electric Motional Field Generator", William J. Hooper
A relevant fact is that atomic spin coordination is possible in high-temperature superconductors:
"Many atoms have a magnetic property called spin, which makes them behave as tiny bar magnets. Scientists noticed in experiments even 10 years ago that at a temperature just below the superconductivity threshold, the spins of many atoms in some copper oxide compounds fluctuated in a coordinated manner. . . Now "it's pretty much been proven that the [spin coordination] is present for all high-temperature superconductors," comments Andrey V. Chubukov of the University of Wisconsin-Madison" —Science News, March 16, 2002, Vol 161, No 11, p. 173-174. ( http://www.sciencenews.org ) (See also "The Wallace inventions, spin aligned nuclei, the gravitomagnetic field, and the Tampere experiment: is there a connection?" by Robert Stirniman http://www.rexresearch.com/wallace/wallaceinventions.pdf )
"Secret of superconductivity in sight" 24 January 2002 http://physicsweb.org/article/news/6/1/16
How high is high-temperature?
"Currently, the superconductor with the highest critical temperature ever recorded is Mercury Barium Thallium Copper Oxide or Hg0.2Tl0.8Ca2Cu3O, which has a critical temperature of 139 K at one atmosphere. This superconductor is a type of ceramic copper oxide and its critical temperature was determined in 1995 by Chakoumakos, Dai, Wong, Sun, Lu, and Xin. Apparently, metal-copper oxide ceramic superconductors have high critical temperatures, which might unlock the key of synthesizing a high temperature superconductor that is superconductive under room temperature conditions." http://hypertextbook.com/facts/2002/MichaelNg.shtml
Some superconductors are very sensitive to processing parameters, such as heating and cooling rates:
"The recent discovery of superconductivity at temperatures up to 125 K has led to unprecedented worldwide research efforts to understand mechanisms and properties so that these materials can be utilized advantageously for energy conservation in applications such as electrical energy transmission and storage, transportation, and electronics. One family of these materials, containing Bi, Sr, Ca, Cu, and O, is very sensitive to the temperature of heating and the rate of cooling during processing. A wide range of properties is possible, depending on these parameters. This sensitivity to heating temperature and cooling rate suggested an investigation in the PSU ballistic compressor to determine the effects of rapid heating and cooling on the properties of these materials." http://physics.pdx.edu/faculty_files/das/dash.htm See also "Enhancement of Tc of Bi-Sr-Ca-Cu-O Superconductor by Rapid Heating and Cooling in a Ballistic Compressor" Q. Duan, J. Dash, M. Takeo, and J. Huang, J. Appl. Physics, 69, 4897 (15 April 1991); http://physics.pdx.edu/faculty_files/tak/takeo.htm ; pulsed power might be useful for extremely fast heating/cooling http://pps.coe.kumamoto-u.ac.jp/streaming/PulsedPower/RAM/bluhm/pplesson3.ram
Clearly the design of such a "motion canceller" would be facilitated by a factual model of the atom (one based on intrinsic spin systems), and by a re-write of all physical equations in terms of space/time (or time/space) ratios. Such an approach will surely lead to powerful and general solutions to perplexing problems in physics. The biggest obstacle by far, however, will be overcoming our own preconceived ideas and misconceptions about how the Universe actually works.
"The problem of creating something which is new, but which is consistent
with everything which has been seen before, is one of extreme difficulty. "
(The Feynman Lectures on Physics, Vol. II, p. 20-10 to 20-11)
6-14-03 Update: Some additional articles have appeared on this subject recently:
"Podkletnov maintains that a laboratory installation in Russia has already demonstrated the 4in (10cm) wide beam's ability to repel objects a kilometre away and that it exhibits negligible power loss at distances of up to 200km. Such a device, observers say, could be adapted for use as an anti-satellite weapon or a ballistic missile shield." ( Jane's Defence Weekly 29 July 2002, Anti-gravity propulsion comes 'out of the closet', By Nick Cook, JDW Aerospace Consultant, London) See http://www.gravity-society.org/
The following two citations are from "Investigation of high voltage discharges in low pressure gases through large ceramic superconducting electrodes" ( Evgeny Podkletnov, Giovanni Modanese, 26 Apr 2003 (final version), http://www.arxiv.org/pdf/physics/0209051 )
"The propagation velocity of the radiation is still unknown, too. This can be measured in principle by placing two identical detectors A and B along the beam, at a known distance from each other (for instance, the maximum observed distance, AB=150 m). If the beam propagates with the speed of light, then the detection delay will be of the order of 10⁻⁶ s. This can be observed by comparing the signals of the two detectors as seen at the middle point between A and B. Then for a check one can exchange A with B. The method requires that the detectors have a temporal resolution better than 10⁻⁶ s. In general, it is difficult to obtain fast rise times in detectors based on mechanical transducers." (Section 5)
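The order-of-magnitude figure in the quoted passage checks out: light crossing the maximum observed separation of 150 m takes about half a microsecond, which is why the detectors need a temporal resolution better than about 10⁻⁶ s. A one-line verification:

```python
c = 2.998e8  # m/s, speed of light
AB = 150.0   # m, the maximum observed detector separation quoted above

delay = AB / c  # transit time if the beam propagates at light speed
print(f"light-speed transit over {AB:.0f} m: {delay:.2e} s")
```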
"The repulsive character of the force is not explainable in the classical gravitational theory, either." (Section 4.2)
3-30-11 Update: ( source: Secrets of Antigravity Propulsion, Paul A. LaViolette, 2008, p. 175-178)
". . . at a higher discharge voltage, of around 10 million volts, the gravity wave pulse became so strong that it was able to substantially dent a 1-inch thick steel plate and punch a 4-inch diameter hole through a concrete block! . . . Podkletnov also disclosed that his improved pulse generator exhibited increased thrust power even when energized with 5 million-volt pulses. Also, he noted that these powerful pulses would sometimes bend the generator's copper anode as well as damage the walls of the discharge chamber. . . .
Podkletnov's team measured a far higher velocity for the concrete-smashing gravity impulses produced by their improved Marx bank pulse generator. Using a pair of synchronized atomic clocks to measure the arrival time of the impulses at separate locations, they were able to determine that the impulses were traveling at least several thousand times the speed of light, perhaps faster!"
(For more about the possible physics behind this, see my (flawed but corrected) comment about the second time derivative of the E field. See also Water Capacitors-Electrical .)
If the Podkletnov gravitational impulse device operates on the motion canceller principle, the "beam" propagation velocity should be infinite. In other words, the effect is instantaneous. Nothing is propagated, for reasons previously outlined in the 8-10-02 Note above in the Shapiro Time Delay article and elsewhere. The effect is repulsive because the "towards" motion of gravity is cancelled in one dimension between the two participating objects. These objects no longer (fully) participate in the motions of a gravitationally bound reference system and are not coming together at the same rate in all dimensions. The effect will look like "repulsion" in the context of a laboratory reference system.
There are also some more clues about the occupant acceleration problem noted previously:
"Because the impulse gravity beam penetrates bulk material and seems to act independently of target composition, it will uniformly accelerate all spacecraft components in the beam path. Even at high accelerations, the spacecraft components would not experience any internal stresses; a spacecraft being propelled by an impulse gravity beam would behave as though it were in freefall. Uniform acceleration of all spacecraft components means that even delicate payloads might safely undergo very high accelerations." (EVALUATION OF AN IMPULSE GRAVITY GENERATOR BASED BEAMED PROPULSION CONCEPT, Chris Y. Taylor, Giovanni Modanese, page 5; presented 7-10 Jul 2002; Published by the American Institute of Aeronautics and Astronautics, Inc., 2002. See http://www.arxiv.org/pdf/physics/0209023 )
At this point, the indications are that spacecraft speeds up to that of light appear to be technically feasible and practical (but not as currently envisioned as "beamed propulsion"). Speeds greater than light do not appear to be forbidden, but such speeds would be temporal, rather than spatial. However, that might prove to be an advantage. The "inverseness" of space/time ratios implies that locations which are widely separated in three-dimensional space are comparatively close in three-dimensional time. The operational equivalent of the science fiction "warp drives" might prove to be possible after all. (See also the diagram, Speeds in a Gravitationally Bound Reference System, and its discussion.) Another aspect of this is that the speeds of electric, magnetic, and gravitational fields are instantaneous, i.e., far in excess of the speed of light (see Speed of Gravity, Speed of Electric Fields). This implies that propulsion "speeds" effectively far greater than light speed can be produced. The effects would not conform to Special Relativity, however, as temporal speeds do not have a spatial trajectory and involve "the physics of non-locality". This could produce delocalization of the affected object (i.e., it may disappear from view, become intermingled with other matter, etc., even though it remains in the very same spatial location).
(A clarification about Reactionless Propulsion: The beamed propulsion concept requires an Earth-based generator to transmit the beam to the spacecraft and push it along. The craft could be just an ordinary spacecraft, or even a chunk of rock. Although this scheme has some applications, I don't believe it is practical and safe for space flight in general. But there is an alternative: mount the generator on the spacecraft. It would at first seem that there is no point in mounting such a generator on a spacecraft, because the beam-generating apparatus produces no Newtonian back reaction. A beam projected from the rear of the craft would NOT produce any forward thrust on the spacecraft. But the fallacy here is that of Newtonian thinking. The beam is NOT like rocket exhaust. In order for it to produce a thrust on the spacecraft, the generator would have to be mounted at the rear, and the beam projected forward into the spacecraft itself. This would push all components/occupants of the spacecraft forward and drag the attached beam generator along with it. This is the literal equivalent of a person 'picking himself up by his own bootstraps' to leap a tall building. It is physically impossible using Newtonian action/reaction mechanics. But in the Motion Canceller concept, the reaction is perpendicular to the beam (in all radial directions) and cancels itself out within the generator. See my comment about reactionless force and railgun recoil. This also implies that the best shape for such a spacecraft would be something having radial symmetry, like a saucer or a cigar. This facilitates keeping the entire craft, and its occupants, within the boundaries of the beam.)
Can mechanical structures handle internal high g acceleration? Here is a partial table from http://en.wikipedia.org/wiki/G-force
Shock capability of mechanical wrist watches: > 5,000 g
V8 Formula One engine, maximum piston acceleration: 8,600 g
Rating of electronics built into military artillery shells: 15,500 g
With a superconducting emitter, the beam effect is uniform across the face of the emitter. Are superconductors therefore required? I don't have enough information to answer this question. The beam could be approximately uniform across the face of a non-superconductor provided a fast pulsed voltage is used. Here is a slide from Pulsed Power Engineering, 2011, Prof. Sunao Katsuki lecture series Introduction, page 14:
"Each spark gap has a variance of its breakdown voltage, which can be characterised by the standard deviation sa(U). . . . Therefore, to achieve the largest possible number of channels, we must reduce sa(U) and decrease the pulse rise time dU/dt as much as possible." (Pulsed Power Systems, Hansjoachim Blum (2006) p. 95)
Can this be done by hobbyists? Yes:
"The idea of making electrodes parallel enough to discharge along their entire length is intimidating especially if you have ever tried to do this along a very long spark gap. Under normal conditions it is perhaps impossible to get a spark jumping across a long narrow pair of electrodes, to cover their entire length. You will always get a tiny bright spark at one place at a time. This was one of the reasons the TEA laser before building one, seemed intimidating.As it turns out in the case of TEA [Transversely Excited Atmospheric pressure ] lasers, the extremely fast voltage transition between the electrodes creates a discharge across their entire length. Adjustment for this condition is relatively easy. " ("Simple Homemade T.E.A. Laser", Nyle Steiner, K7NS (Oct 2007) http://www.sparkbangbuzz.com/tealaser/tealaser7.htm ; also "Nitrogen Laser Considerations for the DIYer, With a View Toward the Design and Construction of a High-Performance DIY Laser" http://www.jonsinger.org/jossresearch/tjiirrs/005.html
Higher energy density, as well as beam uniformity, can be delivered with pulsed power because of a multiple-channel effect. However, the antigravity effect may still depend on the spin coordination that is possible in high-temperature superconductors. But an interview with Dr. Eugene Podkletnov, beginning at the 21:10 point (and 48:17), suggests alternatives to superconductors. http://www.youtube.com/watch?v=AgyAFElQZcU&feature=related
“But to be absolutely honest now, after twelve or fifteen, already, years of research in this field, we came to the conclusion that it is not necessary to use superconducting materials in order to modify the gravity field. We can use rotating magnetic fields, and we can turn to normal conductors, which is much easier, much easier, and uh, ah, this method has a lot of advantages.” http://portal.groupkos.com/index.php?title=Eugene_Podkletnov_portal
And there may even be still other schemes. See "Van de Graaff Generator Effect-Force Concentration", Charles R. Morton http://amasci.com/freenrg/morton1.html . This scheme does not use high temperature superconductors and seems to resemble the Biefeld-Brown effect more so than the Podkletnov effect. Note that both use (or can use) pulsed high voltage electric fields. The Frolov "T-Hat capacitor" also seems related to the Morton effect, although it uses static fields. ("Propulsion unit using asymmetrical (gradient) electric capacitors", Alexander V. Frolov, http://www.faraday.ru/t-cap.html ; compare with Brown's patent http://www.freepatentsonline.com/3187206.pdf ) These effects need to be investigated at much higher power levels.
Morton's Van de Graaff Generator Effect
Brown's Electrokinetic Apparatus
Incidentally, Morton describes what could be a radial reaction force (as above) to a longitudinal pulse force:
"The spark fired through a glass tube toward a metal plate with a hole in it. From the tube came a beam of energy unlike anything I had ever heard of. . . . unlike the VandeGraaff explosion, this beam of force passed through metals. . . . the force was so powerful that it sent bits of paper flying.
As the years passed, I developed better and better methods of producing the beam. Then one day it happened - instead of repelling matter, the beam attracted matter. Even radiation pressure could not explain this phenomena. . . . When the spark went through the glass tube, the air collapsed around it".
Morton's description is lacking in detail and I simply don't know what to think of it. Perhaps my readers could try this simple experiment and offer some feedback. Likewise for the experiments of Martin N. Kaplan. See also the forum discussions: http://groups.google.com/group/sci.physics.relativity/browse_frm/thread/25991020eef22a11. . .
Propulsion Through Electromagnetic Self-Sustained Acceleration
Authors: Vesselin Petkov
(Submitted on 29 Jun 1999 (v1), last revised 9 Jul 1999 (this version, v4))
Abstract: As is known the repulsion of the volume elements of an uniformly accelerating charge or a charge supported in an uniform gravitational field accounts for the electromagnetic contribution to the charge's inertial and gravitational mass, respectively. This means that the mutual repulsion of the volume elements of the charge produces the resistance to its accelerated motion. Conversely, the effect of electromagnetic attraction of opposite charges enhances the accelerated motion of the charges provided that they have been initially uniformly accelerated or supported in an uniform gravitational field. The significance of this effect is that it constitutes a possibility of altering inertia and gravitation.
This vague situation suggests another somewhat more difficult experiment. I call it a "gravitational pulse tube". I have no idea if it will work. (See math error note.) The discharge pulse must be unidirectional (no "ringing" or current reversals). The basic scheme is as shown:
If someone is experimenting with such a device, I doubt if the effects would be recognized as coming from such a source: http://www.cbsnews.com/news/cause-of-mystery-beach-blast-in-rhode-island-solved/ (read the comments) See also shockwave-thru-a-coin experiment (LIFE Nov 23, 1942 p. 132 ).
Another device, a vircator, is used to generate microwave pulses in the gigawatt to terawatt range. It has superficial similarities to the Morton device, except it uses an evacuated waveguide (instead of an atmospheric pressure dielectric tube) and an axial magnetic field. The Marx bank is the equivalent of a powerful Van de Graaff generator. Hobbyists need to be careful not to generate intense microwave pulses (or X-rays) when creating design variations intended to explore the Morton effect.
"The monotron as a gridded microwave tube", Joaquim J. Barroso (2003) http://www.plasma.inpe.br/LAP_Publicacoes/LAP2003/JJBarroso_Poster_LAWPP2003b.pdf
See also: George Samuel Piggott's Electro-Gravitation experiments
Anyway, more about Podkletnov:
"Breaking the Law of Gravity" by Charles Platt (Mar 1998)
Some NASA related links are listed below (try rating my website with the criteria in the first citation below):
Millis, M., "NASA Breakthrough Propulsion Physics Program", NASA/TM-1998-208400, (June 98) (9 pg.). http://www.grc.nasa.gov/WWW/bpp/TM-1998-208400.htm
Millis, Marc G. "Challenge to Create the Space Drive," In Journal of Propulsion and Power (AIAA), Vol. 13, No. 5, pp. 577-682, (Sept.-Oct. 1997). http://www.grc.nasa.gov/WWW/bpp/TM-107289.htm
Millis and Williamson, ed., "NASA Breakthrough Propulsion Physics Workshop Proceedings,"NASA/CP-1999-208694, Proceedings of a conference held at and sponsored by NASA Lewis Research Center in Cleveland Ohio, August 12-14, 1997. (Jan. 99) (456 pg.). **NOTE** A condensed, 10-page summary of this workshop is available as: Millis, M. "Breakthrough Propulsion Physics Workshop Preliminary Results," NASA TM-97-206241 (Nov. 97). http://www.grc.nasa.gov/WWW/bpp/TM-97-206241.htm
Advanced Space Transportation Program http://sli.nasa.gov/ast/astp.html
"To find out more about BPP's challenges and new concepts, check out "Warp Drive, When?" http://www.grc.nasa.gov/WWW/PAO/warp.htm "
"To stay aware of any further developments or emerging opportunities associated with the BPP Project, please revisit the Project web site from time to time http://www.grc.nasa.gov/WWW/bpp/ "
You can also access the NASA WWW and do a word search on your topic of choice. The "Search" capability of "Space Link" may provide you a wealth of information. http://www.grc.nasa.gov/Doc/search.htm
NASA funding mechanisms have had breakthrough propulsion added to their solicitation topics. If you are doing work in this field, you might want to investigate the funding opportunities. See http://sbir.gsfc.nasa.gov/
"Responsive Coverage Using Propellantless Satellites", George E. Pollock, Joseph W. Gangestad, James M. Longuski,
http://www.responsivespace.com/Papers/RS6/SESSIONS/SESSION%20II/2002_POLLOCK/2002P.pdf (this has nothing to do with antigravity, but is interesting in its own right) Also: "New Synchronous Orbits Using the Geomagnetic Lorentz Force", Brett Streetman, Mason A. Peck (2007)
Some interesting links about space and time:
Luxon Hypothesis: "H. Zeigler proposed in 1909 that relativity phenomena would be a natural result if the most elemental particles of mass were made of smaller particles that all moved at the constant speed of light." http://www.tardyon.de/other.htm
The Reciprocal System, http://www.rsystem.org
The Collected works of Dewey B. Larson, http://www.rsystem.org/dbl/index.htm
http://www.courses.fas.harvard.edu/~phys16/Textbook/ This is a textbook by David Morin that has "grown out of the first-semester honors freshman physics course that has been taught at Harvard University during recent years." It is quite good. Chapter 5 is about "The Lagrangian Method" which is very useful, general, and powerful in both classical mechanics and quantum mechanics. Chapters 10,11,12, and 13 are about Relativity. I especially agree with the author's approach to teaching:
"One thing many people don’t realize is that you need to know more than the correct way(s) to do a problem; you also need to be familiar with many incorrect ways of doing it. Otherwise, when you come upon a new problem, there may be a number of decent-looking approaches to take, and you won’t be able to immediately weed out the poor ones. Struggling a bit with a problem invariably leads you down some wrong paths, and this is an essential part of learning. To understand something, you not only have to know what’s right about the right things; you also have to know what’s wrong about the wrong things. Learning takes a serious amount of effort, many wrong turns, and a lot of sweat. Alas, there are no short-cuts to understanding physics." —David Morin
is He who reveals the
profound and hidden things."
The Biefeld-Brown Effect
Update 4-27-11 on the Biefeld-Brown effect:
Please review physicist Feynman's remarks about a charging capacitor before reading this section.
In popular practice, there appear to be two technical embodiments of the Biefeld-Brown effect: electrokinetics and electrogravitics. The former develops thrust from an electrostatic ion wind effect generated by a high voltage source (tens of kilovolts or higher). These are the “lifters” you see demonstrated on the internet. They are low mass devices and will not work in a vacuum but are capable of lifting their own weight.
The other type generates thrust by using asymmetric electrical fields, combined with high-mass, high-K asymmetric capacitors. This type of device will produce thrust in a high vacuum (10⁻⁶ Torr), or when the electrodes are enclosed in Plexiglas shields (or plastic bags) to contain the ion wind, or when immersed in transformer oil to suppress corona and ion wind effects. Operation is more efficient without corona leakage, and higher voltages are also possible (the thrust effect scales approximately as the square or cube of the voltage). Cone-shaped dielectrics work better than cylindrical dielectrics. High-K, high-mass dielectrics (like barium titanate) work better than, say, glass or polyethylene. Capacitors with a symmetric construction produce no thrust. High voltages (50-100 kV) are required to produce moderate thrust. The thrust is towards the larger (usually positive) electrode; during spark discharges, thrust appears to be independent of electrode geometry or polarity. Pulsed DC, DC with an AC waveform imposed, or even AC itself, works better than constant-polarity DC. Thrust characteristics may depend on electrical waveform asymmetry. There is general suspicion in aviation circles that the B2 bomber (United States) operates on these principles. (See Brown's patent, Electrokinetic apparatus (1965-06-01) http://www.freepatentsonline.com/3187206.pdf )
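The practical significance of the square-or-cube voltage scaling is easy to see with a toy calculation. This only sketches the claimed scaling law (thrust ∝ Vⁿ, n ≈ 2 or 3); the exponent choice and the 50 kV / 100 kV example values are my illustrative assumptions, not measured data:

```python
# Toy illustration of the reported thrust-voltage scaling, thrust ~ V^n
# with n = 2 or 3 per the text above. All numbers are illustrative.

def relative_thrust(v, v_ref, n):
    """Thrust at voltage v relative to thrust at v_ref, assuming T ~ V^n."""
    return (v / v_ref) ** n

# Doubling the drive voltage from 50 kV to 100 kV:
for n in (2, 3):
    print(f"n={n}: thrust ratio = {relative_thrust(100e3, 50e3, n):.0f}x")
```

So if the scaling holds, pushing a supply from 50 kV to 100 kV buys a factor of 4 to 8 in thrust, which is why suppressing corona (to permit the higher voltages) matters so much.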
(An AC waveform imposed on a high DC voltage reminds me of the so-called Hutchison effect, which seems to be present in some form when some combination of a Tesla coil (AC) is energized in the presence of a strong DC potential (100 kV or more), say from a Van de Graaff generator. I played with both of these as a kid, though not at the same time. All these technologies are potentially world changing, and ones that every Tom, Dick, and Harry has access to—for better or for worse. So pay attention here . . . )
These two effects are different and are often confused. A study sponsored by NASA is an example:
This paper reports on the results of tests of several Asymmetrical Capacitor Thrusters (ACTs). . . .The model assumed the thrust was due to electrostatic forces on the leakage current flowing across the capacitor. It was further assumed that this current involves charged ions which undergo multiple collisions with air. These collisions transfer momentum. All of the measured data was consistent with this model. Many configurations were tested, and the results suggest general design principles for ACTs to be used for a variety of purposes. (“Asymmetrical Capacitors for Propulsion”, Francis X. Canning, Cory Melcher, and Edwin Winet, Institute for Scientific Research, Inc., Fairmont, West Virginia, 2004; http://gltrs.grc.nasa.gov/reports/2004/CR-2004-213312.pdf )
Their use of the term “Asymmetrical Capacitor Thrusters” notwithstanding, what was tested here was clearly an ion wind effect. Contrast this study with Brown's comments in his article "How I Control Gravitation", T.T. Brown, Science & Invention (August 1929):
Since the time of the first test the apparatus and the methods used have been greatly improved and simplified. Cellular "gravitators" have taken the place of the large balls of lead. Rotating frames supporting two and four gravitators have made possible acceleration measurements. Molecular gravitators made of solid blocks of massive dielectric have given still greater efficiency. Rotors and pendulums operating under oil have eliminated atmospheric considerations as to pressure, temperature and humidity. The disturbing effects of ionization, electron emission and pure electro-statics have likewise been carefully analyzed and eliminated. . . .
Let us take, for example, the case of a gravitator totally immersed in oil but suspended so as to act as a pendulum and swing along the line of its elements. When the direct current with high voltage (75-300 kilovolts) is applied the gravitator swings up the arc until its propulsive force balances the force of the earth's gravity resolved to that point, then it stops, but it does not remain there. The pendulum then gradually returns to the vertical or starting position even while the potential is maintained. The pendulum swings only to one side of the vertical. Less than five seconds is required for the test pendulum to reach the maximum amplitude of the swing but from thirty to eighty seconds are required for it to return to zero. . . .
MASS of the dielectric is a factor in determining the total energy involved in the impulse. For a given amplitude an increase in mass is productive of an increase in the energy exhibited by the system (E = mg).
In particular, note the reference to "totally immersed in oil" and "solid blocks of massive dielectric", the use of lead sheets, and the momentary (not continuous) impulse, in Brown's cellular type of thruster. This is clearly NOT a device that depends on "charged ions which undergo multiple collisions with air" (NASA). Brown's 300311 patent also states that "said linear force or motion is furthermore believed to have no equal and opposite reaction that can be observed by any method commonly known and accepted by the physical science to date" (page 1, line 24) and "This motion seems to possess no equal or opposite motion that is detectable by the present day mechanics" (page 2, line 63; see discussion above). This is in contrast to the NASA document, which states "These collisions transfer momentum." It is very clear that the NASA study investigates a completely different device and a completely different effect.
Others have recognized this too:
The "Biefeld-Brown Effect," sometimes referred to as the "Townsend Brown Effect," is frequently erronously associated with ionic wind "lifters," . . . . The pure Biefeld-Brown Effect does not incorporate an ionic wind component. ("Stress in Dielectrics (Biefeld-Brown Effect)", http://www.qualight.com/portal.htm/brown/ )
The Wikipedia article on the Biefeld–Brown effect seems to add to the confusion: "This creates a high field gradient around the smaller, positively charged electrode." But in Brown's patents, the positive electrode is actually the larger one. http://en.wikipedia.org/wiki/Biefeld%E2%80%93Brown_effect (accessed 4-4-11) , http://www.freepatentsonline.com/3187206.pdf
Another problem is spelled out in the Wikipedia article:
Critics and supporters alike have called throughout the years for vacuum experiments, in order to eliminate ion wind contributions from the devices. While there have been a handful of such experiments, most notably the efforts of Dr. R.L. Talley in the late 1980s and early 1990s, there is still a great deal of discrepancy over whether the effect is directly related to gravity or not, mainly because it isn't predicted by conventional electrostatics or general relativity. (http://en.wikipedia.org/wiki/Biefeld%E2%80%93Brown_effect)
The effect is not predicted by conventional physics. It is therefore easy to write it off as more “internet mythology” and “crazy patents” by delusional people and "air-head techno babblers" (of which there are many). Additionally, these topics are often mixed in with other "stuff" about UFOs, extraterrestrials, psychic phenomena, teleportation, and so forth. The physical theories offered might not use your favorite terminology, and some words, like "ether" and "gravitational radiation", may raise red flags. Scientists would likely conclude that investigating this effect, and others like it, is probably a waste of time and money. This simply shows how hard it is for an idea that has no peers to get “peer reviewed”. Public investigation/implementation of the effect has been left to hobbyists and inventors.
Another effect noted by Brown (above) and Piggott (elsewhere):
Less than five seconds is required for the test pendulum to reach the maximum amplitude of the swing but from thirty to eighty seconds are required for it to return to zero. . . .
The possibility that this has something to do with spin relaxation times should be investigated:
"an atom can retain a particular spin polarization for a substantial amount of time. The "relaxation times" of spin polarized atoms are affected by the environment. "If the inside walls of the cell are suitably coated, collisions with the walls have little effect on the spin state of the atoms. . . . For example, for hydrogen atoms bouncing off teflon walls, tens of thousands of collisions are required for the magnetic moment of the hydrogen atom to become disoriented." (Quantum Mechanics, C.Cohen-Tannoudji, et al., 1977, p. 452) See comment about spin relaxation time and Gravomechanical effect.
See also: Guidelines to Antigravity, Robert L. Forward, American Journal of Physics, Vol. 31, No. 3, pp. 166-170, March 1963. Abstract:
"This paper emphasizes certain little known aspects of Einstein's general theory of relativity. Although these features are of minor theoretical importance, their understanding and use can lead to the generation and control of gravitational forces. Three distinctly different non-Newtonian gravitational forces are described. The research areas which might lead to methods for the control of gravitation are pointed out and guidelines for initial investigation into these areas are given." http://u2.lege.net/culture.zapto.org_82_20080124/antigravidity/Robert%20L.Forward%20-%20Guidelines%20to%20Antigravity.pdf
I should add that while this research has mostly an aerospace focus, there may be more down-to-earth and immediate applications as well. If electroaerodynamics can reduce drag, or if a hundred pounds of barium titanate and a few hundred kilovolts can produce significant thrust, this technology could be used to increase gas mileage for automobiles, or reduce fuel expenses on long-distance trucking (or be used as a maneuvering engine inside a ship). The requirements in these applications would be much more easily met than those in aerospace applications, and could be demonstrated by almost any advanced electronics hobbyist. (For some ideas, see http://www.amazing1.com/hv-dc-power-supplies.htm )
And now for a quiz. WHAT are THESE things? Are the photos real or fake? There were several witnesses ( http://www.ufocasebook.com/bestufopictures10.html ) A few examples:
"California UFO Drone Analysis of the Center Hub Structure"These photos are of interest because IF they depict an actual device, they offer significant clues about how field levitation might be achieved. The "stave" configuration seems consistent with the use of pulsed monopolar high voltage electric fields with asymmetric time derivatives, and with possibly electrically phased rotation of such fields. This brings to mind the Poynting vector, curl and asymmetric momentum density relations (Ñ´E = - ¶B/¶t and Ñ´B = J/e0c2 + 1/c2(¶E/¶t) ), and historical reports of electrogravity effects. The big rings might imply the use of magnetic fields (possibly several, and possibly also phased).
Another possibility would be the use of bipolar electric fields, such as the kind produced by a bipolar Tesla coil. A minimal configuration would consist of two such coils and two pairs of electrodes (one pair for each coil). These coils produce very high voltages that are generated by tuned resonances (and not so much by "winding turns ratio" as in conventional transformers). Each electrode in a pair is mounted at opposite ends of a diameter, and the two diameters are perpendicular to each other. The coils produce intermittent "damped ringing waveforms" which fade out after dozens of cycles. The frequencies are usually in the range of tens to hundreds of kHz. The coils are "switched" by spark gaps, which introduces a lot of timing jitter. Phasing can still be done by using the techniques of early radio (prior to the use of the vacuum tube), or by letting the configuration do whatever it will do. The likely result will be high voltage rotating chaotic electric fields (which could produce a variety of unexpected physical effects, as well as protests by angry neighbors over EMI). See Lissajous Patterns for more about phased fields.
This scheme is similar to that used in ordinary single-phase induction motors to produce a rotating magnetic field. The input power is a single source of ordinary sine wave electrical power, usually a wall outlet. The main winding uses this power directly, but the start winding uses a capacitor to shift the phase of this same power by approximately 90 electrical degrees. The windings (poles) are physically mounted internally so that they are perpendicular to each other. This combination produces a rotating magnetic field. After the armature begins rotating sufficiently, the start winding is no longer needed and is disconnected by a centrifugal switch.
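The split-phase idea above can be sketched numerically: two perpendicular windings driven 90 electrical degrees apart give a field vector of constant magnitude that sweeps around. This is a minimal sketch assuming ideal sinusoidal drive and equal winding amplitudes (real motors are messier):

```python
# Two perpendicular windings, 90 electrical degrees apart: the resultant
# field (Bx, By) has constant magnitude and rotates at the line frequency.
# Amplitude and frequency are arbitrary illustration values.
import math

def field_vector(t, freq_hz=60.0, amp=1.0):
    """(Bx, By) for main winding ~ cos(wt) and start winding ~ sin(wt)."""
    w = 2.0 * math.pi * freq_hz
    return amp * math.cos(w * t), amp * math.sin(w * t)

# Sample at quarter-cycle steps of a 60 Hz supply: the magnitude stays
# fixed while the direction advances 90 degrees per step.
for k in range(4):
    bx, by = field_vector(k / 240.0)
    mag = math.hypot(bx, by)
    angle = math.degrees(math.atan2(by, bx)) % 360.0
    print(f"|B| = {mag:.3f}, angle = {angle:.0f} deg")
```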
Update 7-17-2017: Can a similar scheme be used to create rotating electrical fields (instead of rotating magnetic fields)? At first I thought it could be. How convenient it would be to get the required phase shift of voltages with nothing more than a high voltage capacitor! Alas, things are probably not so simple. In the motor example, the capacitor shifts one of the currents out of phase with the voltage. The two magnetic fields thus created by the different currents are out of phase by the required 90 degrees. This scheme however, probably does not shift the phase of the voltages. Hence, it won't work to produce high voltage, phase-shifted electrical fields.
What is apparently needed is a quarter-wave transmission line. One pair of electrodes is fed directly by a Tesla coil, and the other pair is fed from the same source but through the transmission line, which has a length that delays the phase by 90 electrical degrees relative to the first pair. The result would be a rotating electrical field that should be easy to characterize, and therefore be reproducible by other investigators.
But there are a couple of serious problems. At the frequencies used, the line length would be roughly a half-kilometer. And if made from coax, it would have to be rated for at least 100,000 volts. Such transmission lines do exist, but clearly this is not practical for the Do-It-Yourselfer.
A type of transmission line called a cage line, used for high power, low frequency applications. It functions similarly to a large coaxial cable. This example is the antenna feedline for a longwave radio transmitter in Poland, which operates at a frequency of 225 kHz and a power of 1200 kW.
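The "roughly a half-kilometer" figure above is easy to check: a quarter-wave line has physical length λ/4 = c/(4f), shortened by the cable's velocity factor. The 150 kHz design frequency and the 0.66 velocity factor below are my illustrative assumptions within the ranges mentioned in the text:

```python
# Quarter-wave line length: lambda/4 = velocity_factor * c / (4 * f).
C = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_length(freq_hz, velocity_factor=1.0):
    """Physical length in metres of a quarter-wave line at freq_hz."""
    return velocity_factor * C / (4.0 * freq_hz)

# Air-dielectric line at a typical Tesla-coil frequency of 150 kHz:
print(f"Air line at 150 kHz: {quarter_wave_length(150e3):.0f} m")
# Solid-polyethylene coax (velocity factor ~0.66) shortens it somewhat:
print(f"PE coax at 150 kHz:  {quarter_wave_length(150e3, 0.66):.0f} m")
```

An air line works out to about 500 m, and even with a typical coax velocity factor the line is still hundreds of metres long, which is why the scheme is impractical for the Do-It-Yourselfer.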
It is probably possible to get the same effect from a combination of discrete capacitors and inductors configured to simulate a transmission line. But there are problems with that too.
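One of those problems is sheer component count: a lumped LC ladder delays each section by roughly √(LC), so the number of sections needed for a full 90-degree delay adds up quickly. The section values below (10 µH / 100 pF, giving a characteristic impedance √(L/C) ≈ 316 Ω) are my illustrative assumptions, not a worked design:

```python
# Rough sizing of a lumped LC ladder approximating the quarter-wave delay.
# Per-section delay of an LC ladder is approximately sqrt(L*C).
import math

def lc_ladder_sections(freq_hz, l_henry, c_farad):
    """Sections needed for a 90-degree (quarter-period) delay at freq_hz."""
    section_delay = math.sqrt(l_henry * c_farad)
    total_delay = 1.0 / (4.0 * freq_hz)  # quarter period at freq_hz
    return math.ceil(total_delay / section_delay)

# Example: 10 uH / 100 pF sections for a 150 kHz Tesla coil
print(f"Sections needed: {lc_ladder_sections(150e3, 10e-6, 100e-12)}")
```

Dozens of sections, each of which must stand off the full Tesla-coil voltage, which is the other problem.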
And so the use of two Tesla coils will probably be the most practical implementation, at least for proof-of-concept purposes.
Such fields have no known accepted, practical use, and nobody wastes their time and resources building such equipment. Here, the purpose would be to investigate antigravity, interatomic bonding (especially in metals), alteration of radioactive decay rates, and some currently very obscure effects associated with neutrinos. This technology is easily accessible to the hobbyist, and there are myriad Tesla coil designs and projects on the Internet. These are used mostly to entertain friends with displays of spectacular electrical sparks. My hope is that this adult version of kids playing with matches can be converted to useful technologies.
See also "California Drones" (aka "Dragon-fly drone"*) http://droneteam.com/mediawiki/index.php/Chad_details ;
http://www.dronehoax.com/drone_history/isaac_documentation.htm (more pictures, statements about the language)
http://www.bibliotecapleyades.net/ciencia/ciencia_flyingobjects11.htm (separate photos)
*It is called a "dragonfly drone" because "it moves like a dragonfly". Its motion is jerky, not smooth and continuous. This is consistent with the idea that field propulsion systems would use point-to-point, start-stop navigation. As for the photos, some regard them as 'too detailed to be faked'; others regard them as 'having too much detail to be real'. (Similar claims can probably be made about the 1969 moon landings!)
Field visualizations: http://web.mit.edu/8.02t/www/802TEAL3D/visualizations/guidedtour/Tour.htm
(Pictures like these remind me of a problem I have with organizations that investigate UFO sightings: they only investigate sightings. As such, they are like "investigative journalists" and "mystery writers". But people like me want to know what these things are, not just that they are mysterious. I can understand why UFO organizations would not have a physics and engineering research staff, but they could at least give a list of contacts for groups who are investigating the physics behind these things. UFOs seem to have mastered the "physics of non-local phenomena" (for lack of a better term, although "monstrous physics" also appears in the popular literature ). This kind of physics appears in the public textbooks only in the form of quantum mechanics. We need to extend the science of non-local behaviors to things the size of aircraft carriers that can float in the sky and which can disappear in an instant (http://youtu.be/DNFyjWANDmw?t=127 ) or make high speed right angle turns. Engineerable technology of this sort has been around for over a hundred years, but the science behind it has never been made public.)
What is perhaps a start at addressing the science questions can be found at http://droneteam.com/drt/index.php?topic=869.0
Or maybe you could just go to the US Patent and Trademark Office to find out how UFOs work:
Obviously a Top Secret design :-)
I liked the splash of Geometric Algebra (paragraph 0009) in the patent application (http://www.freepatentsonline.com/y2006/0145019.html ). See "An Appeal to my Readers" at http://scripturalphysics.org/4v4a/BeyondEinstein.html#Appeal
The patent reads like this is a tabletop model that could be constructed by a hobbyist or by a microwave engineer. However, very little is said about the parabolic antenna or the motion control hemispheres. My gut feeling is that the hemispheres have to do with the alteration of the electrostatic field profile for motion control. It is not clear if the parabolic antenna is a direct part of the propulsion scheme. However, the patent is careful to note that "the electric field arrows are parallel crossing the center parabolic antenna (C). The electric field is also parallel to the side (D) of the triangle." That seems to imply a functional purpose of the parabolic antenna (vertical lift?).
The patent also notes the use of "traveling waves". One of the problems antigravity designs have to deal with is the symmetry of the gravitational field, which, at a fundamental level, is mathematically scalar. The symmetry needs to become unbalanced for antigravity to work. But electric and magnetic fields are also fundamentally scalar (they differ from gravity only in dimensions; their apparent directional traits originate from a coupling to the reference system). How do you get fundamentally symmetric fields to become unsymmetric? One scheme could be based on rotation (as above). The patent seems to refer to this with its reference to "traveling waves" (see Fig. 5).
If it works, it is ingenious. Hobbyists out there have a new toy to play with. Unfortunately, I have some serious doubts. See https://zapatopi.net/blog/?post=200604284330.st_clair_hyperinventor Or do a Google search with the terms "John St. Clair" "San Juan" "Hyperspace Research" .
If you want more in-depth coverage of the Biefeld-Brown effect and related effects, I highly recommend reading:
Secrets of Antigravity Propulsion by physicist Paul A. LaViolette (2008).
"Progress in Electrogravitics and Electrokinetics for Aviation and Space Travel", Thomas F. Valone, presented at the Space Tech. App. Info. Forum, Albuquerque, NM; http://users.erols.com/iri/ProgressElectrograviticsElectrokinetics.PDF , http://www.integrityresearchinstitute.org/
Electrogravitics Systems, Vol I, Thomas Valone, 6th ed., 2008
Electrogravitics II, Thomas Valone, 3rd ed., 2008
T.T. Brown's Electrogravitics Research, Thomas Valone, Integrity Research Institute
T.T. Brown Family web site, http://www.qualight.com/portal.htm/brown/
"Electric Flying Machines", T.T. Brown, http://www.bibliotecapleyades.net/ciencia/ciencia_flyingobjects25.htm
"Electrogravitics systems reports on a new propulsion methodology", Thomas Valone, 2001; http://www.bibliotecapleyades.net/archivos_pdf/electrogravitics_systems.pdf
"Can Electricity Destroy Gravitation?", Prof. Francis E. Nipher, Electro-Gravitic Experiments (1918), http://www.rexresearch.com/nipher/nipher1.htm
"Theoretical explanation of the Biefeld-Brown Effect", Takaaki Musha, http://www.thelivingmoon.com/41pegasus/03PDF_files/Biefeld_Brown_Effect.pdf
"Explanation of dynamical Biefeld-Brown Effect from the standpoint of ZPF field", Takaaki Musha
"Force on an Asymmetric Capacitor", Thomas B. Bahder and Chris Fazi, March 2003. http://arxiv.org/ftp/physics/papers/0211/0211001.pdf
"Asymmetric capacitor operating in high vacuum", http://www.youtube.com/user/hec031 (in this experiment the direction of thrust is towards the negative, smaller electrode. Max voltage was 18kV @ 3 micro amp)
"Study on the influence that the number of positive ion sources has in the propulsion efficiency of an asymmetric capacitor in nitrogen gas", A. A. Martins and M. J. Pinheiro, http://arxiv.org/ftp/arxiv/papers/1009/1009.6111.pdf
"T. T. Brown’s 1955-1956 Paris Experiments Revealed", http://starburstfound.org/electrograviticsblog/?p=49
NOTE: the articles that formerly occupied this space have been moved to: Various reported electrogravity, magnetogravity and gravomechanical effects
A Method of and an Apparatus or Machine for Producing Force or Motion (Nov. 15, 1928)
British Patent 300311; "How I control gravitation" http://www.rexresearch.com/gravitor/gravitor.htm
Electrostatic motor (1934-09-25) http://www.freepatentsonline.com/1974483.pdf
Electrokinetic apparatus (1960-08-16) http://www.freepatentsonline.com/2949550.pdf
Electrokinetic transducer (1962-01-23) http://www.freepatentsonline.com/3018394.pdf
Electrokinetic generator (1962-02-20) http://www.freepatentsonline.com/3022430.pdf
Electrokinetic apparatus (1965-06-01) http://www.freepatentsonline.com/3187206.pdf
Electric generator (1965-07-20) http://www.freepatentsonline.com/3196296.pdf
Method and Apparatus for Producing Ions and Electrically-Charged Aerosols (1967-01-03) 3296491
Fluid Flow Control System (1970-06-30) http://www.freepatentsonline.com/3518462.pdf
(Motion of contaminants http://www.electrotechnik.net/2013/04/breakdown-in-liquids-due-to-presence-of.html )
A. H Bahnson patents:
Electrical Thrust Producing Device http://www.freepatentsonline.com/2958790.pdf
Electrical Thrust Producing Device http://www.freepatentsonline.com/3263102.pdf
"How do floating water bridges defy gravity?", Chelsea Whyte, (2012) http://phys.org/news/2012-11-bridges-defy-gravity.html
"Utilization of poly(ethylene terephthalate) plastic and composition-modified barium titanate powders in a matrix that allows polarization and the use of integrated-circuit technologies for the production of lightweight ultrahigh electrical energy storage units (EESU)" http://www.freepatentsonline.com/7466536.html , http://en.wikipedia.org/wiki/EEStor
"This paper reports the successful creation of a new ultracapacitor structure that offers a capacitance density on the order of 100 to 200 Farads per cubic centimeter; versus the current state of the art capacitance density of 1 F/cm3." ("New mega-farad ultracapacitors", Bakhoum, E., 2009, http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4775259 )
"An asymmetric supercapacitor using RuO2/TiO2 nanotube composite and activated carbon electrodes",Yong-Gang Wang, Zi-Dong Wang, Yong-Yao Xia, 2005, http://www.chemistry.fudan.edu.cn/usr2000/xyy/pdf_web/2005/ea-ruo2.pdf
"We report the observation of extremely high dielectric permittivity exceeding 10^9 and magnetocapacitance of the order of 10^4 % in La0.875Sr0.125MnO3 single crystal." ("Giant dielectric permittivity and magnetocapacitance in La0.875Sr0.125MnO3 single crystals", R. F. Mamin, T. Egami, Z. Marton, and S. A. Migachev, 29 March 2007; DOI: 10.1103/PhysRevB.75.115129; http://repository.upenn.edu/cgi/viewcontent.cgi?article=1158&context=physics_papers )
"Extremely high values of the relative permittivity up to 10^7 and the magnetocapacitance up to 10^5 % have been found in La1-xSrxMnO3 single crystals (x = 0.1, 0.11). These phenomena are observed even at room temperature." ("Giant dielectric susceptibility and magnetocapacitance effect in manganites at room temperature", R. F. Mamin, T. Egami, Z. Marton, C. A. Migachev and M. F. Sadykov, JETP Letters Volume 86, Number 10, 643-646, DOI: 10.1134/S0021364007220067 )
"Moreover, our investigations in external magnetic fields up to 5 T reveal the simultaneous occurrence of magnetocapacitance and magnetoresistance of truly colossal magnitudes in this material." ("Colossal magnetocapacitance and colossal magnetoresistance in HgCr2S4", S. Weber, P. Lunkenheimer, R. Fichtl, J. Hemberger, V. Tsurkan and A. Loidl, http://arxiv.org/ftp/cond-mat/papers/0602/0602126.pdf )
Calcium copper titanate, k= 250,000 ( http://en.wikipedia.org/wiki/Relative_permittivity) ;
"Advanced Calcium Copper Titanate/Polyimide Functional Hybrid Films with High Dielectric permittivity", Zhi-Min Dang, et al. (2009), http://www.paper.edu.cn/index.php/default/scholar/downpaper/dangzhimin511435-201001-20.pdf
"Counterintuitive discovery boosts supercapacitor energy storage", Leo Williams, June 17, 2011,
Dissectible Leyden jar retains its charge after disassembly/reassembly: http://www.physics.ucla.edu/demoweb/demomanual/electricity_and_magnetism/electrostatics/dissectible_leyden_jar.html
"The Antigravity Underground", Clive Thompson, http://www.wired.com/wired/archive/11.08/pwr_antigravity_pr.html
Water capacitors-Electrical: (dielectric is distilled water) (disambiguation: WaterCapacitors-Chemical)
http://www.freepatentsonline.com/3558908.pdf "High voltage impulse generator", Kulikov, Lagunov, Nesterikhin, Fedorov, 1971:
"Since the spark gap is of a controlled type, the capacitor operates as a transmission line. Because of this, the rate of rise of the current can be readily increased, since the internal impedance of the transmission line is purely resistive. . . . The water capacitor produces a negative voltage pulse of 250 kilovolts at 250 kiloamperes with a rise time of 50 nanoseconds."
Marx generators: http://en.wikipedia.org/wiki/Marx_generator , http://skyfi.org.ru/photos/?path=marksgen , http://www.youtube.com/watch?v=vPPMaDH7L7I , http://ru-abandoned.livejournal.com/977217.html
Organic liquid capacitors:
http://www.freepatentsonline.com/3903460.html "Capacitor with liquid dielectrics", Tsacoyeanes, Charles W., Payne, Richard, and Levine, Morton A. (1975): "A considerable effort has been applied in recent years to the development of high energy capacitors using water as the dielectric. The principal attraction of water is its high dielectric constant (78.3) which compares with values of only 2-3 for the transformer oils conventionally used in capacitors. Furthermore, the dipole relaxation time for water is in the picosecond time region and therefore has little effect on the discharge characteristics. . . . it has been found . . . that a range of high dielectric constant liquids possess characteristics that make them useful as capacitor dielectrics. They show a substantial improvement in energy storage capability, and also possess other important advantages over water. The liquid dielectrics of this invention are organic solvents with dielectric constants ranging from 30-200. They are well-known to organic chemists and have recently found important applications in electrochemistry. . . . The use of the organic liquid dielectric materials of this invention enables practical energy densities to be increased by a factor of 3 or more over currently available devices which use a water dielectric and the organic liquids have a much more stable conductance."
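For perspective, the stored energy density of a dielectric at a given field stress is U = ½ε₀εᵣE², so at equal field the energy scales directly with the dielectric constant. The field value below is an assumed number for illustration; the permittivities are those quoted in the patent text:

```python
# Energy density U = 0.5 * eps0 * eps_r * E^2 for the dielectrics
# discussed above. 50 MV/m is an assumed working stress, not a rating.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def energy_density_j_per_m3(eps_r, e_field_v_per_m):
    return 0.5 * EPS0 * eps_r * e_field_v_per_m ** 2

E = 50e6  # assumed field, V/m
for name, k in (("transformer oil", 2.5), ("water", 78.3), ("organic solvent", 200.0)):
    print(f"{name:16s} k={k:5.1f}  U = {energy_density_j_per_m3(k, E)/1e3:.0f} kJ/m^3")
```

At equal field a k ≈ 200 solvent stores about 2.5 times what water does; the patent's "factor of 3 or more" presumably also reflects differences in breakdown strength and conductance.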
See also my own Capacitor Tests.
Various reported electrogravity, magnetogravity, and gravomechanical effects
George Samuel Piggott Effect "Electro-Gravitation"
http://www.rexresearch.com/piggott/piggott.htm (includes a "dark belt" observation)
http://www.freepatentsonline.com/1006786.pdf (1911, Piggott's static generator for a space telegraph)
A quick reading of Piggott's patent (filed 1903, issued 1911) describes an electrostatic generator ("influence machine") for use in "space telegraphy" (radio). The machine is essentially an industrial-strength Wimshurst machine with multiple pairs of counterrotating disks. It is enclosed in an airtight container which is pressurized with dry air to about 30 p.s.i. The air is supplied by a pump and dried over anhydrous calcium chloride (other schemes, using a dry ice cold trap, for example, could be used). The Leyden "condensers" on the wall are apparently part of the Wimshurst design. The output is stored in a bank of "Leyden jars" to the left of the "Static Machine". One side of the bank connects to ground, and the other goes to the machine and the sphere on the stand next to Piggott. Details of the patent relate to the machine's use in "space telegraphy" wherein a single discharge (spark) represents a "dot" and two closely spaced discharges represent a "dash". The machine was powered by a 1/4 kilowatt electric motor.
The machine is clearly capable of producing pulsed, high voltage electricity of either polarity, and with a fairly strong amount of current (at least several tens of microamps, more likely hundreds of microamps). ( http://en.wikipedia.org/wiki/Wimshurst_machine http://en.wikipedia.org/wiki/Van_de_Graaff_generator ) The repetition rate would be whatever is sufficient for use in telegraphy. It is not clear from the picture, however, whether the levitation effect is occurring with a pulsed field or a purely static field; it is difficult to see how a static field could produce the levitation and dark band effects. Because he intended to use the machine in space telegraphy, the spark repetition rate would be important, and certainly he would have tested it. If the above represents such a test, the so-called spark gap "switch" was probably at the big Leyden jar and was connected to the test sphere by a cable. Likely the spark gap was configured to ground the sphere (rather than charge it, which is another possibility); this would result in a slow rising edge and a fast falling edge of the voltage waveform at the sphere. (Either configuration should work, but there are a lot of unknowns here. (It is more likely Piggott tested the machine in the normal configuration in which it was to be used. The spark gap is above the words "Static machine"; actual connections are not clear from the photo, but it appears the test sphere was connected to the "Leyden jar" and then to the spark gap through the aerial (85) terminal. The fundamentals are very similar to those of Tesla's monopolar, monodirectional pulsed fields used in his Magnifying Transmitter, and also to Edwin Gray's "cold electricity" machines (http://www.freepatentsonline.com/3890548.pdf ), and to the Testatika Machine designed by Paul Baumann. For some good background, read The Free Energy Secrets of Cold Electricity, Peter A. Lindemann, D.Sc (2000) http://www.teslasociety.ch/info/NTV_2011/free.pdf )
For the design of an ultrafast spark gap switch see CapacitorTests/CapacitorTests.html#BiconicalFastSparkGaps. Tesla used magnetically quenched spark gaps, but they were not designed for coaxial, impedance-controlled systems. There are all sorts of modern techniques to generate high voltage, fast rise time, monopolar pulses. )
A possible investigative concept machine could be built from a monopolar (+) high voltage source such as a Van de Graaff generator (500,000 volts @ a couple hundred microamps), a spark gap switch, and several coaxial delay lines (each with a different delay), each connecting to rods or spheres on a circular periphery. The delays are such that a fast, rotating electric field is produced from a single spark. A much slower, mechanically rotated pulsed field should also be investigated. What comes to mind are the rotary spark gaps of the early radio days (prior to the 1920s) and the automotive ignition distributor. Even the cavity magnetron (with added multiple outputs) comes to mind. Piggott's experiments apparently did not use intentionally rotated fields, nor did Farrow's. Other applications of this sort of technology, such as the Nazi aircraft ignition disrupter and the dragonfly drone, apparently did (however, these devices require a directional control feature).
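The staggered delay lines can be sketched as follows. The rotation rate, electrode count, and cable velocity factor are assumed values chosen only to show the arithmetic:

```python
# N electrodes on a circle, each fed through a progressively longer
# cable, so a single spark sweeps the field once around the circle per
# period. Assumed: 1 MHz sweep rate, 4 electrodes, solid-PE coax.
C_LIGHT = 299_792_458.0  # m/s

def delay_line_lengths_m(freq_hz, n_electrodes=4, velocity_factor=0.66):
    period = 1.0 / freq_hz
    return [k * period / n_electrodes * velocity_factor * C_LIGHT
            for k in range(n_electrodes)]

for k, length in enumerate(delay_line_lengths_m(1e6)):
    print(f"electrode {k}: {length:6.1f} m of cable")
```

Even at a modest 1 MHz sweep rate the longest line runs to roughly 150 m, which again points back at the practicality problems noted earlier.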
Piggott's levitation experiments were performed with a small sphere serving as the high voltage electrode. Says http://www.rexresearch.com/piggott/piggott.htm :
Figure 3 illustrates suspension stand and field producing electrode. The latter can be revolved in any direction by means of a spring motor shown on the upper section of the stand.
The small apertures seen in electrode, which is hollow, are there for the purpose of ascertaining the action of the reduced field tension at these points, and are also made use of to hold different sized metallic discs, which are cemented to insulating plates, forming condensers, the function of which is to create weak opposite polarities at these points and thus show a reaction on the suspended object and also a greater ocular effect in the vacuum tube.
Figure 4 is a detailed drawing of the vacuum tube principally used; this is of the spectrum type, without sealed-in electrodes and when introduced into the electrical fields, glows very brightly at its extremities, especially giving a sharp line bordering the dark space around the metallic object. A very high vacuum is sustained in the tube and it is found necessary to build it of a very perfect insulating glass; the bulb must be kept absolutely dry on its outer surface.
Were these "apertures" essential to the levitation effect? Did Piggott actually use "different sized metallic discs, which are cemented to insulating plates, forming condensers" on the sphere during his experiments? Were these some version of the one hundred million volt intensifiers proposed by Tesla? We can only wonder if these details were crucial to the levitation effect.
Additional Note: The essence of Piggott's pulse forming scheme is found in Fig. 7 of the patent:
The antenna or "aerial" is item 85 and is connected to the positive HV terminal. There are three sets of spark gaps shown (51), only one of which is in use at any one time. The central sphere (53) in the gap is used to vary the capacitance for tuning ("syntonizing") the emission (see also Righi Spark Gap). Spheres 51 are about 1.5 inches in diameter and the intermediate sphere (53) is about 3 or 4 inches in diameter. "When a signal is to be transmitted the negative terminal 51 is preferably adjusted to a position quite close to the intermediate discharge ball 53 while the positive discharge terminal 51 is placed about an inch away from the center discharge ball so that a heavy strong spark occurs between the positive sparking terminal and the center discharge ball." The two Leyden jars, or series of jars, are at 83 and are used for improving efficiency and for tuning. The signaling switch (59) in figures 4 and 5 is pressed to move a corona leak rod (66), allowing charge to rapidly build up and rapidly fire the spark gap (comprised of 51, 53, 51), the length of which is adjustable. (Note that the signal switch is used to turn OFF the leak; this machine was intended for signaling, not antigravity. See also #TrapsForUnwary)
What is a possible explanation for the lingering "black belt" and lingering antigravity effects seen in the experiments of Tesla and Piggott, and in observations of UFOs (see below)? A working hypothesis is that these are both manifestations of temporal momentum effects on the air in the surrounding environment.
It is well known that jet aircraft leave behind spatial momentum effects on the air during flight (wind, air turbulence, etc.) and that these effects take time to dissipate. The same could be true of temporal momentum effects on the air (absorption of light being one of them?). But in such a case, there may not be a change in spatial position, as this is a kind of "motionless motion." Likewise, jet aircraft don't suddenly fall out of the sky when they shut off their engines. Presumably, a similar effect may apply to UFOs. If their propulsion means were shut off, they would be expected to descend like a fluttering leaf or feather, not drop like a bomb.
Here is a partial description of Piggott's electrogravitic experiments from "Electric Flying Machines: Thomas Townsend Brown", Gerry Vassilatos ( http://www.bibliotecapleyades.net/ciencia/ciencia_flyingobjects25.htm ; http://borderlandresearch.com/book/lost-science/electric-flying-machines-thomas-townsend-brown/1 ):
"Mr. Piggot observed a strange electro-gravitational effect. It was first seen as the result of accidental occurrences while performing unrelated electrical experiments.
Mr. Piggot was able to suspend heavy silver beads . . . and other materials in the air space between a charged sphere and a concave ground plate when his generator was fully charged at 500,000 electrostatic volts. The levitational feat was only observed when the charged sphere was electropositive.
The Piggot effect was clearly not a purely electrical phenomenon. If it were, then the presence of the grounded plate would have destroyed the effect. The very instant in which a discharge passed to ground, every suspended object would have come crashing down. But, without the ground counterpoise, the levitational effect was not observed. Mr. Piggot believed that he was modifying the local gravitational field in some inexplicable manner, the effect being the result of interaction between the static field generator and some other agency: the ground.
Piggot further stated that heated metal marbles fell further away from the field center than cold ones. These suspended marbles remained in the flotation space for at least 1.25 seconds even after the static generator ceased rotating. The marbles fell very slowly after the field was completely removed; a noticeable departure from normal gravitational behavior.
Mr. Piggot stated that suspended objects were surrounded by a radiant “black belt”. . . . Effects developed by Piggot were entirely similar to those observed by Nikola Tesla, who employed high voltage electrostatic impulses.
The Piggott device certainly discharged its tremendous charge in a rapid staccato-like fashion to the ground plate. The rate of this disruptive unidirectional field . . . . certainly it was a very rapid impulse rate.
. . .
George Piggot mentioned the mysterious “black band” which appeared around his highly charged suspended metal marbles. Light seemed to disappear into these zones. But it was Nikola Tesla, whose forgotten and ignored testimony on the perceptual effects of high voltage electrical systems took first place. Tesla produced such intense electrical arcs that the same strange blackout effects were repeatedly observed. In the case of Tesla’s famed Colorado Springs Experiments, the blackout effect produced a lingering state . . . .
Noted in his published diary, the results followed the intense activity of his Magnifying Transformer. Visual distortions, clarifications, black shadows, black streamers, black waves, lingered for hours all around his plateau laboratory, whereby he stated that:
“These phenomena are so striking that they cannot be satisfactorily explained by any plausible hypothesis, and I am led to believe that possibly the strong electrification of the air, which is often noted to an extraordinary degree, may be more or less responsible for their occurrence.” "
UFOs with possible dark band ("black halo") effects:
(image has been enlarged and sharpened)
This strange ring was seen in England. Various conventional explanations are offered.
This is an actual smoke ring made by a smoke ring machine.
These rings "appeared over several areas of Copenhagen and Denmark".
Unconventional explanations would include the "black streamers" that Tesla wrote about. It could also be a partially localized UFO showing only the black band. A similar description from 1979: "It looked like two saucers joined at the rims by black band" (Flying Saucer Review, Jan-Feb 1979)
"UFO Outside Jet Liverpool April 2015" (seems to have a smoke-like wake as it moves. Watch the video.)
Possible example of partially localized UFO showing part of the black band.
January 28 2005
October 12, 2008
March 11, 2015
Note: If you want to inspect YouTube videos frame-by-frame or in slow motion, go to http://rowvid.com/ and paste in the URL of the YouTube video.
Black band effects even show up in infrared photos of UFOs.
UFO over İstanbul/Riva, filmed by Ümit Paker on 6.5.2015: http://www.youtube.com/watch?v=XObsbvPUKuU
This UFO shows a dark black wake which quickly disappears!
Another possible example of black halos and black wake effects.
This is the well-publicized "Gimbal" video. It is an infrared video taken from a military aircraft. You have to look closely to see the black aura surrounding the white object at the center of the photo.
This is another military photo taken by the Chilean air force. The aura can be seen, but appears to be intermittent (off and on).
Flying Saucer Review, Jan-Feb 1979: "It looked like two saucers joined at the rims by black band"
http://youtu.be/x_BnS683nCA 11-6-2015 UFO rods Sunnyvale, CA
http://www.youtube.com/watch?v=jBFo2xSDTbc (view in full screen mode)
https://youtu.be/_E23e9cye9M "INSANE! Best UFO Sightings Of June 2015 [Breaking News] Share This!" ("black halo" at 28:57 and 29:30)
https://www.youtube.com/watch?v=NO0QjKzTo-M "Freaky black circle over Disneyland"
http://www.youtube.com/watch?v=kWM0twSuBQY&ebc "Black Smoke Portals" Opening in Skies Around The World? (2015)
Milwaukee sightings 7-27-15 (Caution: profane language) http://youtu.be/2lWVqp0PhbI
"UFO Releasing Glowing Orbs Into a Formation in Western Massachusetts", http://youtu.be/Kp4jxRPCaz8 http://youtu.be/Yc5StQpaUqk
"UFO Orb or Government Experiment Caught on Camera (Strange Lights in the Sky)" http://youtu.be/hU7K2SUuqPM?list=UU3cUZkyN3CPqlw1BGzChsgA
"Video captures 10 white globes floating in sky above Osaka in Japan", http://youtu.be/xIG8TwyV_qE
"The Best UFO Cases Ever Caught On Tape", https://www.youtube.com/watch?v=N1H9S_Yk89Y https://youtu.be/N1H9S_Yk89Y?t=1995 (one of several)
"Mysterious rings were seen in Ulan-Ude" http://www.openminds.tv/wp-content/uploads/information-items_3643.jpg
". . . he took three photographs of the metallic-appearing object, and a fourth of a black "smoke ring" left behind by the object after it departed at high speed." (The UFO Evidence, Vol. 2, Richard H. Hall (2001) p. 284 )
http://www.caelestia.be/ringvortex.html "Ring-shaped vortices"
http://youtu.be/_BgJEXQkjNQ?t=71 (watch for a UFO at 1:11 (upper right to left) for about 5 frames)
http://www.youtube.com/watch?v=0drMT6bOpGY (UFO starting at 00:10 )
http://youtu.be/elpy1em9rOE (UFO starting at 00:28.45); http://youtu.be/lpdBEFuH-yk (UFOs starting at 00:12.06)
It should be noted that the 1880s and 1890s saw the development of high voltage electrical machines and spark gap "switches" that could be used to generate radio waves and microwaves:
Just one hundred years ago, J.C. Bose described to the Royal Institution in London his research carried out in Calcutta at millimeter wavelengths. He used waveguides, horn antennas, dielectric lenses, various polarizers and even semiconductors at frequencies as high as 60 GHz; much of his original equipment is still in existence, now at the Bose Institute in Calcutta. Some concepts from his original 1897 papers have been incorporated into a new 1.3-mm multi-beam receiver now in use on the NRAO 12 Meter Telescope. . . .
Hertz had used a wavelength of 66 cm; other post-Hertzian pre-1900 experimenters used wavelengths well into the short cm-wave region, with Bose in Calcutta [7,8] and Lebedew in Moscow independently performing experiments at wavelengths as short as 5 and 6 mm. https://www.cv.nrao.edu/~demerson/bose/bose.html
This means that the technology of those days was capable of producing high voltage, monopolar pulsed power with asymmetric rise and fall times. This is exactly the kind of circumstance that surrounds the discovery of electrical levitation effects. One would certainly wonder if this has anything to do with the "airship sightings" of the late 1890s.
http://altereddimensions.net/2013/mysterious-new-mexico-brilliant-flash-of-light-august-15-1999 ; http://www.sandia.gov/LabNews/LN09-10-99/meteor_story.html
Edward S. Farrow Gravity Reduction:
http://www.rexresearch.com/farrow/farrow.htm (weight reduction by means of a "condensing dynamo", circa 1911. Note the reference to "condensing dynamo" and "current sent the wheels in the dynamo whirring". The description is too vague to reproduce the device (words in the article and the photos suggest, possibly, it included the equivalent of a powerful ignition coil ("Ruhmkorff coil"), a rotary spark gap, and aerial wires). A patent was not issued.)
Ruhmkorff coils can produce monopolar pulsed high voltage with asymmetric time derivatives on the rise and fall times of the high voltage. This, and the Piggott experiment, suggest that high voltage pulsed monopolar (+) power with asymmetric time derivatives may somehow be connected with levitation or weight reduction effects.
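The notion of "asymmetric time derivatives" can be illustrated with a simple double-exponential pulse model. The time constants below are arbitrary assumptions, not measured values for any actual coil:

```python
import math

# A monopolar pulse with a fast rise and a slow decay, modeled as a
# difference of two exponentials. tau_rise << tau_fall makes dV/dt on
# the rising edge far steeper than on the falling edge.
def monopolar_pulse(t, tau_rise=10e-9, tau_fall=1e-6):
    if t < 0:
        return 0.0
    return math.exp(-t / tau_fall) - math.exp(-t / tau_rise)

# Sample the waveform: it climbs within tens of ns, then relaxes over us.
for t in (0.0, 50e-9, 500e-9, 3e-6):
    print(f"t = {t*1e9:6.0f} ns  v = {monopolar_pulse(t):.3f}")
```

Reversing the two time constants models the opposite asymmetry (slow rise, fast fall), the case suggested above for Piggott's grounded-sphere configuration.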
Ruhmkorff coil: http://www.sparkmuseum.com/INDUCT.HTM
http://www.thebirdman.org/Index/Others/Others-Doc-Science&Forteana/+Doc-Science-StrangePhysics/TamingGravity.htm (The article states "In all likelihood it was no more than an electromagnet". This is very unlikely given the above descriptions.)
Technical World Magazine, "Gravity Conquered at Last", Vol. XVI, No. 3, November 11, pages 257-260 (4 pictures; alternate source)
"How to overcome Gravity by Hertzian Air Waves", http://query.nytimes.com/mem/archive-free/pdf?res=F40C12FF3A5517738DDDAF0994DF405B818DF1D3 (the reference to Hertzian air waves implies that the wires going out of the photo are aerials intended to spread the effect)
"Science versus Gravity" (Flight Magazine, Dec 2, 1911) http://www.flightglobal.com/pdfarchive/view/1911/1911%20-%201046.html
(Notice the stack of "pancake" windings)
"Induction Coils: How to Make, Use and Repair Them", H.S. Norrie:
http://www.electrotherapymuseum.com/2005/Norrie/index.htm (4th edition, 1907?)
http://archive.org/download/inductioncoilsho00schn/inductioncoilsho00schn.pdf (2nd edition, 1901; note spark length of 45 inches)
Dr. Francis E. Nipher Electro-Gravitic Experiments
http://www.rexresearch.com/nipher/nipher1.htm#1 (studies on gravitation)
(Dr. Nipher was an esteemed educator and was professor of physics at Washington University at St. Louis, Missouri. He was also president for several years of the St. Louis Academy of Science and of the Engineers Club. He wrote several valuable papers in the late 1800s. Nipher Middle School was named after him.)
"In 1979, Hutchison claims to have discovered a number of unusual phenomena, while trying to duplicate experiments done by Nikola Tesla. He refers to several of these phenomena jointly under the name “the Hutchison effect”, including: levitation of heavy objects; fusion of dissimilar materials such as metal and wood; while lacking any displacement, the anomalous heating of metals without burning adjacent material; the spontaneous fracturing of metals; changes in the crystalline structure and physical properties of metals; disappearance of metal samples."
http://www.scribd.com/doc/15125148/Secrets-of-Cold-War-Technology (p 50; Hutchison effect by Tesla?)
Searl effect (and similar):
"An Example of Self-Acceleration for Incompressible Flows", Jens Lorenz, Randy Ott (2013) http://www.math.unm.edu/~lorenz/publi/selfac.pdf (some sort of possible relevance?)
Roschin and Godin:
"Orbiting Multi-roller Homopolar System", Vladimir Vitalievich Roschin, Sergi Mikhailovich Godin (2004)
About Strange Effects Related to Rotating Magnetic Systems, M. Pitkänen http://www.worldsci.org/pdf/ebooks/Pitknen-AboutStrangeEffectsRelatedtoRotatingMagneticSystems.pdf (see chapter 3)
"The Morningstar Energy Box", Paul A. Murad, Morgan J. Boardman, John E. Brandenburg, Jonathan McCabe, Wayne Mitzen http://www.morningstarap.com/downloads/Morningstar%20Energy%20Box%20AIAA%202012_4_920.pdf
"Abstract. The Morningstar Energy Box is a derivative of both the Searl device and a variant of the Russian Scientists Godin and Roschin. Laminated rollers and a main ring with ferromagnetic fluid are used to enhance electrical and magnetic properties. The device is constrained by a mechanical cage to hold the rollers. An operational theory for the Energy Box uses rotating electromagnetic fields different from either Searl or the Russians. Moreover, the Russians made several serious claims that produced self-acceleration to generate electricity, created a large weight loss when spun in one direction and weight gain when spun in the opposite direction. They also claimed their device generated discrete magnetic walls. To date, no one has validated these outrageous Russian claims. However, the Energy Box found similar phenomenon regarding the discrete magnetic walls with both weight gain and loss, although not at the same magnitude. Where they claimed to lose as much as 35% of the weight of a 375 kg armature, the Energy Box in an early test only lost 2 to 5 pounds of its 190 pounds at steady-state. During transient rotation changes, the weight change dropped as much as 20 to 40 pounds. However, a last test series recorded a weight lost of 14 pounds with a 7.3% change during steady-state. We can state that we saw similar phenomena as the Russian claims as well as lost weight and the device may represent an advanced propulsion scheme for space travel."
Tapered Ring Device
It would be insightful to know if (or how) the tilt and rotation addresses the gravitational symmetry problem. Also, atoms do have a space/time "polarity" and so it is not necessarily "outrageous" that rotation would produce "a large weight loss when spun in one direction and weight gain when spun in the opposite direction."
Nikola Tesla: "Tesla was sure that this new discovery would produce a completely new breed of inventions, once tamed and regulated. Its effects differed completely from those observed in high frequency alternating current. These special radiant sparks were the result of non-reversing impulses. In fact, this effect relied on the non-reversing nature of each applied burst for its appearance. A quick contact charge by a powerful high voltage dynamo was performing a feat of which no alternating generator was capable. Here was a demonstration of “broadcast electricity”.
Most researchers and engineers are fixed in their view of Nikola Tesla and his discoveries. They seem curiously rigidified in the thought that his only realm of experimental developments lay in alternating current electricity. This is an erroneous conception which careful patent study reveals. Few recognize the documented facts that, after his work with alternating currents was completed, Tesla switched over completely to the study of impulse currents. His patents from this period to the end of his career are filled with the terminology equated with electrical impulses alone.
The secret lay principally in the direct current application in a small time interval. Tesla studied this time increment, believing that it might be possible to eliminate the pain field by shortening the length of time during which the switch contact is made. In a daring series of experiments, he developed rapid mechanical rotary switches which handled very high direct voltage potentials. Each contact lasted an average of one ten-thousandth second.
Exposing himself to such impulses of very low power, he discovered to his joy and amazement that the pain field was nearly absent. In its place was a strange pressure effect which could be felt right through the copper barriers. Increasing the power levels of this device produced no pain increase, but did produce an intriguing increased pressure field. The result of simple interrupted high voltage DC, the phenomenon was never before reported except by witnesses of close lightning strokes. This was erroneously attributed however to pressure effects in air.

Tesla made electrical measurements of this projective stream. One lead of a galvanometer was connected to a copper plate, the other grounded. When impulses were applied to wire line, the unattached and distant meter registered a continual direct current. Current through space without wires! Now here was something which impulses achieved, never observed with alternating currents of any frequency.
Analysis of this situation proved that electrical energy or electrically productive energies were being projected from the impulse device as rays, not waves. Tesla was amazed to find these rays absolutely longitudinal in their action through space, describing them in a patent as “light-like rays”. These observations conformed with theoretical expectations described in 1854 by Kelvin.
In another article Tesla calls them “dark-rays”, and “rays which are more light-like in character”. The rays neither diminished with the inverse square of the distance nor the inverse of the distance from their source. They seemed to stretch out in a progressive shock-shell to great distances without any apparent loss.
. . .
Most imagine that the Tesla impulse system is merely a “very high frequency alternator”. This is a completely erroneous notion, resulting in effects which can never equal those to which Tesla referred. The magnetic discharge device was a true stroke of genius. It rapidly extinguishes capacitor charge in a single disruptive blast. This rapid current rise and decline formed an impulse of extraordinary power. Tesla called this form of automatic arc switching a “disruptive discharge” circuit, distinguishing it from numerous other kinds of arc discharge systems. It is very simply a means for interrupting a high voltage direct current without allowing any backward current alternations. When these conditions are satisfied, the Tesla Effect is then observed.
. . .
The asymmetrical positioning of the capacitor and the magnetic arc determines the polarity of the impulse train. If the magnetic arc device is placed near the positive charging side, then the strap is charged negative and the resultant current discharge is decidedly negative.
Tesla approached the testing of his more powerful systems with certain fear. Each step of the testing process was necessarily a dangerous one. But he discovered that when the discharges exceeded ten thousand per second, the painful shock effect was absent. Nerves of the body were obviously incapable of registering the separate impulses. But this insensitivity could lead to a most seductive death. The deadly aspects of electricity might remain. Tesla was therefore all the more wary of the experiments.
He noticed that, though the pain field was gone, the familiar pressure effect remained. In its place came a defined and penetrating heat."
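The "disruptive discharge" circuit that interrupts a high voltage direct current "without allowing any backward current alternations" reads like an overdamped capacitor discharge. A minimal numerical sketch, using arbitrary assumed component values that satisfy the overdamping condition R > 2*sqrt(L/C), shows that such a discharge current never reverses sign:

```python
import math

# Hypothetical component values chosen so that R > 2*sqrt(L/C) (overdamped),
# illustrating a capacitor discharge with no backward current alternation.
R, L, C = 100.0, 1e-3, 1e-6        # ohms, henries, farads (assumed values)
assert R > 2 * math.sqrt(L / C)    # overdamping condition

V0 = 10_000.0                      # initial capacitor voltage (assumed)
dt, steps = 1e-8, 30_000
i, vc = 0.0, V0
currents = []
for _ in range(steps):             # explicit Euler integration of the loop
    di = (vc - R * i) / L * dt     # series loop: L di/dt = vc - R*i
    dvc = -i / C * dt              # capacitor drains through the loop
    i, vc = i + di, vc + dvc
    currents.append(i)

# The current rises, decays, and never swings negative: a monopolar impulse.
assert min(currents) > 0
assert max(currents) > 1
```

With an underdamped choice of R, L, C the same loop would ring and the current would alternate; the overdamped case is the one matching the "no backward alternations" description.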
See also Poynting vector insights (below) and Biconical Fast Spark gaps.
"Dynamo Electric Machine", Nikola Tesla (1889) http://www.freepatentsonline.com/0406968.pdf
The Free Energy Secrets of Cold Electricity, Peter A. Lindemann, D.Sc., http://www.teslasociety.ch/info/NTV_2011/free.pdf
http://nrgnair.com/MPT/zdi_tech/tesla/common/radiant/TRE1.htm ( http://donsmithcoils.blogspot.com/2010/06/don-l-smith-device.html , http://www.youtube.com/watch?v=TI5XWz8aZvo, http://cactuss.ru/wp-content/uploads/sites/4/2012/05/pjkbook-21-extract.pdf , http://freenrg.info/Misc/Resonance_NRG_Methods_Donald_Smith.pdf) ,
Magnetohydrodynamic (field propulsion but not antigravity)
"Magnetohydrodynamic propulsion apparatus", J. F. King (1967) http://www.freepatentsonline.com/3322374.pdf
The chaotic electrical environment associated with tornadoes includes pulsed monopolar electrical power, polarity reversals, rotating electric fields, radio waves, etc. Could this environment produce weird levitation effects that are not explainable by air flow? While still speculative, there are suspicions that it can. See "Tornadic levitation" in: The Electromagnetic Nature of Tornadic Supercell Thunderstorms, Charles L. Chandler (2007~2014)
"Physical Principles of Advanced Space Propulsion Based on Heims's Field Theory" Walter Dröscher, Jochem Häuser (2002)
"The coupling obtained from Heim’s theory is derived from fundamental principles, and is very different from the ones obtained by other ad-hoc approaches. Heim's theory is therefore much more interesting, since it may allow gravity manipulation at lower energy densities, and is based on new physics, thereby leading to new predictions. There may be several new and surprising physical phenomena with far reaching consequences that are predicted by Heim's theory. Some of these can be checked against presently available experimental data, both from cosmology and quantum physics. The physical principle is presented of how to construct a space propulsion device that does not use any propellant, instead is based on an energy transformation process. . . .
Since the interaction between gravitation and electromagnetism reduces the inertial mass of a material object, it is called inertial transformation. Since conservation laws for momentum and energy are strictly adhered to, the theory requires superluminal velocities, without contradicting Einstein's theory of relativity. Heim's physical theory, provided it reflects physical reality, has the potential to lead to a completely new concept of space transportation." http://www.hpcc-space.com/publications/documents/PrinciplesOfAdvancedSpacePropulsionAIAA-paper-2002-4094.pdf
"Guidelines for a Space Propulsion Device based on Heim's Quantum Theory", Walter Dröscher, Jochem Häuser (2004)
"According to HQT, a transformation of electromagnetic energy into gravitational energy should be possible. It is this interaction that is used as the physical basis for the novel space propulsion concept, termed field propulsion [1, 2], which is not conceivable within the framework of current physics." http://www.hpcc-space.com/publications/documents/aiaa2004-3700-a4.pdf (italics are in original)
"Coupled Gravitational Fields A New Paradigm for Propulsion Science", Walter Dröscher, Jochem Hauser (2010)
"There seems to be substantial evidence of novel gravitational phenomena, based on both new theoretical concepts as well as recent experiments by Tajmar et al. at AIT, Austria that may have the potential to leading to advanced space propulsion technology, utilizing two novel fundamental force fields. According to EHT these forces are represented by two additional long range gravity-like force fields that would be both attractive and repulsive, resulting from interaction of electromagnetism with gravity. . . .
A simple analogy is used to differentiate between the classical rocket principle (including all other means of propulsion) and the novel field propulsion concept of EHT incorporating spacetime as a physical quantity. Suppose a boat is in the middle of a large lake or ocean. In order to set the boat in motion, a force must be mediated to the boat. The classical momentum principle requires that a person in the boat is throwing, for instance, bricks in the opposite direction to push the boat forward. However, everybody is well aware of the fact that there is a much better propulsion mechanism available. Instead of loading the boat with bricks, it is supplied with sculls, and by rowing strongly the boat can be kept moving as long as rowing continues. The important point is that the medium itself is being utilized, i.e., the water of the lake or ocean, which amounts to a completely different physical mechanism. The rower transfers a tiny amount of momentum to the medium, but the boat experiences a substantial amount of momentum to make it move. For space propulsion the medium is spacetime itself. Thus, if momentum can be transferred to spacetime by field propulsion, a repulsive or recoil force would be acting on the space vehicle moving it through the medium, like a rowing boat. " http://www.hpcc-space.com/publications/documents/AIAA2010-021-NFF-1.pdf
James E. Cox
"Dipolar force field propulsion system"
Glenn E. Hagen
(atmospheric ion propulsion system)
Chris B. Hewatt patent
"Method and apparatus for gyroscopic propulsion" (???)
Jean Claude Lafforgue patent:
"Isolated systems self-propelled by electrostatic forces" (March 1, 1991) FR 2651388 http://worldwide.espacenet.com/publicationDetails/biblio?DB=EPODOC&adjacent=true&locale=en_EP&FT=D&date=19910301&CC=FR&NR=2651388A1&KC=A1
(See also http://jnaudin.free.fr/lfpt/index.html )
E. J. Saxl patent:
"Device for measuring gravitational and other forces", http://www.freepatentsonline.com/3357253.pdf
"Method and apparatus for generating propulsive forces without the ejection of propellant", http://www.freepatentsonline.com/6098924.pdf
"Method for transiently altering the mass of objects to facilitate their transport or change their stationary apparent weights" http://www.freepatentsonline.com/5280864.pdf
Rex L. Schlicher
"Nonlinear electromagnetic propulsion system and method"
Hector L. Serrano patent:
WO 2000058623 http://worldwide.espacenet.com/publicationDetails/originalDocument?FT=D&date=20010118&DB=EPODOC&locale=en_EP&CC=WO&NR=0058623A3&KC=A3
"Propulsion Device and Method Employing Electric Fields for Producing Thrust" http://www.freepatentsonline.com/6492784.pdf
Alexander P. de Seversky
Leon Sprink, Jacques Ravatin patents:
WO8000293 // FR2421531
Jonathan W. Campbell patents:
Henry Wm Wallace patents:
http://www.freepatentsonline.com/3626605.pdf , "Method and Apparatus for Generating a Secondary Gravitational Force Field"
http://www.freepatentsonline.com/3626606.pdf , "Method and Apparatus for Generating a Dynamic Force Field"
http://www.freepatentsonline.com/3823570.pdf , "Heat Pump"
"The Wallace inventions, spin aligned nuclei, the gravitomagnetic field, and the Tampere experiment: Is there a connection?", Robert Stirniman, May 1998, http://antigravitypower.tripod.com/stirniman/stirniman21.html
"Nonlinear Electromagnetic Propulsion System and Method", http://www.freepatentsonline.com/5142861.pdf
Saxl Torsion Pendulum
http://adsabs.harvard.edu/abs/1971PhRvD...3..823S , http://prd.aps.org/abstract/PRD/v3/i4/p823_1
"Van de Graaff Generator Effect", Charles R. Morton http://amasci.com/freenrg/morton1.html
Alexander Frolov's ELG-Hat Capacitor (related to Morton effect?)
Counter-rotating (or opposed) magnetic fields effect:
"Systems for producing gravity neutral regions between magnetic fields, in accordance with ECE-theory", Charles W. Kellum (2012) http://www.freepatentsonline.com/20120105181.pdf
"Crossfield-Homopolar Device", Charles W. Kellum (2012) http://aias.us/documents/DeviceDev/HPexp2.pdf
"Propulsion System Using the Antigravity Force of the Vacuum and Applications", Baptista de Alves Martins (2010)
http://www.freepatentsonline.com/WO2010151161A2.pdf ; http://www.freepatentsonline.com/WO2010151161A8.pdf (119 pages, 22 figures; several references to magnetic vector potential)
"John Brandenburg on Antigravity and Gravity-Control", http://www.americanantigravity.com/news/space/john-brandenburg-on-antigravity-and-gravity-control.html
"Moving flame experiment with liquid mercury: possible implications for the Venus atmosphere", Schubert, G. and J. A. Whitehead, (1969) Science, 163, 71--72
Abstract. A bunsen flame rotated under a cylindrical annulus filled with liquid mercury forces the liquid mercury to rotate in a direction counter to that of the rotating flame. The rate of rotation of the liquid is several times greater than that of the flame. This observation may provide an explanation for the high velocities of apparent cloud formations in the upper atmosphere of Venus. http://www.whoi.edu/cms/files/69Sch&WhScience_32423.pdf
"Anomalous weight reduction on a gyroscope's right rotation around the vertical axis of the earth", H. Hayasaka and S. Takeuchi (1989) http://earthtech.org/experiments/tajmar/papers/p2701_1.pdf
"Responding to Mechanical Antigravity", Marc G. Millis, Nicholas E. Thomas (2006) http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20070004897_2007004127.pdf
91. Triboexcitation of Sorrento (FL) Red Sand.
Catalina Island; March 30, 1973.
Test No. 90 has been repeated today, making sure that the weighing was accurately done at the Avalon Post Office (It is now confirmed by the Postmaster, Pete G. Salamunovich).
The sample of red sand which was tested was contained (as in Sec. 90) in a glass Mason jar. In the two days since the last excitation test on March 28, the weight had returned to normal; i.e., 1 lb-14-1/2 oz. It was then shaken for 30 minutes and immediately (within 3 minutes) weighed. It then weighed less than 1 lb-14-1/4 oz, having lost at least 1/4 oz, possibly 0.3 oz.
This loss of weight (if 0.3 oz is considered) represents a greater degree of excitation than that recorded in Test 90. This may have been expected, as the duration of shaking was increased by 10 minutes. This represents a loss of weight of 1 part in 101.6, or 0.984%. This represents an excitation of 9.84 millighos, or a value of g of approximately 970.6 cm/sec2!
This apparent confirmation is intriguing, to say the least!
T.T. Brown (3-30-73)
Witnessed: J.P. Quillin (3-30-73)
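Brown's figures can be checked arithmetically. Assuming a starting weight of 1 lb 14-1/2 oz (30.5 oz), the larger stated loss of 0.3 oz, and a reference g of about 980.2 cm/sec2 (our guess at the value he used; that is an assumption):

```python
# Checking the arithmetic in Brown's note: 1 lb 14-1/2 oz = 30.5 oz,
# with the assumed loss of 0.3 oz.
start_oz = 16 + 14.5               # 30.5 oz
loss_oz = 0.3

fraction = loss_oz / start_oz
print(round(1 / fraction, 1))      # ~101.7, i.e. Brown's "1 part in 101.6"
print(round(100 * fraction, 3))    # 0.984 (percent), matching the note

# Brown's g figure is consistent with a reference g0 of ~980.2 cm/s^2
# (an assumption on our part about the value he started from):
g = 980.2 * (1 - fraction)
print(round(g, 1))                 # 970.6 cm/s^2, matching the note
```

So the 0.984% and 970.6 cm/sec2 figures are internally consistent, which is also why the measured loss is more plausibly 0.3 oz than 1/2 oz.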
(BF comment: My initial reaction to this was "This is just crazy!" But motion has both a spatial component and a temporal component (not the same thing as clock time). Gravitation is mostly temporal motion. Apparently, shaking the sand causes the atoms to seek a new equilibrium with the combination of the two motions; the added temporal component would express itself as a potential, and would have a sign opposite to the normal gravitational motion. This would manifest itself as a weight loss. When the shaking stops, this kind of temporal motion should "decay", somewhat like a diffusion (i.e., non-directional). Hence, this experiment might not be as crazy as it at first seems.)
Around 1870, Thomson had conducted experiments which seemed to indicate that “gravitational action” could be induced by spheroidal bodies oscillated by electrical currents or mechanical pulses (F. Guthrie Phil. Mag. xli , p. 405). The surface pulsations could cause attractions or repulsions in respect to other bodies, as verified by Thomson. Tesla was aware of Thomson’s work during his student days in Graz, Austria, beginning 1875, when he was 19.
Thomson’s work undoubtedly served as the spark of inspiration for Tesla in his early conception of an “ideal flying machine” which would be propelled by electricity acting upon the ether. This explains Tesla’s continual references to Thomson, such as demonstrating during his 1892 London lecture, a ‘luminous wire’ sign powered by a Tesla coil, which said “WILLIAM THOMSON”.
At first, Thomson found that ponderomotive forces act between two solid bodies immersed in an incompressible fluid, when one of the bodies is immobilized and made to oscillate with a force which acts along a line between its center and that of a much larger sphere which is free. The free sphere was attracted to the smaller (immobilized) sphere, if its density was greater than the fluid, while a sphere of less density than the fluid was repelled or attracted, according to the ratio of its distance to the vibrator in relation to a certain quantity (Phil. Mag, xli , p. 405; Letter, Thomson to F. Guthrie, p. 427.)
Thomson’s experiments were analogical ones, for which he had evoked praise from his contemporaries even when he was still a teenager, although his refusal to believe anyone’s assertions unless he could build an analogical model to prove them often led to the consternation of those of his contemporaries, such as Maxwell, who relied often on mathematical equations. The sphere experiments were designed to use mechanical and electrical wave methods to construct a model to probe the gravitational, inertial and momentive reactions of solid bodies in the ether.
The Faraday effect—the rotation of the plane of polarization of radiation in a dielectric medium (such as the atmosphere, space, and certain solid materials) in a magnetic field—stated that the angle of rotation of radiation is proportional to the magnetic field strength and the length of the path in the medium in the field. These early experimenters knew there was a connection between the rotatory motion and momentum, and sought to find it.
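The stated proportionality can be written as beta = V * B * d, where V is the Verdet constant of the medium. A minimal sketch with an assumed, purely illustrative Verdet constant (not a measured value for any real material):

```python
# Faraday rotation: beta = V * B * d -- the rotation angle of the plane of
# polarization is proportional to the field strength B and the path length d.
# The Verdet constant below is an illustrative placeholder, not a real value.
def faraday_rotation(verdet_rad_per_T_m, B_tesla, path_m):
    """Rotation angle (radians) of the polarization plane."""
    return verdet_rad_per_T_m * B_tesla * path_m

V = 100.0                                # rad/(T*m), assumed value
beta1 = faraday_rotation(V, 0.5, 0.01)
beta2 = faraday_rotation(V, 1.0, 0.01)   # doubling the field ...
assert abs(beta2 - 2 * beta1) < 1e-12    # ... doubles the rotation angle
```

The linearity in both B and d is the whole content of the effect as stated in the text.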
The rotatory (versus the linear) character of magnetic phenomena was strengthened by Thomson’s experimentally verified conclusions on the magnetic rotation of light. This rotatory character not only influenced Tesla’s discovery of the rotating magnetic field, but is also fundamental to inertia and momentum, as I will later explain, since movement of a charged body constitutes a current which creates a magnetic field which creates the rotatory motion which “bores” through the ether like a drill to create momentum.
Thomson’s system was later investigated by C.A. Bjerknes between 1877 and 1910. Bjerknes showed that when two spheres immersed in an incompressible fluid were pulsated, they exerted a mutual attraction which obeyed Newton’s inverse square law if the pulsations were in phase, while if the phases differed by a half wave, the spheres repelled. At one quarter wave difference, there was no action. Where pulses were non-instantaneous at distances greater than a quarter wavelength, attractions and repulsions were reversed (Repertorium d. Mathematik I [Leipzig, 1877], p. 268; Proc. Camb. Phil. Soc. iii , p. 276; iv , p. 29).
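Bjerknes's phase rule amounts to a force proportional to cos(delta_phi)/r^2: attraction in phase, repulsion at half-wave difference, no force at quarter-wave. A toy sketch with arbitrary normalization (the cos(delta_phi)/r^2 form is our summary of the rule, not Bjerknes's own formula):

```python
import math

# Toy model of Bjerknes's observation: pulsating spheres attract when in
# phase, repel when a half wave apart, and feel no force at a quarter-wave
# difference. Sign convention: positive = attraction. Normalization arbitrary.
def bjerknes_force(phase_diff, r):
    return math.cos(phase_diff) / r**2   # inverse-square, phase-dependent

assert bjerknes_force(0.0, 2.0) > 0                   # in phase: attraction
assert bjerknes_force(math.pi, 2.0) < 0               # half wave: repulsion
assert abs(bjerknes_force(math.pi / 2, 2.0)) < 1e-12  # quarter wave: no force
# inverse-square check: quadrupling the distance cuts the force by 16x
assert abs(bjerknes_force(0.0, 4.0) * 16 - bjerknes_force(0.0, 1.0)) < 1e-12
```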
"Aether Vibrations-A Wave Based Universe" (2012) http://www.bibliotecapleyades.net/ciencia/ciencia_fisica36.htm ; http://beforeitsnews.com/alternative/2012/07/aether-vibrations-2357376.html?currentSplittedPage=0
Scalar or torsion waves now seem to play a significant role in explaining our physical reality. Although torsion fields are very weak they can be measured using torsion beam balances that were first developed by Kozyrev. Torsion waves create minute forces in matter and that’s how they can be detected.
Torsion fields can be either static or dynamic. Static torsion fields can take on the form of vortexes like the one mentioned in the implosion physics of Daniel Winter. These static vortex torsion fields in the fabric of the vacuum space can stay in one place for a very long period of time. Kozyrev discovered that torsion fields can also propagate through space as torsion waves at tremendous speeds, at least one billion times the speed of light (10^9 c).
He noticed that all physical objects both absorb and radiate torsion waves.
By shaking, vibrating, deforming, heating and cooling physical objects they generate measurable torsion waves. Even the displacement of an object generates torsion waves that can be measured. All movement therefore from the vibrations of atoms to the orbits of our planets and stars leaves their traces in the form of torsion waves in the aether.
A very remarkable phenomenon that Kozyrev discovered by rotating gyroscopes is that they lose very small but measurable amounts of weight. Firmly shaking objects could also make them lose weight. From our current understanding of physics this is quite impossible! It violates all physical laws; how can solid matter lose weight when it is spun at high speeds or shaken?
If we still believe that matter is made of little hard marbles called particles, yes this would be a great mystery!
However Kozyrev showed that the gyroscopes shed more torsion waves when shaken or spun, so that aetheric energy that sustains the object was shed back into the background sea of the aether. The momentary loss of aether energy accounted for the weight drop.
Dr. Harold Aspden of Cambridge University discovered a related phenomenon. He attached a powerful magnet to a gyroscope and spun it at high speed, and measured the energy required to accelerate the gyroscope to full speed at 1000 joules. To his surprise, when he stopped the gyroscope and then restarted it within 60 seconds, it required 10 times less energy to bring it to the same speed.
The spin of the gyroscope had added extra spin to the aether that sustains the gyroscope that lasted for a while before it wore off, rather like the momentum stored in the tea of a teacup after stirring it with a teaspoon. We now know that spinning magnets are strong torsion wave generators.
"At about this same time , a small antigravitational device was independently developed in Paris. In this, a highly charged mica disc spun at high rate and levitated when electrostatically charged (Ducretet)." (Lost Science, Gerry Vassilatos (1999) p. 243 ; Lost Science, Gerry Vassilatos, http://www.tuks.nl/pdf/Reference_Material/Aetherforce_Libary/Lost%20Science/Gerry%20Vassilatos%20-Lost-Science-Complete-Edition.pdf p.174 )
Poynting vector insights (electromagnetic momentum)
[edit in progress]
Another twist with the Poynting vector is embodied in the Crossed Field Antenna (CFA), which is constructed much like a capacitor, but acts like a radio antenna. Its claimed advantage is small size and higher gain, especially at long wavelengths, when compared to conventional towers or wire aerials. In a capacitor, the Poynting vector is always directed inward (even when the current reverses) but in the CFA, the E and B fields are manipulated so that the Poynting vector (S) is directed outward:
Radio Antennas, Maurice C. Hatley, Fathi M. Kabbary (1992)
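The relation behind these claims is the Poynting vector S = E × H. The sketch below only demonstrates the cross-product geometry with arbitrary field values; it says nothing about whether the CFA actually synthesizes such fields:

```python
import numpy as np

# The Poynting vector S = E x H gives the direction of electromagnetic
# energy/momentum flow. In a charging capacitor the fields arrange so that S
# points inward; the CFA claim is that independently driven E and H can make
# S point outward. This sketch shows only the cross-product geometry.
def poynting(E, H):
    return np.cross(E, H)

E = np.array([1.0, 0.0, 0.0])   # E along +x (arbitrary magnitudes)
H = np.array([0.0, 1.0, 0.0])   # H along +y
S = poynting(E, H)
assert np.allclose(S, [0.0, 0.0, 1.0])   # S along +z

# Reversing either field alone flips the flow direction:
assert np.allclose(poynting(-E, H), [0.0, 0.0, -1.0])
```

The point of the CFA argument is that if E and H can be steered independently, the direction of S can be steered too, which is exactly what the last assertion illustrates.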
As far as I know, this scheme has never been tested for gravity/antigravity effects. The voltages and waveforms might not be conducive to these effects anyway. But the configuration raises interesting questions about Poynting vector manipulations and directional control. The commonly accepted paradigm is that a meaningful Poynting vector is generated only if the E and B (or H) fields are from the same source (cannot be independent) and must be time varying. But the CFA alters the fields separately so as to create a "synthetic" Poynting vector (forbidden fun!). Physicist Feynman also reminds us about independent E and B fields; see example. This implies that our current understanding of Maxwell's equations might be merely a subset of something more general. That, in turn, might help explain reports of some bizarre effects (unrelated to gravity) of unusual configurations of electromagnetic equipment. Here is a snippet of a report about a Nazi weapon intended to stop Allied aircraft engines:
"This 'transmitter was a strange contraption, a tower surrounded by an array of posts with pear-shapted [sic] knobs on top. At the same time a similar system was erected on the peak of the Feldberg near Frankfurt. When it began operation, there were soon reports of strange phenomena in the vicinity of the Brocken tower. Cars traveling along the mountain roads would suddenly have engine failure. A Luftwaffe sentry would soon spot the stranded car, and tell the puzzled motorist that it was no use trying to get the car started at present. After a while, the sentry would tell the driver that the engine would work again now, and the care [sic] would then start up and drive away." Hitler's Suppressed and Still-Secret Weapons, Science and Technology, Henry Stevens (2007) page 170-189 http://www.amazon.com/Hitlers-Suppressed-Still-Secret-Weapons-Technology/dp/1931882738#reader_1931882738 (Search inside with "motorstoppmittel" and then select page 170 from list;
See also http://en.wikipedia.org/wiki/Levelland_UFO_Case , "The Tex Files: Levelland UFOs" http://www.myfoxdfw.com/story/17512052/the-tex-files-levelland-ufos ;"UFOs: More Engine Effects", James McCampbell (1985) http://www.nicap.org/More_Engine_Effects.htm , http://www.nicap.org/papers/ufointerf.htm )
"And like Tesla, Marconi was reported to have been working on a war-ray. His, it was said, would when perfected be able to stop airplane and other motors many miles before invading forces could reach their goals. . . . Marconi said little about his mysterious ray, nor will Tesla discuss the details of his. It is his secret and he will not reveal it, he says, except to the United States Government . . . . But of what it will do, he speaks freely. "This new type of force," he said the other day, "would operate through a beam one one hundred-millionth of a centimeter in diameter. . . . This beam would melt any engine, whether Diesel or gasoline-driven." (Marconi's partly-perfected beam was said to be ineffective against Diesel engines). "It would also ignite any explosives aboard any bomber. No possible defense against it could be devised, as the beam would be all-penetrating." " ( "THE NEW ART OF PROJECTING CONCENTRATED NON-DISPERSIVE ENERGY THROUGH NATURAL MEDIA System of Particle Acceleration for Use in National Defense", Circa May 16, 1935, Briefly Exposed by NIKOLA TESLA, http://www.teslaradio.com/pages/teleforce.htm p. 25/26 This article has several links to several news stories on this topic from about 1934 to 1940. The stories would have been easily accessible to Nazi scientists of those days, and they could have figured out the remaining necessary details, as others evidently did before this time period.)
"Inventor Hides Secret of "Death Ray" ", "Before a group of scientists, it is reported, he once demonstrated that the radiations would kill rats, mice, and rabbits, even when the animals were incased in a thick-walled metal chamber. " https://books.google.com/books?id=2CYDAAAAMBAJ&q=Longoria#v=snippet&q=Longoria&f=false Dr. Antonio Longoria; Popular Science (Feb 1940) p. 117 )
"The new Death-dealing Diabolic Rays," H. Grindle-Matthews (August 1924) http://www.americanradiohistory.com/Archive-Popular-Radio/Popular-Radio-1924-08.pdf paper page 148-155
"In 1923 Matthews claimed that he had invented an electric ray that would put magnetos out of action. In a demonstration to some select journalists he stopped a motorcycle engine from a distance. He also claimed that with enough power he could shoot down aeroplanes, explode gunpowder, stop ships and incapacitate infantry from the distance of four miles. Newspapers obliged by publishing sensational accounts of his invention."
The "Death Ray" (or "Beam") may have something to do with neutrinos or the Weyl fermion.
Others include the so-called Hutchison Effect with its reports of levitation of heavy objects, delocalization ("dematerialization") of objects, anomalous mutilation of metals, and so forth, and the Searl effect with reports of weight loss, gravitational effects, temperature decrease, "magnetic walls", etc. What these configurations seem to have in common is the presence of high voltages, electromagnetic waves ("radio waves"), rotating fields, and a mixture of both static and time varying electric and magnetic fields—or something like that. ( See also: Orbiting Multiroller Homopolar System, Roschin, et al. http://www.freepatentsonline.com/6822361.pdf and Tesla's patent "Dynamo-Electric Machine" (1889) http://www.freepatentsonline.com/406968.pdf ; "Beyond Electromagnetic Waves", Bibhas De, http://www.bibhasde.com/radiocomm.html ; "Vacuum Electromagnetic interaction", B R De, J. Phys. A: Math. Gen. 26 (1993) 7583-7588 http://www.bibhasde.com/veipaper.pdf ; "How to build a flying saucer", Pentagon Aliens, William Lyne, 3rd edition, p. 195-218 http://www.whale.to/b/lyne.pdf)
This raises questions I wish someone would investigate:
1. Accelerated electric charges produce ordinary electromagnetic radiation. What effect would be produced by accelerated magnetic charges? See: Radiation from a charge in circular motion and math error note . The latter note suggests that spatially accelerated magnetic fields and the first time derivative of an electric field will both independently produce Poynting vectors and therefore a momentum flow; a combination of these might possibly produce an effect very different from ordinary electromagnetic radiation (radio waves). See UFO Physics and note what is said about Weyl fermions and neutrino currents.
2. The speed of gravity and the speed of electric and magnetic fields are instantaneous (because they are "non-local" in their nature). The speed of light however, is finite, and is exactly midway between the spatial zero and the temporal zero for speeds. (see speed diagram ) If light is a combination of electric and magnetic fields, or even a "pure electric" oscillation, why is its speed finite? Are other speeds possible for E and B field combinations between the speed of light and infinite (temporal) speeds? Are other speeds possible for E and B field combinations between the speed of light and zero (spatial) speeds? The E and B fields are inherently temporal in nature, but an inversion into the spatial spectrum of speeds might occur at a quantization boundary. A natural quantity of speed is c, the speed of light. But what is a natural quantity of, say, voltage? I have not given much thought about how to calculate it, but it appears to be at least several tens of millions of volts, possibly much larger. Voltages (and magnetic fields) approaching these levels could make for some very interesting (and really weird, even scary) physics. We are familiar with electric, magnetic, and electromagnetic fields (light). But are these just special, particular instances of something more general? Is there something "in between" these cases that was simply not apparent in Maxwell's day? (See also an example calculation of a unit quantity.)
3. The engine failure effects seem to be vaguely the reverse of the principles in the example above, which compares the energy of a wire moving through space with the energy of space moving through a wire. The engine-stopping effect might be due not only to "ionization of the air" shorting out ignition systems, but to something even more fundamental as well. The effect described below takes place inside a wire, and involves the wire's gravitational motion. Any field effect that alters that gravitational motion would be expected to alter the electron motion as well, perhaps sending it sideways (leaping out of the wire) instead of along the wire. (The effect has also reportedly burned out spark plugs and neon signs.)
Tesla noticed and researched a similar effect. It occurred when DC dynamos were initially and suddenly switched into long transmission lines. Note the phrase (below) "bluish needles, pointing straight away from the line into the surrounding space." The effect occurred only during the instant of initial switch closure. (A large pulse of stored energy release would be expected upon suddenly opening a switch connected to a long DC transmission line, but this effect occurred before the electrical energy was actually in the line.)
For more details, see Secrets of Cold War Technology: Project HAARP and Beyond , Gerry Vassilatos (2000) http://www.scribd.com/doc/15125148/Secrets-of-Cold-War-Technology , p. 27 )
Long transmission lines have two characteristics which may be relevant to the observed effect: they have a large amount of gravitational mass, and they have large loop areas (which may have been minimized for AC circuits, but not necessarily for DC circuits). Pulsing a large loop area with a fast rise-time pulse would momentarily destroy the symmetry and balance of the system with what I call the "Expansive Ether" which fills the loop area enclosed by the gravitational mass. The result could be a "backfire" of "hundreds of thousands, even millions of volts" upon connection to a dynamo producing only a few thousand volts. See further: George Samuel Piggott Effect "Electro-Gravitation" and "Discussion". Update: Tesla apparently discovered a new type of electricity. See UFO Physics. Modern transmission systems address this problem with pre-insertion resistors:
Pre Insertion Resistors in Circuit Breakers ( http://www.electrotechnik.net/2017/11/pre-insertion-resistors-in-circuit.html )
Circuit breakers used in switching of long transmission lines have a resistor which is pre-inserted between the contacts before the contacts are closed. This resistor is called the pre-insertion resistor. The function of this resistor is to limit the initial charging current of the line. The resistance is around 500 ohms.
Once the closing command is given to the breaker, the resistor is first connected across the contacts. This resistance in series limits the line current. A few milliseconds later, the contacts are closed.
While opening the breaker, the pre-insertion resistor is first disconnected before the contacts are opened by the circuit breaker. Pre-insertion resistors are also used in lines which have transformers to limit the high inrush current.
(See also https://www.academia.edu/9116079/Pre-insertion_Resistors_in_High_Voltage_Capacitor_Bank_Switching )
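The current-limiting arithmetic behind a pre-insertion resistor can be sketched numerically. The voltage level, surge impedance, and the use of the ~500 ohm resistor value are illustrative assumptions for the sketch, not figures taken from the cited articles:

```python
# Sketch: how a pre-insertion resistor limits the initial charging current
# of a long transmission line. Assumed illustrative values: a 400 kV
# (line-to-line) line with a typical overhead-line surge impedance of
# ~350 ohms, and a 500 ohm pre-insertion resistor.
import math

V_LL = 400e3                   # line-to-line voltage, volts (assumed)
V_phase = V_LL / math.sqrt(3)  # phase-to-ground voltage
Z_surge = 350.0                # surge impedance of the line, ohms (assumed)
R_pre = 500.0                  # pre-insertion resistor, ohms

# At the instant of energization the unenergized line presents its surge
# impedance, so the initial charging current is roughly V / (R + Z).
i_without = V_phase / Z_surge
i_with = V_phase / (R_pre + Z_surge)

print(f"initial current without resistor: {i_without:.0f} A")
print(f"initial current with resistor:    {i_with:.0f} A")
print(f"reduction factor: {i_without / i_with:.2f}x")
```

The series resistor also damps the traveling-wave overvoltage reflected from the open far end, which is the other reason it is inserted for a few milliseconds before the main contacts close.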
4. In the opening paragraph of the section on Motion Cancellers, I said that "The resultant motions are perpendicular to the motion used for cancellation." This may help explain why UFOs are often reported as disk shaped. The electro/magnetic equivalent of the air flow across the card would be radial, and the resulting cancelled motion would be perpendicular to the disk (i.e., vertical). A disk would be the most natural form for this kind of field configuration. (See also Reactionless Propulsion and AntigravityLoophole )
5. German research on the Nazi weapon described above was reportedly well underway by 1936 (Stevens, p.131). The American technical intelligence report is from 1945 (Stevens, p.174). During this period the German engineers and scientists involved in this effort would surely have noticed some levitation effects, and that in turn would have led to the development of saucer shaped flying machines within a few years. But there may have been predecessors even to that: related levitation effects were noted by George Samuel Piggott circa 1911 (PiggottLinks), by Edward S. Farrow circa 1911 (Farrow links), and Dr Francis Nipher's electro-gravitation experiments which were done circa 1916. Concurrent and even prior to all that was Nikola Tesla's experiments. And, (I find this interesting) there were waves of "airship sightings" in the late 1890s in the United States. All this suggests that these startling levitation effects could be demonstrated with electrical technology that has been known for 115 years! ( http://en.wikipedia.org/wiki/Mystery_airship http://borderlandresearch.com/book/lost-science/electric-flying-machines-thomas-townsend-brown/1 )
"UFO - NAZI Documentary 2016" https://www.youtube.com/watch?v=vvX6H745kFU
"Propellant-less Electromagnetic Propulsion", Stavros G. Dimitriou, Dr. David King http://jnaudin.free.fr/stvdmdoc/prplessp.htm
"The electric waveforms used to generate the vectors of velocity and/or acceleration must have dissimilar slopes between the ascending and descending part of the signal. This is necessary in order to obtain a non-zero sum of the derivatives per period. Extensive analysis has been carried in to optimize the parameters pertaining to each particular waveform.
The efficiency of the electrically generating domains of velocity and acceleration depends on the dimensions of the generating element, with regard to the fundamental wavelength of the waveform applied to it, as stated above."
"On the Existence of Undistorted Progressive Waves (UPWs) of Arbitrary Speeds 0£ v < ¥ in Nature", Waldyr A. Rodrigues Jr. , Jian-Yu Lu (1997) Foundations of Physics, Vol. 27, No.3, p. 435-508 (1997) http://link.springer.com/content/pdf/10.1007/BF02550165.pdf#page-2
"Considerations on Undistorted-Progressive X-Waves and Davydov Solitons, Frohlich-Bose-Einstein Condensation, and Cherenkov-like effect in Biosystems", Marcus V. Mesquita, Aurea R. Vasconcellos, and Roberto Luzzi (3 June, 2003) http://www.sbfisica.org.br/bjp/files/v34_489.pdfhttp://en.wikipedia.org/wiki/Plasma_antenna , http://www.freepatentsonline.com/1309031.pdf
Speculation on Potential Uses of Antigravity
A lot of people think antigravity is a joke, and therefore that any uses of it are purely imaginative. However, they might change their minds if they read the following:
"United States gravity control propulsion research", http://en.wikipedia.org/wiki/United_States_gravity_control_propulsion_initiative
"Emerging Possibilities for Space Propulsion Breakthroughs", Marc G. Millis (1995)"Conquest of Gravity Aim of Top Scientists in U.S.", New York Herald-Tribune, Sunday, November 20, 1955, http://www.bibliotecapleyades.net/ciencia/secret_projects/project048.htm .
"UFOs Merit Scientific Study", Hynek JA, Science. 1966 Oct 21;154(3747):329. PubMed PMID: 17751686.
"I Know The Secret Of The Flying Saucers" by Maj. Donald E. Keyhoe, USMC (Ret.) (1966) http://www.nicap.org/iknow.htm A very interesting partial:
"With a real all-out effort this could happen a lot sooner than the 10 or 20 years many scientists have in mind.
But getting enough top men to work in the field is a problem. One scientist says, "Scientists are sensitive about their reputations and many of them still think antigravity is a joke. If they knew the facts, they’d be eager to get into it."
Fear among scientists is partially due to the Air force censorship of UFO reports. Air force censors not only hide the facts but also belittle those who publicly report UFO sightings. . . .
But AF policy notwithstanding, the drive to get the secret of antigravity is well underway. It can’t be stopped now. But it can be speeded up. We are already spending billions on the space program – on the race to the moon, to Mars. Harnessing gravity could put us years ahead and save us enormous sums of money.
With control of the universe at stake, a crash program is imperative. We produced the A-bomb, under the huge Manhattan Project, in an amazingly short time. The needs, the urgency today are even greater. The Air Force should end UFO secrecy, give the facts to scientists, the public, to Congress. Once the people realize the truth, they would back – even demand – a crash G program.
For this is one race we dare not lose. – Maj. Donald E. Keyhoe" (1966)
""Outside the Box" Space and Terrestrial Transportation and Energy Technologies for the 21st Century", Theodore C. Loder, III (2002)
"How To Investigate a Flying Saucer", Central Intelligence Agency (CIA) (January 2016)
Abstract : "This paper reviews the development of antigravity research in the US and notes how research activity seemed to disappear by the mid 1950s. It then addresses recently reported scientific findings and witness testimonies - that show us that this research and technology is alive and well and very advanced. The revelations of findings in this area will alter dramatically our 20th century view of physics and technology and must be considered in planning for both energy and transportation needs in the 21st century. "
". . . the CIA and USAF have learned a thing or two about how to investigate a UFO sighting. While most government officials and scientists now dismiss flying saucer reports as a quaint relic of the 1950s and 1960s, there’s still a lot that can be learned from the history and methodology of “flying saucer intelligence.” "
The article lists "10 Tips When Investigating a Flying Saucer". The methodology is conventional and sound, but the odd thing about this article is that the CIA seems to be recommending these procedures to the general public for the purpose of investigating UFO sightings. That would have made sense back in 1952 when the "Flying Saucers Problem" was a hot and openly discussed topic ( http://www.foia.cia.gov/sites/default/files/document_conversions/89801/DOC_0000015344.pdf ) . But why now in 2016? This topic has been marginalized and ridiculed for decades. This now seems to lend it a degree of official respectability. And the recommendation to "Consult with experts" will become an invitation to stir the witch's brew as the questions won't just be about photography, weather balloons, and swamp gas. (See UFO Physics, below; also the CIA has, in the past, encouraged reports of UFO sightings as a cover for the U2 program. See "6 decades of UFO sightings & Evidence", Nick Cook https://youtu.be/cSCMhDEecQM 1:40:53)
"Secrets of the Saucer Scientists", William F. Hamilton III http://www.ufoevidence.org/documents/doc1756.htm
Project Greenglow http://projectavalon.net/forum/showthread.php?t=9386 http://projectavalon.net/forum4/
"Anitgravity update" N'Elkan Institute (2012) http://www.nelkan.com/institute/winning-the-human-race/
Antigravity is no joke. This is serious stuff.
The term "antigravity" as used below is defined as:
The ability by technical means to exert a mechanical push or pull on a target object of any material composition located at a distance without actually touching it with radiation, particles, or electric or magnetic fields. The mechanical effect "propagates" instantaneously and does not show wavelength, phase, or aberration effects like light. The effect can be focused, shaped, or concentrated in some manner, as is currently done with magnetic or electric fields, but not in the manner done with light or electromagnetic radiation. The effect can produce self-levitation or self-propulsion when applied back upon the mechanism generating the effect (a.k.a. non-Newtonian "bootstrapping"). Additionally, generation of the effect might involve a non-Newtonian radial reaction confined within the generating apparatus.
The term "field propulsion" is probably preferable to the more popular term "antigravity", as the latter implies that a gravitatonal field (e.g., from a planet) is necessary for propulsion. A field propulsion system would work just as well in deep intergalactic space. Rocket propulsion is like someone sitting in a boat and throwing bricks out the back end to get the boat to move. Field propulsion, in contrast, is like someone dipping oars directly into the water and exerting a force directly on the medium. The medium in this case is the fundamental space/time (not "space-time") structure of the physical Universe itself.
Hundreds of years ago, electric and magnetic phenomena were thought to be unrelated. But the later experiments of Gauss, Faraday, Maxwell and many other investigators showed that they are actually related in very definite ways. Nowadays conventional science suspects that the gravitational field is also somehow related to electric and magnetic fields. But modern progress in this area has been minimal, and gravitation remains an oddball that has not been "unified" with the other fields.
The various articles above suggest that there is a clear dimensional relationship between electric (t^1/s^1), magnetic (t^2/s^2) and gravitational (t^3/s^3) fields. These fields were treated as multidimensional motions (space/time ratios) rather than as some sort of mysterious action-at-a-distance effect. In principle, the concept of motion is completely understandable by the human mind. In practice, particularly as used here, some education of our intuition is definitely required. Motion of our ordinary experience is expressed as s/t; this could be the motion of a raindrop falling through the atmosphere. Motion expressed as s^2/t would be like the motion of dots on the surface of an expanding sheet of rubber, or the motion of picture elements as a camera zooms in on a scene. Motion of the t/s form has no "path" in space, and requires "field equations" for its description. Rotational motions of either the s/t form or the t/s form are much less intuitive, as are combinations of "motions-of-motions" like momentum (t^2/s^2), which is a combination of linear spatial motion (velocity) and rotational temporal motion (mass). Every physical property can be expressed as some sort of space/time ratio (see article).
As you can see, the concepts can get messy very quickly, but with better exposition, better examples, more comprehensive equations, better experimental insights, etc., we should still be able to understand these things. They are not inherently beyond our comprehension.
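The exponent bookkeeping behind these space/time ratios can be sketched in a few lines of code. The quantity assignments follow the scheme used here (mass as t^3/s^3, current as s/t, and so on), which is a convention of this article rather than conventional SI dimensional analysis:

```python
# Minimal sketch of the space/time dimensional bookkeeping described above.
# A quantity is represented by its (space exponent, time exponent);
# combining two motions multiplies the ratios, i.e. the exponents add.

def combine(a, b):
    """Multiply two space/time ratios: exponents add componentwise."""
    return (a[0] + b[0], a[1] + b[1])

# (space_exp, time_exp): s/t is (1, -1), t/s is (-1, 1), etc.
velocity = (1, -1)    # s/t
mass     = (-3, 3)    # t^3/s^3
current  = (1, -1)    # s/t (charge per unit time, treated as a motion)
magnetic = (-2, 2)    # t^2/s^2

# Mass combined with electron current yields a magnetic dimension:
# (t^3/s^3)(s/t) = t^2/s^2
assert combine(mass, current) == magnetic

# Momentum as velocity combined with mass: (s/t)(t^3/s^3) = t^2/s^2
assert combine(velocity, mass) == (-2, 2)
```

The same two-integer representation lets any proposed "motion of motion" be checked mechanically for dimensional consistency before arguing about its physical meaning.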
We have already seen that mass (t^3/s^3) can be combined with electron current (space per time) to give a resultant magnetic field: (t^3/s^3)(s/t) = (t^2/s^2). Note that the field remains "bound" to the mass that is so treated. The big questions implied now are:
1. Can mass be combined with yet another kind of motion so that its interaction with other masses becomes repulsive instead of attractive? Rotation, or combinations of rotations, would be a good candidate for investigation. See effects of spinning an ordinary object. Other possibilities involve rotating magnetic fields, or asymmetrically pulsed monopolar electric fields.
2. Can concepts like permittivity and permeability be extended to some kind of "gravity saturable" material? *
3. Are there special materials (akin to dielectric and ferromagnetic materials) that can facilitate the utilization of such a property? If "nuclear spin" is involved in this, for example, then it might be productive to see how something like deuterium would respond to special configurations of electric and magnetic fields. A deuteron has a spin of +1, making it a boson. ( http://en.wikipedia.org/wiki/Deuterium#Spin_and_energy ) Bosons tend to clump into the same state—a trait that might be useful in this connection. But non-local physics (time/space) is so different from local physics (space/time) that even conceiving of a machine to utilize this trait for such a purpose would be difficult for us denizens of locality. Physicists will probably see the effect as an "anomaly" produced by an accidental and "useless" configuration of fields. (See Piggott for an example)
Other possibilities involve super- or hyperdeformed nuclei or excited spin states:
"From Single-Particle to Superdeformed: a Multitude of Shapes in MERCURY-191 and a New Region of Superdeformation", Ye, Danzhao (1991) Dissertation Abstracts International, Volume: 52-06, Section: B, page: 3125.
"Superdeformed bands in 150Gd and 151Tb: Evidence for the influence of high-N intruder states at large deformations" P. Fallon, A. Alderson, M.A. Bentley, A.M. Bruce, P.D. Forsyth, D. Howe, J.W. Roberts, J.F. Sharpey-Schafer, P.J. Twin, F.A. Beck, T. Byrski, D. Curien, C. Schuck Pages 137-142 Physics Letters B, Volume 218, Issue 2, Pages 119-262 (16 February 1989) [The article says that in some element isotopes, the values of dynamic moments of inertia are high at low spin speeds and decrease rapidly at high spin speeds Gadolinium150, Terbium151, but for others (Dysprosium) they are almost constant.]
"Nuclear Moments of Inertia at High Spin, M. A. Deleplanque (1982) http://www.osti.gov/bridge/servlets/purl/6593868-XcDoVC/6593868.pdf
* "Guidelines to Antigravity", Robert L. Forward, Hughes Research Laboratories, Malibu, California, American Journal of Physics, Vol. 31, No. 3, 166-170, March, 1963 (Received 12 September 1962) http://www.academia.edu/3336384/Antigravity_-_by_Robert_L.Forward
"In studying analogies between electromagnetism and gravitation, it can be seen that one analogous quantity has not been investigated. This is the gravitational equivalent to the magnetic permeability. Electrical power distribution systems depend upon the anomalously large and nonlinear permeability of iron and othermagnetic materials. Since all atoms have spin, all materials will have a gravitational permeability which is different from that of free space. Rough calculations show that this difference is very small, but experimental investigation may find materials with anomalously large or non-linear properties that can be used to enhance time-varying gravitational fields. Also, since the magnetic moment and the inertial moment are combined in an atom, it may be possible to use this property to convert time-varying electromagnetic fields into time-varying gravitational fields."
Actual experiments suggest that antigravity is definitely within reach of the technology available today, and might even be closer than most of us think. Fuller development of this technological capability will have many applications and many serious implications. I list a few of the more interesting ones below (construction equipment, terrorism, cheating at sports, etc., have not been included):
Safe, inexpensive access to space: Access to low Earth orbit presently costs about $10,000 per pound of payload. Hazardous propellants, special launch facilities, extensive ground crews and supporting equipment are required. The launch vehicles are not readily reusable like a commercial airliner. A propulsion system based on antigravity would strongly reduce launch costs, perhaps to less than $1 per pound. Both launch and re-entry could be done at low speeds, greatly reducing risks. Access to low Earth orbit would become as routine as ordinary intercontinental commercial air flights are today. Weight would apparently not be a problem; the kind of forces involved in such a propulsion system are 10^40 times stronger than gravity. Hovering over a location could be done without expenditure of energy (work is defined as force moving through a distance; how much energy does the Sun expend keeping the Earth in orbit?).
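For scale, the enormous ratio between electromagnetic and gravitational forces can be computed directly. The textbook value for two electrons is about 4 × 10^42 (the exact figure depends on which particles are compared; for two protons it is closer to 10^36):

```python
# Sketch: ratio of the electrostatic to the gravitational force between
# two electrons — the standard illustration of how much stronger
# electromagnetic forces are than gravity. Constants are CODATA values.
k   = 8.9875517923e9    # Coulomb constant, N m^2 / C^2
e   = 1.602176634e-19   # elementary charge, C
G   = 6.67430e-11       # gravitational constant, N m^2 / kg^2
m_e = 9.1093837015e-31  # electron mass, kg

# Both forces fall off as 1/r^2, so the ratio is independent of distance.
ratio = (k * e**2) / (G * m_e**2)
print(f"F_electric / F_gravity = {ratio:.2e}")  # ~4.2e42
```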
Access to the stars directly from earth: With antigravity, flight to the stars and planets could be done directly from Earth without any need for intermediate bases on the Moon or Mars. Complete spacecraft could be assembled and provisioned here on Earth, instead of in space.
Adjustment of satellite orbits directly from Earth: Communication satellites in synchronous orbits need small "station keeping" adjustments periodically. With antigravity as a maneuvering system, these adjustments could be done indefinitely because there is no depletion of propellant. Satellite orbits could also be altered by beaming a push or a pull from Earth, although this would require very precise aiming capability and probably several widely-separated beam sources.
Disposal of space junk: Close Earth orbit currently has thousands of pieces of "space junk" flying around at very high speeds. With Earth-based antigravity, this junk could easily be de-orbited and allowed to burn up in the atmosphere.
Direct production of motion: Motion can be produced directly with this technology. There is no need for gears, bearings, cranks, pistons, turbine blades, wheels, lubricants, rockets, etc. Friction would be minimal and conversion efficiencies very high. Basic machines would be simple and easy to manufacture.
Advanced personal transportation: With antigravity, you could drive your vehicle anywhere in the world (even across oceans) on electronically defined paths. Such a vehicle could move in outer space as well as under the ocean. Speeds could be high. Distances to shopping centers would become trivial, even if hundreds of miles away from home. Roads would not be needed. The real estate market would be drastically altered.
Control of weather: Use of antigravity would allow us to move hurricanes, tornadoes, deflect floods, and blow back forest fires.
Create perfect vacuum pumps: Even a weak antigravity field could be used to sweep residual air molecules remaining at high vacuum into a collection port where they would be removed by a conventional turbo-molecular pump. Almost perfect vacuums would be obtainable even in large and slightly leaky vessels (A single finger print on the wall of a high-vacuum vessel can require 24 hours for the removal of volatile components).
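As a rough illustration of why "almost perfect" vacuums are hard, the ideal-gas law gives the residual number density at a given pressure. The pressure and temperature below are assumed illustrative values, not figures from the text:

```python
# Sketch: residual molecule count at ultra-high vacuum via the ideal-gas
# law, n = P / (k_B * T). Even at 1e-9 Pa (a very good laboratory UHV)
# there are still hundreds of billions of molecules per cubic metre.
k_B = 1.380649e-23  # Boltzmann constant, J/K
P = 1e-9            # pressure, Pa (assumed)
T = 300.0           # temperature, K (assumed)

n = P / (k_B * T)   # number density, molecules per cubic metre
print(f"number density at {P} Pa: {n:.2e} molecules/m^3")
```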
Suppression of aircraft sonic booms: A high-speed antigravity craft could be configured to move the air out of its forward path such that the air becomes increasingly rarefied nearer the craft. A sonic boom could not develop with such a configuration. Additionally, ordinary aircraft are supported by a pressure wave, which makes the magnitude of the sonic boom proportional to aircraft weight. But this would not be true of an antigravity craft, which gets its "lift" from a completely different source. (See http://www.answers.com/topic/sonic-boom Sonic boom suppression is already possible today through the use of electric fields; see article)
Long-range navigational deflection for spacecraft: A spacecraft moving at near the speed of light could clear the path ahead of it. Interstellar space is generally quite empty, but even micron-sized particles could pose a hazard to spacecraft moving at these speeds. High-speed impacts by gas molecules could result in structural erosion, as well as collision-generated gamma radiation which would be hazardous to the crew and electronics (like living inside of a particle accelerator). Another problem is ambient light. At relativistic speeds, microwaves and ambient starlight, X-rays, and gamma rays will be blue-shifted to higher energies as viewed in the forward direction from a spacecraft. This, combined with the relativistic headlight effect, could make for a formidable heating effect.
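The forward blueshift mentioned above follows from the standard relativistic Doppler formula; the speeds below are illustrative:

```python
# Sketch: blueshift of head-on ambient light as seen from a relativistic
# spacecraft. For light arriving from dead ahead, the Doppler factor is
# D = sqrt((1 + beta) / (1 - beta)); received frequency (and photon
# energy) is multiplied by D.
import math

def doppler_head_on(beta):
    """Doppler factor for light approaching from directly ahead, beta = v/c."""
    return math.sqrt((1 + beta) / (1 - beta))

for beta in (0.5, 0.9, 0.99, 0.9999):
    print(f"beta = {beta}: frequency multiplied by {doppler_head_on(beta):.1f}")
```

At beta = 0.9999 the factor is already over 140, so the 2.7 K microwave background arrives as far-infrared radiation concentrated into the forward cone.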
Short-range thermal insulation for spacecraft: A spacecraft exploring a planet with a hot, corrosive atmosphere (like Venus) could insulate itself by repelling all external gas molecules away from its hull. This would be equivalent to placing the spacecraft inside a vacuum bottle.
Instantaneous interstellar communication: Antigravity, as defined above, could be used for instantaneous communication over distances of light-years. Currently envisioned schemes for such communication use "entangled" photons; communication channels that use gravitational pulses would have relatively low bandwidth, but would not require prepositioning of special equipment containing precorrelated photons.
Alteration of planetary orbits: The ellipticity, revolution rate, rotation rate, and polar orientation of a planet (or moon) could be altered by the use of antigravity technology.
Manipulation of light: Antigravity technology would be able to create concentrated, extremely powerful gravitational fields. The effects should be strong enough to readily bend a beam of light. This opens up all sorts of possibilities in the science of optics:
Gamma ray focusing: Currently X-rays can be focused by grazing incidence mirrors. I know of no such devices for gamma rays, which have much higher energies.
Giant aperture telescopes: In our huge universe, light is actually rather slow. Why bother with telescopes if we can just send a probe there and back at speeds much faster than light? One answer is that light carries a lot of useful information, and large surveys can be done rapidly from telescopes on Earth. Antigravity will not make telescopes obsolete, but might be used to enhance their resolving power and extend the available spectrum.
Cloaking: It is probably possible to bend light around an object such that the light "flows" in a streamlined fashion. If so, the object would become invisible, at least from one point of view. Shielding a spacecraft from gamma rays might also be possible.
An interesting but different method of cloaking by the use of metamaterials can be found at:
http://www.msnbc.msn.com/id/12961080/ by Alan Boyle, Science editor, MSNBC:
"Here's how to make an invisibility cloak. Theoretical cloaking device
could soon beocme reality (sort of)"
"The black lines in this drawing show the path that light rays would take
through a theoretical cloaking device. The device's metamaterial would be
patterned in such a way to route the rays around the cloaked sphere."
1949, Norwood, Ohio Searchlight UFO Incident: "An additional photo was found in the possession of RAY STANFORD, who states: "This is several generations down from the original 16 mm movie film, but it seems to rather clearly show that while the beam was projecting several degrees away from the object, when it got within a certain event horizon of the object, the beam was simply bent or 'pulled' directly into the object, seemingly bending it about 26.5 degrees, as measured in the photo plane! This frame has always amazed me since I first saw it in the mid-'50s. Several persons, back then, who had seen the actual movie said that at one time the object seems, indeed, to 'suck' the beam squarely into it!" " http://www.ufocasebook.com/1949norwoodufo.html
(Note: in the photo, the searchlight beam starts at the lower left and the UFO is at the upper right. The beam should have shot past the UFO, but instead seems to get completely 'pulled' into it.)
"Also, in several cases, light (e.g. from car headlights or beaming spotlights) is reported to "bend" in front of the UFO . . ." http://www.hyper.net/ufo/physics.html by Dimitris Hatzopoulos
Extreme pressure experiments: Antigravity fields could be shaped with techniques analogous to what is done today with electric and magnetic fields, except that the effect is expected to be much more compact. Because the primary effect is mechanical, it should be possible to make devices that can produce extreme pressures. This capability could be used to study material at extreme densities (such as that inside the Earth or stars). Because physical-mechanical contact is not necessary, experiments can use BOTH extreme temperatures and extreme pressures. This capability may also make certain industrial processes more economical, such as chemical process operations that use supercritical solvents ( http://www.isopro.net/web8.htm ), or for making large diamonds for heatsinks and lenses, or for making high energy-density explosives. (See: "Nitrogen Power: New crystal packs a lot of punch", Alexandra Goho, Science News, July 17, 2004, Vol. 166, p. 36-37, www.sciencenews.org/articles/20040717/fob4.asp and high energy-density materials ; "Gravity Control by means of Electromagnetic Field through Gas or Plasma at Ultra-Low Pressure", Fran De Aquino (2007-210) ". . . it is also possible to build a Gravitational Press of ultra-high pressure . . ." https://arxiv.org/vc/physics/papers/0701/0701091v7.pdf p.16 )
"Hypergravity helping aircraft fly further", Phys.org (2012) ["using titanium aluminide would reduce their weight by 45% over traditional components"] http://phys.org/news/2012-11-hypergravity-aircraft.html
If that seems too hard to imagine, see what can already be done with a powerful magnetic field. This link shows how a magnetic field can be used to shrink a U.S. metal coin to about 75% of its original diameter: http://www.magnet.fsu.edu/education/tutorials/slideshows/shrinkingquarter/index.html . The gravitational version of this experiment would be continuous, rather than pulsed, and far more powerful. (See also "Electromagnetic hammer" http://www.youtube.com/watch?v=5inJ7sDndBI&feature=related )
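For scale, the mechanical pressure exerted by a magnetic field is given by the standard magnetic-pressure formula P = B²/2μ₀ (the numerical example here is my own illustration, not from the linked page): a pulsed field of 100 teslas yields P = (100 T)² / (2 × 4π×10⁻⁷ T·m/A) ≈ 4×10⁹ Pa, roughly 40,000 atmospheres, which is why such coin-shrinking rigs can deform solid metal in microseconds.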
- Deep sea exploration: A vessel designed for deep sea exploration could fly from an inland base directly to the destination, submerge, and begin exploration. No support from a surface vessel would be needed. No propellers would be needed either because the propulsion system would be contained entirely inside the submersible. Such a vessel would be safer, require less maintenance, and be far more convenient to use than those available today.
- Industrial processing: Any industrial process that uses gravity or centrifugal separators to separate materials might benefit from a perfected antigravity technology. This would be especially true for substances that have only slight differences in densities such as atomic isotopes. (of interest: http://techxplore.com/news/2017-01-whirligig-toy-bioengineers-cent-hand-powered.html )
- Medical uses: A rotationless centrifuge could be used for separation of serum components and faster measurements of sedimentation rates. Non-contact levitation of patients (e.g.: burn victims) might also be possible.
- Clearing of mine fields: Land mines could be exploded remotely. Suppressing or containing the effects of the explosion would also be possible.
- Pop-in reconnaissance: An antigravity craft would be able to move with very high speeds and very high accelerations. Conceivably, it should be able to "pop-in" noiselessly at a target location, take a bunch of photos, and then suddenly depart —all within a few tenths of a second. Such a visit would be hard to detect visually. Observers who witnessed the visit might not be sure of what they saw, especially at night, and especially if there were some manufactured distractions.
- Missile shield/satellite killer/asteroid deflector: A target that is hit with a powerful, rod-shaped antigravity beam will experience high shear forces unless the beam envelopes the entire target. The effect would be like shooting a thin walled tube through a layer cake; a "plug" of all layers would be expelled intact, leaving a clean hole through the cake. This would certainly disrupt the operation of the missile or satellite (or spacecraft, or building, underground bunker, etc). A less powerful beam would simply damage it or push it away or alter its trajectory. The latter could be used to alter the incoming trajectory of a rogue meteor or asteroid, thereby avoiding catastrophic damage to structures on the ground. Only a few hours of warning would be needed.
- Mega-gravity: This is the opposite of antigravity. It is a technology that would presumably be possible when the science of antigravity is figured out. It could be used to make the weight of something much greater than normal. Obvious applications would be motionless centrifuges, vacuum pumps, and industrial processes that require extreme pressure. Artificial gravity for spacecraft interiors should also be possible.
- Utilization of Non-local effects: It is reasonable to assume that non-local effects would accompany the "territory" of antigravity phenomena. It is not clear just what they would be, but might include "anomalous mixing of materials", "disintegrator beams", and "delocalization of objects" (meaning that they become invisible, or seemingly immaterial or spatially non-contiguous, wholly or partially). Spatial "blackout" effects might also be observable.
I think one of the best applications will be a startlingly new type of remote sensing. I am struggling with how to illustrate this. Perhaps these thoughts will help. Our perception depends on how an observer's motion couples to the motion of the object he is trying to perceive. If the observer is stationary and looks at an airplane propeller mounted on an airplane wing, he can see it clearly if it is stationary as well. But if it is rotating, it becomes nearly invisible. However, if we set up a camera, give it the characteristics of the human eye, and then rotate it to give it the kind of motion possessed by the rotating propeller, the propeller becomes visible again. But other things disappear. The wing and background, for instance, cannot be seen anymore. Some parts may be visible in either system; the engine cowling, which may have a rotational symmetry, may still be visible (an effect that might baffle the observer). (Compare medical X-ray stratigraphy. See "Medical X-Ray Techniques In Diagnostic Radiology", G.J.van der Plaats, P. Vijlbrief http://www.amazon.com/Medical-X-Ray-Techniques-Diagnostic-Radiology... page 261, "Special Radiographic Techniques" )
Suppose now we attach temporal motion to an object, say, by means of powerful, specially configured electrical and magnetic fields. The object now has a different type of motion than the observer has, and it can disappear from view. But it is not just optically invisible. It has actually "delocalized"; it is not "there" anymore as a spatially contiguous physical object. It has shifted to a "when" type of location while the observer is in a "where" type of reference system. If we could invent a special camera with the same kind of temporal motion, this delocalized object would become visible. But all the normal spatial stuff would become invisible. This implies that you could see right through a building, maybe even a planet. Temporal structures would become visible and the normal spatial structures would become transparent. This could lead to a fantastic new form of remote sensing. (Is this what UFOs are doing when they are seen leisurely hovering near the ground? . . . taking a survey of structures with temporal signatures?)
Possibly relevant: Transparent UFOs:
http://unitedstatesufo.blogspot.com/2011/12/semi-transparent-long-tubular-ufo.html ; http://www.examiner.com/article/colorado-couple-v-shaped-semi-transparent-ufo-flew-low-and-silent ; http://the-v-factor-paranormal.blogspot.com/2012/02/huge-semi-transparent-v-shaped-ufo-over.html
"UFO NEWS 2014 UFO ENCOUNTER JUNE 10 1931 82414" http://nyufo.bravesites.com/entries/ufo-news/ufo-news-2014-ufo-encounter-june-10-1931-82414- , http://www.abovetopsecret.com/forum/thread600397/pg1
Compare similarities: http://www.ufoevidence.org/topics/hudsonvalley.htm ; http://www.chron.com/news/nation-world/space/article/Mystery-lights-over-Houston-keep-people-talking-5691206.php ; http://www.syracusenewtimes.com/26-july-2014-ufo-new-york-lights/
See also the links listed in The Hutchison Effect.
If you have trouble with the mental gymnastics of comprehending how something would appear in a different reference system, consider the technology of holography, spread spectrum signaling, and cryptography. These convert ordinary comprehensible images or information into "noise". The information still exists in an intact, definite, form however, and can be converted back into its original form with the proper equipment and algorithms. Similarly, the link between a "where" type of reference system and a "when" type of reference system is encoded in the motion used to convert from a gravitational reference system to some other kind of system. The key question is: What type of momentum, temporal or spatial, is the object taking on relative to the reference system?
Another possible application is instantaneous communication. A message can be "written" directly at the destination without having to transmit it through intervening space. Likewise, a "bomb" ("bundle of energy"?) could be delivered to a destination without traversing the intervening space (a rather scary prospect!). See also GammaBurster . Presumably, physical objects could be transferred in a similar manner (e.g.: UFOs would not need doors for occupant entry/exit.) See also Ball Lightning ("Some appear within buildings passing through closed doors and windows. Some have appeared within metal aircraft and have entered and left without causing damage")
Rapid construction of tunnels might be another application. The "dirt" is simply delocalized (scattered to different where/when locations in the Universe). Huge tunnels and caverns could be made without having to haul away dirt or rock.
There are thoughts even today about using non-local effects for remote sensing:
Because time and gravity are innately linked . . . researchers would be able to use a clock as a sort of scale, correlating subtle fluctuation in a clock's ticking rate with the mass below it. Clocks flying above enemy territory could detect missing mass below the surface—perhaps the location of a secret underground tunnel or cave. ("Quantum Timekeeping", Andrew Grant, Science News, March 8, 2014, p. 22 https://www.sciencenews.org/article/quantum-timekeeping )
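The clock-as-scale idea in the quote above rests on ordinary gravitational time dilation, a standard general-relativity result (the numbers here are my own back-of-envelope illustration, not from the article): two clocks separated by height Δh near Earth's surface differ in rate by Δf/f ≈ gΔh/c² ≈ (9.8 m/s²)(1 m) / (3×10⁸ m/s)² ≈ 1.1×10⁻¹⁶ per meter. An optical clock stable to one part in 10¹⁸ could therefore, in principle, resolve centimeter-scale height differences, or equivalently the tiny change in local gravity produced by a mass deficit such as a hidden tunnel or cave.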
Non-local effects are not limited by spatial contact or spatial proximity or spatial barriers. This opens up a completely new way to manipulate things in our physical universe (even including atomic structure). To most of us, such effects will seem magical and almost beyond belief. (See also: In Search of the Geometry of Space, Time, and Motion, Speed of Gravity, Speed of Electric Fields )
See also http://georesonance.com/
"If not us, someone else will lead in the exploration, utilization and,
ultimately, the commercialization of space, as we sit idly by."
A Journey to Inspire, Innovate and Discover [by using obsolescent technology], p. 12,
June 2004, http://www.nasa.gov/pdf/60736main_M2M_report_small.pdf
"This was a failure of policy, management, capability,
and above all, a failure of imagination."
—Tom Kean, chairman of the commission investigating
the September 11, 2001, attacks
"Boldly go where no man has gone before." —Star Fleet
"Look with favor upon a bold beginning." —Virgil
What is a UFO?
"The Battle Of Los Angeles"
Test your opinion about "What is a UFO?" by reading about "The Battle Of Los Angeles" ( http://www.rense.com/ufo/battleofla.htm , http://www.militarymuseum.org/BattleofLA.html , http://rationalwiki.org/wiki/Battle_of_Los_Angeles , http://www.youtube.com/watch?v=tAag2hn2w-w&feature=youtu.be , http://youtu.be/1rv9Fpp2eQA ). On February 25, 1942, around 3 a.m., a UFO was sighted over Los Angeles, California. The Army fired 1,430 rounds of antiaircraft shells at it with no effect on the object. About 100,000 or more people witnessed the incident. What do you think it was?
Mystery Air Objects Seen In Sky over L.A. This is another report of widespread sightings of UFOs in the southern California area on November 6, 1957. You can read the details at http://www.ufocasebook.com/2009/la1957.html .
Washington D.C. UFO incident
Also well-publicized was a series of UFO sightings in Washington, D.C. during July 1952:
"1952 Washington D.C. UFO incident" http://en.wikipedia.org/wiki/1952_Washington_D.C._UFO_incident
"The sightings of July 19–20, 1952, made front-page headlines in newspapers around the nation."
"UFO over Washington DC. Film Footage from 1952" http://www.youtube.com/watch?v=sTZ7O9cfpPQ :
"UFO - OVNI - UFOs In Washington D.C - 60 years ago" http://www.youtube.com/watch?v=hObI12DD3-Y
"UFOs in Washington D.C July 12, 1952 and Gen. Samford UFO press conference Pentagon, July 29, 1952" http://www.youtube.com/watch?v=1iER6ESzscY
"Washington 1952 UFO "Flap" Video" http://www.youtube.com/watch?v=hI4XJ3IsLDs&feature=related
"1952-Washington D. C. Buzzed by UFOs", Billy Booth http://ufos.about.com/od/visualproofphotosvideo/p/washingtondc.htm
"The capabilities of the UFOs were far beyond our technological proficiency at the time. By the time our first missions were off the ground, the UFOs were nowhere to be seen. But, when our planes returned to ground, the UFOs were back, as if taunting our defenses. For hours, U.S. planes chased the elusive targets, yet without success. Pilots could actually see the perplexing objects, but as they approached, the lights of the UFOs vanished."
"Washington DC UFO Merry go round" http://www.youtube.com/watch?v=S75oeBhhAbI&feature=player_embedded
UFO Sightings Washington DC. July 12 1952 http://www.youtube.com/watch?v=hObI12DD3-Y&feature=player_embedded
Invasion Washington: UFOs Over the Capitol, Kevin D. Randle (2001)
"First Contact Special Edition: 1952 UFO Eyewitness Howard Cocklin", http://youtu.be/UQwwl1ln30Y
Farmington UFO Armada: http://www.theufochronicles.com/2007/05/huge-saucer-armada-jolts-farmington.html , http://ufoevidence.org/cases/case880.htm
(related: http://www.openminds.tv/amazing-first-hand-ufo-testimonials-from-dulce-new-mexico-families/35382 )
A peculiar thing about most UFO sightings is that there is no follow up. They may be front-page headline news one day, but virtually nothing the next. It is as though you hear an announcement on TV that "UFOs land on White House Lawn. President greets space aliens. And now for your local weather report. " It seems that this sort of news is very carefully managed. We are only allowed to know just so much, and nothing more.
The Farmington Daily Times
March 18, 1950
If you are searching for information on UFOs, be aware that the names have been changed: UFOs are now "Unidentified Aerial Phenomena" or "unconventional aircraft" or "unconventional helicopters", etc. This change helps to sever the popular connection between "UFOs" and "space aliens". Pilots and other observers are more willing to file reports on UAPs than on UFOs. If you are filing inquiries under the Freedom of Information Act, you need to play "exact word games". The authorities no longer investigate so-called "UFOs".
Some believe that these objects are man-made, being a secret Nazi technology within a secret society (or civilization) that survived WW II. What do you think?
http://en.wikipedia.org/wiki/File:George_Adamski_ship_1.jpg (Alleged photo of Haunebu II UFO from 1952)
http://www.adamskifoundation.com/html/AboutGA.htm (Attributed opinions about authenticity)
Hitler's Flying Saucers -A Guide to German Flying Discs of the Second World War, Henry Stevens (2003)
"Because the object is unidentified, the object's source is also undetermined. Only a leap of faith can connect UFOs to an extraterrestrial source without first introducing proof. A radical hypothesis such as an extraterrestrial origin of UFOs requires overwhelming proof in order to be generally accepted. No such overwhelming extraterrestrial proof has ever been offered which has stood up to scrutiny. No crashed alien craft have ever been produced by anyone, inside or outside government. Likewise, no alien bodies have ever been found. No extraterrestrial culture, or alien technology has ever been uncovered by anyone. There is simply no actual evidence at all linking UFOs with an extraterrestrial source. Therefore, no such leap of faith should be made. We need to start all over again. All rational earthly explanations need to be exhausted before any extraterrestrial theories are even put forth.
Unfortunately, the simple truth is that, for the most part, UFO research has done a leap-frog to the extraterrestrial explanation without ever adequately exploring and exhausting a terrestrial origin. This statement is inclusive of everyone regardless of background or education. It applies to the charlatan UFO attention getters as well as to former NASA scientists with Ph.D.s. This is the condition of our current state of affairs in the UFO world.
. . .
Already in this brief discussion, the evidence, taken as a whole, is overwhelming. Please compare this to any and all extraterrestrial explanations of flying saucers. Here we have Germans who claim to have invented the idea of the flying saucer. We have Germans who claim to have designed flying saucers. We have Germans who claim to have built flying saucers. We have Germans who claim to have flown flying saucers. We have Germans who claim to be witnesses to flying saucers known beforehand to be of German construction. We have German construction details. And finally, we have a man who took pictures of a known German flying saucer in flight. The facts speak for themselves. During the Second World War the Germans built devices we would all call today "flying saucers". No other UFO explanation can even approach this in terms of level of proof." (Hitler's Flying Saucers -A Guide to German Flying Discs of the Second World War, Henry Stevens (2003) http://www.tiono.com/model/FlyingSaucers.pdf )
(My aside regarding a standard of proof: I can truthfully say that I have touched a piece of the moon (obviously extraterrestrial). I did it when I visited the National Air and Space Museum in Washington, D.C. decades ago. A little piece of moon rock was on display for people to touch. I thought "Who would want to touch a piece of rock from the Moon?" But no one was in line, and so I went over and touched it, just so I could say I did. Millions of other people have done the same. Can we do the same kind of thing for "extraterrestrial UFO artifacts"? Can you go to a museum and touch an extraterrestrial UFO seat cushion, or a star map, a family photo, a charm, a glove . . . anything?
Collecting data on UFO sightings is in an altogether different category. Collecting data on sightings is loosely like an astronomer collecting data on supernova explosions, except that the astronomer's data is much more precise, and he understands the object of his study. Neither can be called "laboratory experiments", but eventually, commonalities will show up in the data sets, and some conclusions can be drawn.
Originally I thought UFOs were a kind of silly, "fringe" topic, evoking images like those on the front page of National Enquirer titled "President consults with space aliens"—ridiculous stuff I see at the checkout stand. Indeed, I had browsed through one supposedly authoritative book on free energy devices, but it was full of obfuscatory impressive-sounding technobabble and was scientifically useless. Still, my studies in quantum mechanics and electromagnetics suggested that antigravity ought to be doable, and, once we figure it out, easily doable at that! And if I could develop these insights, surely someone else could too. But if it is so easy, why aren't numerous people building these machines? Why aren't UFOs being seen all over the world and reported in the newspapers?
Normally, science depends on open discourse and discovery for its progress. But this is not the case with military science. Advanced developments will be kept secret for military advantage. Hence, two versions of science develop in parallel: one for public consumption and another for the military (usually accessible only on a "need to know" basis). When the public version catches up to the military version, the latter can be declassified and become public. UFOs then, could be a manifestation of a deep, dark, secret research project, possibly military in character. Reluctantly, I decided I had to look into this topic—one that no physicist would touch—expecting to sift through a lot of "nut case" literature just to see what I could find. To my utter surprise, some of the literature on this topic turned out to be quite well-researched and carefully reasoned (though necessarily somewhat speculative). I started with Hitler's Suppressed and Still Secret Weapons, Science and Technology by Henry Stevens, quoted above. Works by various other authors are listed below.)
"Unacknowledged special access programs", Joël van der Reijden (2005) http://www.bibliotecapleyades.net/sociopolitica/sociopol_usap.htm
"Top Secret Black Projects – Unacknowledged Special Access Programs", http://www.abovetopsecret.com/forum/thread613064/pg1
Dark Star, Henry Stevens (2011)
Hitler's Suppressed and Still Secret Weapons, Science and Technology, Henry Stevens (2007) http://www.amazon.com/Hitlers-Suppressed-Still-Secret-Weapons-Technology/dp/1931882738#reader_1931882738 (My review: these are both very well researched, thought-provoking books and are well worth reading.)
Reich of the Black Sun, Joseph P. Farrell (2004)
The SS Brotherhood of the Bell, Joseph P. Farrell (2006) My review: These highly interesting and well written books (usual misspellings excepted) describe various facts, legends, history, suppositions, etc. of the Nazi SS super high technology "wonder weapons" development during and somewhat after World War II. The Nazis apparently had built atom bombs but the war ended before they were able to deploy them. They were also working on other weapons that had far more terrifying and powerful capabilities, ones that would make the atom bomb seem feeble:
"This could only mean that there was a weapons system that possessed enormous range and degree of efficiency that lay beyond the nuclear weapons technology. Did the Third Reich really prepare the Doomsday Weapon? And if so, where is the technology today? Was it discovered by the Allies or does it lurk secretly deep in the earth waiting for its rediscovery? If such an Ultimate Weapon has already been in existence for more than fifty years, then it is a legitimate question to ask what today's military really, actually possesses." (page 96, quoted from Das Geheimnis der deutschen Atombombe: Gewann Hitlers Wissenschaftler den nuklearen Wettlauf doch? Die Geheimprojekte bei Innsbruck im Raum Jonastal bei Arnstadt und in Prag, Edgar Mayer and Thomas Mehner (2001) p. 89)
The Bell makes comments about the "monstrous physics" discovered and developed by the Nazis that "totally abandoned conventional physical laws" (page 220-221). These are described as "scalar waves", "longitudinal waves", "vorticular physics", etc. However, this just seems to be a lot of popular technobabble. The key words today should probably be "non-local physics", "nuclear spin", and a mathematical creature called "curl". The Nazis apparently discovered new, unconventional ways of applying these concepts to electric and magnetic fields. The results and capabilities would be astonishing—even seemingly magical—to the modern physicist. Our institutions cannot currently handle these concepts because the required paradigm shift is enough to choke a swampful of alligators, and our staid, complexity-loving, overfunded atherosclerotic institutions are just not willing to go in wildly new directions. (See also: In Search of the Geometry of Space, Time and Motion, Reactionless Propulsion, and The Problem of Quantum Locality)
The Truth about the Wunderwaffe, Igor Witkowski, 2nd ed. (2013)
Saucers, Swastikas and Psyops, Joseph P. Farrell (2011)
Secrets of the Third Reich: The Rediscovery of Vimanas http://www.youtube.com/watch?v=8ryS1o0u31E
Man-Made UFOs 1944-1994 50 years of Suppression, Renato Vesco & D. H. Childress (1994). The title of this book says it all. It is "a comprehensive and in-depth look at the early 'flying saucer' technology of Nazi Germany and the genesis of early man-made UFOs . . ." (back cover). It is quite interesting from an historical perspective, is well-researched and well-written, and has abundant illustrations. It could use an index however. My favorite samplings:
What becomes clear to researchers into man-made UFOs and early German discoid craft is that this technology is real, "Above Top Secret," and is possessed by various groups on this planet today. Not only are the Americans and British said to have this technology but so do such countries as Russia, China, France, Italy, Israel, and Chile. Private corporations, individuals and agencies are also claimed to possess "craft." (p. 369; see also quiz above)
In June of 1936 Marconi demonstrated to Italian Fascist dictator Benito Mussolini a wave gun device that could be used as a defensive weapon. . . . Marconi demonstrated the ray on a busy highway north of Milan one afternoon. Mussolini had asked his wife Rachele to also be on the highway at precisely 3:30 in the afternoon. Marconi's device caused the electrical system in all the cars, including Rachele's, to malfunction for half an hour, while her chauffeur and other motorists checked their fuel pumps and spark plugs. At 3:35 [sic?] all the cars were able to start again. Rachele Mussolini later published the account in her autobiography. (p. 362)
It also comments (p. 338) on Air Force Regulation 80-17 , which I quote here, in part, from another source (http://www.cufon.org/cufon/afr80-17.htm ):
AIR FORCE REGULATION 80-17
DEPARTMENT OF THE AIR FORCE
Washington, D.C. 19 September 1966
Research and Development
UNIDENTIFIED FLYING OBJECTS (UFO)
This regulation establishes the Air Force program for investigating and analyzing UFOs over the United States. It provides for uniform investigative procedures and release of information. The investigations and analyses prescribed are related directly to the Air Force's responsibility for the air defense of the United States. The UFO Program requires prompt reporting and rapid evaluation of data for successful identification. Strict compliance with this regulation is mandatory.
. . .
2. Program Objectives. Air Force interest in UFOs is two-fold: to determine if the UFO is a possible threat to the United States and to use the scientific and technical data gained from study of UFO reports. To attain these objectives, it is necessary to explain or identify the stimulus which caused the observer to report his observation as an unidentified flying object.
a. Air Defense. The majority of UFOs reported to the Air Force have been conventional or familiar objects which pose no threat to our security.
(1) It may be possible that foreign countries may develop flying vehicles of revolutionary configuration or propulsion.
UFOs have, with impunity, invaded restricted airspace, prowled around nuclear missile sites, destroyed radar installations, and are capable, by far, of out-maneuvering the most advanced military fighter aircraft any nation has produced. Popular opinion seems to favor the idea that this technology is extraterrestrial, being developed by "space aliens". But if in fact it has been developed by humans, this prospect is even more frightening! Does only one group or country have this capability? Or is there a "balance of power"—some equivalent of Mutual Assured Destruction (MAD)? How are they financed? Where is the manufacturing done? How has the science been kept secret for so long? How did our proud country (or anyone's) "miss the boat"? Should we continue to develop rockets for space exploration, or develop other far superior technologies (which someone obviously possesses)? All sorts of awkward, agonizing questions can be raised. Some of these questions are addressed in this book.
Vesco's book also has an important chapter, The Advent of "Suction Aircraft". Aerodynamic drag could be significantly reduced by sucking in the boundary layer on portions of a wing surface. The effect had practical limits when applied to conventional aircraft. It would work better if the fuselage could be eliminated. And so the next incarnation was a flying wing. An even fuller exploitation was possible with a saucer shaped aircraft. These aircraft used conventional technology, but had spectacular reductions in drag, and equally spectacular increases in speeds. Later, these developments were followed by stunning breakthroughs in the application of electric and magnetic fields to the field of aviation. The result is (apparently) the modern-day UFO.
Also valuable are the numerous photos and drawings of saucer craft. Included are sixteen photos from the Brown/Bahnson experiments.
Sideways relevant: "Super-insulated clothing could eliminate need for indoor heating", Lisa Zyga (Jan 2015) Silver nanowires are used to reflect infrared heat from the body. The fabric is also electrically conductive and could be used to control exposure to electric fields. http://phys.org/news/2015-01-super-insulated-indoor.html
The Rise of the Fourth Reich, Jim Marrs (2008). Some notable quotes:
"The Germans were defeated in World War II . . . but not the Nazis. They were simply forced to move." (p. 4)
"One edition of the American Heritage Dictionary of the English Language defined fascism as "a philosophy or system of government that advocates or exercises a dictatorship of the extreme right, typically through the merging of state and business leadership together with an ideology of belligerent nationalism." (p. 6)
"In twenty-first century America, many thoughtful persons have witnessed what appears to be a recycling of the events of pre-World War II Germany: the destruction of a prominent national structure; rushed emergency legislation; the rise of a secretive national security apparatus; attempts to register both firearms and people, coupled with preemptive wars of aggression propelled by fervent nationalism." (p. 7)
"Lenin apparently came to understand that he was being manipulated. "The state does not function as we desired," he once wrote. "A man is at the wheel and seems to lead it, but the car does not drive in the desired direction. It moves as another force wishes." " (p. 10)
"President Woodrow Wilson, who was intimately connected with conspiratorial power, once wrote, "Some of the biggest men in the United States, in the field of commerce and manufacture, are afraid of somebody, are afraid of something. They know there is a power somewhere so organized, so subtle, so watchful, so interlocked, so complete, so pervasive that they had better not speak above their breath when they speak in condemnation of it." "(p. 257)
"Forrestal noted, "These men are not incompetent or stupid. They are crafty and brilliant." " (p.258)
"The Bavarian Illuminati was formed on May 1, 1776, by Adam Weishaupt . . . . Weishaupt also evoked a philosophy that has been used with terrible results down through the years by Hitler and many other tyrants. "Behold our secret. Remember that the end justifies the means," he wrote, "and that the wise ought to take all the means to do good which the wicked take to do evil." Thus, for the enlightened —or "illuminated"—any means to gain their ends is acceptable, whether this includes deceit, theft, murder, or war. The key to Illuminati control was secrecy. "The great strength of our Order lies in its concealment. Let it never appear in any place in its own name, but always covered by another name, and another occupation", stated Weishaupt." (p. 13)
"Why of course people don't want war. . . . That is understood. But after all it is the leaders of the country who determine the policy, and it is always a simple matter to drag the people along, whether it is a democracy, or a fascist dictatorship, or a parliament, or a communist dictatorship. . . . Voice or no voice, the people can always be brought to the bidding of the leaders. That is easy. All you have to do is to tell them they are being attacked, and denounce the pacifists for lack of patriotism and exposing the country to danger." (p. 345, said by Reichsmarschall Hermann Goering )
"While Nazi science was brought to America after World War II, so were attendant Nazi restrictions on scientific liberty. . . . Such tight inner control over scientific advances was reminiscent of the late-war Nazi SS control over technology in the Third Reich." (p.262)
Food for thought, at least. Be sure to read the Epilogue. The book has an index and a chapter-by-chapter list of sources.
See also http://www.theoccidentalobserver.net/2012/01/age-of-the-psychopaths/
'the rich oppress' —James 2:6
UFOs for the 21st Century Mind: A Fresh Guide to an Ancient Mystery, Richard Dolan (2014) My review: This 486 page book is another well-written comprehensive overview of the complex, multifaceted field of UFOs by Dolan:
What we need is an up-to-date assessment of where we are in this incredible field. That is why I wrote this book. This is a comprehensive overview of the UFO phenomenon for people of the 21st century. (page 3)
When assessing the full range of the UFO subject, one is struck by a richness, depth, and profundity that is nothing short of astonishing. Whether we examine it as detectives, philosophers, historians, political analysts, psychologists, intelligence experts, aviation geeks, biologists, astronomers, physicists, cynics, or utopians, we find that the subject simply gets deeper and deeper the further along we go. (page 470)
I am glad Dolan makes the last statement quoted above. Most of the UFO literature is about reports of sightings of UFOs, and caters to the "mystery addiction syndrome" of the public. Reading through all this stuff can get rather boring. There is virtually nothing published about what kind of physics is used by UFOs, and nothing about how to reproducibly build a "Model T" version of a UFO. Still, the mainstream scientific community seems to be slightly more accommodating to "non-local physics" than it has been in the past (see http://www.universetoday.com/108044/why-einstein-will-never-be-wrong/ and the comments by Brian Fraser). Dolan reminds us that this is still a rich and varied field that can keep all sorts of investigators busy for a very long time.
UFOs and the National Security State: The Cover-Up Exposed, 1973-1991, Richard M. Dolan (2009) (My review: This 638-page book is "the nearest thing to an official history of the UFO phenomenon that we're ever to see" (back cover). It is meticulously researched, documents an abundance of UFO encounters, and presents insights into the complexities of public perceptions, government secrets, obfuscation, disinformation, financing of "black" projects, the role of the military/industrial complex, and how "break-away civilizations" can come into existence. It has an extensive bibliography and a thorough index. Overall, it is a very well crafted, very informative work. )
UFOs and the National Security State: Chronology of a Cover-up, 1941-1973, revised edition, Richard M. Dolan (2002) (Very similar to the above by Dolan, but covering an earlier time period.)
"UFO Destroys Vandenberg Missile - Prof. Robert Jacobs Testifies"
Flying Saucers and Science, Stanton T. Friedman, MSc. (2008) (My review: The author takes the position that flying saucers are a manifestation of visits by extraterrestrial space aliens, apparently for no better reason than the claim that the technology is so far advanced beyond ours that saucers cannot be anything manufactured on Earth. Otherwise, the author's research and reasoning are careful and thorough. His insights about secret government projects are valuable and illuminating.)
Need to Know, Timothy Good (2007) (My review: This is another good book that describes an abundance of UFO sightings and encounters mostly from military sources. Military aircraft encounters with UFOs are definitely very interesting. The author leans towards the hypothesis that some UFOs are extraterrestrial, but that the United States and other governments have (very secretly) developed comparable technology. The presentation is clear, balanced, and well documented. The book has chapter bibliographies and a thorough index.)
"You'd have an aircraft flying along, doing around 500 knots and a UFO comes alongside and does some barrel rolls around the aircraft and then flies off at three times the speed of one of the fastest jets we have in the Air Force. So, obviously, it has a technology far in advance of anything we have."
(This kind of "hot-dog show-off" behavior would almost be expected from a group of military people whose country lost WWII, but which had far superior technology, and wants to show it off.)
UFOs: Generals, Pilots, and Government Officials Go on the Record, Leslie Kean (2011) (For a review see http://www.ufodigest.com/article/kean-eye-ufos-chris-rutkowski-leslie-keans-new-book ; My thoughts: Leslie states that "UFOs became the focus of my professional life after the publication of my first story about them in the Boston Globe. . . . I naively thought this would have to generate some kind of news buzz, and that other journalists would eagerly jump in to pick up where I had left off . . . Amazingly, nothing happened." I am sympathetic to her bewilderment. I wanted to start a group which was willing to build a "Model T" version of a flying saucer, just to prove that the technology to do so has been around for 115 years. The result: nobody was interested. I also wrote to probably about a thousand people about how to convert tens of thousands of tons of Spent Nuclear Fuel into valuable metals by a safe, inexpensive process—a terrific commercial opportunity for somebody. The result: nobody was interested. This leaves me wondering which is more of a mystery: the understanding of wondrous technology, or our utter lack of interest in the same? )
See also: COMETA Report (1999) http://www.ufoevidence.org/topics/Cometa.htm
UFO STEALTH IN THE 1940'S? http://ufodigest.com/article/ufo-stealth-0107 ". . .He photographed the exhaust trail which apparently had no flying machine generating the exhaust vapor that was leaving a telling trail. . . . ."
Russia's Roswell Incident and other Amazing UFO Cases from the Former Soviet Union, Paul Stonehill, Philip Mantle (2012) (My review: definitely very interesting reading. Covers the Dalnegorsk crash, the Tunguska Event, and many encounters with UFOs by scientists, pilots, military personnel, cosmonauts and astronomers. The former Soviet Union had an extensive military apparatus, and so there are many reports from military sources involving UFOs interacting with nuclear weapons storage depots, ICBMs, destruction of radars, their presence during missile tests and training flights, pilots and fighter jets disappearing in flight without a trace, and so forth. There is a lot of potentially useful, specific information in this book.)
Triangular UFOs: An Estimate of the Situation, David Marler (2013) From the book: "This is the first UFO book dedicated solely to the triangular UFO phenomenon. It will examine the history of sightings; outline patterns within the data; and develop a working profile of these objects. Upon reading this book, I hope the data speaks for itself. I believe it strongly suggests we are dealing with a tangible reality that has been with us for a very long time and requires further scientific investigation. " Chapter 7 documents "twenty common characteristics repeatedly described by eye witnesses which help provide a working profile of these UFOs."
Indeed, such a "working profile" would be helpful in developing insights into the scientific principles used by these machines. My own belief is that the people who have built these machines have clearly become masters of "non-local physics", and Marler's list is consistent with that conclusion. Perhaps someday people will become more interested in the physics implied by UFOs instead of the current "golly gee whiz" focus on the entertainment value of sightings. This well written, well documented book is a step in the right direction.
Some additional insights on the physics and technology:
Kennedy's Last Stand: Eisenhower, UFOs, MJ-12 & JFK's Assassination, Michael E. Salla (2013) This was a book I did not think I would be interested in reading, but the reviews suggested otherwise, so I bought a copy. The book is very readable and very well researched and referenced. It is about Eisenhower, UFOs, the Military Industrial Complex, Kennedy, MJ-12, the CIA, and the strange deaths of James Forrestal, Marilyn Monroe, and of course Kennedy himself. A key conclusion is that the disclosure of information about UFOs (and a formerly secret base known as S4 in Nevada) is under the control of the CIA. The President, as commander in chief of the military, has access to military secrets, but not the secrets kept by the black world of the CIA. This creates "plausible deniability" by the President regarding UFO issues, and removes the topic from the vagaries of the four-year political election cycles.
I at first found it odd that the book uses the phrase "UFOs and extraterrestrial life" (or similar expressions) NUMEROUS times (sometimes thrice on a page). The author evidently believes that the two topics are intimately connected, but this theme is not developed in the book. Dr. Salla is a "pioneer in the development of 'exopolitics', the study of the main actors, institutions and political processes associated with extraterrestrial life" (p. 237; http://exopolitics.org ) and so his use of this terminology is hardly surprising. I do not share his view however; the Bible says nothing pro or con about the existence of "space aliens" or extraterrestrial life. But if such does exist, why is God permitting them to visit Earth at this time? This supposition seems to be totally inconsistent with themes in the Bible.
Anyway, the book was worth reading, and a real eye-opener. And very disturbing . . .
"Military Witnesses of UFOs at Nuclear Sites", National Press Club (2010) http://www.youtube.com/watch?feature=player_embedded&v=3jUU4Z8QdHI (Former Air Force officers discuss UFO sightings)
"A Preliminary Study of Sixty Four Pilot Sighting Reports Involving Alleged Electromagnetic Effects on Aircraft Systems", Richard F. Haines, Dominique F. Weinstein (2001) http://www.narcap.org/reports/emcarm.htm
"The primary purpose of this paper is to review over fifty years of pilot reports which both authors have compiled over the years. These cases involve one or more on-board systems (navigation, guidance and control equipment, cockpit displays, circuit breakers, other electro-magnetically controlled systems) were influenced allegedly when one or more UAP [Unidentified Aerial Phenomena] were physically near the aircraft. Clearly, it is both the physical proximity of the UAP as well as the transient nature of these E-M effects that make them so interesting. "
See also http://ufologie.patrickgross.org/htm/airmiss.htm (The effect on aircraft systems is now an even more serious concern for modern aircraft with "glass cockpits" and electrical flight controls ("fly by wire") ). ; http://www.ufoevidence.org/topics/EMEffects.htm
"The COMETA Report" http://www.archive.org/stream/TheCometaReport/COMETA_part1_djvu.txt
"Secret Access. UFO documentary" http://www.youtube.com/watch?v=aYsT6LxjXfo&NR=1 (My review: This is one of the best no-nonsense UFO documentaries I have seen so far).
"The Presidential UFO Libraries" http://www.youtube.com/watch?v=yWpcJt0kyxI
"Out of the Blue", http://www.youtube.com/watch?feature=player_detailpage&v=cYPCKIL7oVw (My review: another good documentary).
"UFO Landings And Physical Trace Cases" http://www.youtube.com/watch?v=z_n9nY0sqF8&feature=youtu.be
"Secret government 'X Files' Reveal UFO Sightings", https://www.youtube.com/watch?v=bpWToEWdPjM
"DOCUMENTARY 2015- UFOs Over Texas & The Smoking Gun" http://www.youtube.com/watch?v=ZxWA7fCSa-A
"UFOs And the cold war! Air Force hunting UFOs in Belgium - British TV - Sightings" http://www.youtube.com/watch?v=WcibWS3MGJs&feature=youtu.be
https://youtu.be/_E23e9cye9M "INSANE! Best UFO Sightings Of June 2015 [Breaking News] Share This!" ("black halo" at 28:57 and 29:30)
http://www.youtube.com/watch?v=wPPSyqtFq28 "Documentary 2015 - UFO DOCUMENTARY 2015- Cops vs UFOs & Captured Aliens"
http://youtu.be/nbwglQItO4s "UFO Aliens under Antarctic Ice Caps" (this is a Russian-language documentary (with captions in English) about possible Nazi flying saucer bases in Antarctica) https://youtu.be/MwUpPwyyvLw
http://www.youtube.com/watch?v=iezJY74GsX8 http://www.youtube.com/watch?v=wFdpBgCbv5E http://www.youtube.com/watch?v=eGbfG-hG3Qw Interviews with Bob Lazar
http://www.youtube.com/watch?v=Ab0iUcU7kZ4 (The Kapustin Yar incident, Russia 1948)
http://www.youtube.com/watch?v=dP9gExhKTjw "The UFO Experience"
"The Phoenix Lights" http://topdocumentaryfilms.com/phoenix-lights/
Lights of varying descriptions were seen by thousands of people between 19:30 and 22:30 MST, in a space of about 300 miles, from the Nevada line, through Phoenix, to the edge of Tucson. There were two distinct events involved in the incident: a triangular formation of lights seen to pass over the state, and a series of stationary lights seen in the Phoenix area.
(Years before this, I had a very similar experience, filed at: http://www.ufoevidence.org/sightings/report.asp?ID=13409 )
"The Portal - The Hessdalen Lights Phenomenon - UFO Documentary", http://www.youtube.com/watch?feature=player_embedded&v=sNObDdZPsY8 (My review: This is another good documentary involving automated multispectral observations of the Hessdalen lights. Illustrates a good scientific way of studying this phenomenon.)
"National Security Agency UFO Documents Index" http://www.nsa.gov/public_info/declass/ufo/index.shtml
"Evidence shows U.S. technology far beyond official levels", (December 2014) http://ufodigest.com/article/far-beyond-1230
"The Smoking Gun of Roswell - The Ramey Memo" http://youtu.be/TNmxXIQCnvQ
_____Alien abductions, crop circles, historical encounters, etc.
"Close Encounters of the Fourth Kind - Alien Abduction The Unwanted Piece of the UFO Puzzle", CE4 Research Group, http://www.alienresistance.org/ce4.htm (My review: This is a website giving details and conclusions from about 100 personal case testimonies about "alien abduction experiences". It is not about physics, but is included here because UFOs and "alien abductions" are often linked together in the mind of the public. And the research reaches a conclusion that would be especially interesting to Christians:
"Through the research into the case testimonies it was found that some of the experiencers were able to stop or terminate the experience. There was a recognized commonality in the method that was used among the Christian experiencers."
(In that context, see also James 4:7, 1Peter 5:8-9, 1John 5:18. Satan's motive is to deny the need for The Ransom. See "The Master Lie and its Operation" In Eden, the human race, as represented in Adam, chose to deliberately reject God's sovereignty. But the only other rulership available was that of Satan. And that is the way things are today. Satan is the "God of this Age" (2Cor 4:4, Luke 4:5-6) and his sovereignty is subscribed to worldwide. People do not want the true God to govern their lives; they want to go their own way (except of course when things go really wrong, and then they wonder "where was God when . . . ?" , forgetting entirely that the human race rejected Him thousands of years ago in Eden). Today, Christians are the only ones who have deliberately opted out of this corrupt system of governance, and Satan has no hold on them, no permission to 'touch' them (1John 5:18). The essence of this new 'contract' (so-to-speak) is expressed in the Lord's Prayer: the words are short and simple, but the effects are powerful and wide in scope. --Matthew 6:9-13)
Here's an additional thought from Wikipedia ( http://en.wikipedia.org/wiki/Paranormal_and_occult_hypotheses_about_UFOs ):
The U.S. Government Printing Office issued a publication compiled by the Library of Congress for the Air Force Office of Scientific Research: "UFOs and Related Subjects: An Annotated Bibliography". In preparing this work, the senior bibliographer, Lynn E. Catoe, read thousands of UFO articles and books. In her preface to this 400-page book she states:
A large part of the available UFO literature is closely linked with mysticism and the metaphysical. It deals with subjects like mental telepathy, automatic writing and invisible entities as well as phenomena like poltergeist (ghost) manifestations and possession. Many of the UFO reports now being published in the popular press recount alleged incidents that are strikingly similar to demonic possession and psychic phenomena.
The perceptual association of "alien abductions" with UFOs is problematic for the US government. Disclosing advanced secret technologies to the public is enough of a problem in itself (especially ones as revolutionary as antigravity, and all the spinoffs). But when that technology becomes associated with animal mutilations, crop circles, destruction of property, kidnappings, unauthorized medical experiments on humans, violation of Constitutionally guaranteed rights, and even claims of "space aliens" and alternative religions, any government will realize that it has gotten into a mess that could go far out of control very rapidly (like, in a different context, the collapse of the Soviet Union). Having become accustomed to keeping UFO matters secret for decades, it would be easy to decide just to leave it that way. See also: "The UFO phenomenon as demonic activity?" (February 1, 2016) https://noriohayakawa.wordpress.com/2016/02/01/ufo-phenomenon-the-only-viable-interpretation/ ; http://ufoculture.blogspot.com/2012/01/ufo-phenomenon-as-demonic-activity.html
Some additional thoughts from Dr. Joseph Burkes: ( http://nyufo.com/entries/ufo-sightings-news/virtual-hologram-ufo-sightings-hypothesis-21616 Joseph Burkes MD )
Many/most UFO sightings are not of physical objects but are the products of non-human intelligence employing a kind of hologram type technology to project images into the sky that all observers can see, and /or project images into the visual apparatus of selected observers in a group so that only some but not others looking at the same patch of sky are able to see the UFO. This explains what I have witnessed again and again during fieldwork. So most of so called UFOlogy which as you may understand I believe is a pseudoscience, is naturally going to reject this because it contradicts the beloved "nuts and bolts approach" that says dutifully taking down sighting reports will teach you important details about "craft." No instead it teaches about the technology of producing illusion. Perhaps it is a kind of intelligence test that most UFO fans fail miserably.
What we are left with is perhaps a basic understanding of the technology of producing illusion and how UFO Intelligence has been in the belief business for centuries. They co-create with us encounters that to a large extent match our pre-existing notions about what the phenomenon should be. So now in the space age we have flying saucers piloted by alien astronauts. Curiously this message is reinforced just as we were getting out into exploring space in the 1940s with modern rocketry. In the 1890s the airship wave recorded sightings of blimp like objects just before large hydrogen filled Zeppelins were constructed. The pilots were thought to be not spacemen but genius inventors. . . . We need to study it as a psychosocial phenomenon. How various new age belief systems, victim based "abduction" theories, and contactee cults forms with gurus all insisting that their narrow pet theories are the only ones that "make sense."
See also: "UFO Contact Creating a Belief System 22416" http://nyufo.com/entries/ufo-disclosure/ufo-contact-creating-a-belief-system-22416 http://mysteriousuniverse.org/2016/04/ufos-extraterrestrial-probably-not/ http://mysteriousuniverse.org/2016/04/unidentified-flying-objects-the-great-deception/ Weird, confusing, or contradictory information makes the recipient more open to suggestion. See http://en.wikipedia.org/wiki/Gaslighting
A point to note here is that field reports indicate UFOs sometimes appear to some observers but not others, even though they are looking at the same patch of sky at the same time. This is credited to a "non-human intelligence" employing a "technology" to put images into the minds of the observers. However, it is also consistent with the Christian view that this is the activity of demons. The selectivity is implied by 1 John 5:18 : "We know that no one who is born of God sins; but He who was born of God keeps him and the evil one does not touch him. We know that we are of God, and the whole world lies in the power of the evil one." In other words, Satan has no automatic permission to induce visions in Christians, but this is not generally true of the rest of humanity. (Luke 10:17)
Note that these kinds of induced visions could take place in any time period, even thousands of years ago. Satan undoubtedly anticipated the development of real advanced interstellar propulsion systems and produced counterfeit images and descriptions of real spacecraft that would actually exist in the future to deceive people into thinking that "space aliens" have been visiting Earth for thousands of years. As mentioned above, this is incompatible with the Ransom doctrine.
The believability of these lies is enhanced by the secrecy and compartmentalization surrounding UFO phenomena. If someone handed you a $3 bill, you would immediately recognize it as fake, because no such denomination has ever been issued in the United States. Counterfeiters know this too, and so they only counterfeit real currency. But suppose you were from a foreign country and did not know that $3 bills were fake. If someone handed you a $3 bill, and the engraving and printing and "feel" were what you expected from a real bill, you might accept the counterfeit as real. Your lack of information has allowed you to be misled. Likewise, Satan counterfeits real things, or things that could be expected to be real. He is the "father of the lie" and his lies serve only his purposes of deception.
Researchers have noted that secrecy and censorship have aided in the belief of "space aliens" and ETs:
". . . the government's handling of censorship on UFOs has contributed to a widespread belief in the existence of extraterrestrial beings visiting the Earth. . . . any historical discussion of the UFO controversy must credit, or blame, the U. S. government for at least an assist to ET belief." (pp. 133, 135) UFOs and Government: A Historical Inquiry, Michael Swords, Robert Powell, et al. (2012)
Note that UFOlogists generally view UFO phenomena through two mind sets: the "spiritual" mind set and the "nuts and bolts" mind set. The former is Satanic and the latter is pure physics. BOTH exist in the field of UFOlogy. But often they are mingled together in reports. ( "UFO Intervention - The Possibility", R. Perry Collins (1986) http://www.ignaciodarnaude.com/ufologia/UFO%20Intervention%20in%20Earth.pdf )
For "nuts-and-bolts" UFO propulsion systems see: UFO Physics
For the space-aliens-are-demons viewpoint, see "UFOs and the Christian Worldview" Jefferson Scott, http://www.jeffersonscott.com/nonfiction/ufos.htm
In a completely different context, see http://en.wikipedia.org/wiki/God_helmet
"He who digs a pit will fall into it,
And he who rolls a stone,
it will come back on him"
“Only puny secrets need keeping.
The biggest secrets are kept by public incredulity.”
“The general population doesn’t even know what’s happening,
and it doesn’t even know that it doesn’t know.”
"An editor is one who separates the wheat from the chaff and prints the chaff."
"We don't make the news. We just ignore it."
"UFOs select their witnesses . . . . their appearances are staged!"
(Jacques Vallee; see also http://cufos.org/swords2.pdf)
Pentagon Aliens, 3rd edition, William R. Lyne (1999) (My review: This book takes the position that space aliens "are actually people, whose philosophy and bizarre masquerade are alien to the American way of life, since they believe in government by anti-democratic hoax to maintain the secret power of the Trilateral commission elite, to whom our lives are very cheap." The book has a chapter on "How to Build a Flying Saucer" along with some construction tips in the Appendix, and numerous comments about Tesla technology. Unfortunately, the author's tone is frequently angry, opinionated, and resentful; the language is also a bit coarse at times.)
Messengers of Deception: UFO Contacts and Cults, Jacques Vallee (2008) online at https://docs.google.com/file/d/0BwaXvvZmDODlZTBkOE1YSldYZG8/edit?pli=1 Some samples:
"At the time I was a student, had no access to good information, and could only wonder about government attitudes. I became seriously interested in 1961, when I saw French astronomers erase a magnetic tape on which our satellite-tracking team had recorded eleven data points on an unknown flying object which was not an airplane, a balloon, or a known orbiting craft. "People would laugh at us if we reported this!" was the answer I was given at the time. Better forget the whole thing. Let's not bring ridicule to the observatory. Let's not confess to the public that there is something we don't know.
The main argument against UFOs at the time was that "astronomers don't see anything unexplained." Well, there we were, a team of professional astronomers, seeing things we couldn't explain. Not only were we denying it, we had destroyed the data!" (p. 6)
"In my spare time, I pursued my UFO studies, trying to find some pattern in the global distribution of sightings. The most clear result was that the phenomenon behaved like a conditioning process. The logic of conditioning uses absurdity and confusion to achieve its goal while hiding its mechanism. There is a similar structure in the UFO stories." (p. 7)
"The followers of modern UFO cults are often persons who, like Gregory, have become disenchanted with science and technology. Scientific reluctance to consider valid claims of paranormal phenomena is slowly driving many people to accept any claim of superior or mystical contact. The voice of science has lied too often. A large fraction of the public has tuned it out completely." (p. 13)
" "Expert opinion" on any subject of policy - from energy supply to cloning, from the ban of the SST to the censorship of TV violence - has become a game in which the answers are constantly revised, not to reflect new knowledge, but to follow the trends of academic fashion. The language of each discipline has become an esoteric jargon that cannot be penetrated even by someone with an advanced education in another field." (p. 17 )
"This is one of the little-recognized facts of the UFO problem that any theory has yet to explain. The theory of random visitation does not explain it. Either the UFOs select their witnesses, or they are something entirely different from space vehicles. In either case, their appearances are staged!" (p. 29)
"Where does this exploration lead? . . . They also suggest that our civilization may be headed for very serious trouble, with irrational forces tearing apart the old structures and replacing them by the blind institutions of inhuman beliefs." (p. 61)
This book has a lot of insightful things to say about the social consequences of the UFO phenomena. | <urn:uuid:bbe78090-c9b5-48af-b1e2-c1e54578dfb1> | 2.5625 | 103,300 | Nonfiction Writing | Science & Tech. | 50.004994 | 95,544,771 |
Marine Biology History Timeline
Aristotle identified a variety of marine species in his writings. These species include crustaceans, echinoderms, mollusks, and fish. He also recognized that marine vertebrates are either oviparous (eggs hatch outside the body) or viviparous (eggs hatch inside the body).
Aristotle's writings are the first specific references to marine life that were recorded. Because of this, he is often referred to as the father of marine biology.
A three-year voyage led by Sir Charles Wyville Thomson collected and analyzed thousands of marine specimens from all of the oceans of the world. This voyage led to the discovery of the Mid-Atlantic Ridge.
This voyage resulted in 30,000 pages of oceanographic information. It disproved Forbes' theory that life could not exist below 1,800 feet. It was the first systematic plot of currents and temperatures in the ocean.
Captain James Cook
Captain Cook took extensive voyages of discovery for the British Navy. During this time, he mapped much of the world's uncharted waters and logged descriptions of numerous plants and animals. These species were unknown to most of mankind at the time.
Cook's voyages began the modern-day study of marine biology. Because of Cook's explorations, a number of scientists began studying marine biology much more closely.
Spencer Fullerton Baird, the first director of the US Commission of Fish and Fisheries began a collection station in Woods Hole, Massachusetts in 1871. This laboratory still exists, known as the Northeast Fisheries Science Center.
This laboratory is the oldest fisheries research facility in the world.
The U.S. Fisheries Commission Steamer Albatross begins operations. The Albatross is the first vessel built by any government as an oceanographic research vessel from the keel up.
On April 15th, the White Star Liner Titanic sinks after striking an iceberg in the North Atlantic Ocean. The wreck killed more than 1,500 people.
This disaster leads to a united effort to devise an acoustic means of discovering objects in the water, forward of the bow of a moving vessel.
Sylvia Earle leads four female aquanauts on an all-female expedition known as Tektite II, Mission 6. They lived underwater for two weeks.
This was the first ever all-female expedition. Sylvia Earle's work opened the doors for women to become actively involved in marine biology.
In 1977, scientists discovered seafloor vents gushing warm, mineral-rich fluid into the cold water at the depths of the Pacific Ocean. These became known as hydrothermal vents. Many are located along the Galapagos Rift.
This marked an important day in marine history because it included the discovery of an ecosystem that is able to live without sunlight. These ecosystems rely on biota absorbing chemical energy from venting materials in a process called chemosynthesis.
On September 1, 1985, a team led by Dr. Robert Ballard discovered the Titanic, the most famous shipwreck in modern history.
The discovery of the Titanic shows how far deep-sea diving has come. This field of marine science has made vast improvements in the past decades, making it possible to explore areas that were previously off limits.
Conrad Limbaugh forms a scientific diving team at Scripps Institution of Oceanography in California. This is the first attempt at forming a marine biology program using scuba for aquatic research.
Researchers have identified a number of planets outside our solar system that both resemble Earth and carry a better-than-zero chance of containing life. None of these is as close to Earth as the newly discovered Proxima b.
Only 4.2 light-years away (about 25 trillion miles), Proxima b is our nearest neighbor outside this solar system. If you're keeping score at home, 25 trillion miles is roughly 266,000 times farther than the Earth is from the sun. While the distance sounds daunting, it's a relative stone's throw in the greater scheme of things, and it underscores just how vast our galaxy is and how much we've yet to discover.
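These figures are easy to sanity-check. The sketch below uses standard approximations for miles per light-year and for the Earth-sun distance; those constants are assumptions of the sketch, not values taken from the article.

```python
# Back-of-the-envelope check of the distances quoted above.
# Constants are standard approximations, not figures from the article.
MILES_PER_LIGHT_YEAR = 5.879e12  # ~5.88 trillion miles in one light-year
EARTH_SUN_MILES = 9.3e7          # ~93 million miles (1 astronomical unit)

distance_miles = 4.2 * MILES_PER_LIGHT_YEAR   # ~2.47e13, i.e. ~25 trillion miles
ratio = distance_miles / EARTH_SUN_MILES      # distance in Earth-sun units

print(f"Proxima b is about {distance_miles:.2e} miles away")
print(f"That is roughly {ratio:,.0f} times the Earth-sun distance")
```

The ratio works out to roughly 266,000 Earth-sun distances, which is the scale the article is describing.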
While we know little about Proxima b at this point (and we may be years from finding out more), the planet is believed to be roughly the size of Earth. While astronomers don't know whether it supports (or ever has supported) life, they do know it's within the habitable zone of a small star called Proxima Centauri, a smaller version of our sun. The planet's distance from Proxima Centauri leaves reason to be excited that it could contain water and perhaps even life.
Then again, it may not support an atmosphere, which would almost certainly rule out life given the amount of radiation Proxima Centauri emits. Although the star is smaller, the radiation levels it emits are far higher than those of our sun.
Newly confirmed, Proxima b was discovered more than a decade ago. Due to its distance, astronomers and observers were never able to provide conclusive proof that what they were seeing was indeed a planet. This all changed recently when scientists put Proxima b directly in their crosshairs and focused the attention of multiple high-powered telescopes on the area — leading to its confirmation.
Unfortunately, to confirm the planet could contain life, scientists would need to study pictures of the planet itself. Currently, we don’t have the instruments available that could snap such a photograph. Scientists are optimistic, however, that this could soon change. In fact, due to the relatively close distance from Earth, we may one day be able to send robots to explore its surface.
For now, though, it’ll remain a huge discovery that finds scientists struggling for superlatives to convey just how big a deal this is for the future of space exploration.
The advance is the result of investigative work done at the National Institute of Standards and Technology's Center for Neutron Research (NCNR), and at the National High Magnetic Field Laboratory (NHMFL) at Florida State University (FSU).
Stray magnetic fields suppress superconductivity, the resistance-free passage of electric current. But the object of the team's scrutiny — a uranium-ruthenium-silicon compound (URu2Si2) — somehow accommodates the usual antagonism between magnetism and superconductivity. At 17.5 degrees above absolute zero, once-nomadic electrons that had roamed freely about the compound's lattice-like atomic structure — and generated their own magnetic fields — begin to behave in a more orderly and cooperative fashion. This coherence sets the stage for superconductivity.
URu2Si2 belongs to a class of materials called heavy fermions, known to be reluctant superconductors. This is because current-carrying electrons in the intermetallic material interact with surrounding particles and truly gain from the experience. The association adds mass—making the electrons behave as though they were a few hundred times more massive than "normal." The heavy electrons once were thought to make superconductivity impossible.
However, numerous heavy fermion superconductors now are known, and URu2Si2 ranks among the most curious of the lot.
Unexplained was how a "hidden order" suddenly arose in the wake of the magnetic instabilities caused by the roving electrons, each one spinning and producing its own miniature magnetic field. With neutron probes, researchers managed to track electron movements and determined that the wandering particles work out an unexpected accommodation in the spacing of their energy levels.
salivary gland chromosomes
salivary gland chromosomes (salivaries) — definition from 1951
salivary gland chromosomes - Giant chromosomes occurring in salivary glands (and some other tissues) of dipterous insects (including Drosophila). Nuclei of these tissues have their chromosomes microscopically visible, unlike normal resting nuclei. Each pair of homologous chromosomes is closely adherent (paired). The chromosomes are stretched out to a much greater length than usual (up to 1 mm.) and greatly thickened by repeated duplication, i.e. they are polytene. They are marked by an elaborate pattern of transverse basophilic bands, formed by homologous regions of the numerous chromosome strands lying side by side. The pattern is due to the arrangement of the genes along the chromosomes. From genetic differences associated with changes in the pattern, numerous genes have been localized. See also: Chromosome Map.
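The "repeated duplication" the definition mentions is endoreplication: each round doubles the number of parallel DNA strands, so n rounds give 2^n strands. A minimal sketch (the ten-round figure is the count commonly cited for Drosophila salivary glands, not something stated in this entry):

```python
# Strand count of a polytene chromosome after repeated rounds of
# endoreplication (DNA duplication without cell division).
def polytene_strands(rounds: int) -> int:
    """Each endoreplication round doubles the number of parallel strands."""
    if rounds < 0:
        raise ValueError("rounds must be non-negative")
    return 2 ** rounds

# An unreplicated chromosome is a single strand; ten rounds of
# endoreplication yields about a thousand side-by-side strands,
# which is what makes the banding pattern visible under a microscope.
print(polytene_strands(0))   # 1
print(polytene_strands(10))  # 1024
```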
Species Detail - Didemnum vexillum - Species information displayed is based on all datasets.
Terrestrial Map - 10kmDistribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50kmDistribution of the number of records recorded within each 50km grid square (WGS84).
Invasive Species: Invasive Species || Invasive Species: Invasive Species >> High Impact Invasive Species || Invasive Species: Invasive Species >> Regulation S.I. 477 (Ireland)
10 March (recorded in 2009)
9 November (recorded in 2007)
National Biodiversity Data Centre, Ireland, Didemnum vexillum, accessed 22 July 2018, <https://maps.biodiversityireland.ie/Species/134525> | <urn:uuid:bdb52d2c-54bb-46e0-be43-e40807286a2c> | 2.5625 | 170 | Structured Data | Science & Tech. | 27.405 | 95,544,806 |
Nucleic acids are tiny bits of matter with large roles to play. Named for their location -- the nucleus -- these acids carry information that help cells make proteins and replicate their genetic information exactly. Nucleic acid was first identified during the winter of 1868–69. A Swiss doctor, Friedrich Miescher, found a molecule in a cell’s nucleus that could not be identified. Even at that early date, Miescher suggested that the substance could be involved in creating new cells and passing along existing traits.
A Three-for-One Deal
RNA, ribonucleic acid, is composed of phosphate, a sugar -- ribose -- and the bases adenine, uracil, cytosine and guanine. Though typically located in the cytoplasm of the cell, RNA is usually produced in the cell’s nucleus. Three major types of RNA are found in cells: messenger RNA (mRNA), ribosomal RNA (rRNA) and transfer RNA (tRNA). Managing RNA is an important part of a cell’s business. RNA is continually being produced, used, separated into parts and reused.
The primary job of RNA is to help the cell produce proteins. The mRNA begins the process by carrying the instructions for protein production from the DNA in the nucleus to the ribosomes, organelles in the cytoplasm that make protein. The ribosomes, made up of protein and rRNA, follow those directions. Amino acids are needed to build proteins, and it is the job of tRNA to carry them to the ribosomes so the organelles can finish their job.
DNA, deoxyribonucleic acid, has a twisted ladder or double helix structure. It is composed of phosphate, a sugar -- deoxyribose -- and four different bases. Three of these are the same as those in RNA: adenine, guanine and cytosine. One base, thymine, is specific to DNA. Most of an organism’s DNA is in the cell nucleus. A gene is made up of a small segment of DNA and holds genetic directions about a specific trait. The genes are organized on longer structures called chromosomes.
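The base correspondence described above — with uracil standing in for thymine in RNA — can be sketched as a toy transcription step, copying a DNA template strand into mRNA (a minimal illustration of base pairing, not a model of real transcription machinery):

```python
# Toy transcription: build an mRNA string from a DNA template strand.
# Each DNA base pairs with its complement; RNA uses uracil (U)
# where DNA would use thymine (T).
DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand: str) -> str:
    """Return the mRNA complementary to a DNA template strand."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand.upper())

print(transcribe("TACGGT"))  # AUGCCA -- note AUG, the start codon
```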
By the Book
Humans have 23 pairs of chromosomes in each cell that provide the blueprints for growth and development. DNA is the “instruction booklet” for the cell, containing the genetic information that each organism received from its parents. The “booklet” stores all the information the cell needs to carry out its functions. Organisms grow and repair themselves by making new cells. In order for this to happen, the DNA replicates itself, so each new cell usually has identical genetic information. | <urn:uuid:969d5a3e-ca50-4822-8bc5-22a4cbf2208f> | 4.375 | 563 | Knowledge Article | Science & Tech. | 47.419072 | 95,544,808 |
Following its success with an innovative "Working for Water" program, South Africa has begun experimenting with a whole new approach to conservation and restoration; an approach that has scientists "mapping" ecosystem services and land-users "farming" them. The Ecosystem Marketplace takes a closer look at these recent developments and considers whether or not "trading" will be the next new verb for ecosystem services in the RSA.

The photo looked like it came from Mars: a reddish, dry riverbed running beneath yellowed marshlands and brown hills. Even the sky, where white streaks striated a pale blue horizon, looked parched. The printer had run out of blue ink. The result was a photograph version of South Africa's Gariep River that looked decidedly thirsty.

And yet, embedded as it was in a document entitled, "Working for Wetlands, South Africa," the unintended photograph seemed strikingly appropriate, even prescient. It read like a warning: take care of South Africa's wetlands or the Gariep Basin may, itself, run out of blue in the decades ahead.

South Africa is a dry country and recent climate projections suggest that much of the nation will grow drier in the years to come. By 2025, according to a recent WWF document, "the country's water requirements will outstrip supply unless urgent steps are taken to manage the resource more sustainably."

Fortunately, South Africans are taking steps to conserve their water resources and, notably, they are using an ecosystem service-based approach to fuel their progress. Protecting watershed services in South Africa has, in fact, become the catalyst for a whole new approach to conservation and restoration in the country, an approach that some in the business are calling 'ecosystem farming.'

Ecosystem farming is interesting because it implies a very different approach to a long recognized environmental conundrum: biodiversity conservation and people's need to earn a living from their land don't always coincide.
In the United States, the government generally has resolved this conflict by paying farmers to take their land out of production. In South Africa, both public and private interests are currently testing the feasibility of paying landowners and laborers to do the opposite, i.e. to put land into a new form of production – one geared towards ecosystem services. Against this backdrop, the recently published 'ecosystem services map' of the country's Gariep Basin is especially intriguing. Could the project – a product of the Millennium Ecosystem Assessment (MA), a four-year international effort to assess the state of Earth's ecosystems – represent a road-map, not just for the development of the Gariep Basin, but also for the whole of South Africa and, by extension, for the rest of the world?
A New Kind of Map
The Gariep is not only the longest river in South Africa, it is also among the most important and most heavily regulated. Large dams and complicated transfer schemes knit together over 665,000 square kilometers of catchment, as the river flows from the mountain nation of Lesotho through Gauteng Province and on into the arid western reaches of South Africa and Namibia. On its way, the Gariep system supplies water to Johannesburg, the economic hub of Southern Africa, fuels South Africa's "grain basket" (where food for approximately 70% of the nation is produced), and supports two international biodiversity hotspots.

Given its ecological and economic importance, the Gariep Basin is, in many ways, just the sort of place in need of an ecosystem services map. And so it was that, in 2000, The Gariep Basin Millennium Ecosystem Assessment was born. Broadly speaking, the aim of the project was to provide a map that would be useful to policy-makers balancing the trade-offs associated with the protection and use of ecosystem services at the local, national and regional scales.

Toward this end, the scientists used models and participatory methods to assess the location and "irreplaceability" of three types of ecosystem service in the basin: water services; food and fuel production; and services linked to biodiversity. Water reaches were assigned classifications ranging from A to F, according to their level of ecological integrity and industrial/agricultural function. 'A' regions of the river carried proposals for strict management practices that would support biodiversity in a near natural state. In descending order, Bs, Cs and Ds allowed for successively greater alterations of natural flow, water quality and temperature.
Finally, recommendations were made that Es and Fs – stretches of the river so modified by human activity that their function potentially was impaired irreversibly – should be restored to D level when and where possible, but that some reaches of the river should be treated as sacrificial "workhorses" for industrial, agricultural and municipal water needs.

Once the basin's present water resources had been mapped, the researchers next used models to forecast attainable classifications for each area in the future. This second map thus described the 'restoration capacity' of the catchment over the course of five years, charting a path toward an increased net flow of watershed services to a variety of sectors.

A similar approach was taken in the mapping of biodiversity and food production services. The basin was gridded and each cell, representing a piece of land, was ranked according to the level of service it provided in three areas – the production of protein, cereal and biodiversity. "We used the notion of irreplaceability to assign comparable values to areas of land," explains the report. "Irreplaceability is a measure of how important the features that an area contains are to the achievement of a stated goal."

In the case of the Basin, the scientists defined their goals as the provision of the nutritional needs (in terms of protein and calories) of 70% of South Africa's population and the preservation of a baseline measure of biodiversity. The resulting map, in which areas with high irreplaceability values look like bright spots on an electricity grid, indicates those regions that should be managed most carefully for each of the respective services. Importantly, the map also reveals regions of overlap, where the same geographic area provides irreplaceable services in terms of both biodiversity and food production.
It is in these "ecosystem service hotspots," stress the scientists, that different management practices should be considered most carefully, with decision processes that weigh trade-offs explicitly and pricing policies that reflect the full cost of the land being used. While the project's managers are quick to point out that, "framing a question of ecosystem services only as an economic issue has several shortcomings," they also acknowledge that market forces can play an especially important role in assigning values to services in areas where the trade-offs between two or more management regimes must be considered. The basic notion of economics, of course, is that economic forces give price signals that, because they are continually revised, are an especially useful means of assigning and tracking value in dynamic systems. Thus, it is in the basin's most irreplaceable 'ecosystem service hotspots' that market-based conservation mechanisms may have a role to play: "We are currently exploring markets for ecosystem services," says Christo Fabricius, one of the lead investigators on the project, "but there are no examples in South Africa, that I know of, where this has been successfully implemented…yet."
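The overlap analysis described above — grid cells scored per service, with "hotspots" where high scores coincide — can be sketched as follows. The cell ids, scores, and threshold here are invented for illustration; the actual assessment used irreplaceability models, not a simple cutoff:

```python
# Toy version of the Gariep Basin hotspot mapping: each grid cell has
# an irreplaceability score (0-1) per ecosystem service; a "hotspot"
# is a cell scoring highly for more than one service at once.
HIGH = 0.8  # illustrative threshold, not from the study

cells = {
    # cell id: {service: irreplaceability score}
    "A1": {"protein": 0.9, "cereal": 0.2, "biodiversity": 0.85},
    "A2": {"protein": 0.3, "cereal": 0.9, "biodiversity": 0.1},
    "B1": {"protein": 0.85, "cereal": 0.9, "biodiversity": 0.05},
}

def hotspots(grid, threshold=HIGH):
    """Return cell ids where two or more services score above threshold."""
    return sorted(
        cell for cell, scores in grid.items()
        if sum(score >= threshold for score in scores.values()) >= 2
    )

print(hotspots(cells))  # ['A1', 'B1']
```

Cells like "A1", which is irreplaceable for both food production and biodiversity, are exactly where the report argues trade-offs must be weighed most explicitly.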
Brick by Brick, Tree by Tree
While true market-based conservation programs per se aren't up and running in South Africa, a suite of public works programs is laying the foundations upon which they might soon be built. Throughout the 20th century, public works projects generally focused on regulating rivers – through dams, dikes or irrigation schemes – in ways that made them less natural. In the first decade of the 21st century, South Africa has been widely recognized for turning this paradigm on its ear through a program called 'Working for Water'. By restoring watersheds to their 'natural' state, South Africans are harvesting the benefits of ecosystem services while simultaneously providing jobs to their nation's poor. Invasive species suck up a great deal of water in South Africa – a single eucalyptus can use up to 400 liters of water in a day. Consequently, their removal immediately increases the amount of water available to recharge water tables. Recognizing that two of the country's wrongs – unemployment and water-scarcity – might make a right, the South African government began paying people to clear invasive species out of river catchments in 1996. They called the program Working for Water and, in the decade since its inception, they have watched it grow from strength to strength. Click here for more on WfW. Now, Working for Water's impact is rippling ever wider through a series of spin-off programs: Working for Wetlands opened up shop in 2000 to restore the water filtration services of native marsh habitat; likewise, Working on Fire began dispatching crews to sustain healthy forests/veld and prevent wildfires last year; and a new program near Port Elizabeth, called Working for Woodlands, is beginning to restore pastoral lands to sustain biodiversity and sequester carbon. 
Taken as a whole, the projects constitute not only the largest conservation program on the African continent, but also a sea-change in terms of the recognition of the value of the services provided by healthy ecosystems. "Programmes like Working for Wetlands, Working on Fire, and Working for Woodlands, not only provide 'value' and employment because of their pro-poor policies, but also engender a conservation ethic amongst their workforce," says Val Charlton, Advocacy Coordinator of the Working on Fire Programme. "We could use more programmes like this – with lots of synergy and potential income generation possibilities based on land-users acting responsibly, looking after their land." The idea that land-users might not only act as stewards of the ecosystem services flowing from their land but also benefit financially from doing so is the win-win goal of modern conservation – the environmentalist's version, so to speak, of having one's cake and eating it too. In the case of South Africa, however, the idea is that land-users are, at the same time, baking more cakes. This, in a nutshell, is the basic notion behind ecosystem farming. And so, importantly, the new programs in South Africa are not only mapping and harvesting ecosystem services like soil protection, water delivery and carbon sequestration, they are also investigating the long-term economic returns that might convince private stake-holders to invest in increasing them. Those at Working on Fire, for instance, are stating their case to private agricultural and silvicultural enterprises. "The commercial sectors of Forestry and Agriculture suffer extensive financial loss as uncontrolled fires destroy crops, plantations, buildings and equipment," reads the program's website. "As this project aims to provide direct benefits to private sector bodies, it is expected that this sector will in return, support the venture." 
Working for Woodlands, meanwhile, is investigating potential income streams to entice private and communal land-users to undertake restoration work on their land. "Ultimately the aim is to remunerate the land-user for delivering services such as biodiversity conservation and the protection and maintenance of ecosystem functions – i.e. erosion/soil regimes, water delivery and quality and –the most talked about one at the moment—carbon sequestration," says Christo Marais, the Executive Manager of Strategic Partnerships at South Africa's Working for Water Programme. Working for Water and its sister programs – because they are seeded and sustained by government money rather than by direct payments from the users of their services – are not true market-based mechanisms, but rather an excellent example of how innovative public programs can create positive synergies between poverty alleviation and ecosystem restoration. Nonetheless, as the programs explore new funding streams in the private sector and begin to cultivate the notion of ecosystem farming among landowners, they are inching South Africa ever closer to the widespread deployment of market-based conservation. Experts in the field, however, warn that, before the 'mapping' and 'farming' of ecosystem services can actually generate 'trading' in South Africa, uncharted and tricky waters have yet to be navigated.
Here be sea-monsters?
"South Africa is a mix of both first and third world economies, with all the challenges associated with such," says Charlton of Working on Fire. "At the third world level, poverty is dire, and it is extraordinarily difficult to preach ecosystem services approaches to an audience that is starving – they are not thinking about tomorrow, only the meal that needs to be put on the table today. Thus the first challenge is to make conservation meaningful to the poor." Charlton's point is an important one. In a country where 8 million people still lack access to safe drinking water, the notion of farming the resource for others is a foreign, even absurd, idea. "Although Payments for Environmental Services (PES) are understood as a market based mechanism, in most instances in this country and region poor communities are providing environmental services without compensation and these are typical cases of market failure and a lack of bargaining power in the transaction of services," says Paula Nimpuno of the Ford Foundation. "[Our] current work with Resource Africa has the intention of mapping but also of improving our understanding of how to make the ecosystem market approach benefit the poor." Bread and butter politics reign supreme among the poorest of the poor in South Africa. If they are to succeed, market-based conservation mechanisms like the users-pay approach to ecosystem farming described above (also known as a PES or ESP model) must justify their relevance in these starkest of terms. Equally important, they must make sure the poor can access buyers for the services they render and that they can negotiate with them on equitable grounds. Click here for more on PES programs and the rural poor.
It is not all Poverty
Although poverty is perhaps the most pressing issue relating to ecosystem services in South Africa, it is worth noting that 83% of the country is privately owned, much of it by white landholders who are not impoverished. What of market mechanisms in these areas? Mark Botha, of the Conservation Unit at the Botanical Society of SA, cautions that important conservation opportunities are likely to be missed if these parties are not welcomed to the table. "Tenure and ownership are well defined in the private areas, and there are opportunities to link specific payments to well defined management actions (biodiversity friendly land use, increased run-off, increased carbon storage). However, international NGOs and donors are not prepared to test PES with this politically marginal and not-poverty stricken group," says Botha, who has looked carefully at the PES approach in South Africa. "The focus on poverty and communities has taken us away from some more direct PES opportunities." Evidence exists to suggest that Botha is right in looking closely at private, as well as communal, land-users. Private landholders in the country have had the right to use and manage the wildlife on their land for the last several decades, and the result, according to the Millennium Assessment "has been a doubling of protected land as well as increased economic benefits." Clearly, private property owners are well positioned, and perhaps ready, to take up the challenge of ecosystem farming on a much wider scale than anyone else.
The Voyage Ahead
Finding the synergies between poverty alleviation and ecosystem service conservation, while at the same time ensuring market-access to both economically and politically marginalized populations, is the next challenge for South Africa as it moves from mapping and farming its ecosystem services, through to trading them. Tied up as this challenge is in the past as well as the future, striking the right balance will be neither simple nor easy. History has shown, however, that it would be a mistake to count out this particular nation's possibility of success simply because of political, social and economic complexities. When it comes to the successful navigation of troubled waters, there may be no better boat to follow than that flying the green, black and yellow flag of the New South Africa. Amanda Hawn is the Assistant Editor of The Ecosystem Marketplace, she may be reached at email@example.com. | <urn:uuid:752a4b70-f2af-4a00-9c2b-56063a935afe> | 2.875 | 3,297 | Nonfiction Writing | Science & Tech. | 24.797394 | 95,544,811 |
Michigan officials identify 2 new aquatic invasive species
LANSING — State environmental officials are warning that Michigan waterways are facing new threats from two invasive species.
Scientists recently discovered freshwater algae known as didymo in the St. Mary’s River near Sault Ste. Marie. The other species, New Zealand mud snails, were found in the Pere Marquette River near Ludington, according to the state Department of Environmental Quality.
The algae had been found in small, sporadic concentrations over the past century. The snail had never been seen in Michigan before.
Both species pose a threat to recreational activities because they can easily attach to boats and fishing gear. They also have the potential to affect drinking water supplies.
“These two species have each had significant impact on native ecosystems,” Michigan Department of Environmental Quality Director Dan Wyant said. “They degrade and in some cases ruin popular fisheries, and they can significantly alter the foundation of an entire waterway.”
Didymo algae is known to blanket rivers and can kill food sources for fish. New Zealand mud snails also can disrupt the food chain for fish by removing algae from rocks that normally feed insects that serve as a dietary staple for species like salmon and trout.
As a result of the discoveries, the agency is reminding boaters and anglers to take steps to clean, drain and dry their equipment to help prevent the spread of didymo, New Zealand mud snails and other aquatic invasive species.
Other invasive species wreaking havoc on ecosystems in Michigan are sea lampreys, zebra and quagga mussels, and other fish and plants.
Copyright 2015 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. | <urn:uuid:c8d0102a-bdca-49c5-b8fb-57cdfaf97f66> | 3.125 | 377 | News Article | Science & Tech. | 37.185425 | 95,544,827 |
Bird Non-Evolution from Dinosaurs
December 29, 2016
Losing traits won't make a dinosaur fly, and other conundrums in the presumed dinosaur-to-bird evolutionary story.
Reality Rushes Evolutionists
December 14, 2016
"Earlier than thought" is a common phrase encountered when evolutionists test their speculations against the real world.
Proof of Dinosaur Feathers?
December 9, 2016
Opinions are swirling about an amazing piece of amber with enclosed feathers. Let's look at what is known so far.
More Original Protein Found in Older Bird Fossil
November 22, 2016
The new discovery from China dates back 130 million years on the evolutionary timeline.
Early Fossil Bird Feathers Were Modern and Colorful
November 19, 2016
The predicted gradual sequence from fuzz to flight remains mythical, as modern feathers show up too early for Darwin's comfort.
More Original Dinosaur Protein Found
November 10, 2016
This time Mary Schweitzer's team found keratin protein on a claw of an ostrich-sized dinosaur from Mongolia.
Dinosaur Pickles Its Brain
October 29, 2016
An unusual rock found on a beach is a dinosaur's fossilized brain, paleontologists say. How did a squishy thing turn to stone?
Fossil Flaw Tosses Years of Evolutionary Research
October 26, 2016
Fossils may be real, but the methods used to analyze them have come under fire, with implications for Darwinian theory.
Finding Dinosaurs Is Not the Same as Explaining Them
October 19, 2016
What's turning up in dinosaur digs around the world? Bones, footprints and speculations.
Evolution: A Theory in Constant Revision
October 10, 2016
Darwinian evolution survives by constant patching of weaknesses in its web of belief.
Human Evolution Surprises Continue
October 6, 2016
The observational environment causes evolutionary theory to adapt.
Mark Armitage Wins Legal Victory
October 4, 2016
The microscopist fired for his publication of Darwin-embarrassing dinosaur soft tissue has won a historic settlement against Cal State University.
Medieval Dinosaurs Too Incredible for Materialists
October 1, 2016
Window dressing on the rock wall of a medieval church stirs unbelief, anger among anti-creationists.
Darwin Fish Lacks Tetrapod Legs
September 9, 2016
You can see a transition between a fish and a land creature in fossils and genes only if you have a vivid imagination.
Birds and Pterosaurs Flew Together
September 2, 2016
Does it make evolutionary sense to find birds flying with pterosaurs? | <urn:uuid:3a084577-a80b-41a3-9a62-eaa2bbf117b0> | 2.9375 | 548 | Content Listing | Science & Tech. | 38.602283 | 95,544,835 |
(Walking Times) As the term geoengineering creeps ever more into public consciousness, people are being persuaded that weather modification schemes will play an important role in public safety as a response to climate change. Most people are behind the curve on this issue, though, as independent journalists and private individuals have for years been documenting what is happening in our skies.
For the last several years, a Washington state resident has been documenting geoengineering activity in the skies over the Olympic Peninsula, reporting on unnatural cloud formations, atmospheric spraying, and electromagnetic warfare testing by the Navy. In addition to the obvious manipulations of the sky at large, she has documented an abundance of mysterious white fibrous materials falling from the sky.
In a post on September 6th, 2017, V. Susan Ferguson shares the following images along with an explanation of where her research into this matter has led.
Regarding the above photo, she notes:
“The above photo was taken by me a few months ago. Please note the patterns that look like spikes or limbs off a stem. These remind me very much of the kind of cloud formations I often find on NASA Worldview now. This photo is enlarged and contrast enhanced to show the structural details.” [Source]
All of the photos were taken at her home within the last two years. The substance appears as a wiry dust covering everything outside, and occasionally inside. When magnified, the images reveal fine white fibrous strands of unknown composition.
Speculation on the material and methods of dispersal for this type of event is not limited to Washington state, as there have been similar incidents around the world in recent years. A small publication in France reported on a similar fibrous substance that fell from the sky in 2013 after suspicious cloud formations were seen. Similar examples appeared in West Texas and Arizona in 2015, when many reported seeing larger clumps of filaments descending from the sky and sticking to everything.
Ferguson took the above photo of the cover of a flashlight after allowing floating material to collect on the lens. Her suspicion is that this material is CHAFF, a combination of metallic and plastic strips sprayed by military aircraft to affect communications equipment. This theory is quite plausible given the circumstances surrounding the Navy's ongoing program of electromagnetic warfare testing, in which low-flying and extremely noisy military aircraft frequently operate in her area.
”Above is the flashlight I captured the particles that the US Navy Growlers were and are still dropping on me and everyone here on the Olympic Peninsula. The size of these aluminum strips which are commonly known to be CHAFF reflect their age, because now all the metals are nano-sized. The nano-versions make them much more difficult to ‘see’ but easier to breathe. Not that the military gives a damn about the people it is supposed to be defending. It does not! Note the various colors which are also typical of metal CHAFF.” [Source]
The Navy’s involvement here is not a theory, and is publicly known.
”The U.S. Forest Service, in a draft decision released Tuesday, will grant the Navy a permit to conduct electronic warfare training with ground-based transmitters in the Olympic National Forest.
The training involves Whidbey Island-based EA-18G Growler aircraft crews who would be tasked with detecting signals from ground-based mobile transmitters. The Navy could place these transmitters at any of 11 Forest Service sites under the proposed five-year permit.
The Navy Growler crews already train over the Olympic Peninsula. The addition of the mobile transmitters will enable the Navy to expand that training to include exercises now done at a more distant location in Idaho.” [Source]
Below is another photo from her residence in Washington, showing this type of explicitly fibrous dust appearing on dishes within the home:
”One more (above), you can see the colors in the fibers. My friend says they are standing ‘erect’ because they are still electrically charged and the ceramic glaze on the platter does not conduct electric charge.” [Source]
Conspiracy Theory or Truth Untold?
Public knowledge of phenomena like this is kept under wraps by larger media organizations, who report on this subject as if it were a meritless conspiracy theory even when photographic evidence is presented. No credible government or scientific organization can or will explain what is happening, though, and so the citizens' movement to uncover the truth about chemtrails and government interference with the atmosphere continues to grow.
If people are breathing these materials, and their origin and makeup is unknown, then the question becomes, ‘is this phenomenon contributing to public health issues?’ Without fair and earnest reporting into the matter, we may never fully understand what is happening in our atmosphere today and how it is affecting our health. | <urn:uuid:757aef77-1c12-42fe-a049-184ff3002120> | 2.71875 | 1,006 | News Article | Science & Tech. | 35.59781 | 95,544,841 |
In this chapter, we examined some of the more advanced problems associated with processing mouse events. We looked at using mouse cursors to give a greater level of feedback to the end user. You can use the stock cursors, or you can create custom cursors.
You saw that hit testing is made easy by functionality in the Rectangle structure, and the Region and GraphicsPath classes.
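The chapter performs hit testing with .NET's Rectangle, Region, and GraphicsPath classes, but none of that code appears in this summary. As a language-neutral sketch of the two underlying tests (point-in-rectangle, and point-in-polygon via ray casting), here is a Python version; all function names here are mine, not from the book.

```python
# Hypothetical sketch of the two hit tests the chapter performs with
# .NET's Rectangle and GraphicsPath classes. Names are illustrative.

def hit_test_rect(x, y, rect):
    """Return True if point (x, y) falls inside rect = (left, top, width, height)."""
    left, top, width, height = rect
    return left <= x < left + width and top <= y < top + height

def hit_test_polygon(x, y, vertices):
    """Ray-casting test: count edge crossings of a ray extending right from (x, y)."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

A mouse-down handler would typically run a test like this against each visible shape in reverse z-order and treat the first hit as the clicked object.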
You also saw the difference between drawing during a Paint event and drawing during a mouse event. You can get a Graphics object for the form during the mouse event and draw without waiting for a Paint event.
The chapter next covered how to draw directly to the display using an alpha technique, creating a semitransparent image of the object being dragged. We also examined the preventative measures that keep your control from believing the mouse button is down when it is not.
Finally, you saw how to code an application that allows the user to select parts of the display area by holding the mouse just outside the client area in a manner similar to how selection works in many text editors.
Keywords: Custom Control, Mouse Button, Graphic Object, Event Handler, Mouse Cursor
| <urn:uuid:d049aa45-f843-4b2b-b425-7e9dd6e2ad05> | 3.203125 | 251 | Truncated | Software Dev. | 46.352188 | 95,544,853
Although these findings need confirmation with further research, the suggestion may provide cosmologists with a long-sought clue about how the infant universe evolved.
This study will be published online by the journal Science, at the Science Express website, on 25 October, 2007. Science is published by AAAS, the nonprofit science society.
“These findings open up the possibility of looking for cosmic defects, similar to crystal defects, in the fabric of the universe. Although their existence has been proposed by theorists for decades, no defects have been seen. The jury is still out on the cold spot’s origin, but this surprising finding will be testable and may lead to new views of the cosmos in its infancy in years to come,” said Joanne Baker, associate editor at Science.
“Science is honored to be publishing this important research, and it seems fitting that an international collaboration between Spanish and British scientists be presented the same week that Spain is celebrating the importance of scientific achievement, through the Prince of Asturias Awards,” she said.
The research team, led by Marcos Cruz of the Instituto de Fisica de Cantabria, in Santander, Spain, was careful to say that they have not definitively discovered a defect. Rather, they have found evidence in the cosmic microwave background -- the frozen map of the early universe from the time when the first atoms formed and became separate from photons, hundreds of thousands of years after the Big Bang -- that could be explained by the presence of a defect.
Because defects would have formed at extremely high temperatures, at particle energies far in excess of those achievable at laboratory accelerators, their properties would provide physicists with powerful clues as to the fundamental nature of elementary particles and forces.
"It will be very interesting to see whether this tentative observation firms up in coming years. If it does, the implications will be extraordinary. The properties of the defect will provide an absolutely unique window onto the unification of particles and forces," said Neil Turok of the University of Cambridge in Cambridge, United Kingdom, who is a coauthor of the Science study.
Shortly after the Big Bang, the universe began to cool and expand, undergoing a variety of phase transitions -- more exotic versions of the gas-liquid-solid transitions that matter experiences on Earth.
In both the early universe and the average kitchen freezer, when matter changes phase, it does so irregularly. In an ice cube, for example, cloudy spots mark defects that formed as the water crystallized.
In the mid-1970's, particle physicists realized that different sorts of defects should also have developed as various particles separated from the infant universe's hot plasma.
One such defect, known as a texture, is “a three-dimensional object like a blob of energy. But within the blob the energy fields making up the texture are twisted up,” according to Turok.
Textures and other defects should be detectable as temperature variations in the cosmic microwave background.
“The cosmic microwave background is the most ancient image we have of the universe and therefore it’s one of the most valuable tools to understand the universe’s origins. If this spot is a texture, it would allow us to discriminate among different theories that have been proposed for how the universe evolved,” said Cruz.
When Turok and his colleagues first described cosmic texture and showed how it might be detected, the cosmic microwave background hadn’t been mapped accurately enough to detect them. But since 2001, the Microwave Anisotropy Probe, also known as WMAP, has provided a detailed survey of the temperature changes across the cosmic microwave background.
The Science study began with Cruz and his colleagues at the Instituto de Física de Cantabria puzzling over an unusual cold spot in the WMAP data and trying to figure out what could have caused it. When the problem defied all explanations other than a defect, they brought their problem to Turok.
The research team then analyzed WMAP data and determined that the cold spot had the properties that would be expected if it had been caused by a cosmic texture.
“Now, here is an example where this exotic theory trumps more mundane ones,” said Baker.
"We're not certain this is a texture by any means. The probability that it's just a random fluctuation is about 1 percent. But what makes this so interesting is that there are a number of follow-up checks which can now be done. So the texture hypothesis is actually very testable," said Turok.
| <urn:uuid:7d8c6213-5ea6-4a0b-ad24-ca80cce734f6> | 3 | 1,575 | Content Listing | Science & Tech. | 38.348147 | 95,544,858
Radio dating rocks the dating game game show
by Tas Walker

A geologist works out the relative age of a rock by carefully studying where the rock is found in the field. The field relationships, as they are called, are of primary importance, and all radiometric dates are evaluated against them. Many rocks and organisms contain radioactive isotopes, such as U-235 and C-14.
Here he can see that some curved sedimentary rocks have been cut vertically by a sheet of volcanic rock called a dyke.

Half-life is the amount of time it takes for half of the parent isotopes to decay. As the isotopes decay, they give off particles from their nucleus and become a different isotope. The parent isotope is the original unstable isotope, and daughter isotopes are the stable product of the decay. In the first 5,730 years, the organism will lose half of its C-14 isotopes. In another 5,730 years, the organism will lose another half of the remaining C-14 isotopes. This process continues over time, with the organism losing half of the remaining C-14 isotopes each 5,730 years.
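The halving arithmetic described above reduces to a one-line formula. Here is a small Python sketch using the 5,730-year half-life of C-14 quoted in the text (the function name is mine):

```python
# Fraction of a parent isotope remaining after `years` of decay,
# given its half-life (5,730 years for carbon-14, as quoted above).

def fraction_remaining(years, half_life=5730.0):
    return 0.5 ** (years / half_life)

# One half-life leaves 1/2, two half-lives leave 1/4, and so on.
for n in range(4):
    print(n * 5730, fraction_remaining(n * 5730))
```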
Over time, radioactive isotopes change into stable isotopes by a process known as radioactive decay. | <urn:uuid:60045ab6-769e-4b49-90d4-9e8a5f03920e> | 3.875 | 254 | Knowledge Article | Science & Tech. | 45.320993 | 95,544,861 |
Diablo wind is the name to the hot, dry offshore wind from the northeast that typically occurs in north-central California, in particular, the San Francisco Bay Area, during the spring and fall. The same wind pattern also affects other parts of California's coastal ranges. The term was coined by National Weather Service forecasters (San Francisco Bay Area Weather Forecast Office) John Quadros and Jan Null shortly after the 1991 Oakland firestorm to distinguish it from the comparable, and more familiar, hot dry wind in Southern California known as the Santa Ana winds. In fact, in decades previous to the 1991 fire, the term "Santa Ana" was occasionally used as well for the Bay Area dry northeasterly wind, such as the one that was associated with the 1923 Berkeley Fire.
The name "Diablo wind" refers to the fact that the wind blows into the inner Bay Area from the direction of Mt. Diablo in adjacent Contra Costa County, and carries the fiery, romantic connotation of a term that translates to "devil wind". The Diablo winds are created by the combination of strong inland high pressure at the surface, strongly sinking air aloft, and lower pressure off the California coast. The air descending from aloft as well as from the Coast Ranges compresses as it sinks to sea level, where it warms as much as 20 °F (11 °C) and loses relative humidity.
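The quoted warming of up to 20 °F (11 °C) is consistent with the dry adiabatic lapse rate of roughly 9.8 °C per kilometre of descent. A Python sketch follows; the 1.1 km descent is an illustrative assumption, not a figure from this article.

```python
# Compressional (dry adiabatic) warming of unsaturated sinking air.
# The ~9.8 degC/km lapse rate is standard; the 1.1 km descent is an
# illustrative assumption, roughly the fall needed to produce the
# ~11 degC (20 degF) of warming mentioned above.

DRY_ADIABATIC_LAPSE_C_PER_KM = 9.8

def warming_c(descent_km, lapse=DRY_ADIABATIC_LAPSE_C_PER_KM):
    """Temperature gain (degC) of dry air sinking `descent_km` kilometres."""
    return lapse * descent_km

def c_to_f_delta(delta_c):
    """Convert a temperature *difference* from Celsius to Fahrenheit."""
    return delta_c * 9.0 / 5.0

dt_c = warming_c(1.1)      # about 10.8 degC
dt_f = c_to_f_delta(dt_c)  # about 19.4 degF
```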
Like the Santa Ana wind, which because of its source position drains surface air off the high deserts, the Diablo wind pattern is associated with areas of strongly sinking air aloft at jet stream levels and the development of high surface atmospheric pressure. Since the jet stream is mostly absent in the summer, Diablo winds begin to occur in the fall.
Because of the elevation of the coastal ranges in north-central California, the thermodynamic structure that occurs with the Diablo wind pattern favors the development of strong ridge-top and lee-side downslope winds associated with a phenomenon called the "hydraulic jump". While hydraulic jumps can occur with Santa Ana winds, the same thermodynamic structure that occurs with them typically favors "gap" flow more frequently. Thus, Santa Anas are strongest in canyons, whereas a Diablo wind is first noted and blows strongest atop and on the western slopes of the various mountain peaks and ridges around the Bay Area, although channeling by canyons is also significant.
In both cases, as the air sinks, it heats up by compression and its relative humidity drops. This warming is in addition to, and usually greater than, any contact heating that occurs as the air stream crosses the Central Valley and the Diablo Valley. This is the reverse of the normal summertime weather pattern, in which an area of low pressure (called the California Thermal Low) rather than high pressure lies east of the Bay Area, drawing in cooler, more humid air from the ocean. The dry offshore wind, driven by the offshore pressure gradient, can become quite strong, with gusts reaching speeds of 40 miles per hour (64 km/h) or higher, particularly along and in the lee of the ridges of the Coast Range. This effect is especially dangerous with respect to wildfires, as it can enhance the updraft generated by the heat in such fires.
While the Diablo Wind Pattern occurs in both the spring and fall, it is most dangerous in the fall, when vegetation is at its driest. The same pattern can occur during winter, but the air masses drawn to the coastline are quite cold, despite the compressional warming.
- Santa Ana wind
- Norte (wind)
- Sundowner wind
- Hydraulic jump
- Dine's compensation
- Thermal low
- Relative humidity
- Monteverdi, John (1973). "The Santa Ana weather type and extreme fire hazard in the Oakland-Berkeley Hills". Weatherwise. 26: 118-121.
- extract from the Report on the Berkeley, California Conflagration of September 17, 1923, issued by the National Board of Fire Underwriters’ Committee on Fire Prevention and Engineering Standards, reprinted in the Virtual Museum of the City of San Francisco
- WEATHER CORNER, San Jose Mercury News, Jan Null, October 26, 1999
- Durran, D. (1990). "Mountain Waves and Downslope Winds". Meteorological Monographs. 23: 59–83.
- Gabersek, S.; Durran, D. (2006). "The dynamics of gap flow over idealized topography. Part II: Effects of rotation and surface friction". Journal of Atmospheric Science. 26: 2720–2739. | <urn:uuid:a4e035b1-9330-4282-b2f4-0c64c2967ae6> | 3.328125 | 945 | Knowledge Article | Science & Tech. | 47.742591 | 95,544,881 |
User Profile: Dr. Róisín Commane
Who uses NASA Earth science data? Dr. Róisín Commane, to study the effects of terrestrial pollution on the atmosphere’s chemical composition.
Dr. Róisín Commane, Research Associate, Harvard School of Engineering and Applied Sciences, Cambridge, MA (Note: Starting in July 2018 Dr. Commane will be an Assistant Professor, Columbia University, New York, NY, and affiliated with Columbia University’s Lamont-Doherty Earth Observatory)
Research interests: Using airborne gas concentration data, atmospheric transport models, and ecosystem models to understand surface processes affecting atmospheric chemistry. This includes measuring carbon dioxide (CO2) and methane (CH4) from Arctic ecosystems and measuring continental pollution (from, for example, fires and aerosols) over remote oceans.
Research highlights: Dr. Róisín Commane likely has more frequent flyer miles than you. As part of the joint NASA/Harvard University Atmospheric Tomography Mission (ATom), Dr. Commane just completed her fourth series of global flights aboard NASA’s four-engine DC-8 research aircraft. Flying as high as 40,000 feet to skimming the surface at 500 feet (check out the amazing videos of low-level flights over Arctic sea ice and the open ocean on the ATom Twitter feed), ATom instruments collected data about chemical components of the atmosphere between 85° north and south latitude.
To say these flights were frill-free might be an understatement. Flights often lasted 10 hours or longer and the aircraft, which was built in 1969 and acquired by NASA in 1985, is a flying laboratory with instruments receiving priority over people (or soundproofing—headsets are recommended to “save your ears,” according to the ATom daily schedule mission planning page). A typical “day” for Dr. Commane during the recently-completed fourth and final ATom deployment might begin well before local sunrise for flight preparations and end after sunset several time zones later, a schedule that was repeated throughout the almost one-month series of flights conducted between April 24 and May 21, 2018.
The “T” in “ATom” stands for tomography. Tomography is a technique for imaging by sections or sectioning using any kind of penetrating wave (magnetic resonance imaging, or MRI, is a type of tomography that uses strong magnetic fields and radio waves to create high resolution images of soft tissue in the human body that can be looked at slice by slice). ATom uses 24 aircraft-mounted instruments to sample slices of the atmosphere and analyze the chemical composition of these slices. These data are used to study the impact of human-produced air pollution on greenhouse gasses and on chemically reactive gasses in the atmosphere, especially over remote ocean areas. Data from ATom are helping to validate and improve satellite and model atmospheric data as well as the algorithms used to produce these data. Between the summer of 2016 and this past spring, Dr. Commane participated in ATom flights that sampled the atmosphere in all seasons.
Dr. Commane is a co-investigator for the Harvard University-develped Quantum Cascade Laser System (QCLS) instrument. The QCLS measures atmospheric concentrations of carbon monoxide (CO), methane (CH4), nitrous oxide (N2O), and carbon dioxide (CO2). She uses a range of tools, including airborne gas concentration data, atmospheric transport models, and ecosystem models, to develop a better understanding of processes occurring on Earth’s surface that affect atmospheric chemistry. She is particularly interested in the different chemical signatures created by fires occurring in Africa and how these fires affect the chemical composition of the atmosphere over the Atlantic Ocean. She also is examining how clouds in the Arctic can hide the chemical signature of fires and make them more difficult to detect.
ATom is closely linked to satellite missions designed to measure atmospheric chemistry, and provides unique complementary data for missions including NASA’s Orbiting Carbon Observatory-2 (OCO-2; launched in 2014), the Global Ozone Monitoring Experiment–2 (GOME-2) instrument aboard the European Space Agency’s (ESA) MetOp-A and MetOp-B satellites, the Tropospheric Monitoring Instrument (TROPOMI) aboard the ESA’s Copernicus Sentinel-5 Precursor satellite, and the Japan Aerospace Exploration Agency’s Greenhouse Gases Observing Satellite (GOSAT).
ATom researchers, in turn, use satellite data to extend the data collected from their airborne observations to a global scale and deliver a single, large-scale, contiguous in situ dataset that can be used for evaluating and improving computer models designed to forecast atmospheric conditions. One such model is NASA’s Goddard Earth Observing System Model, Version 5 (GEOS-5), which is located at NASA’s Goddard Space Flight Center in Greenbelt, MD.
Much of the ATom data collected by Dr. Commane and her colleagues are being archived at NASA's Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). ORNL DAAC is the NASA Earth Observing System Data and Information System (EOSDIS) DAAC responsible for archiving and distributing NASA Earth observing data related to biogeochemical dynamics, ecology, and environmental processes.
While Dr. Commane and her ATom research colleagues are still finalizing mission data and digging into science questions, she notes that they have been really impressed at how well the GEOS-5 atmospheric forecast has predicted pollution. In looking specifically at comparisons between GEOS-5 model predictions and observed concentrations of atmospheric CO, for example, she points out that some events, like Siberian forest fires, were completely missed by the model due to Arctic clouds masking the fires. Overall, though, she and her colleagues found that the model accurately predicted both the location and magnitude of atmospheric pollution plumes.
The real strength of ATom, observes Dr. Commane, will be when all the mission data are final and complete, giving the research community data representing all four seasons that can be used to evaluate and improve atmospheric chemistry models on a global scale. For a frequent flyer like Dr. Commane, these data are a price worth paying for her long days in the air.
Representative data products used:
- Data from ORNL DAAC:
Atmospheric Tomography Mission main dataset page
- ATom: Merged Atmospheric Chemistry, Trace Gases, and Aerosols (DOI: 10.3334/ORNLDAAC/1581); Dr. Commane contributed CO2, CH4, CO, and N2O data for this collection
- Level 2 Atmospheric CO2, CO, and CH4 Concentrations (DOI: 10.3334/ORNLDAAC/1419) from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE); Dr. Commane contributed airborne platform CO2, CH4, and CO data for this collection
- CARVE Level 4 Gridded Footprints from the Weather Research and Forecasting (WRF) Stochastic Time-Inverted Lagrangian Transport (STILT) model (DOI: 10.3334/ORNLDAAC/1431)
- Arctic-Boreal Vulnerability Experiment (ABoVE) airborne CO2 and CH4 concentrations
- Atmospheric Tomography Mission main dataset page
- Moderate Resolution Imaging Spectroradiometer (MODIS) Snow Cover 8-Day Level 3 Global 500m Grid, Version 6 from NASA’s Aqua (MYD10A2; DOI: 10.5067/MODIS/MYD10A2.006) and Terra (MOD10A2; DOI: 10.5067/MODIS/MOD10A2.006) Earth observing satellites; available through NASA’s National Snow and Ice Data Center Distributed Active Archive Center (NSIDC DAAC)
- CO total column Measurements Of Pollution In The Troposphere (MOPITT) data; available through the Atmospheric Science Data Center (ASDC) at NASA’s Langley Research Center in Hampton, VA
- GEOS-5 model Forward Processing (FP) CO fields; available through NASA’s Global Modeling and Assimilation Office (GMAO) at NASA’s Goddard Space Flight Center
Read about the research:
NASA ATom website: https://www.nasa.gov/content/earth-expeditions-atom
Strode, S.A., Liu, J., Lait, L., Commane, R., Daube, B., Wofsy, S., Conaty, A., Newman, P. & Prather, M. (in review, 2018). “Forecasting Carbon Monoxide on a Global Scale for the ATom-1 Aircraft Mission: Insights from Airborne and Satellite Observations and Modeling.” Atmospheric Chemistry and Physics, Discussion Papers [doi: 10.5194/acp-2018-150].
Commane, R., Lindaas, J., Benmergui, J., Luus, K.A., Chang, R.Y.-W., Daube, B.C., Euskirchen, E.S., Henderson, J.M., Karion, A., Miller, J.B., Miller, S.M., Parazoo, N.C., Randerson, J.T., Sweeney, C., Tans, P., Thoning, K., Veraverbeke, S., Miller, C.E. & Wofsy, S.C. (2017). “Carbon dioxide sources from Alaska driven by increasing early winter respiration from Arctic tundra.” Proceedings of the National Academy of Sciences, 114(21): 5361-5366 [doi: 10.1073/pnas.1618567114].
Last Updated: Jul 17, 2018 at 3:22 PM EDT | <urn:uuid:fbd884f9-b39a-4648-a74f-d4ddddbfbcb7> | 3.171875 | 2,082 | About (Org.) | Science & Tech. | 41.372133 | 95,544,883 |
Altocumulus castellanus clouds take their name from their resemblance to the turrets of castles and are often a warning of thunderstorms.
Height of base: 7,000–18,000 ft.
Shape: A collection of small individual clouds, sometimes with “castle” towers proceeding from the top.
Latin: altum = height; cumulus = heap; castellanus = like a castle
Precipitation: Usually a few large droplets of rain, often evaporating before they reach the ground.
What are Castellanus clouds?
Altocumulus Castellanus clouds take their name from a resemblance to the turrets of castles and are often a warning of thunderstorms.
How do Castellanus clouds form?
Like other cumulus clouds, Castellanus clouds are caused by unstable air heated from below rising rapidly, causing water droplets to condense. The difference is that, whereas cumulus or cumulonimbus clouds are triggered by instability near the surface and heat from the sea or the ground, Altocumulus Castellanus clouds occur when the instability only starts much higher up. Although the cloud shapes look small to the eye, this is only because we see them from a great distance as they are so high up.
What weather is associated with Castellanus clouds?
Castellanus clouds are associated with lightning, often jumping from cloud to cloud without getting anywhere near the ground. Seeing Castellanus clouds is often a sign that Cumulonimbus clouds are on their way, bringing heavy showers, strong gusty winds, and thunder and lightning.
How do we categorize Castellanus clouds?
Castellanus are a subset of Altocumulus clouds. | <urn:uuid:a6de3ca3-b14e-478a-bf55-868aba637b3b> | 3.5625 | 360 | Knowledge Article | Science & Tech. | 38.73186 | 95,544,885 |
From Event: SPIE Optical Engineering + Applications, 2016
An accurate model and parameterization of fog is needed to increase the reliability and usefulness of electro-optical systems in all relevant environments. Current models vary widely in their ability to accurately predict the size distribution and subsequent optical properties of fog. The Advanced Navy Aerosol Model (ANAM), developed to model the distribution of aerosols in the maritime environment, does not currently include a model for fog. One of the more prevalent methods for modeling particle size spectra consists of fitting a modified gamma function to fog measurement data. This limits the fog distribution to a single mode. Here we establish an empirical model for predicting complicated multimodal fog droplet size spectra using machine learning techniques. This is accomplished through careful measurements of fog in a controlled laboratory environment and measuring fog particle size distributions during outdoor fog events.
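The "modified gamma function" the abstract refers to is commonly written in the Deirmendjian form n(r) = a·r^α·exp(−b·r^γ); because that form has a single peak, it cannot represent multimodal spectra, which motivates the authors' approach. A Python sketch with illustrative (not fitted) parameters:

```python
import math

# Modified gamma droplet-size distribution, n(r) = a * r**alpha * exp(-b * r**gamma),
# the single-mode form commonly fitted to fog spectra. Parameter values
# below are illustrative assumptions, not taken from the paper.

def modified_gamma(r, a, alpha, b, gamma):
    return a * r**alpha * math.exp(-b * r**gamma)

def mode_radius(alpha, b, gamma):
    """Radius where n(r) peaks: setting dn/dr = 0 gives r = (alpha/(b*gamma))**(1/gamma)."""
    return (alpha / (b * gamma)) ** (1.0 / gamma)

# One fitted mode yields a single peak; the multimodal spectra the authors
# measured are what pushed them toward a machine-learning model instead.
r_m = mode_radius(alpha=3.0, b=0.3, gamma=1.0)  # about 10.0 (micrometres, say)
```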
Joshua J. Rudiger, Kevin Book, Brooke Baker, John Stephen deGrassie, and Stephen Hammel, "A model for predicting fog aerosol size distributions," Proc. SPIE 9979, Laser Communication and Propagation through the Atmosphere and Oceans V, 99790V (Presented at SPIE Optical Engineering + Applications: August 31, 2016; Published: 19 September 2016); https://doi.org/10.1117/12.2238279.
What Makes the Unique ‘Scots-pine’ Smell? — Chromatography Explores
Jun 22 2018
Pine is a commonly used wood for many different applications. Its mechanical properties - good elasticity and strength - enhance its use as a building material, and its decorative appearance means it is used for furniture and other household items. As a conifer, it is easy to grow in many parts of the world, from south-east Asia to North America and Europe.
Another favourable property of pine trees is their odour, variously described as pleasant, natural and fresh. The smell of pine, alongside its decorative qualities, increases its appeal as a material for toys and decorative building work. It is these odour properties that are captured in pine essential oils, which are used in natural household cleaners, disinfectants and air fresheners. The aroma of pine is also said to relax the body and mind: research has shown that it reduces stress markers in animal studies. But what is it that makes pine smell so piney?
Woody odours — unknown sources?
In Germany, Pinus sylvestris L. is one of the most common trees and finds its way into many homes as the source of many wood products. So, a team of German researchers decided to find out what components cause the unique pine smell of Pinus sylvestris L. or Scots pine. In a paper published online in Nature Scientific Reports — Resolving the smell of wood - identification of odour-active compounds in Scots pine (Pinus sylvestris L.) — they set out to demystify the scent of Scots pine.
The team report that there are not many studies on wood odorants, and those that have been carried out focus on the relevance of wood odorants for alcoholic drinks. They have previously identified the main odorants in cedar wood — the wood that gives high quality pencils their unique smell. Now they decided to turn to Scots pine.
Smelling the wood with chromatography
They started out by getting wood samples rated in terms of pleasantness and intensity of pine aroma. The underlying active substances behind these descriptions were then analysed using gas chromatography olfactometry (GC-O) that allowed the team to detect 44 odour active compounds. From these compounds, 39 were identified using a combination of gas chromatography-mass spectrometry/olfactometry (GC-MS/O) and two-dimensional gas chromatography-mass spectrometry/olfactometry (2D-GC-MS/O). The use of multi-dimensional gas chromatography is discussed in the article, Comprehensive, Non-Target Characterisation of Blinded Environmental Exposome Standards Using GCxGC and High Resolution Time-of-Flight Mass Spectrometry.
The team were able to show that there was close agreement between the sensory and gas chromatography results, so certain compounds could be related to certain aroma characteristics like resin-like, cardboard-like and grassy. They also reported 11 substances for the first time as odour-active components in wood including heptanoic acid, γ-octalactone, δ-nonalactone and (E,Z,Z)-trideca-2,4,7-trienal.
Which of these will you find in your pine disinfectant?
The dangers that we face from geohazards appear to be getting worse, especially with the impact of increasing population and global climate change. This collection of papers illustrates how remote sensing technologies - measuring, mapping and monitoring the Earth's surface from aircraft or satellites - can help us to rapidly detect and better manage geohazards. The hazardous terrains examined include areas of landslides, flooding, erosion, contaminated land, shrink-swell clays, subsidence, seismic activity and volcanic landforms. Key aspects of remote sensing are introduced, making this a book that can easily be read by those who are unfamiliar with remote sensing. The featured remote sensing systems include aerial photography and photogrammetry, thermal scanning, hyperspectral sensors, airborne laser altimetry (LiDAR), radar interferometry and multispectral satellites (Landsat, ASTER). Related technologies and methodologies, such as the processing of Digital Elevation Models and data analysis using Geographical Information Systems, are also discussed.
Monitoring rock density
Of all known methods of monitoring the density of coarse detrital material under field conditions, the most effective is to fill a pit with water, using a polyethylene film.
In determining the density of the rock by this method, the sample volume must be not less than 5 dmax, and the polyethylene film should be 0.15–0.20 mm thick.
The density determined by the film method is somewhat (2%) higher than the actual density at the measurement point. The amount of overestimation can be allowed for by using a conversion coefficient of 0.98.
If the method is used correctly, the maximum error of individual determinations will not exceed 1.5%, the mean value of this error over a large number of determinations is 0.5%.
The recommended pit shape for filling with water by means of polyethylene film is an inverted hexagonal truncated pyramid, with a top diameter of 160 cm and slopes of 1∶0.2. The volume of the pit should be not less than 2 m3.
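As a minimal illustration of the stated correction, here is a hypothetical helper that applies the 0.98 conversion coefficient and reports the ±1.5% single-determination error band (function name and sample density are my own, not from the paper):

```python
# Apply the paper's 0.98 coefficient to a film-method density reading and
# return the corrected value with the stated +/-1.5% maximum-error bounds.
def corrected_density(measured_density, coefficient=0.98, max_error=0.015):
    rho = coefficient * measured_density
    return rho, (rho * (1.0 - max_error), rho * (1.0 + max_error))

# Example reading in kg/m^3 for a coarse detrital fill (illustrative value)
rho, (lo, hi) = corrected_density(1850.0)
print(rho, lo, hi)   # ~1813 kg/m^3 with its +/-1.5% bounds
```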
Keywords: hexagonal pyramid, measurement point, power engineering, renewable energy source
doi:10.1038/nindia.2018.46 Published online 23 April 2018
Physicists have devised a technique that can help measure the viscosity of small volumes of different fluids1. This method could potentially be used to measure the viscosity of blood and even biological cells in which viscosity changes because of diseases.
To measure the viscosity of small volumes, the physicists from the Indian Institute of Science Education and Research (IISER) in Kolkata, India, led by Ayan Banerjee, immersed two polymer particles in water or in a mixture of water and glycerol. The particles, smaller than the diameter of a biological cell, were then trapped using laser light. A distance of about a hundredth of the thickness of a human hair separated the particles.
One of the particles, designated the control, was exposed to laser light, which made it vibrate at different frequencies. The control particle then induced vibration in the other particle, known as the probe. When the control particle vibrates at a particular frequency, the probe particle's vibration reaches its maximum value.
Such interactions of the two particles depend on the viscosity of the fluid in which they are suspended and optically trapped. This, in turn, helps measure the viscosity of the fluid.
Since the particles are smaller than a biological cell, it is possible to measure the viscosity of a cell by inserting them inside a cell, says co-researcher Subhajit Paul.
In the future, this technique can help detect cellular viscosity changes caused by blood disorders such as sickle-cell anemia and even viral infections, says lead researcher Banerjee.
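The paper extracts viscosity from the coupled-bead response, but the final step in any such microrheology scheme rests on Stokes' law, drag = 6*pi*eta*r for a sphere. A sketch of that last conversion (bead radius and drag value below are illustrative, not from the study):

```python
import math

# Once a drag coefficient has been extracted from the resonance data,
# Stokes' law  drag = 6*pi*eta*r  inverts to give the fluid viscosity.
def viscosity_from_drag(drag, bead_radius):
    return drag / (6.0 * math.pi * bead_radius)

bead_radius = 1.5e-6   # m, a bead smaller than a typical biological cell
# Drag a bead of this size would feel in water (eta = 1.0e-3 Pa*s):
drag_water = 6.0 * math.pi * 1.0e-3 * bead_radius

eta = viscosity_from_drag(drag_water, bead_radius)
print(eta)   # recovers ~1.0e-3 Pa*s, the viscosity of water near 20 C
```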
1. Paul, S. et al. Two-point active microrheology in a viscous medium exploiting a motional resonance excited in dual-trap optical tweezers. Phys. Rev. E. 97, 042606 (2018) | <urn:uuid:4b18b326-86c9-42ce-ab6b-491af38e8bde> | 3.65625 | 405 | Truncated | Science & Tech. | 43.2851 | 95,544,949 |
Could India's vital monsoons dry up if China manipulates the weather to avoid drought in northern China? © Arindam Dey/Getty Images
Climate change is a problem in desperate need of a solution. According to the authoritative Climate Action Tracker, even if all nations honour their pledges to cut their greenhouse gas emissions, the globe will still warm by around 3.2°C by 2100 – with catastrophic consequences for humanity and the animal kingdom.
If cutting greenhouse gas emissions isn’t enough, is it time for a plan B? Recent times have seen a surge of interest in geoengineering: China has recently embarked on a substantial research plan, while in the US, Prof David Keith of Harvard University is planning to launch a high-altitude balloon this year to test the feasibility of spraying reflective particles into the stratosphere. Meanwhile, other researchers are looking at the possibility of increasing the brightness of marine clouds to reflect more sunlight back into space.
But there are a number of risks, and not just because we’re unsure about how effective these interventions would be. There are fears that one country’s efforts to solve its climate problem could inadvertently mess up the weather elsewhere, creating a new source of political tension. And ultimately, this leads to a worrying question: could we be looking at the dawn of a new kind of war – one fuelled by a battle for dominance over our planet’s climate system?
The problem with geoengineering
Geoengineering is defined as a deliberate, large-scale intervention in the climate system, and schemes come in two varieties. The first type aims to remove carbon dioxide from the atmosphere. This can be done by capturing it from the air using natural or artificial means; making biochar (a type of charcoal) from vegetation waste; or adding lime to the oceans to reduce their acidity and therefore maintain their ability to absorb carbon dioxide from the atmosphere. The greatest hurdle for these schemes lies in finding somewhere to permanently store the huge quantities of carbon. The deep ocean offers one possible solution, but we’re still a long way from a feasible method of doing this.
The second kind of geoengineering scheme is known as solar radiation management or albedo modification. These techniques look to reflect a small amount of sunlight away from the planet to reduce warming. Some of these proposals are relatively benign, but also pretty ineffective. The technology receiving most attention – and the one most likely to be deployed because it’s cheap and feasible – is known as sulphate aerosol spraying.
The idea is to spray sulphur dioxide or sulphuric acid into the stratosphere or upper atmosphere to form tiny particles that reflect an extra 1 to 3 per cent of incoming solar radiation back into space, thereby cooling the planet in the way that large volcanic eruptions are known to do.
In effect, humans would be installing a radiative shield between the Earth and the Sun: one that could be adjusted by those who control it to regulate the temperature of the planet. The models indicate that if we reduced the amount of sunlight reaching the planet, the Earth would cool fairly…
Citations: 15 publications cited this model
The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) is a NASA project to map disturbance, regrowth, and permanent forest conversion across the continent (Masek et al., 2006). The LEDAPS processes Landsat imagery to surface reflectance, using atmospheric correction routines developed for the Terra MODIS instrument (Vermote et al., 1997). The original LEDAPS software was developed at NASA GSFC by Eric Vermote, Nazmi Saleous, Jonathan Kutler, and Robert Wolfe with support from the NASA Terrestrial Ecology program (PI: Jeff Masek). This version was adapted by Dr. Feng Gao. It is a stand-alone version of Landsat 1T calibration, TOA reflectance, cloud masking, and atmospheric correction preprocessing chain. This version is designed to work with the standard web enabled EROS L1T product for Landsat-5 and Landsat-7.
1. LEDAPS Landsat Calibration, Reflectance, Atmospheric Correction Preprocessing Code (2012-05-07)
2. LEDAPS Calibration, Reflectance, Atmospheric Correction Preprocessing Code, Version 2 (2013-03-01)
When do marginal seas and topographic sills modify the ocean density structure?
We ask what effect marginal seas at high latitudes have on the abyssal densities and stratification of the oceans. Although marginal seas are not necessary for the formation of dense abyssal waters, topographic sills tend to restrict exchange flows and increase density differences. Laboratory experiments with a steady state large-scale overturning circulation, forced by a gradient in surface temperatures or heat fluxes, show that a marginal sea and topographic sill influence the abyssal density...
Collections: ANU Research Publications
Source: Journal of Geophysical Research
Tide−Tsunami Interaction in Columbia River, as Implied by Historical Data and Numerical Simulations
The East Japan tsunami of 11 March 2011 propagated more than 100 km upstream in the Columbia River. Visual analysis of its records along the river suggests that the tsunami propagation was strongly affected by tidal conditions. A numerical model of the lower Columbia River populated with tides and a downstream current was developed. Simulations of the East Japan tsunami propagating up the tidal river reproduced the observed features of tsunami waveform transformation, which did not emerge in simulations of the same wave propagating in a quiescent-state river. This allows us to clearly attribute those features to nonlinear interaction with the tidal river environment. The simulation also points to possible amplification of a tsunami wave crest propagating right after the high tide, previously deduced from the recordings of the 1964 Alaska tsunami in the river.
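One simple first-order ingredient of tide-tsunami interaction is that a long (shallow-water) wave travels at c = sqrt(g*h), so the tidal stage changes the effective depth and hence the crest's speed upriver. This is only a sketch of the mechanism, not the paper's full numerical model; the depth and tide values are illustrative:

```python
import math

# Long-wave (shallow-water) phase speed: c = sqrt(g * total depth).
# The tidal stage adds to or subtracts from the channel depth, so the same
# tsunami crest moves at different speeds at different points in the tide.
def long_wave_speed(depth, tidal_stage=0.0, g=9.81):
    return math.sqrt(g * (depth + tidal_stage))

depth = 15.0   # m, a nominal river-channel depth (illustrative)
c_low = long_wave_speed(depth, -1.0)    # 1 m below mean level (ebb)
c_high = long_wave_speed(depth, +1.0)   # 1 m above mean level (flood)
print(c_low, c_high)   # the crest runs faster on the deeper, high-tide water
```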
Keywords: tide, tsunami, nonlinear interaction, numerical modeling, Columbia River
This work was supported by the Joint Institute for the Study of the Atmosphere and Ocean (JISAO, University of Washington) under NOAA Cooperative Agreement No. NA10OAR4320148, Contribution #1889 (JISAO), #3775 (Pacific Marine Environmental Laboratory, NOAA). The study was originally motivated by the Workshop on Tsunami Hydrodynamics in a Large River held at Oregon State University, Corvallis, OR, 15–16 August 2011 (http://www.isec.nacse.org/workshop/2011_orst/agenda.html). Columbia River bathymetry was provided by Dr. Joseph Zhang (Oregon Health and Science University) and Prof. Harry Yeh (Oregon State University) as part of the workshop materials. Beaver tide gauge record is courtesy of the U.S. Geological Survey, Oregon Water Science Center (http://www.or.water.usgs.gov/). Gauge records at other Columbia River locations used in this work have been obtained from the NOAA/NOS/CO-OPS public website (http://www.tidesandcurrents.noaa.gov/). DART record was obtained from NOAA’s National Data Buoy Center public website (http://www.ndbc.noaa.gov/dart.shtml). Sincere thanks to Sandra Bigley (PMEL), Jean Newman (PMEL), and Stewart Allen (Centre for Australian Weather and Climate Research) for language-editing the manuscript.
From David Sischo on Facebook (with accompanying photo):
“Have you ever held an entire species in your hand? Its hard to describe how this makes one feel. Exhilarated, filled with deep sadness at the state of the world, terrified that these beings are in our care, but also relieved that we intervened before this species met oblivion. Around a year ago the last six Achatinella fulgens, an endangered tree snail species once common in the gulches and ridges surrounding Honolulu, were brought to our DLNR captive rearing facility. This is after the habitat this last population occupied slumped down the mountain leaving all of the trees knocked down. From six we now have 22! It feels as though we are trying to start a fire that was left unattended and only a tiny ember remains. I guess we’ll just keep fanning the flame.” #racingextinction #extinction #hawaii #oahu #honolulu #conservation#kahuli #nature #wildlife #inourhands #fantheflame
To survive in this harsh environment, the flies perform a feat that Mark Twain described with great fascination in 1872. “You can hold them under water as long as you please—they do not mind it—they are only proud of it,” he wrote in a passage of his book Roughing It. “When you let them go, they pop up to the surface as dry as a patent office report.”
This is the best description ever, and makes me want to go into taxonomy.
Read more about these aberrations here!
If you see a hagfish, don't anger it. Under attack, these bottom scavengers and hunters release thick, clear slime in astonishing quantities. Potential predators back off quickly when presented with the slime, because it clogs their gills. The hagfish escape their own mucus by tying their bodies into a knot and scraping it off. (A highway in Oregon was harder to clean up after a truck full of hagfish crashed there last year.)
However, it turns out that this mucus is a precious resource for a hagfish. After sliming a predator, the fish can take nearly a month to refill its slime glands. So leave the poor slime monsters alone.
Read about it here.
The spectacled (or Andean) bear – which turns out to be more common around Machu Picchu than previously believed – is the only South American bear, found in the ranges of the Andes from Venezuela in the north to Peru and Bolivia in the south.
But the species isn’t unique just for being the only bruin on a huge continent: it’s also the sole remaining representative of a bear family that once encompassed some of the all-out most formidable mammals ever to exist.
Want to read more? Check it out here.
The Mexican cavefish have no eyes, little pigment, and require about two hours of sleep per night to survive.
Imagine what you could do with those extra hours! So we should ask the cavefish: how do they do it?
Read more about that very research here.
And I for one am wondering how the hell they do it! I just moved to a bigger city and I love it, but I’m finding my time a little stretched thin.
And I have opposable thumbs, and grocery stores. I can’t imagine how wildlife are doing it. Are they better at adulting than I am?
Read about how mountain lions are handling it here. | <urn:uuid:aabe6a00-24eb-488d-8e1c-c7e2c6112749> | 2.703125 | 748 | Personal Blog | Science & Tech. | 57.036549 | 95,544,989 |
A UK, Canadian and Italian study has provided what researchers believe is the first observational evidence that our universe could be a vast and complex hologram.
Scientists behind a theory that the speed of light is variable - and not constant as Einstein suggested - have made a prediction that could be tested.
The universe is not spinning or stretched in any particular direction, according to the most stringent test yet.
Five years ago, the Nobel Prize in Physics was awarded to three astronomers for their discovery, in the late 1990s, that the universe is expanding at an accelerating pace.
A group of three researchers from KEK, Shizuoka University and Osaka University has for the first time revealed the way our universe was born with 3 spatial dimensions from 10-dimensional superstring theory in which spacetime ...
Two possiblities exist: either the Universe is finite and has a size, or it's infinite and goes on forever. Both possibilities have mind-bending implications.
Radio waves, microwaves and even light itself are all made of electric and magnetic fields. The classical theory of electromagnetism was completed in the 1860s by James Clerk Maxwell. At the time, Maxwell's theory was revolutionary, ... | <urn:uuid:ff2bde01-80c4-4587-8f22-526d5a7036dd> | 3.515625 | 242 | Content Listing | Science & Tech. | 39.24184 | 95,545,043 |
Radiocarbon dating dinosaur bones
As ICR scientist Brian Thomas points out, "Because radiocarbon decays relatively quickly, fossils that are even 100,000 years old should have virtually no radiocarbon left in them.
But they do." Using the services of five different commercial and academic laboratories, the research team tested seven dinosaur bones and detected carbon-14 in them all.
It also has some applications in geology; its importance in dating organic materials cannot be overstated.
In 1979, Desmond Clark said of the method “we would still be foundering in a sea of imprecisions sometime bred of inspired guesswork but more often of imaginative speculation” (3).
The other two carbon isotopes, carbon-12 and carbon-13, are more common than carbon-14 in the atmosphere, but their levels increase with the burning of fossil fuels, making them less reliable for study (2); carbon-14 also increases, but its relative rarity means its increase is negligible. After this point, other absolute dating methods may be used.
Today, the radiocarbon dating method is used extensively in environmental sciences and in human sciences such as archaeology and anthropology.
Research has been ongoing since the 1960s to determine what the proportion of carbon-14 in the atmosphere has been over the past fifty thousand years.
The resulting data, in the form of a calibration curve, is now used to convert a given measurement of radiocarbon in a sample into an estimate of the sample's calendar age.
A spring 2015 journal article presents never-before-seen carbon dates for 14 different fossils, including a Triceratops and other dinosaurs.

Other corrections must be made to account for the proportion of carbon-14 throughout the biosphere (reservoir effects). Additional complications come from the burning of fossil fuels such as coal and oil, and from the above-ground nuclear tests done in the 1950s and 1960s.

Evolutionists were so sure that dinosaur fossils are too old to contain any carbon-14, they never even bothered to check. Or perhaps they were afraid of what they would find.
The radiocarbon dating method is based on the fact that radiocarbon is constantly being created in the atmosphere by the interaction of cosmic rays with atmospheric nitrogen. | <urn:uuid:af27128e-0502-4176-bbab-ce1b19b9074a> | 4.3125 | 452 | Knowledge Article | Science & Tech. | 33.950989 | 95,545,050 |
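The quoted claim that material 100,000 years old should retain virtually no radiocarbon follows directly from the exponential decay law; a quick check using the standard 5,730-year half-life:

```python
HALF_LIFE_C14 = 5730.0   # years

def c14_fraction_remaining(age_years):
    """Fraction of the original carbon-14 left after age_years of decay."""
    return 0.5 ** (age_years / HALF_LIFE_C14)

# 100,000 years is about 17.5 half-lives:
print(c14_fraction_remaining(100_000))   # ~5.6e-6, effectively none
print(c14_fraction_remaining(50_000))    # ~2.4e-3, near the practical limit
```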
Given a Nanowire with cross sectional dimensions of 10 nm x 10 nm, what momentum would an electron in the ground state need in order to possess the same energy as a stationary electron
(zero momentum) in the n=1,2 state?
I need a step-by-step solution, please.
To solve this problem, we need to solve the time-independent Schrodinger equation for an electron in a nanowire. This equation says
(1) H psi(x, y, z) = -hbar^2/2m laplacian(psi(x,y,z)) + V(x,y,z) psi(x,y,z) = E psi(x,y,z),
where m is the mass of the electron, H = -hbar^2/2m laplacian + V is the Hamiltonian of the electron, V(x,y,z) = 0 is its potential (which is zero in this case because the electron is free inside the nanowire), psi(x,y,z) is its wavefunction, and E is its energy. We solve (1) by separation of variables. We let
psi(x,y,z) = X(x) Y(y) Z(z)
where the z-axis points along the direction of the nanowire and the x and y axes point along the sides of the square cross section, with x = y = 0 at one of the ...
We solve the Schrodinger equation along a nanowire with a square cross section of given dimensions to determine the speed of an electron in the 1,1 mode which would give it the same energy as a stationary electron in the 1,2 mode. | <urn:uuid:5a6ef90b-068b-43c8-a422-eb9718e6478e> | 2.765625 | 394 | Q&A Forum | Science & Tech. | 73.51486 | 95,545,112 |
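The separation-of-variables setup above can be finished numerically. Assuming hard-wall (infinite square well) confinement in the 10 nm x 10 nm cross section, the transverse energies are E(nx, ny) = (hbar*pi)^2 (nx^2 + ny^2) / (2 m L^2), and matching the energy of a (1,1) electron with axial momentum p to the stationary (1,2) level gives p^2/2m = E(1,2) - E(1,1), i.e. p = sqrt(3)*pi*hbar/L:

```python
import math

hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
L    = 10e-9             # m, side of the square cross section

# Infinite-square-well energy of transverse mode (nx, ny):
#     E(nx, ny) = (hbar*pi)**2 * (nx**2 + ny**2) / (2*m*L**2)
def transverse_energy(nx, ny):
    return (hbar * math.pi)**2 * (nx**2 + ny**2) / (2.0 * m_e * L**2)

# Match E(1,1) + p^2/2m to the stationary (1,2) level:
p = math.sqrt(2.0 * m_e * (transverse_energy(1, 2) - transverse_energy(1, 1)))
print(p)        # ~5.7e-26 kg*m/s, i.e. p = sqrt(3)*pi*hbar/L
print(p / m_e)  # ~6.3e4 m/s equivalent speed
```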
Named after: Gérard Desargues
Automorphisms: 240 (S5 × Z/2Z)
In the mathematical field of graph theory, the Desargues graph is a distance-transitive cubic graph with 20 vertices and 30 edges. It is named after Girard Desargues, arises from several different combinatorial constructions, has a high level of symmetry, is the only known non-planar cubic partial cube, and has been applied in chemical databases.
There are several different ways of constructing the Desargues graph:
- It is the generalized Petersen graph G(10, 3). To form the Desargues graph in this way, connect ten of the vertices into a regular decagon, and connect the other ten vertices into a ten-pointed star that connects pairs of vertices at distance three in a second decagon. The Desargues graph consists of the 20 edges of these two polygons together with an additional 10 edges connecting points of one decagon to the corresponding points of the other.
- It is the Levi graph of the Desargues configuration. This configuration consists of ten points and ten lines describing two perspective triangles, their center of perspectivity, and their axis of perspectivity. The Desargues graph has one vertex for each point, one vertex for each line, and one edge for every incident point-line pair. Desargues' theorem, named after 17th-century French mathematician Gérard Desargues, describes a set of points and lines forming this configuration, and the configuration and the graph take their name from it.
- It is the bipartite double cover of the Petersen graph, formed by replacing each Petersen graph vertex by a pair of vertices and each Petersen graph edge by a pair of crossed edges.
- It is the bipartite Kneser graph H5,2. Its vertices can be labeled by the ten two-element subsets and the ten three-element subsets of a five-element set, with an edge connecting two vertices when one of the corresponding sets is a subset of the other. Equivalently, the Desargues graph is the induced subgraph of the 5-dimensional hypercube determined by the vertices of weight 2 and weight 3.
- The Desargues graph is Hamiltonian and can be constructed from the LCF notation: [5,−5,9,−9]^5. As Erdős conjectured that for k positive, the subgraph of the 2k+1-dimensional hypercube induced by the vertices of weight k and k+1 is Hamiltonian, the Hamiltonicity of the Desargues graph is no surprise. (It also follows from the stronger conjecture of Lovász that except for a few known counter-examples, all vertex-transitive graphs have Hamiltonian cycles.)
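The generalized Petersen construction is easy to carry out and check in code. A pure-Python sketch that builds G(10, 3) and verifies the vertex and edge counts and bipartiteness claimed in this article (the 'o'/'i' vertex labels for the outer and inner decagons are my own convention):

```python
# Build the generalized Petersen graph G(n, k) as a set of undirected edges.
def generalized_petersen(n, k):
    edges = set()
    for i in range(n):
        edges.add(frozenset((('o', i), ('o', (i + 1) % n))))  # outer cycle
        edges.add(frozenset((('i', i), ('i', (i + k) % n))))  # inner star
        edges.add(frozenset((('o', i), ('i', i))))            # spokes
    return edges

# Greedy 2-coloring by depth-first search; fails iff some edge joins
# two vertices of the same color, i.e. the graph has an odd cycle.
def is_bipartite(edges):
    adj = {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    return False
    return True

desargues = generalized_petersen(10, 3)
vertices = {v for e in desargues for v in e}
print(len(vertices), len(desargues))   # 20 vertices, 30 edges
print(is_bipartite(desargues))         # True, so chromatic number 2
```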
The Desargues graph is a symmetric graph: it has symmetries that take any vertex to any other vertex and any edge to any other edge. Its symmetry group has order 240, and is isomorphic to the product of a symmetric group on 5 points with a group of order 2.
One can interpret this product representation of the symmetry group in terms of the constructions of the Desargues graph: the symmetric group on five points is the symmetry group of the Desargues configuration, and the order-2 subgroup swaps the roles of the vertices that represent points of the Desargues configuration and the vertices that represent lines. Alternatively, in terms of the bipartite Kneser graph, the symmetric group on five points acts separately on the two-element and three-element subsets of the five points, and complementation of subsets forms a group of order two that transforms one type of subset into the other. The symmetric group on five points is also the symmetry group of the Petersen graph, and the order-2 subgroup swaps the vertices within each pair of vertices formed in the double cover construction.
The generalized Petersen graph G(n, k) is vertex-transitive if and only if n = 10 and k = 2 or if k² ≡ ±1 (mod n), and is edge-transitive only in the following seven cases: (n, k) = (4, 1), (5, 2), (8, 3), (10, 2), (10, 3), (12, 5), (24, 5). So the Desargues graph is one of only seven symmetric generalized Petersen graphs. Among these seven graphs are the cubical graph G(4, 1), the Petersen graph G(5, 2), the Möbius–Kantor graph G(8, 3), the dodecahedral graph G(10, 2) and the Nauru graph G(12, 5).
The characteristic polynomial of the Desargues graph is (x − 3)(x + 3)(x − 2)^4(x + 2)^4(x − 1)^5(x + 1)^5: its spectrum consists of the eigenvalues ±3 (each with multiplicity 1), ±2 (each with multiplicity 4) and ±1 (each with multiplicity 5).
In chemistry, the Desargues graph is known as the Desargues–Levi graph; it is used to organize systems of stereoisomers of 5-ligand compounds. In this application, the thirty edges of the graph correspond to pseudorotations of the ligands.
The Desargues graph has chromatic number 2, chromatic index 3, radius 5, diameter 5 and girth 6. It is also a 3-vertex-connected and a 3-edge-connected Hamiltonian graph. It has book thickness 3 and queue number 2.
- Weisstein, Eric W. "Desargues Graph". MathWorld.
- Kagno, I. N. (1947), "Desargues' and Pappus' graphs and their groups", American Journal of Mathematics, The Johns Hopkins University Press, 69 (4): 859–863, doi:10.2307/2371806, JSTOR 2371806.
- Frucht, R.; Graver, J. E.; Watkins, M. E. (1971), "The groups of the generalized Petersen graphs", Proceedings of the Cambridge Philosophical Society, 70 (02): 211–218, doi:10.1017/S0305004100049811.
- Balaban, A. T.; Fǎrcaşiu, D.; Bǎnicǎ, R. (1966), "Graphs of multiple 1, 2-shifts in carbonium ions and related systems", Rev. Roum. Chim., 11: 1205
- Mislow, Kurt (1970), "Role of pseudorotation in the stereochemistry of nucleophilic displacement reactions", Acc. Chem. Res., 3 (10): 321–331, doi:10.1021/ar50034a001
- Klavžar, Sandi; Lipovec, Alenka (2003), "Partial cubes as subdivision graphs and as generalized Petersen graphs", Discrete Mathematics, 263: 157–165, doi:10.1016/S0012-365X(02)00575-7
- Wolz, Jessica, Engineering Linear Layouts with SAT. Master Thesis, University of Tübingen, 2018
- Brouwer, A. E.; Cohen, A. M.; and Neumaier, A. Distance-Regular Graphs. New York: Springer-Verlag, 1989. | <urn:uuid:fe4bd73c-1107-4441-b822-8df5d0b3deed> | 3 | 1,525 | Knowledge Article | Science & Tech. | 61.341861 | 95,545,113 |
Scientists at The University of Texas at Austin have observed for the first time that separate populations of the same species -- in this case, coral -- can diverge in their capacity to regulate genes when adapting to their local environment. The research, published today in Nature Ecology and Evolution, reveals a new way for populations to adapt that may help predict how they will fare under climate change.
The new research was based on populations of mustard hill coral, Porites astreoides, living around the Lower Florida Keys. Corals from close to shore are adapted to a more variable environment because there is greater fluctuation in temperature and water quality: imagine them as the more cosmopolitan coral, adapted to handling occasional stressful events that the offshore coral are spared.
When researchers swapped corals from a close-to-shore area with a population of the same species from offshore waters, they found that the inshore-reef corals made bigger changes in their gene activity than the corals collected from an offshore reef. This enabled the inshore corals to adapt better to their new environment.
"It is exciting that populations so close together -- these reefs are less than 5 miles apart -- can be so different," says corresponding author Carly Kenkel, currently affiliated with the Australian Institute of Marine Science. "We've discovered another way that corals can enhance their temperature tolerance, which may be important in determining their response to climate change."
Differences in gene regulation -- the body's ability to make specific genes more or less active -- can be inherited and are pivotal for adapting to environmental change. It was already known that separate populations often develop differences in average levels of gene activity, but now scientists have found that populations can also diverge in their ability to switch genes on and off.
"We show that one population has adapted to its more variable environment by developing an enhanced ability to regulate gene activity," says Mikhail Matz, co-author of the study and an associate professor in the Department of Integrative Biology.
Researchers swapped 15 genetically distinct coral colonies from inshore with 15 colonies found offshore to see whether the corals would regulate their genes to match the pattern observed in the local population. After a year, the transplanted populations did show differences: Formerly inshore corals transplanted offshore changed their gene activity dramatically to closely resemble the locals, whereas offshore corals transplanted inshore were able to go only halfway toward the local gene activity levels. In short, corals that originated from the more variable, close to shore environment were more flexible in their gene regulation.
The lack of flexibility took its toll on the offshore corals, which did not fare well at the inshore reef and experienced stress-induced bleaching. Their higher bleaching levels were linked to the diminished ability to dynamically regulate activity of stress-related genes, confirming that flexibility of gene regulation was an important component of adaptation to the inshore environment.
"We saw different capacity for gene expression plasticity between coral populations because we looked at the behavior of all genes taken together instead of focusing on individual genes," says Kenkel. "If we hadn't, we would have missed the reef for the coral, so to speak."
The research was funded by the National Science Foundation's Division of Environmental Biology.
Kristin Philips | EurekAlert!
How to work with time zones in Rails
This is an issue where the time zone of one request is passed on to another request.
Let's say we have a web server running on a single dyno, and three different users with different time zones ('Mumbai', 'Central America' and 'London').
Here is a code setup that will produce time-zone leaks:
    class UserController < ApplicationController
      before_filter :set_user_time_zone, only: [:method_one_with_time_zone]

      def method_one_with_time_zone
        puts Time.zone # before_filter sets the time zone for this method execution.
      end

      def method_two_without_time_zone
        puts Time.zone # This method doesn't call the before filter, so it uses whatever Time.zone already holds.
      end

      def set_user_time_zone
        puts Time.zone
        Time.zone = current_user.time_zone
      end
    end
We won't see the time-zone leak locally if our testing goes the following way:
    Req 1: USER1 ==> method_one_with_time_zone
           Output: 'UTC' 'Mumbai'
    Req 2: USER1 ==> method_two_without_time_zone
           Output: 'Mumbai'
If the code is set up this way, the developer might not find the issue locally: tasks appear to be created properly in the 'Mumbai' time zone in the second request, even though the second request never set the time zone.
    Req 1: USER1 ==> method_one_with_time_zone
           Output: 'UTC' 'Mumbai'
    Req 2: USER2 ==> method_two_without_time_zone
           Output: 'Mumbai' # Expected zone: 'Central America' -- tasks get created in the wrong zone.
Because this method doesn't call the before filter, it still has USER1's time zone, so it creates tasks in the wrong time zone.
How to avoid such issues.
- Never use Time.zone = 'some_time_zone_name' in any Rails code, as Time.zone is shared across multiple requests in the same thread.
- Use an around filter and Time.use_zone to properly set up your controllers (reference: Timezone setup for controllers).
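The leak-free behavior that Time.use_zone provides can be sketched in dependency-free Ruby (the Clock module and its methods here are illustrative stand-ins, not Rails APIs): the zone is set only for the duration of the block and always restored, so nothing leaks into the next request handled by the same process.

```ruby
# Plain-Ruby sketch of the block-scoped pattern behind Time.use_zone.
module Clock
  @zone = 'UTC'

  class << self
    attr_accessor :zone

    # Set the zone for the duration of the block, then restore the
    # previous value -- even if the block raises.
    def use_zone(new_zone)
      previous = zone
      self.zone = new_zone
      yield
    ensure
      self.zone = previous
    end
  end
end

Clock.use_zone('Mumbai') do
  Clock.zone # => 'Mumbai' while handling this request
end
Clock.zone   # => 'UTC' again -- restored, so the next request is unaffected
```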
Most time-zone issues are not due to an improper zone setting; they are due to how we use the time-zone APIs.
    def test_method
      Time.use_zone('Mumbai') do
        # Developers tend to think that Time.now will give the time in the Mumbai
        # zone, but it will still use the machine time zone; the proper usage is
        # Time.zone.now.
        Time.parse '27-10-2014 6am'      # Wrong: parses the time in the machine zone.
        Time.zone.parse '27-10-2014 6am' # Right: parses in the zone set by use_zone.
      end
      puts Time.zone # UTC, reverted to the old zone, i.e. the machine zone
    end
Things to avoid, and their replacements:
- Replace Time.now with Time.zone.now, in all code including specs.
- Replace Time.parse with Time.zone.parse.
- Replace Time.at(v) with Time.zone.at(v) or Time.at(v).in_time_zone.
- Sometimes you need to convert a time from one zone to another: use in_time_zone(zone = Time.zone). When no argument is passed, as in the previous point, it converts to the current Time.zone.
Never depend on your machine's time zone, so avoid .localtime: it returns the time in the machine zone irrespective of any Time.zone setting. For example, on a Heroku server the default machine zone is UTC, so Time.zone.now.localtime gives the time in UTC even though .zone appears at the front.
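The same idea can be illustrated with only the Ruby standard library (no ActiveSupport): converting between zones changes how an instant is displayed, not the instant itself, which is why tasks scheduled via the wrong representation end up at the wrong local time.

```ruby
# Stdlib-only illustration: Time#getlocal converts the representation of an
# instant to another UTC offset, much like in_time_zone does in Rails.
utc = Time.utc(2014, 10, 27, 6, 0, 0)   # 6:00 am UTC
ist = utc.getlocal('+05:30')            # the same instant viewed at UTC+05:30

ist.hour      # => 11 (6:00 UTC is 11:30 at +05:30)
utc == ist    # => true: equality compares instants, not representations
```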
Useful link on time-zone.
Desert Goby (C. eremius)
Most species live in extreme environments; for example, several species of Chlamydogobius are found in the water that emerges from geothermal springs, such as the Dalhousie goby, found in the waters around Dalhousie Springs.
These fish can live in water with a wide range of temperatures, pH, salinity, and oxygen levels; for example they are found in water with a pH between 6.8 and 9.0, and temperatures between 3 and 43 °C (37–109 °F). They can tolerate salinity as high as 60 parts per thousand (almost twice that of sea water). They have been found in water with extremely low oxygen levels (as low as 0.8 ppm). Their water habitats often exhibit oxygen levels below 5 milligrams of oxygen per litre.
To cope with extremely low oxygen levels, they will emerge from the water to "gulp" air (known as aerial respiration). They also will position themselves over beds of algae to capture the produced oxygen.
They will hide in the mud and silt at the bottom of a stream, or in a plant or under a rock to avoid more extreme water temperatures. Sometimes they will emerge from very hot water for brief periods to take advantage of evaporative cooling.
They can survive even if there are drought conditions that reduce the size of their habitat. If there is a flood that results in drastically increased water flow, they anchor themselves to rocks with their pelvic fins.
Chlamydogobius fish are able to change their colours to blend in with their environments.
- Chlamydogobius eremius (Zietz, 1896) (Desert goby)
- Chlamydogobius gloveri Larson, 1995 (Dalhousie goby)
- Chlamydogobius japalpa Larson, 1995 (Finke goby)
- Chlamydogobius micropterus Larson, 1995 (Elizabeth springs goby)
- Chlamydogobius ranunculus Larson, 1995 (Tadpole goby)
- Chlamydogobius squamigenus Larson, 1995 (Edgbaston goby)
On the way towards a thermonuclear fusion reactor there are several technological and physical uncertainties to be understood and solved. One of the most fundamental problems is the appearance in the machines of many sorts of instabilities which can either enhance the energy outflow or even destroy the magnetic confinement of the fusion plasma. The knowledge of such instabilities is a prerequisite to a good understanding of the behaviour of actual experiments, and to the design of new devices. Most of the effort is devoted to the study of axisymmetric toroidal configurations such as tokamaks or spheromaks and to helically twisted toroidal devices such as stellarators.
Keywords: Unstable Mode, Kink Mode, Confinement Time, Toroidal Geometry, Spectral Code
The atomic theory
JOHN DALTON Proposed the first modern atomic theory: all matter is made of tiny, indivisible atoms.
ARISTOTLE He stated that everything was made from four basic elements: earth, water, air and fire.
DEMOCRITUS He stated that everything was made of indivisible particles, which he called atoms.
JOHANN WOLFGANG DÖBEREINER, JOHN NEWLANDS, DMITRI MENDELEEV, HENRY MOSELEY Worked on the creation and organization of the periodic table.
In that same year in Mexico, the War of Reform ("Guerra de Reforma") had begun.
HENRI BECQUEREL Discovered radioactivity.
Porfirio Díaz's presidency started, which then turned into an unofficial dictatorship.
J.J. THOMSON Discovered the electron and proposed the "plum pudding" model of the atom.
MARIE & PIERRE CURIE Discovered two new elements (polonium and radium) and coined the term "radioactive".
ERNEST RUTHERFORD Fired alpha particles at gold foil and discovered that the atom's mass is concentrated in a tiny nucleus, so atoms weren't all piled up together.
One year after, the Mexican revolution started.
NIELS BOHR, MAX PLANCK, ALBERT EINSTEIN Suggested that the atom had several discrete energy levels.
In this same year in mexico, the revolution ended.
HEISENBERG Explained that it is impossible to know simultaneously the exact position and momentum of an electron.
A number of physical phenomena in crystals are determined by their electron energy spectrum. Some phenomena are associated with the motion of electrons in the periodic field of the lattice and with their scattering on lattice vibrations. The optical, electrical, magnetic, galvanomagnetic, and other properties of crystals of dielectrics, semiconductors, and metals are intimately connected with the nature of the electron energy spectrum and the geometry of the isoenergetic surfaces of the electrons in the crystal, the peculiarities of vibrations of the lattice atoms, and the dispersion of the frequencies of these vibrations. This chapter discusses the energy spectrum of the electrons in a crystal.
Keywords: Wave Vector, Free Electron, Energy Band, Fermi Surface, Brillouin Zone
So, you might ask, if your primary interest in electricity is to understand how machines, instruments and electrical equipment work, is there any point in studying electricity from the very “academic” and abstract approach that will be used in these notes, completely divorced as they appear to be from the world of practical reality? The answer is that electrical engineers more than anybody must understand the basic scientific principles before they even begin to apply them to the design of practical appliances. So – do not even think of electrical engineering until you have a thorough understanding of the basic scientific principles of the subject.
- 1.0: Prelude to Electric Fields
- The subject of electromagnetism is an amalgamation of what were originally studies of three apparently entirely unrelated phenomena, namely electrostatic phenomena of the type demonstrated with pieces of amber, pith balls, and ancient devices such as Leyden jars and Wimshurst machines; magnetism, and the phenomena associated with lodestones, compass needles and Earth's magnetic field; and current electricity – the sort of electricity generated by chemical cells such as Daniell and Leclanché cells.
- 1.2: Triboelectric Effect
- It was long ago noticed that if a sample of amber is rubbed with cloth, the amber became endowed with certain apparently wonderful properties. For example, the amber would be able to attract small particles of fluff to itself. The effect is called the triboelectric effect. The amber, after having been rubbed with cloth, is said to bear an electric charge, and space in the vicinity of the charged amber within which the amber can exert its attractive properties is called an electric field.
- 1.3: Experiments with Pith Balls
- There are two kinds of electric charge, with exactly opposite properties. We observe that like charges (i.e. those of the same sign) repel each other, and unlike charges (i.e. those of opposite sign) attract each other.
- 1.5: Coulomb's Law
- Coulomb’s Law is that two electric charges of like sign repel each other with a force that is proportional to the product of their charges and inversely proportional to the square of the distance between them.
- 1.6: Electric Field E
- The region around a charged body within which it can exert its electrostatic influence may be called an electric field. In principle, it extends to infinity, but in practice it falls off more or less rapidly with distance.
- 1.7: Electric Field D
- We have been assuming that all “experiments” described have been carried out in a vacuum or (which is almost the same thing) in air. But what if the point charge, the infinite rod and the infinite charged sheet of Section 1.6 are all immersed in some medium whose permittivity is not ϵ0, but is instead ϵ?
- 1.8: Flux
- The product of electric field intensity and area is the flux. Whereas E is an intensive quantity, flux is an extensive quantity.
- 1.9: Gauss's Theorem
- Gauss’s theorem argues that the total normal component of the D -flux through any closed surface is equal to the charge enclosed by that surface. It is a natural consequence of the inverse square nature of Coulomb’s law.
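The two quantitative statements summarized above can be written symbolically; the rendering below is a standard one consistent with the quantities named in these sections (Q for charge, r for separation, D for the field of Section 1.7), not a formula quoted from the notes themselves:

```latex
% Coulomb's law (Section 1.5) and Gauss's theorem (Section 1.9):
F \;=\; \frac{1}{4\pi\epsilon_0}\,\frac{Q_1 Q_2}{r^2},
\qquad
\oint_S \mathbf{D}\cdot \mathrm{d}\mathbf{A} \;=\; Q_{\mathrm{enc}}
```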
Thumbnail: The electric field lines and equipotential lines for two equal but opposite charges. The equipotential lines can be drawn by making them perpendicular to the electric field lines, if those are known. Note that the potential is greatest (most positive) near the positive charge and least (most negative) near the negative charge. Image used with permission (CC-BY; OpenStax). | <urn:uuid:2e256d5b-772c-4aca-8307-a2cd0cdb00dc> | 3.671875 | 773 | Academic Writing | Science & Tech. | 43.181759 | 95,545,277 |
This article was originally posted in the Forest Carbon newsletter. Click here to read the original.
5 March 2015 | Even in the Amazon, you can’t escape PowerPoint. Last month, 80 members of the Gavião people met in their territory (called Igarapé Lourdes) in the state of Rondônia, Brazil to discuss their “Life Plan.” Dressed in a combination of traditional and western clothes (feathered headdresses, ceremonial beads, and jeans), the group hung shrouds around the open-air structure to block out sunlight for the presentations.
Life Plans for indigenous peoples have been proliferating across the Amazon for the last 20 years, starting in Colombia in 1992. The plans are shared visions for the future, often built around spatial maps that identify important hunting and harvesting areas, sacred sites, and forested areas, detailed with the quality of cover and species. The Gavião’s Life Plan is based on low-impact agriculture and the sale of native crafts and non-timber forest products such as nuts and copaiba oil. It lays out a strategy for preventing unwanted logging by building monitoring stations and strengthening cooperation with police and government agencies.
As they knock up against funding challenges, one question that many Amazonian indigenous groups are now asking is this: Are Life Plans a version of REDD (Reducing Emissions from Deforestation and Degradation of forests), the carbon finance mechanism that pays for forest protection?
At first glance, Life Plans and the project documentation required around REDD projects seems very different. REDD requires reference levels of deforestation and measurement of carbon stocks technical aspects not found in Life Plans. But the basic principle of fighting the threats to forests by creating long-term economic alternatives is analagous.
During the climate change negotiations in Lima last December, leaders from COICA (Coordinadora de las Organizaciones Indígenas de la Cuenca Amazónica), a federation of indigenous organizations across Latin America, discussed the idea of REDD+ Indígena Amazónico (RIA), or indigenous REDD.
“We’ve been working on our Life Plan since the 1990s,” said Fermín Chimantani, co-president of Peru’s Amaracaeri Reserve. “We’ve created governance structures, we’ve valued our ecosystem services such as water filtration, biodiversity conservation, and evapotranspiration and we’ve shown that we can use our indigenous vision to save and manage our forest.”
The Gavião and the Arara, a neighboring tribe, are exploring the possibility of using carbon finance not at the project but at the jurisdictional level, as has been done in the state of Acre, Brazil. There, the state handles the carbon accounting and earns payments for reducing emissions but then distributes income based on its own criteria and some of it flows to indigenous peoples. Juan Carlos Jintiach, former head of COICA, says most indigenous people will likely bypass project-level REDD which is more directly tied to the carbon markets and go the jurisdictional route.
“Think about all the mega projects that are going to be developed,” said Jintiach. “We know what's going to happen: islands of deforestation, contamination, and criminal activities – but we, the indigenous people of the Amazon, have an answer.”
Read the full story on Life Plans and Indigenous REDD from Ecosystem Marketplace.
And if your organization is developing or transacting offsets from forest carbon projects, be sure to respond to our annual survey here. The survey informs our State of the Forest Carbon Markets report and is used to understand market dynamics and shape policy around avoided deforestation. The deadline for filling it out is today, March 4, but please get in touch with Allie (email@example.com) if you’d like to provide information.
More stories from the forest carbon market are summarized below, so keep reading.
– The Ecosystem Marketplace Team
If you have comments or would like to submit news stories, write to us at firstname.lastname@example.org.
The Forest Carbon Portal provides relevant daily news, a bi-weekly news brief, feature articles, a calendar of events, a searchable member directory, a jobs board, a library of tools and resources. The Portal also includes the Forest Carbon Project Inventory, an international database of projects including those in the pipeline. Projects are described with consistent ‘nutrition labels’ and allow viewers to contact project developers.
ABOUT THE ECOSYSTEM MARKETPLACE
Ecosystem Marketplace is a project of Forest Trends, a tax-exempt corporation under Section 501(c)3. This newsletter and other dimensions of our voluntary carbon markets program are funded by a series of international development agencies, philanthropic foundations, and private sector organizations. For more information on donating to Ecosystem Marketplace, please contact email@example.com. | <urn:uuid:c17bfdf9-9cbc-47d7-83c7-76cd555f4e72> | 3.015625 | 1,064 | News (Org.) | Science & Tech. | 28.328806 | 95,545,283 |
GraalVM allows you to compile your programs ahead-of-time into a native executable. The resulting program does not run on the Java HotSpot VM, but uses necessary components like memory management, thread scheduling from a different implementation of a virtual machine, called Substrate VM. Substrate VM is written in Java and compiled into the native executable. The resulting program has faster startup time and lower runtime memory overhead compared to a Java VM.
Currently, native images work mainly for JVM-based languages, e.g., Java, Scala, Kotlin. The resulting image can, optionally, execute dynamic languages like Ruby, R, or Python, but it does not pre-compile their code itself.
To build a native image of your program, use the native-image utility located in the bin directory of the GraalVM distribution. For compilation, native-image depends on the local toolchain, so please make sure that zlib-devel (header files for the C library) and gcc are available on your system.
Image Generation Options
The native-image command line needs to provide the class path for all classes, using the familiar option from the java launcher: -cp is followed by a list of directories or .jar files, separated by :. The name of the class containing the main method is the last argument; or you can use -jar and provide a .jar file that specifies the main method in its manifest.
Go over the native-image command line options that may be useful:
- native-image [options] class builds an executable file for a class in the current working directory. Invoking it executes the native-compiled code of that class.
- native-image [options] -jar jarfile builds an image for a jar file.
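As a concrete sketch of the first form (the file, class, and output names below are illustrative, not taken from the documentation), a minimal class can be compiled with javac HelloWorld.java and then turned into a native executable with native-image HelloWorld:

```java
// HelloWorld.java -- a minimal program suitable for building a native image.
// Illustrative workflow:  javac HelloWorld.java && native-image HelloWorld
// which produces a 'helloworld' executable with fast startup and no JVM required.
public class HelloWorld {

    // Kept as a separate method purely so the behavior is easy to verify.
    static String message() {
        return "Hello from a native image";
    }

    public static void main(String[] args) {
        System.out.println(message());
    }
}
```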
You may provide additional options to native-image building:
- --class-path <class search path of directories and zip/jar files>: helps to search for class files through a separated list of directories, JAR archives, and ZIP archives;
- --shared: builds a shared library;
- --static: builds a statically linked executable (requires static libc and zlib) (since 1.0.0-rc2);
- -D<name>=<value>: sets a system property;
- -J<flag>: passes <flag> directly to the JVM running the image generator;
- -g: enables debug info generation;
- -O<level>: 0 - no optimizations, 1 - basic optimizations (default);
- -ea: enables assertions in the generated image;
- --verbose: enables verbose output;
- --version: prints product version and exits;
- --help: prints the help message;
- --help-extra: prints the help message on non-standard options;
- --report-unsupported-elements-at-runtime: reports usage of unsupported methods and fields at run time when they are accessed the first time, instead of as an error during image building.
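Combining a few of these options, a typical build invocation might look like the following (the class name, class-path entries, and system property are hypothetical placeholders, and a GraalVM installation with native-image on the PATH is assumed):

```shell
# Build an executable for com.example.Main from two class-path entries,
# setting a system property, with basic optimizations and verbose output.
native-image --class-path build/classes:lib/util.jar \
    -Dservice.name=demo \
    -O1 --verbose \
    com.example.Main
```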
Macro-options are mainly helpful for polyglot capabilities of native images:
- --language:python: makes sure Python is available as a language for the image;
- --language:llvm: makes sure LLVM bitcode is available for the image;
- --language:ruby: makes sure Ruby is available as a language for the image;
- --tool:chromeinspector: adds Truffle debugging support to a Truffle-language image;
- --tool:profiler: adds Truffle profiling support to a Truffle-language image.
Get acquainted with the non-standard native-image building options, which are subject to change without notice:
- --no-server: tells native-image not to use the image-build server;
- --server-list: lists current image-build servers;
- --server-list-details: lists current image-build servers with more details;
- --server-cleanup: removes stale image-build server entries;
- --server-shutdown: shuts down image-build servers under the current session ID;
- --server-shutdown-all: shuts down all image-build servers;
- --server-session=<custom-session-name>: uses a custom session name instead of the system-provided session ID of the calling process;
- --debug-attach[=<port>]: attaches to a debugger during image building (default port is 8000);
- --dry-run: outputs the command line that would be used for image building;
- --expert-options: lists image build options for experts;
- --expert-options-all: lists all image build options for experts (to be used at your own risk);
- --configurations-path <search path of option-configuration directories>: a separated list of directories to be treated as option-configuration directories.
If the environment variable NATIVE_IMAGE_CONFIG_FILE is set to a Java properties file, native-image will use the defaults defined in there on each invocation.
Here is an example of such a configuration file:

    NativeImageArgs = --no-server \
      --configurations-path /home/user/custom-image-configs
If the user has this configuration file and exports NATIVE_IMAGE_CONFIG_FILE pointing to it, then every time native-image gets used, it will implicitly use the arguments specified in NativeImageArgs, plus the arguments specified on the command line.
For a more comprehensive list of options please check the documentation on Github.
Generating Heap Dumps
GraalVM also supports monitoring and generating heap dumps of the native image processes. This functionality is available in the Enterprise edition of GraalVM. It is not available in the Community edition of GraalVM.
To find out more about generating heap dumps of the native image processes, refer to the step-by-step documentation.
Limitations of AOT Compilation
To achieve such aggressive ahead-of-time optimizations, we run an aggressive static analysis that requires a closed-world assumption. We need to know all classes and all bytecodes that are reachable at run time. Therefore, it is not possible to load new classes that have not been available during ahead-of-time-compilation.
For a more detailed description of which Java features are unsupported or only partially supported by native images, please refer to the documentation on GitHub. If you are interested in the support for Java reflection, there is a dedicated document on Java reflection support.
Operational information for native images
When does it make sense to run native images instead of the JVM?
- When startup time and memory footprint are important.
- When you want to embed Java code in existing C/C++ applications.
What’s the typical performance profile on the SVM?
- Right now peak performance is a bit worse than HotSpot, but we don't want to advertise that (and we want to fix it, of course).
What tools work with SVM: debugger, profilers? How to use them?
- The community version does not support DWARF information. The enterprise version supports all native tools that rely on DWARF information, like debuggers (gdb) and profilers (VTune). | <urn:uuid:c16ce202-fb6e-48a1-acf2-95f198935e1d> | 2.609375 | 1,475 | Documentation | Software Dev. | 22.549953 | 95,545,284 |
Fluorescence is the emission of light by a substance that has absorbed light or other electromagnetic radiation. It is a form of luminescence. In most cases, the emitted light has a longer wavelength, and therefore lower energy, than the absorbed radiation. The most striking example of fluorescence occurs when the absorbed radiation is in the ultraviolet region of the spectrum, and thus invisible to the human eye, while the emitted light is in the visible region, which gives the fluorescent substance a distinct color that can be seen only when exposed to UV light. Fluorescent materials cease to glow nearly immediately when the radiation source stops, unlike phosphorescent materials, which continue to emit light for some time after.
Fluorescence has many practical applications, including mineralogy, gemology, medicine, chemical sensors (fluorescence spectroscopy), fluorescent labelling, dyes, biological detectors, cosmic-ray detection, and, most commonly, fluorescent lamps. Fluorescence also occurs frequently in nature in some minerals and in various biological states in many branches of the animal kingdom.
- 1 History
- 2 Physical principles
- 3 Rules
- 4 Fluorescence in nature
- 4.1 Biofluorescence vs. bioluminescence vs. biophosphorescence
- 4.2 Mechanisms of biofluorescence
- 4.3 Phylogenetics
- 4.4 Aquatic biofluorescence
- 4.5 Terrestrial biofluorescence
- 4.6 Abiotic fluorescence
- 5 Applications of fluorescence
- 6 See also
- 7 References
- 8 Bibliography
- 9 External links
An early observation of fluorescence was described in 1560 by Bernardino de Sahagún and in 1565 by Nicolás Monardes in the infusion known as lignum nephriticum (Latin for "kidney wood"). It was derived from the wood of two tree species, Pterocarpus indicus and Eysenhardtia polystachya. The chemical compound responsible for this fluorescence is matlaline, which is the oxidation product of one of the flavonoids found in this wood.
In 1819, Edward D. Clarke and in 1822 René Just Haüy described fluorescence in fluorites, Sir David Brewster described the phenomenon for chlorophyll in 1833 and Sir John Herschel did the same for quinine in 1845.
In his 1852 paper on the "Refrangibility" (wavelength change) of light, George Gabriel Stokes described the ability of fluorspar and uranium glass to change invisible light beyond the violet end of the visible spectrum into blue light. He named this phenomenon fluorescence: "I am almost inclined to coin a word, and call the appearance fluorescence, from fluor-spar [i.e., fluorite], as the analogous term opalescence is derived from the name of a mineral." The name was derived from the mineral fluorite (calcium difluoride), some examples of which contain traces of divalent europium, which serves as the fluorescent activator to emit blue light. In a key experiment he used a prism to isolate ultraviolet radiation from sunlight and observed blue light emitted by an ethanol solution of quinine exposed to it.
Fluorescence occurs when an excited molecule returns to its ground state by emitting a photon. The elementary steps can be written as:
- Excitation: S0 + hνex → S1
- Fluorescence (emission): S1 → S0 + hνem
S0 is called the ground state of the fluorophore (fluorescent molecule), and S1 is its first (electronically) excited singlet state.
A molecule in S1 can relax by various competing pathways. It can undergo non-radiative relaxation in which the excitation energy is dissipated as heat (vibrations) to the solvent. Excited organic molecules can also relax via conversion to a triplet state, which may subsequently relax via phosphorescence, or by a secondary non-radiative relaxation step.
Relaxation from S1 can also occur through interaction with a second molecule, in a process called fluorescence quenching. Molecular oxygen (O2) is an extremely efficient quencher of fluorescence because of its unusual triplet ground state.
In most cases, the emitted light has a longer wavelength, and therefore lower energy, than the absorbed radiation; this phenomenon is known as the Stokes shift. However, when the absorbed electromagnetic radiation is intense, it is possible for one electron to absorb two photons; this two-photon absorption can lead to emission of radiation having a shorter wavelength than the absorbed radiation. The emitted radiation may also be of the same wavelength as the absorbed radiation, termed "resonance fluorescence".
Molecules that are excited through light absorption or via a different process (e.g. as the product of a reaction) can transfer energy to a second 'sensitized' molecule, which is converted to its excited state and can then fluoresce.
The maximum possible fluorescence quantum yield is 1.0 (100%); each photon absorbed results in a photon emitted. Compounds with quantum yields of 0.10 are still considered quite fluorescent. Another way to define the quantum yield of fluorescence is by the rate of excited state decay:

Φ = k_f / Σ k_i

where k_f is the rate constant of spontaneous emission of radiation and Σ k_i is the sum of all rates of excited state decay. Other rates of excited state decay are caused by mechanisms other than photon emission and are, therefore, often called "non-radiative rates", which can include: dynamic collisional quenching, near-field dipole-dipole interaction (or resonance energy transfer), internal conversion, and intersystem crossing. Thus, if the rate of any pathway changes, both the excited state lifetime and the fluorescence quantum yield will be affected.
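The quantum-yield relation is simple enough to sketch numerically. The rate constants in the example below are illustrative round numbers, not measured values for any particular fluorophore:

```python
def quantum_yield(k_radiative, nonradiative_rates):
    """Fluorescence quantum yield: fraction of absorbed photons re-emitted.

    k_radiative: rate constant of spontaneous (radiative) emission, in 1/s.
    nonradiative_rates: competing non-radiative decay rates (collisional
    quenching, internal conversion, intersystem crossing, ...), in 1/s.
    """
    return k_radiative / (k_radiative + sum(nonradiative_rates))

# With no competing pathways, every absorbed photon is emitted (yield 1.0).
# Radiative emission at 1e8 1/s against non-radiative decay totalling
# 9e8 1/s gives a quantum yield of 0.10 -- still "quite fluorescent".
phi = quantum_yield(1e8, [6e8, 3e8])
```

Because every decay pathway appears in the denominator, speeding up any non-radiative route (for example by adding a quencher such as molecular oxygen) lowers the yield.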
The fluorescence lifetime refers to the average time the molecule stays in its excited state before emitting a photon. Fluorescence typically follows first-order kinetics:

[S1] = [S1]₀ e^(−Γt)

where [S1] is the concentration of excited state molecules at time t, [S1]₀ is the initial concentration and Γ is the decay rate, or the inverse of the fluorescence lifetime. This is an instance of exponential decay. Various radiative and non-radiative processes can de-populate the excited state. In such a case the total decay rate is the sum over all rates:

Γ_tot = Γ_rad + Γ_nrad

where Γ_tot is the total decay rate, Γ_rad the radiative decay rate and Γ_nrad the non-radiative decay rate. It is similar to a first-order chemical reaction in which the first-order rate constant is the sum of all of the rates (a parallel kinetic model). If the rate of spontaneous emission, or any of the other rates are fast, the lifetime is short. For commonly used fluorescent compounds, typical excited state decay times for photon emissions with energies from the UV to near infrared are within the range of 0.5 to 20 nanoseconds. The fluorescence lifetime is an important parameter for practical applications of fluorescence such as fluorescence resonance energy transfer and fluorescence-lifetime imaging microscopy.
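The first-order decay law and the lifetime definition can be checked with a few lines of code; the decay rates used here are illustrative values chosen to land inside the typical 0.5–20 ns range quoted in the text:

```python
import math

def lifetime(gamma_rad, gamma_nonrad):
    """Fluorescence lifetime: inverse of the total decay rate (rates in 1/s)."""
    return 1.0 / (gamma_rad + gamma_nonrad)

def excited_fraction(t, gamma_rad, gamma_nonrad):
    """Fraction of initially excited molecules still excited at time t,
    following first-order (exponential) decay."""
    return math.exp(-(gamma_rad + gamma_nonrad) * t)

# Illustrative rates: a total decay rate of 2.5e8 1/s gives a 4 ns lifetime.
tau = lifetime(2.5e7, 2.25e8)

# After one lifetime, the excited population has fallen to 1/e (~37%).
remaining = excited_fraction(tau, 2.5e7, 2.25e8)
```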
The Jablonski diagram describes most of the relaxation mechanisms for excited state molecules. The diagram alongside shows how fluorescence occurs due to the relaxation of certain excited electrons of a molecule.
Fluorophores are more likely to be excited by photons if the transition moment of the fluorophore is parallel to the electric vector of the photon. The polarization of the emitted light will also depend on the transition moment. The transition moment is dependent on the physical orientation of the fluorophore molecule. For fluorophores in solution this means that the intensity and polarization of the emitted light is dependent on rotational diffusion. Therefore, anisotropy measurements can be used to investigate how freely a fluorescent molecule moves in a particular environment.
Fluorescence anisotropy can be defined quantitatively as

r = (I∥ − I⊥) / (I∥ + 2 I⊥)

where I∥ is the emitted intensity parallel to the polarization of the excitation light and I⊥ is the emitted intensity perpendicular to the polarization of the excitation light.
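The anisotropy formula is a one-liner; the two limiting cases below are standard results for one-photon excitation (note an assumption: real measurements also require instrument polarization-bias corrections, which are omitted here):

```python
def anisotropy(i_parallel, i_perpendicular):
    """Fluorescence anisotropy r = (I_par - I_perp) / (I_par + 2*I_perp).

    The denominator is the total emitted intensity for cylindrically
    symmetric emission about the excitation polarization axis.
    """
    return (i_parallel - i_perpendicular) / (i_parallel + 2.0 * i_perpendicular)

# Fully depolarized emission (fast, free rotation) gives r = 0.
# The one-photon limit I_par = 3 * I_perp gives the maximum r = 0.4,
# seen only for fluorophores immobile on the fluorescence timescale.
r_free = anisotropy(1.0, 1.0)
r_fixed = anisotropy(3.0, 1.0)
```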
Strongly fluorescent pigments often have an unusual appearance which is often described colloquially as a "neon color." This phenomenon was termed "Farbenglut" by Hermann von Helmholtz and "fluorence" by Ralph M. Evans. It is generally thought to be related to the high brightness of the color relative to what it would be as a component of white. Fluorescence shifts energy in the incident illumination from shorter wavelengths to longer (such as blue to yellow) and thus can make the fluorescent color appear brighter (more saturated) than it could possibly be by reflection alone.
There are several general rules that deal with fluorescence. Each of the following rules has exceptions but they are useful guidelines for understanding fluorescence (these rules do not necessarily apply to two-photon absorption).
Kasha's rule dictates that the quantum yield of luminescence is independent of the wavelength of exciting radiation. This occurs because excited molecules usually decay to the lowest vibrational level of the excited state before fluorescence emission takes place. The Kasha–Vavilov rule does not always apply and is violated severely in many simple molecules. A somewhat more reliable statement, although still with exceptions, would be that the fluorescence spectrum shows very little dependence on the wavelength of exciting radiation.
Mirror image rule
For many fluorophores the absorption spectrum is a mirror image of the emission spectrum. This is known as the mirror image rule and is related to the Franck–Condon principle, which states that electronic transitions are vertical, that is, the energy changes while the nuclear distance does not, as can be represented by a vertical line in the Jablonski diagram. This means the nucleus does not move during the transition, and the vibration levels of the excited state resemble the vibration levels of the ground state.
In general, emitted fluorescence light has a longer wavelength and lower energy than the absorbed light. This phenomenon, known as Stokes shift, is due to energy loss between the time a photon is absorbed and when a new one is emitted. The causes and magnitude of Stokes shift can be complex and are dependent on the fluorophore and its environment. However, there are some common causes. It is frequently due to non-radiative decay to the lowest vibrational energy level of the excited state. Another factor is that the emission of fluorescence frequently leaves a fluorophore in a higher vibrational level of the ground state.
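The energy loss behind the Stokes shift can be made concrete with a photon-energy calculation. The 350 nm / 450 nm pair below is a rough, illustrative approximation of quinine's absorption and emission maxima, not a precise spectroscopic value:

```python
H = 6.62607015e-34       # Planck constant, J*s (exact, SI 2019)
C = 2.99792458e8         # speed of light in vacuum, m/s (exact)
EV = 1.602176634e-19     # joules per electronvolt (exact)

def photon_energy_ev(wavelength_nm):
    """Photon energy in electronvolts for a given vacuum wavelength."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Roughly quinine-like values: absorb near 350 nm, emit near 450 nm.
e_absorbed = photon_energy_ev(350.0)   # ~3.54 eV
e_emitted = photon_energy_ev(450.0)    # ~2.76 eV
stokes_loss = e_absorbed - e_emitted   # energy dissipated per emitted photon
```

The longer emission wavelength always corresponds to a lower photon energy, with the difference dissipated mostly as heat during vibrational relaxation.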
Fluorescence in nature
There are many natural compounds that exhibit fluorescence, and they have a number of applications. Some deep-sea animals, such as the greeneye, use fluorescence.
Biofluorescence vs. bioluminescence vs. biophosphorescence
Biofluorescence is the absorption of electromagnetic wavelengths from the visible light spectrum by fluorescent proteins in a living organism, and the emission of light at a lower energy level. This causes the light that is emitted to be a different color than the light that is absorbed. Stimulating light excites an electron, raising its energy to an unstable level. This instability is unfavorable, so the energized electron returns to a stable state almost immediately after becoming unstable. This return to stability corresponds with the release of excess energy in the form of fluorescence light. The emission is only observable while the stimulating light continues to illuminate the organism or object, and is typically yellow, pink, orange, red, green, or purple. Biofluorescence is often confused with the following forms of biotic light: bioluminescence and biophosphorescence.
Bioluminescence differs from biofluorescence in that it is the natural production of light by chemical reactions within an organism, whereas biofluorescence is the absorption and reemission of light from the environment.
Biophosphorescence is similar to biofluorescence in its requirement of light wavelengths as a provider of excitation energy. The difference here lies in the relative stability of the energized electron. Unlike with biofluorescence, here the electron retains stability, emitting light that continues to “glow-in-the-dark” even long after the stimulating light source has been removed.
Mechanisms of biofluorescence
Pigment cells that exhibit fluorescence are called fluorescent chromatophores, and function somatically in much the same way as regular chromatophores. These cells are dendritic, and contain pigments called fluorosomes. These pigments contain fluorescent proteins which are activated by K+ (potassium) ions, and it is their movement, aggregation, and dispersion within the fluorescent chromatophore that cause directed fluorescence patterning. Fluorescent cells are innervated in the same way as other chromatophores, such as melanophores, pigment cells that contain melanin. Short-term fluorescent patterning and signaling is controlled by the nervous system. Fluorescent chromatophores can be found in the skin (e.g. in fish) just below the epidermis, amongst other chromatophores.
Epidermal fluorescent cells in fish also respond to hormonal stimuli by the α–MSH and MCH hormones much the same as melanophores. This suggests that fluorescent cells may have color changes throughout the day that coincide with their circadian rhythm. Fish may also be sensitive to cortisol induced stress responses to environmental stimuli, such as interaction with a predator or engaging in a mating ritual.
It is suspected by some scientists that GFPs and GFP-like proteins began as electron donors activated by light. These electrons were then used for reactions requiring light energy. Functions of fluorescent proteins, such as protection from the sun, conversion of light into different wavelengths, or signaling, are thought to have evolved secondarily.
The incidence of fluorescence across the tree of life is widespread, and has been studied most extensively in a phylogenetic sense in fish. The phenomenon appears to have evolved multiple times in multiple taxa such as in the anguilliformes (eels), gobioidei (gobies and cardinalfishes), and tetraodontiformes (triggerfishes), along with the other taxa discussed later in the article. Fluorescence is highly genotypically and phenotypically variable even within ecosystems, with regard to the wavelengths emitted, the patterns displayed, and the intensity of the fluorescence. Generally, the species relying upon camouflage exhibit the greatest diversity in fluorescence, likely because camouflage is one of the most common uses of fluorescence.
Currently, relatively little is known about the functional significance of fluorescence and fluorescent proteins. However, it is suspected that biofluorescence may serve important functions in signaling and communication, mating, lures, camouflage, UV protection and antioxidation, photoacclimation, dinoflagellate regulation, and in coral health.
Water absorbs light of long wavelengths, so less light from these wavelengths reflects back to reach the eye. Therefore, warm colors from the visible light spectrum appear less vibrant at increasing depths. Water scatters light of shorter wavelengths, meaning cooler colors dominate the visual field in the photic zone. Light intensity decreases tenfold with every 75 m of depth, so at a depth of 75 m light is 10% as intense as it is at the surface, and at 150 m it is only 1% as intense. Because water filters the wavelengths and intensity of light reaching certain depths, different proteins, because of the wavelengths and intensities of light they are capable of absorbing, are better suited to different depths. Theoretically, some fish eyes can detect light as deep as 1000 m. At these depths of the aphotic zone, the only sources of light are organisms themselves, giving off light through chemical reactions in a process called bioluminescence.
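The tenfold-per-75-m attenuation figure corresponds to a simple power law. The helper below reproduces the 10% and 1% figures; note that this treats attenuation as wavelength-independent, which the surrounding text makes clear it is not, so it is only a rough overall scaling:

```python
def relative_intensity(depth_m, tenfold_depth_m=75.0):
    """Fraction of surface light intensity remaining at a given depth,
    assuming intensity falls by a factor of 10 every `tenfold_depth_m`."""
    return 10.0 ** (-depth_m / tenfold_depth_m)

# Reproduces the figures in the text: 10% at 75 m, 1% at 150 m.
at_75 = relative_intensity(75.0)
at_150 = relative_intensity(150.0)
```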
Fluorescence is simply defined as the absorption of electromagnetic radiation at one wavelength and its reemission at another, lower energy wavelength. Thus any type of fluorescence depends on the presence of external sources of light. Biologically functional fluorescence is found in the photic zone, where there is not only enough light to cause biofluorescence, but enough light for other organisms to detect it. The visual field in the photic zone is naturally blue, so colors of fluorescence can be detected as bright reds, oranges, yellows, and greens. Green is the most commonly found color in the biofluorescent spectrum, yellow the second most, orange the third, and red the rarest. Fluorescence can occur in organisms in the aphotic zone as a byproduct of that same organism's bioluminescence. Some biofluorescence in the aphotic zone is merely a byproduct of the organism's tissue biochemistry and does not have a functional purpose. However, the functional and adaptive significance of biofluorescence in the aphotic zone of the deep ocean is an active area of research.
Bony fishes living in shallow water, due to living in a colorful environment, generally have good color vision. Thus, in shallow-water fishes, red, orange, and green fluorescence most likely serves as a means of communication with conspecifics, especially given the great phenotypic variance of the phenomenon.
Many fish that exhibit biofluorescence, such as sharks, lizardfish, scorpionfish, wrasses, and flatfishes, also possess yellow intraocular filters. Yellow intraocular filters in the lenses and cornea of certain fishes function as long-pass filters, enabling the species that possess them to visualize, and potentially exploit, fluorescence in order to enhance visual contrast and detect patterns that are unseen by other fishes and predators lacking this visual specialization. Biofluorescent patterning is especially prominent in cryptically patterned fishes possessing complex camouflage, and many of these lineages also possess yellow long-pass intraocular filters that could enable visualization of such patterns.
Another adaptive use of fluorescence is to generate red light from the ambient blue light of the photic zone to aid vision. Red light can only be seen across short distances due to attenuation of red light wavelengths by water. Many fish species that fluoresce are small, group-living, or benthic/aphotic, and have conspicuous patterning. This patterning is caused by fluorescent tissue and is visible to other members of the species; however, the patterning is invisible at other visual spectra. These intraspecific fluorescent patterns also coincide with intra-species signaling. The patterns are present in ocular rings, indicating the directionality of an individual's gaze, and along fins, indicating the directionality of an individual's movement. Current research suspects that this red fluorescence is used for private communication between members of the same species. Due to the prominence of blue light at ocean depths, red light and light of longer wavelengths are muddled, and many predatory reef fish have little to no sensitivity for light at these wavelengths. Fish such as the fairy wrasse that have developed visual sensitivity to longer wavelengths are able to display red fluorescent signals that give a high contrast to the blue environment and are conspicuous to conspecifics in short ranges, yet are relatively invisible to other common fish that have reduced sensitivities to long wavelengths. Thus, fluorescence can be used as adaptive signaling and intra-species communication in reef fish.
Additionally, it is suggested that fluorescent tissues that surround an organism’s eyes are used to convert blue light from the photic zone or green bioluminescence in the aphotic zone into red light to aid vision.
Fluorescence serves a wide variety of functions in coral. Fluorescent proteins in corals may contribute to photosynthesis by converting otherwise unusable wavelengths of light into ones for which the coral’s symbiotic algae are able to conduct photosynthesis. Also, the proteins may fluctuate in number as more or less light becomes available as a means of photoacclimation. Similarly, these fluorescent proteins may possess antioxidant capacities to eliminate oxygen radicals produced by photosynthesis. Finally, through modulating photosynthesis, the fluorescent proteins may also serve as a means of regulating the activity of the coral’s photosynthetic algal symbionts.
Alloteuthis subulata and Loligo vulgaris, two types of nearly transparent squid, have fluorescent spots above their eyes. These spots reflect incident light, which may serve as a means of camouflage, but also for signaling to other squids for schooling purposes.
Another, well-studied example of biofluorescence in the ocean is the hydrozoan Aequorea victoria. This jellyfish lives in the photic zone off the west coast of North America and was identified as a carrier of green fluorescent protein (GFP) by Osamu Shimomura. The gene for these green fluorescent proteins has been isolated and is scientifically significant because it is widely used in genetic studies to indicate the expression of other genes.
Several species of mantis shrimp, which are stomatopod crustaceans, including Lysiosquillina glabriuscula, have yellow fluorescent markings along their antennal scales and carapace (shell) that males present during threat displays to predators and other males. The display involves raising the head and thorax, spreading the striking appendages and other maxillipeds, and extending the prominent, oval antennal scales laterally, which makes the animal appear larger and accentuates its yellow fluorescent markings. Furthermore, as depth increases, mantis shrimp fluorescence accounts for a greater part of the visible light available. During mating rituals, mantis shrimp actively fluoresce, and the wavelength of this fluorescence matches the wavelengths detected by their eye pigments.
Siphonophorae is an order of marine animals from the phylum Hydrozoa that consist of a specialized medusoid and polyp zooid. Some siphonophores, including the genus Erenna that live in the aphotic zone between depths of 1600 m and 2300 m, exhibit yellow to red fluorescence in the photophores of their tentacle-like tentilla. This fluorescence occurs as a by-product of bioluminescence from these same photophores. The siphonophores exhibit the fluorescence in a flicking pattern that is used as a lure to attract prey.
The predatory deep-sea dragonfish Malacosteus niger, the closely related genus Aristostomias and the species Pachystomias microdon are capable of harnessing the blue light emitted from their own bioluminescence to generate red biofluorescence from suborbital photophores. This red fluorescence is invisible to other animals, which allows these dragonfish extra light at dark ocean depths without attracting or signaling predators.
The polka-dot tree frog, widely found in the Amazon basin, was discovered in 2017 to be the first known fluorescent amphibian. The frog is pale green with dots in white, yellow, or light red. The fluorescence of the frog was discovered unintentionally in Buenos Aires, Argentina, and was traced to a new compound found in the lymph and skin glands. The main fluorescent compound is Hyloin-L1, which gives a blue-green glow when exposed to violet or ultraviolet light. The scientists behind the discovery suggest that the fluorescence may be used for communication. They also estimate that around 100 to 200 other frog species are likely to be fluorescent.
Swallowtail (Papilio) butterflies have complex systems for emitting fluorescent light. Their wings contain pigment-infused crystals that provide directed fluorescent light. These crystals function to produce fluorescent light best when they absorb radiance from sky-blue light (wavelength about 420 nm). The wavelengths of light that the butterflies see the best correspond to the absorbance of the crystals in the butterfly's wings. This likely functions to enhance the capacity for signaling.
Parrots have fluorescent plumage that may be used in mate signaling. A study using mate-choice experiments on budgerigars (Melopsittacus undulatus) found compelling support for fluorescent sexual signaling, with both males and females significantly preferring birds with the fluorescent experimental stimulus. This study suggests that the fluorescent plumage of parrots is not simply a by-product of pigmentation, but instead an adapted sexual signal. Considering the intricacies of the pathways that produce fluorescent pigments, there may be significant costs involved. Therefore, individuals exhibiting strong fluorescence may be honest indicators of high individual quality, since they can deal with the associated costs.
Spiders fluoresce under UV light and possess a huge diversity of fluorophores. Remarkably, spiders are the only known group in which fluorescence is “taxonomically widespread, variably expressed, evolutionarily labile, and probably under selection and potentially of ecological importance for intraspecific and interspecific signaling.” A study by Andrews et al. (2007) reveals that fluorescence has evolved multiple times across spider taxa, with novel fluorophores evolving during spider diversification. In some spiders, ultraviolet cues are important for predator-prey interactions, intraspecific communication, and camouflaging with matching fluorescent flowers. Differing ecological contexts could favor inhibition or enhancement of fluorescence expression, depending upon whether fluorescence helps spiders be cryptic or makes them more conspicuous to predators. Therefore, natural selection could be acting on expression of fluorescence across spider species.
Scorpions also fluoresce under ultraviolet light, owing to fluorescent compounds such as beta-carboline in their cuticle.
The Mirabilis jalapa flower contains violet, fluorescent betacyanins and yellow, fluorescent betaxanthins. Under white light, parts of the flower containing only betaxanthins appear yellow, but in areas where both betaxanthins and betacyanins are present, the visible fluorescence of the flower is faded due to internal light-filtering mechanisms. Fluorescence was previously suggested to play a role in pollinator attraction, however, it was later found that the visual signal by fluorescence is negligible compared to the visual signal of light reflected by the flower.
Gemology, mineralogy and geology
Many types of calcite and amber will fluoresce under shortwave UV, longwave UV and visible light. Rubies, emeralds, and diamonds exhibit red fluorescence under long-wave UV, blue and sometimes green light; diamonds also emit light under X-ray radiation.
Fluorescence in minerals is caused by a wide range of activators. In some cases, the concentration of the activator must be restricted to below a certain level, to prevent quenching of the fluorescent emission. Furthermore, the mineral must be free of impurities such as iron or copper, to prevent quenching of possible fluorescence. Divalent manganese, in concentrations of up to several percent, is responsible for the red or orange fluorescence of calcite, the green fluorescence of willemite, the yellow fluorescence of esperite, and the orange fluorescence of wollastonite and clinohedrite. Hexavalent uranium, in the form of the uranyl cation, fluoresces at all concentrations in a yellow green, and is the cause of fluorescence of minerals such as autunite or andersonite, and, at low concentration, is the cause of the fluorescence of such materials as some samples of hyalite opal. Trivalent chromium at low concentration is the source of the red fluorescence of ruby. Divalent europium is the source of the blue fluorescence, when seen in the mineral fluorite. Trivalent lanthanides such as terbium and dysprosium are the principal activators of the creamy yellow fluorescence exhibited by the yttrofluorite variety of the mineral fluorite, and contribute to the orange fluorescence of zircon. Powellite (calcium molybdate) and scheelite (calcium tungstate) fluoresce intrinsically in yellow and blue, respectively. When present together in solid solution, energy is transferred from the higher-energy tungsten to the lower-energy molybdenum, such that fairly low levels of molybdenum are sufficient to cause a yellow emission for scheelite, instead of blue. Low-iron sphalerite (zinc sulfide), fluoresces and phosphoresces in a range of colors, influenced by the presence of various trace impurities.
Crude oil (petroleum) fluoresces in a range of colors, from dull-brown for heavy oils and tars through to bright-yellowish and bluish-white for very light oils and condensates. This phenomenon is used in oil exploration drilling to identify very small amounts of oil in drill cuttings and core samples.
Organic solutions, such as anthracene or stilbene dissolved in benzene or toluene, fluoresce under ultraviolet or gamma-ray irradiation. The decay times of this fluorescence are on the order of nanoseconds, since the duration of the light depends on the lifetime of the excited states of the fluorescent material, in this case anthracene or stilbene.
Scintillation is defined as a flash of light produced in a transparent material by the passage of a particle (an electron, an alpha particle, an ion, or a high-energy photon). Stilbene and its derivatives are used in scintillation counters to detect such particles. Stilbene is also one of the gain media used in dye lasers.
Fluorescence is observed in the atmosphere when the air is under energetic electron bombardment. In cases such as the natural aurora, high-altitude nuclear explosions, and rocket-borne electron gun experiments, the molecules and ions formed have a fluorescent response to light.
Common materials that fluoresce
- Vitamin B2 fluoresces yellow.
- Tonic water fluoresces blue due to the presence of quinine.
- Highlighter ink is often fluorescent due to the presence of pyranine.
- Banknotes, postage stamps and credit cards often have fluorescent security features.
Applications of fluorescence
The common fluorescent lamp relies on fluorescence. Inside the glass tube is a partial vacuum and a small amount of mercury. An electric discharge in the tube causes the mercury atoms to emit mostly ultraviolet light. The tube is lined with a coating of a fluorescent material, called the phosphor, which absorbs the ultraviolet and re-emits visible light. Fluorescent lighting is more energy-efficient than incandescent lighting elements. However, the uneven spectrum of traditional fluorescent lamps may cause certain colors to appear different than when illuminated by incandescent light or daylight. The mercury vapor emission spectrum is dominated by a short-wave UV line at 254 nm (which provides most of the energy to the phosphors), accompanied by visible light emission at 436 nm (blue), 546 nm (green) and 579 nm (yellow-orange). These three lines can be observed superimposed on the white continuum using a hand spectroscope, for light emitted by the usual white fluorescent tubes. These same visible lines, accompanied by the emission lines of trivalent europium and trivalent terbium, and further accompanied by the emission continuum of divalent europium in the blue region, comprise the more discontinuous light emission of the modern trichromatic phosphor systems used in many compact fluorescent lamp and traditional lamps where better color rendition is a goal.
Fluorescent lights were first available to the public at the 1939 New York World's Fair. Improvements since then have largely been better phosphors, longer life, and more consistent internal discharge, and easier-to-use shapes (such as compact fluorescent lamps). Some high-intensity discharge (HID) lamps couple their even-greater electrical efficiency with phosphor enhancement for better color rendition.
White light-emitting diodes (LEDs) became available in the mid-1990s as LED lamps, in which blue light emitted from the semiconductor strikes phosphors deposited on the tiny chip. The combination of the blue light that continues through the phosphor and the green to red fluorescence from the phosphors produces a net emission of white light.
Many analytical procedures involve the use of a fluorometer, usually with a single exciting wavelength and single detection wavelength. Because of the sensitivity that the method affords, fluorescent molecule concentrations as low as 1 part per trillion can be measured.
Fluorescence in several wavelengths can be detected by an array detector, to detect compounds from HPLC flow. Also, TLC plates can be visualized if the compounds or a coloring reagent is fluorescent. Fluorescence is most effective when there is a larger ratio of atoms at lower energy levels in a Boltzmann distribution. There is, then, a higher probability of excitement and release of photons by lower-energy atoms, making analysis more efficient.
Usually the setup of a fluorescence assay involves a light source, which may emit many different wavelengths of light. In general, a single wavelength is required for proper analysis, so, in order to selectively filter the light, it is passed through an excitation monochromator, and then that chosen wavelength is passed through the sample cell. After absorption and re-emission of the energy, many wavelengths may emerge due to Stokes shift and various electron transitions. To separate and analyze them, the fluorescent radiation is passed through an emission monochromator, and observed selectively by a detector.
Biochemistry and medicine
Fluorescence in the life sciences is used generally as a non-destructive way of tracking or analysis of biological molecules by means of the fluorescent emission at a specific frequency where there is no background from the excitation light, as relatively few cellular components are naturally fluorescent (called intrinsic or autofluorescence). In fact, a protein or other component can be "labelled" with an extrinsic fluorophore, a fluorescent dye that can be a small molecule, protein, or quantum dot, finding a large use in many biological applications.
The quantification of a dye is done with a spectrofluorometer and finds additional applications in:
- When scanning the fluorescence intensity across a plane one has fluorescence microscopy of tissues, cells, or subcellular structures, which is accomplished by labeling an antibody with a fluorophore and allowing the antibody to find its target antigen within the sample. Labelling multiple antibodies with different fluorophores allows visualization of multiple targets within a single image (multiple channels). DNA microarrays are a variant of this.
- Immunology: An antibody is first prepared by having a fluorescent chemical group attached, and the sites (e.g., on a microscopic specimen) where the antibody has bound can be seen, and even quantified, by the fluorescence.
- FLIM (Fluorescence Lifetime Imaging Microscopy) can be used to detect certain bio-molecular interactions that manifest themselves by influencing fluorescence lifetimes.
- Cell and molecular biology: detection of colocalization using fluorescence-labelled antibodies for selective detection of the antigens of interest using specialized software, such as CoLocalizer Pro.
- FRET (Förster resonance energy transfer, also known as fluorescence resonance energy transfer) is used to study protein interactions, detect specific nucleic acid sequences and used as biosensors, while fluorescence lifetime (FLIM) can give an additional layer of information.
- Biotechnology: biosensors using fluorescence are being studied as possible Fluorescent glucose biosensors.
- Automated sequencing of DNA by the chain termination method; each of four different chain terminating bases has its own specific fluorescent tag. As the labelled DNA molecules are separated, the fluorescent label is excited by a UV source, and the identity of the base terminating the molecule is identified by the wavelength of the emitted light.
- FACS (fluorescence-activated cell sorting). One of several important cell sorting techniques used in the separation of different cell lines (especially those isolated from animal tissues).
- DNA detection: the compound ethidium bromide, in aqueous solution, has very little fluorescence, as it is quenched by water. Ethidium bromide's fluorescence is greatly enhanced after it binds to DNA, so this compound is very useful in visualising the location of DNA fragments in agarose gel electrophoresis. Intercalated ethidium is in a hydrophobic environment when it is between the base pairs of the DNA, protected from quenching by water which is excluded from the local environment of the intercalated ethidium. Ethidium bromide may be carcinogenic – an arguably safer alternative is the dye SYBR Green.
- FIGS (Fluorescence image-guided surgery) is a medical imaging technique that uses fluorescence to detect properly labeled structures during surgery.
- Intravascular fluorescence is a catheter-based medical imaging technique that uses fluorescence to detect high-risk features of atherosclerosis and unhealed vascular stent devices. Plaque autofluorescence has been used in a first-in-man study in coronary arteries in combination with optical coherence tomography. Molecular agents has been also used to detect specific features, such as stent fibrin accumulation and enzymatic activity related to artery inflammation.
- SAFI (species altered fluorescence imaging) an imaging technique in electrokinetics and microfluidics. It uses non-electromigrating dyes whose fluorescence is easily quenched by migrating chemical species of interest. The dye(s) are usually seeded everywhere in the flow and differential quenching of their fluorescence by analytes is directly observed.
- Fluorescence-based assays for screening toxic chemicals. The optical assays consist of a mixture of environmental-sensitive fluorescent dyes and human skin cells that generate fluorescence spectra patterns. This approach can reduce the need for laboratory animals in biomedical research and pharmaceutical industry.
Fingerprints can be visualized with fluorescent compounds such as ninhydrin or DFO (1,8-Diazafluoren-9-one). Blood and other substances are sometimes detected by fluorescent reagents, like fluorescein. Fibers, and other materials that may be encountered in forensics or with a relationship to various collectibles, are sometimes fluorescent.
Fluorescent colors are frequently used in signage, particularly road signs. Fluorescent colors are generally recognizable at longer ranges than their non-fluorescent counterparts, with fluorescent orange being particularly noticeable. This property has led to its frequent use in safety signs and labels.
Fluorescent compounds are often used to enhance the appearance of fabric and paper, causing a "whitening" effect. A white surface treated with an optical brightener can emit more visible light than that which shines on it, making it appear brighter. The blue light emitted by the brightener compensates for the diminishing blue of the treated material and changes the hue away from yellow or brown and toward white. Optical brighteners are used in laundry detergents, high brightness paper, cosmetics, high-visibility clothing and more.
- Absorption-re-emission atomic line filters use the phenomenon of fluorescence to filter light extremely effectively.
- Black light
- Blacklight paint
- Fluorescence correlation spectroscopy
- Fluorescence image-guided surgery
- Fluorescence in plants
- Fluorescence spectroscopy
- Fluorescent lamp
- Fluorescent multilayer card
- Fluorescent Multilayer Disc
- High-visibility clothing
- Integrated fluorometer
- Laser-induced fluorescence
- List of light sources
- Microbial art, using fluorescent bacteria
- Mössbauer effect, resonant fluorescence of gamma rays
- Organic light-emitting diodes can be fluorescent
- Phosphor thermometry, the use of phosphorescence to measure temperature.
- Two-photon absorption
- Vibronic spectroscopy
- X-ray fluorescence
- Acuña, A. Ulises; Amat-Guerri, Francisco; Morcillo, Purificación; Liras, Marta; Rodríguez, Benjamín (2009). "Structure and Formation of the Fluorescent Compound of Lignum nephriticum" (PDF). Organic Letters. 11 (14): 3020–3023. doi:10.1021/ol901022g. PMID 19586062. Archived (PDF) from the original on 28 July 2013.
- Safford, William Edwin (1916). "Lignum nephriticum". Annual report of the Board of Regents of the Smithsonian Institution (PDF). Washington: Government Printing Office. pp. 271–298.
- Valeur, B.; Berberan-Santos, M. R. N. (2011). "A Brief History of Fluorescence and Phosphorescence before the Emergence of Quantum Theory". Journal of Chemical Education. 88 (6): 731–738. Bibcode:2011JChEd..88..731V. doi:10.1021/ed100182h.
- Muyskens, M.; Ed Vitz (2006). "The Fluorescence of Lignum nephriticum: A Flash Back to the Past and a Simple Demonstration of Natural Substance Fluorescence". Journal of Chemical Education. 83 (5): 765. Bibcode:2006JChEd..83..765M. doi:10.1021/ed083p765.
- Clarke, Edward Daniel (1819). "Account of a newly discovered variety of green fluor spar, of very uncommon beauty, and with remarkable properties of colour and phosphorescence". The Annals of Philosophy. 14: 34–36. Archived from the original on 17 January 2017.
The finer crystals are perfectly transparent. Their colour by transmitted light is an intense emerald green; but by reflected light, the colour is a deep sapphire blue
- Haüy merely repeats Clarke's observation regarding the colors of the specimen of fluorite which he (Clarke) had examined: Haüy, Traité de Minéralogie, 2nd ed. (Paris, France: Bachelier and Huzard, 1822), vol. 1, p. 512 Archived 17 January 2017 at the Wayback Machine.. Fluorite is called "chaux fluatée" by Haüy: "... violette par réflection, et verdâtre par transparence au Derbyshire." ([the color of fluorite is] violet by reflection, and greenish by transmission in [specimens from] Derbyshire.)
- Brewster, David (1834). "On the colours of natural bodies". Transactions of the Royal Society of Edinburgh. 12 (2): 538–545. doi:10.1017/s0080456800031203. Archived from the original on 17 January 2017. On page 542, Brewster mentions that when white light passes through an alcoholic solution of chlorophyll, red light is reflected from it.
- Herschel, John (1845). "On a case of superficial colour presented by a homogeneous liquid internally colourless". Philosophical Transactions of the Royal Society of London. 135: 143–145. doi:10.1098/rstl.1845.0004. Archived from the original on 24 December 2016.
- Herschel, John (1845). "On the epipŏlic dispersion of light, being a supplement to a paper entitled, "On a case of superficial colour presented by a homogeneous liquid internally colourless"". Philosophical Transactions of the Royal Society of London. 135: 147–153. doi:10.1098/rstl.1845.0005. Archived from the original on 17 January 2017.
- Stokes, G. G. (1852). "On the Change of Refrangibility of Light". Philosophical Transactions of the Royal Society of London. 142: 463–562. doi:10.1098/rstl.1852.0022. Archived from the original on 17 January 2017. From page 479, footnote: "I am almost inclined to coin a word, and call the appearance fluorescence, from fluor-spar, as the analogous term opalescence is derived from the name of a mineral."
- Stokes (1852), pages 472–473. In a footnote on page 473, Stokes acknowledges that in 1843, Edmond Becquerel had observed that quinine acid sulfate strongly absorbs ultraviolet radiation (i.e., solar radiation beyond Fraunhofer's H band in the solar spectrum). See: Edmond Becquerel (1843) "Des effets produits sur les corps par les rayons solaires" Archived 31 March 2013 at the Wayback Machine. (On the effects produced on substances by solar rays), Comptes rendus, 17 : 882–884; on page 883, Becquerel cites quinine acid sulfate ("sulfate acide de quinine") as strongly absorbing ultraviolet light.
- Lakowicz, p. 1
- Holler, F. James; Skoog, Douglas A. and Crouch, Stanley R. (2006) Principles Of Instrumental Analysis. Cengage Learning. ISBN 0495012017
- Lakowicz, p. 10
- Valeur, Bernard, Berberan-Santos, Mario (2012). Molecular Fluorescence: Principles and Applications. Wiley-VCH. ISBN 978-3-527-32837-6. p. 64
- "Animation for the Principle of Fluorescence and UV-Visible Absorbance" Archived 9 June 2013 at the Wayback Machine.. PharmaXChange.info.
- Lakowicz, pp. 12–13
- Valeur, Bernard, Berberan-Santos, Mario (2012). Molecular Fluorescence: Principles and Applications. Wiley-VCH. ISBN 978-3-527-32837-6. p. 186
- Schieber, Frank (October 2001). "Modeling the Appearance of Fluorescent Colors". Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 45 (18): 1324–1327. doi:10.1177/154193120104501802.
- IUPAC. Kasha–Vavilov rule – Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") Archived 21 March 2012 at the Wayback Machine.. Compiled by McNaught, A.D. and Wilkinson, A. Blackwell Scientific Publications, Oxford, 1997.
- Lakowicz, pp. 6–8
- Lakowicz, pp. 6–7
- "Fluorescence in marine organisms". Gestalt Switch Expeditions. Archived from the original on 21 February 2015.
- Wucherer, M. F.; Michiels, N. K. (2012). "A Fluorescent Chromatophore Changes the Level of Fluorescence in a Reef Fish". PLoS ONE. 7 (6): e37913. Bibcode:2012PLoSO...737913W. doi:10.1371/journal.pone.0037913. PMC . PMID 22701587.
- Fujii, R (2000). "The regulation of motile activity in fish chromatophores". Pigment cell research / sponsored by the European Society for Pigment Cell Research and the International Pigment Cell Society. 13 (5): 300–19. doi:10.1034/j.1600-0749.2000.130502.x. PMID 11041206.
- Abbott, F. S. (1973). "Endocrine Regulation of Pigmentation in Fish". Integrative and Comparative Biology. 13 (3): 885–894. doi:10.1093/icb/13.3.885.
- Beyer, Steffen. "Biology of underwater fluorescence". Fluopedia.org.
- Sparks, J. S.; Schelly, R. C.; Smith, W. L.; Davis, M. P.; Tchernov, D.; Pieribone, V. A.; Gruber, D. F. (2014). Fontaneto, Diego, ed. "The Covert World of Fish Biofluorescence: A Phylogenetically Widespread and Phenotypically Variable Phenomenon". PLoS ONE. 9 (1): e83259. Bibcode:2014PLoSO...983259S. doi:10.1371/journal.pone.0083259. PMC . PMID 24421880.
- Matz, M. "Fluorescence: The Secret Color of the Deep". Office of Ocean Exploration and Research, U.S. National Oceanic and Atmospheric Administration. Archived from the original on 31 October 2014.
- Heinermann, P (2014-03-10). "Yellow intraocular filters in fishes". Experimental Biology. 43 (2): 127–147. PMID 6398222.
- Michiels, N. K.; Anthes, N.; Hart, N. S.; Herler, J. R.; Meixner, A. J.; Schleifenbaum, F.; Schulte, G.; Siebeck, U. E.; Sprenger, D.; Wucherer, M. F. (2008). "Red fluorescence in reef fish: A novel signalling mechanism?". BMC Ecology. 8: 16. doi:10.1186/1472-6785-8-16. PMC . PMID 18796150.
- Gerlach, T; Sprenger, D; Michiels, N. K. (2014). "Fairy wrasses perceive and respond to their deep red fluorescent coloration". Proceedings of the Royal Society B: Biological Sciences. 281 (1787): 20140787. doi:10.1098/rspb.2014.0787. PMC . PMID 24870049.
- Salih, A.; Larkum, A.; Cox, G.; Kühl, M.; Hoegh-Guldberg, O. (2000). "Fluorescent pigments in corals are photoprotective". Nature. 408 (6814): 850–3. Bibcode:2000Natur.408..850S. doi:10.1038/35048564. PMID 11130722. Archived from the original on 22 December 2015.
- Roth, M. S.; Latz, M. I.; Goericke, R.; Deheyn, D. D. (2010). "Green fluorescent protein regulation in the coral Acropora yongei during photoacclimation". Journal of Experimental Biology. 213 (21): 3644–3655. doi:10.1242/jeb.040881. PMID 20952612.
- Bou-Abdallah, F.; Chasteen, N. D.; Lesser, M. P. (2006). "Quenching of superoxide radicals by green fluorescent protein". Biochimica et Biophysica Acta (BBA) - General Subjects. 1760 (11): 1690–1695. doi:10.1016/j.bbagen.2006.08.014. PMC . PMID 17023114.
- Field, S. F.; Bulina, M. Y.; Kelmanson, I. V.; Bielawski, J. P.; Matz, M. V. (2006). "Adaptive Evolution of Multicolored Fluorescent Proteins in Reef-Building Corals". Journal of Molecular Evolution. 62 (3): 332–339. Bibcode:2006JMolE..62..332F. doi:10.1007/s00239-005-0129-9. PMID 16474984.
- Mäthger, L. M.; Denton, E. J. (2001). "Reflective properties of iridophores and fluorescent 'eyespots' in the loliginid squid Alloteuthis subulata and Loligo vulgaris". The Journal of Experimental Biology. 204 (Pt 12): 2103–18. PMID 11441052. Archived from the original on 4 March 2016.
- Tsien, R. Y. (1998). "The Green Fluorescent Protein". Annual Review of Biochemistry. 67: 509–544. doi:10.1146/annurev.biochem.67.1.509. PMID 9759496.
- Mazel, C. H. (2004). "Fluorescent Enhancement of Signaling in a Mantis Shrimp". Science. 303 (5654): 51. doi:10.1126/science.1089803. PMID 14615546.
- Bou-Abdallah, F.; Chasteen, N. D.; Lesser, M. P. (2006). "Quenching of superoxide radicals by green fluorescent protein". Biochimica et Biophysica Acta (BBA) - General Subjects. 1760 (11): 1690–1695. doi:10.1016/j.bbagen.2006.08.014. PMC . PMID 17023114.
- Douglas, R. H.; Partridge, J. C.; Dulai, K.; Hunt, D.; Mullineaux, C. W.; Tauber, A. Y.; Hynninen, P. H. (1998). "Dragon fish see using chlorophyll". Nature. 393 (6684): 423–424. Bibcode:1998Natur.393..423D. doi:10.1038/30871.
- Wong, Sam (13 March 2017). "Luminous frog is the first known naturally fluorescent amphibian". Archived from the original on 20 March 2017. Retrieved 22 March 2017.
- King, Anthony (13 March 2017). "Fluorescent frog first down to new molecule". Archived from the original on 22 March 2017. Retrieved 22 March 2017.
- Vukusic, P; Hooper, I (2005). "Directionally controlled fluorescence emission in butterflies". Science. 310 (5751): 1151. doi:10.1126/science.1116612. PMID 16293753.
- Arnold, K. E. (2002). "Fluorescent Signaling in Parrots". Science. 295 (5552): 92. doi:10.1126/science.295.5552.92. PMID 11778040.[permanent dead link]
- Andrews, K; Reed, S. M.; Masta, S. E. (2007). "Spiders fluoresce variably across many taxa". Biology Letters. 3 (3): 265–7. doi:10.1098/rsbl.2007.0016. PMC . PMID 17412670.
- Stachel, S. J.; Stockwell, S. A.; Van Vranken, D. L. (1999). "The fluorescence of scorpions and cataractogenesis". Chemistry & Biology. 6 (8): 531–539. doi:10.1016/S1074-5521(99)80085-4. PMID 10421760.
- Iriel, A. A.; Lagorio, M. A. G. (2010). "Is the flower fluorescence relevant in biocommunication?". Naturwissenschaften. 97 (10): 915–924. Bibcode:2010NW.....97..915I. doi:10.1007/s00114-010-0709-4. PMID 20811871.
- McDonald, Maurice S. (2 June 2003). Photobiology of Higher Plants. John Wiley & Sons. ISBN 9780470855232. Archived from the original on 21 December 2017.
- Gilmore, F. R.; Laher, R. R.; Espy, P. J. (1992). "Franck–Condon Factors, r-Centroids, Electronic Transition Moments, and Einstein Coefficients for Many Nitrogen and Oxygen Band Systems". Journal of Physical and Chemical Reference Data. 21 (5): 1005. Bibcode:1992JPCRD..21.1005G. doi:10.1063/1.555910. Archived from the original on 9 July 2017.
- Harris, Tom. "How Fluorescent Lamps Work". HowStuffWorks. Discovery Communications. Archived from the original on 6 July 2010. Retrieved 27 June 2010.
- Rye, H. S.; Dabora, J. M.; Quesada, M. A.; Mathies, R. A.; Glazer, A. N. (1993). "Fluorometric Assay Using Dimeric Dyes for Double- and Single-Stranded DNA and RNA with Picogram Sensitivity". Analytical Biochemistry. 208 (1): 144–150. doi:10.1006/abio.1993.1020. PMID 7679561.
- Harris, Daniel C. (2004). Exploring chemical analysis. Macmillan. ISBN 978-0-7167-0571-0. Archived from the original on 31 July 2016.
- Lakowicz, p. xxvi
- Calfon MA, Vinegoni C, Ntziachristos V, Jaffer FA (2010). "Intravascular near-infrared fluorescence molecular imaging of atherosclerosis: toward coronary arterial visualization of biologically high-risk plaques". J Biomed Opt. 15 (1): 011107. Bibcode:2010JBO....15a1107C. doi:10.1117/1.3280282. PMC . PMID 20210433.
- Ughi GJ, Wang H, Gerbaud E, Gardecki JA, Fard AM, Hamidi E, et al. (2016). "Clinical Characterization of Coronary Atherosclerosis With Dual-Modality OCT and Near-Infrared Autofluorescence Imaging". JACC Cardiovasc Imaging. 9 (11): 1304–1314. doi:10.1016/j.jcmg.2015.11.020. PMC . PMID 26971006.
- Hara T, Ughi GJ, McCarthy JR, Erdem SS, Mauskapf A, Lyon SC, et al. (2015). "Intravascular fibrin molecular imaging improves the detection of unhealed stents assessed by optical coherence tomography in vivo". Eur Heart J. 38 (6): 447–455. doi:10.1093/eurheartj/ehv677. PMC . PMID 26685129.
- Shkolnikov, V; Santiago, J. G. (2013). "A method for non-invasive full-field imaging and quantification of chemical species" (PDF). Lab on a Chip. 13 (8): 1632–43. doi:10.1039/c3lc41293h. PMID 23463253. Archived (PDF) from the original on 5 March 2016.
- Moczko, E; Mirkes, EM; Cáceres, C; Gorban, AN; Piletsky, S (2016). "Fluorescence-based assay as a new screening tool for toxic chemicals". Scientific Reports. 6: 33922. Bibcode:2016NatSR...633922M. doi:10.1038/srep33922. PMC . PMID 27653274.
- Hawkins, H. Gene; Carlson, Paul John and Elmquist, Michael (2000) "Evaluation of fluorescent orange signs" Archived 4 March 2016 at the Wayback Machine., Texas Transportation Institute Report 2962-S.
- Lakowicz, Joseph R. (1999). Principles of Fluorescence Spectroscopy. Kluwer Academic / Plenum Publishers. ISBN 978-0-387-31278-1.
|Wikimedia Commons has media related to Fluorescence.|
- Fluorophores.org, the database of fluorescent dyes
- FSU.edu, Basic Concepts in Fluorescence
- "A nano-history of fluorescence" lecture by David Jameson
- Excitation and emission spectra of various fluorescent dyes
- Database of fluorescent minerals with pictures, activators and spectra (fluomin.org)
- "Biofluorescent Night Dive – Dahab/Red Sea (Egypt), Masbat Bay/Mashraba, "Roman Rock"". YouTube. 9 October 2012.
- Steffen O. Beyer. "FluoPedia.org: Publications". fluopedia.org.
- Steffen O. Beyer. "FluoMedia.org: Science". fluomedia.org. | <urn:uuid:27c6f24a-2400-44d9-a797-2eefc3a25609> | 3.78125 | 12,650 | Knowledge Article | Science & Tech. | 43.131494 | 95,545,312 |
That's why one of the most common lab tests performed in industry is one that looks for traces of water in other substances, even though the test itself is complicated and time-consuming.
A new method for detection and measurement of small amounts of water, developed in the lab of Dr. Milko van der Boom in the Weizmann Institute's Organic Chemistry Department, might allow such tests to be performed accurately and quickly. Van der Boom and postdoctoral fellow Dr. Tarkeshwar Gupta created a versatile film on glass that is only 1.7 nanometers thick. The film can measure the number of water molecules in a substance even when it contains only a few parts per million.
"The problem," says van der Boom, "is that water is hard to detect and to quantify." His method is a departure from previous sensing techniques. In general, such sensor systems are based on relatively weak but selective "host-guest" interactions. In the Weizmann Institute team's sensor, metal complexes embedded in the film steal electrons from the water molecules.
When the number of electrons in the metal complexes changes, so does their color, and this change can be read optically. Devices based on optical readout do not need to be wired directly to larger-scale electronics – an issue that's still a tremendous challenge for much of molecular-based electronics.
The test can be done in as little as five minutes, and the molecular film can be returned to its original state by washing it with a simple chemical. The film also remains stable, even at high temperatures and with repeated use. In addition, it can be deposited in an inexpensive, one-molecule-thick layer on glass, silicon, optical fiber, or plastic.
The ease and low cost of fabrication may also make such films ideal for one-time use. Testing for water in fuel or solvents might become as simple as checking chlorine levels in a swimming pool. Optical detection and quantification by electron transfer could potentially work for numerous substances other than water. The scientists are now exploring the possibility of adapting the method to testing for trace amounts of materials or substances such as specific metal ions or gasses.
Jennifer Manning | EurekAlert!
A good application strives to ensure the integrity of data before saving them to permanent storage, in part by screening new and changed data with validation rules.
Validation in a web application must be performed on the server … Client-side validation is no substitute for server-side validation.
Validating form input with JavaScript is easy to do and can save a lot of unnecessary calls to the server, as all processing is handled by the web browser.
It can prevent people from leaving fields blank, from entering too little or too much or from using invalid characters.
We expect the app to catch bonehead mistakes before we submit them. This topic covers the most important aspects of the Breeze validation system. Validation is a process of judging the current state of an entity against validation rules. For an alternative approach to client-side form validation, without JavaScript, check out our new article on HTML5 Form Validation, which is available now in most modern browsers. When form input is important, it should always be verified using a secure server-side script.
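The server-side checks described above — no blank fields, length limits, and character whitelists — can be sketched in a few lines of Python. The field names, limits, and patterns below are hypothetical examples for illustration, not any specific framework's API:

```python
import re

# Hypothetical per-field rules: length bounds and an allowed-character pattern.
RULES = {
    "username": {"min": 3, "max": 20, "pattern": re.compile(r"^[A-Za-z0-9_]+$")},
    "comment":  {"min": 1, "max": 500, "pattern": re.compile(r"^[^<>]*$")},
}

def validate(form: dict) -> list:
    """Return a list of error messages; an empty list means the input passed."""
    errors = []
    for field, rule in RULES.items():
        value = (form.get(field) or "").strip()
        if not value:
            errors.append(f"{field}: must not be blank")
        elif not (rule["min"] <= len(value) <= rule["max"]):
            errors.append(f"{field}: length must be {rule['min']}-{rule['max']}")
        elif not rule["pattern"].match(value):
            errors.append(f"{field}: contains invalid characters")
    return errors

print(validate({"username": "alice_01", "comment": "Looks good"}))  # []
print(validate({"username": "a!", "comment": ""}))
```

The same rules could also drive client-side hints, but — as the text stresses — the server-side pass is the one that must not be skipped.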
Analysis of optical absorption in GaAs nanowire arrays
In this study, the influence of the geometric parameters on the optical absorption of gallium arsenide [GaAs] nanowire arrays [NWAs] has been systematically analyzed using finite-difference time-domain simulations. The calculations reveal that the optical absorption is sensitive to the geometric parameters such as diameter [D], length [L], and filling ratio [D/P], and more efficient light absorption can be obtained in GaAs NWAs than in thin films with the same thickness due to the combined effects of intrinsic antireflection and efficient excitation of resonant modes. Optimized geometric parameters are obtained as follows: D = 180 nm, L = 2 μm, and D/P = 0.5. Meanwhile, the simulation on the absorption of GaAs NWAs for oblique incidence has also been carried out. The underlying physics is discussed in this work.
PACS: 81.07.Gf nanowires; 81.05.Ea III-V semiconductors; 88.40.hj efficiency and performance of solar cells; 73.50.Pz photoconduction and photovoltaic effects.
Keywords: GaAs, gallium arsenide, effective refractive index, filling ratio, single nanowire
Semiconductor nanowire arrays [NWAs] are presently under intense research and development for next-generation solar cells due to their potential for lower cost and greater energy conversion efficiency compared to conventional thin-film devices [1, 2, 3, 4]. Among semiconductor nanowires [NWs], gallium arsenide [GaAs] NWs show particular promise due to the superior electrical and optical properties of III-V materials. For example, the GaAs material system features a direct band gap and a high absorption coefficient. This makes GaAs NWs prime candidates for future optoelectronic devices, just as bulk materials [5, 6, 7]. Recently, many advances have been reported in the fabrication of GaAs NW solar cells. For example, Czaban et al. observed a photovoltaic [PV] effect with a photoconversion efficiency of 0.83% from vertically oriented GaAs NWs grown on n-GaAs(111)B substrates. Colombo et al. reported a coaxial p-i-n single-nanowire cell with an efficiency of 4.5%. These results illustrate that the efficiency of GaAs NW PV devices is much lower than that of their thin-film counterparts. There are still many problems to be resolved before GaAs NWs become available for practical applications.
One of the main issues in nanowire solar cell design is the choice of the nanowire geometry. Many theoretical and experimental works have shown that NWAs with well-defined geometric parameters such as diameter, length, and filling ratio exhibit much more efficient light absorption across the solar spectrum [1, 2, 3, 4, 8, 9, 10, 11]. In this paper, the influence of the geometric parameters on the optical absorption of GaAs NWAs is analyzed using finite-difference time-domain (FDTD) simulations, and optimized geometric parameters are obtained.
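The FDTD method discretizes Maxwell's equations on a staggered (Yee) grid and marches the electric and magnetic fields forward in time. As a minimal illustration only — the paper's simulations are fully 3-D and use dispersive GaAs optical constants, whereas the grid size, step count, and Gaussian source below are arbitrary choices — a 1-D Yee update in normalized units looks like this:

```python
import numpy as np

# Minimal 1-D FDTD (Yee) sketch in normalized units, run at the 1-D
# Courant limit so the update coefficients are unity. Illustrative only.
N_CELLS, N_STEPS, SRC = 200, 150, 50

ez = np.zeros(N_CELLS)        # E-field samples on integer grid points
hy = np.zeros(N_CELLS - 1)    # H-field samples on half-integer points

for t in range(N_STEPS):
    hy += ez[1:] - ez[:-1]        # update H from the spatial curl of E
    ez[1:-1] += hy[1:] - hy[:-1]  # update E from the spatial curl of H
    ez[SRC] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
```

In a full simulation, absorptance is then obtained from the difference of Poynting-flux monitors placed above and below the array.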
Results and discussion
where f = πD²/(4P²), and n_air and n_GaAs are the refractive indices of air and GaAs, respectively. The effective refractive index of the NWA is therefore much lower than that of a thin film, resulting in greatly improved refractive-index matching at the top interface between the air and the NWAs and hence good coupling of the incident light into the NWAs [14, 15, 16]. In the long-wavelength region, the results clearly indicate that longer NWs have higher absorptance due to the increased optical path length in the NWAs. For photovoltaic device applications, however, it should be noted that longer NWs would sacrifice efficient carrier extraction and lead to unnecessary material consumption. Hence, in the following simulations, we fixed the length of the NWs at L = 2 μm.
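A quick numeric check of the filling factor above. Note two assumptions here that are not specified in the text: the volume-averaged-permittivity mixing rule is only one common effective-medium estimate, and n ≈ 3.8 is a rough round number for GaAs in the visible:

```python
import math

def filling_factor(d_over_p: float) -> float:
    """Areal filling factor f = pi*D^2 / (4*P^2) of a square array of
    cylindrical wires with diameter D and pitch P."""
    return math.pi * d_over_p ** 2 / 4.0

def effective_index(f: float, n_wire: float, n_host: float = 1.0) -> float:
    """Volume-averaged-permittivity estimate of the array's effective
    index -- one common mixing rule, assumed here for illustration."""
    return math.sqrt(f * n_wire ** 2 + (1.0 - f) * n_host ** 2)

f = filling_factor(0.5)                  # optimal D/P from the paper
n_eff = effective_index(f, n_wire=3.8)   # n ~ 3.8: assumed rough GaAs value
# f ~ 0.20 and n_eff ~ 1.9, far below bulk GaAs (~3.8), which is why the
# array reflects much less light at its top surface than a solid film
```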
Figure 2b-d compares the reflectance, transmittance, and absorptance of NWAs with D/P = 0.4, 0.5, 0.6, and 0.8 for a fixed diameter of 180 nm. In the visible wavelength region (λ < 700 nm), the calculated spectra reveal that the absorption is determined solely by the reflection and decreases at larger filling ratios. As seen from Figure 2d, only zero-order transmission exists at these wavelengths owing to the high extinction coefficient of GaAs. The trend of enhanced reflectance with increased D/P indicated in Figure 2c can be attributed to the higher effective refractive index of the NWAs. In the long-wavelength region, however, the NWAs suffer significant transmission and reflection losses. The absorptance curve, shown in Figure 2b, shifts towards longer wavelengths as the filling ratio is increased. From these results, it can be concluded that the optimal filling ratio is set by the trade-off between reflection enhancement and transmission suppression as D/P increases.
where ε″ is the imaginary part of the permittivity and E is the electric field intensity. Figure 3b shows the cross-sectional distribution of the optical generation rate in a single nanowire for the same incident wave power of 100 mW/cm² at different wavelengths (λ = 400, 600, 800 nm). The optical generation rate at short wavelengths (e.g., 400 nm) is concentrated near the top and sides of the nanowire due to the strong wire-wire light scattering and the short absorption length of GaAs at these photon energies. The generation rates for most of the solar spectrum (e.g., 600 and 800 nm), however, are concentrated near the core, demonstrating the internal absorption enhancement mode of the nanowire: each nanowire acts as a nanoscale cylindrical resonator that traps light by multiple total internal reflections.
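The expression behind the generation-rate maps is the standard G = ε″|E|²/(2ħ). A small unit-checking sketch — the permittivity and field values passed in are placeholders, not numbers taken from the paper:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def generation_rate(eps_imag: float, e_field: float) -> float:
    """Local carrier generation rate G = eps'' * |E|^2 / (2*hbar).

    eps_imag: imaginary part of the absolute permittivity (F/m)
    e_field:  local electric-field amplitude (V/m)
    Returns G in carriers per m^3 per second. G scales with the local
    intensity |E|^2, which is why the hot spots in Fig. 3 track the
    resonant field profile inside the wire."""
    return eps_imag * e_field ** 2 / (2.0 * HBAR)
```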
In summary, we have analyzed the optical properties of GaAs NWAs by FDTD simulations and found them to be sensitive to structural parameters such as the wire diameter D, length L, and filling ratio D/P. The optimal parameters for normal incidence are D = 180 nm, L = 2 μm, and D/P = 0.5. Our calculations show that the absorptance of well-designed GaAs NWAs exceeds 90% in the visible region, much higher than that of thin films with the same thickness, due to the combined effects of intrinsic antireflection and efficient excitation of resonant modes. The simulated optical generation rates in a single GaAs nanowire are, for most of the solar spectrum, concentrated near the core, illustrating the internal wire absorption enhancement mode. For oblique incidence, excellent antireflection properties for coupling the incident light into the NWAs persist at incident angles up to 60°, while the absorption declines beyond 60° due to the large reflectance. A higher absorption is observed for TM than for TE polarization, attributed to the electric-field component of TM polarization along the wire axis.
Financial support from the National Natural Science Foundation of China (No. 50872134) is gratefully acknowledged.
- 9. Kelzenberg MD, Putnam MC, Turner-Evans DB, Lewis NS, Atwater HA: Predicted efficiency of Si wire array solar cells. 34th IEEE PVSC 2009, 1–6.
- 12. Taflove A, Hagness SC: Computational Electrodynamics: The Finite-Difference Time-Domain Method. Boston, MA: Artech House; 2005.
- 13. Levinshtein M, Rumyantsev S, Shur M: Handbook Series on Semiconductor Parameters, Ternary and Quaternary III-V Compounds. Volume 2. Singapore: World Scientific; 1999.
- 16. Joannopoulos JD, Meade RD, Winn JN: Photonic Crystals: Molding the Flow of Light. Princeton, NJ: Princeton University Press; 2008.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | <urn:uuid:427b9899-3d88-40c0-ad27-5650c618a823> | 2.640625 | 1,790 | Academic Writing | Science & Tech. | 45.088764 | 95,545,391 |
|Scientific Name:||Pyxis planicauda (Grandidier, 1867)|
Acinixys planicauda (Grandidier, 1867)
Testudo morondavaensis Vuillemin, 1972
Testudo planicauda Grandidier, 1867
|Red List Category & Criteria:||Critically Endangered A4acd ver 3.1|
|Assessor(s):||Leuteritz, T., Randriamahazo, H. & Lewis, R. (Madagascar Tortoise and Freshwater Turtle Red List Workshop)|
|Reviewer(s):||Rhodin, A. & Mittermeier, R.A. (IUCN SSC Tortoise & Turtle Freshwater Turtle Red List Authority)|
Pyxis planicauda has suffered a minimum of 32% essential habitat loss during the period 1963-1993, and habitat loss continues at a similar rate, leading to a compound habitat loss of well over 70% over a three-generation period. This habitat impact was compounded by the removal of at least 20-25% of the total estimated population of adults in the three-year period 2000-2002. Combined, this indicates a minimum 60% population decline over the past two generations, with a further 30% anticipated for the next generation, qualifying the species as Critically Endangered under criterion A4acd. Population modelling predicting extinction before 2030 is no longer applicable, as some of the modelling assumptions are no longer operational.
|Previously published Red List assessments:|
|Range Description:||This species mainly occurs in fragments of dry deciduous forest in the Menabe region between the Morondava and Tsiribihina Rivers. However, a small subpopulation occurs north of the Tsiribihina River (Behler et al. 1993, Bloxam et al. 1993, Goetz et al. 2003).|
At the 2001 Conservation Assessment and Management Plan (CAMP) workshop, P. planicauda's extent of occurrence was estimated as less than 5,000 sq. km, and its total area of occupancy as under 500 sq. km (CBSG 2001).
|Population:||Based on density estimates, habitat reduction and trade figures, it is believed that the total population of P. planicauda is less than 10,000 animals (Anonymous 2001). Recent surveys yield a calculated total population of over 16,000 animals, but the methodology used requires further data to confirm this number.|
Summary of various P. planicauda studies (from CITES AC18 Doc. 7.1, 2002):
1991 - Kirindi - 8 km² surveyed - tortoises encountered on 54 occasions - 6.75 per km², but no data on recaptures — Quentin and Hayes (1991).
1996 - Kirindi - 20 km² / 20,000 ha surveyed - 12 tortoises in 11 days, 83% recapture - 0.6/km² — Bloxam et al. (1996).
"main forest block":
- 0.5/ha (50/km²) — Durbin and Randriamanampisoa (2000).
- 2-6/ha (200-600/km²) —- Durbin and Randriamanampisoa (2000, as cited in CITES Proposal 12.55)
- 1/ha (100/km²) — Kuchling in litt. 2001, Rakotombololona (2001 cited in Rakotombololona & Durbin in litt. to SSC Wildlife Trade Programme, 23 Nov 2001)
|Current Population Trend:||Decreasing|
|Habitat and Ecology:||The forests inhabited by Pyxis planicauda grow on loose sandy soils, and the tortoises take refuge amongst the leaf litter of the forest floor. They burrow and are inactive in leaf litter during the dry season (late May through October), but become active in the wet season. They are crepuscular and seek shelter during mid-day (Durrell et al. 1989, Rakotombololona 1998, Gibson and Buley 2004). Tortoises feed on fallen fruits such as Breonia perrieri and Aleanthus greveanus (Glaw and Vences 1994, Gibson and Buley 2004). Fungi and fallen flowers have also been reported as diet items (Goetz et al. 2003).|
Adult P. planicauda reach a carapace length of 13.7 cm (Ernst et al. 2000) to 14.8 cm (Pedrono 2008). Based on information from the Durrell Wildlife breeding centre in northwestern Madagascar, females do not reach maturity until ten years of age. Generation time was estimated at the 2008 Madagascar Tortoise and Freshwater Turtle workshop as at least 25 years. Mating occurs in the first half of the wet season, and females produce 1-3 single-egg clutches in the latter half of the wet season (Goetz et al. 2003, Pedrono 2008). Observations of nests in the wild show incubation periods of 250-340 days (Razandrimamilafiniarivo et al. 2000).
|Generation Length (years):||25|
|Use and Trade:||Legal pet trade exports of the species ceased in 2003.|
Pyxis planicauda is exclusively associated with closed-canopy dry forest, and its major threat comes from habitat loss, particularly from burning and clearing for agricultural land/cattle grazing, highway development, mining, and petroleum exploration (Tidd et al. 2001, Goetz et al. 2003, Bonin et al. 2006). Analyses of satellite imagery by Tidd et al. (2001) between 1963 and 1993 showed a 32% reduction in the primary dry forests. Deforestation rates have increased, and up to 50% of the 76,000 ha remaining in the southern portion of the tortoise's range may be destroyed before 2010. A 50% reduction in the remaining 73,000 ha of habitat in the northern portion of its known range may occur by 2040 (Tidd et al. 2001), for a combined forest habitat loss estimated at over 70% in the period 1963-2040. Similar deforestation rates were documented by Harper et al. (2007).
Secondary pressure comes from collection for the pet trade (Goetz et al. 2003, Bonin et al. 2006); a pulse of exploitation for pet trade export removed about 4,000 adult animals during 2000 to 2002, representing 20 to 40% of the total number of adults (depending on total population estimates). The reproductive capacity and recruitment potential of this species are particularly low, even by tortoise standards.
The species is not consumed locally or traded locally/regionally.
Population modelling at the 2001 CAMP workshop (CBSG 2001) predicted extinction before 2030 based on rates of habitat loss and pet trade collection then in effect, but legal export trade is no longer permitted and thus the modelling assumptions are no longer valid.
Pyxis planicauda was recommended to be listed as Critically Endangered (CR A3acd + B1b) at the 2001 CAMP workshop (CBSG 2001).
In 2003, P. planicauda was uplisted to CITES Appendix I from Appendix II (in which the tortoise had been listed since 1977; UNEP-WCMC 2007). This is generally perceived to have reduced exploitation of the species. The tortoise is protected nationally by Ordinance No. 60-126 of 3 October 1960, which regulates hunting and fishing and provides for the protection of nature, but the problem is that it is not stated what level of protection this legislation affords to P. planicauda, or how this is enforced (CITES AC18 Doc. 7.1, 2002).
The tortoise is protected at three sites within its range. It is protected in the special reserve of Andranomena 6,420 ha and in the Sites of Biological Interest of (1) Analabe 2,000-12,000 ha and (2) the Kirindy Forest (Morondava) 100,000 ha by private or local interests [CFPF] (Nicoll and Langrand 1989).
Pyxis planicauda is bred at the Durrell Wildlife chelonian captive breeding centre in Ampijoroa (Razandrimamilafiniarivo et al. 2000) and at a number of zoos around the world.
|Citation:||Leuteritz, T., Randriamahazo, H. & Lewis, R. (Madagascar Tortoise and Freshwater Turtle Red List Workshop). 2008. Pyxis planicauda. The IUCN Red List of Threatened Species 2008: e.T19036A8789990. Downloaded on 23 July 2018.|
Data and Tools
Biology and Ecosystems Datasets
Mapping, Remote Sensing, and Geospatial Data
The data collected and the techniques used by USGS scientists should conform to or reference national and international standards and protocols if they exist and when they are relevant and appropriate. For datasets of a given type, and if national or international metadata standards exist, the data are indexed with metadata that facilitates access and integration.
The Southwest Exotic Plant Mapping Program (SWEMP) is a collaborative effort to compile and distribute regional data on the occurrence of non-native invasive plants in the southwestern U.S. The database represents the known sites of non-native invasive plant infestations within AZ and NM, and adjacent portions of CA, CO, NV and UT. These data were collected from 1911 to 2006.
LPJ biomes (30-year mean) simulated using monthly historical (1901-2000) CRU TS 2.1 climate data and projected future (2001-2099) CMIP3 A2 and A1B simulated climate data on a 30-second grid of the northwest United States and southwest Canada
LPJ simulated biomes for the northwest United States and southwest Canada in netCDF files.
Bioclimatic variables calculated from statistically-downscaled historical (1901-2000) CRU TS 2.1 climate data and projected future (2001-2099) CMIP3 A2 and A1B simulated climate data on a 30-second grid of the northwest United States and southwest Canada
Bioclimatic variables for the northwest United States and southwest Canada in netCDF files.
Statistically-downscaled monthly historical (1901-2000) CRU TS 2.1 and projected future (2001-2099) CMIP3 A2 and A1B simulated temperature, precipitation, and sunshine data on a 30-second grid of the northwest United States and southwest Canada
Downscaled climate data for the northwest United States and southwest Canada in netCDF files.
This dataset includes two spreadsheets: The "Avian_abundance_oak_mistletoe_bird_data" spreadsheet contains data regarding Oregon White Oak tree (Quercus garryana) measurements. The "Avian_abundance_oak_mistletoe_surveys_data" spreadsheet contains bird survey observations.
The Borehole Temperature Logs provide temperature measurements acquired in permafrost regions of arctic Alaska between 1950 and 1988 by the USGS at 87 sites deep enough to penetrate the base of permafrost.
Meteorological data and repeat photography captured at Climate Impact Meteorological (CLIM-MET) stations located in the Canyonlands National Park and Mojave National Preserve. Data from ecologically sensitive sites in the Southwest help determine the distribution and types of surficial deposits and the processes responsible for the deposits, and contribute to understanding vegetation...
A catalog of dust events in the southwestern United States since 2009. Dust emission from sources in the southwestern United States is important on local and regional scales because of its effects on air quality, human health and safety, snow melt timing and water management, and on ecosystem function through the depletion and (or) addition of soil nutrients.
The Geologic Map of North America portrays the grand architecture of the continent as we understood it in the closing years of the 20th century (The Geological Society of America, Inc., 2005). It is the final product of the Geological Society of America's Decade of North American Geology project, and covers about 15% of the Earth's surface at a scale of 1:5,000,000.
Data from the Global Ecosystems activity allow for a fine resolution inventory of land-based ecological features anywhere on Earth, and contribute to increased understanding of ecological pattern and ecosystem distributions. Ongoing efforts focus on an ecological land classification approach emphasizing ecologically meaningful characteristics of the land, i.e. bioclimate, landform, and...
Access to regional web cameras located in the desert southwest and arctic Alaska. Desert cameras capture and document regional climate variability with a specific focus on local and regional dust emission and transport. The arctic Alaska cameras capture and document regional climate variability with a specific focus on snow cover and permafrost feature evolution.
The USGS/NOAA North American Packrat Midden Database makes thousands of identified specimens and hundreds of published reports available in a standardized, quality-controlled format. This version offers the most comprehensive, high-quality archive of midden data available for North America, and facilitates Quaternary paleoenvironmental studies on a range of local to regional scales. | <urn:uuid:364086aa-a9e6-427d-88a6-b9711347d24c> | 2.953125 | 949 | Content Listing | Science & Tech. | 19.220883 | 95,545,429 |
Now a team of scientists led by Prof. Immanuel Bloch (Chair of Experimental Physics at the Ludwig-Maximilians-Universität Munich and Director at MPQ), in collaboration with the theoretical physicist Dr. Belén Paredes (CSIC/UAM Madrid), has developed a new experimental method to simulate these systems using a crystal made of neutral atoms and laser light.
Fig. 1 Cyclotron orbits of atoms exposed to extremely strong effective magnetic fields in specially engineered light crystals. The effective field strengths realized in the experiment correspond to tens of thousands of Tesla magnetic field strength applied to a real material. In the experiment the celebrated Hofstadter-Harper as well as the Quantum Spin Hall Hamiltonian could thereby be implemented.
In such artificial quantum matter, the atoms could be exposed to a uniform effective magnetic field several thousand times stronger than in typical condensed matter systems (Phys. Rev. Lett. 111, 185301, 2013).
Charged particles in a magnetic field experience a force perpendicular to their direction of motion – the Lorentz force –, which makes them move on circular (cyclotron) orbits in a plane perpendicular to the magnetic field. A sufficiently strong magnetic field can thereby dramatically change the properties of a material, giving rise to novel quantum phenomena such as the Quantum Hall effect. The cyclotron orbits shrink with increasing magnetic field. For typical field strengths, their size is much larger than the distance between neighbouring ions in the material, and the role of the crystal is negligible. However, for extremely large magnetic fields the two length scales become comparable and the interplay between the magnetic field and the crystal potential leads to striking new effects.
These are manifested, for instance, in a fractal structure of the energy spectrum, which was first predicted by Douglas Hofstadter in 1976 and is known as Hofstadter's butterfly. Many intriguing electronic material properties are related to it, but so far experiments have not been able to explore the full complexity of the problem.
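The fractal spectrum Hofstadter computed arises from a tight-binding electron on a square lattice threaded by a flux of α = p/q quanta per plaquette, which reduces to a q×q Bloch Hamiltonian. A minimal sketch of the standard Harper-Hofstadter model follows; the hopping amplitude, gauge choice, and momentum values are illustrative, not parameters of the experiment described here:

```python
import numpy as np

def hofstadter_bands(p, q, kx=0.0, ky=0.0, t=1.0):
    """Energies of the q magnetic subbands of the Harper-Hofstadter model
    at flux alpha = p/q per plaquette, for one Bloch momentum (kx, ky).
    Sweeping alpha over many rationals and stacking the resulting band
    energies traces out Hofstadter's butterfly."""
    alpha = p / q
    off = np.zeros((q, q), dtype=complex)
    for m in range(q):
        nxt = (m + 1) % q
        # hopping within the magnetic unit cell; the wrap-around bond
        # carries the Bloch phase exp(i*q*kx)
        off[m, nxt] += t * (np.exp(1j * q * kx) if nxt == 0 else 1.0)
    diag = np.diag([2 * t * np.cos(ky + 2 * np.pi * alpha * m) for m in range(q)])
    h = off + off.conj().T + diag          # Hermitian Bloch Hamiltonian
    return np.linalg.eigvalsh(h)

bands = hofstadter_bands(1, 3)             # three subbands at flux 1/3
```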
For real materials, entering the Hofstadter regime is typically very challenging because the spacing between neighbouring ions is very small. Therefore inaccessibly large magnetic fields have to be applied. One solution is to synthesize artificial materials with effectively larger lattice constants, such as in two superimposed sheets of graphene and boron-nitride.
The experiments performed by the Munich research team follow an alternative approach. In their experiments large magnetic fields are created artificially by exposing ultracold atoms to specially designed laser fields. The system consists of Rubidium atoms cooled to very low temperatures, which are confined in a period structure formed by standing waves of laser light. “Atoms can only sit in regions of high light intensities and arrange in a 2D structure similar to eggs in an egg carton”, explains Monika Aidelsburger, a physicist in the team of Professor Bloch. “The laser beams play the role of the ion crystal and the atoms the one of the electrons.”
Since the atoms are neutral, however, they do not experience a Lorentz force in the presence of an external magnetic field. The challenge was to develop a technique that mimics the Lorentz force for neutral particles. A combination of tilting the lattice and shaking it simultaneously with an additional pair of crossed laser beams allows the atoms to move in the lattice and perform a cyclotron-like motion similar to that of charged particles in a magnetic field. In this way, the team succeeded in achieving artificial magnetic fields strong enough to access the regime of the Hofstadter butterfly.

In addition, the researchers were able to realize what is known as the Spin Hall effect: two particles with opposite spin experience a magnetic field of the same strength but pointing in opposite directions. As a consequence, the direction of the Lorentz force is opposite for the two spins, and the cyclotron motion is therefore reversed. In the experiments, the two spin states were realized by two different internal states of the Rubidium atoms.
In future experiments the method employed by the researchers could be used to explore the rich physics of the Hofstadter model using the clean and well-controlled environment of ultracold-atoms in optical lattices. Various new experimental techniques such as the quantum gas microscope to detect single atoms could contribute to a deeper understanding of the material properties by directly looking at the microscopic motion of the particles in the lattice. The new method might also open the door for the exploration of novel quantum phases of matter under extreme experimental conditions. [M.A.]
Original publication: M. Aidelsburger, M. Atala, M. Lohse, J. T. Barreiro, B. Paredes and I. Bloch, Physical Review Letters 111, 185301 (2013)

Contact: Prof. Dr. Immanuel Bloch
Define object oriented software and describe the basic concepts of object oriented software development.
Object oriented software is software that is designed and implemented using object oriented software design principles and concepts. There are four major object oriented design (OOD) principles:
Encapsulation is the concept of hiding data and only allowing access to that data through the use of access methods (methods for both reading and writing). This allows the object to make sure that values are correct. For example, consider storing the age of a person. Using procedural programming this would typically be represented by an integer. There is nothing keeping this value from being negative or incredibly large (e.g., 1000). With encapsulation the age integer is hidden and could only be set by calling a ...
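The age example above can be sketched in code. The Person class below is hypothetical (the solution names no specific class or language, so plain Python is used); it shows the hidden field plus the read and write access methods that keep the value valid:

```python
class Person:
    """Encapsulation sketch: the age value is hidden behind accessor
    methods, so the object can reject impossible values like -5 or 1000."""

    def __init__(self, age: int) -> None:
        self._age = 0            # hidden state; callers go through accessors
        self.set_age(age)

    def get_age(self) -> int:    # read access
        return self._age

    def set_age(self, age: int) -> None:  # write access with validation
        if not 0 <= age <= 150:
            raise ValueError(f"invalid age: {age}")
        self._age = age

p = Person(30)
p.set_age(31)      # fine
# p.set_age(1000)  # would raise ValueError instead of corrupting the object
```

Because every write funnels through set_age, the object can guarantee its own invariants — the core benefit the paragraph describes.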
This solution explores the concept of object oriented software development within the context of computer science.
In the future, soldiers may be communicating silently with sophisticated "thought helmets." The devices would harness a person's brain waves and transmit them as radio waves, which would be translated into words in the headphones of other soldiers.
The US Army has recently awarded a five-year $4 million contract to researchers from the University of California at Irvine (led by UCI's Mike D'Zmura), Carnegie Mellon University, and the University of Maryland to study the concept. It will likely be a decade or two before the thought helmet becomes a reality, but the rough technology is already under investigation. Researchers have been working on other brain-computer interfaces, such as Emotiv Systems' brain-wave headset for video games, which is expected to be available commercially next summer.
The Army's version would of course be more sophisticated and reliable than the gaming headset. To make the thought helmet a feasible piece of equipment for soldiers, scientists need to combine advances in computing power together with our understanding of the human brain.
At the moment, the thought helmet concept consists of 128 sensors buried in a soldier's helmet. Soldiers would need to think in clear, formulaic ways, similar to how they are already trained to talk. The key challenge in making the system work is a software system that can read an electroencephalogram (EEG) generated by the sensor data and pick out when a soldier is thinking words, and what those words are.
Because the brain is a complex system and generates such large amounts of data, researchers must also make improvements in computing power. Soldiers will also have to be trained to think "loudly" to make it easier for the system to pick out their words from the brain's background noise. Also, every individual's EEG signals are a little different, so users and computers will have to be calibrated so that computers recognize each person's unique mental pattern.
In early versions, recipients will most likely hear messages rendered by a robotic voice in their headphones. But the researchers also think it's possible to render commands in the speaker's own voice, as well as indicate the location of the speaker relative to the listener.
For people concerned about the ethics of the technology, Elmar Schmeisser, the Army neuroscientist overseeing the program, reassures that the technology will not allow mind-reading. As he explains, since every user has to be trained with the system, it would be impossible to use the technology against an individual's will and without their cooperation.
Instead, the researchers are interested in potential civilian benefits. One such application might be a Bluetooth headpiece that could read speakers' thoughts and transmit them to the person they're calling - eliminating those loud, one-sided conversations in public.
via: Engadget and Time
By Ed Yong
The legendary naturalist John Muir once wrote: “Whenever I met a new plant, I would sit down beside it for a minute or a day, to make its acquaintance, hear what it had to tell.” The first step to making an acquaintance is to get a name — and naming nature is not easy. This weekend, while walking through Great Falls Park, a butterfly landed on my friend’s leg. It was large, with yellow and black wings — clearly a swallowtail, but what species? That same day, a large black insect landed on a flower in front of me, and I snapped a portrait of it before it flew off. It was a dragonfly, but what kind of dragonfly?
Many of our experiences of nature take this form. You see something, but you don’t know what it is. You are surrounded by life, but much of it is anonymous. “People don’t identify as a naturalist but if you ask them if they’ve ever been outside, seen something, and wondered what it is, they’ll say: Oh yeah, sure,” says Scott Loarie from the California Academy of Sciences.
Loarie and his team have developed an app that can help. Known as iNaturalist, it began as a crowdsourced community, where people can upload photos of animals and plants for other users to identify. But a month ago, the team updated the app so that an artificial intelligence now identifies what you’re looking at. In some cases, it’ll nail a particular species — it correctly pegged the dragonfly I spotted as a slaty skimmer (Libellula incesta). For the butterfly, it was less certain. “We’re pretty sure this is in the genus Papilio,” it offered, before listing ten possible species.
RELATED: Surveilling the Birds
“Our ecosystem is just unravelling in front of our eyes, and the pace of environmental change can be really overwhelming,” says Loarie. “But in our handbags, there’s another thing that has had the same pace of unbelievable change — the cellphone.” He hopes that the latter can help with the former by acting as a pocket naturalist, a cross between Shazam and an old-fashioned field guide.
The iNaturalist site began in 2008 as the master’s project of three students, and has since blossomed into a thriving community of around 150,000 people. Together, they’ve captured around 5.3 million photos representing 117,000 species. By labeling these images and tagging where they were taken, the site’s users are conducting an inadvertent census of the world’s animals. And sometimes, they make surprising discoveries.
In 2011, Luis Mazariegos, a retired Colombian businessman, uploaded a picture of a striking red-and-black frog, found on the patch of rainforest land that he had recently bought. Frog expert Ted Kahn realized that it was a completely new species, and the duo published a paper describing the amphibian a few years later. In 2014, a wildlife photographer named Scott Trageser uploaded a photo of a snail that he had taken in Vietnam. Twenty months later, mollusc expert Junn Kitt Foon identified the animal as Myxostoma petiverianum — a species that James Cook’s crew had discovered in the 1700s, but that no one had photographed before.
“It’s a rare win-win,” says Loarie. “We’re engaging people but also producing this stream of high-quality data for science. And we’re sitting on the biggest pile of accurately labeled images for living things that’s out there.” But iNaturalist could become a victim of its own exponential success. Around 20,000 new photos are uploaded every day, threatening to overwhelm the community of expert identifiers. Already, it takes an average of 18 days to get an identification.
Loarie and his colleagues realized that the only way of avoiding an inevitable backlog of unidentified critters was to train a computer in the art of taxonomy. They could feed a neural network — a computer system modeled on the brain — with images from the iNaturalist collection, and allow it to learn the distinctive features of each species. “The expectation, even a year ago, was that this stuff was light years away and unrealistic,” says Alex Shepard. But now, this kind of machine learning is increasingly powerful and user-friendly. Computers have learned to program prosthetic arms, reverse-engineer smells, identify galaxies, and devise funny new names for colors.
Artificial intelligence is only as intelligent as the data you use to train it. Shepard only used “research-grade” photos that have been vetted by the iNaturalist community, and he only trained his neural network on the 13,730 species that were represented by at least 20 such photos. Using these photos, and after training himself using online tutorials, Shepard built a “training wheels” prototype that was good enough to identify visually distinctive things like monkeyflowers—and to impress his bosses at the California Academy of Sciences.
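The training-set filter described above amounts to a simple per-species threshold over vetted photos. A hedged sketch (the data and function name are invented for illustration; this is not iNaturalist's actual code):

```python
from collections import Counter

def trainable_species(labels, min_photos=20):
    """Return the species with at least `min_photos` research-grade photos.

    `labels` is an iterable of species names, one entry per vetted photo,
    mirroring the per-species threshold described in the text.
    """
    counts = Counter(labels)
    return {species for species, n in counts.items() if n >= min_photos}

# Toy data: 25 vetted monkeyflower photos qualify; 3 rare-snail photos do not.
photos = ["monkeyflower"] * 25 + ["Myxostoma petiverianum"] * 3
```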
The proper version, released on June 29, is surprisingly good. It has learned to recognize several species from unusual angles — like the head-on slaty skimmer dragonfly that I asked it to identify. It can even cope with species that come in various patterns. “We spent a lot of time on ladybirds,” Shepard says. “Asian ladybirds come with a lot of different characteristics — you might see one that’s mostly black with red spots, and another that’s red with black spots. But even the early versions of our system could understand that.” (The app, however, seems to struggle with human children, who have variously been billed as northern leopard frogs and ringneck snakes.)
Identification apps aren’t new but almost all are restricted to specific groups of organisms, like birds or plants. A recently announced one, which claims to use AI to “identify any mushroom instantly with just a pic,” was swiftly derided by experts for being “potentially deadly.” Given how poisonous some mushrooms can be, a wrong ID from a blundering AI could be catastrophic.
Loarie’s team has tried to circumvent these risks by designing the app to be almost self-conscious about its own limitations. Rather than providing firm identifications, it instead gives “suggestions” or “recommendations.” For each photo, it offers ten possible species; so far, one of those ends up being right 78 percent of the time. It also gives one overarching suggestion, which varies in detail depending on how confident it is. When I showed it the crisp photo of the slaty skimmer, it assertively guessed at the species. When I challenged it with a blurry photo of a frog, it suggested that the animal was a frog, but didn’t venture further.
So, iNaturalist isn’t quite a biological version of Shazam — the app that identifies songs. It’s more like autocomplete, which offers increasingly accurate suggestions depending on the information you provide. “We want something that’s always accurate even if it’s not precise,” says Loarie.
Karen James, a biologist who has worked on citizen science projects, praises the app but notes that it’s not a “panacea for identification.” Since it relies entirely on photos, “the organism has to be big enough and its diagnostic characters have to be visible, which rules out large swaths of the tree of life.” It is also limited by what its users photograph. For that reason, it works better for North American animals than South American ones, for example, and for mammals and birds than nematode worms or nudibranch slugs.
Still, the app will only improve as it gorges on more data. Every couple of hours, another species crosses the magic threshold of 20 research-grade photos, allowing the computer to learn its features. And at a recent computer vision conference, the team ran a competition, sponsored by Google, to improve their AI.
Eventually, Loarie hopes that iNaturalist will be useful to other communities too, such as border agents who open suitcases full of smuggled animals, or biologists analyzing images captured by camera-traps. But James hopes that before this happens, the app’s results are independently verified. So far, “its accuracy is measured by comparing the computer-vision identifications against the very crowdsourced identifications that are used to train the computer. There should be ways of checking those,” such as by analyzing the DNA of samples that are then run through the app, or relying on trained taxonomists.
It all comes back to people in the end. If the app is successful, it’s only because it learned from the thousands of identifications that iNaturalist’s bustling community have contributed. They are still involved in checking the computer’s answers. When the app suggested that the butterfly I saw was a swallowtail, the community quickly confirmed that it was specifically the eastern tiger swallowtail (Papilio glaucus).
This story originally appeared on TheAtlantic.com.
By Khadija L. on June 25, 2018
A chemical element is a species of atoms that all have the same number of protons in their atomic nuclei; that number is the element's atomic number. These elements constitute all of the matter found in the universe, and they cannot be chemically altered or broken down into simpler substances.
To date, 118 elements have been identified: 98 occur naturally on Earth, and the other 20 have been synthesized artificially. Many of the elements are used to make metals and other structural materials. Some are even used by the human body for biological processes such as respiration, bone formation and maintaining homeostasis. There are also elements used to create weapons and spacecraft, as well as some that are poisonous if inhaled or ingested.
We all had to memorize at least the first 20 elements in the periodic table for chemistry class, but for those of us who left high school a while back, and for those who didn't major in a science in college, it can be difficult to remember the symbols associated with the different elements. If you were given these symbols, would you be able to guess which element each represents? Let's find out!
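The quiz idea above can be sketched in a few lines of Python. The symbol-to-name table is standard periodic-table data for the first 20 elements; the function name is our own illustrative choice:

```python
# First 20 elements of the periodic table: symbol -> name.
ELEMENTS = {
    "H": "Hydrogen", "He": "Helium", "Li": "Lithium", "Be": "Beryllium",
    "B": "Boron", "C": "Carbon", "N": "Nitrogen", "O": "Oxygen",
    "F": "Fluorine", "Ne": "Neon", "Na": "Sodium", "Mg": "Magnesium",
    "Al": "Aluminium", "Si": "Silicon", "P": "Phosphorus", "S": "Sulfur",
    "Cl": "Chlorine", "Ar": "Argon", "K": "Potassium", "Ca": "Calcium",
}

def check_guess(symbol: str, guess: str) -> bool:
    """Return True if `guess` names the element with this symbol."""
    return ELEMENTS.get(symbol, "").lower() == guess.strip().lower()
```

Note that several symbols come from Latin names (Na for natrium, K for kalium), which is exactly what makes the quiz tricky.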
Kelp that grows in the upper tidal zone, such as bladder wrack and knotted wrack, is especially rich in phlorotannins. But the concentration of the compounds varies considerably. Elisabet Brock has studied what influences this concentration; her dissertation shows that desiccation and exposure to UVB radiation, on the one hand, and grazing animals, on the other, affect the levels of phlorotannins in bladder wrack and knotted wrack.
Since bladder wrack and knotted wrack on the west coast of Sweden grow close to the waterline, they are exposed to varying environmental conditions. Even though the tides only marginally affect the sea level there, changes in the weather can create differences of up to two meters in sea level. This in turn means that the algae dry out, and without the protection of water the dry algae are exposed to stronger solar radiation.
As long as the algae are covered by sea water, the levels of phlorotannins increase in tissues, but when they dry out the chemical compounds are disseminated to the surface of the algae. If this is an active process, one of the functions of the phlorotannins could be to serve as a sun block, according to Elisabet Brock's findings in her dissertation. If the compounds are also secreted into the sea, a further ecological effect could be to protect other organisms that are close to the surface from harmful radiation.
Phlorotannins have also been proven to have other ecologically interesting effects. The dissertation presents findings that show that phlorotannins in extracts have an inhibitory effect on the willingness of acorn barnacle larvae to take hold. This effect proved for the most part to be dependent on the concentration of phlorotannins, but also on what type of algae were tested. The findings thus indicate that phlorotannins should be interesting substances for use in the production of environmentally friendly paint for boat hulls.
For more information, please contact Elisabet Brock, Department of Marine Ecology, Göteborg University, cell phone: +46 702-47 49 50; e-mail: email@example.com or her supervisor Henrik Pavia, phone: +46 526-686 85; e-mail: Henrik.Pavia@tmbl.gu.se.
Title of dissertation: Phlorotannins in Intertidal Brown Algae: Inducing Factors and Ecological Roles. The dissertation has been publicly defended.
Camilla Carlsson | idw
Fractional Quantization of the Hall Effect
The Fractional Quantum Hall Effect is caused by the condensation of a two-dimensional electron gas in a strong magnetic field into a new type of macroscopic ground state, the elementary excitations of which are fermions of charge 1/m, where m is an odd integer.
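For orientation, the fractional quantization summarized above is conventionally written as follows. These are standard textbook statements added for context; the notation is not taken from this chapter:

```latex
% Hall conductance quantized at fractional filling factor (m odd):
\sigma_{xy} = \nu \, \frac{e^{2}}{h}, \qquad \nu = \frac{1}{m},
% with elementary excitations carrying fractional charge
q^{\ast} = \pm\,\frac{e}{m}.
```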
Keywords: Charge Density Wave; Elementary Excitation; Lawrence Livermore National Laboratory; Lower Landau Level; Fractional Quantum Hall Effect
My aim is to investigate the effect that catalase has on the breakdown of hydrogen peroxide into water and oxygen: 2H2O2(aq) → 2H2O(l) + O2(g). My prediction is that as you increase the concentration of hydrogen peroxide, the catalase will break it down faster and the time taken will therefore be shorter. The independent variable is the hydrogen peroxide concentration; I will use an appropriate range of concentrations: 0.1%, 0.2%, 0.5%, 1% and 2%. The dependent variable is the time taken, in seconds, for the catalase to break down the hydrogen peroxide into water and oxygen.
There were a few controlled variables. The amount of hydrogen peroxide used was kept at a constant 10 cm³, which allows for a fair test because the filter paper always has to travel the same distance. I repeated the experiment an extra three times, which allowed me to calculate a mean time. Repeats improve the reliability of the experiment: they let us remove any anomalies from our results and give a good estimate of the average. I also carried out a suitable control experiment to prove that it was in fact the enzyme breaking down the hydrogen peroxide.
In this control experiment I boiled the enzyme catalase to denature it so that it could no longer function. I then tested this enzyme by dipping the filter paper in it and then into the 2% hydrogen peroxide concentration. Because the enzyme was denatured, it could not break down the hydrogen peroxide, and the filter paper did not rise from the base of the test tube. There were several hazards in this experiment. Hydrogen peroxide is an irritant that can cause irritation if left on the skin.
Boiling water is another hazard, as it can cause scalding of the skin. There is also the risk of smashed glass, on which anyone could cut themselves. Another danger is the Bunsen burner, on which someone could quite easily burn themselves. There are procedures in place to prevent these problems: wearing eye protection and washing hands after using hydrogen peroxide. Another precaution is keeping glassware, boiling water and the Bunsen burner away from the edge of the table to prevent anything being knocked off.
My data shows my prediction to be true: as the concentration of hydrogen peroxide increased, the time taken for the filter paper to reach the top of the test tube decreased. This trend is also shown in my graphs. The reliability of my results is good, as they appear accurate compared with other data that has been collected. The accuracy of my results seems high, as they correspond to other results, although improvements could be made by making sure that the level of hydrogen peroxide is measured accurately and by taking more repeats.
My aim was to see how catalase breaks down hydrogen peroxide into water and oxygen, and, judging from my data and results, that aim was clearly achieved. Further work could be done by changing the independent variable to pH. The controlled variables would then be the volume of hydrogen peroxide and the volume of water used. I would expect that as the pH decreased, the enzymes would denature and the breakdown by catalase would slow down.
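The analysis in this experiment can be sketched in a few lines: average the repeat times for each concentration, then take rate as 1/time (a shorter rise time means a faster reaction). The times below are hypothetical placeholders, not the report's actual measurements:

```python
# Hypothetical repeat times (seconds) for the filter paper to rise,
# keyed by hydrogen peroxide concentration (%). Placeholder data only.
times_s = {
    0.1: [52.0, 55.0, 50.0, 53.0],
    0.5: [21.0, 19.5, 20.5, 20.0],
    2.0: [6.2, 5.8, 6.0, 6.0],
}

def mean(xs):
    return sum(xs) / len(xs)

# Rate taken as 1/(mean time): higher concentration -> shorter time
# -> larger rate, as the prediction above expects.
rates = {conc: 1.0 / mean(ts) for conc, ts in times_s.items()}
```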
Stopping Power of Multiply Charged Ions
Received Date: Nov 01, 2014 / Accepted Date: Jan 12, 2015 / Published Date: Jan 15, 2015
Keywords: protons; electronic energy; excitation energy
Fast ions, such as protons and alphas, interact with, and deposit energy in, target ions and molecules by converting kinetic energy of the projectile to target electronic energy. Such energy deposition occurs in situations as different as deep space and plasmas and can involve targets as different as atomic ions and rather complicated organic molecules. In most cases, the deposition of electronic energy by a fast ion with velocity v in a target of scatterer particle density n is described by the equation:

-dE/dx = n S(v)
where S(v) is the stopping cross section of the target, which, in the Bethe approximation (valid when the projectile velocity is much larger than the target electron velocities), is given by:

S(v) = (4π Z1^2 Z2 / v^2) ln(2v^2 / I0)    (in atomic units)
Here Z1 and Z2 are the projectile charge and target electron number, respectively, and I0 is the target mean excitation energy. The mean excitation energy is defined as the first energy-weighted moment of the dipole oscillator strength distribution:

ln I0 = ∫ (df/dE) ln E dE / ∫ (df/dE) dE
and is the determining factor for the amount of electronic energy deposited in a target by a fast ion. Thus, for electronic energy deposition by a fast ion at a given velocity in a target, the larger the mean excitation energy of target, the less electronic energy will be deposited. It should also be noted that the target may fragment or some projectile energy may be transferred to the target nuclear kinetic energy, but those possibilities are not considered here.
As an example, consider the simple case of protons colliding with an aluminum ion. The table presents the calculated mean excitation energy and stopping cross section for a proton with an (arbitrary) velocity of 20 a.u. (10 MeV) colliding with various aluminum ions. No values for the stopping cross section are given for Al11+ and Al12+, as the velocity of the 1s electrons in Al is larger than the projectile velocity, and thus the Bethe approximation does not apply. Otherwise, the results are as expected, with the increasing mean excitation energy of more highly charged ions leading to a decrease in the stopping cross section.
It is also interesting to note that the largest changes in the mean excitation energy with ion charge, and thus in the stopping cross section as well, come when the outermost electrons, which give the largest contribution to the stopping cross section, come from different electronic subshells, such as Al2+ → Al3+, Al8+ → Al9+, and Al10+ → Al11+ (Table 1).
Table 1: Mean excitation energies (I0, a.u.) and stopping cross sections (S(v = 20), a.u.) for v = 20 a.u. protons colliding with aluminum ions. (The table's data rows did not survive extraction.)
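A minimal numerical sketch of the Bethe stopping cross section above, in Hartree atomic units. The function name and the example I0 are our own illustrative choices, not values from Table 1:

```python
import math

def bethe_stopping(v, z1, z2, i0):
    """Bethe stopping cross section in Hartree atomic units:

        S(v) = (4*pi*Z1**2*Z2 / v**2) * ln(2*v**2 / I0)

    Valid only when v greatly exceeds the target electron velocities
    (and 2*v**2 > I0, so the logarithm stays positive).
    """
    if 2 * v**2 <= i0:
        raise ValueError("Bethe approximation not applicable at this velocity")
    return 4 * math.pi * z1**2 * z2 / v**2 * math.log(2 * v**2 / i0)

# Example: a v = 20 a.u. proton (Z1 = 1) on a 13-electron target, with an
# assumed I0 = 5 a.u. (illustrative value, not taken from the paper's table).
s = bethe_stopping(20.0, 1, 13, 5.0)
```

As the text explains, a larger I0 shrinks the logarithm and hence the cross section, which is why the more highly charged ions in Table 1 stop the proton less effectively.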
Similar results are found for many of the other light ions. Although similar studies have not yet been carried out for heavier atomic ions, similar results are to be expected.
If the target is a molecule or molecular ion, things are much more complicated. In principle the same collision of an ion with a polyatomic target leads to the same conversion of projectile kinetic energy to electronic energy and deposition of electronic energy in the target. Although each target atomic ion has a well-defined mean excitation energy, the same is not true for polyatomic targets. For molecules, the mean excitation energy depends on the molecular conformer and on the orientation of the target with respect to the projectile. In addition, while an atomic ion can be excited or ionized, a polyatomic target can be excited, ionized, fragmented, reoriented, or some combination of the foregoing.
Due to the complications mentioned here, very little has been done on polyatomic targets, and much more work needs to be done, both theoretically and experimentally.
Another system yet to be studied theoretically is energy deposition by a polyatomic projectile!
- Belloche A, Garrod RT, Müller HS, Menten KM (2014) Detection of a branched alkyl molecule in the interstellar medium: iso-propyl cyanide. Science 345: 1584-1587.
- Bethe H (1930) Zur Theorie des Durchgangs schneller Korpuskularstrahlen durch Materie. Ann Phys (Leipzig) 5: 325-400.
- Akar A, Gumus H, Okumusoglu NT (2006) Electron inelastic mean free path formula and CSDA-range calculation in biological compounds for low and intermediate energies. Appl Radiat Isot 64: 543-550.
- Sauer SPA, Oddershede J, Sabin JR (2015) The Mean Excitation Energy of Atomic Ions. Adv Quantum Chem 71: 29-40.
- Sabin JR, Oddershede J, Sauer SPA (2013) Glycine: Theory of the Interaction with Fast Ion Radiation. In Glycine: Biosynthesis, Physiological Functions and Commercial Uses. Wilhelm V (ed). NOVA Publishers. pp: 79-96.
- Sabin JR, Cabrera-Trujillo R, Stolterfoht N, Deumens E, Öhrn Y (2009) Fragmentation of Water on Swift 3He2+ Ion Impact. Nucl Inst and Meth B 267: 196-200.
Citation: Sabin JR (2016) Stopping Power of Multiply Charged Ions. J Phys Chem Biophys 6: e132. Doi: 10.4172/2161-0398.1000e132
Copyright: © 2016 Sabin JR. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
>there are at least thousands of animals we have never seen because they are too deep in the ocean
It's probably for the best
Extraterrestrial life probably looks like that
>between 2000-2009, there were 176,311 newly discovered species
>Nearly 2 million species have been identified since 1758.
>It is estimated that 10 million additional plant and animal species still await discovery.
>It has been speculated that up to 20 million new marine microbial species may still be discovered.
How do you even keep up with so many species?
It is possible that you are constantly surrounded by unidentified insects. A shield bug with a slightly different shield color pattern could already mean it's a new species.
>There are millions of animals that have gone extinct and we will possibly never know about them
>priceless fossils and artifacts have gone undiscovered and potentially lost to time thanks to natural and human forces
>you'll never help discover one of these species and contribute to the index
Seasons and Sun
Investigation 3 – Part 3: Sun Angle and Solar Heating

Points to Remember:
Earth rotates on its axis to produce day and night. The tilt of Earth's axis produces changes in day length over the course of a year (one revolution of Earth around the Sun).
The way a light beam covers a larger area when it hits a surface at an angle is called beam spreading.
The angle between the incoming rays of light and the surface of the land is the solar angle.
Light energy from the Sun is distributed over a larger area when it hits Earth’s surface at an angle. The beam spreads more and more the farther north or south you go.
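The beam-spreading idea can be put in numbers. A hypothetical helper, assuming the "solar angle" defined above (angle between the rays and the ground) and ignoring atmospheric effects:

```python
import math

def relative_intensity(solar_angle_deg):
    """Relative energy per unit ground area for sunlight striking the
    surface at `solar_angle_deg`. A beam of fixed width illuminates an
    area proportional to 1/sin(angle), so the energy per unit area
    scales with sin(angle)."""
    return math.sin(math.radians(solar_angle_deg))

# Overhead sun (90 degrees) gives 1.0; a low sun at 30 degrees gives 0.5,
# meaning the same beam is spread over twice the ground area.
```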
Higher-than-normal sea-level pressure north of the Amundsen Sea sets up westerly winds that push surface water away from the glaciers and allow warmer deep water to rise to the surface under the edges of the glaciers, said Eric Steig, a UW professor of Earth and space sciences.
“This part of Antarctica is affected by what’s happening on the rest of the planet, in particular the tropical Pacific,” he said.
The research involves the Pine Island and Thwaites glaciers on the West Antarctic Ice Sheet, two of the five largest glaciers in Antarctica. Those two glaciers are important because they drain a large portion of the ice sheet. As they melt from below, they also gain speed, draining the ice sheet faster and contributing to sea level rise. Eventually that could lead to global sea level rise of as much as 6 feet, though that would take hundreds to thousands of years, Steig said.
NASA scientists recently documented that a section of the Pine Island Glacier the size of New York City had begun breaking off into a huge iceberg. Steig noted that such an event is normal and scientists were fortunate to be on hand to record it on film. Neither that event nor the new UW findings clearly link thinning Antarctic ice to human causes.
But Steig’s research shows that unusual winds in this area are linked to changes far away, in the tropical Pacific Ocean. Warmer-than-usual sea-surface temperatures, especially in the central tropics, lead to changes in atmospheric circulation that influence conditions near the Antarctic coast line. Recent decades have been exceptionally warm in the tropics, he said, and to whatever extent unusual conditions in the tropical Pacific can be attributed to human activities, unusual conditions in Antarctica also can be attributed to those causes.
He noted that sea-surface temperatures in the tropical Pacific last showed significant warming in the 1940s, and the impact in the Amundsen Sea area then was probably comparable to what has been observed recently. That suggests that the 1940s tropical warming could have started the changes in the Amundsen Sea ice shelves that are being observed now, he said.
Steig presents his findings Tuesday (Dec. 6) at the fall meeting of the American Geophysical Union in San Francisco. In another presentation Wednesday, he will discuss evidence from ice cores on the history of Antarctic climate in the last century.
He emphasized that natural variations in tropical sea-surface temperatures associated with the El Niño Southern Oscillation play a significant role. The 1990s were notably different from all other decades in the tropics, with two major El Niño events offset by only minor La Niña events.
“The point is that if you want to predict what’s going to happen in the next fifty, one-hundred, one-thousand years in Antarctica, you have to pay attention to what’s happening elsewhere,” he said. “The tropics are where there is a large source of uncertainty.”
Other researchers involved with the work are Qinghua Ding and David Battisti of the UW and Adrian Jenkins of the British Antarctic Survey. The research is supported by grants from the National Science Foundation, the United Kingdom’s Natural Environment Research Council and the UW Quaternary Research Center.
For more information, contact Steig at 206-685-3715, 206-543-6327 or email@example.com.
To view a NASA video of the crack in the Pine Island Glacier ice shelf, see: http://bit.ly/uPFruW
Vince Stricherz | Newswise Science News
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences
Explore the culture of the stars…
If you have a passion for star gazing, telescopes, the Hubble, the universe, and this thing we call “astronomy”, you are far from alone. Of course, we know that astronomy is a highly respected science that has produced some of the most amazing accomplishments of the twentieth century. On top of that, it is a thriving area of fascination and one of the most exciting hobby areas going, with thousands of astronomy clubs and tens of thousands of amateur astronomers watching the stars every night just like we do.
Astronomy is one of the oldest sciences, but also one in which amateurs still play an active role, contributing to many recent important astronomical discoveries.
Our Astronomy Section provides a programme of talks and, occasionally, a visit to an observatory or other site of astronomical importance!
Some of our guest speakers include those from external organisations, members of the Society or members of other astronomical societies. We are fortunate to have many well-known public speakers giving talks, for example Dr David Whitehouse, retired BBC Science Correspondent and Dr Allan Chapman, a well-known historian of astronomy at Oxford University.
We also contribute to Open Days with solar telescopes and other equipment to be seen and used, as well as supporting Young Explorers sessions with hands-on activities and a planetarium show.
Reports on 2017 Talks
Reports on 2016 Talks
Active and Adaptive Optics
Pluto from myth to discovery
Comet 67P Churyumov-Gerasimenko
Macrographia on the Moon
The South African Large Telescope
The First Billion Years
The Cassini Mission
Volcanism in the Solar System
Upcoming Astronomy events
Lecture: Rodinia and the Boring Billion, 15th September 2018 at 2.30pm
Speaker: James Fradgley. What makes an exoplanet habitable? A look at Earth’s history and what clues it may give us. (Rodinia was a supercontinent upon which not much happened for a billion years.)
What are peroxidases?
- Include a brief description of the general type of reaction they catalyze.
- Describe a metabolic pathway from any organism (bacterium, plant, animal) in which peroxidase is involved.
- Include the name of the specific peroxidase, the components of the reaction catalyzed by the peroxidase and the name of the organism.
I will use a number of sources for information about peroxidase enzymes - these will be cited with weblink when possible so you can look at the references yourself for further information.
A peroxidase (e.g. EC 1.11.1.7) is an enzyme, which may contain heme, that catalyzes a reaction of the form:
ROOR' + electron donor (2 e-) + 2H+ → ROH + R'OH
For many of these enzymes the optimal substrate is hydrogen ...
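For the common case where the substrate is hydrogen peroxide (taking R = R′ = H in the scheme above, so that ROOR′ is H–O–O–H), the general reaction specializes to:

```latex
\mathrm{H_2O_2} \;+\; 2\,e^- \;+\; 2\,\mathrm{H^+} \;\longrightarrow\; 2\,\mathrm{H_2O}
```

One concrete instance (given here as an illustration, not necessarily the pathway chosen in the original solution) is glutathione peroxidase (EC 1.11.1.9), where the two electrons are supplied by glutathione: 2 GSH + H₂O₂ → GSSG + 2 H₂O.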
This Solution contains over 200 words to aid you in understanding the Solution to this question.
Research and design company SPECIFIC is celebrating the success of its first "energy-positive" classroom, a space that generates 1.5 times the amount of energy it needs to operate. SPECIFIC is part of the U.K. Innovation and Knowledge Centre at Swansea University, and they're celebrating the success of their Active Classroom, as it's called, according to Inhabitat.
Their findings from the first year of its implementation were released just as the group began work on their next project, an Active Office.
Science Daily reports that buildings currently account for 40 percent of the energy consumption in the United Kingdom. Researchers have been working to develop buildings as power stations, with design aspects that can make the most use of solar power—trapping it, storing it, and sharing it.
The research director for SPECIFIC and the Swansea University College of Engineering, Professor Dave Worsley, says it is important to go beyond research to real-world application, as in the case of the Active Classroom.
"SPECIFIC's research focuses on developing solar technologies and the processing techniques that take them from the lab to full-scale buildings," he said.
"With our building demonstration program we are testing and proving the 'buildings as power stations' concept in real buildings, which are used every day. The data obtained from these buildings is then fed back into our fundamental research into solar energy technologies and used to accelerate and steer their development."
Some aspects of the Active spaces that have made them such powerful generators of clean power are curved roofs with integrated solar cells, a Photovoltaic Thermal system installed in the south facing wall, and lithium ion batteries that store the generated electricity.
The thermal system can generate both heat and electricity from the solar panels within the one system. The facility is also outfitted with a 530 gallon water tank that can store additional solar heat. The current plan is to connect the Active Office to the Active Classroom, to allow them to share energy with each other and even electric vehicles.
The team hopes that showing the practical application of their work will encourage the development of more projects like theirs across the country; a 2017 analysis showed that not only would more energy-positive buildings reduce carbon emissions, they'd also save money and reduce the pressure on the National Grid.
To encourage that, SPECIFIC designed the Active Classroom so that it's easy to make copies—it only takes about a week to assemble because of prefabrication technologies. It relies on systems that are already commercially available, to make construction more accessible.
"Offices are enormous consumers of energy, so turning them energy-positive has the potential to slash fuel bills and dramatically reduce their carbon emissions," said Kevin Bygate, chief operating officer of SPECIFIC,.
"Turning our buildings into power stations is a concept that works, as the Active Classroom shows. This new building will enable us to get data and evidence on how it can be applied to an office, helping us refine the design further."
"The Active Office is a first, but it isn't a one-off. It is quick to build using existing supply chains, and uses only materials that are already available. This is tomorrow's office, but it can be built today."
The government is also excited by the possibilities in the project, as the country has set a number of clean energy goals for 2030. Alun Cairns, Secretary of State for Wales, said:
"The Active Office is a living example of how a building can make a difference to us and our environment using innovative technologies -and equally importantly creating jobs in Wales."
He continued, "I have no doubt that I'll be back to Swansea University in the near future because of the great strides they are taking in the science and research field which are being recognized around the world."
When it comes to plastic bags, one question persists: Are they recyclable, or not?
Tsumoru Shintake has invented a turbine that converts wave energy into clean electricity currently powering hotels.
This town in Long Island is using leftover shells from local restaurants to build a "living" barrier reef.
Home » Posts filed under Named Reaction
Branches of Chemistry
Chemistry is a branch of science, which is further divided into many branches, such as:
Chemistry Notes in Hindi Medium. To get our chemistry notes in Hindi, follow the steps shown below - A - S...
11th & 12th Classes Formula in PDF Below is the list of Chemical Formulas Resources 1. Chemistry formulas for Atoms, Molecules and...
Solid State Solid: - Matter which posses rigidity having definite shape &volume is called solid. Types of solid:- ...
Atomic Theory of matter :- According to this theory , atom is the ultimate particle of m...
Classification of Elements Mendeleev's periodic law :- Mendeleev's explanation that properties ...
Surface Chemistry Adsorption: - The accumulation of molecular species at the surface rat...
ELECTROCHEMISTRY Electrolysis: It is the process of decomposition of an electrolyte by the passage of electricity...
States Of Matter · Water exists in three states, i.e. solid (ice), liquid (potable water), gas (steam, vapors). · In thes...
Organic Chemistry What is Organic Chemistry ? Organic chemistry is a branch of chemistry which involve the scientific study of struct...
Chemistry of Elements of First Transition Series There are four types of orbital i.e. s, p, d and f. On the basis of electronic configu...
Chemistry Quiz contains different pages linked below containing chemistry quiz question...
1. Science Quiz Part11
2. Science Quiz Part12
3. Science Quiz Part13
4. Science Quiz Part14
5. Chemical Reactions Quiz Part15
6. Chemical Reaction Quiz Part16
7. Pharmaceutical Chemistry Quiz Part17
8. Acids and Bases Quiz Part18
9. Some Basic Concept of Chemistry MCQs
10. Structure of Atom MCQs
A team of eleven of the world's top tropical forest scientists, coordinated by the University of Leeds, warn that while cutting clearance of carbon-rich tropical forests will help reduce climate change and save species in those forests, governments could risk neglecting other forests that are home to large numbers of endangered species.
Under new UN Framework Convention on Climate Change (UNFCCC) proposals, the Reduced Emissions from Deforestation and Degradation (REDD) scheme would curb carbon emissions by financially rewarding tropical countries that reduce deforestation.
Governments implicitly assume that this is a win-win scheme, benefiting climate and species. Tropical forests contain half of all species and half of all carbon stored in terrestrial vegetation, and their destruction accounts for 18% of global carbon emissions.
However, in a paper published in the latest issue of Current Biology, the scientists warn that if REDD focuses solely on protecting forests with the greatest density of carbon, some biodiversity may be sacrificed.
"Concentrations of carbon density and biodiversity in tropical forests only partially overlap," said Dr Alan Grainger of the University of Leeds, joint leader of the international team. "We are concerned that governments will focus on cutting deforestation in the most carbon-rich forests, only for clearance pressures to shift to other high biodiversity forests which are not given priority for protection because they are low in carbon."
"If personnel and funds are switched from existing conservation areas they too could be at risk, and this would make matters even worse."
If REDD is linked to carbon markets then biodiversity hotspot areas – home to endemic species most at risk of extinction as their habitats are shrinking rapidly – could be at an additional disadvantage, because of the higher costs of protecting them.
According to early estimates up to 50% of tropical biodiversity hotspot areas could be excluded from REDD for these reasons. Urgent research is being carried out across the world to refine these estimates.
Fortunately, the UN Framework Convention on Climate Change is still negotiating the design of REDD and how it is to be implemented.
The team is calling for rules to protect biodiversity to be included in the text of the Copenhagen Agreement. It also recommends that the Intergovernmental Panel on Climate Change give greater priority to studying this issue, and to producing a manual to demonstrate how to co-manage ecosystems for carbon and biodiversity services.
"Despite the best of intentions, mistakes can easily happen because of poor design" said Dr Grainger. "Clearing tropical forests to increase biofuel production to combat climate change is a good example of this. Governments still have time at Copenhagen to add rules to REDD to ensure that it does not make a similar mistake. A well designed REDD can save many species and in our paper we show how this can be done."
For more information
The paper 'Biodiversity and REDD at Copenhagen' is available to journalists on request.
Dr Alan Grainger is available for interview, please contact Clare Ryan in the University of Leeds press office on 0113 343 4031 or email firstname.lastname@example.org.
Contact Details for Co-Authors in Other Countries
This paper is a joint effort by scientists from across the globe. The following co-authors are currently available to speak to local journalists.
USA:Professor Stuart L. Pimm, Duke University, North Carolina. Tel: + 1 646 489 5481.
Email: email@example.com.Dr Douglas H. Boucher and Dr Peter C. Frumhoff, Union of Concerned Scientists, Washington DC.
Contact: Lisa Nurnberger, Press Office. Tel: +1 202 331 6959.
Germany:Dr. Manfred Niekisch, Zoologischer Garten, Frankfurt. Tel: +49 69 212 33727.
Contact: Sarah Horsley, Press Office. Tel: +41 22 999 0127; +41 79 528 3486 (mobile)
Dr Navjot S. Sodhi, National University of Singapore. Tel: +65 6516 2700 (office); +65 6275 4229 (home). Email: firstname.lastname@example.org.
Notes to editors
1. The 2008 Research Assessment Exercise showed the University of Leeds to be the UK's eighth biggest research powerhouse. The University is one of the largest higher education institutions in the UK and a member of the Russell Group of research-intensive universities. The University's vision is to secure a place among the world's top 50 by 2015. www.leeds.ac.uk
2. REDD - Reducing Emissions from Deforestation and Forest Degradation in Developing Countries - is an effort to create a financial value for the carbon stored in forests, offering incentives for developing countries to reduce emissions from forested lands and invest in low-carbon paths to sustainable development. REDD is a collaboration between Food and Agriculture Organization (FAO), the UN Development Programme (UNDP) and the UN Environment Programme (UNEP).
3. The 190 countries that make up the UN Framework Convention on Climate Change (UNFCCC) will meet in Copenhagen in December to negotiate a successor to the 1997 Kyoto Protocol.
4. Leeds hosts one of the largest and most innovative geography departments in the world, and this year sees us celebrate 90 years of excellence in teaching and research. Ranked in the top 6 geography departments in the UK in the 2008 RAE and awarded an 'Excellent' grading by HEFCE for the quality of our teaching, our staff of 70 disseminate cutting edge knowledge and research on topics as diverse as tropical ecology, social inclusion and city futures. www.geog.leeds.ac.uk
Clare Ryan | EurekAlert!
O2 stable hydrogenases for applications
23.07.2018 | Max-Planck-Institut für Chemische Energiekonversion
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
23.07.2018 | Health and Medicine
23.07.2018 | Earth Sciences
23.07.2018 | Science Education
Keywords: Council Working; United Nations Framework Convention; Mountain Glacier; International Polar; Arctic Council
The cryosphere collectively describes elements of the Earth System containing water in its seasonally and perennially frozen state. The cryospheric components of the Arctic represent a globally unique system, parts of which are inextricably linked with each other, with the landscapes, seascapes, ecosystems and humans in the Arctic, and with the global climate and ecological systems. Consequently, shifts in the Arctic cryosphere have great significance, not just regionally within the Arctic but across the planet as a whole.
As a follow up to the 2004/2005 “Arctic Climate Impact Assessment” (ACIA), the Arctic Council in 2008 requested the Arctic Council Working Group, Arctic Monitoring and Assessment Programme (AMAPWG) to facilitate the synthesis of the latest scientific knowledge on changes in the Arctic cryosphere and to assess the effects of these changes. In doing so the AMAPWG has partnered with the International Polar Year (IPY) project office, IASC (International Arctic Science Committee), IASSA (International Arctic Social Sciences Association), WCRP/CliC (World Climate Research Programme/Climate and the Cryosphere) in the “Snow, Water, Ice and Permafrost in the Arctic” (SWIPA) project.
The objectives of the SWIPA Project are to provide timely, up-to-date, and synthesized scientific knowledge about the present status, processes, trends in Arctic sea ice, snow cover, permafrost, mountain glaciers and ice caps, the Greenland Ice Sheet, and related hydrological conditions, and to assess the consequences of these changes on arctic biological systems, and human societies and lifestyles.
The SWIPA assessment was produced by more than 200 scientists and experts from the arctic and non-arctic countries. The experts were charged with compiling and evaluating information from Arctic monitoring networks and recent national and international research activities, such as those carried out during the International Polar Year (IPY; 2007–2008), focusing on new information gathered since the ACIA assessment, which serves as the benchmark of the study.
The project has been guided by an integration team consisting of coordinating lead authors of the different modules and representatives of participating organizations. A strict and independent peer-review process was established by the AMAP Working Group to secure and document the integrity of the process.
Beijing's Smog Experiment
Efforts to reduce pollution will let scientists see how the climate responds.
One of the most watched projects during the run-up to the 2008 Beijing Olympics, along with the construction of the Bird’s Nest Stadium and the glowing Aquatics Cube, was the Chinese government’s efforts to cut emissions by 60 percent in the city. It has been a colossal undertaking in an area where air pollution is five times higher than World Health Organization safety standards and smog can get so dense that it sometimes occludes the sun. The effort has involved ordering half of the city’s cars off the roads and temporarily moving or closing dozens of steel mills, foundries, and factories across the capital.
But such a dramatic decrease in pollution could provide more than just healthier conditions for competing athletes. It may also afford scientists a rare opportunity to see how climate change responds to such a massive adjustment in emissions.
A team led by Veerabhadran Ramanathan, a climate and atmospheric scientist at the University of California, San Diego, will spend the next six weeks flying autonomous unmanned aerial vehicles (AUAVs) downwind of Beijing to measure emissions reductions. “By cutting down on the pollution over Beijing during the Olympics, the Chinese have created a huge natural laboratory for understanding how pollution impacts climate,” Ramanathan says. He and collaborators at Seoul National University, in South Korea, have packed an assortment of miniature instruments into 30-kilogram, three-metre-wide AUAVs and set up a research station on South Korea’s Cheju Island, about 500 kilometers south of Beijing.
Using the island as their base, the researchers are flying the small aircraft in groups of three: over, under, and through the polluted plume as it travels past the island. Because different layers of wind carry air from different regions, and because the wind currents change direction, they can also sample air from other parts of China that have not implemented emissions cutbacks. “We fly up to 12,000 feet,” Ramanathan says, “so I don’t have to go anywhere. The mountain comes to Mohammed.”
The team will also be running simultaneous flights in California, to see how far away they can detect the plume. “We want to see what the global impact of this one city is,” he says. Ramanathan is especially interested in unraveling the relationship between air pollution and climate change, since prior research from his lab has shown that airborne particles in emissions can mask up to half of the greenhouse effect by reflecting sunlight back into space.
The researchers plan to combine the AUAV measurements with those from NASA satellites and back-calculated wind trajectories. The results should give them a clearer picture of what the air looks like and whether more sunlight is reaching the earth’s surface as a result of the decreased emissions.
Ramanathan is concerned that worldwide efforts to reduce air pollution over the next few decades could as much as double the rate of global warming. With a huge number of unknowns in this equation, he’s hoping that their work can help him better understand the problem.
In order to accurately measure the impact of Beijing’s emissions reductions, the scientists must also know what normal air pollution conditions would be. Greg Carmichael, an environmental and chemical engineer at the University of Iowa, has been modeling Beijing’s emissions and creating estimates for what the pollution levels would have been pre-cutbacks, as well as estimates for what they should be now.
Carmichael can’t give specifics until the final numbers are in, but he can say that the air quality in Beijing is somewhere between 10 and 50 percent better than it would have been without the strict controls. This is a wide margin to be sure, he says, “but it’s a very complex system. And to take Beijing and be able to reduce your emissions by 50 percent is a huge success. It’s a difficult thing to do.” Los Angeles and other cities have spent 20 years or more trying to improve their air quality, he notes, and they still have a ways to go.
Found most commonly in these habitats: 2 times found in Wet forest, 1 times found in Primary wet forest.
Found most commonly in these microhabitats: 2 times Ex sifted leaf litter, 1 times sifted leaf litter.
Collected most commonly using these methods: 3 times Winkler.
Elevations: collected from 830 - 1000 meters, 915 meters average
AntWeb content is licensed under a Creative Commons Attribution License. We encourage use of AntWeb images. In print, each image must include attribution to its photographer and "from www.AntWeb.org" in the figure caption. For websites, images must be clearly identified as coming from www.AntWeb.org, with a backward link to the respective source page. See How to Cite AntWeb.
Antweb is funded from private donations and from grants from the National Science Foundation, DEB-0344731, EF-0431330 and DEB-0842395.
Species Detail - Nuttall's Waterweed (Elodea nuttallii) - Species information displayed is based on all datasets.
Terrestrial Map - 10kmDistribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50kmDistribution of the number of records recorded within each 50km grid square (WGS84).
Anacharis nuttallii, Hydrilla lithuanica pro parte
Categories: Invasive Species; Invasive Species >> High Impact Invasive Species; Invasive Species >> Regulation S.I. 477 (Ireland)
(Planch.) H. St. John
1 January (recorded in 2002)
2 December (recorded in 2015)
National Biodiversity Data Centre, Ireland, Nuttall's Waterweed (Elodea nuttallii), accessed 18 July 2018, <https://maps.biodiversityireland.ie/Species/41365>
10 graphs of experimental data are given. Can you use a spreadsheet to find algebraic graphs which match them closely, and thus discover the formulae most likely to govern the underlying processes?
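A spreadsheet trendline is doing a least-squares fit under the hood. As a hedged sketch (the data points here are invented, not the problem's actual ten graphs), a straight-line model y = ax + b can be fitted with the standard closed-form formulas:

```python
# Least-squares fit of y = a*x + b -- the same calculation a
# spreadsheet trendline performs.  The data points below are
# invented for illustration and roughly follow y = 2x + 1.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b

xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.9, 9.1]
a, b = fit_line(xs, ys)
print(a, b)  # close to slope 2 and intercept 1
```

For curved data the same idea applies after transforming the axes (e.g. plotting log y against x to test an exponential law).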
Are these estimates of physical quantities accurate?
Analyse these beautiful biological images and attempt to rank them in size order.
Work with numbers big and small to estimate and calculate various quantities in biological contexts.
How would you go about estimating populations of dolphins?
Can you work out which processes are represented by the graphs?
How much energy has gone into warming the planet?
Starting with two basic vector steps, which destinations can you reach on a vector walk?
In Fill Me Up we invited you to sketch graphs as vessels are filled with water. Can you work out the equations of the graphs?
Can you draw the height-time chart as this complicated vessel fills with water?
Can you suggest a curve to fit some experimental data? Can you work out where the data might have come from?
A problem about genetics and the transmission of disease.
Can you sketch graphs to show how the height of water changes in different containers as they are filled?
Get some practice using big and small numbers in chemistry.
When a habitat changes, what happens to the food chain?
Explore the properties of perspective drawing.
Work with numbers big and small to estimate and calculate various quantities in biological contexts.
Formulate and investigate a simple mathematical model for the design of a table mat.
Work with numbers big and small to estimate and calculate various quantities in physical contexts.
Explore the relationship between resistance and temperature
This problem explores the biology behind Rudolph's glowing red nose.
An observer is on top of a lighthouse. How far from the foot of the lighthouse is the horizon that the observer can see?
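A standard way into this problem is the approximation d ≈ √(2Rh) for the distance d to the horizon seen from height h, neglecting atmospheric refraction. A small sketch (R = 6371 km is an assumed mean Earth radius, and the 30 m lighthouse height is invented):

```python
import math

EARTH_RADIUS_M = 6_371_000  # assumed mean Earth radius in metres

def horizon_distance(height_m):
    """Approximate distance to the horizon from height_m, ignoring refraction."""
    return math.sqrt(2 * EARTH_RADIUS_M * height_m)

# From a 30 m lighthouse the horizon is roughly 19.6 km away.
print(round(horizon_distance(30) / 1000, 1))
```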
Many physical constants are only known to a certain accuracy. Explore the numerical error bounds in the mass of water and its constituents.
Learn about the link between logical arguments and electronic circuits. Investigate the logical connectives by making and testing your own circuits and fill in the blanks in truth tables to record...
How do you write a computer program that creates the illusion of stretching elastic bands between pegs of a Geoboard? The answer contains some surprising mathematics.
Could nanotechnology be used to see if an artery is blocked? Or is this just science fiction?
Investigate circuits and record your findings in this simple introduction to truth tables and logic.
Examine these estimates. Do they sound about right?
Work out the numerical values for these physical quantities.
Use the computer to model an epidemic. Try out public health policies to control the spread of the epidemic, to minimise the number of sick days and deaths.
Simple models which help us to investigate how epidemics grow and die out.
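A common starting point for such models is the SIR scheme, in which susceptibles are infected at rate βSI/N and infectives recover at rate γI. A minimal Euler-stepped sketch (the rates and population below are invented for illustration, not taken from the problem):

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1, dt=1.0):
    """One Euler step of the SIR model; rates are per day and illustrative."""
    n = s + i + r
    new_inf = beta * s * i / n * dt  # susceptibles becoming infected
    new_rec = gamma * i * dt         # infectives recovering
    return s - new_inf, i + new_inf - new_rec, r + new_rec

s, i, r = 990.0, 10.0, 0.0   # invented population of 1000 with 10 initial cases
for _ in range(100):          # simulate 100 days
    s, i, r = sir_step(s, i, r)
print(round(s, 1), round(i, 1), round(r, 1))
```

With β/γ > 1 the infection first grows, then dies out as susceptibles are depleted, which is exactly the growth-and-decay behaviour the problem asks about.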
How efficiently can you pack together disks?
To investigate the relationship between the distance the ruler drops and the time taken, we need to do some mathematical modelling...
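Under free fall the drop distance obeys d = ½gt², so the reaction time recovered from the catch distance is t = √(2d/g). A minimal sketch (g = 9.81 m/s² assumed):

```python
import math

G = 9.81  # gravitational acceleration in m/s^2 (assumed)

def reaction_time(drop_m):
    """Reaction time implied by catching a ruler after it falls drop_m metres."""
    return math.sqrt(2 * drop_m / G)

# Catching the ruler after a 20 cm drop implies roughly a 0.2 s reaction time.
print(round(reaction_time(0.20), 3))
```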
Estimate these curious quantities sufficiently accurately that you can rank them in order of size
Various solids are lowered into a beaker of water. How does the water level rise in each case?
Andy wants to cycle from Land's End to John o'Groats. Will he be able to eat enough to keep him going?
How would you design the tiering of seats in a stadium so that all spectators have a good view?
What shapes should Elly cut out to make a witch's hat? How can she make a taller hat?
In which Olympic event does a human travel fastest? Decide which events to include in your Alternative Record Book.
Can you work out what this procedure is doing?
Which dilutions can you make using only 10ml pipettes?
Can you deduce which Olympic athletics events are represented by the graphs?
When you change the units, do the numbers get bigger or smaller?
Practice your skills of measurement and estimation using this interactive measurement tool based around fascinating images from biology.
Use your skill and knowledge to place various scientific lengths in order of size. Can you judge the length of objects with sizes ranging from 1 Angstrom to 1 million km with no wrong attempts?
Does weight confer an advantage to shot putters?
Have you ever wondered what it would be like to race against Usain Bolt?
Is it really greener to go on the bus, or to buy local?
Can Jo make a gym bag for her trainers from the piece of fabric she has?
Which units would you choose best to fit these situations?
About this book
This new edition is a concise introduction to the basic methods of computational physics. Readers will discover the benefits of numerical methods for solving complex mathematical problems and for the direct simulation of physical processes.
The book is divided into two main parts: Deterministic methods and stochastic methods in computational physics. Based on concrete problems, the first part discusses numerical differentiation and integration, as well as the treatment of ordinary differential equations. This is extended by a brief introduction to the numerics of partial differential equations. The second part deals with the generation of random numbers, summarizes the basics of stochastics, and subsequently introduces Monte-Carlo (MC) methods. Specific emphasis is on MARKOV chain MC algorithms. The final two chapters discuss data analysis and stochastic optimization. All this is again motivated and augmented by applications from physics. In addition, the book offers a number of appendices to provide the reader with information on topics not discussed in the main text.
Numerous problems with worked-out solutions, chapter introductions and summaries, together with a clear and application-oriented style support the reader. Ready to use C++ codes are provided online.
- DOI https://doi.org/10.1007/978-3-319-27265-8
- Copyright Information Springer International Publishing Switzerland 2016
- Publisher Name Springer, Cham
- eBook Packages Physics and Astronomy
- Print ISBN 978-3-319-27263-4
- Online ISBN 978-3-319-27265-8
- Buy this book on publisher's site | <urn:uuid:fd1be11f-5519-44d1-8ea4-1e04262b3035> | 2.65625 | 320 | Product Page | Science & Tech. | 35.3925 | 95,545,688 |
The Plateau Problem and the Partially Free Boundary Problem for Minimal Surfaces
The remainder of this book is essentially devoted to boundary value problems for minimal surfaces. The simplest of such problems was named Plateau’s problem, in honour of the Belgian physicist J.A.F. Plateau, although it had been formulated much earlier by Lagrange, Meusnier, and other mathematicians. It is the question of finding a surface of least area spanned by a given closed Jordan curve Γ.
KeywordsMinimal Surface Branch Point Jordan Curve Soap Film Plateau Problem
Unable to display preview. Download preview PDF. | <urn:uuid:4600780e-80c9-4c14-ae02-633cbb88a67e> | 2.8125 | 133 | Product Page | Science & Tech. | 48.3675 | 95,545,689 |
A green sea turtle is most easily recognized by its top shell. The shell covers most of the animal’s body, except for its flippers and head. Despite its name, a green sea turtle's shell is not always green. The smooth, heart-shaped shell can be a blend of different colors, including, brown, olive, gray, or black. The underside is a yellowish-white color. The green sea turtle’s head has brown and yellow markings.
Green sea turtles have paddle-like limbs called flippers that allow the turtle to move quickly and easily through the water. These dense, heavy animals can reach three to four feet in length and weigh upward of 300 to 350 pounds (136 to 159 kilograms). Despite their size, they are still not the world's largest sea turtles—that title belongs to the leatherback sea turtle.
Green sea turtles are found around the world in warm subtropical and tropical ocean waters, and nesting occurs in over 80 countries. There are populations with different colorings and markings in the Atlantic, Indian, and Pacific Oceans. Though not well understood, these turtles are highly migratory and undertake complex movements and migrations.
Once a green sea turtle hatches and heads into the ocean, it rarely returns to land. Instead it feeds on offshore plant blooms around islands and beaches. Green sea turtles stay in shallow waters until the breeding season. Every time the females breed, they make a long migration back to their natal beach, or the beach where they were born. They will travel long distances, even across oceans, to return to their preferred breeding site.
In the United States, green sea turtles are most often seen on the Hawaiian Islands, Puerto Rico, the Virgin Islands, and the east coast of Florida. Less frequent nesting also occurs on the Atlantic coast in Georgia, South Carolina, and North Carolina.
Adult green sea turtles are herbivores. The jaw is serrated to help the turtle easily chew its primary food source—seagrasses and algae. Juvenile green sea turtles are omnivores. They eat a wide variety of plant and animal life, including insects, crustaceans, seagrasses, and worms.
The breeding season occurs in late spring and early summer. The males arrive in offshore waters first and wait for the females to come to the beaches. Adult males can breed every year, but females only breed every three to four years.
A few weeks after mating, a female green sea turtle arrives on the beach and digs a hole in the sand for her eggs. Inside the hole, she lays 75 to 200 eggs and covers the hole with sand. At this point, her role is complete, and she leaves her eggs to fend for themselves. A female green sea turtle can lay several clutches of eggs before she leaves the nesting grounds.
After about two months, the eggs hatch and the hatchlings make their way to the water. The newly hatched green sea turtles are very susceptible to predators, exposure, and losing their way. Birds, mammals, and other predators love feasting on the young turtles.
For green sea turtle hatchlings that reach the water, it takes at least 20 to 50 years to reach sexual maturity. A healthy individual can be expected to live 80 to 100 years.
Green sea turtles are an endangered species that have undergone an estimated 90 percent population decrease over the past half century. Climate change and habitat loss are threats to these animals, as well as diseases such as fibropapilloma. Light pollution near beach nesting sites poses a risk to sea turtle hatchlings, which may get confused and crawl toward the light instead of traveling to the ocean. Green sea turtles and their food also face overhunting, including for use in sea turtle soup.
Green sea turtles are protected by national and state laws, as well as international treaties, and the National Oceanic and Atmospheric Administration’s National Marine Fisheries Service conducts regular monitoring of green sea turtle populations. Restoration efforts are underway in places like the Gulf of Mexico, where nesting beaches are being restored and enhanced.
Many coastal communities in Florida have developed lighting ordinances to help more hatchlings reach the sea. Bycatch (accidental capture by commercial and sport fishermen) of green sea turtles is being reduced thanks to fishing gear modifications (such as the use of TEDs, or turtle exclusion devices), changes to fishing practices, and closures of certain areas to fishing during nesting and hatching seasons. However, according to the U.S. Fish & Wildlife Service, due to the long-range migratory movements of sea turtles between nesting beaches and foraging areas, long-term international cooperation is essential for the recovery and stability of nesting populations.
Green sea turtles received their name for the color of their body fat, which is green.
Animal Diversity Web, University of Michigan Museum of Zoology
Fish and Wildlife Research Institute, Florida Fish and Wildlife Conservation Commission
Office of Protected Resources, National Oceanic and Atmospheric Administration
U.S. Fish & Wildlife Service
Place your order today for the themed box that delivers everything you need to create family memories while discovering nature and wildlife.Read More
Find out what it means to source wood sustainably, and see how your favorite furniture brands rank based on their wood sourcing policies, goals, and practices.Read More
Climate change is allowing ticks to survive in greater numbers and expand their range—influencing the survival of their hosts and the bacteria that cause the diseases they carry.Read More
Tell your members of Congress to save America's vulnerable wildlife by supporting the Recovering America's Wildlife Act.Read More
You don't have to travel far to join us for an event. Attend an upcoming event with one of our regional centers or affiliates. | <urn:uuid:1727629d-6002-4c56-894c-2696f8cda169> | 3.59375 | 1,171 | Knowledge Article | Science & Tech. | 46.08838 | 95,545,700 |
In this section "What is JSF?" you will get detailed overview of JSF technology, which is ready to revolutionize the web application development process. JSF is complex system but you will be highly benefited by JSF technology. For example, you can easily make rich graphical Web application that can't be easily developed in HTML. Here we have tried to explain the JSF in easily understandable manner so that beginner can also understand easily.
What is JSF?
JSF is new standard framework, developed through Java Community Process (JCP), that makes it easy to build user interfaces for java web applications by assembling reusable components in a page. You can think of JSF framework as a toolbox that is full of ready to use components where you can quickly and easily add and reuse these components many times in a page and capture events generated by actions on these components. So JSF applications are event driven. You typically embed components in a jsp page using custom tags defined by JSF technology and use the framework to handle navigation from one page to another. Components can be nested within another component , for example, input box, button in a form.
JSF is based on well established Model-View-Controller (MVC) design pattern. Applications developed using JSF frameworks are well designed and easy to maintain then any other applications developed in JSP and Servlets.
JSF eases the development of web applications based on Java technologies. Here are some of benefits of using JSF:
JSF provides standard, reusable components for creating user interfaces for web applications.
JSF provides many tag libraries for accessing and manipulating the components.
It automatically saves the form data and repopulates the form when it is displayed at client side.
JSF encapsulates the event handling and component rendering logic from programmers, programmers just use the custom components.
JSF is a specification and vendors can develop the implementations for JSF.
There are many GUIs available these days to simplify the development of web based application based on JSF framework.
JSF includes mainly:
The UI (user interface) created using JSF technology runs on server and output is shown to the client.
Goal of JSF is to create web
applications faster and easier. Developers can focus on
UI components, events handling, backing beans and their interactions
rather than request, response and markup. JSF hides complexities to enable
developers to focus on their own specific work. | <urn:uuid:3be6f5fa-9ba3-4f98-aea8-755424c045a7> | 3.203125 | 503 | Knowledge Article | Software Dev. | 37.76482 | 95,545,718 |
Two buses leave at the same time from two towns Shipton and Veston on the same long road, travelling towards each other. At each mile along the road are milestones. The buses' speeds are constant. . . .
The triathlon is a physically gruelling challenge. Can you work out which athlete burnt the most calories?
In the diagram the radius length is 10 units, OP is 8 units and OQ is 6 units. If the distance PQ is 5 units what is the distance P'Q' ?
An article for teachers which discusses the differences between ratio and proportion, and invites readers to contribute their own thoughts.
A garrison of 600 men has just enough bread ... but, with the news that the enemy was planning an attack... How many ounces of bread a day must each man in the garrison be allowed, to hold out 45. . . .
Meg and Mo need to hang their marbles so that they balance. Use the interactivity to experiment and find out what they need to do.
Meg and Mo still need to hang their marbles so that they balance, but this time the constraints are different. Use the interactivity to experiment and find out what they need to do.
Mo has left, but Meg is still experimenting. Use the interactivity to help you find out how she can alter her pouch of marbles and still keep the two pouches balanced.
If it takes four men one day to build a wall, how long does it take 60,000 men to build a similar wall?
Mainly for teachers. More mathematics of yesteryear.
In the ancient city of Atlantis a solid rectangular object called a Zin was built in honour of the goddess Tina. Your task is to determine on which day of the week the obelisk was completed. | <urn:uuid:f12ec6c7-99c5-4973-abd3-ad26869191b4> | 3.265625 | 369 | Content Listing | Science & Tech. | 73.605631 | 95,545,725 |
“ There is only one thing more powerful and explosive than all the armies in the world, and that is an idea whose time has come.” — Victor Hugo
THE G-ENGINE HAS COME —
a propellantless quantum electrodynamic spacedrive.
Quantum Antigravity Space Propulsion is a type of propellantless propulsion, or field propulsion, that essentially stems from the Abraham force — a mechanical force of electromagnetic origin — and from the method of its amplification. It is capable, in principle, of propelling a spacecraft to near the speed of Light velocities.
The complete mathematical description of quantum antigravity will slowly come later in due time, in a manner similar to Faraday-Maxwell developments. After all, Thomas Edison didn’t need all the math of quantum mechanics, or of Einstein’s photoelectric effect, or of de Broglie’s wave–particle duality to make his lightbulb work.
For an interplanetary spaceship equipped with the G-Engine, i.e. near-the-speed-of-Light Propellantless Quantum Electrodynamics Space Drive, to be accelerated to near-the-speed-of-Light velocity, it needs to be powered. Solar power is good enough for slow orbital maneuvers. The spaceship could be powered by a thorium reactor, by a molten salt reactor, by a helium-3 reactor, by a Wendelstein stellarator, or by Taylor Wilson’s fusion reactor, in addition to a conventional nuclear reactor. All US Navy aircraft carriers and submarines built since 1975 are nuclear-powered. Some of them have up to 4 nuclear reactors on board. Since the last conventional aircraft carrier USS Kitty Hawk was decommissioned in May 2009, there have been only nuclear-powered aircraft carriers and submarines in the US Navy.
THE G-ENGINES ARE COMING, by Michael Gladych
Disclosure Project Statement by Bill Uhouse, Mechanical Avionics Engineer, Area 51 Disc Simulator Designer
I spent 10 years in the Marine Corps, and four years working with the Air Force as a civilian doing experimental testing on aircraft since my Marine Corps days. I was a pilot in the service, and a fighter pilot; fought in after the latter part of WWII and the Korean War Conflict, I was discharged as a Captain in the Marine Corps.
I didn’t start working on flight simulators until about – well the year was 1954, in September. After I got out of the Marine Corps, I took a job with the Air Force at Wright Patterson doing experimental flight-testing on various different modifications of aircraft.
While I was at Wright Patterson, I was approached by an individual who – and I’m not going to mention his name – [wanted] to determine if I wanted to work in an area on new creative devices. Okay? And, that was a flying disc simulator. What they had done: they had selected several of us, and they reassigned me to A-Link Aviation, which was a simulator manufacturer. At that time they were building what they called the C-11B, and F-102 simulator, B-47 simulator, and so forth. They wanted us to get experienced before we actually started work on the flying disc simulator, which I spent 30-some years working on.
I don’t think any flying disc simulators went into operation until the early 1960s – around 1962 or 1963. The reason why I am saying this is because the simulator wasn’t actually functional until around 1958. The simulator that they used was for the extraterrestrial craft they had, which is a 30-meter one that crashed in Kingman, Arizona, back in 1953 or 1952. That’s the first one that they took out to the test flight.
This ET craft was a controlled craft that the aliens wanted to present to our government – the U.S.A. It landed about 15 miles from what used to be an army air base, which is now a defunct army base. But that particular craft, there were some problems with: number one – getting it on the flatbed to take it up to Area 51. They couldn’t get it across the dam because of the road. It had to be barged across the Colorado River at the time, and then taken up Route 93 out to Area 51, which was just being constructed at the time. There were four aliens aboard that thing, and those aliens went to Los Alamos for testing.
They set up Los Alamos with a particular area for those guys, and they put certain people in there with them – people that were astrophysicists and general scientists – to ask them questions. The way the story was told to me was: there was only one Alien that would talk to any of these scientists that they put in the lab with them. The rest wouldn’t talk to anybody, or even have a conversation with them. You know, first they thought it was all ESP or telepathy, but you know, most of that is kind of a joke to me, because they actually speak – maybe not like we do – but they actually speak and converse. But there was only one who would [at Los Alamos].
The difference between this disc, and other discs that they had looked at was that this one was a much simpler design. The disc simulator didn’t have a reactor, [but] we had a space in it that looked like the reactor that wasn’t the device we operated the simulator with. We operated it with six large capacitors that were charged with a million volts each, so there were six million volts in those capacitors. They were the largest capacitors ever built. These particular capacitors, they’d last for 30 minutes, so you could get in there and actually work the controls and do what you had to – to gET the simulator, the disc to operate.
So, it wasn’t that simple, because we only had 30 minutes. Okay? But, in the simulator you’ll notice that there are no seat belts. Right? It was the same thing with the actual craft – no seat belts. You don’t need seat belts, because when you fly one of these things upside down, there is no upside down like in a regular aircraft – you just don’t feel it. There’s a simple explanation for that: you have your own gravitational field right inside the craft, so if you are flying upside down – to you – you are right side up. I mean, it’s just really simple, if people would look at it. I was inside the actual alien craft for a start-up.
There weren’t any windows. The only way we had any visibility at all was done with cameras or video-type devices [see the testimony of Mark McClandlish below]. My specialty was the flight deck and the instruments on the flight deck. I knew about the gravitational field and what it took to get people trained.
Because the disc has its own gravitational field, you would be sick or disoriented for about two minutes after getting in, after it was cranked up. It takes a lot of time to become used to it. Because of the area and the smallness of it, just to raise your hand becomes complicated. You have to be trained – trained with your mind, to accept what you are going to actually feel and experience.
Just moving about is difficult, but after a while you get used to it and you do it – it’s simple. You just have to know where everything is, and you [have] to understand what’s going to happen to your body. It’s no different than accepting the g-forces when you are flying an aircraft or coming out of a dive. It’s a whole new ball game.
Each engineer that had anything to do with the design was part of the start-up crew. We would have to verify all the equipment that we put in – be sure it [worked] like it [was] supposed to, etc. I’m sure our crews have taken these craft out into space. I’m saying it probably took a while to train enough of the people, over a sufficient time period. The whole problem with the disc is that it is so exacting in its design and so forth. It can’t be used like we use aircraft today, with dropping bombs and having machine guns in the wings.
The design is so exacting, that you can’t add anything – it’s got to be just right. There’s a big problem in the design of where things are put. Say, where the center of the aircraft is, and that type of thing. Even the fact that we raised it three feet so the taller guys could get in – the actual ship was extended back to its original configuration, but it has to be raised.
We had meetings, and I ended up in a meeting with an alien. I called him J-ROD (of course, that’s what they called him). I don’t know if that was his real name or not, but that’s the name the linguist gave him. I did draw a sketch, before I left, of him in a meeting. I provided it to some people and that was my impression of what I saw, an art picture of an alien that is working in cooperation with earth-people as told here. | <urn:uuid:da9fbd78-b01e-4c27-8ad7-3952b1bc473c> | 2.875 | 1,970 | Personal Blog | Science & Tech. | 60.546326 | 95,545,755 |
branch of geometry in which the fifth postulate of Euclidean geometry, which allows one and only one line parallel to a given line through a given external point, is replaced by one of two alternative postulates. Allowing two parallels through any external point, the first alternative to Euclid's fifth postulate, leads to the hyperbolic geometry developed by the Russian N. I. Lobachevsky in 1826 and independently by the Hungarian Janos Bolyai in 1832. The second alternative, which allows no parallels through any external point, leads to the elliptic geometry developed by the German Bernhard Riemann in 1854. The results of these two types of non-Euclidean geometry are identical with those of Euclidean geometry in every respect except those propositions involving parallel lines, either explicitly or implicitly (as in the theorem for the sum of the angles of a triangle).
In hyperbolic geometry the two rays extending out in either direction from a point P and not meeting a line L are considered distinct parallels to L; among the results of this geometry is the theorem that the sum of the angles of a triangle is less than 180°. One surprising result is that there is a finite upper limit on the area of a triangle, this maximum corresponding to a triangle all of whose sides are parallel and all of whose angles are zero. Lobachevsky's geometry is called hyperbolic because a line in the hyperbolic plane has two points at infinity, just as a hyperbola has two asymptotes. The analogy used in considering this geometry involves the lines and figures drawn on a saddleshaped surface.
In elliptic geometry there are no parallels to a given line L through an external point P, and the sum of the angles of a triangle is greater than 180°. Riemann's geometry is called elliptic because a line in the plane described by this geometry has no point at infinity, where parallels may intersect it, just as an ellipse has no asymptotes. An idea of the geometry on such a plane is obtained by considering the geometry on the surface of a sphere, which is a special case of an ellipsoid. The shortest distance between two points on a sphere is not a straight line but an arc of a great circle (a circle dividing the sphere exactly in half). Since any two great circles always meet (in not one but two points, on opposite sides of the sphere), no parallel lines are possible. The angles of a triangle formed by arcs of three great circles always add up to more than 180°, as can be seen by considering such a triangle on the earth's surface bounded by a portion of the equator and two meridians of longitude connecting its end points to one of the poles (the two angles at the equator are each 90°, so the amount by which the sum of the angles exceeds 180° is determined by the angle at which the meridians meet at the pole).
What distinguishes the plane of Euclidean geometry from the surface of a sphere or a saddle surface is the curvature of each (see differential geometry); the plane has zero curvature, the surface of a sphere and other surfaces described by Riemann's geometry have positive curvature, and the saddle surface and other surfaces described by Lobachevsky's geometry have negative curvature. Similarly, in three dimensions the spaces corresponding to these three types of geometry also have zero, positive, or negative curvature, respectively.
As to which of these systems is a valid description of our own three-dimensional space (or four-dimensional space-time), the choice can be made only on the basis of measurements made over very large, cosmological distances of a billion light-years or more; the differences between a Euclidean universe of zero curvature and a non-Euclidean universe of very small positive or negative curvature are too small to be detected from ordinary measurements. One interesting feature of a universe described by Riemann's geometry is that it is finite but unbounded; straight lines ultimately form closed curves, so that a ray of light could eventually return to its source.
See cosmology; relativity.
- See Euclidean and Non-Euclidean Geometry (1980). ,
- Non-Euclidean Geometry (1988). ,
The Elements of Euclid, formulated in the 3rd century BCE , for almost 2,000 years seemed to be the last word in geometry ; they gave a...
A form of geometry in which Euclid's postulates are not satisfied. In Euclidean geometry if two lines are both at right angles to a...
This dramatic story begins with a simple geometric scenario. Imagine a line l and a point P not on the line. How many lines can we draw through P pa | <urn:uuid:d8c6f5f3-c2e3-4f69-ae37-0bd13da51c96> | 3.96875 | 991 | Knowledge Article | Science & Tech. | 35.44769 | 95,545,767 |
The Great Pacific Garbage Patch may have met its match in this 21 year old
Boyan's taking the ocean clean-up timeline from 79,000 years to 20.
A 21-year old Dutch scientist and inventor, Boyan Slat, is getting ready to clean up the Great Pacific Garbage Patch.
He’s not heading out there with a scuba suit and and an armful of garbage bags.
His plan is more imaginative and a little more... oceanic.
What he’s dealing with
Each year, 8 million tons, or 16 billion pounds (7.2 billion kilograms) of plastic enter the world’s oceans. There are about 5.25 trillion pieces of plastic in the oceans today.
Each week, 2 Empire State Building’s worth of plastic enter the oceans.
Think about how much a piece of plastic weighs. Different kinds of plastic weigh different amounts (a soda bottle, a plastic grocery bag, a tupperware container), but they’re usually not that heavy. That’s part of plastic’s appeal: it’s durable yet easy to manipulate and use.
This seeming weightlessness, along with its ubiquity, leads people to use plastic for everything, while also treating it carelessly.
And that carelessness is why the oceans are choking with plastic: there’s so much of it being produced, and then so much it being thrown away rather than reused or recycled.
While it’s broadly understood that the oceans have a plastic problem, the actual scale of the problem is hard to fathom.
Some estimates of the Great Pacific Garbage Patch (the biggest of the plastic accumulations) say that it is twice the size of the US and about 9 feet deep.
Since plastic is so light, it is carried along by currents, eventually ending up in these massive rotating currents called gyres.
Past attempts to deal with the patch haven’t really gone anywhere. The problem is so massive that scientists generally consider it as waves of resignation wash over them. So most policymakers have advised an approach of crisis control: rather than dealing with all the existing plastic, limit how much plastic joins the patch in the future.
While enacting better recycling programs is essential, no effective solution can end here.
Plastic pollution in the oceans has too many consequences. It kills millions of creatures every year, poisons millions or billions of other creatures and causes billions in economic damages.
Plus, the longer plastic sits in the ocean, the more likely it will begin to deteriorate, turning into microplastic, which is far more difficult to remove and even more dangerous to animals.
As the ocean gets filled with plastic, vibrant ecosystems get hollowed out .
Boyan didn’t wake up one morning and start cleaning up the ocean. His plan has gone through years worth of refinement and outside collaboration. The enterprise is called The Ocean Cleanup .
By 2020, his team plans to get the program rolled out and will start removing plastic at an unprecedented rate.
His system essentially uses the rotating nature of the gyres to gradually drag the plastic to a v-shaped accumulation zone that does not disrupt marine life.
By 2030, the team plans to remove 42% of the Great Pacific Garbage Patch. By 2040, they predict the patch can be entirely cleared.
The 21 year old’s plan is revolutionary when compared to current efforts that would take 79,000 years to clear the Garbage Patch.
I think we can all agree that the oceans can’t wait that long. While Boyan and his team take care of what is in the ocean already, let’s stop adding to the problem.
5 Household Products That Are Slowly Destroying the Environment
And 5 things you should be using instead. Read More
Humanity Has Killed 83% of All Wild Mammals and Half of All Plants: Study
Of all the birds left in the world, 70% are poultry chickens and other farmed birds. Read More
Germany Is Planning Free Public Transit to Fight Air Pollution
This represents a surprising shift for Germany. Read More | <urn:uuid:29f4f1d9-0571-45f0-af6e-15a0ee544daa> | 3.65625 | 855 | Nonfiction Writing | Science & Tech. | 58.775301 | 95,545,769 |
NASA's Aqua satellite provided two different perspectives of Super Typhoon Nanmadol: a visible image and an infrared image. The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard Aqua captured the visible image of the storm over the Philippines at 12:50 a.m. EDT (4:50 UTC).
This infrared image of Super Typhoon Nanmadol's very cold cloud top temperatures points to where the strongest storms are (purple) within Nanmadol. The AIRS instrument on NASA's Aqua satellite captured this image on Aug. 26 at 12:47 a.m. EDT. The cloud mass at the eastern edge of the image is the western half of Tropical Storm Talas, which is very large in extent. Credit: NASA/JPL, Ed Olsen
The Atmospheric Infrared Sounder (AIRS) is the instrument on Aqua that captured an infrared image of the cloud top temperatures of Nanmadol and nearby Tropical Storm Talas on August 26 at 12:47 a.m. EDT. The AIRS infrared image revealed that the super typhoon has highly symmetrical bands of thunderstorms wrapping tightly into its eye, which is 18 nautical miles (21 miles/33 km) in diameter. Tropical Storm Talas is located to the northeast of Nanmadol.
At 11 a.m. EDT (1500 UTC) on August 26, Super Typhoon Nanmadol's maximum sustained winds were near 135 knots (155 mph/250 km/h) with higher gusts, placing it at the top end of Category Four status. Category Five typhoons have sustained wind speeds greater than 155 mph (135 knots).
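The wind figures above can be checked with a short conversion sketch (not part of the NASA advisory; the conversion constants are standard definitions, and the threshold used is the one the article itself cites):

```python
# Unit conversions for sustained wind speeds.
KNOT_TO_MPH = 1.15078   # 1 knot ≈ 1.15078 statute miles per hour
KNOT_TO_KMH = 1.852     # 1 knot = 1.852 km/h (exact, by definition)

def knots_to_mph(kt):
    """Convert a wind speed in knots to miles per hour."""
    return kt * KNOT_TO_MPH

def knots_to_kmh(kt):
    """Convert a wind speed in knots to kilometers per hour."""
    return kt * KNOT_TO_KMH

# Nanmadol's maximum sustained winds at 11 a.m. EDT on August 26:
wind_kt = 135
print(round(knots_to_mph(wind_kt)))   # ~155 mph, matching the advisory
print(round(knots_to_kmh(wind_kt)))   # ~250 km/h, matching the advisory

# The article's stated Category Five threshold is winds greater than
# 155 mph; 135 knots converts to about 155.4 mph, i.e. right at the
# boundary -- hence "the top end of Category Four status."
at_cat5_boundary = knots_to_mph(wind_kt) > 155
```

This is why forecasters describe 135 knots as the upper edge of Category Four: the mph conversion sits essentially on the Category Five line.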
Nanmadol was about 585 nautical miles (673 miles/1,083 km) south-southwest of Kadena Air Base, Japan, and northeast of Luzon, Philippines, where it was dropping heavy rainfall. Nanmadol was moving to the north-northwest at 6 knots (7 mph/11 km/h) and generating dangerous surf with wave heights reaching 32 feet (9.7 meters).
Forecasters at the Joint Typhoon Warning Center (JTWC) expect Nanmadol to intensify further into a Category Five Typhoon then gradually weaken. Nanmadol is expected to continue skirting Luzon, passing it on August 27, then passing to the east of Taiwan on August 28 and 29. Taiwan can also expect very rough seas, gusty winds and heavy downpours as Nanmadol passes by and heads to the northwest next week.
At the same time, and much farther to the northeast, Tropical Storm Talas had maximum sustained winds near 45 knots (52 mph/83 kmh). It was located about 185 nautical miles (212 miles/342 km) south-southwest of the island of Iwo To, Japan, near 22.3 North and 139.8 East. It was moving to the north-northwest near 6 knots (7 mph/11 kmh) and also generating rough seas, 22 feet (6.7 meters) high. The AIRS infrared data showed bands of strong convection wrapping around the northeastern edge of the center, indicating strengthening.
The JTWC forecast calls for Talas to steadily intensify over the weekend because of warm sea surface temperatures and favorable upper level atmospheric conditions. Talas is expected to take a more northerly track and pass just to the west of Iwo To over the weekend, and past Chichi Jima on Monday, August 29.
It is going to be a busy weekend in the western North Pacific Ocean with strengthening Super Typhoon Nanmadol and a strengthening Tropical Storm Talas.
Rob Gutro | EurekAlert!
Species Detail - Dark-barred Twin-spot Carpet (Xanthorhoe ferrugata) - Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
Dark-barred Twin-spot Carpet
insect - moth
6 April (recorded in 2011)
13 September (recorded in 2009)
National Biodiversity Data Centre, Ireland, Dark-barred Twin-spot Carpet (Xanthorhoe ferrugata), accessed 18 July 2018, <https://maps.biodiversityireland.ie/Species/78708>
Authors: M. Potter
Affiliation: BiSen Tech LLC, United States
Pages: 147 - 150
Keywords: pathogen, detection, real-time, bio-sensor
Current methods for detecting water-borne pathogens require a laboratory setting, analyte preparation and typically a day or more. The author, a solid-state device physicist, has invented a methodology capable of real-time (1-2 second) detection of water-borne pathogens without analyte preparation and without human intervention. The patent-protected methodology combines a novel design of field effect transistor with the developing science of molecular probes to create a sensor of extremely high (single-cell) sensitivity. Transistors can be sized for various pathogens, from protozoa to bacteria to viruses, and then functionalized for targeted pathogens. The sensor is self-regenerating: after a short period of time, a transistor is again available for detection of a targeted pathogen. BiSen's sensor and methodology are suitable for everything from hand-held portable devices to large fixed installations. Every 8 seconds someone in the world, most likely a child, dies from a water-borne illness. One in six Americans experiences food poisoning each year. BiSen's sensor is the only sensor capable of acting as a sentinel to prevent contaminated water and food from ever reaching consumers.
Nanotech Conference Proceedings are now published in the TechConnect Briefs
Extraterrestrial Helium in Seafloor Sediments: Identification, Characteristics, and Accretion Rate Over Geologic Time
Almost 40 years after the discovery of extraterrestrial helium in seafloor sediments, renewed attention is being focused on using helium as a proxy for the sedimentary abundance of extraterrestrial debris. Extraterrestrial He is carried to the seafloor by the finest fraction of interplanetary dust and is retained in at least some sediments for hundreds of millions of years. Helium isotope systematics uniquely identify the extraterrestrial component, which is apparently hosted within magnetite and silicate grains. In some sediments 3He is completely derived from this source; in others the extraterrestrial fraction can be computed from the measured 3He/4He ratio. Variations in the sedimentary concentration of extraterrestrial 3He must reflect both changes in sedimentation rate and fluctuations in the accretion rate of 3He from space. When changes in sedimentation rate can be controlled for, variations in extraterrestrial 3He can be related to changes in the accretion rate of interplanetary dust particles (IDPs) arising from major solar system events, including asteroid collisions and enhanced cometary activity. A 3He record in sediments spanning the last 70 Myr provides insights into such events, including the first compelling evidence for a shower of long-period comets at about 35 Ma.
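The "extraterrestrial fraction computed from the measured 3He/4He ratio" is a standard two-endmember isotope mixing calculation. The sketch below is illustrative: the endmember ratios (an IDP-like value for `r_et` and a radiogenic value for `r_terr`) are assumptions for demonstration, not values taken from this paper.

```python
# Hedged sketch of two-endmember mixing: the fraction of sedimentary 3He
# carried by the extraterrestrial component, from the bulk 3He/4He ratio.
# Endmember ratios below are illustrative assumptions, not paper values.
def et_he3_fraction(r_meas, r_et=2.4e-4, r_terr=2e-8):
    """Fraction of total 3He that is extraterrestrial.

    From mass balance on 3He and 4He in a two-component mixture:
    f = r_et * (r_meas - r_terr) / (r_meas * (r_et - r_terr)).
    """
    return r_et * (r_meas - r_terr) / (r_meas * (r_et - r_terr))

# A sediment whose measured ratio equals the ET endmember is all ET 3He:
print(et_he3_fraction(2.4e-4))  # -> 1.0
# A lower measured ratio implies dilution by terrigenous 4He:
print(et_he3_fraction(1e-5))
```

Because the terrigenous 3He/4He endmember is so low, even sediments with bulk ratios well below the IDP value can still carry mostly extraterrestrial 3He, which is why 3He works as a sensitive IDP proxy.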
Keywords: Accretion Rate, Late Eocene, Oort Cloud, Mass Accumulation Rate, Interplanetary Dust
What is Retention Time?
Jul 31 2014
Retention time is the amount of time a compound spends on the column after it has been injected. If a sample contains several compounds, each compound will spend a different amount of time on the column according to its chemical composition, i.e. each will have a different retention time. Retention times are usually quoted in units of seconds or minutes.
A component's retention time is determined by the equilibrium constant (K) if all other factors are kept constant. In GC, specifically gas-liquid chromatography, there are two phases, namely:
- Mobile phase – usually a gas such as helium
- Stationary phase – a high boiling point liquid adsorbed onto a solid
A vaporised sample is injected into the head of the GC column, which contains a liquid stationary phase, adsorbed onto the surface of an inert solid. The inert solid support (usually diatomaceous earth or clay) is necessary to keep the liquid phase stationary in the column. The speed with which a particular compound travels through the column depends on how much of its time is spent moving with the gas as opposed to being attached to the liquid. Materials that prefer the stationary phase have longer retention times than those that prefer the mobile phase.
The equilibrium constant, K, is defined as the molar concentration of analyte in the stationary phase divided by the molar concentration of the analyte in the mobile phase. A high value of K means the compound is more soluble in the liquid phase than in the gas phase. K is temperature dependent.
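As an illustrative sketch of how K translates into retention time: K is commonly converted to the retention factor k via the column phase ratio beta (mobile-phase volume over stationary-phase volume), and t_R = t_m(1 + k), where t_m is the hold-up time. The values of beta and t_m below are assumed example numbers, not quantities from this article.

```python
# Minimal sketch of how the equilibrium constant K maps to retention time.
# k = K / beta, where beta is the phase ratio; t_m is the hold-up time.
# beta and t_m below are illustrative assumptions, not measured values.
def retention_time(K, beta=250.0, t_m_s=60.0):
    """t_R = t_m * (1 + K / beta): larger K -> longer retention."""
    k = K / beta          # retention factor
    return t_m_s * (1 + k)

print(retention_time(250))   # K = beta -> k = 1 -> 120.0 s
print(retention_time(1000))  # more soluble in the liquid phase -> 300.0 s
```

A compound with K = 0 (no interaction with the liquid phase) elutes at the hold-up time itself, which matches the intuition that retention beyond t_m comes entirely from time spent in the stationary phase.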
Polar or Non-Polar Stationary Phase
One of the key factors when setting up a GC method is to choose the polarity of the stationary phase. The polarity is chosen using knowledge of the sample matrix and what separation is required. If the polarity of the target compound and the stationary phase are similar, then there is likely to be a greater interaction between the two. Consequently, the retention time will be longer for polar compounds on polar stationary phases and shorter on non-polar stationary phases.
What Other Factors Affect RT?
Boiling point
- If a component has a low boiling point, then it is likely to spend more time in the gas phase, so its retention time will be lower than that of a compound with a higher boiling point. A compound's boiling point can be related to its polarity.
Column temperature
- A high column temperature will give shorter retention times, as more of each component stays in the gas phase, but this can result in poor separation. For better separation, the components have to interact with the stationary phase.
Carrier gas flow-rate
- A high flow rate lowers retention times but also yields a poor separation.
Column length
- A longer column will produce longer retention times but better separation. Unfortunately, if a component has too long a transit time in the column, there can be a diffusive effect that causes the peak width to broaden.
All these factors must be considered to determine the GC parameters that will produce the best separation in a reasonable time. For an in-depth discussion of the factors affecting retention time and separation refer to the article: Optimisation of Column Parameters in GC.
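The temperature effect described above can be sketched with a van 't Hoff-style model of the retention factor, ln k = -dH/(RT) + c, where dH < 0 for exothermic transfer into the stationary phase. The enthalpy dH and constant c below are illustrative assumptions, not measured values.

```python
# Hedged sketch of why a hotter column shortens retention: the retention
# factor follows a van 't Hoff relation, ln k = -dH/(R*T) + c.
# dH and c are illustrative assumptions, not measured values.
import math

R = 8.314  # gas constant, J/(mol*K)

def retention_factor(T_kelvin, dH=-40e3, c=-10.0):
    """k(T) = exp(-dH/(R*T) + c); dH < 0 means k falls as T rises."""
    return math.exp(-dH / (R * T_kelvin) + c)

def retention_time_s(T_kelvin, t_m_s=60.0):
    return t_m_s * (1 + retention_factor(T_kelvin))

# Raising the oven temperature from 100 C to 150 C lowers k and hence t_R:
print(retention_time_s(373) > retention_time_s(423))  # True
```

This is the trade-off the article describes: the hotter column elutes everything faster, but with less time interacting with the stationary phase the separation suffers.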
M7 is one of the most prominent open clusters of stars on the sky. The cluster, dominated by bright blue stars, can be seen with the naked eye in a dark sky in the tail of the constellation of the Scorpion (Scorpius). M7 contains about 100 stars in total, is about 200 million years old, spans 25 light-years across, and lies about 1000 light-years away. The above deep image, taken last June from Hungary through a small telescope, combines over 60 two-minute exposures. The M7 star cluster has been known since ancient times, being noted by Ptolemy in the year 130 AD. Also visible are a dark dust cloud and literally millions of unrelated stars towards the Galactic center.
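A quick back-of-envelope check of the figures quoted above: a cluster 25 light-years across at about 1000 light-years away should span roughly 1.4 degrees on the sky, consistent with a naked-eye object. The sketch below assumes only simple angular-size geometry.

```python
# Back-of-envelope angular size of M7 from the quoted figures:
# 25 light-years across at ~1000 light-years distance.
import math

def angular_diameter_deg(size_ly, distance_ly):
    """Full angular diameter, in degrees, of an object of the given size."""
    return math.degrees(2 * math.atan(size_ly / 2 / distance_ly))

print(round(angular_diameter_deg(25, 1000), 1))  # ~1.4 degrees
```

At roughly 1.4 degrees, M7 covers an area a few full-Moon widths across, which is why it is so prominent to the naked eye in a dark sky.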
This picture originally appeared at NASA.
Image Credit & Copyright: Lorand Fenyes