In this section, we are going to talk about Angular Elements. Angular Elements is extremely helpful for building cross-framework components. Consider a case where you are writing a shared library in Angular and you would like to use the same library in React. Initially, cross-framework communication was a pain, but with the introduction of Angular Elements it is fairly simple and easily achievable. In a nutshell, it means something like:
Web components are based on existing web standards. Features to support web components are currently being added to the HTML and DOM specs, letting web developers easily extend HTML with new elements with encapsulated styling and custom behavior.
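To make this concrete, here is a minimal sketch (in TypeScript) of wrapping an Angular component as a custom element with the @angular/elements package; the component, module, and tag names below are illustrative placeholders rather than part of the original post:

import { Injector, NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { createCustomElement } from '@angular/elements';
import { GreetingComponent } from './greeting.component'; // hypothetical component

@NgModule({
  imports: [BrowserModule],
  declarations: [GreetingComponent],
})
export class AppModule {
  constructor(private injector: Injector) {}

  ngDoBootstrap(): void {
    // Convert the Angular component into a standard custom element class...
    const GreetingElement = createCustomElement(GreetingComponent, {
      injector: this.injector,
    });
    // ...and register it with the browser. Any framework (React included)
    // can now render <my-greeting> like a native HTML tag.
    customElements.define('my-greeting', GreetingElement);
  }
}

Once this bundle is loaded on a page, a React component can simply return <my-greeting></my-greeting> in its JSX, with no Angular-specific glue code.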
The RISC Algorithm Language (RISCAL)
July 17, 2018: RISCAL 2.1.0 (visualization of evaluation trees)
- JKU Linz Student Projects and Theses
- See here how to join the further development of RISCAL.
The RISC Algorithm Language (RISCAL) is a specification language and associated software system for describing mathematical algorithms, formally specifying their behavior based on mathematical theories, and validating the correctness of algorithms, specifications, and theories by execution/evaluation. The software has been implemented in Java; it is freely available under the terms of the GNU GPL.
Take a look at this video presentation and this paper.
Virtual Machine: You can download a pre-configured virtual Linux machine (for the free VirtualBox virtualization software) with RISCAL for execution under MS Windows, Mac OS, or Linux: Virtual Machine with RISCAL.
RISC Users: RISCAL is installed in the RISC environment; the installation of the software, with sample specifications, is available in "/software/RISCAL". To start the software, execute:
module load RISCAL
RISCAL &
Virtual machine users: log in as user "guest" with password "guest", double-click the "Terminal" icon, then execute "RISCAL &".
- Download RISCAL
- This includes all the files for running the software on GNU/Linux x86 computers (32-bit or 64-bit); for others, the appropriate version of the Standard Widget Toolkit (SWT) has to be downloaded and installed.
See the files README,
- Tutorial and Manual [HTML | PDF]
- This is the user documentation of the software.
- This is a web interface to the Subversion repository that holds the source code of the program. The repository can be read anonymously by any Subversion client from the URL
- Video Presentation
- A slide-based video presentation on RISCAL.
- Publications and reports on RISCAL.
- Further slide presentations on RISCAL.
Last modified: March 29, 2018
What is a brown tree snake?
The mildly venomous brown tree snake (Boiga irregularis) is an introduced species on some Pacific islands that has become a serious pest, especially on Guam. In the absence of natural population controls and with vulnerable prey on Guam, the snakes have become an exceptionally common pest causing major ecological and economic problems. The snakes probably arrived on Guam hidden in ship cargo from the New Guinea area. By 1968, they had dispersed throughout the island and caused havoc by virtually wiping out Guam's native bird species and helping decimate its fruit bat populations. In addition to Guam, brown tree snakes have been sighted on Saipan, Tinian, Rota, Kwajalein, Wake, Oahu, Pohnpei, Okinawa, and Diego Garcia. To date, this snake is not known to be established on any of these islands except Guam.
The brown tree snake (Boiga irregularis) is an invasive species that has caused great ecological and economic damage on Guam. (Photo: U.S. Geological Survey)
Reference: Coral Reef Information System – Glossary
Too many sheep in Britain's uplands could be responsible for the decline of some native birds, according to research published today in the journal Biology Letters.
The research, led by Dr Darren Evans from the Centre for Ecology & Hydrology in Banchory, provides the first hard evidence of a link between increased sheep grazing and the breeding patterns of the meadow pipit, Britain’s most common upland bird.
The researchers examined the effects of sheep grazing on egg sizes of these ground-nesting birds. They varied sheep numbers in an upland field experiment and found that areas with high sheep numbers had meadow pipit nests with the smallest eggs, and that areas with low sheep numbers had nests with the largest eggs.
Marion O’Sullivan | alfa
How to Use a Meta Redirect
By Stephen Bucaro
A Meta Redirect is a META tag that, when a visitor arrives at your webpage, automatically redirects their browser to a different webpage. There are several reasons why you might want to use a meta redirect. You might need to take a webpage offline for a short time while you edit it, or you might want to delete the webpage without losing the traffic it receives.
<meta http-equiv="refresh" content="0;url=http://www.domain.com/newpage.htm">
Shown above is example code for a meta redirect. You would simply paste this code into the <head> section of your webpage. The meta redirect has two parts. The first part is the http-equiv="refresh" attribute. The second part is the content attribute. This is where you enter the url that you want to redirect the browser to.
Note that the content attribute has two parts separated by a semicolon. The first part is a digit that tells the browser the number of seconds to delay before redirecting. Note in the example above that the digit zero causes the browser to redirect immediately. In this case, the visitor may not even see the webpage being redirected.
Some webmasters, wanting to be extremely courteous to their visitors, will place a message on the webpage being redirected. The message might say "Please wait while we redirect you to the new page". Sometimes the message will be accompanied by a link to the new webpage in case the visitor's browser doesn't follow the redirect or the visitor is impatient and wants to go to the new webpage immediately.
In the case where you want to display a message before the redirect, you would set the delay digit to 5 or 10 to give the visitor some time to read the message before the webpage redirects.
Be aware that search engines do not like meta redirects. When a search engine encounters the meta refresh directive, it ignores the webpage containing the redirect and it does not follow the url to the new page. This is because in the past meta redirects have been used to show the search engine one page while showing a different page to visitors.
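Putting the pieces together, below is a minimal sketch of such a "courteous" redirect page. The 10-second delay, page title, and link text are illustrative placeholders; the target URL is the one from the example above.

<!DOCTYPE html>
<html>
<head>
<!-- Redirect to the new page after a 10 second delay -->
<meta http-equiv="refresh" content="10;url=http://www.domain.com/newpage.htm">
<title>Page Moved</title>
</head>
<body>
<p>Please wait while we redirect you to the new page.</p>
<!-- Fallback link in case the browser doesn't follow the meta refresh -->
<p><a href="http://www.domain.com/newpage.htm">Go to the new page now</a></p>
</body>
</html>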
More HTML Code:
• HTML Numbered or Ordered List
• How to Write and Run HTML
• HTML Horizontal Rule
• Text Features
• Setting the Number of Items Visible in a Select List
• HTML5 role Attribute
• Code For a Basic 2-Column Fluid Webpage Layout
• Nesting HTML Lists
• Set Form Controls Tab Order With tabindex Attribute
• Aligning an Image on Your Web Page
See attachment for problem, please.
4. Since the system is motionless before the string is burned, by conservation of momentum the total momentum of the blocks must remain zero after the string is burned, i.e. we have
m1 v1 + m2 v2 = 0.
Thus we have
v1 = -m2 v2/m1 = -(7.6)(4.6)/(0.28) = -124.9 m/s,
the minus sign meaning that v1 points to the left as shown. Now by conservation of energy, the potential energy E stored in the spring before the string is burned must be equal to the total kinetic energy of the blocks after the string is burned, i.e.
E = 1/2 (m1 v1^2 + m2 v2^2)
= 1/2[(0.28)(124.9)^2 + (7.6)(4.6)^2]
= 2260 J
All units are MKS, so there is no need to write them down in the calculation.
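As a quick numerical sanity check of the arithmetic above (a sketch in TypeScript; the values are the given m1 = 0.28 kg, m2 = 7.6 kg, v2 = 4.6 m/s):

const m1 = 0.28, m2 = 7.6, v2 = 4.6;           // given values (MKS units)
const v1 = -(m2 * v2) / m1;                    // momentum conservation: m1*v1 + m2*v2 = 0
const E = 0.5 * (m1 * v1 ** 2 + m2 * v2 ** 2); // spring PE converted to total KE
console.log(v1.toFixed(1) + " m/s");           // -124.9 m/s
console.log(E.toFixed(0) + " J");              // 2263 J, consistent with ~2260 J above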
2. Note: The answer to this problem is not well-defined because it depends on where on the incline the portion with friction is located. We will assume that it is located at the top of the incline. Otherwise, the answer will be greater since the block will enter the portion with friction later and thus have a greater speed when decelerated by friction, and thus this constant deceleration will occur over a shorter period of time, slowing down the block less.
Let v1 be the speed of the block immediately after it leaves the spring and let v2 be its speed immediately after it leaves the portion of the incline with friction. By conservation of energy, we have
1/2 m v1^2 = 1/2 k x^2,
where x = 0.35 m is the initial compression of the spring. Thus we have
v1 = x sqrt(k/m) = (0.35)sqrt(310/2) = 4.36 m/s.
Now as ...
The solution solves various problems in Newtonian mechanics.
NASA Awards Grant to Develop Platform for Detecting Amino Acids
News Sep 08, 2015
Purnendu “Sandy” Dasgupta, Hamish Small Chair in Ion Analysis of Chemistry and Biochemistry in the UT Arlington College of Science, has been awarded almost $1 million from NASA to further the search for amino acids, the so-called building blocks of life.
Dasgupta will do that by extending a platform that he developed to detect and separate ions, called an open-tubular capillary chromatography platform. The method uses very small volumes of samples that are injected into tubes extremely small in diameter, between 10 and 25 microns.
“To give you some perspective, the finest human hair is about 100 microns. This is about one-tenth of that. You can’t see the holes in those tubes,” Dasgupta said. “We have to be much more specific with amino acids than when looking for inorganic ions. But you want the scale to be as small as possible, requiring as little power as possible, and consuming as little material as possible. All of those things have to be carried out to space, and every little bit of weight, volume and power, is expensive.”
Dasgupta concedes that his is an ambitious goal and that researchers will have to be much more specific than when looking for ions.
“If we find amino acids, that doesn’t necessarily prove it’s related to life,” he said. “One thing that it fairly unambiguously proves is that it was associated with some life process if the amino acids are dominantly of one chiral form.”
Amino acid molecules are chiral, that is, they have a rotational orientation. The rotational orientation of the molecules can be either right-handed or left-handed. All humans are composed of amino acids made of only one rotational orientation. Dasgupta said amino acids synthesized in a flask will be composed of equal amounts of both orientations, whereas something coming from a living system wholly or dominantly contains only one orientation (one chiral form).
“This double helix of DNA, for example, makes other DNA molecules by complementing each other,” Dasgupta said. “So, it’s like a mold. That handedness is preserved. Life is centered on one type of chirality. Our objective, if we can detect amino acids, is to separate the amino acids into chiral forms.”
“That means we’ll be able to tell whether we have an excess of one chiral form over another, or dominantly just one chiral form. In that case, it would definitely be related to life.”
Morteza Khaledi, dean of the UT Arlington College of Science, said Dasgupta’s research extends the boundaries of analytical chemistry to outer space.
He will tackle the challenging problems of designing a portable system for liquid-based separation and detection of chiral amino acids in remote and rather hostile environments, Dean Khaledi said.
“This is quite an exciting and important research from both fundamental analytical chemistry as well as potential for important discoveries in our search for evidence of life, as we know it, outside of our planet,” he said.
The grant is the latest in a series of recent national and international awards and honors that Dasgupta has earned for his work in chromatography.
"UT Arlington has a distinguished history of working with NASA on various space missions," said Duane Dimos, UT Arlington vice president for research. "This recent grant is a great example of taking advantage of Dr. Dasgupta's capabilities to contribute once more to the NASA mission."
Invasive species in New Zealand
A number of introduced species, some of which have become invasive species, have been added to New Zealand's native flora and fauna. Both deliberate and accidental introductions have been made from the time of the first human settlement, with several waves of Polynesian people at some time before the year 1300, followed by Europeans after 1769.
Almost without exception, the introduced species have been detrimental to the native flora and fauna but some, such as farmed sheep and cows and the clover upon which they feed, now form a large part of the economy of New Zealand. Registers, lists and indexes of species that are invasive, potentially invasive, or a threat to agriculture or biodiversity are maintained by Biosecurity New Zealand.
Many invasive animal species are listed in schedules 5 and 6 of the Wildlife Act 1953. Those in Schedule 5 have no protection and may be killed. Those in Schedule 6 are declared to be noxious animals and subject to the Noxious Animals Act 1956. In 2016 the New Zealand government introduced Predator Free 2050, a project to eliminate all non-native predators (such as rats, possums and stoats) by 2050.
Some of the invasive animal species are as follows.
The National Pest Plant Accord, with a listing of about 120 genera, species, hybrids and subspecies, was developed to limit the spread of plant pests. Invasive plants are classified as such on a regional basis, with some plants declared national plant pests. The Department of Conservation also lists 328 vascular plant species as environmental weeds.
Some of the better-known invasive plant species are:
- Acacia species (mostly Australian) especially wattle
- Acanthus - bears britches
- Arundo donax - giant reed (or elephant grass)
- Banana passionfruit
- Boxthorn (Lycium ferocissimum)
- Darwin's barberry (Berberis darwinii)
- Cape sundew (Drosera capensis)
- Broom (Cytisus scoparius)
- Buckthorn (Rhamnus alaternus)
- Californian thistle
- Cape tulip
- Christmasberry (Schinus terebinthifolius)
- Climbing asparagus (Asparagus scandens)
- Didymosphenia geminata ("didymo" or "rock snot")
- Field horsetail (Equisetum arvense)
- Japanese honeysuckle
- Jasmine (Jasminum polyanthum)
- Kahili ginger (Hedychium gardnerianum)
- Lodgepole pine (Pinus contorta)
- Mexican daisy
- Mexican devil (Ageratina adenophora)
- Morning glory (Convolvulus)
- Moth plant
- Old man's beard
- Oxygen weed (Egeria)
- Oxygen weed (Lagarosiphon major)
- Pampas grass (Cortaderia selloana)
- Purple loosestrife
- Queen of the night (Cestrum nocturnum)
- Rhododendron ponticum
- Salix fragilis (crack willow)
- Salix cinerea (gray willow)
- Scotch thistle
- Tradescantia fluminensis
- Sycamore (Acer pseudoplatanus)
- Yellow flag (Iris pseudacorus)
Animals in New Zealand
- Australian magpies in New Zealand
- Canada geese in New Zealand
- Cats in New Zealand
- Common brushtail possum in New Zealand
- Gypsy moths in New Zealand
- Stoats in New Zealand
Plants in New Zealand
- Agapanthus in New Zealand
- Blue morning glory in New Zealand
- Didymo in New Zealand
- Gorse in New Zealand
- Old man's beard in New Zealand
- Privet as an invasive plant
- Wilding conifer
- Biosecurity New Zealand NZ Government Agency responsible for biosecurity
- New Zealand Department of Conservation - animal pests
- New Zealand Department of Conservation - plant pests (weeds)
- Searchable database on unwanted organisms at the Ministry for Primary Industries
- Information on plant pests at Weedbusters
A new study has revealed a mechanism that counters established thinking on how the rate at which tectonic plates separate along mid-ocean ridges controls processes such as heat transfer in geologic materials, energy circulation and even biological production.
The study also pioneered a new seismic technique – simultaneously shooting an array of 20 airguns to generate sound – for studying the Earth's mantle, the layer beneath the 10- to 40-kilometer-deep crust on the seafloor. The research, led by the Georgia Institute of Technology with funding from the National Science Foundation (NSF), will be reported in the Dec. 9, 2004 issue of the journal Nature.
"Mid ocean ridges produce most of the volcanism on the Earth, releasing a lot of heat – in some places enough to support large biological communities on the seafloor," said Daniel Lizarralde, lead author of the Nature paper and an assistant professor in the Georgia Tech School of Earth and Atmospheric Sciences.
Jane Sanders | EurekAlert!
Unbiased estimates of mountain goat (Oreamnos americanus) populations are key to meeting diverse harvest management and conservation objectives. We developed logistic regression models of factors influencing sightability of mountain goat groups during helicopter surveys throughout the Cascades and Olympic Ranges in western Washington during summers, 2004–2007. We conducted 205 trials of the ability of aerial survey crews to detect groups of mountain goats whose presence was known based on simultaneous direct observation from the ground (n = 84), Global Positioning System (GPS) telemetry (n = 115), or both (n = 6). Aerial survey crews detected 77% and 79% of all groups known to be present based on ground observers and GPS collars, respectively. The best models indicated that sightability of mountain goat groups was a function of the number of mountain goats in a group, presence of terrain obstruction, and extent of overstory vegetation. Aerial counts of mountain goats within groups did not differ greatly from known group sizes, indicating that under-counting bias within detected groups of mountain goats was small. We applied Horvitz–Thompson-like sightability adjustments to 1,139 groups of mountain goats observed in the Cascade and Olympic ranges, Washington, USA, from 2004 to 2007. Estimated mean sightability of individual animals was 85% but ranged 0.75–0.91 in areas with low and high sightability, respectively. Simulations of mountain goat surveys indicated that precision of population estimates adjusted for sightability biases increased with population size and number of replicate surveys, providing general guidance for the design of future surveys. Because survey conditions, group sizes, and habitat occupied by goats vary among surveys, we recommend using sightability correction methods to decrease bias in population estimates from aerial surveys of mountain goats.
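As a rough illustration of the Horvitz–Thompson-like adjustment mentioned above (a TypeScript sketch; the group sizes and detection probabilities are made-up values, with the probabilities chosen inside the 0.75–0.91 range reported in the abstract):

interface ObservedGroup { size: number; pDetect: number; }

// Horvitz-Thompson-like estimate: weight each observed group by the inverse
// of its estimated detection probability (e.g. from a logistic sightability model).
function adjustedAbundance(groups: ObservedGroup[]): number {
  return groups.reduce((total, g) => total + g.size / g.pDetect, 0);
}

const groups: ObservedGroup[] = [
  { size: 3, pDetect: 0.91 }, // larger group in open terrain: high sightability
  { size: 1, pDetect: 0.75 }, // single animal under overstory vegetation: low sightability
];
console.log(adjustedAbundance(groups).toFixed(1)); // 4.6 goats estimated from 4 seen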
As water is pulled into an opening by gravity, it begins to spin. Why does it spin?
Because angular momentum from the initial state of the water is preserved. It's the same thing a skater uses: start in an open, slow spin, then pull the arms in to go into a closed, tight spin.
Actually, there are two regimes of spinning with two different speeds of spinning: immediately after opening a hole in the bath and about a minute later.
Immediately after opening the hole, conservation of angular momentum already works, and one may see very slow spinning far from the hole and faster spinning close to the hole.
A minute later, the spinning becomes many times faster than at the very beginning. So, what is the reason for the fast spinning? Or what is the reason for the increase in the speed of spinning a minute later?
Angular momentum for a spinning object is mass times velocity times radius. As momentum is preserved and as the radius decreases (because of the water going out), the velocity must increase.
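Here is a tiny sketch (TypeScript) of that conservation argument; the radii and starting speed are illustrative values on the bath-tub scale discussed in this thread:

// For a parcel of fixed mass m, L = m * v * r is conserved, so v scales as 1/r.
function speedAtRadius(v0: number, r0: number, r: number): number {
  return (v0 * r0) / r; // from m * v0 * r0 = m * v * r
}

// A parcel drifting at 1 cm/s at r = 30 cm spins at 10 cm/s by r = 3 cm.
console.log(speedAtRadius(1, 30, 3)); // 10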
That's correct, but that was an answer to a different question.
Consider the numerical example.
R = 30 cm, r = 3 cm.
Immediately after opening the hole in the bath, we have:
v(R) = 1 cm/sec, v(r) = 10 cm/sec
A minute later, BOTH of the speeds, the speed far from the funnel and the speed close to the funnel, become much larger:
v(R) = 12 cm/sec, v(r) = 120 cm/sec
The question is:
Why, a minute later, has the speed at a distance of 30 cm from the funnel increased from 1 cm/sec to 12 cm/sec? Why, a minute later, has the speed at a distance of 3 cm from the funnel increased from 10 cm/sec to 120 cm/sec?
I think that in this case what is decreased is the fluid mass, so it starts to spin faster... L = mvr. That's it.
[nitpick]In the case of water down a drain, angular momentum is not precisely conserved. The tub (and maybe gravity?) does exert torque on the water.[/nitpick] Accounting for that amount of torque, the rest of what has been said about angular momentum is correct.
In addition to the conservation of angular momentum there is also conservation of energy. As the water moves down into the drain there is some loss of PE. By conservation of energy you can also get an overall increase in KE in the tub depending on the KE of the water going down the drain. [nitpick]Of course, accounting for energy lost to viscous heating etc.[/nitpick]
There is a large bath, about 100 gallons of water, and a small hole, about 1 inch in diameter. A minute later there are still about 95 gallons of water. The decrease in the mass of water is only 5%, but the increase in the rotation speed of the whole funnel is about 1200%.
I feel like I've been hustled. Your OP belied the depth of your knowledge on the subject.
But I don't think the whole volume participates at that point. Due to inertia and friction I imagine you can consider the dynamics of a smaller volume of only a few gallons surrounding the drain.
I am not sure about gravity, but the tub, actually the bottom of it near the hole, exerts friction. So it should reduce the angular momentum. But actually, the angular momentum increases a minute after the beginning of the process.
Yes. There are two mechanisms of the speed increase as the water approaches the hole. The first one is that water goes closer to the vertical axis, momentum conservation and so on... The second one is that water goes to a lower level, PE => KE and so on... But the question was not about the speed increase as the water approaches the hole, but about the increase in the speed of the funnel as a whole a minute after the beginning of the process.
I am not satisfied with my knowledge of the subject... what I actually want is to find any effective measures against tornadoes and tropical storms that are too annoying in my lovely Florida. But in order to find something, I need deep understanding of rotation phenomena.
So, I am not satisfied with the hurricanes in Florida and not satisfied with the present knowledge of the subject...
You cannot consider part of the water in isolation from the rest. Viscous forces "connect" the water approaching the hole to the rest of the water in the funnel. The viscous forces are small, but not negligible. That is why, as you observed, it takes a rather large amount of time.
Consider only FIVE gallons of water surrounding the drain.
At t = 0 (or t = 10 sec), the funnel spins slowly.
At t = 1 min, the first five gallons are gone. There are another five gallons of water surrounding the drain. The funnel spins quickly. Why is the behavior of the next five gallons, which form the quickly spinning funnel, different from the behavior of the first five gallons, which formed the slowly spinning funnel?
That is exactly what I was thinking about, but I needed an independent opinion... thanks
The five gallons of water surrounding the drain is a very poor system to choose. It is not an isolated system and the boundaries and interactions are very difficult to define. You are much better off considering all of the water in the tub. That makes the boundaries much easier to define as well as the interactions with the surroundings.
Different initial conditions.
You are absolutely right!
And different boundary conditions!
I might be wrong, but I think as the rotation progresses, the viscous resistance decreases, letting the velocity increase. Just might be.
But one thing that kicks me is the fact that it rotates at all. Why does it rotate?? I have a large tank, full of water; I punch a hole in it, and the water, a little after, drops below forming a vortex. Why does it happen?? I asked this question all through my course, but didn't get any answer OR I am ultra stupid ;))
What exactly do you mean? The coefficient of viscosity decreases or the viscous resistance as a global phenomenon decreases at constant coefficient of viscosity?
I believe that the answer to the question "why does it rotate at all?" is the same as the answer to the question "why does it rotate faster and faster as the rotation progresses?".
So, there exists a mechanism that accelerates the spinning of the funnel as a whole. In such a situation, initial fluctuations of angular momentum are enough to develop global spinning until nonlinearity restricts it at some reasonable level.
This question has got me thinking. The Wikipedia page (http://en.wikipedia.org/wiki/Vortex) mentions that for a free vortex "The tangential velocity v varies inversely as the distance r from the center of rotation, so the angular momentum, rv, is constant". I believe that this is constant as a function of r, not as a function of t.
As you indicate the whole thing can start spinning faster, so something must be exerting a net torque on the fluid in the same direction as the angular momentum. The viscous shear forces should exert a net torque in the opposite direction, the normal forces in a symmetric vessel should not exert a net torque, and I can't see how gravity would exert a torque about a vertical axis.
Where's the torque?
EDIT: I cannot reproduce the "points far away start spinning faster" thing in my sink even though anecdotally I think I have seen such occurences. The drain plug may be interfering. The situation you described above, was that just hypothetical, or have you done such an experiment?
I don't think that precise measurements are possible in hydrodynamics. Deviations of about 1% from the 1/r law may 'save' the model. It takes about 1 minute to develop a stationary, fast-spinning funnel. In that time the liquid makes several hundred turns around the center of the vortex. So, the process of acceleration of the whole thing is comparatively slow. The torque required for such slow acceleration may cause deviations of about 1% or less from the exact 1/r law.
I think that Wikipedia describes a stationary, developed vortex. But during the process of slow acceleration the law may be a little bit different from 1/r, and again the model works.
I believe the torque is within 1% of the experimentally measured 1/r law.
The question is why the torque accelerates the whole thing. Friction due to walls and bottom must decelerate the whole thing.
First, you should remove the drain plug entirely and close the hole with your business card. Then wait 5 minutes until the water is at rest. After that, remove the business card using a piece of wire (a long needle would be best), moving it ALONG the bottom of the bath. After such a non-disturbing opening of the hole, you should get the funnel WITHOUT spinning for a minute or more.
Well, if there is no external torque in the right direction then the only way possible for the "far away" fluid to gain angular momentum is if the fluid going down the drain has less angular momentum per unit mass than the rest of the fluid. In the ideal irrotational vortex the angular momentum is uniform throughout the fluid, so you would get no such effect.
However, I don't know the derivation of the irrotational vortex equations, it could be that they are assuming no viscosity. If so then it would make sense that the innermost fluid would have the highest shear rates and therefore rotate slightly slower than the inviscid limit and therefore have less angular momentum than the bulk of the fluid.
Yes, that is like Cheshire Cat smile...
Try Foucault's pendulum - the Coriolis effect.
The effect is due to the rotation of the earth on its axis (of rotation...). We always subconsciously assume Earth is stationary when it simply isn't.
And it's caused by moving in toward that axis (ever so slightly with a Foucault pendulum as it falls), and likewise the opposite effect when it swings away from the Earth's axis. It's like trying to keep a straight line as you walk inwards towards the centre of a roundabout or carousel. The fact you're already rotating throws you to one side of the line that you're trying to keep.
You could also demonstrate the effect by dropping something vertically from 100 m at the equator. I think it should hit the ground about 1 mm to one side.
So basically I think there wouldn't be any torque if you were on a planet that doesn't spin.
The reason the vortex speeds up is probably the positive feedback effect that somebody earlier mentioned, i.e. it's because all the particles in the water are connected by intermolecular forces :-) and obviously a larger effect from surface tension. So what happens at the center has a knock-on effect on the water further out. Especially true for the water on the surface where molecular forces are stronger.
Microwave and millimeter wave high-power vacuum electron devices (VEDs) are essential elements in specialized military, scientific, medical and space applications. They can produce megawatts of power, which would equal the power of thousands of solid state power devices (SSPDs). Similarly, in most of today's T/R-modules of active phased array antennas for radar and electronic warfare applications, GaAs based hybrid and MMIC amplifiers are used. The early applications of millimeter-wave MMICs were in military, space and astronomy systems. In the last three decades, microwave remote sensing has shown a high potential in characterization of land surface parameters (soil moisture, vegetation biomass, water covers, etc.). In this context, a very rich activity has been developed to propose techniques (satellite, airborne, in situ) and methodologies to optimize the contribution of microwave remote sensing, in terms of precision, spatial, and temporal resolutions. Microwave Radar and Radiometric Remote Sensing provides you with theoretical models, system design and operation, and geoscientific applications of active and passive microwave remote sensing systems. It is aimed at both reviews and original research related to recent innovative microwave remote sensing instrumentation for land surface applications. Microwave remote sensing provides a unique capability towards achieving this goal. Over the past decade, significant progress has been made in microwave remote sensing of land processes through development of advanced airborne and space-borne microwave sensors, and the tools - such as physics-based models and advanced inversion algorithms - needed for analyzing the data. These activities have sharply increased in recent years since the launch of the ERS-1/2, JERS-1, and RADARSAT satellites, and with the availability of radiometric data from SSM/I. A new era has begun with the recent space missions ESA-ENVISAT, NASA-AQUA, and NASDA-ADEOSII, and the upcoming PALSAR and RADARSAT-2 missions, which open new horizons for a wide range of operational microwave remote-sensing applications. This book highlights major activities and important results achieved in this area over the past years.
This book demonstrates the capabilities of passive microwave technique for enhanced observations of ocean features, including the detection of (sub)surface events and/or disturbances while laying out the benefits and boundaries of these methods. It represents not only an introduction and complete description of the main principles of ocean microwave radiometry and imagery, but also provides guidance for further experimental studies. Furthermore, it expands the analysis of remote sensing methods, models, and techniques and focuses on a high-resolution multiband imaging observation concept. Such an advanced approach provides readers with a new level of geophysical information and data acquisition granting the opportunity to improve their expertise on advanced microwave technology, becoming now an indispensable tool for diagnostics of ocean phenomena and disturbances.
Principles of Synthetic Aperture Radar Imaging: A System Simulation Approach demonstrates the use of image simulation for SAR. It covers the various applications of SAR (including feature extraction, target classification, and change detection), provides a complete understanding of SAR principles, and illustrates the complete chain of a SAR operation. The book places special emphasis on a ground-based SAR, but also explains space and air-borne systems. It contains chapters on signal speckle, radar-signal models, sensor-trajectory models, SAR-image focusing, platform-motion compensation, and microwave-scattering from random media. While discussing SAR image focusing and motion compensation, it presents processing algorithms and applications for feature extraction, target classification, and change detection. It also provides samples of simulation on various scenarios, and includes simulation flowcharts and results that are detailed throughout the book. Introducing SAR imaging from a systems point of view, the author:
- Considers the recent development of MIMO SAR technology
- Includes selected GPU implementation
- Provides a numerical analysis of system parameters (including platforms, sensor, and image focusing, and their influence)
- Explores wave-target interactions, signal transmission and reception, image formation, motion compensation
- Covers all platform motion compensation and error analysis, and their impact on final image radiometric and geometric quality
- Describes a ground-based SFMCW system
Principles of Synthetic Aperture Radar Imaging: A System Simulation Approach is dedicated to the use, study, and development of SAR systems. The book focuses on image formation or focusing, treats platform motion and image focusing, and is suitable for students, radar engineers, and microwave remote sensing researchers.
Author: National Academies of Sciences, Engineering, and Medicine
Publisher: National Academies Press
Release Date: 2015-09-21
Active remote sensing is the principal tool used to study and to predict short- and long-term changes in the environment of Earth - the atmosphere, the oceans and the land surfaces - as well as the near space environment of Earth. All of these measurements are essential to understanding terrestrial weather, climate change, space weather hazards, and threats from asteroids. Active remote sensing measurements are of inestimable benefit to society, as we pursue the development of a technological civilization that is economically viable, and seek to maintain the quality of our life. A Strategy for Active Remote Sensing Amid Increased Demand for Spectrum describes the threats, both current and future, to the effective use of the electromagnetic spectrum required for active remote sensing. This report offers specific recommendations for protecting and making effective use of the spectrum required for active remote sensing.
Author: George P. Petropoulos
Publisher: CRC Press
Release Date: 2017-11-02
Extreme weather and climate change aggravate the frequency and magnitude of disasters. Facing atypical and more severe events, existing early warning and response systems become inadequate both in scale and scope. Earth Observation (EO) provides today information at global, regional and even basin scales related to agrometeorological hazards. This book will focus on drought, flood, frost, landslides, and storms/cyclones and will cover different applications of EO data used from prediction to mapping damages for each category. It will explain the added value of EO technology in comparison with conventional techniques applied today through many case studies.
Because prevailing atmospheric/tropospheric conditions greatly influence radio wave propagation above 10 GHz, the unguided propagation of microwaves in the neutral atmosphere can directly impact many vital applications in science and engineering. These include transmission of intelligence, and radar and radiometric applications used to probe the atmosphere, among others. Where most books address either one or the other, Microwave Propagation and Remote Sensing: Atmospheric Influences with Models and Applications melds coverage of these two subjects to help readers develop solutions to the problems they present. This reference offers a brief, elementary account of microwave propagation through the atmosphere and discusses radiometric applications in the microwave band used to characterize and model atmospheric constituents, which is also known as remote sensing. Summarizing the latest research results in the field, as well as radiometric models and measurement methods, this book covers topics including:
- Free space propagation
- Reflection, interference, polarization, and other key aspects of electromagnetic wave propagation
- Radio refraction and its effects on propagation delay
- Methodology of estimating water vapor attenuation using radiosonde data
- Knowledge of rain structures and use of climatological patterns to estimate/measure attenuation of rain, snow, fog, and other prevalent atmospheric particles and human-made substances
- Dual/multifrequency methodology to deal with the influence of clouds on radiometric attenuation
- Deployment of microwaves to ascertain various tropospheric conditions
- Composition and characteristics of the troposphere, to help readers fully understand microwave propagation
- Derived parameters of water, free space propagation, and conditions and variable constituents such as water vapor and vapor pressure, density, and ray bending
Two theoretical physicists at Rensselaer Polytechnic Institute have uncovered what they believe is the long-sought-after pathway that an HIV peptide takes to enter healthy cells. The theorists analyzed two years of biocomputation and simulation to uncover a surprisingly simple mechanism describing how this protein fragment penetrates the cell membrane. The discovery could help scientists treat other human illnesses by exploiting the same molecules that make HIV so deadly proficient.
The findings are detailed in the Dec. 26, 2007, issue of the Proceedings of the National Academy of Sciences (PNAS).
For the last decade, scientists have known that a positively charged, 11-amino-acid chain of HIV (the HIV-1 Tat protein) can do the nearly unthinkable: cross through the cell membrane. Sometimes referred to as an "arrow protein," HIV-1 Tat pierces the cell membrane and carries a cargo through it.
Its unique cell-puncturing ability has been the subject of hundreds of scientific articles investigating the type of materials that can piggyback on the peptide and also enter the cell. Researchers have proposed using the peptide to deliver genes for gene therapy and drugs that need to be delivered directly to a cell. But despite many potential medical applications, the actual mechanism that opens the holes in the cell remained undiscovered.
The Rensselaer researchers have discovered that the positively charged HIV peptide is drawn to negatively charged groups inside the cell membrane. When the HIV peptide cannot satisfy itself with the negative charges available on the cell membrane surface it is directly attached to, it reaches through the membrane to grab negatively charged groups in the molecules on the other side, opening a transient hole in the cell.
"What we saw in our computer calculations wasn't at all what we expected to see when we began," said co-lead author and Senior Constellation Professor of Biocomputation and Bioinformatics Angel Garcia. "The mechanism for entrance in the cell was clear in one simulation, but in some instances simulations show one result and you never see that result again. Then we started doing other simulations and it kept happening again and again."
Garcia and his collaborator, postdoctoral researcher Henry Herce, initially set out to uncover how the peptide interacts with a lipid bilayer that is used to model the cell membrane. A highly efficient biological system, the cell membrane is composed of a lipid bilayer (made up of two monolayers) designed to protect the cell by preventing the influx of material. Each lipid in the bilayer has a polar, or charged, end and a non-polar end. A monolayer of lipids faces the exterior of the cell, with the polar end facing the outside of the cell. Another monolayer is under the first layer, forming the bilayer. The polar end of the lower layer faces the interior of the cell, forming a middle section containing the uncharged halves of both monolayers.
Because charged particles seek each other in order to neutralize themselves and achieve a more stable state, the surface of the polar cell membrane and the positively charged HIV peptide are drawn to one another. But the interior of the bilayer is not charged and forms a strong barrier against the entrance of any charged material.
As was expected, in their simulations the researchers observed that the positive charges in the peptide quickly attached to the surface of the cell membrane and sought out and reacted with negatively charged phosphates from the charged portion of the lipid bilayer to satisfy their need for neutrality. "Then the peptide entered the forbidden territory of the cell," Garcia said. The positively charged peptide entered the membrane. "This is when this mechanism starts to challenge conventional wisdom," he said.
The researchers' model systems show the peptides grabbing for surrounding negative charges, but when no more of those charges are available due to their greedy peptide neighbors, some of the peptides reach into the cell membrane and grab negative charged phosphates from the other side. This opens a hole in the cell membrane and allows the flow of water and other material into the cell. Once all the peptides have been neutralized, the reaction stops and the hole closes, leaving behind a healthy, viable cell.
For the paper, the researchers reported a dozen different simulations run through a high-powered cluster of computers. Each simulation required a long process of testing and validating results. Garcia's computer cluster is now running simulations on the use of antimicrobial proteins which will open a pore in the cell and keep it open, killing the cell. Antimicrobial proteins have promising direct applications for killing harmful cells in the body.
Garcia hopes to harness the power of Rensselaer's newly opened Computational Center for Nanotechnology Innovations (CCNI), which houses the world's most powerful university-based supercomputing center. The CCNI will allow him to compile two years' worth of data on his normal cluster in just 10 to 20 days.
Gabrielle DeMarco | EurekAlert!
100 years of cosmic rays mystery
As physicists gather in early August to celebrate a century since the initial discovery of cosmic rays, Alan Watson, emeritus professor of physics at the University of Leeds, explains in this month’s Physics World how physicists have gradually revealed the nature of these mysterious objects and examines the progress being made in understanding where they come from.
It is now widely accepted that cosmic rays are the nuclei of atoms, from the entire range of naturally occurring elements, that travel at near-light-speeds for millions of years before reaching Earth. However, identifying the source of cosmic rays has proved to be a very difficult task.
The Pierre Auger Observatory – a 3,000 km² site in Argentina – is one of many institutions around the world scouring the universe for the source of cosmic rays and currently has 1,600 Cherenkov detectors in operation, each looking to find the source of cosmic-ray showers with extremely high energies.
This is in massive contrast to the techniques used by Austrian scientist Victor Hess, who was the first to discover cosmic rays on 7 August 1912 by travelling 5,000 m above ground in a balloon. He was awarded the Nobel Prize for Physics in 1936 for his efforts.
The story of cosmic rays started in the 1780s when French physicist Charles-Augustin de Coulomb noticed that an electrically charged sphere spontaneously lost its charge, which at the time was strange as scientists believed that air was an insulator, rather than a conductor.
Further investigations showed that air became a conductor when the molecules were ionized by charged particles or X-rays.
The source of these charged particles puzzled scientists as experiments revealed that objects were losing their charge even when shielded by a large volume of lead, which was known to block X-rays and other radioactive sources.
Hess was the first to discover that the ionization of air was three times greater at high altitudes than it was at ground level, leading him to conclude that there was a very large source of radiation penetrating our atmosphere from above.
In this feature, Watson states that there is an unexpected benefit stemming from Hess’s original cosmic-ray research: the designer of the communications system at the Pierre Auger Observatory has used the same sophisticated software to build a radio-based signalling system that now extends over 700 km of the single-track train line in the Scottish Highlands.
“The safety and reliability that rail travellers now enjoy while passing by lochs and through glens is a benefit from Hess’s daring flight a century ago that surely he could never have foreseen,” Watson writes. | <urn:uuid:8d2bd10c-9273-485b-9cfd-509c9348e9bf> | 4.0625 | 531 | Truncated | Science & Tech. | 24.716772 | 95,536,824 |
Water scarcity is a favourite topic for the headlines of doom, in company with overpopulation, climate chaos and nuclear war. The threat of a destabilising water crisis invariably wins attention in the annual Global Risks Reports published by the World Economic Forum.
In reality, our water woes reflect international political dysfunction as much as scarcity. If governments were more willing to collaborate in safeguarding the water cycle on which we all depend, and to share its beneficence fairly, they would discover that there is more than sufficient water to meet our needs.
In the absence of political leadership, water scares continue to populate the media. Scientists predict that the Middle East may become uninhabitable by 2100, largely on account of loss of freshwater. Residents of Cape Town, one of the world’s most affluent tourist destinations, spent the early months of 2018 counting down to “Zero Day”, the inauspicious moment when their taps would fail.
In 2014, the most recent year of data provided by the World Bank, global per capita availability of renewable freshwater from rivers, lakes, aquifers and rainfall averaged 6,000 cubic metres per annum, a potentially healthy figure, despite halving over the preceding 50 years. Only 9% of this resource is actually withdrawn, evidence of the worldwide abundance of freshwater.
However, freshwater is very unevenly distributed and scarcity is more realistically assessed within regions, countries or individual river basins. Seasonal variation can also be very significant. Amongst several approaches to a definition of water scarcity, there are two principal methods, each verifying that the reality experienced by millions of households is inconsistent with the statistics of global abundance.
The first approach considers the per capita availability of freshwater within a country or region, regardless of actual withdrawal. Availability of 1,000 cubic metres is regarded as the minimum necessary to meet the needs of households, agriculture, and industry – and to sustain local ecosystems.
A state of water scarcity exists below that threshold. Below 1,700 cubic metres, the less severe description of “water stress” applies. By way of illustration of regional extremes, renewable per capita freshwater availability in the US is over 8,800 cubic metres; in Jordan and Israel availability has fallen below 100 cubic metres. Water scarcity is most acute in the Middle East region where average availability is only a quarter of its level in 1962.
The second approach to a definition of water scarcity focuses on withdrawals as a proportion of available freshwater. Indicators approved for monitoring the Sustainable Development Goal for water adopt this method. A country that withdraws more than 25% of its availability is considered to be experiencing water stress; consumption of over 60% is classed as water scarcity.
There are 14 countries, nearly all in the Middle East, which consume more than 100% of their renewable water resources. Their aquifer levels are falling and they must seek alternative sources such as desalinisation.
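To make the two definitions concrete, here is a minimal sketch in Python that classifies a region under both approaches. The function names and the sample figures are illustrative assumptions, not official statistics; the thresholds are the ones quoted in the text above.

```python
def status_by_availability(m3_per_capita):
    """Approach 1: renewable freshwater available per person per year."""
    if m3_per_capita < 1000:
        return "water scarcity"
    if m3_per_capita < 1700:
        return "water stress"
    return "adequate"

def status_by_withdrawal(withdrawn_m3, available_m3):
    """Approach 2 (SDG indicator): withdrawals as a share of availability."""
    share = withdrawn_m3 / available_m3
    if share > 0.60:
        return "water scarcity"
    if share > 0.25:
        return "water stress"
    return "adequate"

# Illustrative figures loosely based on the text (not official statistics).
print(status_by_availability(8800))   # US-like availability  -> adequate
print(status_by_availability(95))     # Jordan-like availability -> water scarcity
print(status_by_withdrawal(54, 100))  # 54% withdrawn -> water stress
```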
The concept of “water security”, the inverse of scarcity, also lacks a consensual definition. It implies consistent and affordable access to unpolluted freshwater for households, agriculture and industry.
A biological approach to “off” CO2 emission
Ever since the 1990s, when concern over the impact of carbon emissions on our environment was first raised, worldwide efforts to reduce emissions have met with mixed results. In 2017 alone, global emissions grew by 1.4%. This year, iGEM NCKU_Tainan will design a device capable of piping CO2 and converting this carbon source into biomass by integrating a non-native Calvin-Benson-Bassham cycle into E. coli. "Of Course" is a biological approach to "off" CO2 emission through the RuBisCO and PRK genes from Synechococcus sp., which encode major enzymes involved in carbon fixation. Industrial gases will enter a pipe (inlet) at the bottom of a cylindrical container, flow through a ceramic nozzle and mix with liquids containing engineered E. coli that consumes CO2. Furthermore, CO2 concentration will be determined by monitoring the corresponding change in pH using an asr (acid shock RNA) promoter-containing E. coli strain with sfGFP as a reporter. Our ultimate goal is that this novel approach will reduce CO2 in order to slow global climate change.
Magnetic Susceptibility of YbCuGa, YbAgGa and YbAuGa Compounds
The rare earth ions normally exist in a trivalent state in most of their alloys and compounds. However, Ce, Sm, Eu, Tm and Yb are exceptions and can exist in other than the trivalent state. Thus cerium can have a valency of four and Sm, Eu, Tm and Yb a valency of two. The valence adopted by these rare earth ions in an alloy or a compound is a function of several factors such as nature of other alloying elements, temperature, pressure etc. In some systems, there may be a rapid fluctuation between the two valence states. Such systems are characterized by the anomalies in the unit cell volume, maxima in the susceptibility, large electronic specific heat coefficient, two sets of lines in the XPS spectra corresponding to two different initial state configurations of the rare earth ion etc.
Keywords: Magnetic Susceptibility, Unit Cell Volume, Trivalent State, Divalent State, Ground State Configuration
Show Me Science
Earth Science: Our Changing Planet (Streaming Video, 2009)
The 12-title Wonders of Earth Science series covers subjects from the Earth's Interior, Mineralogy, Meteorology, Energy, and Glaciers to Understanding Earthquakes and the Ozone Blanket. Students will develop a basic understanding of the fundamentals of Earth Science and work their way up to more complex subjects. Erupting volcanoes, bone-rattling earthquakes, blasting geysers and boiling mud – awesome natural forces like these have been at work for millions of years changing the Earth's...
Audience: Rated UNK.
Publisher: [United States] : TMW : Made available through hoopla, 2009.
Characteristics: 1 online resource (1 video file (approximately 14 min.)) : sound, color
Alternative Title: Earth science [electronic resource (e-video)] : our changing planet | <urn:uuid:7a4ce569-c914-4e53-85c3-b1dccd169e24> | 2.734375 | 181 | Truncated | Science & Tech. | 34.87831 | 95,536,948 |
The ancient Greeks believed that all matter was made of four elements: fire, earth, water, and air.
John Dalton (English chemist) proposed the atomic theory of matter. In Dalton's theory, all matter is made up of indivisible spheres.
Joseph John Thomson discovered the electron and its negative charge, which then led him to infer a balancing positive charge in the atom. He proposed the plum pudding model, in which the atom is a ball of positive charge with negatively charged electrons embedded in it.
Philipp Lenard described atoms as empty space filled with fast-moving dynamides.
Ernest Rutherford believed that an atom's mass was concentrated in a small, positively charged nucleus surrounded by negative electrons.
Niels Bohr's model of the atom proposed that electrons can travel only along certain pathways around the nucleus, called orbits. This model is sometimes called the planetary model.
James Chadwick discovered the neutron.
Research at Yale reported in the journal Science identifies a new riboswitch (RNA regulatory sequence) class in bacteria that operates as a rare "ON" switch for genetic regulation of the three proteins in a glycine processing system.
"This seems like something only a biochemist can appreciate, but what it really means is that modern RNA has what it takes to run the complex metabolism of life. It is like what would have been needed in an "RNA World" - or a period in evolution where RNA served a much larger role," said Ronald T. Breaker, professor in the Department of Molecular, Cellular and Developmental Biology at Yale University.
The latest riboswitch is unique because it is the first RNA switch known to have "cooperative binding" to its target, a process that is common in protein enzymes but not usually associated with RNA. It is also surprising that such complex relics of an RNA World are seen in modern organisms.
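Cooperative binding is commonly summarized with the Hill equation. The sketch below illustrates that general concept, not the authors' specific model; the parameter values are arbitrary. It shows how a Hill coefficient n > 1 makes the response to ligand concentration more switch-like than simple one-site binding:

```python
def hill_occupancy(ligand, kd, n):
    """Fraction of switches bound at a given ligand concentration.
    n = 1 is non-cooperative; n > 1 is cooperative (steeper response)."""
    return ligand**n / (kd**n + ligand**n)

# Arbitrary units with Kd = 1: cooperative binding switches on more sharply.
for conc in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(f"{conc:5.1f}  non-coop: {hill_occupancy(conc, 1.0, 1):.2f}"
          f"  coop (n=2): {hill_occupancy(conc, 1.0, 2):.2f}")
```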
Janet Rettig Emanuel | EurekAlert!
Coulomb's law, or Coulomb's inverse-square law, is a law of physics for quantifying the amount of force with which stationary electrically charged particles repel or attract each other. In its scalar form, the law is:

$F = k_e \frac{q_1 q_2}{r^2}$

where ke is Coulomb's constant (ke ≈ 9.0 × 10⁹ N m² C⁻²), q1 and q2 are the signed magnitudes of the charges, and the scalar r is the distance between the charges. The force of the interaction between the charges is attractive if the charges have opposite signs (i.e., F is negative) and repulsive if like-signed (i.e., F is positive).
The law was first published in 1785 by French physicist Charles-Augustin de Coulomb and was essential to the development of the theory of electromagnetism. Being an inverse-square law, it is analogous to Isaac Newton's inverse-square law of universal gravitation. Coulomb's law can be used to derive Gauss's law, and vice versa. The law has been tested extensively, and all observations have upheld the law's principle.
Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BC, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the New Latin word electricus ("of amber" or "like amber", from ἤλεκτρον [elektron], the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.
Early investigators of the 18th century who suspected that the electrical force diminished with distance as the force of gravity did (i.e., as the inverse square of the distance) included Daniel Bernoulli and Alessandro Volta, both of whom measured the force between plates of a capacitor, and Franz Aepinus who supposed the inverse-square law in 1758.
Based on experiments with electrically charged spheres, Joseph Priestley of England was among the first to propose that electrical force followed an inverse-square law, similar to Newton's law of universal gravitation. However, he did not generalize or elaborate on this. In 1767, he conjectured that the force between charges varied as the inverse square of the distance.
Finally, in 1785, the French physicist Charles-Augustin de Coulomb published his first three reports of electricity and magnetism where he stated his law. This publication was essential to the development of the theory of electromagnetism. He used a torsion balance to study the repulsion and attraction forces of charged particles, and determined that the magnitude of the electric force between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them.
The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls and derive his inverse-square proportionality law.
Coulomb's law states that:
The magnitude of the electrostatic force of attraction or repulsion between two point charges is directly proportional to the product of the magnitudes of charges and inversely proportional to the square of the distance between them.
The force is along the straight line joining them. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive.
The scalar and vector forms of the law are

$|\mathbf{F}| = k_e \frac{|q_1 q_2|}{r^2}$ and $\mathbf{F}_1 = k_e \frac{q_1 q_2}{|\mathbf{r}_{21}|^2}\,\hat{\mathbf{r}}_{21}$, respectively,

where ke is Coulomb's constant (ke = 8.9875517873681764 × 10⁹ N m² C⁻²), q1 and q2 are the signed magnitudes of the charges, the scalar r is the distance between the charges, the vector r21 = r1 − r2 is the vectorial distance between the charges, and r̂21 = r21/|r21| (a unit vector pointing from q2 to q1). The vector form of the equation calculates the force F1 applied on q1 by q2. If r12 is used instead, then the effect on q2 can be found. It can be also calculated using Newton's third law: F2 = −F1.
When the electromagnetic theory is expressed using the International System of Units, force is measured in newtons, charge in coulombs, and distance in metres. Coulomb's constant is given by ke = 1⁄4πε0. The constant ε0 is the electric constant (also known as "the absolute permittivity of free space") in C2 m−2 N−1. It should not be confused with εr, which is the dimensionless relative permittivity of the material in which the charges are immersed, or with their product εa = ε0εr , which is called "absolute permittivity of the material" and is still used in electrical engineering.
Coulomb's law and Coulomb's constant can also be interpreted in various terms:
- Atomic units. In atomic units the force is expressed in hartrees per Bohr radius, the charge in terms of the elementary charge, and the distances in terms of the Bohr radius.
- Electrostatic units or Gaussian units. In electrostatic units and Gaussian units, the unit charge (esu or statcoulomb) is defined in such a way that the Coulomb constant k disappears because it has the value of one and becomes dimensionless.
- Lorentz–Heaviside units (also called rationalized). In Lorentz–Heaviside units the Coulomb constant is ke = 1/(4π) and becomes dimensionless.
An electric field is a vector field that associates to each point in space the Coulomb force experienced by a test charge. In the simplest case, the field is considered to be generated solely by a single source point charge. The strength and direction of the Coulomb force F on a test charge qt depends on the electric field E that it finds itself in, such that F = qtE. If the field is generated by a positive source point charge q, the direction of the electric field points along lines directed radially outwards from it, i.e. in the direction that a positive point test charge qt would move if placed in the field. For a negative point source charge, the direction is radially inwards.
The magnitude of the electric field E can be derived from Coulomb's law. By choosing one of the point charges to be the source, and the other to be the test charge, it follows from Coulomb's law that the magnitude of the electric field E created by a single source point charge q at a certain distance from it r in vacuum is given by:

$|\mathbf{E}| = k_e \frac{|q|}{r^2}$
Coulomb's constant is a proportionality factor that appears in Coulomb's law as well as in other electric-related formulas. Denoted ke, it is also called the electric force constant or electrostatic constant, hence the subscript e.
The exact value of Coulomb's constant is:

$k_e = \frac{1}{4\pi\varepsilon_0} = 8.9875517873681764 \times 10^9\ \mathrm{N\,m^2\,C^{-2}}$
There are three conditions to be fulfilled for the validity of Coulomb's law:
- The charges must have a spherically symmetric distribution (e.g. be point charges, or a charged metal sphere).
- The charges must not overlap (e.g. they must be distinct point charges).
- The charges must be stationary with respect to each other.
The last of these is the most important: it is known as the electrostatic approximation. When movement takes place, Einstein's theory of relativity must be taken into consideration, and as a result, an extra factor is introduced, which alters the force produced on the two objects. This extra part of the force is called the magnetic force, and is described by magnetic fields. For slow movement, the magnetic force is minimal and Coulomb's law can still be considered approximately correct, but when the charges are moving more quickly in relation to each other, the full electrodynamic rules (incorporating the magnetic force) must be considered.
Quantum Field Theory origin
where the covariant derivative (in SI units) is:

$D_\mu = \partial_\mu - i g A_\mu$

where $g$ is the gauge coupling parameter. By putting the covariant derivative into the lagrangian explicitly, the interaction term (the term involving both $\psi$ and $A_\mu$) is seen to be:

$\mathcal{L}_{\mathrm{int}} = g\,\bar{\psi}\gamma^\mu\psi\,A_\mu$

The most basic Feynman diagram for a QED interaction between two fermions is the exchange of a single photon, with no loops. Following the Feynman rules, this therefore contributes two QED vertex factors ($-i g Q \gamma^\mu$) to the potential, where Q is the QED-charge operator (Q gives the charge in terms of the electron charge, and hence is exactly -1 for electrons, etc.). For the photon in the diagramme, the Feynman rules demand the contribution of one bosonic massless propagator ($-i g_{\mu\nu}/k^2$). Ignoring the momentum on the external legs (the fermions), the potential is therefore:

$V(\mathbf{k}) = \frac{g^2\, Q_1 Q_2}{|\mathbf{k}|^2}$

which can be more usefully written as

$V(\mathbf{r}) = \frac{g^2\, Q_1 Q_2}{(2\pi)^3} \int \frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{|\mathbf{k}|^2}\, d^3\mathbf{k}$

Where $Q_i$ is the QED-charge on the ith particle. Recognising the integral as just being a Fourier transform enables the equation to be simplified:

$V(\mathbf{r}) = \frac{g^2\, Q_1 Q_2}{4\pi r}$

For various reasons, it is more convenient to define the fine-structure constant $\alpha \equiv g^2/4\pi$, and then define $e \equiv \sqrt{4\pi\alpha\,\varepsilon_0\hbar c}$. Re-arranging these definitions gives:

$V(r) = \frac{\alpha\, Q_1 Q_2}{r}$

It is worth noting that $\alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c} = \frac{e^2}{4\pi}$ in natural units (since, in those units, $\hbar = 1$, $c = 1$, and $\varepsilon_0 = 1$). Continuing in SI units, the potential is therefore

$V(r) = \frac{Q_1 Q_2\, e^2}{4\pi\varepsilon_0 r}$

Defining $q_i \equiv Q_i\, e$, as the macroscopic 'electric charge', makes e the macroscopic 'electric charge' for an electron, and enables the formula to be put into the familiar form of the coulomb potential:

$V(r) = \frac{q_1 q_2}{4\pi\varepsilon_0 r}$

The force ($\mathbf{F} = -\nabla V$) is therefore:

$\mathbf{F} = \frac{q_1 q_2}{4\pi\varepsilon_0 r^2}\,\hat{\mathbf{r}}$
The derivation makes clear that the force law is only an approximation — it ignores the momentum of the input and output fermion lines, and ignores all quantum corrections (i.e. the myriad possible diagrams with internal loops).
The Coulomb potential, and its derivation, can be seen as a special case of the Yukawa potential (specifically, the case where the exchanged boson - the photon - has no rest mass).
When it is of interest to know the magnitude of the electrostatic force (and not its direction), it may be easiest to consider a scalar version of the law. The scalar form of Coulomb's Law relates the magnitude and sign of the electrostatic force F acting simultaneously on two point charges q1 and q2 as follows:

$|\mathbf{F}| = k_e \frac{|q_1 q_2|}{r^2}$
where r is the separation distance and ke is Coulomb's constant. If the product q1q2 is positive, the force between the two charges is repulsive; if the product is negative, the force between them is attractive.
Coulomb's law states that the electrostatic force F1 experienced by a charge, q1 at position r1, in the vicinity of another charge, q2 at position r2, in a vacuum is equal to:

$\mathbf{F}_1 = \frac{q_1 q_2}{4\pi\varepsilon_0} \frac{\hat{\mathbf{r}}_{21}}{|\mathbf{r}_{21}|^2}$

where r21 = r1 − r2, the unit vector r̂21 = r21/|r21|, and ε0 is the electric constant.
The vector form of Coulomb's law is simply the scalar definition of the law with the direction given by the unit vector, r̂21, parallel with the line from charge q2 to charge q1. If both charges have the same sign (like charges) then the product q1q2 is positive and the direction of the force on q1 is given by r̂21; the charges repel each other. If the charges have opposite signs then the product q1q2 is negative and the direction of the force on q1 is given by −r̂21 = r̂12; the charges attract each other.
The electrostatic force F2 experienced by q2, according to Newton's third law, is F2 = −F1.
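As a quick numerical illustration of the vector form, here is a minimal Python sketch (the charge values and positions are arbitrary) that computes F1 on q1 due to q2 and confirms Newton's third law:

```python
import numpy as np

EPS0 = 8.8541878128e-12          # electric constant, F/m
KE = 1 / (4 * np.pi * EPS0)      # Coulomb's constant, N m^2 C^-2

def coulomb_force(q1, r1, q2, r2):
    """Force on charge q1 at r1 due to charge q2 at r2 (vector form)."""
    r21 = np.asarray(r1, float) - np.asarray(r2, float)
    d = np.linalg.norm(r21)
    return KE * q1 * q2 * r21 / d**3   # = ke q1 q2 rhat21 / |r21|^2

q1, r1 = 1e-6, [0.0, 0.0, 0.0]       # 1 uC at the origin
q2, r2 = -2e-6, [0.1, 0.0, 0.0]      # -2 uC, 10 cm away
F1 = coulomb_force(q1, r1, q2, r2)
F2 = coulomb_force(q2, r2, q1, r1)
print(F1)                      # attractive: points from q1 toward q2
print(np.allclose(F1, -F2))    # Newton's third law: True
```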
System of discrete charges
The law of superposition allows Coulomb's law to be extended to include any number of point charges. The force acting on a point charge due to a system of point charges is simply the vector addition of the individual forces acting alone on that point charge due to each one of the charges. The resulting force vector is parallel to the electric field vector at that point, with that point charge removed.

The force F on a small charge q at position r, due to a system of N discrete charges in vacuum is:

$\mathbf{F}(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0} \sum_{i=1}^{N} q_i \frac{\hat{\mathbf{R}}_i}{|\mathbf{R}_i|^2}$
where qi and ri are the magnitude and position respectively of the ith charge, R̂i is a unit vector in the direction of Ri = r − ri (a vector pointing from charges qi to q).
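That sum translates directly into code. This sketch reuses the coulomb_force helper and numpy import from the sketch above; the system of charges is arbitrary:

```python
def net_force(q, r, charges):
    """Total force on test charge q at r from a list of (qi, ri) pairs,
    by vector addition of the individual Coulomb forces."""
    total = np.zeros(3)
    for qi, ri in charges:
        total += coulomb_force(q, r, qi, ri)
    return total

system = [(1e-6, [1.0, 0.0, 0.0]),
          (-1e-6, [0.0, 1.0, 0.0]),
          (2e-6, [0.0, 0.0, 1.0])]
print(net_force(5e-7, [0.0, 0.0, 0.0], system))
```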
Continuous charge distribution
In this case, the principle of linear superposition is also used. For a continuous charge distribution, an integral over the region containing the charge is equivalent to an infinite summation, treating each infinitesimal element of space as a point charge dq. The distribution of charge is usually linear, surface or volumetric.

For a linear charge distribution (a good approximation for charge in a wire) where λ(r′) gives the charge per unit length at position r′, and dl′ is an infinitesimal element of length,

$d\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \frac{\lambda(\mathbf{r}')\,dl'}{|\mathbf{r}-\mathbf{r}'|^2}\,\widehat{(\mathbf{r}-\mathbf{r}')}$

For a surface charge distribution (a good approximation for charge on a plate in a parallel plate capacitor) where σ(r′) gives the charge per unit area at position r′, and dA′ is an infinitesimal element of area,

$d\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \frac{\sigma(\mathbf{r}')\,dA'}{|\mathbf{r}-\mathbf{r}'|^2}\,\widehat{(\mathbf{r}-\mathbf{r}')}$

For a volume charge distribution (such as charge within a bulk metal) where ρ(r′) gives the charge per unit volume at position r′, and dV′ is an infinitesimal element of volume,

$d\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \frac{\rho(\mathbf{r}')\,dV'}{|\mathbf{r}-\mathbf{r}'|^2}\,\widehat{(\mathbf{r}-\mathbf{r}')}$

The force on a small test charge q′ at position r in vacuum is given by the integral over the distribution of charge:

$\mathbf{F}(\mathbf{r}) = q'\,\mathbf{E}(\mathbf{r}) = \frac{q'}{4\pi\varepsilon_0} \int \frac{dq}{|\mathbf{r}-\mathbf{r}'|^2}\,\widehat{(\mathbf{r}-\mathbf{r}')}$
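One way to see these integrals at work is to discretize a finite charged rod into many point charges and let the superposition sum approximate the line integral. This is a sketch with arbitrary rod length and charge; on the perpendicular bisector it can be checked against the known closed-form result:

```python
import numpy as np

EPS0 = 8.8541878128e-12   # electric constant, F/m

def field_of_rod(r, length=1.0, total_q=1e-9, segments=10_000):
    """E-field at point r from a uniform rod on the x-axis, centered at 0,
    approximated by summing over small segments (dq = lambda * dl)."""
    r = np.asarray(r, float)
    xs = np.linspace(-length/2, length/2, segments)
    dq = total_q / segments
    E = np.zeros(3)
    for x in xs:
        sep = r - np.array([x, 0.0, 0.0])
        E += dq * sep / np.linalg.norm(sep)**3
    return E / (4 * np.pi * EPS0)

# On the perpendicular bisector the exact result is
# E = (1/4*pi*eps0) * Q / (y * sqrt(y^2 + (L/2)^2)); compare:
y = 0.5
approx = field_of_rod([0.0, y, 0.0])[1]
exact = 1e-9 / (4 * np.pi * EPS0 * y * np.hypot(y, 0.5))
print(approx, exact)   # the two values should agree closely
```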
Simple experiment to verify Coulomb's law
It is possible to verify Coulomb's law with a simple experiment. Consider two small spheres of mass m and same-sign charge q, hanging from two ropes of negligible mass of length l. The forces acting on each sphere are three: the weight mg, the rope tension T and the electric force F.

In the equilibrium state:

$T\sin\theta_1 = F_1 \qquad T\cos\theta_1 = mg$

so that $F_1 = mg\tan\theta_1$.

Let L1 be the distance between the charged spheres; the repulsion force between them F1, assuming Coulomb's law is correct, is equal to

$F_1 = \frac{q^2}{4\pi\varepsilon_0 L_1^2}$

If we now discharge one of the spheres, and we put it in contact with the charged sphere, each one of them acquires a charge q/2. In the equilibrium state, the distance between the charges will be L2 < L1 and the repulsion force between them will be:

$F_2 = \frac{(q/2)^2}{4\pi\varepsilon_0 L_2^2}$

We know that F2 = mg tan θ2. Dividing the expressions for the two equilibrium states:

$\frac{\tan\theta_1}{\tan\theta_2} = \frac{F_1}{F_2} = \frac{4 L_2^2}{L_1^2} \qquad (6)$

Measuring the angles θ1 and θ2 and the distance between the charges L1 and L2 is sufficient to verify that the equality is true taking into account the experimental error. In practice, angles can be difficult to measure, so if the length of the ropes is sufficiently great, the angles will be small enough to make the following approximation:

$\tan\theta \approx \sin\theta = \frac{L/2}{l}$

Using this approximation, the relationship (6) becomes the much simpler expression:

$\frac{L_1^3}{L_2^3} \approx 4$

In this way, the verification is limited to measuring the distance between the charges and checking that the ratio approximates the theoretical value.
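The small-angle prediction is easy to check numerically. The measured separations below are hypothetical, used only to illustrate the comparison:

```python
L1, L2 = 6.3, 4.0           # measured separations in cm (hypothetical)
print((L1 / L2) ** 3)        # ~3.9, close to the predicted value of 4
print(4 ** (1 / 3))          # theoretical L1/L2 ratio, ~1.587
```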
Coulomb's law holds even within atoms, correctly describing the force between the positively charged atomic nucleus and each of the negatively charged electrons. This simple law also correctly accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids. Generally, as the distance between ions increases, the force of attraction, and binding energy, approach zero and ionic bonding is less favorable. As the magnitude of opposing charges increases, energy increases and ionic bonding is more favorable.
- Biot–Savart law
- Darwin Lagrangian
- Electromagnetic force
- Gauss's law
- Method of image charges
- Molecular modelling
- Newton's law of universal gravitation, which uses a similar structure, but for mass instead of charge
- Static forces and virtual-particle exchange
- Stewart, Joseph (2001). Intermediate Electromagnetic Theory. World Scientific. p. 50. ISBN 981-02-4471-1
- Simpson, Brian (2003). Electrical Stimulation and the Relief of Pain. Elsevier Health Sciences. pp. 6–7. ISBN 0-444-51258-6
- Baigrie, Brian (2007). Electricity and Magnetism: A Historical Perspective. Greenwood Press. pp. 7–8. ISBN 0-313-33358-0
- Chalmers, Gordon (1937). "The Lodestone and the Understanding of Matter in Seventeenth Century England". Philosophy of Science. 4 (1): 75–95. doi:10.1086/286445
- Socin, Abel (1760). Acta Helvetica Physico-Mathematico-Anatomico-Botanico-Medica (in Latin). 4. Basileae. pp. 224–25.
- Heilbron, J.L. (1979). Electricity in the 17th and 18th Centuries: A Study of Early Modern Physics. Los Angeles, California: University of California Press. pp. 460–462 and 464 (including footnote 44). ISBN 0486406881.
- Schofield, Robert E. (1997). The Enlightenment of Joseph Priestley: A Study of his Life and Work from 1733 to 1773. University Park: Pennsylvania State University Press. pp. 144–56. ISBN 0-271-01662-0.
Priestley, Joseph (1767). The History and Present State of Electricity, with Original Experiments. London, England. p. 732.
May we not infer from this experiment, that the attraction of electricity is subject to the same laws with that of gravitation, and is therefore according to the squares of the distances; since it is easily demonstrated, that were the earth in the form of a shell, a body in the inside of it would not be attracted to one side more than another?
- Elliott, Robert S. (1999). Electromagnetics: History, Theory, and Applications. ISBN 978-0-7803-5384-8.
Robison, John (1822). Murray, John, ed. A System of Mechanical Philosophy. 4. London, England.
On page 68, the author states that in 1769 he announced his findings regarding the force between spheres of like charge. On page 73, the author states the force between spheres of like charge varies as x−2.06:

The result of the whole was, that the mutual repulsion of two spheres, electrified positively or negatively, was very nearly in the inverse proportion of the squares of the distances of their centres, or rather in a proportion somewhat greater, approaching to x−2.06.

When making experiments with charged spheres of opposite charge the results were similar, as stated on page 73:

When the experiments were repeated with balls having opposite electricities, and which therefore attracted each other, the results were not altogether so regular and a few irregularities amounted to 1⁄6 of the whole; but these anomalies were as often on one side of the medium as on the other. This series of experiments gave a result which deviated as little as the former (or rather less) from the inverse duplicate ratio of the distances; but the deviation was in defect as the other was in excess.

Nonetheless, on page 74 the author infers that the actual action is related exactly to the inverse duplicate of the distance:

We therefore think that it may be concluded, that the action between two spheres is exactly in the inverse duplicate ratio of the distance of their centres, and that this difference between the observed attractions and repulsions is owing to some unperceived cause in the form of the experiment.

On page 75, the author compares the electric and gravitational forces:

Therefore we may conclude, that the law of electric attraction and repulsion is similar to that of gravitation, and that each of those forces diminishes in the same proportion that the square of the distance between the particles increases.
- Maxwell, James Clerk, ed. (1967) . "Experiments on Electricity: Experimental determination of the law of electric force.". The Electrical Researches of the Honourable Henry Cavendish... (1st ed.). Cambridge, England: Cambridge University Press. pp. 104–113.
On pages 111 and 112 the author states:
We may therefore conclude that the electric attraction and repulsion must be inversely as some power of the distance between that of the 2 + 1⁄50 th and that of the 2 − 1⁄50 th, and there is no reason to think that it differs at all from the inverse duplicate ratio.
- Coulomb (1785a) "Premier mémoire sur l’électricité et le magnétisme," Histoire de l’Académie Royale des Sciences, pages 569-577 — Coulomb studied the repulsive force between bodies having electrical charges of the same sign:
Coulomb also showed that oppositely charged bodies obey an inverse-square law of attraction.
Il résulte donc de ces trois essais, que l'action répulsive que les deux balles électrifées de la même nature d'électricité exercent l'une sur l'autre, suit la raison inverse du carré des distances.
Translation: It follows therefore from these three tests, that the repulsive force that the two balls — [that were] electrified with the same kind of electricity — exert on each other, follows the inverse proportion of the square of the distance.
- Jackson, J. D. (1998) . Classical Electrodynamics (3rd ed.). New York: John Wiley & Sons. ISBN 978-0-471-30932-1. OCLC 535998.
- Coulomb's law, Hyperphysics
- Coulomb's law, University of Texas
- Charged rods, PhysicsLab.org
- Coulomb, Charles Augustin (1788) . "Premier mémoire sur l'électricité et le magnétisme". Histoire de l’Académie Royale des Sciences. Imprimerie Royale. pp. 569–577.
- Coulomb, Charles Augustin (1788) . "Second mémoire sur l'électricité et le magnétisme". Histoire de l’Académie Royale des Sciences. Imprimerie Royale. pp. 578–611.
- Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 0-13-805326-X.
- Tipler, Paul A.; Mosca, Gene (2008). Physics for Scientists and Engineers (6th ed.). New York: W. H. Freeman and Company. ISBN 0-7167-8964-7. LCCN 2007010418.
- Young, Hugh D.; Freedman, Roger A. (2010). Sears and Zemansky's University Physics : With Modern Physics (13th ed.). Addison-Wesley (Pearson). ISBN 978-0-321-69686-1.
- Coulomb's Law on Project PHYSNET
- Electricity and the Atom—a chapter from an online textbook
- A maze game for teaching Coulomb's Law—a game created by the Molecular Workbench software
- Electric Charges, Polarization, Electric Force, Coulomb's Law Walter Lewin, 8.02 Electricity and Magnetism, Spring 2002: Lecture 1 (video). MIT OpenCourseWare. License: Creative Commons Attribution-Noncommercial-Share Alike. | <urn:uuid:a29a8b81-5edd-4712-981f-a9b538f9ff46> | 4.0625 | 5,368 | Knowledge Article | Science & Tech. | 51.520694 | 95,536,984 |
Ben-Gurion University of the Negev (BGU) and University of Western Australia researchers have developed a new process to develop few-layer graphene for use in energy storage and other material applications that is faster, potentially scalable and surmounts some of the current graphene production limitations.
The new revolutionary one-step, high-yield generation process is detailed in the latest issue of Carbon, published by a collaborative team that includes BGU Prof. Jeffrey Gordon of the Alexandre Yersin Department of Solar Energy and Environmental Physics at the Jacob Blaustein Institutes for Desert Research and Prof. H.T. Chua’s group at the University of Western Australia (UWA, Perth).
Their ultra-bright lamp-ablation method surmounts the shortcomings and has succeeded in synthesizing few-layer (4-5) graphene in higher yields. It involves a novel optical system (originally invented by BGU Profs. Daniel Feuermann and Jeffrey Gordon) that reconstitutes the immense brightness within the plasma of high-power xenon discharge lamps at a remote reactor, where a transparent tube filled with simple, inexpensive graphite is irradiated.
The process is relatively faster, safer and green — devoid of any toxic substances (just graphite plus concentrated light).
Following this proof of concept, the BGU-UWA team is now planning an experimental program to scale up this initial success toward markedly improving the volume and rate at which few-layer (and eventually single-layer) graphene can be synthesized.
About American Associates, Ben-Gurion University of the Negev
American Associates, Ben-Gurion University of the Negev (AABGU) plays a vital role in sustaining David Ben-Gurion’s vision, creating a world-class institution of education and research in the Israeli desert, nurturing the Negev community and sharing the University’s expertise locally and around the globe. With some 20,000 students on campuses in Beer-Sheva, Sede Boqer and Eilat in Israel’s southern desert, BGU is a university with a conscience, where the highest academic standards are integrated with community involvement, committed to sustainable development of the Negev. AABGU is headquartered in Manhattan and has nine regional offices throughout the U.S. For more information, please visit http://www.
ST Staff Writers
This post was prepared by Solar Thermal Magazine staff. | <urn:uuid:01bf590a-735e-47e7-bbc6-4f557cb10b99> | 2.90625 | 508 | News (Org.) | Science & Tech. | 22.874586 | 95,536,985 |
(a) Do the balls ever pass each other?
(b) If they pass each other, give the time when this occurs. If they do not pass each other, enter NONE.
Mobilisation of phosphorus by Pteridium aquilinum
A leaching experiment was designed to assess the ability of Pteridium aquilinum rhizomes to mobilise phosphate from inorganic sources. It was found that, in contrast to roots of Calluna vulgaris, Pteridium aquilinum residues effectively released phosphate in an available form from both ground mineral phosphate and a mineral soil.
Keywords: Phosphate, Phosphorus, Plant Physiology, Mineral Soil, Mineral Phosphate
Cosmic rays, which are high-energy atomic nuclei driven by spectacular cosmic events, come to us from every direction on the sky. Most of them are destroyed high in the atmosphere, creating a shower of high-speed particles that penetrate sky and earth with ease. Surprising results from Japan's Super-Kamiokande underground observatory have recently shown that the distribution of cosmic rays on the sky is not uniform, a useful clue to the nature of these cosmic voyagers.
Supernovae and similar high-energy events can accelerate protons and heavier atomic nuclei to enormous speeds, imparting a kinetic energy thousands of times greater than the rest-mass energy of the particle itself. Many are much more powerful than anything our best particle accelerators can produce, so cosmic rays are of great interest to particle physicists as well as astronomers. The strongest (and rarest) cosmic rays can pack as much kinetic energy as a good punch in the jaw -- no mean feat for a subatomic particle weighing some 10²⁷ times less than your fist!
For all their scientific potential, cosmic rays cannot be identified with any specific source. Because atomic nuclei are charged particles, they can be deflected by the Milky Way's magnetic field. While scientists have many ideas concerning the astronomical processes that can create cosmic rays, it has proven difficult to test these ideas.
What's more, most of the cosmic rays that meet the Earth never make it to the ground. Their annihilation in the atmosphere produces a “shower” of muons (heavy electrons, essentially), neutrinos, and other simple subatomic particles. Some of the byproducts can produce showers of their own, eventually dissipating most of the cosmic ray's energy into the atmosphere. (The photograph shown here was taken from a balloon high in the atmosphere.) The high-energy muons, however, interact only rarely with matter and can slide through miles of atmosphere and bedrock before coming to a halt.
Building a better μ-trap
Cosmic ray muons are elusive targets, and can't be counted by watching the sky. Instead researchers must surround an enormous volume of transparent material with detectors, and hope to catch a small fraction of the passing particles inside. The Super-Kamiokande experiment, for example, is an underground tank containing 50,000 tons of water.
Lining the walls of this tank are 11,200 photomultiplier tubes, sensitive instruments that amplify the faintest glimmer of light into a strong electrical current. If an interesting event occurs anywhere in the tank's volume, the nature of the interaction can be reconstructed from the pattern of captured light on the walls. When supernova 1987a exploded in the Large Magellanic Cloud, for example, the predecessor detector Kamiokande captured a dozen neutrinos in two separate pulses from the dying star.
Cosmic muons in particular have a distinctive signature. While they travel at speeds close to that of light, light itself travels about 25% slower in water than in air. The muons therefore move faster than the light they emit, so the leading edges of the emitted waves pile up into a bright, cone-shaped pulse. The same phenomenon can be seen in the powerful crest that defines the wake of a speedboat, or heard in the boom of a supersonic jet or rocket. When a cosmic muon passes through, the photomultipliers trace out a perfect ellipse or hyperbola (a conic section) on the wall.
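The cone's geometry follows from the Cherenkov condition cos θ = 1/(nβ). A minimal sketch, assuming a refractive index of about 1.33 for water:

```python
import math

def cherenkov_angle(beta, n=1.33):
    """Opening half-angle of the Cherenkov cone in degrees,
    or None if the particle is below threshold (slower than light in the medium)."""
    if beta * n <= 1.0:
        return None
    return math.degrees(math.acos(1.0 / (n * beta)))

print(cherenkov_angle(0.999))   # ~41 degrees for a near-light-speed muon
print(cherenkov_angle(0.70))    # below threshold in water -> None
```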
Collecting over 200 million cosmic ray muons from five years of Super-Kamiokande data, researchers Gene Guillian, Yuichi Oyama, and other collaborators were able to reconstruct a full-sky map of the cosmic ray flux. Two features are readily apparent: an excess of cosmic rays in the direction of the constellation Taurus, and a deficit in the direction of Virgo. (The scale on the right is the ratio of local flux to average flux.)
The excess and deficit are both detected with a very high confidence; the probability for each to have been produced by random fluctuations is less than one in a million. Their amplitudes are also roughly the same, and they are separated by an angle of about 130° on the sky. This odd angle seems to preclude the most obvious explanation, that Super-K is seeing the effect of the Earth's motion with respect to an isotropic cosmic ray background. If such were the case, then the separation between the two features should be exactly 180°.
Oyama and Guillian offer another possible explanation. The cosmic ray excess points into the denser regions of our spiral arm of the Milky Way galaxy, and the deficit is pointing roughly out of the galactic plane. Does this result prove that some of the cosmic rays come from nearby sources? “We have no idea about this,” responds Oyama, who goes on to explain that the entire theoretical community will want to debate the matter. Guillian's paper, for example, mentions a competing hypothesis: that local structure in the galactic magnetic field may focus or defocus the cosmic ray flux in certain directions.
These results provide an important clue to the origin of cosmic rays, and will certainly shed light on the question of how the galactic magnetic field influences their journey. “In 1987, Kamiokande started an astronomy beyond light.” Dr. Oyama explains, referring to the detection of supernova neutrinos mentioned above. “In 2005, Super-Kamiokande started an astronomy beyond neutral particles.”
Yuichi Oyama, 2006, “Anisotropy of the Primary Cosmic Ray Flux in Super-Kamiokande” http://xxx.lanl.gov/astro-ph/0605020
Gene Guillian et al., 2005, “Observation of the Anisotropy of 10 TeV Primary Cosmic Ray Nuclei Flux with the Super-Kamiokande-I Detector” http://xxx.lanl.gov/astro-ph/0508468
The cosmic ray photograph was taken from the website “The Exploration of the Earth's Magnetosphere” at URL http://www-spof.gsfc.nasa.gov/Education/index.html
By Ben Mathiesen, Copyright 2006 PhysOrg.com
Most of the spectra were obtained from naturally occurring minerals and as a result often show elements that are not present in their chemical formulæ. In cases where the mineral is a member of a solid solution series the spectrum is labeled as being ‘end member rich,’ e.g. Pyrope rich Garnet (page 49) although the end member formula is given. The microscopist can estimate the approximate pyrope percentage based on relative Mg, Ca, and Fe peak heights.
Because of the difference in sensitivities between EDS detectors, it may be useful to compare spectra taken from known samples such as the Smithsonian Institution’s mineral standards. Unless otherwise noted spectra were collected using a 15 keV accelerating voltage.
Most of these spectra were obtained using a detector with a beryllium window, so no carbon or oxygen peaks were recorded. In cases where several different minerals produce the same spectra, this is noted.
Keywords: Philosophical Magazine, High Frequency Spectrum, Calcite CaCO₃, Magnesite MgCO₃
Multicopper oxidases, conserved site (IPR033138)
Short name: Cu_oxidase_CS
Multicopper oxidases [PMID: 2404764, PMID: 1995346] are enzymes that possess three spectroscopically different copper centres. These centres are called: type 1 (or blue), type 2 (or normal) and type 3 (or coupled binuclear). The enzymes that belong to this family include:
- Laccase (EC 1.10.3.2) (urushiol oxidase), an enzyme found in fungi and plants, which oxidizes many different types of phenols and diamines.
- L-ascorbate oxidase (EC 1.10.3.3), a higher plant enzyme.
- Ceruloplasmin (EC 1.16.3.1) (ferroxidase), a protein found in the serum of mammals and birds, which oxidizes a great variety of inorganic and organic substances. Structurally ceruloplasmin exhibits internal sequence homology, and seems to have evolved from the triplication of a copper-binding domain similar to that found in laccase and ascorbate oxidase.
In addition to the above enzymes there are a number of proteins which, on the basis of sequence similarities, can be said to belong to this family. These proteins are:
- Copper resistance protein A (copA) from a plasmid in Pseudomonas syringae. This protein seems to be involved in the resistance of the microbial host to copper.
- Blood coagulation factor V (Fa V).
- Blood coagulation factor VIII (Fa VIII).
- Yeast FET3 [PMID: 8293473], which is required for ferrous iron uptake.
- Yeast hypothetical protein YFL041w and SpAC1F7.08, the fission yeast homolog.
Factors V and VIII act as cofactors in blood coagulation and are structurally similar [PMID: 3052293]. Their sequence consists of a triplicated A domain, a B domain and a duplicated C domain; in the following order: A-A-B-A-C-C. The A-type domain is related to the multicopper oxidases.
This entry is drawn from a conserved region, which in ascorbate oxidase, laccase, in the third domain of ceruloplasmin, and in copA, contains five residues that are known to be involved in the binding of copper centres. However, it does not make any assumption on the presence of copper-binding residues and thus can detect domains that have lost the ability to bind copper (such as those in Fa V and Fa VIII).
- PS00079 (MULTICOPPER_OXIDASE1) | <urn:uuid:ecd2da61-e6e4-4e02-83d0-9ed04c4da93f> | 2.734375 | 603 | Knowledge Article | Science & Tech. | 45.830164 | 95,537,042 |
A queue is used when things don't have to be processed immediately, but have to be processed in First In First Out order, as in Breadth First Search. This property of a queue also makes it useful in the following kinds of scenarios.
1) When a resource is shared among multiple consumers. Examples include CPU scheduling, Disk Scheduling.
2) When data is transferred asynchronously (data not necessarily received at same rate as sent) between two processes. Examples include IO Buffers, pipes, file IO, etc.
See this for more detailed applications of Queue and Stack.
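As a minimal sketch of the FIFO property in action, here is a breadth-first search over a small illustrative graph using Python's collections.deque:

```python
from collections import deque

def bfs(graph, start):
    """Visit nodes in first-in-first-out order, level by level."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()          # dequeue: oldest entry first
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)  # enqueue at the back
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D']
```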
- Priority Queue | Set 1 (Introduction)
- Implement Queue using Stacks
- Queue | Set 2 (Linked List Implementation)
- Sliding Window Maximum (Maximum of all subarrays of size k)
- Queue | Set 1 (Introduction and Array Implementation)
- Level order traversal in spiral form | Using one stack and one queue
- Multi Source Shortest Path in Unweighted Graph
- Interleave the first half of the queue with second half
- Check if a queue can be sorted into another queue using a stack
- Reverse a path in BST using queue | <urn:uuid:e81eed99-6102-4b1c-ab1a-227259c2e26a> | 3.59375 | 246 | Content Listing | Software Dev. | 39.255465 | 95,537,050 |
For what reason did the big bang take place?
Nobody knows - You can rationalize this as:
It's a quantum event - there is no need for a cause, it's just random.
Since time was created - there was no 'before' for any cause to happen in.
since it's fundamentally unknowable - it's not a valid question
It took place because God wanted to create something :)
I've always been partial to the software bug theory = it was a buffer overflow in Universe 1.0
I don't see any bugs in the universe. The universe does not appear to be perfect to common human sense, but it is in fact very perfect (for every flaw we observe in the universe, if we carefully think it over, we'll see that it is not a flaw but exists to give more meaning and make it more interesting), and this perfection proves that it is the creation of a supreme being, which is what we refer to as God.
What? That's absurd. Calling something a "flaw" is a subjective judgment. As is meaning and making things more interesting. Reality cannot conform to these subjective judgments. The only way you can think it's true is if you bend your subjective judgments to conform to reality.
For this reason, asking "for what reason" something takes place is usually, in science, a completely bogus question. It is reasonable to ask how the big bang started, how often such a thing might happen, and what sorts of universes are produced in your typical big bang. We don't know any of these answers yet, but they're still reasonable questions. Asking for the "purpose" of the big bang, however, is just an invalid question: there is none. It just is.
What's the problem with subjective judgements? Life is not only science; we don't have to consider only facts and objective truth. The universe supports life, and life supports subjective judgements.
Again, life is not only science. We don't have to see everything in a scientific context, asking how it relates to science and whether it carries an objective truth. Of course you might argue that this is a science forum, so you imply that we may discuss something only if it relates to science, but that's another subject.
Saying that the purpose (or the cause) of the big bang is none might stand from a purely scientific point of view, but from a subjective point of view we expect things to have a meaning; thus they must have a cause and probably serve a purpose.
I believe we can know how it happened, just not now. Think about the history of astronomy: we thought the earth was flat; that changed. We thought we were the center of the universe; that changed. We thought the sun and moon "moved" around the earth; that changed. We thought all we could see in the sky (mostly) was all that there was. I mean, it was less than 100 years ago that we believed the entire universe was the Milky Way. I believe our understanding is still incomplete and we do not at present have adequate tools to understand origins.
However I am comforted in reaching my own personal conclusion based on the "discontinuous" nature of phenomena in the Universe, that the reason it emerged was due to some larger system reaching a critical point like when a supersaturated solution of sugar is slowly cooled, it reaches such a critical point rapidly precipitating the sugar out of solution. In the same way, I suspect this larger system reached a critical point, and our Universe "precipitated" into existence.
Except that, since it's subjective, there are as many valid interpretations as there are people on the planet. Since everyone has an opinion, and none are wrong, you end up with a difference with no distinction. In other words, an utterly useless concept. I'm not saying there's anything wrong with the concept, just that it is of no use.
A useful concept, on the other hand, is one where there is enough internal logic that others - who may not have originally shared the same idea - are convinced it is sound.
Ah, but all you've done then is push back the point of creation. OK, so the BB is simply an effect of a larger cause.
To borrow the OP's words, for what reason did the larger cause take place?
Yeah, that's true, but for now I'm content with just trying to come to terms with the Big Bang. And keep in mind such (endless) regression may involve singularities; when pushed past these points, concepts on one side of the singularity cannot be applied to explain phenomena on the other side.
Take the sugar-crystal beings in the supersaturated solution. They may ask, "How could a sugar crystal emerge from 'nothing' (something not a sugar crystal)?" The answer, of course, is that it did not emerge from sugar crystals but from something qualitatively different from a crystal: ions in solution. Applying that logic to origins, perhaps the cause and effect we now observe in the Universe could emerge from something that is not cause and effect.
My main working hypothesis is the phase-transition that a system undergoes when it passes through a critical point and the realization that often qualitatively different concepts are needed to describe the system on either side of the critical point. My belief is the Big Bang was one such critical point.
I'm not saying that subjective judgments are bad, merely that they should be used properly. Subjective judgments cannot be statements about the nature of reality. Instead, subjective judgments are statements about the person making the judgment.
Thus questions of "meaning", "purpose", or "interest" are questions about us, or about whoever (or whatever) else is making these subjective judgments, not questions about the nature of reality.
That is completely invalid reasoning. You're basically saying that reality must conform to your whims. Sorry, but it doesn't work that way.
If it does not appear perfect to the common human sense, and assuming that you are, in fact, a human yourself, how do you know that it "is in fact very perfect?" Do you talk to God?
I am also assuming that you recognize the extreme logical fallacy in concluding that perfection universally proves the existence of God. This is a physics forum where people discuss science. Your assertion is non-empirical, objectively useless, and has no place here.
OK well, in his defense, he was the first person to acknowledge that:
Whoops. I didn't have the will power to read the second post. Apologies. Although I am still interested in finding out if Delta^2 is a prophet.
The other problem with this logic is that it is self-fulfilling. There is no possible way, even in principle, for it to be falsifiable. Any "flaw" will simply be rationalized as another element that makes it more "interesting and meaningful".
Since it can not, even in principle, be falsifiable, that means it contains no truth.
Yes, so, as required by the Physics Forums Rules,
stick to mainstream physics, or this thread will be locked, and warnings or infractions given.
why is it that before the big bang TIME could not exist?
Well, that's more a statement about certain very specific models of the big bang, not necessarily a statement about reality.
Basically, in some models, such as in Stephen Hawkings' no boundary proposal, there simply isn't any time before the big bang. Asking "what came before the big bang" is analogous to asking "what lies north of the north pole." This is because in his no boundary proposal, the space-time manifold doesn't actually have any sort of edge, just like there is no end to the surface of the Earth (in the sense of people who thought the Earth was flat thought of an edge). It is, however, finite, wrapping back on itself in a very specific way. Thus what we see of as "time" has a beginning of sorts, but there is nothing "before" it (just as the Earth has a point that is furthest north, but with nothing north of that point).
so time was ''created'' after the big bang.
Can't I say that the big bang actually triggered the creation of the ''things'' that could experience time, rather than saying that the big bang caused the creation of time (as before it there was nothing and no one that could measure or evaluate time)?
It depends upon the model. We don't yet know which model is an accurate description of reality.
If we say that time was ''created'' after the big bang, wouldn't that imply that time only exists when there is someone or something that can feel it? But relativity gives us a different picture of time.
No, not at all. You don't need an observer to experience time. But space and time themselves exist on what is called a manifold. Without a manifold, you have no space, no time.
So before the big bang that manifold existed, and the bang just expanded it, right? (Or that is what I understood when I did some research on string theory.)
No. Time is a direction within the manifold. There is no "before" or "after" outside of it.
Separate names with a comma. | <urn:uuid:fdd68253-4dd2-4aa3-9502-35cb066cb498> | 2.53125 | 1,908 | Comment Section | Science & Tech. | 56.088983 | 95,537,052 |
Planetary science or, more rarely, planetology, is the scientific study of planets (including Earth), moons, and planetary systems (in particular those of the Solar System) and the processes that form them. It studies objects ranging in size from micrometeoroids to gas giants, aiming to determine their composition, dynamics, formation, interrelations and history. It is a strongly interdisciplinary field, originally growing from astronomy and earth science, but which now incorporates many disciplines, including planetary geology (together with geochemistry and geophysics), cosmochemistry, atmospheric science, oceanography, hydrology, theoretical planetary science, glaciology, and exoplanetology. Allied disciplines include space physics, when concerned with the effects of the Sun on the bodies of the Solar System, and astrobiology.
There are interrelated observational and theoretical branches of planetary science. Observational research can involve a combination of space exploration, predominantly with robotic spacecraft missions using remote sensing, and comparative, experimental work in Earth-based laboratories. The theoretical component involves considerable computer simulation and mathematical modelling.
Planetary scientists are generally located in the astronomy and physics or Earth sciences departments of universities or research centres, though there are several purely planetary science institutes worldwide. Some planetary scientists, including those whose work touches on topics such as dark matter, are based at private research centres and often initiate partnership research projects.
The ordered worlds are boundless and differ in size, and that in some there is neither sun nor moon, but that in others, both are greater than with us, and yet with others more in number. And that the intervals between the ordered worlds are unequal, here more and there less, and that some increase, others flourish and others decay, and here they come into being and there they are eclipsed. But that they are destroyed by colliding with one another. And that some ordered worlds are bare of animals and plants and all water.
In more modern times, planetary science began in astronomy, from studies of the unresolved planets. In this sense, the original planetary astronomer would be Galileo, who discovered the four largest moons of Jupiter, the mountains on the Moon, and first observed the rings of Saturn, all objects of intense later study. Galileo's study of the lunar mountains in 1609 also began the study of extraterrestrial landscapes: his observation "that the Moon certainly does not possess a smooth and polished surface" suggested that it and other worlds might appear "just like the face of the Earth itself".
Advances in telescope construction and instrumental resolution gradually allowed increased identification of the atmospheric and surface details of the planets. The Moon was initially the most heavily studied, as it always exhibited details on its surface, due to its proximity to the Earth, and the technological improvements gradually produced more detailed lunar geological knowledge. In this scientific process, the main instruments were astronomical optical telescopes (and later radio telescopes) and finally robotic exploratory spacecraft.
The Solar System has now been relatively well-studied, and a good overall understanding of the formation and evolution of this planetary system exists. However, there are large numbers of unsolved questions, and the rate of new discoveries is very high, partly due to the large number of interplanetary spacecraft currently exploring the Solar System.
This is both an observational and a theoretical science. Observational researchers are predominantly concerned with the study of the small bodies of the Solar System: those that are observed by telescopes, both optical and radio, so that characteristics of these bodies such as shape, spin, surface materials and weathering are determined, and the history of their formation and evolution can be understood.
The best known research topics of planetary geology deal with the planetary bodies in the near vicinity of the Earth: the Moon, and the two neighbouring planets: Venus and Mars. Of these, the Moon was studied first, using methods developed earlier on the Earth.
Geomorphology studies the features on planetary surfaces and reconstructs the history of their formation, inferring the physical processes that acted on the surface. Planetary geomorphology includes the study of several classes of surface features:
- Impact features (multi-ringed basins, craters)
- Volcanic and tectonic features (lava flows, fissures, rilles)
- Space weathering - erosional effects generated by the harsh environment of space (continuous micro meteorite bombardment, high-energy particle rain, impact gardening). For example, the thin dust cover on the surface of the lunar regolith is a result of micro meteorite bombardment.
- Hydrological features: the liquid involved can range from water to hydrocarbon and ammonia, depending on the location within the Solar System.
The history of a planetary surface can be deciphered by mapping features from top to bottom according to their deposition sequence, as first determined on terrestrial strata by Nicolas Steno. For example, stratigraphic mapping prepared the Apollo astronauts for the field geology they would encounter on their lunar missions. Overlapping sequences were identified on images taken by the Lunar Orbiter program, and these were used to prepare a lunar stratigraphic column and geological map of the Moon.
Cosmochemistry, geochemistry and petrology
One of the main problems when generating hypotheses on the formation and evolution of objects in the Solar System is the lack of samples that can be analysed in the laboratory, where a large suite of tools are available and the full body of knowledge derived from terrestrial geology can be brought to bear. Fortunately, direct samples from the Moon, asteroids and Mars are present on Earth, removed from their parent bodies and delivered as meteorites. Some of these have suffered contamination from the oxidising effect of Earth's atmosphere and the infiltration of the biosphere, but those meteorites collected in the last few decades from Antarctica are almost entirely pristine.
The different types of meteorites that originate from the asteroid belt cover almost all parts of the structure of differentiated bodies: meteorites even exist that come from the core-mantle boundary (pallasites). The combination of geochemistry and observational astronomy has also made it possible to trace the HED meteorites back to a specific asteroid in the main belt, 4 Vesta.
The comparatively few known Martian meteorites have provided insight into the geochemical composition of the Martian crust, although the unavoidable lack of information about their points of origin on the diverse Martian surface has meant that they do not provide more detailed constraints on theories of the evolution of the Martian lithosphere. As of July 24, 2013, 65 samples of Martian meteorites have been discovered on Earth. Many were found in either Antarctica or the Sahara Desert.
During the Apollo program, 384 kilograms of lunar samples were collected and transported to the Earth, and 3 Soviet Luna robots also delivered regolith samples from the Moon. These samples provide the most comprehensive record of the composition of any Solar System body besides the Earth. The number of lunar meteorites has grown quickly in recent years: as of April 2008, there were 54 meteorites officially classified as lunar. Eleven of these are from the US Antarctic meteorite collection, 6 are from the Japanese Antarctic meteorite collection, and the other 37 are from hot desert localities in Africa, Australia, and the Middle East. The total mass of recognized lunar meteorites is close to 50 kg.
Space probes made it possible to collect data in not only the visible light region, but in other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: gravity and their magnetic fields, which are studied through geophysics and space physics.
Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins.
If a planet's magnetic field is sufficiently strong, its interaction with the solar wind forms a magnetosphere around a planet. Early space probes discovered the gross dimensions of the terrestrial magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and continues behind the magnetic tail, hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles, the Van Allen radiation belts.
Planetary geodesy (also known as planetary geodetics) deals with the measurement and representation of the planets of the Solar System, their gravitational fields and geodynamic phenomena (such as polar motion) in three-dimensional, time-varying space. The science of geodesy has elements of both astrophysics and planetary sciences. The shape of the Earth is to a large extent the result of its rotation, which causes its equatorial bulge, and the competition of geologic processes such as the collision of plates and vulcanism, resisted by the Earth's gravity field. These principles can be applied to the solid surface of Earth (orogeny): few mountains are higher than 10 km (6 mi), and few deep sea trenches are deeper than that, because quite simply a mountain as tall as, for example, 15 km (9 mi) would develop so much pressure at its base, due to gravity, that the rock there would become plastic, and the mountain would slump back to a height of roughly 10 km (6 mi) in a geologically insignificant time. Some or all of these geologic principles can be applied to other planets besides Earth. For instance on Mars, whose surface gravity is much less, the largest volcano, Olympus Mons, is 27 km (17 mi) high at its peak, a height that could not be maintained on Earth. The Earth geoid is essentially the figure of the Earth abstracted from its topographic features; the Mars geoid is likewise the figure of Mars abstracted from its topographic features. Surveying and mapping are two important fields of application of geodesy.
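As a rough illustration of this slumping argument (a back-of-envelope sketch using generic textbook values, not figures from this article), the pressure at the base of a column of rock of height h is approximately:

```latex
% Assumed round values: rock density ~2700 kg/m^3, g ~9.8 m/s^2, h ~10 km.
P \approx \rho g h
  \approx (2700\ \mathrm{kg\,m^{-3}})(9.8\ \mathrm{m\,s^{-2}})(10^{4}\ \mathrm{m})
  \approx 2.6 \times 10^{8}\ \mathrm{Pa}
```

Holding that critical pressure fixed, the limiting height scales as h ∝ 1/g, so Martian surface gravity of about 3.7 m s⁻² allows heights roughly 9.8/3.7 ≈ 2.6 times greater, consistent with the ~27 km quoted above for Olympus Mons.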
The atmosphere is an important transitional zone between the solid planetary surface and the higher rarefied ionizing and radiation belts. Not all planets have atmospheres: their existence depends on the mass of the planet, and the planet's distance from the Sun — too distant and frozen atmospheres occur. Besides the four gas giant planets, almost all of the terrestrial planets (Earth, Venus, and Mars) have significant atmospheres. Two moons have significant atmospheres: Saturn's moon Titan and Neptune's moon Triton. A tenuous atmosphere exists around Mercury.
The effects of the rotation rate of a planet about its axis can be seen in atmospheric streams and currents. Seen from space, these features show as bands and eddies in the cloud system, and are particularly visible on Jupiter and Saturn.
Comparative planetary science
Planetary science frequently makes use of the method of comparison to give a greater understanding of the object of study. This can involve comparing the dense atmospheres of Earth and Saturn's moon Titan, the evolution of outer Solar System objects at different distances from the Sun, or the geomorphology of the surfaces of the terrestrial planets, to give only a few examples.
The main comparison that can be made is to features on the Earth, as it is much more accessible and allows a much greater range of measurements to be made. Earth analogue studies are particularly common in planetary geology, geomorphology, and also in atmospheric science.
- Earth and Planetary Science Letters
- Earth, Moon, and Planets
- Geochimica et Cosmochimica Acta
- Journal of Geophysical Research—Planets
- Meteoritics and Planetary Science
- Planetary and Space Science
- Division for Planetary Sciences (DPS) of the American Astronomical Society
- American Geophysical Union
- Meteoritical Society
- Lunar and Planetary Science Conference (LPSC), organized by the Lunar and Planetary Institute in Houston. Held annually since 1970, occurs in March.
- Division for Planetary Sciences (DPS) meeting held annually since 1970 at a different location each year, predominantly within the mainland US. Occurs around October.
- American Geophysical Union (AGU) annual Fall meeting in December in San Francisco.
- American Geophysical Union (AGU) Joint Assembly (co-sponsored with other societies) in April–May, in various locations around the world.
- Meteoritical Society annual meeting, held during the Northern Hemisphere summer, generally alternating between North America and Europe.
- European Planetary Science Congress (EPSC), held annually around September at a location within Europe.
Smaller workshops and conferences on particular fields occur worldwide throughout the year.
This non-exhaustive list includes those institutions and universities with major groups of people working in planetary science. Alphabetical order is used.
National space agencies
- Ames (NASA)
- Canadian Space Agency (CSA). Annual budget CAD $488.7 million (2013–2014).
- China National Space Administration (CNSA) (People's Republic of China). Budget $0.5-1.3 Billion (est.).
- Centre national d'études spatiales (CNES), the French National Centre of Space Research. Budget €1.920 Billion (2012).
- Deutsches Zentrum für Luft- und Raumfahrt e.V. (German, abbreviated DLR), the German Aerospace Center. Budget $2 Billion (2010).
- European Space Agency (ESA). Budget $5.51 Billion (2013).
- Russian Federal Space Agency Budget $5.61 Billion (2013).
- GSFC (NASA),
- Indian Space Research Organisation (ISRO),
- Israel Space Agency (ISA),
- Italian Space Agency Budget ~$1 Billion (2010).
- Japan Aerospace Exploration Agency (JAXA). Budget $2.15 Billion (2012).
- JPL (NASA),
- NASA: Considerable number of research groups, including the JPL, GSFC, Ames. Budget $18.724 Billion (2011).
- National Space Organization (Republic of China in Taiwan).
- UK Space Agency (UKSA).
- Arctic Planetary Science Institute
- The Australian National University's Planetary Science Institute
- Brown University Planetary Geosciences Group
- Caltech's Division of Geological and Planetary Sciences and Planetary Sciences subdivision
- Cornell University's Space and Planetary Science
- Curtin University's School of Earth and Planetary Sciences
- Florida Institute of Technology's Department of Physics and Space Sciences
- Johns Hopkins University's Applied Physics Laboratory
- Lunar and Planetary Institute
- Max Planck Institute for Solar System Research's Department Planets and Comets
- MIT Dept. of Earth, Atmospheric and Planetary Sciences
- Open University Planetary and Space Sciences Research Institute
- Planetary Science Institute
- Stony Brook University's Geosciences Department and soon to open Center for Planetary Exploration
- UCL/Birkbeck's Centre for Planetary Sciences
- University of Arizona's Lunar and Planetary Lab
- University of Arkansas's Center for Space and Planetary Sciences
- University of California Los Angeles's Department of Earth, Planetary, and Space Sciences
- University of California Santa Cruz's Department of Earth & Planetary Sciences
- University of Hawaii's Hawaii Institute of Geophysics and Planetology
- University of Copenhagen's Center for Planetary Research
- University of Central Florida Planetary Sciences Group
- University of British Columbia Department of Earth, Ocean and Atmospheric Sciences
- University of Western Ontario's Centre for Planetary Science and Exploration
- University of Tennessee Department of Earth and Planetary Sciences
- University of Colorado's Department of Astrophysical and Planetary Sciences
- INAF– Istituto di Astrofisica e Planetologia Spaziali
- Selenography - study of the surface and physical features of the Moon
- Theoretical planetology
- Timeline of Solar System exploration
- Taylor, Stuart Ross (29 July 2004). "Why can't planets be like stars?". Nature. 430 (6999): 509. Bibcode:2004Natur.430..509T. doi:10.1038/430509a. PMID 15282586.
- Hippolytus (Antipope); Origen (1921). Philosophumena (Digitized 9 May 2006). 1. Translation by Francis Legge, F.S.A. Original from Harvard University. Society for Promoting Christian Knowledge. Retrieved 22 May 2009.
- Taylor, Stuart Ross (1994). "Silent upon a peak in Darien". Nature. 369 (6477): 196–7. Bibcode:1994Natur.369..196T. doi:10.1038/369196a0.
- Stern, Alan. "Ten Things I Wish We Really Knew In Planetary Science". Retrieved 2009-05-22.
- Carr, Michael H., Saunders, R. S., Strom, R. G., Wilhelms, D. E. 1984. The Geology of the Terrestrial Planets. NASA.
- Morrison, David. 1994. Exploring Planetary Worlds. W. H. Freeman. ISBN 0-7167-5043-0 | <urn:uuid:a97a6761-8a32-4cb5-a390-59bc419258d3> | 2.8125 | 3,659 | Knowledge Article | Science & Tech. | 28.151555 | 95,537,073 |
The team will test the predictions of current theories of gravity, including Einstein's theory of General Relativity. The funding is provided in the form of a Synergy Grant, the largest and most competitive type of grant of the ERC.
General relativistic ray tracing simulations of the shadow of the event horizon of a black hole.
M. Moscibrodzka & H. Falcke, Radboud University Nijmegen.
The team led by three principal investigators, Heino Falcke, Radboud University Nijmegen, Michael Kramer, Max-Planck-Institut für Radioastronomie, and Luciano Rezzolla, Goethe University in Frankfurt, hopes to measure the shadow cast by the event horizon of the black hole in the center of the Milky Way, find new radiopulsars near this black hole, and combine these measurements with advanced computer simulations of the behaviour of light and matter around black holes as predicted by theories of gravity.
They will combine several telescopes around the globe to peer into the heart of our own Galaxy, which hosts a mysterious radio source, called Sagittarius A* and which is considered to be the central supermassive black hole.
Synergy grants are awarded by the ERC on the basis of scientific excellence in an intricate and highly competitive selection procedure. The grants have a maximum limit of 15 Million Euros and require the collaboration of 2-4 principal investigators. In the current selection round the ERC honoured 13 out of 449 funding proposals, which corresponds to a success rate of less than 3%. Proposals were submitted from all areas of European science. This is the first time an astrophysics proposal has been awarded.

The project in depth
“While most astrophysicists believe black holes exists, nobody has actually ever seen one”, says Heino Falcke, Professor in radio astronomy at Radboud University in Nijmegen and ASTRON, The Netherlands. “The technology is now advanced enough that we can actually image black holes and check if they truly exist as predicted: If there is no event horizon, there are no black holes”.
Measure the tiniest shadow

So, if black holes are black and are hard to catch on camera, where should one look? The scientists want to peer into the heart of our own Galaxy, which hosts a mysterious radio source, called Sagittarius A*. The object is known to have a mass of around 4 million times the mass of the Sun and is considered to be the central supermassive black hole of the Milky Way.
As gaseous matter is attracted towards the event horizon by the black hole’s gravitational attraction, strong radio emission is produced before the gas disappears. The event horizon should then cast a dark shadow on that bright emission. Given the huge distance to the centre of the Milky Way, the shadow is equivalent to the size of an apple on the moon seen from the earth.
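As a hedged back-of-envelope check of that comparison (the mass, distance, and shadow size below are round literature values, not taken from this article):

```latex
% Sgr A*: M ~ 4e6 solar masses, d ~ 8 kpc ~ 2.5e20 m; shadow ~ 5 Schwarzschild radii.
R_s = \frac{2GM}{c^{2}} \approx 1.2 \times 10^{10}\ \mathrm{m}, \qquad
\theta \approx \frac{5R_s}{d}
       \approx \frac{6 \times 10^{10}\ \mathrm{m}}{2.5 \times 10^{20}\ \mathrm{m}}
       \approx 2.4 \times 10^{-10}\ \mathrm{rad} \approx 50\ \mu\mathrm{as}
```

That is indeed about the angle subtended by a roughly 10 cm object at the Moon's distance of about 4 × 10⁸ m.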
However, by combining high-frequency radio telescopes around the world, in a technique called very long baseline interferometry, or VLBI, even such a tiny feature is in principle detectable. Falcke first proposed this experiment 15 years ago and now an international effort is forming to build a global “Event Horizon Telescope” to realize it. Falcke is convinced: “With this grant from the ERC and the excellent expertise in Europe, we will be able to make it happen together with our international partners”.
Find more radio pulsars
In addition, the group wants to use the same radio telescopes to find and measure pulsars around the very same black hole. Pulsars are rapidly spinning neutron stars, which can be used as highly accurate natural clocks in space. “A pulsar around a black hole would be extremely valuable”, explains Michael Kramer, managing director of the Max-Planck-Institut für Radioastronomie in Bonn. “They allow us to determine the deformation of space and time caused by black holes and measure their properties with unprecedented precision”. However, while radio pulsars are ubiquitous in our Milky Way, surprisingly none had been found in the centre of the Milky Way for decades. Only recently Kramer and his team found the very first radio pulsar around Sagittarius A*. “We suspect there are many more radio pulsars, and if they are there we will find them”, says Kramer.
Behaviour of light and matter

But how will scientists be really sure that there is a black hole in our Milky Way and not something else that behaves in a very similar way? To answer this question, the scientists will combine the information from the black hole shadow and from the motion of pulsars and stars around Sagittarius A* with detailed computer simulations of the behaviour of light and matter around black holes as predicted by theory.
“We have made enormous progress in computational astrophysics in recent years”, states Luciano Rezzolla, Professor of theoretical astrophysics at the Goethe University in Frankfurt and head of the gravitational-wave modelling group at the Max-Planck-Institut für Gravitationsphysik. “We can now calculate very precisely how space and time are warped by the immense gravitational fields of a black hole, and determine how light and matter propagate around black holes”, he remarks. “Einstein’s theory of General Relativity is the best theory of gravity we know, but it is not the only one. We will use these observations to find out if black holes, one of the most cherished astrophysical objects, exist or not. Finally, we have the opportunity to test gravity in a regime that until recently belonged to the realm of science fiction; it will be a turning point in modern science”, says Rezzolla.
Partners in Europe
The principal investigators will closely collaborate with a number of groups throughout Europe. Team members in the ERC grant are: Robert Laing from the European Southern Observatory (ESO) in Garching, European project scientist of ALMA, a new high-frequency radio telescope that the team seeks to use for their purpose;
Huib van Langevelde, director of the Joint Institute for VLBI in Europe (JIVE) and Professor of Galactic radio astronomy at the University of Leiden.
The efforts of the Max-Planck-Institut für Radioastronomie will be conducted jointly with the VLBI group and the high-frequency radio astronomy groups at the institute and their directors Anton Zensus and Karl Menten.
The scientists also want to make use of the two major European millimeter radio observatories (NOEMA and the IRAM 30m telescope) operated by IRAM, a joint German/French/Spanish radio astronomy institute.
The BlackHoleCam team will closely collaborate with the Event Horizon Telescope project, led by Shep Doeleman (MIT Haystack Observatory, Boston).

Principal Investigators:
Luciano Rezzolla, Goethe-Universität Frankfurt and Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Potsdam.

Project Title:

E-Mail: email@example.com
Dr. Elke Müller
Norbert Junkes | Max-Planck-Institut
16.07.2018 | Physics and Astronomy | <urn:uuid:6ae198c4-c114-4cba-b8a4-2214d4f96cac> | 2.734375 | 2,170 | Content Listing | Science & Tech. | 35.128631 | 95,537,118 |
(PhysOrg.com) -- Take a gold sample the size of the head of a push pin, shoot a laser through it, and suddenly more than 100 billion particles of anti-matter appear.
The anti-matter, also known as positrons, shoots out of the target in a cone-shaped plasma "jet."
This new ability to create a large number of positrons in a small laboratory opens the door to several fresh avenues of anti-matter research, including an understanding of the physics underlying various astrophysical phenomena such as black holes and gamma ray bursts.
Anti-matter research also could reveal why more matter than anti-matter survived the Big Bang at the start of the universe.
"We've detected far more anti-matter than anyone else has ever measured in a laser experiment," said Hui Chen, a Livermore researcher who led the experiment. "We've demonstrated the creation of a significant number of positrons using a short-pulse laser." Chen and her colleagues used a short, ultra-intense laser to irradiate a millimeter-thick gold target. "Previously, we concentrated on making positrons using paper-thin targets," said Scott Wilks, who designed and modeled the experiment using computer codes. "But recent simulations showed that millimeter-thick gold would produce far more positrons. We were very excited to see so many of them."
In the experiment, the laser ionizes and accelerates electrons, which are driven right through the gold target. On their way, the electrons interact with the gold nuclei, which serve as a catalyst to create positrons. The electrons give off packets of pure energy, which decays into matter and anti-matter, following the predictions by Einstein's famous equation that relates matter and energy. By concentrating the energy in space and time, the laser produces positrons more rapidly and in greater density than ever before in the laboratory.
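To make the energy bookkeeping concrete (standard textbook numbers, not measurements from this experiment), creating an electron-positron pair requires at least the rest energy of the two particles:

```latex
E_{\min} = 2 m_e c^{2} = 2 \times 0.511\ \mathrm{MeV} \approx 1.022\ \mathrm{MeV}
```

So the gamma-ray photons emitted by the laser-driven electrons must each carry roughly an MeV or more, and the conversion takes place in the field of a gold nucleus, which absorbs the recoil momentum; this is why the heavy nuclei act as a catalyst.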
"By creating this much anti-matter, we can study in more detail whether anti-matter really is just like matter, and perhaps gain more clues as to why the universe we see has more matter than anti-matter," said Peter Beiersdorfer, a lead Livermore physicist working with Chen.
Particles of anti-matter are almost immediately annihilated by contact with normal matter, and converted to pure energy (gamma rays). There is considerable speculation as to why the observable universe is apparently almost entirely matter, whether other places are almost entirely anti-matter, and what might be possible if anti-matter could be harnessed. Normal matter and anti-matter are thought to have been in balance in the very early universe, but due to an "asymmetry" the anti-matter decayed or was annihilated, and today very little anti-matter is seen.
Over the years, physicists have theorized about anti-matter, but it wasn't confirmed to exist experimentally until 1932. High-energy cosmic rays impacting Earth's atmosphere produce minute quantities of anti-matter in the resulting jets, and physicists have learned to produce modest amounts of anti-matter using traditional particle accelerators. Anti-matter similarly may be produced in regions like the center of the Milky Way and other galaxies, where very energetic celestial events occur. The presence of the resulting anti-matter is detectable by the gamma rays produced when positrons are destroyed when they come into contact with nearby matter.
Laser production of anti-matter isn't entirely new either. Livermore researchers detected anti-matter about 10 years ago in experiments on the since-decommissioned Nova "petawatt" laser - about 100 particles. But with a better target and a more sensitive detector, this year's experiments directly detected more than 1 million particles. From that sample, the scientists infer that around 100 billion positron particles were produced in total.
Until they annihilate, positrons (anti-electrons) behave much like electrons (just with an opposite charge), and that's how Chen and her colleagues detected them. They took a normal electron detector (a spectrometer) and equipped it to detect particles with opposite polarity as well.
"We've entered a new era," Beiersdorfer said. "Now, that we've looked for it, it's almost like it hit us right on the head. We envision a center for antimatter research, using lasers as cheaper anti-matter factories."
Chen will present her work at the American Physical Society's Division of Plasma Physics meeting Nov. 17-21 at the Hyatt Regency Reunion in Dallas. S.C. Wilks, E. Liang, J. Myatt, K. Cone ,L. Elberson, D.D. Meyerhofer, M. Schneider, R. Shepherd, D. Stafford, R. Tommasini, P. Beiersdorfer are the collaborators on this project.
Provided by Lawrence Livermore National Laboratory
Explore further: Antimatter plasma reveals secrets of deep space signals | <urn:uuid:f51b0b3d-37e6-4545-aceb-3fb428b45ed5> | 3.8125 | 1,017 | News Article | Science & Tech. | 37.571251 | 95,537,127 |
| Time limit | Memory limit | Submissions | Accepted | Solvers | Acceptance rate |
| 2 seconds | 512 MB | 74 | 50 | 42 | 72.414% |
Bessie the cow, always a fan of shiny objects, has taken up a hobby of mining diamonds in her spare time! She has collected \(N\) diamonds (\(N \leq 1000\)) of varying sizes, and she wants to arrange some of them in a display case in the barn.
Since Bessie wants the diamonds in the case to be relatively similar in size, she decides that she will not include two diamonds in the case if their sizes differ by more than \(K\) (two diamonds can be displayed together in the case if their sizes differ by exactly \(K\)). Given \(K\), please help Bessie determine the maximum number of diamonds she can display in the case.
The first line of the input file contains \(N\) and \(K\) (\(0 \leq K \leq 10,000\)). The next \(N\) lines each contain an integer giving the size of one of the diamonds. All sizes will be positive and will not exceed \(10,000\).
Output a single positive integer, telling the maximum number of diamonds that Bessie can showcase.
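A minimal solution sketch (not an official judge solution; the I/O handling is one reasonable choice): sort the sizes, then slide a two-pointer window so that the largest and smallest sizes in the window differ by at most K; the answer is the widest such window. For the sample input below it prints 4.

```python
import sys

def max_diamonds(k, sizes):
    """Largest group of diamonds whose sizes pairwise differ by at most k."""
    sizes.sort()
    best = 0
    lo = 0
    for hi in range(len(sizes)):           # grow the window on the right
        while sizes[hi] - sizes[lo] > k:   # shrink from the left until valid
            lo += 1
        best = max(best, hi - lo + 1)
    return best

def main():
    data = sys.stdin.read().split()
    n, k = int(data[0]), int(data[1])
    sizes = [int(x) for x in data[2:2 + n]]
    print(max_diamonds(k, sizes))

if __name__ == "__main__":
    main()
```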
5 3
1
6
4
3
1 | <urn:uuid:b52a2bca-b37b-434a-b9c6-4dbb7adfec73> | 2.796875 | 326 | Tutorial | Science & Tech. | 77.68569 | 95,537,141 |
Mechanochemistry Assisted Asymmetric Organocatalysis: A Sustainable Approach
News Jan 30, 2013
Green chemistry involves innovation in chemical research and engineering that encourages the design of processes to minimize the use and production of hazardous materials and to reduce the use of energy. These requirements are fulfilled by preventing or minimizing the use of volatile and toxic solvents and reagents, minimizing chemical waste, developing atom-economical processes, and using recyclable supported catalysts that are less toxic, biodegradable, and usable at low loading.
To address many of these issues, mechanochemical methods such as ball-milling and grinding with pestle and mortar have emerged as powerful techniques. The mechanical energy generated by grinding two solids, or one solid and one liquid substance, creates new surfaces and cracks by breaking the order of the crystalline structure, and this drives the formation of products.
Grinding and ball-milling are widely applied to pulverize minerals into fine particles, in the preparation and modification of inorganic solids. Recently, their use in synthetic organic chemistry has increased considerably, due to the need for development of sustainable methodologies, and has been widely used in solvent-free non-asymmetric transformations.
On the other hand, demand for the development of stereoselective syntheses of organic molecules has noticeably increased in recent times. In this regard, catalytic asymmetric synthesis involving the use of chiral organocatalysts has emerged as a powerful tool as asymmetric organocatalysis has grown from infancy to maturity. The use of organocatalysts for catalysing asymmetric reactions offers several advantages, such as lower toxicity compared to metal analogues, robustness, no requirement for an inert atmosphere, high stereoselectivity, and the ability to synthesize opposite enantiomers by using enantiomeric catalysts. Organocatalysts also provide insight into biological catalytic processes, as a number of these catalysts work by the phenomenon of enzyme mimicry. These advantages of chiral organocatalysts also meet many of the requirements of green chemistry.
Recently developed organocatalytic asymmetric transformations assisted by mechanochemical techniques have proved to be an excellent route to atom-economical stereoselective transformations under solvent-free reaction conditions. This review gives an overview of solvent-free asymmetric organocatalytic transformations assisted by mechanochemical techniques, viz. ball-milling and grinding with pestle and mortar.
This article is published online in the Beilstein Journal of Organic Chemistry and is free to access.
£1.6m investment recognises BioAscent’s unique capabilities.READ MORE | <urn:uuid:0c08fb55-d131-4f4f-a52e-2043cba4da88> | 2.828125 | 617 | Truncated | Science & Tech. | -4.305969 | 95,537,144 |
7.9. Creating Databases and Accounts on a MySQL Server
Many software packages, including web applications such as the Serendipity blog software (http://www.s9y.org/), use MySQL to store data. In order to use these programs, you will need to create a MySQL database and access account.
7.9.1. How Do I Do That?
First, you'll need to select names for your database and access account; for this example, let's use chrisblog for the database name and chris for the access account. Both names should start with a letter, contain no spaces, and be composed from characters that can be used in filenames.
# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2 to server version: 5.0.18

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database chrisblog;
Query OK, 1 row affected (0.01 sec)

mysql> grant all privileges on chrisblog.* to 'chris'@'localhost' identified by 'SecretPassword';
Query OK, 0 rows affected (0.00 sec)

mysql> quit
Bye
Figure 7-23. Serendipity Installation verification page
Figure 7-24. Serendipity Installation page
Figure 7-25. Serendipity Installation confirmation page
Figure 7-26. Serendipity blog front page
7.9.2. How Does It Work?
MySQL is a Structured Query Language (SQL) database server. It provides rapid access to large sets of structured data, such as customer lists, sports scores, student marks, product catalogs, blog comments, or event schedules. The MySQL database runs as a server daemon named mysqld, and many different types of software can connect to the server to access data.
Connections to the database server are made through the network socket /var/lib/mysql/mysql.sock (local connections) or on the TCP port 3306 (remote connections). If the MySQL server is running on the same machine as your application, you should leave port 3306 closed in your firewall configuration, but you must open it if you separate the MySQL server and the application onto different machines (which you might do for performance reasons if you're using the database heavily).
MySQL data is stored in /var/lib/mysql; each database is stored in a separate subdirectory.
7.9.3. What About...
7.9.3.1. ...creating my own scripts and programs that access MySQL data?
Most scripting and programming languages have modules to access MySQL data. For example, you can use the database driver (DBD) module DBD::mysql to access the basic database interface (DBI) abstraction layer to work with databases in Perl. For details on writing software that accesses a MySQL database, see Chapter 22 in the MySQL documentation (http://dev.mysql.com/doc/refman/5.0/en/apis.html).
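The book points to Perl's DBD::mysql here; as an illustrative sketch only, the same database and account could also be used from Python with the mysql-connector-python package (the notes table below is a made-up example, not part of any schema from this chapter):

```python
import mysql.connector  # pip install mysql-connector-python

# Connect with the account created earlier in this section.
conn = mysql.connector.connect(
    host="localhost",
    user="chris",
    password="SecretPassword",
    database="chrisblog",
)
cur = conn.cursor()

# A throwaway table, just to demonstrate round-tripping data.
cur.execute(
    "CREATE TABLE IF NOT EXISTS notes ("
    " id INT AUTO_INCREMENT PRIMARY KEY,"
    " body VARCHAR(255) NOT NULL)"
)
cur.execute("INSERT INTO notes (body) VALUES (%s)", ("hello from Python",))
conn.commit()  # make the insert permanent

cur.execute("SELECT id, body FROM notes")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```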
7.9.4. Where Can I Learn More? | <urn:uuid:3c88d9e6-d4ec-4d0c-b868-e6847a5391f3> | 2.9375 | 665 | Documentation | Software Dev. | 67.142128 | 95,537,148 |
Cluster chemistry is one of the latest topics in inorganic chemistry. In chemistry, a cluster compound is a compound containing a triangular or larger closed polyhedron of metal atoms. Clusters exist in various stoichiometries and nuclearities. For example, carbon and boron atoms form fullerene and borane clusters, and transition metals and main group elements form robust clusters. The term cluster was introduced by F.A. Cotton to refer to compounds containing metal–metal bonds. Topics in cluster chemistry include: atomic clusters, molecular clusters, transition metal carbonyl clusters, transition metal halide clusters, boron hydrides, Fe-S clusters in biology, Zintl clusters, metalloid clusters, and catalysis by metal carbonyl clusters.
Copyright © 2018 All rights reserved. iMedPub LTD Last revised : July 19, 2018 | <urn:uuid:187f9e55-f99a-4386-97fa-a0fe238d8f03> | 3.015625 | 190 | Knowledge Article | Science & Tech. | 30.713685 | 95,537,150 |
CHAMPAIGN, Ill. -- Researchers have produced a "human scale" demonstration of a new phase of matter called quadrupole topological insulators that was recently predicted using theoretical physics. These are the first experimental findings to validate this theory.
The researchers report their findings in the journal Nature.
The team's work with QTIs was born out of the decade-old understanding of the properties of a class of materials called topological insulators. "TIs are electrical insulators on the inside and conductors along their boundaries, and may hold great potential for helping build low-power, robust computers and devices, all defined at the atomic scale," said mechanical science and engineering professor and senior investigator Gaurav Bahl.
The uncommon properties of TIs make them a special form of electronic matter. "Collections of electrons can form their own phases within materials. These can be familiar solid, liquid and gas phases like water, but they can also sometimes form more unusual phases like a TI," said co-author and physics professor Taylor Hughes.
TIs typically exist in crystalline materials and other studies confirm TI phases present in naturally occurring crystals, but there are still many theoretical predictions that need to be confirmed, Hughes said.
One such prediction was the existence of a new type of TI having an electrical property known as a quadrupole moment. "Electrons are single particles that carry charge in a material," said physics graduate student Wladimir Benalcazar. "We found that electrons in crystals can collectively arrange to give rise not only to charge dipole units - that is, pairings of positive and negative charges - but also high-order multipoles in which four or eight charges are brought together into a unit. The simplest member of these higher-order classes are quadrupoles in which two positive and two negative charges are coupled."
It is not currently feasible to engineer a material atom by atom, let alone control the quadrupolar behavior of electrons. Instead, the team built a workable-scale analogue of a QTI using a material created from printed circuit boards. Each circuit board holds a square of four identical resonators - devices that absorb electromagnetic radiation at a specific frequency. The boards are arranged in a grid pattern to create the full crystal analogue.
"Each resonator behaves as an atom, and the connections between them behave as bonds between atoms," said Kitt Peterson, the lead author and an electrical engineering graduate student. "We apply microwave radiation to the system and measure how much is absorbed by each resonator, which tells us about how electrons would behave in an analogous crystal. The more microwave radiation is absorbed by a resonator, the more likely it is to find an electron on the corresponding atom."
The detail that makes this a QTI and not a TI is a result of the specifics of the connections between resonators, the researchers said.
"The edges of a QTI are not conductive like you would see in a typical TI," Bahl said, "Instead only the corners are active, that is, the edges of the edges, and are analogous to the four localized point charges that would form what is known as a quadrupole moment. Exactly as Taylor and Wladimir predicted ."
"We measured how much microwave radiation each resonator within our QTI absorbed, confirming the resonant states in a precise frequency range and located precisely in the corners," Peterson said. "This pointed to the existence of predicted protected states that would be filled by electrons to form four corner charges."
Those corner charges of this new phase of electronic matter may be capable of storing data for communications and computing. "That may not seem realistic using our 'human scale' model," Hughes said. "However, when we think of QTIs on the atomic scale, tremendous possibilities become apparent for devices that perform computation and information processing, possibly even at scales below that we can achieve today."
The researchers said the agreement between experiment and prediction offered promise that scientists are beginning to understand the physics of QTIs well enough for practical use.
"As theoretical physicists, Wladimir and I could predict the existence of this new form of matter, but no material has been found to have these properties so far," Hughes said. "Collaborating with engineers helped turn our prediction into reality."
The National Science Foundation and U.S. Office of Naval Research supported this study.
To reach Gaurav Bahl, call 217-300-2194; firstname.lastname@example.org.
To reach Taylor Hughes, call 217-333-1195; email@example.com.
The paper "A quantized microwave quadrupole insulator with topologically protected corner states" is available online and from the U. of I. News Bureau. | <urn:uuid:98ab3d43-4330-431a-aa78-bc9113ff231b> | 3.34375 | 975 | News (Org.) | Science & Tech. | 31.773518 | 95,537,155 |
When Earth Day started in 1970 in the United States, probably very few could imagine the impact of technology not only on the environment (both good and bad) but also on the world of mapping. Where once, printed atlases were just about the only means of exploring environmental issues now, live web maps offer a window on the mashed up world we live in.
The map was launched on 20th April and how it is used is something of an experiment...who looks at the map, who contributes, where do they come from and can we discern regional patterns in priorities?
Check it out, have a vote and think a little about Earth Day!
Click here for the Esri PollMap Earth Day Edition
1 day ago | <urn:uuid:ab8c3723-7d44-466b-b7f2-aaf44372c31d> | 3.125 | 195 | Personal Blog | Science & Tech. | 60.412614 | 95,537,161 |
- xy = yx (commutative law)
- (xy)x² = x(yx²) (Jordan identity).

The product of two elements x and y in a Jordan algebra is also denoted x ∘ y, particularly to avoid confusion with the product of a related associative algebra. The axioms imply that a Jordan algebra is power-associative and satisfies the following generalization of the Jordan identity: (xᵐy)xⁿ = xᵐ(yxⁿ) for all positive integers m and n.
Jordan algebras were first introduced by Pascual Jordan (1933) to formalize the notion of an algebra of observables in quantum mechanics. They were originally called "r-number systems", but were renamed "Jordan algebras" by Abraham Adrian Albert (1946), who began the systematic study of general Jordan algebras.
Special Jordan algebras
Given an associative algebra A (not of characteristic 2), one can construct a Jordan algebra A+ using the same underlying vector space and addition. Notice first that an associative algebra is a Jordan algebra if and only if it is commutative. If it is not commutative we can define a new multiplication on A to make it commutative, and in fact make it a Jordan algebra. The new multiplication x ∘ y is the Jordan product: x ∘ y = (xy + yx)/2.
This defines a Jordan algebra A+, and we call these Jordan algebras, as well as any subalgebras of these Jordan algebras, special Jordan algebras. All other Jordan algebras are called exceptional Jordan algebras. The Shirshov–Cohn theorem states that any Jordan algebra with two generators is special. Related to this, Macdonald's theorem states that any polynomial in three variables, which has degree one in one of the variables, and which vanishes in every special Jordan algebra, vanishes in every Jordan algebra.
Hermitian Jordan algebras
If (A, σ) is an associative algebra with an involution σ, then if σ(x) = x and σ(y) = y it follows that σ(xy + yx) = xy + yx.
Thus the set of all elements fixed by the involution (sometimes called the hermitian elements) form a subalgebra of A+ which is sometimes denoted H(A,σ).
1. The set of self-adjoint real, complex, or quaternionic matrices with multiplication x ∘ y = (xy + yx)/2 forms a special Jordan algebra.
2. The set of 3×3 self-adjoint matrices over the octonions, again with multiplication x ∘ y = (xy + yx)/2, is a 27-dimensional, exceptional Jordan algebra (it is exceptional because the octonions are not associative). This was the first example of an Albert algebra. Its automorphism group is the exceptional Lie group F₄. Since over the complex numbers this is the only simple exceptional Jordan algebra up to isomorphism, it is often referred to as "the" exceptional Jordan algebra. Over the real numbers there are three isomorphism classes of simple exceptional Jordan algebras.
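As a quick numerical sanity check of example 1 (a sketch using NumPy, not part of the original article), one can verify the commutative law, the Jordan identity, and its generalization on random real symmetric matrices:

```python
import numpy as np

def jordan(x, y):
    """Jordan product x ∘ y = (xy + yx)/2, built from matrix multiplication."""
    return (x @ y + y @ x) / 2

rng = np.random.default_rng(42)
a = rng.standard_normal((4, 4)); a = (a + a.T) / 2   # self-adjoint (symmetric)
b = rng.standard_normal((4, 4)); b = (b + b.T) / 2

a2 = jordan(a, a)                                    # a ∘ a = a²
print(np.allclose(jordan(a, b), jordan(b, a)))       # commutative law
print(np.allclose(jordan(jordan(a, b), a2),          # (a∘b)∘a² = a∘(b∘a²)
                  jordan(a, jordan(b, a2))))

# Generalized identity (xᵐy)xⁿ = xᵐ(yxⁿ), here with m = 3, n = 2; for a single
# symmetric matrix, Jordan powers agree with ordinary matrix powers.
am, an = np.linalg.matrix_power(a, 3), np.linalg.matrix_power(a, 2)
print(np.allclose(jordan(jordan(am, b), an), jordan(am, jordan(b, an))))
```

All three checks print True (up to floating-point tolerance), as the theory requires for a special Jordan algebra.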
Derivations and structure algebra
A derivation of a Jordan algebra A is an endomorphism D of A such that D(xy) = D(x)y + xD(y). The derivations form a Lie algebra der(A). The Jordan identity implies that if x and y are elements of A, then the endomorphism sending z to x(yz) − y(xz) is a derivation. Thus the direct sum of A and der(A) can be made into a Lie algebra, called the structure algebra of A, str(A).
A simple example is provided by the Hermitian Jordan algebras H(A,σ). In this case any element x of A with σ(x)=−x defines a derivation. In many important examples, the structure algebra of H(A,σ) is A.
Derivation and structure algebras also form part of Tits' construction of the Freudenthal magic square.
Formally real Jordan algebras
A (possibly nonassociative) algebra over the real numbers is said to be formally real if it satisfies the property that a sum of n squares can only vanish if each one vanishes individually. In 1932, Jordan attempted to axiomatize quantum theory by saying that the algebra of observables of any quantum system should be a formally real algebra which is commutative (xy = yx) and power-associative (the associative law holds for products involving only x, so that powers of any element x are unambiguously defined). He proved that any such algebra is a Jordan algebra.
Not every Jordan algebra is formally real, but Jordan, von Neumann & Wigner (1934) classified the finite-dimensional formally real Jordan algebras. Every formally real Jordan algebra can be written as a direct sum of so-called simple ones, which are not themselves direct sums in a nontrivial way. In finite dimensions, the simple formally real Jordan algebras come in four infinite families, together with one exceptional case:
- The Jordan algebra of n×n self-adjoint real matrices, as above.
- The Jordan algebra of n×n self-adjoint complex matrices, as above.
- The Jordan algebra of n×n self-adjoint quaternionic matrices. as above.
- The Jordan algebra freely generated by Rn with the relations
- where the right-hand side is defined using the usual inner product on Rn. This is sometimes called a spin factor or a Jordan algebra of Clifford type.
- The Jordan algebra of 3×3 self-adjoint octonionic matrices, as above (an exceptional Jordan algebra called the Albert algebra).
Of these possibilities, so far it appears that nature makes use only of the n×n complex matrices as algebras of observables. However, the spin factors play a role in special relativity, and all the formally real Jordan algebras are related to projective geometry.
If e is an idempotent in a Jordan algebra A (e2 = e) and R is the operation of multiplication by e, then
- R(2R − 1)(R − 1) = 0
so the only eigenvalues of R are 0, 1/2, 1. If the Jordan algebra A is finite-dimensional over a field of characteristic not 2, this implies that it is a direct sum of subspaces A = A0(e) ⊕ A1/2(e) ⊕ A1(e) of the three eigenspaces. This decomposition was first considered by Jordan, von Neumann & Wigner (1934) for totally real Jordan algebras. It was later studied in full generality by Albert (1947) and called the Peirce decomposition of A relative to the idempotent e.
Infinite-dimensional Jordan algebras
In 1979, Efim Zelmanov classified infinite-dimensional simple (and prime non-degenerate) Jordan algebras. They are either of Hermitian or Clifford type. In particular, the only exceptional simple Jordan algebras are finite-dimensional Albert algebras, which have dimension 27.
Jordan operator algebras
These axioms guarantee that the Jordan algebra is formally real, so that, if a sum of squares of terms is zero, those terms must be zero. The complexifications of JB algebras are called Jordan C* algebras or JB* algebras. They have been used extensively in complex geometry to extend Koecher's Jordan algebraic treatment of bounded symmetric domains to infinite dimensions. Not all JB algebras can be realized as Jordan algebras of self-adjoint operators on a Hilbert space, exactly as in finite dimensions. The exceptional Albert algebra is the common obstruction.
The Jordan algebra analogue of von Neumann algebras is played by JBW algebras. These turn out to be JB algebras which, as Banach spaces, are the dual spaces of Banach spaces. Much of the structure theory of von Neumann algebras can be carried over to JBW algebras. In particular the JBW factors—those with center reduced to R—are completely understood in terms of von Neumann algebras. Apart from the exceptional Albert algebra, all JWB factors can be realised as Jordan algebras of self-adjoint operators on a Hilbert space closed in the weak operator topology. Of these the spin factors can be constructed very simply from real Hilbert spaces. All other JWB factors are either the self-adjoint part of a von Neumann factor or its fixed point subalgebra under a period 2 *-antiautomorphism of the von Neumann factor.
A Jordan ring is a generalization of Jordan algebras, requiring only that the Jordan ring be over a general ring rather than a field. Alternatively one can define a Jordan ring as a commutative nonassociative ring that respects the Jordan identity.
Any -graded associative algebra becomes a Jordan superalgebra with respect to the graded Jordan brace
Jordan simple superalgebras over an algebraically closed field of characteristic 0 were classified by Kac (1977). They include several families and some exceptional algebras, notably and .
The concept of J-structure was introduced by Springer (1973) to develop a theory of Jordan algebras using linear algebraic groups and axioms taking the Jordan inversion as basic operation and Hua's identity as a basic relation. In characteristic not equal to 2 the theory of J-structures is essentially the same as that of Jordan algebras.
Quadratic Jordan algebras
Quadratic Jordan algebras are a generalization of (linear) Jordan algebras introduced by Kevin McCrimmon (1966). The fundamental identities of the quadratic representation of a linear Jordan algebra are used as axioms to define a quadratic Jordan algebra over a field of arbitrary characteristic. There is a uniform description of finite-dimensional simple quadratic Jordan algebras, independent of characteristic: in characteristic not equal to 2 the theory of quadratic Jordan algebras reduces to that of linear Jordan algebras.
- Freudenthal algebra
- Jordan triple system
- Jordan pair
- Kantor–Koecher–Tits construction
- Scorza variety
- Jacobson (1968), p.35–36, specifically remark before (56) and theorem 8.
- McCrimmon (2004) p.100
- McCrimmon (2004) p.99
- Springer-Veldkamp (2000), 5.8, p. 153
- McCrimmon (2004) pp. 99 et seq,235 et seq
- McCrimmon (2004) pp.9–10
- Albert, A. Adrian (1946), "On Jordan algebras of linear transformations", Transactions of the American Mathematical Society, 59 (3): 524–555, doi:10.1090/S0002-9947-1946-0016759-3, ISSN 0002-9947, JSTOR 1990270, MR 0016759
- Albert, A. Adrian (1947), "A structure theory for Jordan algebras", Annals of Mathematics, Second Series, 48 (3): 546–567, doi:10.2307/1969128, ISSN 0003-486X, JSTOR 1969128, MR 0021546
- John C. Baez, The Octonions, Section 3: Projective Octonionic Geometry, Bull. Amer. Math. Soc. 39 (2002), 145-205. Online HTML version.
- Faraut, J.; Koranyi, A. (1994), Analysis on symmetric cones, Oxford Mathematical Monographs, Oxford University Press, ISBN 0198534779
- Hanche-Olsen, H.; Størmer, E. (1984), Jordan operator algebras, Monographs and Studies in Mathematics, 21, Pitman, ISBN 0273086197
- Jacobson, Nathan (1968), Structure and representations of Jordan algebras, American Mathematical Society Colloquium Publications, Vol. XXXIX, Providence, R.I.: American Mathematical Society, MR 0251099
- Jordan, Pascual (1933), "Ueber Verallgemeinerungsmöglichkeiten des Formalismus der Quantenmechanik'", Nachr. Akad. Wiss. Göttingen. Math. Phys. Kl. I, 41: 209–217
- Jordan, P.; von Neumann, J.; Wigner, E. (1934), "On an Algebraic Generalization of the Quantum Mechanical Formalism", Annals of Mathematics, Princeton, 35 (1): 29–64, doi:10.2307/1968117, JSTOR 1968117
- Kac, Victor G (1977), "Classification of simple Z-graded Lie superalgebras and simple Jordan superalgebras", Communications in Algebra, 5 (13): 1375–1400, doi:10.1080/00927877708822224, ISSN 0092-7872, MR 0498755
- McCrimmon, Kevin (1966), "A general theory of Jordan rings", Proc. Natl. Acad. Sci. U.S.A., 56: 1072–1079, doi:10.1073/pnas.56.4.1072, JSTOR 57792, MR 0202783, PMC , Zbl 0139.25502
- McCrimmon, Kevin (2004), A taste of Jordan algebras, Universitext, Berlin, New York: Springer-Verlag, doi:10.1007/b97489, ISBN 978-0-387-95447-9, MR 2014924, Zbl 1044.17001, Errata
- Ichiro Satake, Algebraic Structures of Symmetric Domains, Princeton University Press, 1980, ISBN 978-0-691-08271-4. Review
- Schafer, Richard D. (1996), An introduction to nonassociative algebras, Courier Dover Publications, ISBN 978-0-486-68813-8, Zbl 0145.25601
- Zhevlakov, K.A.; Slin'ko, A.M.; Shestakov, I.P.; Shirshov, A.I. (1982) . Rings that are nearly associative. Academic Press. ISBN 0-12-779850-1. MR 0518614. Zbl 0487.17001.
- Slin'ko, A.M. (2001) , "J/j054270", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
- Springer, Tonny A. (1998) , Jordan algebras and algebraic groups, Classics in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-63632-8, MR 1490836, Zbl 1024.17018
- Springer, Tonny A.; Veldkamp, Ferdinand D. (2000) , Octonions, Jordan algebras and exceptional groups, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-66337-9, MR 1763974
- Upmeier, H. (1985), Symmetric Banach manifolds and Jordan C∗-algebras, North-Holland Mathematics Studies, 104, ISBN 0444876510
- Upmeier, H. (1987), Jordan algebras in analysis, operator theory, and quantum mechanics, CBMS Regional Conference Series in Mathematics, 67, American Mathematical Society, ISBN 082180717X
- Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998), The book of involutions, Colloquium Publications, 44, With a preface by J. Tits, Providence, RI: American Mathematical Society, ISBN 0-8218-0904-0, Zbl 0955.16001 | <urn:uuid:7c953999-73b4-48eb-96cd-f015634e7aae> | 4 | 3,564 | Knowledge Article | Science & Tech. | 50.327386 | 95,537,172 |
The picture combines infrared, visible and X-ray light from NASA's Spitzer Space Telescope, ESO's New Technology Telescope (NTT) and the European Space Agency's XMM-Newton orbiting X-ray telescope, respectively. The NTT visible-light images allowed astronomers to uncover glowing gas in the region and the multi-wavelength image reveals new insights that appear only thanks to this unusual combination of information.
This new portrait of the bright star-forming region NGC 346, in which different wavelengths of light swirl together like watercolours, reveals new information about how stars form. NGC 346 is located 210 000 light-years away in the Small Magellanic Cloud, a neighbouring dwarf galaxy of the Milky Way. The image is based on data from ESA XMM-Newton (X-rays; blue), ESO\'s New Technology Telescope (visible light; green), and NASA\'s Spitzer (infrared; red). The infrared light shows cold dust, while the visible light denotes glowing gas, and the X-rays represent very hot gas. Ordinary stars appear as blue spots with white centres, while young stars enshrouded in dust appear as red spots with white centres. Credit: ESO/ESA/JPL-Caltech/NASA/D. Gouliermis (MPIA) et al.
NGC 346 is the brightest star-forming region in the Small Magellanic Cloud, an irregular dwarf galaxy that orbits the Milky Way at a distance of 210 000 light-years.
"NGC 346 is a real astronomical zoo," says Dimitrios Gouliermis of the Max Planck Institute for Astronomy in Heidelberg, Germany, and lead author of the paper describing the observations. "When we combined data at various wavelengths, we were able to tease apart what's going on in different parts of this intriguing region."
Small stars are scattered throughout the NGC 346 region, while massive stars populate its centre. These massive stars and most of the small ones formed at the same time out of one dense cloud, while other less massive stars were created later through a process called "triggered star formation". Intense radiation from the massive stars ate away at the surrounding dusty cloud, triggering gas to expand and create shock waves that compressed nearby cold dust and gas into new stars. The red-orange filaments surrounding the centre of the image show where this process has occurred.
But another set of younger low-mass stars in the region, seen as a pinkish blob at the top of the image, couldn't be explained by this mechanism. "We were particularly interested to know what caused this seemingly isolated group of stars to form," says Gouliermis.
By combining multi-wavelength data of NGC 346, Gouliermis and his team were able to pinpoint the trigger as a very massive star that blasted apart in a supernova explosion about 50 000 years ago. Fierce winds from the massive dying star, and not radiation, pushed gas and dust together, compressing it into new stars, bringing the isolated young stars into existence. While the remains of this massive star cannot be seen in the image, a bubble created when it exploded can be seen near the large, white spot with a blue halo at the upper left (this white spot is actually a collection of three stars).
The finding demonstrates that both wind- and radiation-induced triggered star formation are at play in the same cloud. According to Gouliermis, "the result shows us that star formation is a far more complicated process than we used to think, comprising different competitive or collaborative mechanisms."
The analysis was only possible thanks to the combination of information obtained through very different techniques and equipments. It reveals the power of such collaborations and the synergy between ground- and space-based observatories.
Henri Boffin | alfa
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
23.07.2018 | Science Education
23.07.2018 | Health and Medicine
23.07.2018 | Life Sciences | <urn:uuid:db18bf2d-566f-4032-9e54-a03324a7e774> | 3.953125 | 1,359 | Content Listing | Science & Tech. | 43.975897 | 95,537,206 |
C++ is one of the well known of elementary languages. It had been recognized at Bell labs now also called AT&T being an adjustment to C language consisting of courses and item-oriented programming. C+ is predicated to the principle of object oriented Programming. Every code that customers write presents While using the objects and utilizing notion of C++ on Those people objects.
Not all information include basic text. Some information may comprise binary details – as an example, if I were being to save a CD databases to disk, the info stored in Each and every CD struct might have a binary representation. This movie explains the basics.
Specified an enum, it is a snap enough to emit the values from the enumerators or of variables from the enumeration:
While it's common that the volume of bits within a byte is eight, this isn't so For each and every procedure. That is proper, a byte will not be constantly 8 bits. A byte is one of those phrases that has an interesting historical past and ends up that means different things to diverse people. For illustration, there are some pcs where it is six, seven, 8, 9, 32-bits, and so on. In C (or C++) it is possible to tell what it really is for the process by taking a look at limits.h (known as climits in C++) where by the macro CHAR_BIT is defined. It signifies the "amount of bits to the smallest object that is not a tiny bit-area", To paraphrase, a byte. Observe that it needs to be no less than 8 (which signify that strictly Talking, a CPU that supports a six bit byte has a dilemma with C or C++). Also Notice that sizeof(char) is outlined as one by C++ and C (ditto for your sizeof unsigned char, signed char, as well as their const and risky permutations).
On the other hand, it's possible you'll decide on in favour of C++ programming help. C++ homework troubles could connect with on hardships and eat a lot of your time. Quite the opposite, a well timed c++ programming assignment help could help you preserve scores of your time and help you do other stuff you enjoy carrying out. With C++ programming help, you'll be able to comprehensive your homework and assignments nicely in the time.
When you've got an older compiler, The seller could have solved the situation and an up grade may very well be if you want. In some cases an inner compiler Restrict (ugh!) you may well be overflowing, As well as in those scenarios it could help to split much more challenging expressions into numerous statements. Or, you might need to separate an extremely substantial source file into a number of files. Regardless of the, if you are able to find out the line of code that is developing the error, then attempt to produce a smaller Edition with the resource, as Which may reveal some information and facts for you about the character of the trouble. It under no circumstances hurts to own multiple compiler so as to attempt the code with An additional vendor, or below One more working method, and see if that is helpful in revealing what the condition can be. Yet again even though, this problem is Commonly rooted in the compiler bug, so the above is especially being described as something to receive you from a bottleneck situation exactly where you cannot compile just about anything, and so that you've a little something to report back to your vendor. Back again to Top Back again to Comeau Household
Notice that here we do not always know the color, that's we can use a variable of style colours and it still performs. Be aware that colorsstrings was improved to place to const's, While an array of std::strings might have been used as well (which means this instance cannot be Employed in C, only C++):
The C++ analysis will discover the aptitude to implement this multi-paradigm language and machine code. C++ moves us clear of this paradigm into the article-Oriented Programming (OOP) world. This permits the item to maintain a constant interior condition and only allow other issues within the system to have just as much comprehension about each other to carry out their perform, and never ever at any time visit here ample to break the potential of A different factors.
as foo won't must be presented. This suggests inline functions usually are defined in header data files. Previously I mentioned that inlined functions really should be smaller, for some definition of tiny. Which was a cop out response. The situation is, there isn't any concrete answer, since it relies upon upon many things that may be further than your Handle. Does that signify you shouldn't care? In lots of instances yes. Also, as compilers get smarter, several situations involving inline'ing can be settled quickly as they have got in lots of situations involving the register key phrase. That said, the technological innovation just isn't there still, and it's Uncertain it will ever be fantastic. Some compilers even assistance special force-it inlining search phrases and pragma's for this as well as other reasons. So, the query still begs alone: How to come to a decision no matter if to produce a thing inline or not? I will response with a few issues that must be resolved upon and/or calculated, which can be System dependent, and so forth.: Have you profiled and analyzed your method to understand wherever its bottlenecks are? Have you ever regarded the context of use in the operate? IOWs, whether it is to get a library to be used by Other individuals, have you regarded as the implications of that on buyers? May be the operate in consideration even known as more than enough moments to treatment? May be the purpose in thing to consider identified as as one of several statements inside a loop?
Make sure you email email@example.com if you think This is often an error. Be sure to include your IP handle as part of your e-mail.
There are times when you need to get distinctive actions In keeping with some test condition. In this article I make clear how you can use if..else checks.
Any program or process could be described by some mathematical equations. Their character could possibly be arbitrary. Does protection assistance of a… Study more…
initialize the contents of memory after you’re using a debug Make configuration. This can not materialize when utilizing a launch Develop configuration.
The issue is that this code declares principal to return a void and that is just no superior for just a strictly conforming application. Neither Is that this: // B: implicit int not authorized in C++ or C99 | <urn:uuid:83e1a370-24aa-4610-8def-aaca3ef2b1c5> | 3.578125 | 1,363 | Spam / Ads | Software Dev. | 51.258959 | 95,537,212 |
How do wasters relative densities as a solid and a liquid differ from that of other substances?
The term is correctly specific gravity (also relative density ) which compares the density (typically in g/cm 3 ) to that of water (which is practically 1 g/cm 3 ).
How do water's relative densities in solid and liquid forms differ from those of most other substances?
Most substances become more dense when they solidify, because thesame amount of particles are taking up a smaller amount of space,but ice is actually less dense than water, an…d so it floats onwater. --- Water's solid form is less dense than its liquid form, while theopposite is true of most other substances. --- Substances that change phase from liquid to solid when cooled havelower molecular pressure, and are typically denser as solids thanas liquids. Water also loses molecular energy as it turns into ice,but the presence of hydrogen bonds forms a solid lattice that takes up MORE space than the water molecules did --- so ice isless dense.
liquid has no definite shape but an irregular solid has a definiteshape. ------------------------------------------------------------ Just by using Archimedes Principle
the density of the substance decreases
Water has higher density than ice. That is why ice floats on water.
solid- atoms more tightly packed specific shape and volume liquid- atoms slightly less tightly packed, no specific shape, specific volume
The solid cannot float in this liquid.
Solid water, ice, is less dense than liquid water and floats on top. The solid state of other substances is more dense than the liquid state and will sink in the liquid.
How do waters relatives densities as a solid and a liquid differ from that of most other substances?
The solid state of water is less dense than its liquid state, which is why ice floats on water. The solid state of nearly all other substances is more dense than the liquid st…ate and sinks in the liquid state.
Because the particles are closer together in a solid then more fit in a set space, making it denser, it's the opposite for gases
In most substances, as you solidify the product by cooling, its density rises (that is - it gets heavier). Water has a density inversion point, so ice is actually lighter than… water - this is due to the way the molecules rearrange within the ice relative to the way they are packed together in the water.. Hydrogen bonding in water molecules is quite strong (in the liquid phase) so the water is sort of "compacted" by this force - pulled together more tightly..
If you compare the densities of a solid, liquid, and gaseous statesof a substance you will find that the solid is the most dense, theliquid is a medium density, and the gas is… the least dense.
The space between the molecules is different in each case.
How does water's relative densities as a solid and a liquid differ from that of most other substances?
Water has a lower density as a solid than it does as a liquid. Inthe vast majority of substances are denser as solids than asliquids. | <urn:uuid:e7e9321b-d0fd-48ba-bec4-3052a9d13c57> | 3.515625 | 649 | Q&A Forum | Science & Tech. | 43.836233 | 95,537,217 |
These excited neutrons then collide with nitrogen atoms in the atmosphere, changing them into radioactive carbon-14 atoms.
CARBON-14 IS ABSORBED (Figure 1b): Plants absorb this carbon-14 during photosynthesis.
Uncle currently working on carbon dating problem examples a carbon dating problem solving feature that could save you money if the interest rate is low because.
Wang farris have thought that with online friends in on what going on network problems accuracy with of singles in san antonio, and someone great time with your love messages.
The amount of carbon-14 in the atomosphere is, on an average, relatively constant.
Plants take in carbon-14 through the process of photosynthesis.
Carbon-14 dating relies on the following assumptions: It is known that the radiocarbon content of the atmosphere has varied in the past, so the initial activity of carbon-14 has NOT been a constant.
In contrast, radiocarbon forms continually today in the earth’s upper atmosphere.Many people assume that rocks are dated at “millions of years” based on radiocarbon (carbon-14) dating. The most well-known of all the radiometric dating methods is radiocarbon dating. Carbon-14 can yield dates of only “thousands of years” before it all breaks down.Although many people think radiocarbon dating is used to date rocks, it is limited to dating things that contain the element carbon and were once alive (like fossils).Rb)—are not being formed on earth, as far as we know.
Some materials that do not contain carbon, like clay pots, can be dated if they were fired in an oven (burnt) and contain carbon as a result of this. | <urn:uuid:bb01ed40-6ad8-4347-8b49-0d9245218382> | 3.203125 | 365 | Spam / Ads | Science & Tech. | 46.840735 | 95,537,220 |
Injuries to Catalytic Units
In previous chapters we have stated that atoms combine to make molecules, molecules interact to make larger molecules (macromolecules), and macromolecules with different properties complex to make organelles, such as chromosomes and mitochondria. We made it appear as simple as building a toy house with wood blocks. Obviously, the interaction of atoms and molecules is much more complicated; so complicated that a description of the details is far beyond the scope of the book. Yet I shall try to give the reader a taste of and possibly an appetite for learning about these complex interactions.
KeywordsUric Acid Pyruvic Acid Glycogen Storage Disease Phenylacetic Acid Zymogen Granule
Unable to display preview. Download preview PDF. | <urn:uuid:0f8fc92d-5248-401f-9796-00f03bbcdf06> | 3.34375 | 158 | Truncated | Science & Tech. | 16.396346 | 95,537,227 |
- Meeting report
- Open Access
Genomic, chromosomal and allelic assessment of the amazing diversity of maize
© BioMed Central Ltd 2004
Published: 28 May 2004
A report on the 46th Annual Maize Genetics Conference, Mexico City, Mexico, 11-14 March 2004.
Teosinte thrived in the highlands and valleys of central Mexico 8,000 years ago. Human selection for increased seed number, cob size, poor seed dispersal, and nutritional value domesticated this wild plant into what we recognize today as maize. The 2004 Maize Genetics Conference was the first to be held near the site of the origin of maize and the present-day center of species diversity, and questions about the origin, types and consequences of maize diversity were central to the 42 talks and nearly 200 poster presentations. A starlight tour of the Museo Nacional de Antropología http://www.mna.inah.gob.mx/ allowed delegates to examine the depiction of corn by successive pre-colonial Mexican civilizations for further inspiration.
Modern maize captured the genetic diversity of teosinte
Ed Buckler (USDA-ARS at Cornell University, Ithaca, USA) has analyzed maize diversity by sequencing 18 genes, in toto or in part, from more than 100 inbred lines. As a benchmark consider that humans have about 0.09% base substitution in pair-wise comparisons and that as a species we are 1.34% different from chimpanzees. Evaluating pairs of modern inbred lines of maize, previous work has shown that there is 1.42% silent diversity in coding regions! In a typical gene there are between 20 and 25 amino-acid polymorphisms among alleles: 30% are radical changes and a further 22% are 'indel' mutations of missing or added amino acids. This tremendous diversity in maize reflects the maintenance of genetic differences from teosinte: domestication did not involve a bottleneck with a handful of representative alleles; rather, present-day corn has alleles that have been filtered by selection over millions of years. Buckler estimated that a single family gathering teosinte seed to supply 10% of their calories would have required 300,000 plants. The several million people of ancient Mexico at the onset of maize domestication probably used seed from teosinte populations of several billions of plants at all stages of domestication. In contrast, only a few tomato or pepper plants suffice in a kitchen, and the domesticated types exhibit correspondingly low genetic diversity.
Using diverse alleles, association genetics can pinpoint which polymorphisms confer specific phenotypes. To avoid false assignments between genotype and phenotype, a robust knowledge of population structure in maize lines allows line history to be separated from independent genetic changes that confer plant properties. Buckler's group and others have further established that linkage disequilibrium (LD), a measure of the recombinational history of chromosomal regions, decays within 1 kilobase (kb) for landraces (traditional varieties grown by subsistence farmers), within 2 kb for modern maize inbred lines used by geneticists, and in roughly 2-20 kb in the elite commercial inbred lines developed in the past decades for the hybrid corn seed industry. For loci with a major impact on productivity and plant architecture, ancient and modern plant breeders have applied stringent selection, and in these cases LD expands to cover a larger region and the drop in allele diversity can be used to link quantitative trait loci (QTLs) to genie regions likely to be important in domestication and yield. For example, four of six genes in the starch biosynthesis pathway show a significant decrease in allele diversity compared to only 5% of randomly selected loci. Recently published work from Buckler and collaborators describes an analysis of ancient maize specimens and showed that particular alleles of Teosinte branched1, which encodes a modulator of stem and floral architecture, and Pbf, encoding a regulator of seed storage protein, were fixed about 4,000 years ago in domesticated maize, whereas favorable alleles of Sugary1, key to producing sweet corn, were not selected in the corn grown in the southwestern USA until approximately 1,000 years ago.
And what has been the fate of teosinte? Jerry Kermicle (University of Wisconsin, Madison, USA) illustrated that it grows robustly in uncultivated areas, and as a weed in Mexican cornfields, often mimicking the morphology of modern maize so closely that farmers cannot recognize it. How does teosinte persist if it is interfertile with domesticated maize? Kermicle explained that haploid maize pollen performs poorly on teosinte silks, where many centimeters separate pollen attachment and the individual ovules on the ear. Teosinte carries dominant alleles of the Gametophyte factor1 (Ga1) locus that confer preferential growth on a Ga1 silk; in contrast, modern corn is ga1 and this pollen is only 1% as successful on teosinte Ga1 silks. This 'trick' is employed commercially to permit selective pollination within small blocks of sweet corn or popcorn despite the billions of wind-borne pollen grains from nearby standard corn. Ga1 alone cannot explain the crossing barrier between teosinte and corn, however, because Mexican landraces of corn carry the Ga1-male acting allele that is compatible with Ga1 teosinte silks. Kermicle reported a second gene, Teosinte crossing barrier1 (Tcb1), that reduces inter-crossing many-fold by restricting pollen with the recessive tcb1 allele from growing on Tcb1 teosinte silks. Interestingly, the dominant Tcb1 allele is found primarily in the weedy teosinte in corn fields, where it effectively blocks pollen flow from maize and may thus contribute to an incipient speciation process.
Chromosome organization: surprises in the 'junk' DMA
Allele dominance mediated by RNA interference
This talk and many others illustrated that the diversity of maize can be exploited by both molecular and population geneticists to answer fundamental questions about genetic interactions at the allele or karyotypic level within a plant and over short and long evolutionary time scales. The next harvest of maize results will be the 47th Annual Meeting to be held 10-13 March 2005 in Wisconsin, USA http://www.maizegdb.org/. | <urn:uuid:ada117db-720f-46a9-8dca-dd21979240d7> | 2.84375 | 1,324 | Academic Writing | Science & Tech. | 23.649054 | 95,537,243 |
12 July 2018
Delving into the stars
Published online 30 November 2016
A new technique allows scientists to measure the shape and structure of moving stars with unprecedented precision.
An international team of scientists, including from New York University Abu Dhabi (NYUAD), managed to directly observe structural components of one slowly rotating star, thanks to asteroseismology1.
This new technique, 10,000 times more precise than its predecessor, reveals a star’s flatter, rounder contours and different rotational speeds. It allows scientists to ‘see’ the nature of the stellar interior with very high precision, according to the scientists.
Traditional techniques can only be used to image some of the largest close-by stars.
Stars are not perfectly spherical. All stars rotate and are therefore flattened by the centrifugal force. The faster the rotation, the more oblate the star becomes. The shape of stars can also be distorted by magnetic fields.
“Stellar magnetic fields, especially weak magnetic fields, are notoriously difficult to directly observe on distant stars,” says lead author Laurent Gizon, researcher at the Max Planck Institute for Solar System Research, Germany.
Gizon’s research focused on the evolution of stars, which is typically controlled by the nuclear reactions at their core.
“Measuring the shape of stars can inform us about their rotation and their magnetic field, two fundamental properties of stars,” he says. Stellar magnetic fields are responsible for active phenomena such as sunspots and flares.
Gizon and his colleagues studied a hybrid pulsating star — hot, luminous, more than twice the size of the Sun and rotates three times slower — called Kepler 11145123. They found that the star, whose oscillations were observed by NASA’s Kepler mission for four years, is less oblate than implied by its rotation rate — meaning that its structural distortion is caused by more than rotation alone.
“This is an indication of the presence of a magnetic field,” says Gizon. “We propose that the presence of a magnetic field at low latitudes could make the star look more spherical to the stellar oscillations.”
Through this new technique, the researchers managed to separate frequencies of the sound waves oscillating from the star’s interior, discovering that the star rotated faster at the surface than at the core.
Of the significance of the discovery, Othman Benomar, researcher at the Center of Space Science at New York University Abu Dhabi, and co-author of the study, says, “stars are elementary components of our universe and it is important to precisely understand the mechanisms of their birth, evolution and death if one wants to understand the evolution of larger structures in the universe such as stellar clusters, and galaxies.”
According to Benomar, until this research, only little has been known about the physical conditions, such as pressure, temperature, nuclear reaction rate, magnetic field or rotation of the interior of stars.
“This is due to two reasons. Firstly, stars are very distant, dense and opaque objects, for which in situ measurements are not possible. Secondly, conditions in the deep interior are so extreme that they cannot be reproduced in laboratories,” he says.
In the future, the scientists plan to map out deformities of more rapidly spinning stars. “It will be particularly interesting to see how faster rotations and a stronger magnetic field can change a star’s shape. An important theoretical field in astrophysics has now become observational,” says Gizon.
Joseph Gelfand, assistant professor of physics at NYUAD who was not involved in the study, describes the research and findings as “exciting” with some caveats.
Indeed, it’s the first time that asteroseismology has been used to measure the magnetic field strength of a star, says Gelfand. But while magnetic fields might be the cause of the discrepancy between oscillations implied by stellar rotation rates and measured, “there are other possibilities, and this technique isn't really able to give the strength of the magnetic field.”
“That being said, the origin and strength of stellar magnetic fields is an important question relevant to a lot of fields of astrophysics and plasma physics — where magnetic fields are often acknowledged to be very important but too complicated to be studied.”
- Gizon, L. et al. Shape of a slowly rotating star measured by asteroseismology. Sci. Adv. 2, e1601777 (2016). | <urn:uuid:8aec73d0-7c0d-4b68-a71c-b5d966f12354> | 3.84375 | 941 | Truncated | Science & Tech. | 35.757917 | 95,537,252 |
Boldly going where larger, human-piloted planes cannot, they promise to close a key gap in knowledge for climate modelers
Scientists studying the behavior of the world's ice sheets--and the future implications of ice sheet behavior for global sea-level rise--may soon have a new airborne tool that will allow radar measurements that previously would have been prohibitively expensive or difficult to carry out with manned aircraft.
In a paper published in the March/ April edition of IEEE Geoscience and Remote Sensing Magazine, researchers at the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas noted that they have successfully tested the use of a compact radar system integrated on a small, lightweight Unmanned Aircraft System (UAS) to look through the ice and map the topography underlying rapidly moving glaciers.
"We're excited by the performance we saw from our radar and UAS during the field campaign. The results of this effort are significant, in that the miniaturized radar integrated into a UAS promises to make this technology more broadly accessible to the research community," said Rick Hale, associate professor of aerospace engineering and associate director of technology for CReSIS.
With support from the National Science Foundation's Division of Polar Programs and the State of Kansas, the CReSIS team recently successfully tested the UAS at a field camp in West Antarctica.
The measurements were the first-ever successful sounding of glacial ice with a UAS-based radar. If further tests in the Arctic prove as successful, the UAS could eventually be routinely deployed to measure rapidly changing areas of the Greenland and Antarctic ice sheets.
The use of unmanned aircraft in Antarctica, which is becoming a subject of wide international interest, is scheduled to be discussed in May at the upcoming Antarctic Treaty Consultative Meeting in Brazil.
The small but agile UAS has a takeoff weight of about 38.5 kilograms (85 pounds) and a range of approximately 100 kilometers (62 miles). The compact radar system weighs only two kilograms, and the antenna is structurally integrated into the wing of the aircraft.
The radar-equipped UAS appears to be an ideal tool for reaching areas that otherwise would be exceptionally difficult to map. The light weight and small size of the vehicle and sensor enable them to be readily transported to remote field locations, and the airborne maneuverability enables the tight flight patterns required for fine scale imaging. The UAS can be used to collect data over flight tracks about five meters apart to allow for more thorough coverage of a given area.
According to Shawn Keshmiri, an assistant professor of aerospace engineering, "a small UAS also uses several orders of magnitude less fuel per hour than the traditional manned aircraft used today for ice sounding."
This advantage is of great benefit, the researchers point out, "in remote locations, such as Antarctica, [where] the cost associated with transporting and caching fuel is very high."
The vast polar ice sheets hold an enormous amount of the Earth's freshwater--so much so that in the unlikely event of a sudden melt, global sea level would rise on the order of 66 meters (216 feet).
Even a fraction of the melt, and the associated sea-level, rise would cause severe problems to people living in more temperate areas of the globe, so scientists and engineers are seeking quicker, less expensive ways to measure and eventually predict exactly what it is that the ice sheets are doing and how their behavior may change in the future.
Until now, the lack of fine-resolution information about the topography underlying fast-flowing glaciers, which contain huge amounts of freshwater and which govern the flow of the interior ice, makes it difficult to model their behavior accurately.
"There is therefore an urgent need to measure the ice thickness of fast-flowing glaciers with fine resolution to determine bed topography and basal conditions," the researchers write. "This information will, in turn, be used to improve ice-sheet models and generate accurate estimates of sea level rise in a warming climate. Without proper representation of these fast-flowing glaciers, advancements in ice-sheet modeling will remain elusive."
With the successful test completed in the Antarctic, the researchers will begin analyzing the data collected during this field season, miniaturizing the radar further and reducing its weight to 1.5 kilograms (3.3 pounds) or less, and increasing the UAS radar transmitting power.
In the coming months, they will also perform additional test flights in Kansas to further evaluate the avionics and flight-control systems, as well as optimize the radar and transmitting systems.
In 2014 or 2015, they plan to deploy the UAS to Greenland to collect data over areas with extremely rough surfaces and fast-flowing glaciers, such as Jakobshavn, which is among the fastest flowing glaciers in the world.
For b-roll of the UAS test flights in Antarctica, please contact Dena Headlee, email@example.com / (703) 292-7739
Julie M. Palais, NSF, (703) 292-8033, firstname.lastname@example.org
Prasad Gogineni, University of Kansas, (785) 864.8800, email@example.com
Rick Hale, University of Kansas, (785) 864-2949, firstname.lastname@example.org
Center for Remote Sensing of Ice Sheets: https://www.cresis.ku.edu/
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2014, its budget is $7.2 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.
Peter West | EurekAlert!
New research calculates capacity of North American forests to sequester carbon
16.07.2018 | University of California - Santa Cruz
Scientists discover Earth's youngest banded iron formation in western China
12.07.2018 | University of Alberta
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
16.07.2018 | Physics and Astronomy
16.07.2018 | Life Sciences
16.07.2018 | Earth Sciences | <urn:uuid:8e9151a4-b4ba-4913-b829-27489e77fd71> | 3.640625 | 1,873 | Content Listing | Science & Tech. | 40.510023 | 95,537,260 |
H. sapiens Exonuclease
of viral RNA
Blake Calcei '16, Brandon January '15 and Alexander McQuiston
Flaviviruses are single stranded RNA human pathogens that are
responsible The Escherichia coli catabolite gene
activator protein (CAP) is a DNA binding protein involved with the
transcription of several genes, including those that code for enzymes
involved in the metabolism of certain sugars (i.e. lactose,
maltose, and arabinose.) Basically, CAP is responsible for the global
regulation of carbon utilization. Upon binding cAMP (adenosine 3', 5'
monophosphate, or cyclic AMP), CAP binds to a conserved DNA sequence
from which it can either activate or repress transcription initiation
from various promoters. In some cases clusters of several promoters
are all controlled by a single cAMP-CAP complex bound to the DNA.
Once CAP has bound cAMP, the protein exhibits a higher
affinity for a specific conserved DNA sequence. When the
intracellular level of cAMP increases, the second messenger is bound
by CAP and the cAMP-CAP complex binds to the DNA. Once bound, it is
able to stimulate the transcription of the aforementioned genes. DNA
bound by the CAP-cAMP complex is bent by ~90 degrees. This DNA bend,
coupled with a protein-protein interaction between CAP and RNA
polymerase is thought to be the mechanism by which CAP regluates
transcription initiation on the chromosome.
II. General Structure
CAP is a dimer of 22, 500 molecular weight, composed of two
chemically identical polypeptide chains each 209 amino acids in
The overall structure of the dimer is assymetric; one subunit adopts a
"closed" conformation in which the
amino- and carboxy-termini are closer together than in the more "open" subunit. Each subunit is composed
of two distinct domains connected by a hinge
The N-terminal domain is responsible for
dimerization and cAMP
binding. The carboxy-terminal
domain contains a helix-turn helix DNA
and is also responsible for DNA bending.
III. cAMP Binding
An important recognition site for cAMP within CAP is the
ionic bond formed between the side chain of Arg-82
and the negatively charged phosphate group
of cAMP. In the crystal structure, the two cAMP molecules are buried
deep within the beta roll and the C-helix.
It is unclear how cAMP enters or leaves the binding site, but this
probably requires the separation of the two subunits of the dimer,
or the movement of the beta roll and the C helix away from each
other. Other side-chain interactions between the protein and cAMP
are hydrogen bonds occuring at Thr-127,
Ser-128, Ser-83, and Glu-72.
Additional hydrogen bonding between is seen between cAMP and the
polypeptide backbone at residues 83
IV. DNA Binding
Once CAP has bound cAMP, it is ready to bind to the DNA.
Binding occurs at the conserved sequence of
Hydrogen bonds between the protein and the DNA phsophates occur at the
backbone amide of residue
139, and the side chains of Thr-140,
Ser-179, and Thr-182
In addition to these phosphate interactions, the side chains of Glu-181 and Arg-185,
both emanating from the recognition
directly contact the bases within the major groove of the DNA. Because
of the way that the protein binds to the DNA, a kink of ~40
degrees occurs between nucleotide base pairs six
and seven on each side of the dyad
This sequence has been shown to favor DNA flexibility and bending in
other systems as well. Because of this kink, an additional five
ionic interactions and four hydrogen bonds are able to occur
between the protein and the DNA strand. Examples of these new
interactions occur between Lys-26, Lys-166,
His-199 and the DNA sugar-phosphate backbone
The DNA bend is integral to the mechanism of transcription activation.
Not only does it place CAP in the proper orientation for
interaction with RNA polymerase, but wrapping the DNA around the
protein may result in direct contacts between upstream DNA and RNA
V. Activating Regions
Transcription activation by CAP requires more than merely
the binding of cAMP and binding and bending of DNA. CAP contains
an "activating region" that has been proposed to participate in
direct protein-protein interactions with RNA polymerase and/or
other basal transcription factors. Specifically, amino acids 156, 158, 159, and 162
have been proposed to be critical for transcription activation by CAP.
These amino acids are part of a surface loop composed of residues
Researchers have concluded that the third and final step in
transcription activation is this direct protein-protein contact
between amino acids 156-162 of CAP, and RNA polymerase.
Gunasekera, Angelo, Yon W. Ebright, and
Richard H. Ebright. 1992. DNA Sequence Determinants for Binding of
the Escherichia coli Catabolite Gene Activator Protein. The
Journal of Biological Chemistry 267:14713-14720.
Schultz, Steve C., George C. Shields, and
Thomas A. Steitz. 1991. Crystal Structure of a CAP-DNA complex:
The DNA Is Bent by 90 degrees Science 253: 1001-1007.
Vaney, Marie Christine, Gary L. Gilliland,
James G. Harman, Alan Peterkofsky, and Irene T. Weber. 1989.
Crystal Structure of a cAMP-Independent Form of Catabolite Gene
Activator Protein with Adenosine Substituted in One of Two
cAMP-Binding Sites. Biochemistry 28:4568-4574.
Weber, Irene T., Gary L. Gilliland, James
G. Harman, and Alan Peterkofsky. 1987. Crystal Structure of a
Cyclic AMP-independent Mutant of Catabolite Activator Protein. The
Journal of Biological Chemistry 262:5630-5636.
Zhou, Yuhong, Ziaoping Zhang, and Richard
H. Ebright. 1993. Identification of the activating region of
catabolite gene activator protein (CAP): Isolation and
characterization of mutants of CAP specifically defective in
transcription activation. Proceedings of the National
Academy of Sciences of the United States of America
Back to Top | <urn:uuid:630e40ac-35cf-44ed-a238-3b8cdf0efb8c> | 2.515625 | 1,410 | Academic Writing | Science & Tech. | 45.163879 | 95,537,266 |
Real-time imaging of density ducts between the plasmasphere and ionosphere
MetadataShow full item record
Ionization of the Earth's atmosphere by sunlight forms a complex, multilayered plasma environment within the Earth's magnetosphere, the innermost layers being the ionosphere and plasmasphere. The plasmasphere is believed to be embedded with cylindrical density structures (ducts) aligned along the Earth's magnetic field, but direct evidence for these remains scarce. Here we report the first direct wide-angle observation of an extensive array of field-aligned ducts bridging the upper ionosphere and inner plasmasphere, using a novel ground-based imaging technique. We establish their heights and motions by feature tracking and parallax analysis. The structures are strikingly organized, appearing as regularly spaced, alternating tubes of overdensities and underdensities strongly aligned with the Earth's magnetic field. These findings represent the first direct visual evidence for the existence of such structures.
Showing items related by title, author, creator and subject.
Quantification of an Archaean to Recent Earth Expansion Process Using Global Geological and Geophysical Data SetsMaxlow, James (2001)Global geological and geophysical data, while routinely used in conventional plate tectonic studies, has not been applied to models of an expanding Earth. Crustal reconstructions on Archaean to Recent models of an expanding ...
Earth2014: 1 arc-min shape, topography, bedrock and ice-sheetmodels – Available as gridded data and degree-10,800 sphericalharmonicsHirt, Christian; Rexer, Moritz (2015)Since the release of the ETOPO1 global Earth topography model through the US NOAA in 2009, new or significantly improved topographic data sets have become available over Antarctica, Greenland and parts of the oceans. Here, ...
Hirt, Christian; Kuhn, Michael (2012)Mass associated with surface topography makes a significant contribution to the Earth’s gravitational potential at all spectral scales. Accurate computation in spherical harmonics to high degree requires calculations of ... | <urn:uuid:edecebc5-f1df-4588-8221-a31754e29148> | 2.828125 | 433 | Academic Writing | Science & Tech. | 20.431011 | 95,537,287 |
In this section from a calendar, put a square box around the 1st, 2nd, 8th and 9th. Add all the pairs of numbers. What do you notice about the answers?
What happens when you add the digits of a number, multiply the result by 2, and keep doing this? You could try different numbers and different rules.
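If you would like a computer to do the repeated adding and doubling for you, here is a minimal Python sketch of the rule as described (the starting number 37 is just an arbitrary choice):

```python
def digit_sum_times_two(n):
    # Add the digits of n, then double the result.
    return 2 * sum(int(d) for d in str(n))

n = 37  # arbitrary starting number - try your own
for _ in range(12):
    print(n)
    n = digit_sum_times_two(n)
```

Whatever you start with, the numbers soon become small, so the sequence must eventually repeat - a good prompt for asking why.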
Investigate what happens when you add house numbers along a street in different ways.
Start with four numbers at the corners of a square and put the total of two corners in the middle of that side. Keep going... Can you estimate what the size of the last four numbers will be?
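One way to explore the growth is to simulate the process. This sketch assumes each step replaces the corner numbers with the four side totals, which then become the corners of the next square:

```python
def next_square(corners):
    a, b, c, d = corners
    # The middle of each side gets the total of the two corners it joins;
    # those four totals become the corners of the next square.
    return (a + b, b + c, c + d, d + a)

square = (1, 2, 3, 4)  # any four starting numbers
for _ in range(6):
    square = next_square(square)
    print(square)
```

Notice that the total of the four numbers doubles at every step, which is a useful clue for estimating.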
If the answer's 2010, what could the question be?
These sixteen children are standing in four lines of four, one behind the other. They are each holding a card with a number on it. Can you work out the missing numbers?
Find the next number in this pattern: 3, 7, 19, 55 ...
In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square?
48 is called an abundant number because it is less than the sum of its factors (without itself). Can you find some more abundant numbers?
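A short Python check, using exactly the definition just given (the sum of a number's factors, not counting the number itself), lists the abundant numbers up to 100:

```python
def proper_factor_sum(n):
    # Sum of the factors of n, not counting n itself.
    return sum(d for d in range(1, n) if n % d == 0)

abundant = [n for n in range(1, 101) if proper_factor_sum(n) > n]
print(abundant)  # 48 appears in this list, along with 12, 18, 20, ...
```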
Investigate this balance which is marked in halves. If you had a weight on the left-hand 7, where could you hang two weights on the right to make it balance?
Well now, what would happen if we lost all the nines in our number system? Have a go at writing the numbers out in this way and have a look at the multiplications table.
This challenge asks you to investigate the total number of cards that would be sent if four children send one to all three others. How many would be sent if there were five children? Six?
If the numbers 5, 7 and 4 go into this function machine, what numbers will come out?
On the planet Vuv there are two sorts of creatures. The Zios have 3 legs and the Zepts have 7 legs. The great planetary explorer Nico counted 52 legs. How many Zios and how many Zepts were there?
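One way to organise the search (a worked setup, not the only approach): with $x$ Zios and $y$ Zepts, the leg count requires
$$3x + 7y = 52,$$
and since raising $y$ by 3 lets $x$ fall by 7, the whole-number solutions are $(x, y) = (15, 1)$, $(8, 4)$ and $(1, 7)$.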
How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?
Three dice are placed in a row. Find a way to turn each one so that the three numbers on top of the dice total the same as the three numbers on the front of the dice. Can you find all the ways to do. . . .
Using 3 rods of integer lengths, none longer than 10 units and not using any rod more than once, you can measure all the lengths in whole units from 1 to 10 units. How many ways can you do this?
Winifred Wytsh bought a box each of jelly babies, milk jelly bears, yellow jelly bees and jelly belly beans. In how many different ways could she make a jolly jelly feast with 32 legs?
A lady has a steel rod and a wooden pole and she knows the length of each. How can she measure out an 8 unit piece of pole?
Where can you draw a line on a clock face so that the numbers on both sides have the same total?
You have 5 darts and your target score is 44. How many different ways could you score 44?
What is happening at each box in these machines?
Vera is shopping at a market with these coins in her purse. Which things could she give exactly the right amount for?
Cassandra, David and Lachlan are brothers and sisters. They range in age between 1 year and 14 years. Can you figure out their exact ages from the clues?
Can you score 100 by throwing rings on this board? Is there more than way to do it?
Annie cut this numbered cake into 3 pieces with 3 cuts so that the numbers on each piece added to the same total. Where were the cuts and what fraction of the whole cake was each piece?
Rocco ran in a 200 m race for his class. Use the information to find out how many runners there were in the race and what Rocco's finishing position was.
On the table there is a pile of oranges and lemons that weighs exactly one kilogram. Using the information, can you work out how many lemons there are?
Try adding together the dates of all the days in one week. Now multiply the first date by 7 and add 21. Can you explain what happens?
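A short algebraic explanation of why the two answers always agree: if the first date is $d$, the week's dates are $d, d+1, \ldots, d+6$, so
$$d + (d+1) + \cdots + (d+6) = 7d + 21,$$
which is exactly the first date multiplied by 7, plus 21.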
There are three buckets each of which holds a maximum of 5 litres. Use the clues to work out how much liquid there is in each bucket.
Fill in the numbers to make the sum of each row, column and diagonal equal to 34. For an extra challenge try the huge American Flag magic square.
Fill in the missing numbers so that adding each pair of corner numbers gives you the number between them (in the box).
This task, written for the National Young Mathematicians' Award 2016, involves open-topped boxes made with interlocking cubes. Explore the number of units of paint that are needed to cover the boxes. . . .
This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
The clockmaker's wife cut up his birthday cake to look like a clock face. Can you work out who received each piece?
There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the jugs.
Arrange three 1s, three 2s and three 3s in this square so that every row, column and diagonal adds to the same total.
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
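A worked setup (one possible reading of the puzzle): with $p$ people and $r$ rats the total leg count is
$$2p + 4r,$$
so, for example, 2 people and 10 rats together have $2 \cdot 2 + 4 \cdot 10 = 44$ legs, and any given total constrains the possible $(p, r)$ pairs.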
Can you put plus signs in so this is true? 1 2 3 4 5 6 7 8 9 = 99 How many ways can you do it?
Find out what a Deca Tree is and then work out how many leaves there will be after the woodcutter has cut off a trunk, a branch, a twig and a leaf.
Tell your friends that you have a strange calculator that turns numbers backwards. What secret number do you have to enter to make 141 414 turn around?
I was looking at the number plate of a car parked outside. Using my special code S208VBJ adds to 65. Can you crack my code and use it to find out what both of these number plates add up to?
This group activity will encourage you to share calculation strategies and to think about which strategy might be the most efficient.
Complete these two jigsaws then put one on top of the other. What happens when you add the 'touching' numbers? What happens when you change the position of the jigsaws?
How would you count the number of fingers in these pictures?
Can you design a new shape for the twenty-eight squares and arrange the numbers in a logical way? What patterns do you notice?
Use your logical reasoning to work out how many cows and how many sheep there are in each field.
Number problems at primary level that may require resilience.
Skippy and Anna are locked in a room in a large castle. The key to that room, and all the other rooms, is a number. The numbers are locked away in a problem. Can you help them to get out?
Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores. | <urn:uuid:85be28e1-c9d4-4fca-acdd-9dae80fbba36> | 4.25 | 1,613 | Content Listing | Science & Tech. | 78.285023 | 95,537,290 |
compound B, then answer questions a and b.
An empty glass container has a mass of 658.572 g. It has a mass of 659.452 g after it has been filled with nitrogen gas at a pressure of 790. torr and a temperature of 15°C. When the container is evacuated and refilled with a certain element (A) at a pressure of 745 torr and a temperature of 26°C, it has a mass of 660.59 g.

Compound B, a gaseous organic compound that consists of 85.6% carbon and 14.4% hydrogen by mass, is placed in a stainless steel vessel (10.68 L) with excess oxygen gas. The vessel is placed in a constant-temperature bath at 22°C. The pressure in the vessel is 11.98 atm. In the bottom of the vessel is a container that is packed with Ascarite and a desiccant. Ascarite is asbestos impregnated with sodium hydroxide; it quantitatively absorbs carbon dioxide:

2NaOH(s) + CO2(g) → Na2CO3(s) + H2O(l)

The desiccant is anhydrous magnesium perchlorate, which quantitatively absorbs the water produced by the combustion reaction as well as the water produced by the above reaction. Neither the Ascarite nor the desiccant reacts with compound B or oxygen. The total mass of the container with the Ascarite and desiccant is 765.3 g.

The combustion reaction of compound B is initiated by a spark. The pressure immediately rises, then begins to decrease, and finally reaches a steady value of 6.02 atm. The stainless steel vessel is carefully opened, and the mass of the container inside the vessel is found to be 846.7 g.

A and B react quantitatively in a 1:1 mole ratio to form one mole of gas C.

a. How many grams of C will be produced if 10.0 L of A and 8.60 L of B (each at STP) are reacted by opening a stopcock connecting the two samples?
b. What will be the total pressure in the system?

Sara -- I worked on this problem but didn't complete it; perhaps what I have, you have already done. The difference in mass between the empty container and the container filled with N2 gives you the mass of the N2 added. From that you can calculate the moles of N2, then use PV = nRT to calculate the volume of the container. The second filling, with the gaseous element A, gives you the mass of element A when the mass of the empty container is subtracted. Again, use PV = nRT to calculate n; then, from n and the mass in grams, calculate the molar mass (I found about 71). This element probably is Cl2, but there is no other information to prove that. However, we know there is no monatomic gas with an atomic mass near 71, and Cl2 is about the only diatomic gas close to 71. Compound B's %C and %H allow you to calculate the simplest empirical formula, which I found to be CH2. There is no hydrocarbon molecule that is just CH2; therefore, I assume this must be a dimer or trimer of CH2 (for example, ethene or propene), but I did not see the data to support that. This may be a "made up" problem. Anyway, perhaps this will get you started. I hope this helps a little.

The container that is filled with nitrogen gas is at a temperature of 14°C, not 15°C; you should try that and see if it works.
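A worked version of the outline above (a sketch assuming T = 15°C for the nitrogen filling, as in the problem statement, and R = 0.08206 L·atm/(mol·K)):
$$n_{\mathrm{N_2}} = \frac{659.452 - 658.572\ \mathrm{g}}{28.02\ \mathrm{g/mol}} \approx 0.0314\ \mathrm{mol}, \qquad V = \frac{nRT}{P} = \frac{0.0314 \times 0.08206 \times 288\ \mathrm{K}}{790/760\ \mathrm{atm}} \approx 0.714\ \mathrm{L}$$
$$n_A = \frac{PV}{RT} = \frac{(745/760)(0.714)}{0.08206 \times 299} \approx 0.0285\ \mathrm{mol}, \qquad M_A = \frac{660.59 - 658.572\ \mathrm{g}}{0.0285\ \mathrm{mol}} \approx 71\ \mathrm{g/mol},$$
consistent with identifying A as Cl2 (70.9 g/mol). For compound B, per 100 g: $85.6/12.01 \approx 7.13$ mol C and $14.4/1.008 \approx 14.3$ mol H, a 1:2 ratio, giving the empirical formula CH2.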
There may be better ways to engineer the planet's climate to prevent dangerous global warming than mimicking volcanoes, a University of Calgary climate scientist says in two new studies.
"Releasing engineered nano-sized disks, or sulphuric acid in a condensable vapour above the Earth, are two novel approaches. These approaches offer advantages over simply putting sulphur dioxide gas into the atmosphere," says David Keith, a director in the Institute for Sustainable Energy, Environment and Economy and a Schulich School of Engineering professor.
Mt. Pinatubo is an active volcano in the Philippines that is frequently studied by scientists
Keith, a global leader in investigating this topic, says that geoengineering, or engineering the climate on a global scale, is an imperfect science.
"It cannot offset the risks that come from increased carbon dioxide in the atmosphere. If we don't halt man-made CO2 emissions, no amount of climate engineering can eliminate the problems – massive emissions reductions are still necessary."
Nevertheless, Keith believes that research on geoengineering technologies, their effectiveness and their environmental impacts needs to be expanded.
"I think the stakes are simply too high at this point to think that ignorance is a good policy."
Keith suggests two novel geoengineering approaches–'levitating' engineered nano-particles, and the airborne release of sulphuric acid–in two newly published studies. One study was authored by Keith alone, and the other with scientists in Canada, the U.S. and Switzerland.
Scientists investigating geoengineering have so far looked mainly at injecting sulphur dioxide into the upper atmosphere. This approach imitates the way volcanoes create sulphuric acid aerosols, or sulphates, that will reflect solar radiation back into space – thereby cooling the planet's surface.
Keith says that sulphates are blunt instruments for climate engineering. It's very difficult to achieve the optimum distribution and size of the aerosols in the atmosphere to reflect the most solar radiation and get the maximum cooling benefit.
One advantage of using sulphates is that scientists have some understanding of their effects in the atmosphere because of emissions from volcanoes such as Mt. Pinatubo, he adds.
"A downside of both these new ideas is they would do something that nature has never seen before. It's easier to think of new ideas than to understand their effectiveness and environmental risks," says Keith.
In his study–published in the Proceedings of the National Academy of Sciences, a top-ranked international science journal–Keith describes a new class of engineered nano-particles that might be used to offset global warming more efficiently, and with fewer negative side effects, than using sulphates.
According to Keith, the distribution of engineered nano-particles above the Earth could be more controlled and less likely to harm the planet's protective ozone layer.
Sulphates also have unwanted side-effects, ranging from reducing the electricity output from certain solar power systems, to speeding up the chemical process that breaks down the ozone layer.
Engineered nano-particles could be designed as thin disks and built with electric or magnetic materials that would enable them to be levitated or oriented in the atmosphere to reflect the most solar radiation.
It may also be possible to control the position of particles above the Earth. In theory, the particles might be engineered to drift toward Earth's poles, to reduce solar radiation in polar regions and counter the melting of ice that speeds up polar warming–known as the ice-albedo feedback.
"Such an ability might be relevant in the event that warming triggers rapid deglaciation," Keith's study says.
"Engineered nano-particles would first need to be tested in laboratories, with only short-lived particles initially deployed in the atmosphere so any effects could be easily reversible," says Keith.
Research would also be needed to determine whether such nano-particles could be effectively distributed, given the complex interplay of forces in the atmosphere, and how much cooling might be achieved at the planet's surface.
It is also unknown whether the amount of particles needed–about one billion kilograms per year, or 10 million tonnes over 10 years–could be manufactured and deployed at a reasonable cost.
However, Keith notes another study, which looked at the cost of putting natural sulphates into the stratosphere.
"You could manipulate the Earth's climate at large scale for a cost that's of the order of $1 billion a year. It sounds like a lot of money, but compared to the costs of managing other environmental problems or climate change, that is peanuts."
"This is not an argument to do it, only an indication that risk, not cost, will be the deciding issue," he adds.
In a separate new study published in the journal Geophysical Research Letters, Keith and international scientists describe another geoengineering approach that may also offer advantages over injecting sulphur dioxide gas.
Releasing sulphuric acid, or another condensable vapour, from an aircraft would give better control of particle size. The study says this would reflect more solar radiation back into space, while using fewer particles overall and reducing unwanted heating in the lower stratosphere.
The study included computer modeling that showed that the sulphuric acid would quickly condense in a plume, forming smaller particles that would last longer in the stratosphere and be more effective in reflecting solar radiation than the large sulphates formed from sulphur dioxide gas.
Keith stresses that whether geoengineering technology is ever used, it shouldn't be seen as a reason not to reduce man-made greenhouse gas emissions now accumulating in the atmosphere.
"Seat belts reduce the risk of being injured in accidents. But having a seat belt doesn't mean you should drive drunk at 100 miles an hour," he says. | <urn:uuid:ac7e8e60-788a-4122-b608-01a6957a0c60> | 3.734375 | 1,171 | News Article | Science & Tech. | 28.981412 | 95,537,342 |
Fish and amphibians such as newts are capable of advanced tissue regeneration and can regenerate tissue without scar tissue to their perfect original shape, should they lose organs such as their limbs. Unraveling the mechanisms of regeneration and homeostasis of tissues has been one of the main issues in recent biology, anticipated for its potential for application in human regenerative medicine. Not much had been known about the mechanism and the source of cells supplied in the regeneration of tissue.
The research group led by Tokyo Tech's Associate Professor Atsushi Kawakami, graduate student Eri Shibata, and others used the regeneration of zebrafish fins as a model and labeled the cells of the regenerative tissue with fluorescence (Figure 1) using a genetic cell-labeling technique (Cre-loxP site-specific recombination) and tracked their fates over weeks. As a result, they determined that epithelial cells near a wound follow heterogeneous cell fates.
Cre-loxP was used as the cell-labeling technique. In this case, EGFP (enhanced green fluorescent protein) expression in the regenerative epidermis of zebrafish fins was switched on by using recombination enzyme Cre expressed under the regulation of the gene fibronectin 1b. Recombination can be induced by using a compound called tamoxifen (TAM).
*dpa: the number of days since amputation
Credit: Tokyo Institute of Technology
The first group of epithelial cells, which are initially recruited to the wound, covers the wound but disappears within a few days by apoptosis. The second group of epithelial cells, which arrives later, becomes the cells forming the regenerated skin.
However, many of these regenerated skin cells are moved toward the end of the fin and disappear after about one to two weeks. In investigating the source of the replenishing skin cells, it was found that numerous new epithelial cells are supplied during regeneration by a large area of skin that contains stem cells and becomes active in cell proliferation.
Intriguingly, it became clear that skin cells in the regeneration process do not undergo special processes such as de-differentiating into stem cells and regenerating, but existing stem cells in the basal layer and differentiated cells in the surface layer each proliferate with their own characteristics intact to regenerate the skin.
Based on this study, it is conceivable that regeneration of skin would become possible by controlling the autonomous proliferation of stem cells in the basal layer in other vertebrates as well, including humans.
If the mechanism of skin regeneration discovered in this study proves to be the same in humans, it is expected to be used in the future to unravel the causes of various skin diseases, in regenerative medicine research, and for other purposes.
Authors: Eri Shibata, Kazunori Ando, Emiko Murase, and Atsushi Kawakami*
Title of original paper: Heterogeneous fates and dynamic rearrangement of regenerative epidermis-derived cells during zebrafish fin regeneration
DOI: 10.1242/dev.162016
Affiliation: School of Life Science and Technology, Tokyo Institute of Technology
*Corresponding authors email: email@example.com
Tadashi Okamura | EurekAlert!
Visual Basic® Scripting Edition
Returns the position of the first occurrence of one string within another.
InStr([start, ]string1, string2[, compare])
Look for string2 inside string1. The InStr function syntax has these arguments:
start: Optional. Numeric expression that sets the starting position for each search. If omitted, search begins at the first character position. If start contains Null, an error occurs. The start argument is required if compare is specified.
string1: Required. String expression being searched.
string2: Required. String expression searched for.
compare: Optional. Numeric value indicating the kind of comparison to use when evaluating substrings. See the Settings section for values. If omitted, a binary comparison is performed.
The compare argument can have the following values:
vbBinaryCompare (0): Perform a binary comparison. This means it IS case-sensitive.
vbTextCompare (1): Perform a textual comparison. This means it is NOT case-sensitive.
vbDatabaseCompare (2): Perform a comparison based upon information contained in the database where the comparison is to be performed.
The InStr function returns the following values:
If string1 is zero-length, InStr returns 0.
If string1 is Null, InStr returns Null.
If string2 is zero-length, InStr returns start.
If string2 is Null, InStr returns Null.
If string2 is not found, InStr returns 0.
If string2 is found within string1, InStr returns the position at which the match is found (the first character is position 1).
If start > Len(string2), InStr returns 0.
Note Another function (InStrB) is provided for use with byte data contained in a string. Instead of returning the character position of the first occurrence of one string within another, InStrB returns the byte position.
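For illustration, here is a short usage sketch (the sample string and variable names are hypothetical, not part of the reference; the constants are those listed under Settings above):

    Dim text, pos
    text = "Hello, World!"
    pos = InStr(text, "world")                    ' Default binary comparison is case-sensitive: returns 0.
    pos = InStr(1, text, "world", vbTextCompare)  ' Textual comparison is case-insensitive: returns 8.
    pos = InStr(9, text, "o")                     ' Search starting at position 9: returns 9, not 5.
    pos = InStr(text, "xyz")                      ' Substring not found: returns 0.

Note that start must be supplied whenever compare is, which is why the second call passes 1 explicitly.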
Explanation of Names
From the type genus Bombyx, Greek meaning "silkworm." (1)
Apate was a Greek goddess of deceit. She was a daughter (by parthenogenesis!) of Nyx, Night, who was daughter of Chaos. Lodes is Greek (?) for veins, as in lodes of ore. So these are "deceitfully veined" moths? (Based on Internet searches, it makes sense, but this is somewhat speculative.)
5 species in 2 genera in North America
2 species in Canada (CBIF)
Previously treated as a subfamily of Bombycidae by Lemaire and Minet in 1999. This classification was followed by Moths of Canada and by Charles Covell on page xiii of the 2005 edition of A Field Guide to Moths of Eastern North America (2). Zwick (2008) reinstated Apatelodidae as a valid family.
Franclemont, J.G. 1973. The Moths of America North of Mexico. Fascicle 20.1. Mimallonoidea (Mimallonidae) and Bombycoidea (Apatelodidae, Bombycidae,..(3)
Lemaire, C. & J. Minet 1999. The Bombycoidea and their relatives. Pages 321-353 in: Lepidoptera: Moths and Butterflies. 1. Evolution, Systematics, and Biogeography. Handbook of Zoology. Vol. IV, Part 35. N. P. Kristensen, ed. De Gruyter, Berlin and New York.
Zwick, A. (2008), Molecular phylogeny of Anthelidae and other bombycoid taxa (Lepidoptera: Bombycoidea). Systematic Entomology, 33: 190-209. doi:10.1111/j.1365-3113.2007.00410.x
Pinned adult images of three species occurring in Canada (CBIF)
Dielectric barrier discharge
Dielectric-barrier discharge (DBD) is the electrical discharge between two electrodes separated by an insulating dielectric barrier. Originally called silent (inaudible) discharge and also known as ozone production discharge or partial discharge, it was first reported by Ernst Werner von Siemens in 1857. In a typical construction of a DBD, one of the two electrodes is covered with a dielectric barrier material, and discharge filaments, normally visible to the naked eye, form between the dielectric and the opposite electrode. In a photograph of an atmospheric DBD occurring between two steel electrode plates, each covered with a dielectric (mica) sheet, the filaments appear as columns of conducting plasma, and the foot of each filament marks the surface-accumulated charge.
The process normally uses high voltage alternating current, ranging from lower RF to microwave frequencies. However, other methods have been developed to extend the frequency range all the way down to DC. One method is to cover one of the electrodes with a high-resistivity layer; this is known as the resistive barrier discharge. Another technique, using a semiconductor layer of gallium arsenide (GaAs) to replace the dielectric layer, enables these devices to be driven by a DC voltage between 580 V and 740 V.
DBD devices can be made in many configurations, typically planar, using parallel plates separated by a dielectric, or cylindrical, using coaxial plates with a dielectric tube between them. In a common coaxial configuration, the dielectric is shaped in the same form as common fluorescent tubing. It is filled at atmospheric pressure with either a rare gas or a rare gas-halide mix, with the glass walls acting as the dielectric barrier. Because they operate at atmospheric pressure, such processes require high energy levels to sustain. Common dielectric materials include glass, quartz, ceramics and polymers. The gap distance between electrodes varies considerably, from less than 0.1 mm in plasma displays, through several millimetres in ozone generators, up to several centimetres in CO2 lasers.
Depending on the geometry, DBD can be generated in a volume (VDBD) or on a surface (SDBD). For VDBD the plasma is generated between two electrodes, for example between two parallel plates with a dielectric in between. For SDBD the microdischarges are generated on the surface of a dielectric, which results in a more homogeneous plasma than can be achieved with the VDBD configuration. Because the microdischarges of an SDBD are limited to the surface, their density is higher than in a VDBD. The plasma is generated on top of the surface of an SDBD plate.
A particularly compact and economical DBD plasma generator can be built on the principles of the piezoelectric direct discharge. In this technique, the high voltage is generated with a piezoelectric transformer, the secondary circuit of which also acts as the high-voltage electrode. Since the transformer material is a dielectric, the discharge produced resembles the dielectric barrier discharge in its properties.
A multitude of random arcs form in the gap between the two electrodes during discharges in gases at atmospheric pressure when the operating gap exceeds 1.5 mm. As the charges collect on the surface of the dielectric, they discharge in microseconds (millionths of a second), leading to their reformation elsewhere on the surface. Similar to other electrical discharge methods, the contained plasma is sustained if the continuous energy source provides the required degree of ionization, overcoming the recombination process that leads to the extinction of the discharge plasma. Such recombination is directly proportional to the collisions between the molecules and in turn to the pressure of the gas, as explained by Paschen's law. The discharge process causes the emission of an energetic photon, whose frequency and energy correspond to the type of gas used to fill the discharge gap.
I-V characteristic of DBD
The electrical diagram of the DBD device in the absence of discharge can be presented in the form shown in Fig. 1, where $C_d$ is the capacitance of the dielectric adjacent to one of the two electrodes and $C_g$ is the capacitance of the air (or gas) gap between the dielectric within the adjacent electrode footprint and the ground electrode. $C_p$ and $R_p$ are the capacitance and resistance modeling the electric response of the plasma. If a switch connects the capacitors $C_d$ and $C_g$ shown in Fig. 1 (there is no electrical breakdown), the voltage generator is connected to a circuit comprising the two capacitors $C_d$ and $C_g$ in series. The capacitance of this circuit can be expressed as
$$C_0 = \frac{C_d C_g}{C_d + C_g} \qquad (1)$$
and the electric current through this circuit can be expressed in the form
$$i(t) = C_0 \frac{dU(t)}{dt} \qquad (2)$$
where $U(t)$ is the generator voltage. Oscillograms of $U(t)$ and $i(t)$ obtained in the case of electrical breakdown of the operating gap (the switch in Fig. 1 connecting $R_p$ and $C_p$ into the circuit) are presented in Fig. 2. We are going to describe, to a first order of approximation, the plasma response to the voltage applied to the gap in the same way as a series circuit of two invariable components $R_p$ and $C_p$. To test the validity of this assumption, let us express the voltage $U_g(t)$ applied to the gap in the form
$$U_g(t) = U(t) - \frac{1}{C_d}\int i(t)\,dt + A \qquad (3)$$
where the second term on the right-hand side is the drop in potential on the capacitor $C_d$, and $A$ is the integration constant. The current can be expressed in terms of the voltage $U_g$ and the values $R_p$ and $C_p$. For this purpose, let us present the value $U_g$ in the form of the sum
$$U_g(t) = U_R(t) + U_C(t) \qquad (4)$$
where $U_R$ and $U_C$ represent the drops in potential on the resistor $R_p$ and the capacitor $C_p$ respectively. Taking into account that the electric current through the circuit can be expressed as $i(t) = U_R(t)/R_p$ and, consequently, $i(t) = C_p\,dU_C(t)/dt$, equation (4) can be rewritten as the standard linear differential equation
$$U_g(t) = R_p C_p \frac{dU_C(t)}{dt} + U_C(t) \qquad (5)$$
whose solution is
$$U_C(t) = e^{-t/(R_p C_p)}\left[\frac{1}{R_p C_p}\int U_g(t)\,e^{t/(R_p C_p)}\,dt + B\right] \qquad (6)$$
where $B$ is the integration constant. Differentiating equation (6) with respect to $t$ and substituting the result into equation (5), one can express the current in terms of the voltage $U_g$ and the values $R_p$ and $C_p$:
$$i(t) = \frac{U_g(t) - U_C(t)}{R_p} = \frac{1}{R_p}\left\{U_g(t) - e^{-t/(R_p C_p)}\left[\frac{1}{R_p C_p}\int U_g(t)\,e^{t/(R_p C_p)}\,dt + B\right]\right\} \qquad (7)$$
Here $U_g(t) = U_{pl}(t) + U_0$, where $U_{pl}$ is the drop in potential on the plasma and $U_0$ is the constant near-electrode drop in potential. The four parameters $R_p$, $C_p$, $B$ and $U_0$ can be found by fitting the theoretical function $i(t)$, calculated from the experimental voltage $U(t)$ by equation (7), to the actual electric current measured in the experiment. Results of the least-squares fitting, corresponding to one particular case, are shown in Fig. 3. For details of the derivation of equation (7) and the possibilities for analysing DBD parameters, see Ref. Equation (7) represents the I-V characteristic of DBD in its most general form.
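As an illustrative numeric check of equation (1), with values chosen arbitrarily for illustration (not taken from any measurement): for $C_d = 100\ \mathrm{pF}$ and $C_g = 10\ \mathrm{pF}$,
$$C_0 = \frac{100 \times 10}{100 + 10}\ \mathrm{pF} \approx 9.1\ \mathrm{pF},$$
so the series capacitance is dominated by the smaller gap capacitance, and by equation (2) the cell draws only a small displacement current before breakdown.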
Usage of generated radiation
DBDs can be used to generate optical radiation by the relaxation of excited species in the plasma. The main application here is the generation of ultraviolet radiation. Such excimer ultraviolet lamps can produce light with short wavelengths, which can be used to produce ozone on industrial scales. Ozone is still used extensively in industrial air and water treatment. Early 20th-century attempts at commercial nitric acid and ammonia production used DBDs, as several nitrogen-oxygen compounds are generated as discharge products.
Usage of the generated plasma
Since the 19th century, DBDs were known for their decomposition of different gaseous compounds, such as NH3, H2S and CO2. Other modern applications include semiconductor manufacturing, germicidal processes, polymer surface treatment, high-power CO2 lasers typically used for welding and metal cutting, pollution control and plasma displays panels, aerodynamic flow control… The relatively lower temperature of DBDs makes it an attractive method of generating plasma at atmospheric pressure.
The plasma itself is used to modify or clean (plasma cleaning) the surfaces of materials (e.g. polymers, semiconductor surfaces) that can themselves act as the dielectric barrier, or to modify gases for "soft" plasma cleaning and for increasing the adhesion of surfaces prepared for coating or gluing (flat-panel display technologies).
A dielectric barrier discharge is one method of plasma treatment of textiles at atmospheric pressure and room temperature. The treatment can be used to modify the surface properties of the textile, improving wettability, the absorption of dyes and adhesion, and it can be used for sterilization. DBD plasma provides a dry treatment that doesn't generate waste water or require drying of the fabric after treatment. For textile treatment, a DBD system requires a few kilovolts of alternating current, at between 1 and 100 kilohertz. Voltage is applied to insulated electrodes with a millimetre-size gap through which the textile passes.
An excimer lamp can be used as a powerful source of short-wavelength ultraviolet light, useful in chemical processes such as surface cleaning of semiconductor wafers. The lamp relies on a dielectric barrier discharge in an atmosphere of xenon and other gases to produce the excimers.
DBDs also serve as an additional process alongside chlorine gas for the removal of bacteria and organic contaminants from drinking water supplies. Treatment of public swimming baths, aquariums and fish ponds involves the use of ultraviolet radiation, which can be produced by a DBD in xenon gas with glass as the dielectric.
Dielectric barrier discharges were used to generate relatively large-volume diffuse plasmas at atmospheric pressure and applied to inactivate bacteria in the mid 1990s. This eventually led to the development of a new field of applications, the biomedical applications of plasmas. This field is now known as plasma medicine.
Due to their nature, these devices have the following properties:
- capacitive electric load: low Power Factor in range of 0.1 to 0.3
- high ignition voltage 1–10 kV
- huge amount of energy stored in electric field - requirement of energy recovery if DBD is not driven continuously
- voltages and currents during discharge event have major influence on discharge behaviour (filamented, homogeneous).
Operation with continuous sine waves or square waves is mostly used in high power industrial installations. Pulsed operation of DBDs may lead to higher discharge efficiencies.
Drivers for this type of electric load are power HF generators that in many cases contain a transformer for high voltage generation. They resemble the control gear used to operate compact fluorescent lamps or cold cathode fluorescent lamps. The operation mode and the topologies of circuits to operate DBD lamps with continuous sine or square waves are similar to those standard drivers. In these cases, the energy stored in the DBD's capacitance does not have to be recovered to the intermediate supply after each ignition. Instead, it stays within the circuit (oscillating between the DBD's capacitance and at least one inductive component of the circuit), and only the real power consumed by the lamp has to be provided by the power supply. Drivers for pulsed operation, by contrast, suffer from a rather low power factor and in many cases must fully recover the DBD's energy. Since pulsed operation of DBD lamps can lead to increased lamp efficiency, international research has led to suitable circuit concepts. Basic topologies are the resonant flyback and the resonant half bridge. A flexible circuit that combines the two topologies is given in two patent applications, and may be used to adaptively drive DBDs with varying capacitance.
An overview of different circuit concepts for the pulsed operation of DBD optical radiation sources is given in "Resonant Behaviour of Pulse Generators for the Efficient Drive of Optical Radiation Sources Based on Dielectric Barrier Discharges".
- Matsuno, Hiromitsu, Nobuyuki Hishinuma, Kenichi Hirose, Kunio Kasagi, Fumitoshi Takemoto, Yoshinori Aiura, and TatsushiIgarashi. Dielectric barrier discharge lamp, United States Patent 5757132 (Commercial website). Freepatentsonline.com. First published 1998-05-26. Retrieved on 2007-08-05.
- Dhali, S.K. and I. Sardja. Dielectric-barrier discharge for the removal of SO2 fromflue gas. (Abstract only). IEEE International Conference on Plasma Science, 1989; IEEE Conference Record - Abstracts, 1989. Retrieved on 2007-08-05.
- Kogelschatz, Ulrich, Baldur Eliasson, and Walter Egli. From ozone generators to flat television screens: history and future potential of dielectric-barrier discharges. Pure Applied Chemistry, Vol. 71, No. 10, pp. 1819-1828, 1999. Retrieved on 2007-08-05.
- "Aerosol charge distributions in Dielectric Barrier Discharges" (PDF). Publication date 2009. European Aerosol Conference 2009 Karlsruhe. Archived from the original (PDF) on 19 July 2011. Retrieved 2010-12-10.
- M. Laroussi, I. Alexeff, J. P. Richardson, and F. F. Dyer " The Resistive Barrier Discharge", IEEE Trans. Plasma Sci. 30, 158 (2002)
- "Structure formation in a DC-driven "barrier" discharge stability analysis and numerical solutions" (PDF). Publication date July 15–20, 2007. ICPIG Prague, Czech Republic. Retrieved 2010-12-09.
- Kraus, Martin, Baldur Eliasson, Ulrich Kogelschatzb, and Alexander Wokauna. CO2 reforming of methane by the combination of dielectric-barrier discharges and catalysis Physical Chemistry Chemical Physics, 2001, 3, 294-300. Retrieved on 2007-08-05.
- Gibalov, V. I. & Pietsch, G. J. (2000). "The development of dielectric barrier discharges in gas gaps and on surfaces". Journal of Physics D: Applied Physics. 33: 2618–2636. Bibcode:2000JPhD...33.2618G. doi:10.1088/0022-3727/33/20/315.
- Radacsi, N.; Van der Heijden, A. E. D. M.; Stankiewicz, A. I.; ter Horst, J. H. (2013). "Cold plasma synthesis of high quality organic nanoparticles at atmospheric pressure". Journal of nanoparticle research. 15: 1–13. Bibcode:2013JNR....15.1445R. doi:10.1007/s11051-013-1445-4.
- M. Teschke and J. Engemann, Contrib. Plasma Phys. 49, 614 (2009)
- M. Teschke and J. Engemann, US020090122941A1, U.S. Patent application
- "Dielectric-Barrier Discharges. Principle and Applications" (PDF). ABB Corporate Research Ltd., Baden, Switzerland. 11 October 1997. Retrieved 19 January 2013.
- Evgeny V. Shun’ko and Veniamin V. Belkin. "Treatment surfaces with atomic oxygen excited in dielectric barrier discharge plasma of O2 admixed to N2". AIP Advances. (2012) AIP ADVANCES. 2: 022157–24. Bibcode:2012AIPA....2b2157S. doi:10.1063/1.4732120.
- Nitrogen Classic Encyclopedia, Based on the 11th Edition of the Encyclopædia Britannica (pub. 1911), 1911encyclopedia.org.
- Evgeny V. Shun’ko and Veniamin V. Belkin. "Cleaning properties of atomic oxygen excited to metastable state 2s[sup 2]2p[sup 4]([sup 1]S[sub 0])". Journal of Applied Physics. (2007) J. Appl. Phys. 102: 083304–1–14. Bibcode:2007JAP...102h3304S. doi:10.1063/1.2794857.
- "Disinfection of materials". Retrieved 2010-12-16.
- The Textile Institute, Sustainable textiles, CRC Press, ISBN 978-1-84569-453-1 page 156
- "Dielectric". Siliconfareast.com 2001-2006. Retrieved 8 January 2011.
- "Dielectric barrier discharge system with catalytically active porous segment for improvement of water treatmen" (PDF). Department of Physics, University of West Bohemia, Univerzitni 22, 306 14 Plzen, Czech Republic 2008. Retrieved 9 January 2011.
- "UV v.s Chlorine". Atguv.com 2010. Retrieved 9 January 2011.
- "Dielectric barrier discharge lamp comprising an UV-B phosphor". Freepatentsonline.com 12/21/2010. Retrieved 9 January 2011.
- M. Laroussi, "Sterilization of contaminated matter with an atmospheric pressure plasma", IEEE Trans. Plasma Sci. 24, 1188 (1996)
- Roth, J. Reece (2001). "Chapter 15.3 Atmospheric Dielectric Barrier Discharges (DBDs)". Industrial Plasma Engineering: Volume 2: Applications to Nonthermal Plasma Processing (1st ed.). CRC Press. ISBN 978-0750305440.
- "Current controlled driver for a Dielectric Barrier Discharge lamp". Publication date 21–24 June 2010. Power Electronics Conference (IPEC) 2010 International. Retrieved 2010-12-09.
- "Resonance behaviour of a pulsed electronic control gear for dielectric barrier discharges". Power Electronics, Machines and Drives (PEMD 2010), 5th IET International Conference on.
- "Patent application title: Device for Generation of Voltage Pulse Sequences In Particular for Operation of Capacitive Discharge Lamps". Publication date 2005. University of Karlsruhe. Retrieved 2011-05-23.
- "Patent application title: Adaptive Drive for Dielectric Barrier Discharge (DBD) Lamp". Publication date 2008. Briarcliff Manor, New York US. Retrieved 2010-12-09.
- "Resonant Behaviour of Pulse Generators for the Efficient Drive of Optical Radiation Sources Based on Dielectric Barrier Discharges". Publication date 10.07.2013. KIT Scientific Publishing. | <urn:uuid:4747e349-9c7d-4993-b7b4-ab926305e09d> | 3.546875 | 3,802 | Knowledge Article | Science & Tech. | 45.13598 | 95,537,393 |
Cold fusion is a hypothesized type of nuclear reaction that would occur at, or near, room temperature. This is compared with the "hot" fusion which takes place naturally within stars, under immense pressure and at temperatures of millions of degrees, and distinguished from muon-catalyzed fusion. There is currently no accepted theoretical model that would allow cold fusion to occur.
In 1989 Martin Fleischmann (then one of the world's leading electrochemists) and Stanley Pons reported that their apparatus had produced anomalous heat ("excess heat") of a magnitude they asserted would defy explanation except in terms of nuclear processes. They further reported measuring small amounts of nuclear reaction byproducts, including neutrons and tritium. The small tabletop experiment involved electrolysis of heavy water on the surface of a palladium (Pd) electrode. The reported results received wide media attention, and raised hopes of a cheap and abundant source of energy.
Many scientists tried to replicate the experiment with the few details available. Hopes faded due to the large number of negative replications, the withdrawal of many reported positive replications, the discovery of flaws and sources of experimental error in the original experiment, and finally the discovery that Fleischmann and Pons had not actually detected nuclear reaction byproducts. By late 1989, most scientists considered cold fusion claims dead, and cold fusion subsequently gained a reputation as pathological science. In 1989 the United States Department of Energy (DOE) concluded that the reported results of excess heat did not present convincing evidence of a useful source of energy and decided against allocating funding specifically for cold fusion. A second DOE review in 2004, which looked at new research, reached similar conclusions and did not result in DOE funding of cold fusion.
A small community of researchers continues to investigate cold fusion, now often preferring the designation low-energy nuclear reactions (LENR) or condensed matter nuclear science (CMNS). Since cold fusion articles are rarely published in peer-reviewed mainstream scientific journals, they do not attract the level of scrutiny expected for mainstream scientific publications.
Nuclear fusion is normally understood to occur at temperatures in the tens of millions of degrees. Since the 1920s, there has been speculation that nuclear fusion might be possible at much lower temperatures by catalytically fusing hydrogen absorbed in a metal catalyst. In 1989, a claim by Stanley Pons and Martin Fleischmann (then one of the world's leading electrochemists) that such cold fusion had been observed caused a brief media sensation before the majority of scientists criticized their claim as incorrect after many found they could not replicate the excess heat. Since the initial announcement, cold fusion research has continued by a small community of researchers who believe that such reactions happen and hope to gain wider recognition for their experimental evidence.
The ability of palladium to absorb hydrogen was recognized as early as the nineteenth century by Thomas Graham. In the late 1920s, two Austrian born scientists, Friedrich Paneth and Kurt Peters, originally reported the transformation of hydrogen into helium by nuclear catalysis when hydrogen was absorbed by finely divided palladium at room temperature. However, the authors later retracted that report, saying that the helium they measured was due to background from the air.
In 1927 Swedish scientist John Tandberg reported that he had fused hydrogen into helium in an electrolytic cell with palladium electrodes. On the basis of his work, he applied for a Swedish patent for "a method to produce helium and useful reaction energy". Due to Paneth and Peters's retraction and his inability to explain the physical process, his patent application was denied. After deuterium was discovered in 1932, Tandberg continued his experiments with heavy water. The final experiments made by Tandberg with heavy water were similar to the original experiment by Fleischmann and Pons. Fleischmann and Pons were not aware of Tandberg's work.
The term "cold fusion" was used as early as 1956 in a New York Times article about Luis Alvarez's work on muon-catalyzed fusion. Paul Palmer and then Steven Jones of Brigham Young University used the term "cold fusion" in 1986 in an investigation of "geo-fusion", the possible existence of fusion involving hydrogen isotopes in a planetary core. In his original paper on this subject with Clinton Van Siclen, submitted in 1985, Jones had coined the term "piezonuclear fusion".
The most famous cold fusion claims were made by Stanley Pons and Martin Fleischmann in 1989. After a brief period of interest by the wider scientific community, their reports were called into question by nuclear physicists. Pons and Fleischmann never retracted their claims, but moved their research program to France after the controversy erupted.
Events preceding announcement
Martin Fleischmann of the University of Southampton and Stanley Pons of the University of Utah hypothesized that the high compression ratio and mobility of deuterium that could be achieved within palladium metal using electrolysis might result in nuclear fusion. To investigate, they conducted electrolysis experiments using a palladium cathode and heavy water within a calorimeter, an insulated vessel designed to measure process heat. Current was applied continuously for many weeks, with the heavy water being renewed at intervals. Some deuterium was thought to be accumulating within the cathode, but most was allowed to bubble out of the cell, joining oxygen produced at the anode. For most of the time, the power input to the cell was equal to the calculated power leaving the cell within measurement accuracy, and the cell temperature was stable at around 30 °C. But then, at some point (in some of the experiments), the temperature rose suddenly to about 50 °C without changes in the input power. These high temperature phases would last for two days or more and would repeat several times in any given experiment once they had occurred. The calculated power leaving the cell was significantly higher than the input power during these high temperature phases. Eventually the high temperature phases would no longer occur within a particular cell.
In 1988 Fleischmann and Pons applied to the United States Department of Energy for funding towards a larger series of experiments. Up to this point they had been funding their experiments using a small device built with $100,000 out-of-pocket. The grant proposal was turned over for peer review, and one of the reviewers was Steven Jones of Brigham Young University. Jones had worked for some time on muon-catalyzed fusion, a known method of inducing nuclear fusion without high temperatures, and had written an article on the topic entitled "Cold nuclear fusion" that had been published in Scientific American in July 1987. Fleischmann and Pons and co-workers met with Jones and co-workers on occasion in Utah to share research and techniques. During this time, Fleischmann and Pons described their experiments as generating considerable "excess energy", in the sense that it could not be explained by chemical reactions alone. They felt that such a discovery could bear significant commercial value and would be entitled to patent protection. Jones, however, was measuring neutron flux, which was not of commercial interest. To avoid future problems, the teams appeared to agree to simultaneously publish their results, though their accounts of their 6 March meeting differ.
In mid-March 1989, both research teams were ready to publish their findings, and Fleischmann and Jones had agreed to meet at an airport on 24 March to send their papers to Nature via FedEx. Fleischmann and Pons, however, pressured by the University of Utah, which wanted to establish priority on the discovery, broke their apparent agreement, submitting their paper to the Journal of Electroanalytical Chemistry on 11 March, and disclosing their work via a press release and press conference on 23 March. Jones, upset, faxed in his paper to Nature after the press conference.
Fleischmann and Pons' announcement drew wide media attention. But the 1986 discovery of high-temperature superconductivity had made the scientific community more open to revelations of unexpected scientific results that could have huge economic repercussions and that could be replicated reliably even if they had not been predicted by established theories. Many scientists were also reminded of the Mössbauer effect, a process involving nuclear transitions in a solid. Its discovery 30 years earlier had also been unexpected, though it was quickly replicated and explained within the existing physics framework.
The announcement of a new purported clean source of energy came at a crucial time: adults still remembered the 1973 oil crisis and the problems caused by oil dependence, anthropogenic global warming was starting to become notorious, the anti-nuclear movement was labeling nuclear power plants as dangerous and getting them closed, people had in mind the consequences of strip mining, acid rain, the greenhouse effect and the Exxon Valdez oil spill, which happened the day after the announcement. In the press conference, Chase N. Peterson, Fleischmann and Pons, backed by the solidity of their scientific credentials, repeatedly assured the journalists that cold fusion would solve environmental problems, and would provide a limitless inexhaustible source of clean energy, using only seawater as fuel. They said the results had been confirmed dozens of times and they had no doubts about them. In the accompanying press release Fleischmann was quoted saying: "What we have done is to open the door of a new research area, our indications are that the discovery will be relatively easy to make into a usable technology for generating heat and power, but continued work is needed, first, to further understand the science and secondly, to determine its value to energy economics."
Response and fallout
Although the experimental protocol had not been published, physicists in several countries attempted, and failed, to replicate the excess heat phenomenon. The first paper submitted to Nature reproducing excess heat, although it passed peer-review, was rejected because most similar experiments were negative and there were no theories that could explain a positive result; this paper was later accepted for publication by the journal Fusion Technology. Nathan Lewis, professor of chemistry at the California Institute of Technology, led one of the most ambitious validation efforts, trying many variations on the experiment without success, while CERN physicist Douglas R. O. Morrison said that "essentially all" attempts in Western Europe had failed. Even those reporting success had difficulty reproducing Fleischmann and Pons' results. On 10 April 1989, a group at Texas A&M University published results of excess heat and later that day a group at the Georgia Institute of Technology announced neutron production—the strongest replication announced up to that point due to the detection of neutrons and the reputation of the lab. On 12 April Pons was acclaimed at an ACS meeting. But Georgia Tech retracted their announcement on 13 April, explaining that their neutron detectors gave false positives when exposed to heat. Another attempt at independent replication, headed by Robert Huggins at Stanford University, which also reported early success with a light water control, became the only scientific support for cold fusion in 26 April US Congress hearings. But when he finally presented his results he reported an excess temperature of only one degree Celsius, a result that could be explained by chemical differences between heavy and light water in the presence of lithium. He had not tried to measure any radiation and his research was derided by scientists who saw it later. For the next six weeks, competing claims, counterclaims, and suggested explanations kept what was referred to as "cold fusion" or "fusion confusion" in the news.
In April 1989, Fleischmann and Pons published a "preliminary note" in the Journal of Electroanalytical Chemistry. This paper notably showed a gamma peak without its corresponding Compton edge, which indicated they had made a mistake in claiming evidence of fusion byproducts. Fleischmann and Pons replied to this critique, but the only thing left clear was that no gamma ray had been registered and that Fleischmann refused to recognize any mistakes in the data. A much longer paper published a year later went into details of calorimetry but did not include any nuclear measurements.
Nevertheless, Fleischmann and Pons and a number of other researchers who found positive results remained convinced of their findings. The University of Utah asked Congress to provide $25 million to pursue the research, and Pons was scheduled to meet with representatives of President Bush in early May.
On 30 April 1989 cold fusion was declared dead by the New York Times. The Times called it a circus the same day, and the Boston Herald attacked cold fusion the following day.
On 1 May 1989 the American Physical Society held a session on cold fusion in Baltimore, including many reports of experiments that failed to produce evidence of cold fusion. At the end of the session, eight of the nine leading speakers stated that they considered the initial Fleischmann and Pons claim dead, with the ninth, Johann Rafelski, abstaining. Steven E. Koonin of Caltech called the Utah report a result of "the incompetence and delusion of Pons and Fleischmann," which was met with a standing ovation. Douglas R. O. Morrison, a physicist representing CERN, was the first to call the episode an example of pathological science.
On 4 May, due to all this new criticism, the meetings with various representatives from Washington were cancelled.
From 8 May only the A&M tritium results kept cold fusion afloat.
In July and November 1989, Nature published papers critical of cold fusion claims. Negative results were also published in several other scientific journals including Science, Physical Review Letters, and Physical Review C (nuclear physics).[notes 3]
The United States Department of Energy organized a special panel to review cold fusion theory and research. The panel issued its report in November 1989, concluding that results as of that date did not present convincing evidence that useful sources of energy would result from the phenomena attributed to cold fusion. The panel noted the large number of failures to replicate excess heat and the even greater inconsistency among reports of the nuclear reaction byproducts expected under established conjecture. Nuclear fusion of the type postulated would be inconsistent with current understanding and, if verified, would require established conjecture, perhaps even theory itself, to be extended in an unexpected way. The panel was against special funding for cold fusion research, but supported modest funding of "focused experiments within the general funding system." Cold fusion supporters continued to argue that the evidence for excess heat was strong, and in September 1990 the National Cold Fusion Institute listed 92 groups of researchers from 10 different countries that had reported corroborating evidence of excess heat, but they refused to provide any evidence of their own, arguing that it could endanger their patents. However, no further DOE or NSF funding resulted from the panel's recommendation. By this point, however, academic consensus had moved decidedly toward labeling cold fusion as a kind of "pathological science".
In March 1990 Michael H. Salamon, a physicist from the University of Utah, and nine co-authors reported negative results. University faculty were then "stunned" when a lawyer representing Pons and Fleischmann demanded the Salamon paper be retracted under threat of a lawsuit. The lawyer later apologized; Fleischmann defended the threat as a legitimate reaction to alleged bias displayed by cold-fusion critics.
In early May 1990 one of the two A&M researchers, Kevin Wolf, acknowledged the possibility of spiking, but said that the most likely explanation was tritium contamination in the palladium electrodes or simply contamination due to sloppy work. In June 1990 an article in Science by science writer Gary Taubes destroyed the public credibility of the A&M tritium results when it accused its group leader John Bockris and one of his graduate students of spiking the cells with tritium. In October 1990 Wolf finally said that the results were explained by tritium contamination in the rods. An A&M cold fusion review panel found that the tritium evidence was not convincing and that, while they couldn't rule out spiking, contamination and measurement problems were more likely explanations,[text 4] and Bockris never got support from his faculty to resume his research.
On 1 January 1991 Pons left the University of Utah and went to Europe. In 1992, Pons and Fleischmann resumed research with Toyota Motor Corporation's IMRA lab in France. Fleischmann left for England in 1995, and the contract with Pons was not renewed in 1998 after spending $40 million with no tangible results. The IMRA laboratory stopped cold fusion research in 1998 after spending £12 million. Pons has made no public declarations since, while Fleischmann continued giving talks and publishing papers.
Mostly in the 1990s, several books were published that were critical of cold fusion research methods and the conduct of cold fusion researchers. Over the years, several books have appeared that defended them. By 1998, the University of Utah had already dropped its research after spending over $1 million, and in the summer of 1997, Japan cut off research and closed its own lab after spending $20 million.
A 1991 review by a cold fusion proponent had calculated "about 600 scientists" were still conducting research. After 1991, cold fusion research only continued in relative obscurity, conducted by groups that had increasing difficulty securing public funding and keeping programs open. These small but committed groups of cold fusion researchers have continued to conduct experiments using Fleischmann and Pons electrolysis set-ups in spite of the rejection by the mainstream community. The Boston Globe estimated in 2004 that there were only 100 to 200 researchers working in the field, most suffering damage to their reputation and career. Since the main controversy over Pons and Fleischmann had ended, cold fusion research has been funded by private and small governmental scientific investment funds in the United States, Italy, Japan, and India.
Cold fusion research continues today in a few specific venues, but the wider scientific community has generally marginalized the research being done and researchers have had difficulty publishing in mainstream journals. The remaining researchers often term their field Low Energy Nuclear Reactions (LENR), Chemically Assisted Nuclear Reactions (CANR), Lattice Assisted Nuclear Reactions (LANR), Condensed Matter Nuclear Science (CMNS) or Lattice Enabled Nuclear Reactions; one of the reasons being to avoid the negative connotations associated with "cold fusion". The new names avoid making bold implications, like implying that fusion is actually occurring.
The researchers who continue acknowledge that the flaws in the original announcement are the main cause of the subject's marginalization, and they complain of a chronic lack of funding and of having no possibility of getting their work published in the highest-impact journals. University researchers are often unwilling to investigate cold fusion because they would be ridiculed by their colleagues and their professional careers would be at risk. In 1994, David Goodstein, a professor of physics at Caltech, advocated for increased attention from mainstream researchers and described cold fusion as:
A pariah field, cast out by the scientific establishment. Between cold fusion and respectable science there is virtually no communication at all. Cold fusion papers are almost never published in refereed scientific journals, with the result that those works don't receive the normal critical scrutiny that science requires. On the other hand, because the Cold-Fusioners see themselves as a community under siege, there is little internal criticism. Experiments and theories tend to be accepted at face value, for fear of providing even more fuel for external critics, if anyone outside the group was bothering to listen. In these circumstances, crackpots flourish, making matters worse for those who believe that there is serious science going on here.
United States Navy researchers at the Space and Naval Warfare Systems Center (SPAWAR) in San Diego have been studying cold fusion since 1989. In 2002 they released a two-volume report, "Thermal and nuclear aspects of the Pd/D2O system," with a plea for funding. This and other published papers prompted a 2004 Department of Energy (DOE) review.
In August 2003, the U.S. Secretary of Energy, Spencer Abraham, ordered the DOE to organize a second review of the field. This was prompted by an April 2003 letter sent by MIT's Peter L. Hagelstein and by the publication of many new papers, including those of the Italian ENEA and other researchers at the 2003 International Cold Fusion Conference, and a two-volume book by U.S. SPAWAR in 2002. Cold fusion researchers were asked to present a review document of all the evidence since the 1989 review. The report was released in 2004. The reviewers were "split approximately evenly" on whether the experiments had produced energy in the form of heat, but "most reviewers, even those who accepted the evidence for excess power production, 'stated that the effects are not repeatable, the magnitude of the effect has not increased in over a decade of work, and that many of the reported experiments were not well documented.'" In summary, reviewers found that cold fusion evidence was still not convincing 15 years later, and they didn't recommend a federal research program. They only recommended that agencies consider funding individual well-thought-out studies in specific areas where research "could be helpful in resolving some of the controversies in the field". The report summarized its conclusions thus:
While significant progress has been made in the sophistication of calorimeters since the review of this subject in 1989, the conclusions reached by the reviewers today are similar to those found in the 1989 review. The current reviewers identified a number of basic science research areas that could be helpful in resolving some of the controversies in the field, two of which were: 1) material science aspects of deuterated metals using modern characterization techniques, and 2) the study of particles reportedly emitted from deuterated foils using state-of-the-art apparatus and methods. The reviewers believed that this field would benefit from the peer-review processes associated with proposal submission to agencies and paper submission to archival journals.— Report of the Review of Low Energy Nuclear Reactions, US Department of Energy, December 2004
Cold fusion researchers placed a "rosier spin" on the report, noting that they were finally being treated like normal scientists, and that the report had increased interest in the field and caused "a huge upswing in interest in funding cold fusion research." However, in a 2009 BBC article on an American Chemical Society's meeting on cold fusion, particle physicist Frank Close was quoted stating that the problems that plagued the original cold fusion announcement were still happening: results from studies are still not being independently verified and inexplicable phenomena encountered are being labelled as "cold fusion" even if they are not, in order to attract the attention of journalists.
In February 2012, millionaire Sidney Kimmel, convinced that cold fusion was worth investing in by a 19 April 2009 interview with physicist Robert Duncan on the US news-show 60 Minutes, made a grant of $5.5 million to the University of Missouri to establish the Sidney Kimmel Institute for Nuclear Renaissance (SKINR). The grant was intended to support research into the interactions of hydrogen with palladium, nickel or platinum under extreme conditions. In March 2013 Graham K. Hubler, a nuclear physicist who worked for the Naval Research Laboratory for 40 years, was named director. One of the SKINR projects is to replicate a 1991 experiment in which, according to Mark Prelas, a professor associated with the project, bursts of millions of neutrons a second were recorded; that work was stopped because "his research account had been frozen". He claims that the new experiment has already seen "neutron emissions at similar levels to the 1991 observation".
In May 2016, the United States House Committee on Armed Services, in its report on the 2017 National Defense Authorization Act, directed the Secretary of Defense to "provide a briefing on the military utility of recent U.S. industrial base LENR advancements to the House Committee on Armed Services by September 22, 2016."
Since the Fleischmann and Pons announcement, the Italian National agency for new technologies, energy and sustainable economic development (ENEA) has funded Franco Scaramuzzi's research into whether excess heat can be measured from metals loaded with deuterium gas. Such research is distributed across ENEA departments, CNR laboratories, INFN, universities and industrial laboratories in Italy, where the group continues to try to achieve reliable reproducibility (i.e. getting the phenomenon to happen in every cell and within a certain time frame). In 2006–2007, the ENEA started a research program which claimed to have found excess power of up to 500 percent, and in 2009, ENEA hosted the 15th cold fusion conference.
Between 1992 and 1997, Japan's Ministry of International Trade and Industry sponsored a "New Hydrogen Energy (NHE)" program of US$20 million to research cold fusion. Announcing the end of the program in 1997, the director and one-time proponent of cold fusion research Hideo Ikegami stated "We couldn't achieve what was first claimed in terms of cold fusion. (...) We can't find any reason to propose more money for the coming year or for the future." In 1999 the Japan C-F Research Society was established to promote the independent research into cold fusion that continued in Japan. The society holds annual meetings. Perhaps the most famous Japanese cold fusion researcher is Yoshiaki Arata, from Osaka University, who claimed in a demonstration to produce excess heat when deuterium gas was introduced into a cell containing a mixture of palladium and zirconium oxide,[text 5] a claim supported by fellow Japanese researcher Akira Kitamura of Kobe University and McKubre at SRI.
In the 1990s India stopped its research in cold fusion at the Bhabha Atomic Research Centre because of the lack of consensus among mainstream scientists and the US denunciation of the research. Yet, in 2008, the National Institute of Advanced Studies recommended that the Indian government revive this research. Projects were commenced at the Chennai's Indian Institute of Technology, the Bhabha Atomic Research Centre and the Indira Gandhi Centre for Atomic Research. However, there is still skepticism among scientists and, for all practical purposes, research has stalled since the 1990s. A special section in the Indian multidisciplinary journal Current Science published 33 cold fusion papers in 2015 by major cold fusion researchers including several Indian researchers.
A cold fusion experiment usually includes:
- a metal, such as palladium or nickel, in bulk, thin films or powder; and
- deuterium, hydrogen, or both, in the form of water, gas or plasma.
Electrolysis cells can be either open cell or closed cell. In open cell systems, the electrolysis products, which are gaseous, are allowed to leave the cell. In closed cell experiments, the products are captured, for example by catalytically recombining the products in a separate part of the experimental system. These experiments generally strive for a steady state condition, with the electrolyte being replaced periodically. There are also "heat-after-death" experiments, where the evolution of heat is monitored after the electric current is turned off.
The most basic setup of a cold fusion cell consists of two electrodes, typically including a palladium cathode, submerged in heavy water containing a dissolved electrolyte. The electrodes are then connected to a power source to transmit electricity from one electrode to the other through the solution. Even when anomalous heat is reported, it can take weeks for it to begin to appear—this is known as the "loading time," the time required to saturate the palladium electrode with hydrogen (see "Loading ratio" section).
Fleischmann and Pons' early findings regarding helium, neutron radiation and tritium were never replicated satisfactorily, and the measured levels were too low for the claimed heat production and inconsistent with each other. Neutron radiation has been reported in cold fusion experiments at very low levels using different kinds of detectors, but levels were too low, close to background, and found too infrequently to provide useful information about possible nuclear processes.
Excess heat and energy production
An excess heat observation is based on an energy balance. Various sources of energy input and output are continuously measured. Under normal conditions, the energy input can be matched to the energy output to within experimental error. In experiments such as those run by Fleischmann and Pons, an electrolysis cell operating steadily at one temperature transitions to operating at a higher temperature with no increase in applied current. If the higher temperatures were real, and not an experimental artifact, the energy balance would show an unaccounted term. In the Fleischmann and Pons experiments, the rate of inferred excess heat generation was in the range of 10–20% of total input, though this could not be reliably replicated by most researchers. Researcher Nathan Lewis found that the excess heat in Fleischmann and Pons' original paper was not directly measured, but was estimated from measurements that themselves showed no excess heat.
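The arithmetic behind such a claim is straightforward; the controversy is over the inputs, not the subtraction. As a minimal sketch (all numbers below are hypothetical and chosen only to illustrate the bookkeeping), with electrical input computed as current times voltage and output taken from the calorimeter:

```python
# Toy energy-balance bookkeeping for an electrolysis cell. Every number
# here is hypothetical; the point is the accounting, not the values.
samples = [
    # (current in A, voltage in V, calorimeter output in W)
    (0.50, 4.00, 1.99),   # balanced within measurement error
    (0.50, 4.01, 2.01),   # balanced within measurement error
    (0.50, 4.00, 2.31),   # the kind of jump reported as "excess heat"
]

for amps, volts, p_out in samples:
    p_in = amps * volts          # electrical input power, W
    excess = p_out - p_in        # the unaccounted term, if any
    print(f"in={p_in:.2f} W  out={p_out:.2f} W  excess={excess:+.2f} W")
```

Whether the output term is measured correctly, and whether the input term captures all chemical energy flows, is exactly what the calorimetry critiques in the "Misinterpretation of data" section below dispute.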
Unable to produce excess heat or neutrons, and with positive experiments being plagued by errors and giving disparate results, most researchers declared that heat production was not a real effect and ceased working on the experiments. In 1993, after their original report, Fleischmann reported "heat-after-death" experiments—where excess heat was measured after the electric current supplied to the electrolytic cell was turned off. This type of report has also become part of subsequent cold fusion claims.
Helium, heavy elements, and neutrons
Known instances of nuclear reactions, aside from producing energy, also produce nucleons and particles on readily observable ballistic trajectories. In support of their claim that nuclear reactions took place in their electrolytic cells, Fleischmann and Pons reported a neutron flux of 4,000 neutrons per second, as well as detection of tritium. The classical branching ratio for previously known fusion reactions that produce tritium would predict, with 1 watt of power, the production of 10¹² neutrons per second, levels that would have been fatal to the researchers. In 2009, Mosier-Boss et al. reported what they called the first scientific report of highly energetic neutrons, using CR-39 plastic radiation detectors, but the claims cannot be validated without a quantitative analysis of neutrons.
Several medium and heavy elements like calcium, titanium, chromium, manganese, iron, cobalt, copper and zinc have been reported as detected by several researchers, like Tadahiko Mizuno or George Miley. The report presented to the United States Department of Energy (DOE) in 2004 indicated that deuterium-loaded foils could be used to detect fusion reaction products and, although the reviewers found the evidence presented to them as inconclusive, they indicated that those experiments did not use state-of-the-art techniques.
In response to doubts about the lack of nuclear products, cold fusion researchers have tried to capture and measure nuclear products correlated with excess heat. Considerable attention has been given to measuring 4He production. However, the reported levels are very near to background, so contamination by trace amounts of helium normally present in the air cannot be ruled out. In the report presented to the DOE in 2004, the reviewers' opinion was divided on the evidence for 4He; with the most negative reviews concluding that although the amounts detected were above background levels, they were very close to them and therefore could be caused by contamination from air.
One of the main criticisms of cold fusion was that deuteron-deuteron fusion into helium was expected to result in the production of gamma rays, which were not observed, either in the original experiments or in subsequent cold fusion experiments. Cold fusion researchers have since claimed to find X-rays, helium, neutrons and nuclear transmutations. Some researchers also claim to have found them using only light water and nickel cathodes. The 2004 DOE panel expressed concerns about the poor quality of the theoretical framework cold fusion proponents presented to account for the lack of gamma rays.
Researchers in the field do not agree on a theory for cold fusion. One proposal considers that hydrogen and its isotopes can be absorbed in certain solids, including palladium hydride, at high densities. This creates a high partial pressure, reducing the average separation of hydrogen isotopes. However, the resulting reduction in separation falls short, by roughly a factor of ten, of what would be needed to produce the fusion rates claimed in the original experiment. It was also proposed that a higher density of hydrogen inside the palladium and a lower potential barrier could raise the possibility of fusion at lower temperatures than expected from a simple application of Coulomb's law. Electron screening of the positive hydrogen nuclei by the negative electrons in the palladium lattice was suggested to the 2004 DOE commission, but the panel found the theoretical explanations not convincing and inconsistent with current physics theories.
Criticism of cold fusion claims generally takes one of two forms: either pointing out the theoretical implausibility that fusion reactions have occurred in electrolysis set-ups, or criticizing the excess heat measurements as spurious, erroneous, or due to poor methodology or controls. There are a couple of reasons why known fusion reactions are an unlikely explanation for the excess heat and associated cold fusion claims.[text 6]
Because nuclei are all positively charged, they strongly repel one another. Normally, in the absence of a catalyst such as a muon, very high kinetic energies are required to overcome this electrostatic repulsion. Extrapolating from known fusion rates, the rate for uncatalyzed fusion at room-temperature energy would be 50 orders of magnitude lower than needed to account for the reported excess heat. In muon-catalyzed fusion there are more fusions because the presence of the muon causes deuterium nuclei to be 207 times closer than in ordinary deuterium gas. But deuterium nuclei inside a palladium lattice are further apart than in deuterium gas, and there should be fewer fusion reactions, not more.
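The scale of the mismatch can be illustrated with a back-of-envelope estimate (a sketch, not a rate calculation: quantum tunnelling lets fusion proceed below the barrier, but the tunnelling probability falls off steeply with separation, so the orders-of-magnitude gap survives):

```python
# Rough Coulomb-barrier estimate for two deuterons; illustrative only.
e = 1.602e-19    # elementary charge, C
k = 8.988e9      # Coulomb constant, N*m^2/C^2
r = 2.0e-15      # ~2 fm, an assumed nuclear-contact separation, m

barrier_joules = k * e**2 / r
barrier_mev = barrier_joules / e / 1e6   # J -> eV -> MeV
thermal_ev = 0.025                       # ~kT at room temperature, eV

print(f"Coulomb barrier ~ {barrier_mev:.2f} MeV")              # ~0.72 MeV
print(f"barrier / kT ~ {barrier_mev * 1e6 / thermal_ev:.1e}")  # ~3e7
```

Room-temperature collisions thus sit some seven orders of magnitude below the barrier in energy, which is why the corresponding uncatalyzed fusion rate is negligible.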
Paneth and Peters in the 1920s already knew that palladium can absorb up to 900 times its own volume of hydrogen gas, storing it at several thousand times atmospheric pressure. This led them to believe that they could increase the nuclear fusion rate by simply loading palladium rods with hydrogen gas. Tandberg then tried the same experiment but used electrolysis to make palladium absorb more deuterium and force the deuterium further together inside the rods, thus anticipating the main elements of Fleischmann and Pons' experiment. They all hoped that pairs of hydrogen nuclei would fuse together to form helium, which at the time was needed in Germany to fill zeppelins, but no evidence of helium or of an increased fusion rate was ever found.
This was also the belief of geologist Palmer, who convinced Steven Jones that the helium-3 occurring naturally in Earth perhaps came from fusion involving hydrogen isotopes inside catalysts like nickel and palladium. This led their team in 1986 to independently make the same experimental setup as Fleischmann and Pons (a palladium cathode submerged in heavy water, absorbing deuterium via electrolysis). Fleischmann and Pons had much the same belief, but they calculated the pressure to be 10²⁷ atmospheres, whereas cold fusion experiments achieve a loading ratio of only one to one, which corresponds to between 10,000 and 20,000 atmospheres.[text 7] John R. Huizenga says they had misinterpreted the Nernst equation, leading them to believe that there was enough pressure to bring deuterons so close to each other that there would be spontaneous fusions.
Lack of expected reaction products
Conventional deuteron fusion is a two-step process,[text 6] in which an unstable high energy intermediary is formed:
- D + D → 4He*
Experiments have observed only three decay pathways for this excited-state nucleus, with the branching ratio showing the probability that any given intermediate follows a particular pathway.[text 6] The products formed via these decay pathways are:
- 4He* → n + 3He + 3.3 MeV (ratio=50%)
- 4He* → p + 3H + 4.0 MeV (ratio=50%)
- 4He* → 4He + γ + 24 MeV (ratio≈10⁻⁶)
Only about one in one million of the intermediaries decay along the third pathway, making its products comparatively rare. This result is consistent with the predictions of the Bohr model.[text 8] If one watt (1 eV = 1.602 × 10⁻¹⁹ joule) of nuclear power were produced from deuteron fusion consistent with known branching ratios, the resulting neutron and tritium (3H) production would be easily measured. Some researchers reported detecting 4He but without the expected neutron or tritium production; such a result would require branching ratios strongly favouring the third pathway, with the actual rates of the first two pathways lower by at least five orders of magnitude than observations from other experiments, directly contradicting both theoretically predicted and observed branching probabilities.[text 6] Those reports of 4He production did not include detection of gamma rays, which would require the third pathway to have been changed somehow so that gamma rays are no longer emitted.[text 6]
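The implied neutron rate is a consistency check one can reproduce directly from the branching ratios and energies listed above (a sketch of the arithmetic, not a measurement):

```python
# Implied neutron rate for 1 W of conventional D+D fusion, using the
# two dominant (~50%) branches and their listed energy releases.
EV = 1.602e-19                 # joules per electronvolt
e_mean_mev = (3.3 + 4.0) / 2   # mean energy per fusion event, MeV
e_mean_joules = e_mean_mev * 1e6 * EV

fusions_per_s = 1.0 / e_mean_joules     # events needed to sustain 1 W
neutrons_per_s = 0.5 * fusions_per_s    # the n + 3He branch is ~50%

print(f"{neutrons_per_s:.1e} neutrons/s")  # ~8.6e11, i.e. of order 10^12
```

This is the origin of the ~10¹² neutrons per second figure cited earlier: a rate that would have been trivially detectable, and dangerous, in the reported experiments.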
The known rate of the decay process, together with the inter-atomic spacing in a metallic crystal, makes heat transfer of the 24 MeV excess energy into the host metal lattice prior to the intermediary's decay inexplicable in terms of conventional understandings of momentum and energy transfer; even then, there would be measurable levels of radiation. Also, experiments indicate that the branching ratios of deuterium fusion remain constant at different energies. In general, pressure and chemical environment cause only small changes to fusion ratios. An early explanation invoked the Oppenheimer–Phillips process at low energies, but its magnitude was too small to explain the altered ratios.
Setup of experiments
Cold fusion setups utilize an input power source (to ostensibly provide activation energy), a platinum group electrode, a deuterium or hydrogen source, a calorimeter, and, at times, detectors to look for byproducts such as helium or neutrons. Critics have variously taken issue with each of these aspects and have asserted that there has not yet been a consistent reproduction of claimed cold fusion results in either energy output or byproducts. Some cold fusion researchers who claim that they can consistently measure an excess heat effect have argued that the apparent lack of reproducibility might be attributable to a lack of quality control in the electrode metal or the amount of hydrogen or deuterium loaded in the system. Critics have further taken issue with what they describe as mistakes or errors of interpretation that cold fusion researchers have made in calorimetry analyses and energy budgets.
In 1989, after Fleischmann and Pons had made their claims, many research groups tried to reproduce the Fleischmann-Pons experiment, without success. A few other research groups, however, reported successful reproductions of cold fusion during this time. In July 1989, an Indian group from the Bhabha Atomic Research Centre (P. K. Iyengar and M. Srinivasan) and in October 1989, John Bockris' group from Texas A&M University reported on the creation of tritium. In December 1990, professor Richard Oriani of the University of Minnesota reported excess heat.
Groups that did report successes found that some of their cells were producing the effect, while other cells that were built exactly the same and used the same materials were not producing the effect. Researchers that continued to work on the topic have claimed that over the years many successful replications have been made, but still have problems getting reliable replications. Reproducibility is one of the main principles of the scientific method, and its lack led most physicists to believe that the few positive reports could be attributed to experimental error.[text 9] The DOE 2004 report said among its conclusions and recommendations:
"Ordinarily, new scientific discoveries are claimed to be consistent and reproducible; as a result, if the experiments are not complicated, the discovery can usually be confirmed or disproved in a few months. The claims of cold fusion, however, are unusual in that even the strongest proponents of cold fusion assert that the experiments, for unknown reasons, are not consistent and reproducible at the present time. (...) Internal inconsistencies and lack of predictability and reproducibility remain serious concerns. (...) The Panel recommends that the cold fusion research efforts in the area of heat production focus primarily on confirming or disproving reports of excess heat."
Cold fusion researchers (McKubre since 1994, ENEA in 2011) have speculated that a cell that is loaded with a deuterium/palladium ratio lower than 100% (or 1:1) will not produce excess heat. Since most of the negative replications from 1989–1990 did not report their ratios, this has been proposed as an explanation for failed replications. This loading ratio is hard to obtain, and some batches of palladium never reach it because the pressure causes cracks in the palladium, allowing the deuterium to escape. Fleischmann and Pons never disclosed the deuterium/palladium ratio achieved in their cells, there are no longer any batches of the palladium used by Fleischmann and Pons (because the supplier now uses a different manufacturing process), and researchers still have problems finding batches of palladium that achieve heat production reliably.
Misinterpretation of data
Some research groups initially reported that they had replicated the Fleischmann and Pons results but later retracted their reports and offered an alternative explanation for their original positive results. A group at Georgia Tech found problems with their neutron detector, and Texas A&M discovered bad wiring in their thermometers. These retractions, combined with negative results from some famous laboratories, led most scientists to conclude, as early as 1989, that no positive result should be attributed to cold fusion.
The calculation of excess heat in electrochemical cells involves certain assumptions. Errors in these assumptions have been offered as non-nuclear explanations for excess heat.
One assumption made by Fleischmann and Pons is that the efficiency of electrolysis is nearly 100%, meaning that nearly all the electricity applied to the cell results in the electrolysis of water, with negligible resistive heating and with substantially all of the electrolysis products leaving the cell unchanged. This assumption gives the amount of energy expended converting liquid D2O into gaseous D2 and O2. The efficiency of electrolysis is less than one if hydrogen and oxygen recombine to a significant extent within the calorimeter. Several researchers have described potential mechanisms by which this process could occur and thereby account for excess heat in electrolysis experiments.
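A minimal open-cell correction shows how recombination can masquerade as excess heat. In the sketch below, the cell current, voltage, and calorimeter reading are hypothetical, and the ~1.54 V thermoneutral voltage for heavy-water electrolysis is an assumed textbook value, not a figure from this article:

```python
# Open-cell calorimetry sketch: if the electrolysis gases leave the
# cell, part of the electrical input leaves with them as chemical
# enthalpy rather than appearing as heat.
I = 0.50      # cell current, A (hypothetical)
V = 4.00      # cell voltage, V (hypothetical)
E_TN = 1.54   # assumed thermoneutral voltage for D2O electrolysis, V

p_heat_expected = I * (V - E_TN)   # heat expected with no recombination
p_heat_measured = 1.35             # calorimeter reading, W (hypothetical)

print(f"apparent excess: {p_heat_measured - p_heat_expected:.2f} W")
# If D2 and O2 partially recombine inside the cell, the I*E_TN term is
# not actually carried away by the gases, and this bookkeeping would
# misreport ordinary chemistry as "excess heat".
```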
Another assumption is that heat loss from the calorimeter maintains the same relationship with measured temperature as found when calibrating the calorimeter. This assumption ceases to be accurate if the temperature distribution within the cell becomes significantly altered from the condition under which calibration measurements were made. This can happen, for example, if fluid circulation within the cell becomes significantly altered. Recombination of hydrogen and oxygen within the calorimeter would also alter the heat distribution and invalidate the calibration.
The ISI identified cold fusion as the scientific topic with the largest number of published papers in 1989, of all scientific disciplines. The Nobel Laureate Julian Schwinger declared himself a supporter of cold fusion in the fall of 1989, after much of the response to the initial reports had turned negative. He tried to publish his theoretical paper "Cold Fusion: A Hypothesis" in Physical Review Letters, but the peer reviewers rejected it so harshly that he felt deeply insulted, and he resigned from the American Physical Society (publisher of PRL) in protest.
The number of papers sharply declined after 1990 because of two simultaneous phenomena: scientists abandoning the field and journal editors declining to review new papers, and cold fusion fell off the ISI charts. Researchers who got negative results abandoned the field, while others kept publishing. A 1993 paper in Physics Letters A was the last paper published by Fleischmann, and "one of the last reports [by Fleischmann] to be formally challenged on technical grounds by a cold fusion skeptic".[text 10]
The journal Fusion Technology (FT) established a permanent feature in 1990 for cold fusion papers, publishing over a dozen papers per year and giving cold fusion researchers a mainstream outlet. When editor-in-chief George H. Miley retired in 2001, the journal stopped accepting new cold fusion papers. This has been cited as an example of the importance of sympathetic influential individuals to the publication of cold fusion papers in certain journals.
The decline of publications in cold fusion has been described as a "failed information epidemic".[text 11] The sudden surge of supporters until roughly 50% of scientists support the theory, followed by a decline until there is only a very small number of supporters, has been described as a characteristic of pathological science.[text 12][notes 4] The lack of a shared set of unifying concepts and techniques has prevented the creation of a dense network of collaboration in the field; researchers perform efforts in their own and in disparate directions, making the transition to "normal" science more difficult.
Cold fusion reports continued to be published in a small cluster of specialized journals like Journal of Electroanalytical Chemistry and Il Nuovo Cimento. Some papers also appeared in Journal of Physical Chemistry, Physics Letters A, International Journal of Hydrogen Energy, and a number of Japanese and Russian journals of physics, chemistry, and engineering. Since 2005, Naturwissenschaften has published cold fusion papers; in 2009, the journal named a cold fusion researcher to its editorial board. In 2015 the Indian multidisciplinary journal Current Science published a special section devoted entirely to cold fusion related papers.
In the 1990s, the groups that continued to research cold fusion and their supporters established (non-peer-reviewed) periodicals such as Fusion Facts, Cold Fusion Magazine, Infinite Energy Magazine and New Energy Times to cover developments in cold fusion and other fringe claims in energy production that were ignored in other venues. The internet has also become a major means of communication and self-publication for CF researchers.
Cold fusion researchers were for many years unable to get papers accepted at scientific meetings, prompting the creation of their own conferences. The first International Conference on Cold Fusion (ICCF) was held in 1990, and has met every 12 to 18 months since. Attendees at some of the early conferences were described as offering no criticism to papers and presentations for fear of giving ammunition to external critics; thus allowing the proliferation of crackpots and hampering the conduct of serious science. Critics and skeptics stopped attending these conferences, with the notable exception of Douglas Morrison, who died in 2001. With the founding in 2004 of the International Society for Condensed Matter Nuclear Science (ISCMNS), the conference was renamed the International Conference on Condensed Matter Nuclear Science (the reasons are explained in the subsequent research section), but reverted to the old name in 2008. Cold fusion research is often referenced by proponents as "low-energy nuclear reactions", or LENR, but according to sociologist Bart Simon the "cold fusion" label continues to serve a social function in creating a collective identity for the field.
Since 2006, the American Physical Society (APS) has included cold fusion sessions at their semiannual meetings, clarifying that this does not imply a softening of skepticism. Since 2007, the American Chemical Society (ACS) meetings also include "invited symposium(s)" on cold fusion. An ACS program chair said that without a proper forum the matter would never be discussed and, "with the world facing an energy crisis, it is worth exploring all possibilities."
On 22–25 March 2009, the American Chemical Society meeting included a four-day symposium in conjunction with the 20th anniversary of the announcement of cold fusion. Researchers working at the U.S. Navy's Space and Naval Warfare Systems Center (SPAWAR) reported detection of energetic neutrons using a heavy water electrolysis set-up and a CR-39 detector, a result previously published in Naturwissenschaften. The authors claim that these neutrons are indicative of nuclear reactions; without quantitative analysis of the number, energy, and timing of the neutrons and exclusion of other potential sources, this interpretation is unlikely to find acceptance by the wider scientific community.
Although details have not surfaced, it appears that the University of Utah forced the 23 March 1989 Fleischmann and Pons announcement to establish priority over the discovery and its patents before the joint publication with Jones. The Massachusetts Institute of Technology (MIT) announced on 12 April 1989 that it had applied for its own patents based on theoretical work of one of its researchers, Peter L. Hagelstein, who had been sending papers to journals from the 5 to 12 April. On 2 December 1993 the University of Utah licensed all its cold fusion patents to ENECO, a new company created to profit from cold fusion discoveries, and in March 1998 it said that it would no longer defend its patents.
The U.S. Patent and Trademark Office (USPTO) now rejects patents claiming cold fusion. Esther Kepplinger, the deputy commissioner of patents in 2004, said that this was done using the same argument as with perpetual motion machines: that they do not work. Patent applications are required to show that the invention is "useful", and this utility is dependent on the invention's ability to function. In general USPTO rejections on the sole grounds of the invention's being "inoperative" are rare, since such rejections need to demonstrate "proof of total incapacity", and cases where those rejections are upheld in a Federal Court are even rarer: nevertheless, in 2000, a rejection of a cold fusion patent was appealed in a Federal Court and it was upheld, in part on the grounds that the inventor was unable to establish the utility of the invention.[notes 5]
A U.S. patent might still be granted when given a different name to disassociate it from cold fusion, though this strategy has had little success in the US: the same claims that need to be patented can identify it with cold fusion, and most of these patents cannot avoid mentioning Fleischmann and Pons' research due to legal constraints, thus alerting the patent reviewer that it is a cold-fusion-related patent. David Voss said in 1999 that some patents that closely resemble cold fusion processes, and that use materials used in cold fusion, have been granted by the USPTO. The inventor of three such patents had his applications initially rejected when they were reviewed by experts in nuclear science; but then he rewrote the patents to focus more on the electrochemical parts so they would be reviewed instead by experts in electrochemistry, who approved them. When asked about the resemblance to cold fusion, the patent holder said that it used nuclear processes involving "new nuclear physics" unrelated to cold fusion. In 2004 Melvin Miles was granted a patent for a cold fusion device, and in 2007 he described his efforts to remove all instances of "cold fusion" from the patent description to avoid having it rejected outright.
A patent only legally prevents others from using or benefiting from one's invention. However, the general public perceives a patent as a stamp of approval, and a holder of three cold fusion patents said the patents were very valuable and had helped in getting investments.
In Undead Science, sociologist Bart Simon gives some examples of cold fusion in popular culture, saying that some scientists use cold fusion as a synonym for outrageous claims made with no supporting proof, and courses of ethics in science give it as an example of pathological science. It has appeared as a joke in Murphy Brown and The Simpsons. The name was adopted for a software product (Adobe ColdFusion) and a brand of protein bars (Cold Fusion Foods). It has also appeared in advertising as a synonym for impossible science, for example in a 1995 advertisement for Pepsi Max.
The plot of The Saint, a 1997 action-adventure film, parallels the story of Fleischmann and Pons, although with a different ending. The film might have affected the public perception of cold fusion, pushing it further into the science fiction realm.
"Final Exam", the 16th episode of season 4 of The Outer Limits, depicts a student named Todtman who has invented a cold fusion weapon, and attempts to use it as a tool for revenge on people who have wronged him over the years. Despite the secret being lost with his death at the end of the episode, it is implied that another student elsewhere is on a similar track, and may well repeat Todtman's efforts.
In the DC's Legends of Tomorrow episode "No Country for Old Dads," Ray Palmer theorizes that cold fusion could repair the shattered Fire Totem, if it were not merely theoretical. Damien Dahrk reveals that he assassinated a scientist in 1962 East Germany who developed a formula for cold fusion. Ray and Dahrk's daughter Nora time travel from 2018 to 1962 in an attempt to rescue the scientist from the younger version of Dahrk and/or retrieve the formula.
- On 26 January 1990, the journal Nature rejected Oriani's paper, citing the lack of nuclear ash and the general difficulty that others had in replication (Beaudette 2002, p. 183). It was later published in Fusion Technology (Oriani et al. 1990, pp. 652–662).
- Taubes 1993, pp. 228–229, 255 "(...) there are indeed chemical differences between heavy and light water, especially once lithium is added, as it was in the Pons-Fleischmann electrolyte. This had been in the scientific literature since 1958. It seems that the electrical conductivity of heavy water with lithium is considerably less than that of light water with lithium. And this difference is more than enough to account for the heavy water cell running hotter (...) (quoting a member of the A&M group) 'they're making the same mistake we did'"
- Miskelly, GM; Heben MJ; Kumar A; Penner RM; Sailor MJ; Lewis NL (1989), "Analysis of the Published Calorimetric Evidence for Electrochemical Fusion of Deuterium in Palladium", Science, 246 (4931): 793–796, Bibcode:1989Sci...246..793M, doi:10.1126/science.246.4931.793, PMID 17748706
- Aberdam, D; Avenier M; Bagieu G; Bouchez J; Cavaignac JF; Collot J; et al. (1990), "Limits on neutron emission following deuterium absorption into palladium and titanium", Phys. Rev. Lett., 65 (10): 1196–1199, Bibcode:1990PhRvL..65.1196A, doi:10.1103/PhysRevLett.65.1196, PMID 10042199
- Price, PB; Barwick, SW; Williams, WT; Porter, JD (1989), "Search for energetic-charged-particle emission from deuterated Ti and Pd foils", Phys. Rev. Lett., 63 (18): 1926, Bibcode:1989PhRvL..63.1926P, doi:10.1103/PhysRevLett.63.1926, PMID 10040716
- Roberts, DA; Becchetti FD; Ben-Jacob E; Garik P; Musser J; Orr B; Tarlé G; et al. (1990), "Energy and flux limits of cold-fusion neutrons using a deuterated liquid scintillator", Phys. Rev. C, 42 (5): R1809–R1812, Bibcode:1990PhRvC..42.1809R, doi:10.1103/PhysRevC.42.R1809
- Lewis et al. 1989
- Sixth criterion of Langmuir: "During the course of the controversy the ratio of supporters to critics rises to near 50% and then falls gradually to oblivion. (Langmuir, 1989, pp. 43–44)", quoted in Simon p. 104, paraphrased in Ball p. 308. It has also been applied to the number of published results, in Huizenga 1993, pp. xi, 207–209 "The ratio of the worldwide positive results on cold fusion to negative results peaked at approximately 50% (...) qualitatively in agreement with Langmuir's sixth criteria."
- Swartz, 232 F.3d 862, 56 USPQ2d 1703 (Fed. Cir. 2000). Decision archived 12 March 2008 at the Wayback Machine. Sources:
- "2164.07 Relationship of Enablement Requirement to Utility Requirement of 35 U.S.C. 101 – 2100 Patentability. B. Burden on the Examiner. Examiner Has Initial Burden To Show That One of Ordinary Skill in the Art Would Reasonably Doubt the Asserted Utility", U.S. Patent and Trademark Office, archived from the original on 12 September 2012 Manual of Patent Examining Procedure, in reference to 35 U.S.C. § 101
- Alan L. Durham (2004), "Patent law essentials: a concise guide" (2, illustrated ed.), Greenwood Publishing Group: 72 (footnote 30), ISBN 9780275982058
- Jeffrey G. Sheldon (1992), "How to write a patent application" (illustrated ed.), Practising Law Institute, ISBN 0-87224-044-4
- "60 Minutes: Once Considered Junk Science, Cold Fusion Gets A Second Look By Researchers", CBS, 17 April 2009, archived from the original on 12 February 2012
- Fleischmann & Pons 1989, p. 301 ("It is inconceivable that this [amount of heat] could be due to anything but nuclear processes... We realise that the results reported here raise more questions than they provide answers...")
- Voss 1999
- Browne 1989, para. 1
- Browne 1989, Close 1992, Huizenga 1993, Taubes 1993
- Browne 1989
- Taubes 1993, pp. 262, 265–266, 269–270, 273, 285, 289, 293, 313, 326, 340–344, 364, 366, 404–406, Goodstein 1994, Van Noorden 2007, Kean 2010
- Chang, Kenneth (25 March 2004), "US will give cold fusion a second look", The New York Times, retrieved 8 February 2009
- Ouellette, Jennifer (23 December 2011), "Could Starships Use Cold Fusion Propulsion?", Discovery News, archived from the original on 7 January 2012
- US DOE 2004, Choi 2005, Feder 2005
- Broad 1989b, Goodstein 1994, Platt 1998, Voss 1999, Beaudette 2002, Feder 2005, Adam 2005 "Advocates insist that there is just too much evidence of unusual effects in the thousands of experiments since Pons and Fleischmann to be ignored", Kruglinski 2006, Van Noorden 2007, Alfred 2009. Daley 2004 calculates between 100 and 200 researchers, with damage to their careers.
- "'Cold fusion' rebirth? New evidence for existence of controversial energy source", American Chemical Society, archived from the original on 21 December 2014
- Hagelstein et al. 2004
- "'ICMNS FAQ'". International Society of Condensed Matter Nuclear Science. Archived from the original on 3 November 2015.
- Biberian, Jean-Paul (2007), "Condensed Matter Nuclear Science (Cold Fusion): An Update" (PDF), International Journal of Nuclear Energy Science and Technology, 3 (1): 31–42, doi:10.1504/IJNEST.2007.012439, archived (PDF) from the original on 30 May 2008
- Goodstein 1994,Labinger & Weininger 2005, p. 1919
- US DOE 1989, p. 7
- Graham, Thomas (1 January 1866). "On the Absorption and Dialytic Separation of Gases by Colloid Septa". Philosophical Transactions of the Royal Society of London. 156: 399–439. doi:10.1098/rstl.1866.0018. ISSN 0261-0523. Archived from the original on 31 December 2015.
- Paneth & Peters 1926
- Kall fusion redan på 1920-talet [Cold fusion as early as the 1920s] Archived 3 March 2016 at the Wayback Machine, Ny Teknik, Kaianders Sempler, 9 February 2011
- Pool 1989, Wilner 1989, Close 1992, pp. 19–21 Huizenga 1993, pp. 13–14, 271, Taubes 1993, p. 214
- Huizenga 1993, pp. 13–14
- Laurence 1956
- Kowalski 2004, II.A2
- C. DeW. Van Siclen and S. E. Jones, "Piezonuclear fusion in isotopic hydrogen molecules," J. Phys. G: Nucl. Phys. 12: 213–221 (March 1986).
- Fleischmann & Pons 1989, p. 301
- Fleischmann et al. 1990
- Crease & Samios 1989, p. V1
- Lewenstein 1994, pp. 8–9
- Shamoo & Resnik 2003, p. 86, Simon 2002, pp. 28–36
- University of Utah, "'Simple experiment' results in sustained n-fusion at room temperature for first time", archived from the original on 14 October 2011, retrieved 28 July 2011
- For example, in 1989, the Economist editorialized that the cold fusion "affair" was "exactly what science should be about." Footlick, JK (1997), "Truth and Consequences: how colleges and universities meet public crises", Phoenix: Oryx Press: 51, ISBN 978-0-89774-970-1 as cited in Brooks, M (2008), "13 Things That Don't Make Sense", New York: Doubleday: 67, ISBN 978-1-60751-666-8
- Simon 2002, pp. 57–60, Goodstein 1994
- Goodstein 1994
- Petit 2009, Park 2000, p. 16
- Taubes 1993, pp. xviii–xx, Park 2000, p. 16
- Taubes 1993, pp. xx–xxi
- Beaudette 2002, pp. 183, 313
- Aspaturian, Heidi (14 December 2012). "Interview with Charles A. Barnes on 13 and 26 June 1989". The Caltech Institute Archives. Retrieved 22 August 2014.
- Schaffer 1999, p. 2
- Broad 1989a
- Broad 1989a, Wilford 1989
- Broad, William J. 19 April 1989. Stanford Reports Success, The New York Times.
- Close 1992, p. 184, Huizenga 1993, p. 56
- Browne 1989, Taubes 1993, pp. 253–255, 339–340, 250
- Bowen 1989, Crease & Samios 1989
- Tate 1989, p. 1, Platt 1998, Close 1992, pp. 277–288, 362–363, Taubes 1993, pp. 141, 147, 167–171, 243–248, 271–272, 288, Huizenga 1993, pp. 63, 138–139
- Fleischmann, Martin; Pons, Stanley; Hawkins, Marvin; Hoffman, R. J (29 June 1989), "Measurement of gamma-rays from cold fusion (letter by Fleischmann et al. and reply by Petrasso et al.)" (PDF), Nature, 339 (6227): 667, Bibcode:1989Natur.339..667F, doi:10.1038/339667a0, archived from the original (PDF) on 20 July 2013
- Taubes 1993, pp. 310–314, Close 1992, pp. 286–287, Huizenga 1993, pp. 63, 138–139
- Taubes 1993, p. 242 (Boston Herald's is Tate 1989).
- Taubes 1993, p. 266
- "APS Special Session on Cold Fusion, May 1–2, 1989". ibiblio.org. Archived from the original on 26 July 2008.
- Taubes 1993, pp. 267–268
- Taubes 1993, pp. 275, 326
- Gai et al. 1989, pp. 29–34
- Williams et al. 1989, pp. 375–384
- Joyce 1990
- US DOE 1989, p. 39
- US DOE 1989, p. 36
- US DOE 1989, p. 37
- Huizenga 1993, p. 165
- Mallove 1991, pp. 246–248
- Rousseau 1992.
- Salamon, M. H.; Wrenn, M. E.; Bergeson, H. E.; Crawford, H. C.; et al. (29 March 1990). "Limits on the emission of neutrons, γ-rays, electrons and protons from Pons/Fleischmann electrolytic cells". Nature. 344 (6265): 401–405. Bibcode:1990Natur.344..401S. doi:10.1038/344401a0.
- Broad, William J. (30 October 1990). "Cold Fusion Still Escapes Usual Checks Of Science". New York Times. Archived from the original on 19 December 2013. Retrieved 27 November 2013.
- Taubes 1993, pp. 410–411, Close 1992, pp. 270, 322, Huizenga 1993, pp. 118–119, 121–122
- Taubes 1993, pp. 410–411, 412, 420, the Science article was Taubes 1990, Huizenga 1993, pp. 122, 127–128.
- Huizenga 1993, pp. 122–123
- "National Cold Fusion Institute Records, 1988–1991", archived from the original on 17 July 2012
- Taubes 1993, p. 424
- Huizenga 1993, p. 184
- Taubes 1993, pp. 136–138
- Close 1992, Taubes 1993, Huizenga 1993, and Park 2000
- Mallove 1991, Beaudette 2002, Simon 2002, Kozima 2006
- Wired News Staff Email (24 March 1998), "Cold Fusion Patents Run Out of Steam", Wired, archived from the original on 4 January 2014
- Huizenga 1993, pp. 210–211 citing Srinivisan, M., "Nuclear Fusion in an Atomic Lattice: An Update on the International Status of Cold Fusion Research", Current Science, 60: 471
- Simon 2002, pp. 131–133, 218
- Daley 2004
- Mullins 2004
- Seife 2008, pp. 154–155
- Simon 2002, pp. 131, citing Collins & Pinch 1993, p. 77 in first edition
- "Cold fusion debate heats up again", BBC, 23 March 2009, archived from the original on 11 January 2016
- Feder 2004, p. 27
- Taubes 1993, pp. 292, 352, 358, Goodstein 1994, Adam 2005 (comment attributed to George Miley of the University of Illinois)
- Mosier-Boss et al. 2009, Sampson 2009
- Szpak, Mosier-Boss: Thermal and nuclear aspects of the Pd/D2O system Archived 16 February 2013 at the Wayback Machine, Feb 2002. Reported by Mullins 2004
- Brumfiel 2004
- Weinberger, Sharon (21 November 2004), "Warming Up to Cold Fusion", Washington Post: W22, archived from the original on 19 November 2016 (page 2 in online version)
- "Effetto Fleischmann e Pons: il punto della situazione", Energia Ambiente e Innovazione (in Italian), ENEA (3), May–June 2011, archived from the original on 8 August 2012
- Feder 2005
- US DOE 2004
- Janese Silvey, "Billionaire helps fund MU energy research" Archived 15 December 2012 at the Wayback Machine., Columbia Daily Tribune, 10 February 2012
- University of Missouri-Columbia "$5.5 million gift aids search for alternative energy. Gift given by Sidney Kimmel Foundation, created by founder of the Jones Group" Archived 5 March 2016 at the Wayback Machine., 10 February 2012, (press release), alternative link
- "Sidney Kimmel Foundation awards $5.5 million to MU scientists" Allison Pohle, Missourian, 10 February 2012
- Christian Basi, Hubler Named Director of Nuclear Renaissance Institute at MU Archived 4 March 2016 at the Wayback Machine., (press release) Missouri University News Bureau, 8 March 2013
- Professor revisits fusion work from two decades ago Archived 2 November 2012 at the Wayback Machine. Columbia Daily Tribune, 28 October 2012
- Mark A. Prelas, Eric Lukosi. Neutron Emission from Cryogenically Cooled Metals Under Thermal Shock Archived 16 January 2013 at the Wayback Machine. (self published)
- "Archived copy". Archived from the original on 18 May 2016. Retrieved 18 May 2016. Congress Is Suddenly Interested in Cold Fusion
- Committee on Armed Services, House of Representatives Report 114-537, page 87. https://www.congress.gov/114/crpt/hrpt537/CRPT-114hrpt537.pdf#page=123 Archived 16 May 2016 at the Wayback Machine.
- Goodstein, David L. (2010), "On Fact and Fraud: Cautionary Tales from the Front Lines of Science", Princeton: Princeton University Press: 87–94, ISBN 0691139660
- COLD FUSION – The history of research in Italy (2009) PDF 8.7Mb Archived 13 March 2016 at the Wayback Machine. In the foreword by the president of ENEA the belief is expressed that the cold fusion phenomenon is proved.
- Pollack 1992, Pollack 1997, p. C4
- "Japan CF-research Society". jcfrs.org. Archived from the original on 21 January 2016.
- Japan CF research society meeting Dec 2011 Archived 12 March 2016 at the Wayback Machine.
- Kitamura et al. 2009
- Jayaraman 2008
- "Our dream is a small fusion power generator in each house", Times of India, 4 February 2011, archived from the original on 26 August 2011
- "Current Science - Archive". www.currentscience.ac.in.
- Mark Anderson (March 2009), "New Cold Fusion Evidence Reignites Hot Debate", IEEE Spectrum, archived from the original on 10 July 2009
- US DOE 1989, p. 29, Taubes 1993[page needed]
- Hoffman 1995, pp. 111–112
- US DOE 2004, p. 3
- Taubes 1993, pp. 256–259
- Huizenga 1993, pp. x, 22–40, 70–72, 75–78, 97, 222–223, Close 1992, pp. 211–214, 230–232, 254–271, Taubes 1993, pp. 264–266, 270–271 Choi 2005
- Fleischmann & Pons 1993
- Mengoli et al. 1998, Szpak et al. 2004
- Simon 2002, p. 49, Park 2000, pp. 17–18, Huizenga 1993, p. 7, Close 1992, pp. 306–307
- Barras 2009
- Berger 2009
- US DOE 2004, pp. 3, 4, 5
- Hagelstein 2010
- US DOE 2004, pp. 3,4
- Rogers & Sandquist 1990
- Simon 2002, p. 215
- Simon 2002, pp. 150–153, 162
- Simon 2002, pp. 153, 214–216
- US DOE 1989, pp. 7–8, 33, 53–58 (appendix 4.A), Close 1992, pp. 257–258, Huizenga 1993, p. 112, Taubes 1993, pp. 253–254 quoting Howard Kent Birnbaum in the special cold fusion session of the 1989 spring meeting of the Materials Research Society, Park 2000, pp. 17–18, 122, Simon 2002, p. 50 citing Koonin S.E.; M Nauenberg (1989), "Calculated Fusion Rates in Isotopic Hydrogen Molecules", Nature, 339 (6227): 690–692, Bibcode:1989Natur.339..690K, doi:10.1038/339690a0
- Hagelstein et al. 2004, pp. 14–15
- Schaffer 1999, p. 1, Saeta 1999, (pages 3–5; "Assessment"; Morrison, Douglas R.O.)
- Huizenga 1993, p. viii "Enhancing the probability of a nuclear reaction by 50 orders of magnitude (...) via the chemical environment of a metallic lattice, contradicted the very foundation of nuclear science.", Goodstein 1994, Scaramuzzi 2000, p. 4
- Close 1992, pp. 32, 54, Huizenga 1993, p. 112
- Close 1992, pp. 19–20
- Close 1992, pp. 63–64
- Close 1992, pp. 64–66
- Close 1992, pp. 32–33
- Huizenga 1993, pp. 33, 47
- Huizenga 1993, p. 7
- Scaramuzzi 2000, p. 4, Goodstein 1994, Huizenga 1993, pp. 207–208, 218
- Close 1992, pp. 308–309 "Some radiation would emerge, either electrons ejected from atoms or X-rays as the atoms are disturbed, but none were seen."
- Close 1992, pp. 268, Huizenga 1993, pp. 112–113
- Huizenga 1993, pp. 75–76, 113
- Taubes 1993, pp. 364–365
- Platt 1998
- Simon 2002, pp. 145–148
- Huizenga 1993, p. 82
- Bird 1998, pp. 261–262
- Saeta 1999, (pages 5–6; "Response"; Heeter, Robert F.)
- Biberian 2007 – (Input power is calculated by multiplying current and voltage, and output power is deduced from the measurement of the temperature of the cell and that of the bath")
- Fleischmann et al. 1990, Appendix
- Shkedi et al. 1995
- Jones et al. 1995, p. 1
- Shanahan 2002
- Biberian 2007 – ("Almost all the heat is dissipated by radiation and follows the temperature fourth power law. The cell is calibrated . . .")
- Browne 1989, para. 16
- Wilson et al. 1992
- Shanahan 2005
- Shanahan 2006
- Simon 2002, pp. 180–183, 209
- Jagdish Mehra; K. A. Milton (2000), "Climbing the Mountain: The Scientific Biography of Julian Schwinger" (illustrated ed.), New York: Oxford University Press: 550, ISBN 0-19-850658-9, Also Close 1992, pp. 197–198
- Simon 2002, pp. 180–183
- Huizenga 1993, p. 208
- Bettencourt, Kaiser & Kaur 2009
- Simon 2002, pp. 183–187
- Park 2000, pp. 12–13
- Goodstein 1994, the first three conferences are commented in detail in Huizenga 1993, pp. 237–247, 274–285, specially 240, 275–277
- Huizenga 1993, pp. 276, Park 2000, pp. 12–13, Simon 2002, p. 108
- "ISCMNS FAQ". www.iscmns.org. Archived from the original on 23 December 2011.
- Taubes 1993, pp. 378, 427 "anomalous effects in deuterated metals, which was the new, preferred, politically palatable nom de science for cold fusion [back in October 1989]."
- "Archived copy" (PDF). Archived from the original (PDF) on 31 July 2012. Retrieved 31 October 2012.
- Chubb et al. 2006, Adam 2005 ("[Absolutely not]. Anyone can deliver a paper. We defend the openness of science" – Bob Park of APS, when asked if hosting the meeting showed a softening of scepticism)
- Van Noorden 2007
- Van Noorden 2007, para. 2
- "Scientists in possible cold fusion breakthrough", AFP, archived from the original on 27 March 2009, retrieved 24 March 2009
- Broad, William J. (13 April 1989), "'Cold Fusion' Patents Sought", New York Times, archived from the original on 29 January 2017
- Lewenstein 1994, p. 43
- "2107.01 General Principles Governing Utility Rejections (R-5) – 2100 Patentability. II. Wholly inoperative inventions; "incredible" utility", U.S. Patent and Trademark Office, archived from the original on 27 August 2012 Manual of Patent Examining Procedure
- Simon 2002, pp. 193, 233
- Voss 1999b, in reference to US patents US 5,616,219, US 5,628,886 and US 5,672,259
- Daniel C. Rislove (2006), "A Case Study of Inoperable Inventions: Why Is the USPTO Patenting Pseudoscience?" (PDF), Wisconsin Law Review, 2006 (4): 1302–1304, footnote 269 on page 1307, archived from the original (PDF) on 25 September 2015
- Sanderson 2007, in reference to US patent US 6,764,561
- Fox 1994 in reference to Canon's EP 568118
- Simon 2002, pp. 91–95, 116–118
- "No Country for Old Dads". 5 March 2018. Archived from the original on 13 February 2017 – via www.imdb.com.
References with quotations or other additional text
- Taubes 1993, p. 214 says the similarity was discovered on 13 April 1991, by a computer scientist and disseminated via the Internet. Another computer scientist translated an old article in the Swedish technical journal Ny Teknika. Taubes says: "Ny Teknika seemed to believe that Tandberg had missed on the discovery of the century, done in by an ignorant patent bureau. When Pons heard the story, he agreed."
- Brigham Young University discovered Tandberg's 1927 patent application, and showed it as proof that Utah University didn't have priority for the discovery of cold fusion, cited in Wilford, John Noble (24 April 1989), "Fusion Furor: Science's Human Face", New York Times, archived from the original on 25 June 2017
- Taubes 1993, pp. 225–226, 229–231 "[p. 225] Like those of MIT or Harvard or Caltech, an official Stanford University announcement is not something to be taken lightly. (...) [p. 230] With the news out of Stanford, the situation, as one Department of Energy official put it, 'had come to a head'. The department had had its laboratory administrators send emissaries to Washington immediately. (...) the secretary of energy, had made the pursuit of cold fusion the department's highest priority (...) The government laboratories had free reign [sic] to pursue their cold fusion research, Ianniello said, to use whatever resources they needed, and DOE would cover the expenses. (...) [p. 231] While Huggins may have appeared to be the savior of cold fusion, his results also made him, and Stanford, a prime competitor [of MIT] for patents and rights.", Close 1992, pp. 184, 250 "[p. 184] The only support for Fleischmann and Pons [at the 26 April US congress hearings] came from Robert Huggins (...) [p. 250] The British Embassy in Washington rushed news of the proceedings to the Cabinet Office and Department of Energy in London. (...) noting that Huggins's heat measurements lent some support but that he had not checked for radiation, and also emphasizing that none of the US government laboratories had yet managed to replicate the effect.", Huizenga 1993, p. 56 "Of the above speakers (in the US Congress hearings) only Huggins supported the Fleischmann-Pons claim of excess heat."
- Taubes 1993, pp. 418–420 "While it is not possible for us to categorically exclude spiking as a possibility, it is our opinion, that possibility is much less probable than that of inadvertent contamination or other explained factors in the measurements.", Huizenga 1993, pp. 128–129
- "Physicist Claims First Real Demonstration of Cold Fusion", Physorg.com, 27 May 2008, archived from the original on 15 March 2012. The peer reviewed papers referenced at the end of the article are "The Establishment of Solid Nuclear Fusion Reactor" – Journal of High Temperature Society, Vol. 34 (2008), No. 2, pp.85–93 and "Atomic Structure Analysis of Pd Nano-Cluster in Nano-Composite Pd⁄ZrO2 Absorbing Deuterium" – Journal of High Temperature Society, Vol. 33 (2007), No. 3, pp.142–156
- US DOE 1989, p. 29, Schaffer 1999, pp. 1, 2, Scaramuzzi 2000, p. 4, Close 1992, pp. 265–268 "(...) the equality of the two channels is known to be preserved from high energy through 20 keV and down to about 5 keV. A reason that it is not as well known below this energy because the individual rates are so low. However, the rate is known at room temperature from muon catalysed fusion experiments. (...) theory can even accommodate the subtle variations in the ratio at these low temperatures [below 200 °C, where the first channel predominates due to 'molecular resonance excitation']", Huizenga 1993, pp. 6–7, 35–36, 75, 108–109, 112–114, 118–125, 130, 139, 173, 183, 217–218, 243–245 "[page 7] [the first two branches of the reaction] have been studied over a range of deuteron kinetic energies down to a few kiloelectron volts (keV). (...) [branching ratio] appear to be essentially constant at low energies. There is no reason to think that these branching ratios would be measurably altered for cold fusion. [page 108] The near equality of [the first two reaction branches] has been verified also for muon-catalyzed fusion. [in this case the ratio is 1.4 in favor of the first branch, due to 'the p-wave character of muon capture in muon-catalyzed fusion.']", Goodstein 1994 (explaining Pons and Fleischmann would both be dead if they had produced neutrons in proportion to their measurements of excess heat) ("It has been said . . . three 'miracles' are necessary [for D + D fusion to behave in a way consistent with the reported results of cold fusion experiments]")
- Close 1992, pp. 257–258, Huizenga 1993, pp. 33, 47–48, 79, 99–100, 207, 216 "By comparing cathode charging of deuterium into palladium with gas charging for a D7Pd ratio of unity, one obtains an equivalent pressure of 1.5x104 atmospheres, a value more than 20 orders of magnitude (1020) less than the Fleischmann-Pons claimed pressure.", Huizenga also cites US DOE 2004, pp. 33–34 in chapter IV. Materials Characterization: D. 'Relevant' Materials Parameters: 2. Confinement Pressure, which has a similar explanation.
- Huizenga 1993, pp. 6–7, 35–36 "[page 7] This well established experimental result is consistent with the Bohr model, which predicts that the compound nucleus decays predominantly by particle emission [first two branches], as opposed to radioactive capture [third branch], whenever it is energetically possible."
- Reger, Goode & Ball 2009, pp. 814–815 "After several years and multiple experiments by numerous investigators, most of the scientific community now considers the original claims unsupported by the evidence. [from image caption] Virtually every experiment that tried to replicate their claims failed. Electrochemical cold fusion is widely considered to be discredited."
- Labinger & Weininger 2005, p. 1919. Fleischmann's paper was challenged in Morrison, R.O. Douglas (28 February 1994). "Comments on claims of excess enthalpy by Fleischmann and Pons using simple cells made to boil" (PDF). Phys. Lett. A. 185 (5–6): 498–502. Bibcode:1994PhLA..185..498M. doi:10.1016/0375-9601(94)91133-9. Archived (PDF) from the original on 21 September 2017.
- Ackermann 2006 "(p. 11) Both the Polywater and Cold Nuclear Fusion journal literatures exhibit episodes of epidemic growth and decline."
- Close 1992, pp. 254–255, 329 "[paraphrasing Morrison] The usual cycle in such cases, he notes, is that interest suddenly erupts (...) The phenomenon then separates the scientists in two camps, believers and skeptics. Interest dies as only a small band of believers is able to 'produce the phenomenon' (...) even in the face of overwhelming evidence to the contrary, the original practitioners may continue to believe in it for the rest of the careers.", Ball 2001, p. 308, Simon 2002, pp. 104, Bettencourt, Kaiser & Kaur 2009
- Ackermann, Eric (February 2006), "Indicators of failed information epidemics in the scientific journal literature: A publication analysis of Polywater and Cold Nuclear Fusion", Scientometrics, 66 (3): 451–466, doi:10.1007/s11192-006-0033-0
- Adam, David (24 March 2005), Rusbringer, Alan, ed., "In from the cold", The Guardian, London, retrieved 25 May 2008
- Alfred, Randy (23 March 2009), "March 23, 1989: Cold Fusion Gets Cold Shoulder", Wired
- Ball, Phillip (2001), "Life's matrix: a biography of water" (illustrated, reprinted ed.), University of California Press, ISBN 978-0-520-23008-8
- Barras, Collin (23 March 2009), "Neutron tracks revive hopes for cold fusion", New Scientist
- Beaudette, Charles G. (2002), "Excess Heat & Why Cold Fusion Research Prevailed", South Bristol, Maine: Oak Grove Press, ISBN 0-9678548-3-0
- Berger, Eric (23 March 2009), "Navy scientist announces possible cold fusion reactions", Houston Chronicle
- Bettencourt, Luís M.A.; Kaiser, David I.; Kaur, Jasleen (July 2009), "Scientific discovery and topological transitions in collaboration networks" (PDF), Journal of Informetrics, 3 (3): 210–221, doi:10.1016/j.joi.2009.03.001, hdl:1721.1/50230, MIT Open Access Articles.
- Biberian, Jean-Paul (2007), "Condensed Matter Nuclear Science (Cold Fusion): An Update" (PDF), International Journal of Nuclear Energy Science and Technology, 3 (1): 31–42, doi:10.1504/IJNEST.2007.012439
- Bird, Alexander (1998), Routledge, ed., "Philosophy of Science: Alexander Bird" (illustrated, reprint ed.), London: UCL Press, ISBN 1-85728-504-2
- Bowen, Jerry (10 April 1989), "Science: Nuclear Fusion", CBS Evening News, retrieved 25 May 2008
- Broad, William J. (14 April 1989a), "Georgia Tech Team Reports Flaw In Critical Experiment on Fusion", New York Times, retrieved 25 May 2008
- Broad, William J. (31 October 1989b), "Despite Scorn, Team in Utah Still Seeks Cold-Fusion Clues", The New York Times: C1
- Browne, M. (3 May 1989), "Physicists Debunk Claim Of a New Kind of Fusion", New York Times, retrieved 25 May 2008,
- Brumfiel, Geoff (2 December 2004), "US review rekindles cold fusion debate. Energy panel split over whether experiments produced power", Nature News, doi:10.1038/news041129-11
- Choi, Charles (2005), "Back to Square One", Scientific American, retrieved 25 November 2008
- Chubb, Scott; McKubre, Michael C. H.; Krivit, Steve B.; Chubb, Talbot; Miley, George H.; Swartz, Mitchell; Violante, V.; Stringham, Roger; Fleischmann, Martin; Li, Zing Z.; Biberian, J.P.; Collis, William (2006), "Session W41: Cold Fusion", American Physical Society, retrieved 25 May 2008
- Close, Frank E. (1992), "Too Hot to Handle: The Race for Cold Fusion" (2 ed.), London: Penguin, ISBN 0-14-015926-6
- Collins, Harry; Pinch, Trevor (1993), "The Golem: What Everyone Should Know About Science" (second edition 1998, reprinted 2005 ed.), Cambridge University Press, ISBN 0-521-64550-6
- Crease, Robert; Samios, N.P. (1989), "Cold Fusion confusion", New York Times Magazine (24 September 1989): 34–38
- Daley, Beth (27 July 2004), "Heating up a cold theory. MIT professor risks career to reenergize discredited idea", The Boston Globe
- Derry, Gregory Neil (2002), "What Science Is and How It Works" (reprint, illustrated ed.), Princeton, New Jersey; Oxford: Princeton University Press, ISBN 978-0-691-09550-9, OCLC 40693869
- Feder, Toni (2004), "DOE Warms to Cold Fusion", Physics Today, 57 (4): 27–28, Bibcode:2004PhT....57d..27F, doi:10.1063/1.1752414
- Feder, Toni (January 2005), "Cold Fusion Gets Chilly Encore", Physics Today, 58: 31, Bibcode:2005PhT....58a..31F, doi:10.1063/1.1881896
- Fleischmann, Martin; Pons, Stanley (1989), "Electrochemically induced nuclear fusion of deuterium", Journal of Electroanalytical Chemistry, 261 (2A): 301–308, doi:10.1016/0022-0728(89)80006-3
- Fleischmann, Martin; Pons, Stanley; Anderson, Mark W.; Li, Lian Jun; Hawkins, Marvin (1990), "Calorimetry of the palladium-deuterium-heavy water system", Journal of Electroanalytical Chemistry, 287 (2): 293–348, doi:10.1016/0022-0728(90)80009-U
- Fleischmann, Martin; Pons, S. (1993), "Calorimetry of the Pd-D2O system: from simplicity via complications to simplicity", Physics Letters A, 176 (1–2): 118–129, Bibcode:1993PhLA..176..118F, doi:10.1016/0375-9601(93)90327-V
- Fox, Barry (25 June 1994), "Patents: Cold fusion rides again", New Scientist (1931), ISSN 0262-4079
- Gai, M.; Rugari, S.L.; France, R.H.; Lund, B.J.; Zhao, Z.; Davenport, A.J.; Isaacs, H.S.; Lynn, K.G. (1989), "Upper limits on neutron and γ-ray emission from cold fusion", Nature, 340 (6228): 29–34, Bibcode:1989Natur.340...29G, doi:10.1038/340029a0
- Goodstein, David (1994), "Whatever happened to cold fusion?", American Scholar, Phi Beta Kappa Society, 63 (4): 527–541, ISSN 0003-0937, retrieved 25 May 2008
- Hagelstein, Peter L.; McKubre, Michael; Nagel, David; Chubb, Talbot; Hekman, Randall (2004), "New Physical Effects in Metal Deuterides" (PDF), CONDENSED MATTER NUCLEAR SCIENCE. Proceedings of the 11th International Conference on Cold Fusion. Held 31 October-5 November 2004 in Marseilles, Washington: US Department of Energy, 11: 23, Bibcode:2006cmns...11...23H, doi:10.1142/9789812774354_0003, ISBN 9789812566409, archived from the original (PDF) on 6 January 2007, (manuscript).
- Hagelstein, Peter L. (2010), "Constraints on energetic particles in the Fleischmann–Pons experiment" (PDF), Naturwissenschaften, Springer, 97 (4): 345–52, Bibcode:2010NW.....97..345H, doi:10.1007/s00114-009-0644-4, hdl:1721.1/71631, PMID 20143040
- Hoffman, Nate (1995), "A Dialogue on Chemically Induced Nuclear Effects: A Guide for the Perplexed About Cold Fusion", La Grange Park, Illinois: American Nuclear Society, ISBN 0-89448-558-X
- Huizenga, John R. (1993), "Cold Fusion: The Scientific Fiasco of the Century" (2 ed.), Oxford and New York: Oxford University Press, ISBN 0-19-855817-1
- Jayaraman, K.S. (17 January 2008), "Cold fusion hot again", Nature India, doi:10.1038/nindia.2008.77, retrieved 7 December 2008
- Jones, J.E.; Hansen, L.D.; Jones, S.E.; Shelton, D.S.; Thorne, J.M. (1995), "Faradaic efficiencies less than 100% during electrolysis of water can account for reports of excess heat in 'cold fusion' cells", Journal of Physical Chemistry, 99 (18): 6973–6979, doi:10.1021/j100018a033
- Joyce, Christopher (16 June 1990), "Gunfight at the cold fusion corral", New Scientist (1721): 22, ISSN 0262-4079, retrieved 1 October 2009
- Kean, Sam (26 July 2010), "Palladium: The Cold Fusion Fanatics Can't Get Enough of the Stuff", Slate, retrieved 31 July 2011
- Kitamura, Akita; Nohmi, Takayoshi; Sasaki, Yu; Taniike, Akira; Takahashi, Akito; Seto, Reiko; Fujita, Yushi (2009), "Anomalous Effects in Charging of Pd Powders with High Density Hydrogen Isotopes" (PDF), Physics Letters A, 373 (35): 3109–3112, Bibcode:2009PhLA..373.3109K, doi:10.1016/j.physleta.2009.06.061
- Kozima, Hideo (2006), "The Science of the Cold Fusion phenomenon", New York: Elsevier Science, ISBN 0-08-045110-1
- Kruglinski, Susan (3 March 2006), "Whatever Happened To... Cold Fusion?", Discover Magazine, ISSN 0274-7529, retrieved 20 June 2008
- Kowalski, Ludwik (2004), "Jones's manuscript on History of Cold Fusion at BYU", Upper Montclair, New Jersey: csam.montclair.edu, retrieved 25 May 2008
- Lewenstein, Bruce V. (1994), "Cornell cold fusion archive" (PDF), collection n°4451, Division of Rare and Manuscript Collections, Cornell University Library, retrieved 25 May 2008
- Lewis, N.S.; Barnes, C.A.; Heben, M.J.; Kumar, A.; Lunt, S.R.; McManis, G.E.; Miskelly, S.R.; Penner, G.M.; et al. (1989), "Searches for low-temperature nuclear fusion of deuterium in palladium", Nature, 340 (6234): 525–530, Bibcode:1989Natur.340..525L, doi:10.1038/340525a0
- Mallove, Eugene (1991), "Fire from Ice: Searching for the Truth Behind the Cold Fusion Furor", London: Wiley, ISBN 0-471-53139-1
- Mengoli, G.; Bernardini, M.; Manduchi, C.; Zannoni, G. (1998), "Calorimetry close to the boiling temperature of the D2O/Pd electrolytic system", Journal of Electroanalytical Chemistry, 444 (2): 155–167, doi:10.1016/S0022-0728(97)00634-7
- Mullins, Justin (September 2004), "Cold Fusion Back From the Dead", IEEE Spectrum, 41 (9): 22, doi:10.1109/MSPEC.2004.1330805
- Mosier-Boss, Pamela A.; Szpak, Stanislaw; Gordon, Frank E.; Forsley, L.P.G. (2009), "Triple tracks in CR-39 as the result of Pd–D Co-deposition: evidence of energetic neutrons", Naturwissenschaften, 96 (1): 135–142, Bibcode:2009NW.....96..135M, doi:10.1007/s00114-008-0449-x, PMID 18828003
- Labinger, JA; Weininger, SJ (2005), "Controversy in chemistry: how do you prove a negative?—the cases of phlogiston and cold fusion", Angew Chem Int Ed Engl, 44 (13): 1916–22, doi:10.1002/anie.200462084, PMID 15770617,
So there matters stand: no cold fusion researcher has been able to dispel the stigma of 'pathological science' by rigorously and reproducibly demonstrating effects sufficiently large to exclude the possibility of error (for example, by constructing a working power generator), nor does it seem possible to conclude unequivocally that all the apparently anomalous behavior can be attributed to error.
- Laurence, William L. (30 December 1956), "Cold Fusion of Hydrogen Atoms; A Fourth Method Pulling Together", The New York Times: E7
- Oriani, Richard A.; Nelson, John C.; Lee, Sung-Kyu; Broadhurst, J. H. (1990), "Calorimetric Measurements of Excess Power Output During the Cathodic Charging of Deuterium into Palladium", Fusion Technology, 18: 652–662, ISSN 0748-1896
- Paneth, Fritz; Peters, Kurt (1926), "Über die Verwandlung von Wasserstoff in Helium", Naturwissenschaften (in German), 14 (43): 956–962, Bibcode:1926NW.....14..956P, doi:10.1007/BF01579126
- Park, Robert L (2000), "Voodoo Science: The road from foolishness to fraud", Oxford, U.K. & New York: Oxford University Press, ISBN 0-19-860443-2, retrieved 14 November 2010
- Petit, Charles (14 March 2009), "Cold panacea: two researchers proclaimed 20 years ago that they'd achieved cold fusion, the ultimate energy solution. The work went nowhere, but the hope remains", Science News, 175 (6): 20–24, doi:10.1002/scin.2009.5591750622
- Platt, Charles (1998), "What if Cold Fusion is Real?", Wired Magazine (6.11), retrieved 25 May 2008
- Pollack, A. (17 November 1992), "Cold Fusion, Derided in U.S., Is Hot In Japan", The New York Times
- Pollack, A. (26 August 1997), "Japan, Long a Holdout, is Ending its Quest for Cold Fusion", New York Times, 79: 243, C4
- Pool, Robert (28 April 1989), "How cold fusion happened – twice!", Science, 244 (4903): 420–3, Bibcode:1989Sci...244..420P, doi:10.1126/science.244.4903.420, PMID 17807604
- Reger, Daniel L.; Goode, Scott R.; Ball, David W. (2009), "Chemistry: Principles and Practice" (3, revised ed.), Cengage Learning: 814–815, ISBN 978-0-534-42012-3
- Rousseau, D. L. (January–February 1992), "Case Studies in Pathological Science: How the Loss of Objectivity Led to False Conclusions in Studies of Polywater, Infinite Dilution and Cold Fusion", American Scientist, 80: 54–63, Bibcode:1992AmSci..80...54R
- Saeta, Peter N., ed. (21 October 1999), "What is the current scientific thinking on cold fusion? Is there any possible validity to this phenomenon?", Scientific American, Ask the Experts: 1–6, retrieved 17 December 2008 – introduction to contributions from:
- Sampson, Mark T. (2009), ""Cold fusion" rebirth? New evidence for existence of controversial energy source", ACS, archived from the original on 2 October 2011
- Sanderson, Katharine (29 March 2007), "Cold fusion is back at the American Chemical Society", Nature news, ISSN 0028-0836, retrieved 18 July 2009
- Scaramuzzi, F. (2000), "Ten years of cold fusion: an eye-witness account" (PDF), Accountability in Research, 8 (1&2): 77, doi:10.1080/08989620008573967, ISSN 0898-9621, OCLC 17959730, retrieved 20 January 2016
- Seife, Charles (2008), "Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking", New York: Viking, ISBN 0-670-02033-8
- Shamoo, Adil E.; Resnik, David B. (2003), Oxford University Press US, ed., "Responsible Conduct of Research" (2, illustrated ed.), Oxford: Oxford University Press, ISBN 0-19-514846-0
- Shanahan, Kirk L. (23 May 2002), "A systematic error in mass flow calorimetry demonstrated", Thermochimica Acta, 382 (2): 95–100, doi:10.1016/S0040-6031(01)00832-2
- Shanahan, Kirk L. (April 2005), "Comments on "Thermal behavior of polarized Pd/D electrodes prepared by co-deposition"" (PDF), Thermochimica Acta, 428 (1–2): 207–212, doi:10.1016/j.tca.2004.11.007
- Shkedi, Zvi; McDonald, Robert C.; Breen, John J.; Maguire, Stephen J.; Veranth, Joe (1995), "Calorimetry, Excess Heat, and Faraday Efficiency in Ni-H2O Electrolytic Cells", Fusion Technology, 28 (4): 1720–1731, ISSN 0748-1896
- Simon, Bart (2002), "Undead Science: Science Studies and the Afterlife of Cold Fusion" (illustrated ed.), Rutgers University Press: 49, Bibcode:2002usss.book.....S, ISBN 978-0-8135-3154-0
- Szpak, Stanislaw; Mosier-Boss, Pamela A.; Miles, Melvin H.; Fleischmann, Martin (2004), "Thermal behavior of polarized Pd/D electrodes prepared by co-deposition" (PDF), Thermochimica Acta, 410: 101, doi:10.1016/S0040-6031(03)00401-5
- Tate, N. (1989), "MIT bombshell knocks fusion 'breakthrough' cold", Boston Herald (1 May 1989): 1, ISSN 0738-5854
- Taubes, Gary (15 June 1990), "Cold fusion conundrum at Texas A&M", Science, 248 (4961): 1299–1304, Bibcode:1990Sci...248.1299T, doi:10.1126/science.248.4961.1299, PMID 17747511
- Taubes, Gary (1993), "Bad Science: The Short Life and Weird Times of Cold Fusion", New York: Random House, ISBN 0-394-58456-2
- US DOE, U.S. Department of Energy (1989), "A Report of the Energy Research Advisory Board to the United States Department of Energy", Washington, DC: U.S. Department of Energy, retrieved 25 May 2008
- US DOE, U.S. Department of Energy (2004), "Report of the Review of Low Energy Nuclear Reactions" (PDF), Washington, DC: U.S. Department of Energy, archived from the original (PDF) on 26 February 2008, retrieved 19 July 2008
- Van Noorden, R. (April 2007), "Cold fusion back on the menu", Chemistry World, ISSN 1473-7604, retrieved 25 May 2008
- Rogers, Vern C.; Sandquist, Gary M. (December 1990), "Cold fusion reaction products and their measurement", Journal of Fusion Energy, 9 (4): 483–485, Bibcode:1990JFuE....9..483R, doi:10.1007/BF01588284
- Voss, David (1 March 1999), "What Ever Happened to Cold Fusion", Physics World, ISSN 0953-8585, retrieved 1 May 2008
- Voss, David (21 May 1999b), "'New Physics' Finds a Haven at the Patent Office", Science, 284 (5418): 1252, doi:10.1126/science.284.5418.1252, ISSN 0036-8075, retrieved 18 July 2009
- Wilford, John Noble (24 April 1989), "Fusion Furor: Science's Human Face", New York Times, ISSN 0362-4331, retrieved 23 September 2008
- Williams, D.E.; Findlay, D.J.S.; Craston, D.H.; Sené, M.R.; Bailey, M.; Croft, S.; Hooton, B.W.; Jones, C.P.; et al. (1989), "Upper bounds on 'cold fusion' in electrolytic cells", Nature, 342 (6248): 375–384, Bibcode:1989Natur.342..375W, doi:10.1038/342375a0
- Wilner, Bertil (May 1989), "No new fusion under the Sun", Nature, 339 (6221): 180, Bibcode:1989Natur.339..180W, doi:10.1038/339180a0
- Wilson, R.H.; Bray, J.W.; Kosky, P.G.; Vakil, H.B.; Will, F.G. (1992), "Analysis of experiments on the calorimetry of LiOD-D2O electrochemical cells", Journal of Electroanalytical Chemistry, 332: 1–31, doi:10.1016/0022-0728(92)80338-5
- Cold fusion at Curlie (based on DMOZ)
- International Society for Condensed Matter Nuclear Science (iscmns.org), organizes the ICCF conferences and publishes the Journal of Condensed Matter Nuclear Science. See: library.htm of published papers and proceedings.
- Low Energy Nuclear Reactions (LENR) Phenomena and Potential Applications: Naval Surface Warfare Center report NSWCDD-PN-15-0040 by Louis F. DeChiaro, Ph.D., September 23, 2015
- Current Science, 25 Feb 2015 issue devoted to LENR, contains 34 papers, mostly review articles. | <urn:uuid:32b1b36a-6d7f-4d18-a16d-1418f0d3a817> | 4.15625 | 22,791 | Knowledge Article | Science & Tech. | 58.588178 | 95,537,394 |
It can operate as an autonomous, free-swimming vehicle, flying pre-programmed missions over wide areas, mapping the seafloor, gathering data on the oceans, and searching for specific research targets. Engineers can then convert it within a few hours into a tethered vehicle connected via a hair-thin, 25-mile-long cable, which enables scientists on the surface ship to receive real-time video images and send instant commands to maneuver the vehicle and its mechanical arm for close-up investigations and sample gathering. Nereus can also work in the deepest parts of the ocean, from 6,500 meters to 11,000 meters (21,500 feet to 36,000 feet), a depth range currently unreachable for routine ocean research. After more testing and development, the goal is to send Nereus to explore the deepest known waters on the planet: Challenger Deep, a trench in the Pacific Ocean southwest of Guam. The trench is deeper than Mount Everest is high, extending almost 11,000 meters (36,000 feet) beneath the sea surface.
Photo by Tim Shank, Woods Hole Oceanographic Institution | <urn:uuid:10e935d6-2126-4b17-8af7-811165fa66db> | 3.640625 | 227 | News Article | Science & Tech. | 35.947249 | 95,537,410 |
Thermonuclear Reactions Inside Stars
To achieve such light-element fusion reactions, certain conditions must be met. The Earth's atmosphere is full of light elements that could in principle undergo fusion, but because the necessary conditions are never met, no fusion occurs and no nuclear energy is released. The reason is that every light nucleus is positively charged: for a fusion reaction to happen, two nuclei must overcome their mutual Coulomb repulsion and get close enough for the attractive strong nuclear force to take over. In a laboratory on Earth, charged particles or light nuclei can be accelerated artificially to high energies and made to bombard other light nuclei, achieving fusion for a very short time. In the interior of the Sun, by contrast, fusion depends entirely on the probability that protons or deuterons carry enough kinetic energy to overcome the Coulomb repulsion, which requires the temperature of the Sun's central region to reach at least about 10⁷ K.
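As a rough illustration of why such a high temperature is needed, the estimate below (added here for clarity; the numbers are standard order-of-magnitude values, not figures from the original article) compares the Coulomb barrier between two protons with the typical thermal energy per particle:

```latex
% Coulomb barrier when two protons approach to within the range of the
% strong nuclear force, r ~ 1 fm:
E_C \approx \frac{e^2}{4\pi\varepsilon_0 r}
    \approx \frac{1.44\ \mathrm{MeV\cdot fm}}{1\ \mathrm{fm}}
    \approx 1.4\ \mathrm{MeV}

% Mean thermal energy per particle at T ~ 10^7 K:
E_{\mathrm{th}} \sim k_B T
    \approx \left(8.6\times 10^{-5}\ \mathrm{eV\,K^{-1}}\right)\left(10^{7}\ \mathrm{K}\right)
    \approx 0.9\ \mathrm{keV}
```

Even at 10⁷ K the typical thermal energy is roughly a thousand times smaller than the barrier, so fusion in the Sun proceeds only through the rare nuclei in the high-energy tail of the Maxwell-Boltzmann distribution, helped by quantum tunneling through the barrier.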
Clearly, the higher the temperature of the central region, the greater the kinetic energy of the light nuclei and the larger the number of fusion reactions. Once ignited, the fusion reactions continuously release nuclear energy that sustains the nuclear "burning"; at the same time, energy is continuously carried away to the surroundings by radiation, convection, and other processes. The "burning" can therefore continue only as long as the nuclear energy released at least matches the energy lost to the surroundings; otherwise it shuts off.
As mentioned earlier, the Sun originated from a cloud of gas that contracted under its own gravity. During the contraction, the gravitational potential energy of the gas particles was converted through collisions into kinetic energy, raising the temperature of the gas. Once the temperature of the cloud's central region met the conditions required for light-element fusion, nuclear fusion ignited there, and the resulting radiation pressure in the central region could balance gravity, allowing a relatively stable stage of development.
If the Sun's original gas cloud contained a certain amount of the hydrogen isotope deuterium (²H), deuterium fusion would have started first, because it requires a lower temperature. After the deuterium fuel ran out, fusion of the other light nuclei was still hard to ignite, so the gas cloud contracted further under gravity and the temperature of the central region kept rising until the proton-proton (p-p) cycle began, possibly accompanied by the CNO cycle. When the hydrogen is eventually exhausted, the Sun will contract further and the temperature will rise again, eventually igniting the next stage of fusion, helium burning.
Curves of the Sun's brightness, radius, and surface temperature versus time, together with curves of the temperature and density of its central region versus time, show that the Sun's initial gas cloud was much larger in diameter than the present Sun, so its brightness was much higher than today. Owing to gravitational contraction, the radius of the Sun decreased and its brightness dropped. After about 2 million years, the temperature of the central region reached roughly 8×10⁵ K, already enough to ignite deuterium fusion.
If the deuterium content of the Sun's gas cloud was comparable to the present deuterium fraction of hydrogen on Earth (about 0.02%), deuterium fusion could be sustained for roughly 10⁵ years. The Sun then contracted further under gravity, passing from its "adolescence" into "young adulthood". During this stage the temperature of the central region kept increasing while the surface temperature changed relatively little, so the temperature difference between the center and the surface grew steadily. After about 1.4×10⁸ years, the energy generated in the solar interior was lost to the surroundings mainly by radiation and convection, and a radiative core gradually formed.
| <urn:uuid:24650301-694f-4293-9964-28237119114b> | 3.609375 | 939 | Truncated | Science & Tech. | 24.76744 | 95,537,420 |
A random number is a number drawn from an unordered sequence. Java provides the class java.util.Random for this purpose: an instance of the Random class supplies a stream of pseudorandom numbers.
In this tutorial we describe code that helps you understand how to get random numbers. For this we define a class named GetRandomNumber. Inside the main method we create an instance of the Random class using the new operator. A for loop acts as the iterator and prints each number using System.out.println.
1) Random.nextInt() - This method returns a pseudorandom int; in the example it is called 10 times, as specified in the for loop.
2) Random.nextInt(int) - This method returns a pseudorandom, uniformly distributed int value between 0 (inclusive) and the specified value (exclusive), i.e. at most one less than the specified int value.
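The tutorial refers to a listing, GetRandomNumber.java, that is not reproduced here; the sketch below is a minimal reconstruction of the class as described above (ten calls to nextInt(10) inside a for loop), not the author's exact source:

```java
import java.util.Random;

public class GetRandomNumber {
    public static void main(String[] args) {
        // Create an instance of the Random class using the new operator.
        Random random = new Random();

        // The for loop acts as the iterator: it executes 10 times,
        // printing one pseudorandom number per iteration.
        for (int i = 0; i < 10; i++) {
            // nextInt(10) returns a uniformly distributed int between
            // 0 (inclusive) and 10 (exclusive), i.e. a value from 0 to 9.
            System.out.println(random.nextInt(10));
        }
    }
}
```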
On execution, GetRandomNumber.java displays a set of random numbers from 0 to 9, in no particular sequence, at the command prompt. | <urn:uuid:15dd0d1a-9043-4157-8ad0-ea3ffbdeccd2> | 4.09375 | 205 | Tutorial | Software Dev. | 60.2675 | 95,537,430 |
Researchers have now sequenced and analysed the genome of one termite species. The study has been published in the latest issue of the online journal "Nature Communications".
Scientists have long been doing research into how the complicated system of living together in insect colonies functions. Researchers are also looking for answers in animals' DNA. A large international group of researchers – including scientists from Münster University – have now sequenced and analysed the genome of one termite species. This means that they have now been able to compare the termites' DNA with that of ants and colony-building bees.
This is of particular interest to the researchers because although termites have a similar lifestyle – they, too, form colonies and have various castes such as workers and reproductives – they are not closely related to hymenopterans, which include bees and ants. The study has been published in the latest issue of the online journal "Nature Communications".
"The analysis of the termite genome is crucial in improving our understanding of decisive steps in the evolution of insects – the development of social insects," says Dr. Nicolas Terrapon, who carried out the study, as one of its main authors, during the time he spent as a post-doc at the Institute of Evolution and Biodiversity at Münster University. "Termites", he adds, "are, in contrast to bees and ants, quite original insects and belong to the cockroaches. Our investigations will help in acquiring a better understanding of the evolution of insects in general."
The scientists examined whether the evolution of sociality in various groups of insects was based on the same molecular mechanisms. In doing so, they discovered not only differences, but also things they had in common. One conspicuous difference they came across was in groups of genes involved in the maturing of the sperm in male animals. In the case of the species of termites that live in wood – Zootermopsis nevadensis (dampwood termites) – some of these genes occur more actively or in greater numbers than in the species of ants and bees hitherto examined.
The researchers assume that this reflects a special feature of their lifestyle – that while male ants and bees, for example, produce a large number of sperm just once and then die shortly after mating, male termites mate with the queen of their nest several times during their life.
Another difference is that, in comparison with the highly social hymenopterans, the dampwood termites have only a few olfactory receptors. In general, smell plays an extremely important role for social insects, not only in communication and in recognizing nest comrades, but also in looking for food. Dampwood termites, however, have a simpler lifestyle than ants, honey bees or more highly developed termites. In looking for food, for example, they do not move away from the nest and display less complex communicative behaviour. The lower number of olfactory receptors reflects this lifestyle.
The researchers did, however, also discover things they have in common. Dampwood termites, for example, have – just like ants – an especially large number of genes which play a role in immune responses. Social insects are more dependent on effective infection controls, as pathogens will otherwise spread easily in the densely populated colonies. Moreover, the scientists have found proteins which might play an important role in the development of caste-specific features – just like a similar system in honey bees.
Prof. Erich Bornberg-Bauer (Münster University), Prof. Jürgen Liebig (Arizona State University, USA), Prof. Judith Korb (Freiburg University) and Guojie Zhang (China National Genebank, BGI-Shenzen, China) were involved in the study as project leaders. Dr. Nicolas Terrapon is now engaged on research at Aix-Marseille University in France.
Terrapon N. et al. (2014): "Molecular traces of alternative social organization in a termite genome". Nature Communications 5; Article number: 3636, doi:10.1038/ncomms4636
http://www.nature.com/ncomms/2014/140520/ncomms4636/full/ncomms4636.html Original publication
Dr. Christina Heimken | idw - Informationsdienst Wissenschaft
20.07.2018 | Materials Sciences | <urn:uuid:8bc25e0a-d4ef-45a6-aaf8-362181da8e2a> | 3.484375 | 1,477 | Content Listing | Science & Tech. | 34.74398 | 95,537,440 |
In a time where technology and meteorology are very precise, psychologists and meteorologists are working together to evaluate better warning systems.
Dr. Laura Myers, director and senior research scientist at the Center for Advanced Public Safety, who studied warnings and how people react to them, said people have a tendency to not want to change plans or their behavior for weather unless they are fairly sure the weather is going to impact them.
Myers said people get desensitized to watches and warnings after so many don’t produce any impacts for their specific area.
“Improvements in the warning process are addressing these issues and providing more specific geolocations and more lead time when possible,” Myers said.
Alert notifications that are targeted to an exact location and provide more lead time help people react better. Warnings that spell out potential impacts and include calls to action also help people respond better to threatening weather.
“When people hear what the weather impacts are, such as damage and destruction to well-built homes, they start to pay attention. When they are told they need to take shelter now because their location is going to take a direct impact, they usually act,” Myers said.
The word emergency, such as a tornado emergency or flash flood emergency, tends to get the attention of people, Myers said.
The time a warning is issued to the public can also make a difference in response. When people are given too much lead time, they can get tired of waiting and tend to go back to their business, Myers said.
“That’s why we have to be careful giving too much information several days out from an event. There is a real communication process involved in the sequencing of information in the days and hours leading up to an event,” Myers said.
Regardless of the warning, some people wait until they see their life is in danger.
“A lot of social media research was done and people said they have to see [a tornado] before they do something,” AccuWeather Meteorologist Dan Kottlowski said.
Mike Smith, Senior Vice President of AccuWeather Enterprise Solutions and author of Warnings: The True Story of How Science Tamed the Weather, said there are many reasons why people wait to react.
"There is considerable inertia in people. They are busy or their attention is on some project. There is also sociological evidence that people feel silly for taking shelter; that it somehow reflects poorly on their courage," Smith said.
He said in order to get people to take shelter or evacuate, it seems warnings need to come from someone they trust. Receiving the warning from more than one source can also help, Smith said.
“We try to issue the warnings very early now so they take cover, but people need to see that something’s coming at them,” Kottlowski said.
Therefore, Kottlowski said they have started enhanced warnings, which are warnings with amplified wording like severe and imminent danger.
"Not all warnings have enhanced wording, it's only when we have a situation when we have a big tornado already causing damage," Kottlowski said.
Meteorologists and psychologists are looking at more than just the wording.
“Psychologists are working with meteorologists to come up with an understanding of why people are just not adhering to the warnings, maybe it’s a combination of wording and graphics,“ Kottlowski said.
Kottlowski said radars don't make sense to some people, so there needs to be a simple graphic that will show were the most dangerous impacts of the severe weather will occur.
However, broadcasters are able to explain radars to people in a simple, urgent manner.
"The really good television meteorologists have mastered the art of using tone of voice and other cues to persuade people to take action when they are convinced a really dangerous storm is occurring or imminent," Smith said.
That will still not solve the problem entirely, experts agree.
“When you get a really bad outbreak where you have multiple tornadoes moving very quickly, you are still going to have a lot of fatalities and injuries because people aren’t going to be able to get out quickly enough,” Kottlowski said.
Technology, meteorologists and psychologists are making immense improvements and progress to help save the lives of people across the globe.
“The goal is to communicate weather information to an educated public who knows what to do when the time comes,” Myers said.
A switch to a cooler weather pattern in the midwestern United States will come at the expense of locally violent thunderstorms prior to the middle of the week. | <urn:uuid:d26996ae-c3c2-4455-9f6f-fdeca122ceb2> | 3.140625 | 1,179 | News Article | Science & Tech. | 41.198528 | 95,537,448 |
The research groups headed by Prof. Christoph Dehio and Prof. Tilman Schirmer were able to demonstrate that the alteration of one single amino acid can relieve this inhibition of enzyme activity. Their findings, which have been published in the current issue of «Nature», will make it possible to investigate the physiological role of the potentially lethal function of Fic proteins in bacteria and higher organisms in the future.
Left: Binding of the antitoxin (blue) inhibits AMPylation of the target protein (magenta) by the Fic protein (grey), which allows normal bacterial growth. Right: In the absence of the antitoxin the target protein gets AMPylated, resulting in inhibition of cell division and thus abnormal filamentous growth of bacteria. Illustration: Universität Basel
Fic proteins are found in most forms of life ranging from simple bacteria to man. Only a few representatives of this protein family of about 3000 members have been investigated to date. These are enzymes that chemically alter other proteins through the attachment of an adenosine monophosphate group (AMP) derived from the important energy carrier ATP. This reaction, known as AMPylation, specifically modifies the function of the target proteins.
The biochemically best understood Fic proteins are produced by pathogenic bacteria and injected into host cells to alter cellular signaling proteins to the advantage of the bacterial intruder. However, the far majority of Fic proteins have probably evolved a function that is instrumental for the cell in which they are produced. Why the biochemical function of only a few of these Fic proteins has been elucidated so far was not clear. The reason has now been found by the collaborating research groups of the infection biologist Prof. Christoph Dehio and the structural biologist Prof. Tilman Schirmer.
The Active Center of Fic Proteins is Blocked
The scientists could show that an amino acid residue (glutamate-finger) protrudes into the active center of the Fic proteins. This prevents productive binding of ATP and explains the inactive ground state of the enzyme. Surprisingly, in some Fic proteins the inhibiting residue is part of the Fic protein itself, whereas in other cases it is provided by a separate protein (called an antitoxin). It was shown that upon truncation of the glutamate-finger by genetic manipulation, or upon removal of the entire antitoxin, the activity of the enzyme is awakened – sometimes with drastic consequences for the affected cells. Bacterial cells no longer divide, while human cells can even die.
Prof. Dr. Tilman Schirmer, Biozentrum, University of Basel, Tel. 061 267 28 89, Email: email@example.com
Heike Sacher | idw
23.07.2018 | Health and Medicine | <urn:uuid:b982fe1b-5f7a-4d0a-b7cf-5930193e620a> | 3.0625 | 1,148 | Content Listing | Science & Tech. | 35.220223 | 95,537,451 |
Recent and rapid radiations provide rich material to examine the factors that drive speciation. Most recent and rapid radiations that have been well-characterized involve species that exhibit overt ecomorphological differences associated with clear partitioning of ecological niches in sympatry. The most diverse genus of rodents, Rattus (66 species), evolved fairly recently, but without overt ecomorphological divergence among species. We used multilocus molecular phylogenetic data and five fossil calibrations to estimate the tempo of diversification in Rattus, and their radiation on Australia and New Guinea (Sahul, 24 species). Based on our analyses, the genus Rattus originated at a date centered on the Pliocene-Pleistocene boundary (1.84-3.17 Ma) with a subsequent colonization of Sahul in the middle Pleistocene (0.85-1.28 Ma). Given these dates, the per lineage diversification rates in Rattus and Sahulian Rattus are among the highest reported for vertebrates (1.1-1.9 and 1.6-3.0 species per lineage per million years, respectively). Despite their rapid diversification, Rattus display little ecomorphological divergence among species and do not fit clearly into current models of adaptive radiations. Lineage through time plots and ancestral state reconstruction of ecological characters suggest that diversification of Sahulian Rattus was most rapid early on as they expanded into novel ecological conditions. However, rapid lineage accumulation occurred even when morphological disparity within lineages was low suggesting that future studies consider other phenotypes in the diversification of Rattus.
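To see where rates of this magnitude come from, here is a rough back-calculation (added for illustration; the choice of estimator is an assumption, and the paper's own analysis may differ, for example by correcting for extinction). Under a pure-birth model, the net diversification rate of a crown group of n extant species with crown age t can be estimated as:

```latex
r = \frac{\ln(n/2)}{t}

% Rattus as a whole: n = 66 species, crown age t = 1.84--3.17 Myr
r \approx \frac{\ln(33)}{3.17\ \mathrm{Myr}} \;\text{to}\; \frac{\ln(33)}{1.84\ \mathrm{Myr}}
  \approx 1.1 \;\text{to}\; 1.9\ \text{species lineage}^{-1}\,\mathrm{Myr}^{-1}

% Sahulian Rattus: n = 24 species, crown age t = 0.85--1.28 Myr
r \approx \frac{\ln(12)}{1.28\ \mathrm{Myr}} \;\text{to}\; \frac{\ln(12)}{0.85\ \mathrm{Myr}}
  \approx 1.9 \;\text{to}\; 2.9\ \text{species lineage}^{-1}\,\mathrm{Myr}^{-1}
```

The whole-genus figures reproduce the reported 1.1-1.9 range; the Sahulian figures land close to the reported 1.6-3.0, with the remaining gap presumably reflecting extinction corrections or a different estimator in the original analysis.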
| <urn:uuid:a2210ee3-7668-4e73-a701-121d07221f96> | 3.421875 | 349 | Academic Writing | Science & Tech. | 15.507277 | 95,537,461 |
Environmental challenges facing our planet are unprecedented, and facing them requires global efforts. These challenges are compounded in developing countries by economic, educational, and political issues. Palestine is unique in its geographic position at the intersection of continents. Combined with a geologic history shaped by tectonic plate movements that created the Great Rift Valley (with the lowest point on Earth at the Dead Sea) and a series of mountains from the Galilee to the Hebron mountains, this has produced a very rich biodiversity for a very small area. Yet Palestine was subjected to decades of de-development, the lack of Palestinian sovereignty over natural resources, and politics that trump environmental issues, creating environmental challenges that are hard to deal with. Significant demographic shifts developed over the past several decades of the Israeli-Palestinian conflict. Habitat destruction and environmental declines are notable [see research papers on this under the research tab]. But the question remains: is there really little to be done on the environmental and science front while we wait for the political situation to be resolved?
The Palestine Institute for Biodiversity and Sustainability (PIBS) and the Palestine Museum of Natural History (PMNH) at Bethlehem University (BU) were founded in 2014 for research, education and conservation. In the last year, environmental conservation became a major focus, after we had done the appropriate research to identify certain areas of significant biodiversity interest and engaged in education that builds capacity for conservation. PMNH/PIBS launched an Environmental Assessment Unit with oversight by experts like Prof. Mazin Qumsiyeh (BU) and Prof. Zuhair Amr (Jordan) and with collaboration and consultation with the Environmental Quality Authority (EQA) and key stakeholders (Ministry of Education, local authorities, farmers, environmentalists and more).
For details on Conservation issues in Palestine we (MB Qumsiyeh and ZS Amr edited by Hans Seidel Foundation) published a report titled “Environmental Conservation and Protected Areas in Palestine: Challenges and Opportunities” (2017, available here ). Here are also three examples of achievements in the conservation sector for PIBS/PMNH at BU:
As an example, the project on Wadi Al-Zarqa Al-Ulwi funded by UNDP/GEF/SGP achieved the following: 1) surveying the fauna and flora of the area to identify the species at risk, 2) performing a SWOT analysis (strengths, weaknesses, opportunities, and threats) of the area and providing practical recommendations for action that maximize benefit while minimizing use of resources, 3) reaching out to the community via tested permaculture models and environmental education programs (women, school children, and farmers) to enhance community buy-in and increase community benefits from environmental conservation and 4) increasing local community and students’ public awareness through a series of 10 workshops. The total beneficiaries were 493, including 200 students (more than 50% female) and 293 adults. The two objectives accomplished via education were: a) increased environmental awareness and behavioral change to conserve ecosystems in WZU, and b) introduced methods that improve people’s lives and the economy via things like permaculture, recycling, upcycling, and composting (many started implementing these practices).
The Convention on Biological Diversity adopted at the Earth Summit Conference in Rio de Janeiro, Brazil, highlighted three key principles: conservation of biological diversity, sustainable use of nature, and fair and equitable sharing of the benefits. Our work shows the value of combining basic research with education and conservation and with collaboration between academia, NGOs, and government officials and succeeding with limited resources. Much more remains to be done. We have limited resources and human capacity so we welcome collaborations and support both from local and global activists to help us protect our shared blue planet.
Some birds recorded from the Wadi Zarqa protected area. A. European Bee-eater. B. Syrian Woodpecker. C. Little Owl. D. White-throated Kingfisher. E. Mallard. F. Cattle Egret. Photos by A. Khalilieh | <urn:uuid:aadc5c3c-3ab5-4b17-80b6-144ac9d162e5> | 3.171875 | 829 | Knowledge Article | Science & Tech. | 18.827857 | 95,537,463 |
Investigating the Adaptability of Cells in Space
Article Mar 24, 2017 | By Anna MacDonald, Editor for Technology Networks
Berthing Dragon Spaceship CRS-6 with ISS. Credit: NASA
With the opportunity for space travel steadily growing, and for trips of longer duration, it is becoming increasingly important to understand what happens to cells in the human body in microgravity. Studying how cells behave in space could have wide implications for the health and safety of both astronauts and space tourists of the future.
Headed by University of Zurich scientists Professor Oliver Ullrich and Dr. Cora Thiel, experiments to directly measure how cells respond and adapt to changes in gravity were recently conducted aboard the ISS.1 Amazingly, the cells adapted to zero gravity in just 42 seconds, offering hope that our bodies may be able to cope with space travel better than previously thought.
To learn more about the experiments, and the challenges of designing and carrying out research such as this in space, we spoke to Professor Ullrich.
AM: Can you give us an overview of your lab’s main research directions?
OU: Gravity has been a constant factor throughout the evolution of life on Earth and has played an important role in the architecture and morphology of all biological systems. It can therefore be assumed that abrupt changes of the gravitational force have an impact on the function of living organisms. We are investigating if and how cellular and molecular functions respond and adapt to gravitational changes, or if they strictly depend on Earth's gravity. We are aiming to understand how a gravitational force is transduced into a cellular reaction, how gravity is connected to molecular and cellular homeostasis and function, and finally to obtain fundamental data for an appropriate risk assessment of long-term space missions.
ESA-Astronaut Samantha Christoforetti preparing the BIOLAB centrifuge for the TRIPLE LUX experiment. Credit: NASA
AM: What challenges did you face when designing experiments to be conducted in space?
OU: An experiment in space never forgives mistakes. An experiment in space never tolerates any carelessness. An experiment in space has to be perfect.
It takes a very long time to prepare for a space experiment. First of all, after the development of the project idea, the preliminary experiments and tests, and the selection of the project proposal in highly competitive calls for experiments, all requirements have to be discussed and coordinated with the involved space agencies and payload developers. Numerous tests follow to check, develop and approve the hardware. During the entire development process, the biological system must be adapted to the hardware changes in many series of tests, checked again and again, and finally standardized and validated. Redundant procedures, reserves, and plan B and C procedures are always included, everything tested in every detail. The entire team is very well trained and has to work with precision and discipline over many years.
The big difference between a regular laboratory experiment and a space experiment lies in the huge amount of testing, optimization, standardization and logistics, and in the extraordinarily large team size. Our experiment was prepared partially at the University of Zurich and partially at the Space Life Science Labs of the Kennedy Space Center, and therefore samples, chemicals and hardware had to be transported by airplane between the two continents.
The entire experiment preparation up to the experiment itself took more than 10 years, with a phase of very intensive work in the 2 years before the final "go" to send the experiment into space. The direct preparations for the experiment mission began about 6 months before it was launched with SpaceX CRS-6 to the ISS.
In spite of all the hardships, this kind of work is very rewarding. We are entering an entirely new world, the world of a new gravity environment different from Earth. Research in microgravity is to explore and to discover, to enter new worlds and to go far beyond known boundaries.
AM: Can you tell us about the TRIPLE LUX A experiment?
OU: The experiment TRIPLE LUX A was developed for and performed in the BIOLAB laboratory of the ISS COLUMBUS module and allowed for the first time the direct measurement of a cellular function in real time and on orbit. We transported mammalian macrophages (NR8383 rat alveolar macrophages) in an ultra-deep frozen state from Zurich to the ISS, thawed and recovered them in the BIOLAB on orbit and measured the oxidative burst reaction while being exposed to a centrifuge regime of internal 0g and 1g controls and step-wise increase or decrease of the gravitational force in four independent experiments. All data recorded were directly transmitted to the BIOLAB ground control at the German Aerospace Center DLR in Cologne, followed in real time and analysed afterwards.
AM: Why was the oxidative burst reaction chosen as a measurement?
OU: In one sentence: Because of its evolutionary importance for understanding multicellular life, because of its clinical importance for human space flight and because of the operational feasibility.
We chose the oxidative burst reaction because 1.) it represents one of the key elements in the innate immune response, the most important barrier against microbes invading the body, 2.) it represents an evolutionarily very ancient part of the immune system, present from the earliest stages of evolution in every type of multicellular life, and 3.) our previous experiments demonstrated that the oxidative burst reaction is inhibited in microgravity during parabolic flight experiments, justifying an analysis of threshold levels of gravity sensitivity and adaptation effects in a much more complex ISS experiment.
AM: Why were immune cells chosen? Would you expect the same adaptability across other cell types? What are the wider implications of these results, and what does this mean for space travel?
OU: In our study, we found a really surprising ultra-rapid adaptation in one mammalian cell type, the macrophages. That proves the general existence of ultra-fast adaptation responses and hints at the possibility that they appear in other cell types too. Macrophages are of crucial importance for the entire organism. They are not only important for the elimination of pathogens; they are also required for the clearance of billions of apoptotic cells every day and therefore for the constant and balanced regeneration and homeostasis of our body's cellular architecture. Therefore, the adaptation potential of one tiny cell type could have hugely beneficial consequences for the entire organism.
The entire human organism is an unbelievably complex system, which responds and adapts to a microgravity environment at different levels and with different time courses. Our experience is that a huge number of molecular reactions occur surprisingly fast, whereas systemic effects such as those in the musculoskeletal system need longer reaction times.
Because gravity has been constant throughout the history of Earth and the evolution of life, no pre-set adaptation program can be expected, and the cellular response may therefore be less organized than other adaptation processes. Indeed, many in vitro studies demonstrated extensive alterations in almost every cellular and molecular aspect that was examined in more detail. In contrast, long-term studies with animals and humans completely lack this dramatic picture of short-term cellular effects, which indicates a very efficient adaptation process. We assume that the human body and its cells are equipped with a robust and efficient adaptation potential when challenged with low-gravity environments.
We could speculate that the more complex a multicellular system is, the more it depends on the existence of a certain level of gravitational force. The force of gravity on Earth could probably represent a fundamental driving force for the evolution of higher organisms.
ESA-Astronaut Samantha Christoforetti preparing BIOLAB of the COLUMBUS module for the TRIPLE LUX experiment. Credit: NASA
AM: What could be responsible for the rapid adaptability of the cells to microgravity?
OU: In our experiment, we were able to provide, for the first time, direct evidence of cellular sensitivity to gravity through real-time on-orbit measurements using an experiment system in which all factors except gravity were constant. The molecular mechanisms with which the oxidative burst reacts and adapts to a new gravitational environment are still unknown. The rapid reaction and adaptation of the oxidative burst reaction suggest a direct effect at the level of the membrane-bound NADPH oxidase complex, which is closely associated with cytoskeletal dynamics and linked to areas where tension and compression sensitivity are located. Key signal proteins might be involved in this process. So far, we are technically not able to investigate the interaction of these molecules and molecule complexes on board the ISS in real time. Our goal is to investigate these dynamic reactions in live-imaging studies as soon as this technology becomes operational for on-orbit experiments at the ISS.
AM: Can you tell us about future work that you have planned?
OU: Our next ISS experiments in the German and European Space Research Program will clarify whether and to which extent gravity is involved in epigenetic gene regulation and cell function and how cells adapt to the new situation. Our experiments will allow us to identify microgravity-induced specific gene expression regulation at the level of every single gene. Importantly, cellular adaptation to the microgravity environment appears to include very complex changes of cellular and molecular mechanisms. It can only be studied and understood in dynamic measurements. Therefore in further experiments we plan to follow our concept of dynamic on-orbit monitoring and to pin down at least one direct molecular pathway which links gravitational force and cell function.
| <urn:uuid:23152aab-3a91-4165-8b73-cd96843b80ac> | 3.484375 | 2,020 | Truncated | Science & Tech. | 20.096232 | 95,537,477 |
In Chapter 6 we investigated sound wave propagation in a compressible fluid (air) under the hypothesis that sound waves are considered to be due to small amplitude oscillatory motion of the medium. Therefore Chapter 6, which was devoted to sound waves in air, involved a linearized theory whereby the linear wave equation was invoked. In this chapter we shall extend the theory of wave propagation in a compressible fluid to a more general treatment, in which we shall take into account large amplitude wave propagation of supersonic flow (which involves nonlinear phenomena), and shock waves (which involve discontinuities in some of the dynamic and thermodynamic variables). We shall show, for example, that the wave front for a sound wave is the limiting case of the shock front for supersonic flow where the shock strength becomes infinitely weak and the flow field becomes linearized. It also appears that, for subsonic flow, there can be no wave propagation because of the different character of the partial differential equations (PDEs) that govern the flow. Transonic flow involves a transition between subsonic and supersonic flow. Steady flows in which this transition occurs are called mixed or transonic flows, and the surface where the transition occurs is called the transitional or sonic surface.
Keywords: Shock Wave, Shock Front, Shock Tube, Viscous Fluid, Field Point
| <urn:uuid:ce510c31-51da-4798-ab1e-2615fa3c1e7d> | 3.671875 | 280 | Truncated | Science & Tech. | 26.172 | 95,537,501 |
Joined: 11 Feb 2018
Posted: Thu 31 May - 08:34 (2018) Subject: Our space telescope has unique capabilities to observe
BEIJING, May 28 (Xinhua) -- Many black holes and neutron stars are thought to be hidden in the Milky Way. Since they don't emit visible light, or are covered by dust, only X-ray telescopes can find them.
China will soon launch its first X-ray space telescope, the Hard X-ray Modulation Telescope (HXMT), with the aim of surveying the Milky Way to observe celestial sources of X-rays.
"Our space telescope has unique capabilities to observe high-energy celestial bodies such as black holes and neutron stars. We hope to use it to resolve mysteries such as the evolution of black holes and the strong magnetic fields of neutron stars Sean Elliott Spurs Jersey ," says Zhang Shuangnan, lead scientist of HXMT and director of the Key Laboratory of Particle Astrophysics at the Chinese Academy of Sciences (CAS).
"We are looking forward to discovering new activities of black holes and studying the state of neutron stars under extreme gravity and density conditions, and the physical laws under extreme magnetic fields. These studies are expected to bring new breakthroughs in physics," says Zhang.
Compared with X-ray astronomical satellites of other countries, HXMT has a larger detection area, broader energy range and wider field of view. These give it advantages in observing black holes and neutron stars emitting bright X-rays, and it can more efficiently scan the galaxy, Zhang says.
The telescope will work in a wide energy range from 1 to 250 keV, enabling it to complete many observation tasks that previously required several satellites, according to Zhang.
Other satellites have already conducted sky surveys, and found many celestial sources of X-rays. However, the sources are often variable, and occasional intense flares can be missed in just one or two surveys, Zhang says.
New surveys can discover either new X-ray sources or new activities in known sources. So HXMT will repeatedly scan the Milky Way for active and variable celestial bodies emitting X-rays.
Zhang says other countries have launched about 10 X-ray satellites, but they have different advantages and therefore different observation focuses.
"There are so many black holes and neutron stars in the universe Nikola Milutinov Spurs Jersey , but we don't have a thorough understanding of any of them. So we need new satellites to observe more," Zhang says.
The study of black holes and neutron stars is often conducted through observing X-ray binary systems. The X-ray emissions of these binary systems are the result of the compact object (such as a black hole or neutron star) accreting matter from a regular companion star.
By analyzing binary system X-ray radiation, astronomers can study compact objects such as black holes or neutron stars.
How do the black holes or neutron stars accrete matter from companion stars? What causes X-ray flares? These are questions scientists want to answer, and China's new space telescope might help.
Lu Fangjun, chief designer of the payload of HXMT, says the space telescope will focus on the Galactic plane. If it finds any celestial body in a state of explosion, it will conduct high-precision pointed observation and joint multiband observation with other telescopes either in space or on the ground.
A lot of the origami airplane flying you do will be indoors, at home. Indoor flying has some major advantages - no airflow to carry off the plane or send it crashing to the earth. No rain to turn it into a soggy wreck. No sun getting in your eyes. No dead wood or bushes to gobble up the aircraft in their leaves. And indoor flying is great for paper airplane contests because it means everyone flies under the same conditions. (For more on paper airplane contests, see my web page origami-kids.) But indoor flying also has a few problems, namely walls and ceilings, not to mention furniture, to crash into. The thing to do is to turn these drawbacks into benefits. Here are some games you can play that take advantage of the fact that you're inside.
Paper Plane Golf
This game is a lot like the real game of golf, but you play it indoors using your arm and a paper airplane rather than outside using golf clubs and a ball, and it doesn't take nearly as long. You can play the game alone or with friends. We recommend using the Count, Slice, or Butterfly. Choose between three and nine landing spots (these will be your 'holes'), such as chairs, tables, and small rugs. They don't all have to be within one throw of each other; in fact they don't even have to be in the same room. In golf it's normal to have to hit a few shots between holes, and in this game you can set it up so that some holes require at least three to five throws. If you want, number the 'holes' with pieces of paper so you remember the order you're supposed to go in. The object of the game is to land on all the spots using the fewest number of throws. If you set up a challenging and fun course, write down where all the holes are so you can play the course again and improve your 'golf' game. This is an excellent game to play if you're sick and have to stay in bed. It's also fun if you're feeling lazy or watching TV. All you need is a paper airplane (almost any kind will do), about 15 feet of string or strong thread, and some tape. Tape the string to the plane near where you'd hold it to throw it. Then tie the other end of the string to your wrist. Choose or set up a target about 10 feet away from where you are and see how. | <urn:uuid:bc4055ca-48c6-4bfa-a98b-ccecce5d6a85> | 2.84375 | 1,421 | Comment Section | Science & Tech. | 49.968821 | 95,537,513 |
Infrared spectroscopy can be described as the use of instrumentation in measuring a physical property of matter, and the relating of the data to chemical composition. The instruments used are called infrared spectrophotometers, and the physical property measured is the ability of matter to absorb, transmit, or reflect infrared radiation.
Keywords: Electromagnetic Spectrum, Absorbance Unit, Band Shape, Nonlinear Absorbance, Chart Paper
| <urn:uuid:4c584e7b-8537-40e4-8c96-8d13b2029503> | 2.859375 | 94 | Truncated | Science & Tech. | 4.218825 | 95,537,518 |
One of the most interesting questions in solid state theory is the structure of glass, which has eluded researchers since the early 1900s. Since then, two competing models, the random network theory and the crystallite theory, have both gathered experimental support. Here, we present a direct, atomic-level structural analysis during a crystal-to-glass transformation, including all intermediate stages. We introduce disorder on a 2D crystal, graphene, gradually, utilizing the electron beam of a transmission electron microscope, which allows us to capture the atomic structure at each step. The change from a crystal to a glass happens suddenly, and at a surprisingly early stage. Right after the transition, the disorder manifests as a vitreous network separating individual crystallites, similar to the modern version of the crystallite theory. However, upon increasing disorder, the vitreous areas grow at the expense of the crystallites and the structure turns into a random network. Our results thereby show that, at least in the case of a 2D structure, both of the models can be correct, and can even describe the same material at different degrees of disorder.
For over 80 years, the popular concept of the atomic structure of a glass1 has been strongly influenced2,3 by the beautiful illustrations of a random network by Zachariasen4. However, this theory cannot easily be verified, since imaging of the atomic structure of a conventional glass has remained impossible. The direct study of atomic coordinates in a glass has therefore for the most part remained in the realm of theoretical models and computer simulations5,6. Although recent images of the 2D silica glass7,8 revealed vitreous regions, which look remarkably similar to Zachariasen's illustration, crystalline areas were also discovered. Due to small samples and limited statistics, it was impossible to determine whether these crystallites are an intrinsic part of the glass structure or a separate phase depending on the exact growth conditions of the material. Nevertheless, their existence gives much needed credibility to the crystallite theory2, which posits that a glass is a disordered arrangement of small crystallite particles separated by a disordered network. In a recent study9, also giving support for the crystallite model, the ratio of crystallites was found to be up to 50% in amorphous sputtered silicon. To finally resolve this issue, one would need a way to directly monitor the glass formation at the atomic level. However, the traditional way to form a glass, i.e., to quench a liquid10,11, renders this impossible.
In this study, in contrast to melting and quenching a solid, we gradually introduce disorder into graphene using an electron beam12, so that the material remains in the solid state. Although our way to introduce disorder in the crystal differs fundamentally from the traditional method for creating a glass by quenching a melt, we believe that it can lead to new insights into the structures of disordered materials. The two-dimensionality of graphene allows us to directly observe all atoms, and to sidestep the imaging problem of conventional materials. The energy of the electrons is chosen slightly above the damage threshold to keep the rate of transformation slow enough to allow accurate monitoring of every change in structure. The atomic network is altered both via removal of atoms13 and via bond rotations14, which occur at random locations and are sufficiently temporally spaced to be stochastic and mutually independent. The introduced disorder manifests in the formation of non-hexagonal carbon rings. As has been described earlier15,16, the graphene lattice can remain flat even upon introduction of pentagons and heptagons. At the initial stages of the experiment, small isolated defects appear within the lattice (Fig. 1 a), as has been discussed previously12. Here, we focus on the amorphous networks that form under increased irradiation dose, and have not been analyzed so far. Under continuous irradiation, defects grow until they form a vitreous network separating the original crystal into nanometer-sized crystallites (Fig. 1 b), as will be shown in more detail below. During the experiments, holes also appear in the graphene sheet. However, they are created by chemical effects driven by the electron beam13, and therefore not considered a part of the disordered structure. At the final stage, the crystallites vanish into the growing network (Fig. 1 c). Importantly, the structure remains two-dimensional throughout the transformation, providing direct views of the atomic structure at each stage.
In order to quantify the material-wide structural changes, we first apply Fourier analysis on selected atomically resolved TEM images during the transition. At low doses, the sixfold pattern of graphene is clearly visible (see the inset in Fig. 1 a). During the experiment, the peak-to-peak distance remains constant (Fig. 1 d; we attribute the variations to changes in instrumental parameters during the experiment), whereas the peak widths change significantly. The spread in radial direction, shown in Fig. 1 e, displays three different regimes: (1) at low doses, where the material contains isolated defects, it increases only slightly (from point a almost until b), whereas after merging of the defects into the vitreous network, (2) the spread accelerates (between points b and c), (3) finally saturating at the highest doses. The spreading of the peaks corresponds to decreasing size of ordered structures within the sample, and is direct evidence of decreasing crystallite size17. Evidently, at the highest doses, the minimum size has already been reached. A more direct measure for the disorder in the structure can be obtained by analyzing the spread of the peaks in azimuthal direction. As can be seen in Fig. 1 f, it appears to depend almost linearly on the dose, up to the fully disordered case (peak spread of 60°). Comparison between Figs. 1 e and f shows that the completion of the disorder coincides with the saturation of the decreasing crystallite size. Because the atomic changes imposed by electron irradiation on a sample depend on the acceleration voltage of the TEM, we introduce density deficit as a transferable proxy for the disorder (second x-axis in Fig. 1 f). It can be directly determined experimentally, as described in Ref. 13.
In modern aberration corrected TEM instruments, and for very thin samples, the obtained images can be directly interpreted as the projected atomic structure. However, the exact atomic configurations have been traditionally obtained manually from the images, which is both laborious and error-prone. Therefore, even though amorphization of graphene under a TEM has been reported earlier12, the analysis was limited to relatively small isolated defects. To circumvent this problem, we generated an automatic method for extracting the atomic coordinates directly from TEM images. First, we created model TEM images corresponding to carbon rings with different number of atoms (from pentagons to octagons), and then calculated a cross correlation map for each of them for each of the actual TEM images. The maxima within these maps mark the centers of carbon rings within the atomic network, from which the vertices of the polygons (i.e., atomic coordinates) were directly obtained assuming each atom has three neighbors. After obtaining the structures, we optimized them with conjugate gradient energy minimization method (to ensure optimized bond lengths, and to allow slight out-of-plane corrugations18). We estimate that inaccuracies in the obtained structures are less than 5% for non-hexagons, and much lower for the crystalline areas. Moreover, any deviations from the actual structure are random since all user-bias is eliminated, further enhancing the statistics. As shown in the supplementary information, the obtained structures very well match the experimental images. Atomic coordinates obtained with this method provide the basis for our analysis of intermediate stages during the structural transformation.
In Fig. 2, we present examples of atomic structures derived from the TEM images (the supplementary information contains all structures that were used in the present study), and highlight the three regimes that we identified above (crystalline with isolated defects, crystallite glass and random network). Fig. 2 a shows the first regime with a few isolated defects. Upon increased irradiation dose, the defects grow and begin to merge. Remarkably, this happens in such a way that isolated crystallites, separated by a vitreous network, are formed. Fig. 2 b shows an example case, where isolated crystallites can be clearly identified. We point out that although this structure somewhat resembles polycrystalline graphene with nanometer-range grains19,20, the vitreous network is much wider than typical graphene grain boundaries21,22,23 and covers a significantly higher fraction of the area. Upon further irradiation, the vitreous areas grow at the expense of the crystalline ones, resulting in a structure that has almost no extended hexagonal areas, and is well described by the random network theory. This final state is shown in Fig. 2 c, where the vitreous areas clearly dominate the structure.
In contrast to vitreous oxides like silica, which are built from multi-atomic tetrahedral structural units, the building block in our case is the carbon atom. Therefore, the shortest possible range of disorder in this material is at the interconnection and orientation between adjacent structural units2. To quantify this disorder, we compute the distribution of the inter-atomic distances from the atomic coordinates (radial distribution function, g(r)). This analysis was limited to the intermediate structures to ensure that the statistics for the longer inter-atomic distances (r ≥ 0.5 nm) remain meaningful (the number of holes and other complications in the structure increases during the experiment). Example cases are shown in Fig. 3 a–c for different regimes during the transformation. Although deviation from ideal inter-atomic distances (Fig. 3 a) are already clear close to the crystalline-to-glass transition (Fig. 3 b, see Fig. 3 e for the structure), the disappearance of the long range order only becomes complete after the crystallite structure has been formed (Fig. 3 c, see Fig. 3 f and 2 b for the structure). At this stage, g(r) is very similar to that obtained for the 2D silica glass7,8. We also calculate the bond angle distribution (α(θ)), presented in Fig. 3 d. Both the spread in the distribution around the ideal hexagon angle (θ = 120°) and the appearance of a second peak close to 108° are already clear for the structure of Fig. 3 e. These features are further enhanced for the higher disorder case (Fig. 3 f). The peak at 108° corresponds to pentagonal carbon rings, which are the second most common topological building blocks of networks built from three-coordinated structural units24. Although heptagonal rings are nearly as likely as pentagons, the corresponding peak, which should appear at about 129°, remains hidden within the data. This is caused by local curvature, which allows the larger rings to maintain bonding angles close to 120°.
To obtain a better understanding on changes in the topological order during the transformation, we further calculated the shortest path ring statistics. In Fig. 4 a–d, we plot the results for four different structures: the very first recorded TEM image, at the transition point between the crystalline and disordered phases (close to the structure of Fig. 3 e), for the structure presented in Figs. 2 b and 3 f, and at the completely disordered phase at the end of the experiment (Fig. 2 c), respectively. A gradually increasing spread is evident during the continuing experiment. Surprisingly, hexagons clearly dominate the statistics even at the highest degree of disorder (Fig. 4 d), similar to the random network structure proposed by Shackelford and Brown24 and in contrast to the 2D silica7,8. This discrepancy is likely due to the fact that the statistics for 2D silica are based on selected vitreous areas, disregarding crystallites, whereas our analysis was carried out for the complete structure. To facilitate comparison with these earlier studies, we have plotted our data in a log normal probability plot (Fig. 4 e). With increasing disorder, our results approach literature values7,8,24, with the above-mentioned caveat. As already hinted by the histograms in 4 a–d, our results show that the main change during the experiment is the decreasing ratio of hexagons to other polygons, whereas the ratios between other polygon pairs remain nearly constant. For example, the ratio of pentagons to heptagons is about 1.14 ± 0.02 throughout the experiment (this ratio is about 1.3 for the theoretical model of Ref. 24 and between 1.3 and 1.6 for the vitreous 2D silica7,8). This allows for controlled prediction of structural changes made on graphene under an electron beam25.
As a final step of the topological analysis, we calculate the ratio of hexagons to all polygons in the structure (i.e., crystallinity, C26) from ring statistics. For pristine graphene C = 1, whereas for the vitreous silica structures C ≈ 0.47,8. Our dataset, plotted in Fig. 4 f, allows for the first time to see the transition between these two extremities. A similar discontinuity, as was already seen in the analysis of the transformation in reciprocal space (Fig. 1 e), appears also within this data: as long as the disorder is limited to defects within the original lattice, crystallinity decreases slowly and linearly. However, as the defects merge into the vitreous network, crystallinity decreases more rapidly, and tends towards a constant ratio of 1/1 between hexagons and non-hexagons (corresponding to C ≈ 0.51). This result supports our earlier hypothesis that the discontinuity seen in Fig. 1 e marks the actual transition point between crystalline graphene and the glassy phase. It also shows that it is possible to create glass structures with different crystallinity values ranging from 0.5 to 0.9 at will. The crystallinity C fits very well to an exponential decay in Fig. 3e, and the final data points are almost at the constant offset C = 0.51 of this fit. This indicates that we have indeed reached the maximum amount of disorder at the end of the experiment, an equilibrium, where continued randomization of the structure no longer changes its statistical parameters.
Finally, we look at the density fluctuations in the material to establish the amount of order at the longest range. In order to do this, we measure density variation in the final structure (Fig. 2 c). We divide the structure into square sampling areas of different widths (w = 0.2 … 2.0 nm), and calculate the standard deviation in the calculated densities as a function of w (see Fig. 4 g–h). As expected, the density fluctuations become smaller when the integration size increases, that is, the structure appears smoother on the larger scale. However, as can be seen from the fit in Fig. 4 h, our data follows 1/w-behaviour, which is precisely what would be expected for completely random atomic coordinates (or white noise). This indicates the absence of long-range density fluctuations, or presence of hyperuniformity27.
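As an illustration of this windowed-density analysis, here is a minimal TypeScript sketch assuming uniformly random atom positions, the limiting case for which the 1/w scaling holds; all names and parameter values are illustrative rather than taken from the study:

function densityStd(points: [number, number][], size: number, w: number): number {
  const n = Math.floor(size / w); // number of sampling cells per side
  const counts = new Array(n * n).fill(0);
  for (const [x, y] of points) {
    const i = Math.min(n - 1, Math.floor(x / w));
    const j = Math.min(n - 1, Math.floor(y / w));
    counts[i * n + j] += 1; // atoms per cell
  }
  const densities = counts.map((c: number) => c / (w * w));
  const mean = densities.reduce((a: number, b: number) => a + b, 0) / densities.length;
  const variance =
    densities.reduce((a: number, d: number) => a + (d - mean) ** 2, 0) / densities.length;
  return Math.sqrt(variance);
}

const size = 30; // nm, roughly the field of view mentioned in the methods
const points = Array.from(
  { length: 13000 },
  () => [Math.random() * size, Math.random() * size] as [number, number]
);

// For fully random coordinates the standard deviation should scale as 1/w.
for (const w of [0.5, 1.0, 2.0]) {
  console.log(`w = ${w} nm, std = ${densityStd(points, size, w).toFixed(1)}`);
}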
In summary, we have reported a solid-state transition from crystalline mono-layer graphene into a 2D carbon glass, imposed in a controlled manner using electron irradiation. For the first time, our results shed light on the transitional states between crystalline and amorphous materials, and allow for the controlled creation of 2D carbon glass structures with a specific amount of disorder. The atomic structure of the material is obtained in situ from high-quality TEM images. The transition starts from separated point defects, which merge into a vitreous network separating small crystallites, resembling the crystallite theory of the atomic structure of a glass. At the final stage, according to both real space and reciprocal space analysis, the structure is indistinguishable from a 2D random network. Our study shows that the two competing theories of the structure of glass, the nano-crystalline theory and the random network model, provide complementary descriptions of the same material, each applying to a specific range of disorder, at least in two dimensions.
Our graphene samples were prepared by mechanical exfoliation with subsequent transfer to a TEM grid. An image side aberration corrected FEI Titan 80–300 was aligned for high resolution imaging at 100 kV. Images were recorded at a Scherzer defocus and the spherical aberration was set to ca. 20 μm. Under these conditions, the dark contrast can be directly interpreted as the atomic structure. The TEM image sequence of the amorphization was shown before (supplementary information of Refs. 12,13), but no analysis of the amorphous areas as shown here was done previously.
Energy minimization and structural optimization was carried out with the conjugate gradient method as implemented in the LAMMPS simulation package28,29. For the radial and angular distribution analysis, the obtained structures (which had about 13,000 atoms with sizes ca. 30 × 30 nm2) were embedded in a larger ideal lattice (with more than twice the number of atoms) to ensure realistic relaxation also at the edges of the field of view of the experimental images. This ideal lattice was excluded from the analysis after structural optimization. All structures (optimized without the surrounding lattice) used for the shortest path ring statistics are provided as supplementary material. For this analysis, we excluded all edges to minimize the errors in the interpretation caused by adsorbates and holes in the structure. The carbon-carbon interactions were modeled with the AIREBO potential30, as implemented in LAMMPS. Any inaccuracies stemming from the use of a semi-empirical interaction model are expected to be limited to the absolute values of the inter-atomic distances. Therefore, they have no influence on any of the conclusions made in the study.
We acknowledge funding from the Austrian Science Fund (FWF: M1481-N20 and I1283-N20), the German Ministry of Science (DFG), Research and the Arts (MWK) of the State of Baden-Wuertternberg within the SALVE (Sub-Angstrom Low-Voltage Electron microscopy) project, and University of Helsinki Funds, as well as computational time from the Vienna Scientific Cluster. | <urn:uuid:3b889572-0d97-4eae-8437-1bd979615e69> | 2.59375 | 3,812 | Academic Writing | Science & Tech. | 37.192053 | 95,537,531 |
Genome sequencing reveals extensive inbreeding in Scandinavian wolves
Researchers from Uppsala University and others have for the first time determined the full genetic consequences of intense inbreeding in a threatened species. The large-scale genomic study of the Scandinavian wolf population is reported in Nature Ecology & Evolution.
The Scandinavian wolf population was founded in the 1980s by only two individuals. This has subsequently led to intense inbreeding, which is considered a long-term threat to the population. To reveal the genetic consequences of inbreeding, the whole genome of some 100 Scandinavian wolves has now been analysed.
'Inbreeding has been so extensive that some individuals have entire chromosomes that completely lack genetic variation', says Hans Ellegren, Professor at the Evolutionary Biology Centre, Uppsala University and leader of the study. 'In such cases identical chromosome copies have been inherited from both parents.'
A surprising discovery was that also some immigrant wolves were partly inbred, and related. This was the case, for example, for two wolves that 2013 were translocated by management authorities from northernmost Sweden, due to conflict with reindeer husbandry, to southern Sweden. This is counter to the often-made assumption of unrelated and non-inbred founders when inbreeding is estimated from pedigrees.
'The degree of inbreeding determined at high precision with genome analysis agreed rather well with inbreeding estimated from established pedigrees', says Hans Ellegren. 'However, for stochastic reasons, some wolves were found to be a bit more, and others a bit less, inbred than estimated from pedigrees.'
Moreover, wolves were generally more inbred than expected from recent mating between relatives in the contemporary population. This is because the two copies of a chromosome in an individual can originate from one and the same ancestor further back in time.
The study is a collaboration between Uppsala University, Swedish University of Agricultural Sciences, Inland Norway University, and Norwegian Institute for Nature Research.
| <urn:uuid:9c5bf231-a64e-4d4e-9a5e-3d9a3f4b6e24> | 3.296875 | 404 | Truncated | Science & Tech. | 9.921425 | 95,537,535 |
Russell Standish wrote:
What I was asking is why you think "time-area" should be proportional to length. I can't see any reasoning as to what it should be proportional to.
Thanks for your interest in this. I did not make this any easier by bungling the initial concept a little in my first post. To directly answer your question, I am assuming space-time is a single entity, with time representing the spatial area of the multiverse. Therefore, the question you pose really wouldn't make sense. It would be like drawing a square and asking why height is proportional to length. The relationship is necessary.
Going back to all of our multiverse stacks with the cube on it, all these stacks would equal the time-area. This is the "depth" of the cube in the multiverse, that would allow the cube to store 10^300 bits of information. The time area equals the cube in its totality in the multiverse. So why, in our universe, can we only store information equal to the surface area? Well we know we don't have access to the whole cube, because we are not in all of the universes that this cube exists in. So we have to divide the cube by something to represent the fact that we are only on one stack. The proper divisor would be the length of the cube, because we exist on a time-line. The information that can be stored is limited to a single set of outcomes - a line along the plane of the time area (a stack of pictures).
This leaves us with the Holographic principle.
Please note this is an interesting concept (to me) I am proposing because the geometry of it makes sense when I picture it mentally. You or others much smarter than I will have to explain why this works or doesn't work mathematically in QM or TOR. Colin Bruce suggests in his book that the cube volume contains multiverse information (as a speculative ending to his book), and when I started thinking about it I realized that if you take the "multiverse block" concept seriously, and consider time a spatial dimension through the multiverse, a cube of space would only provide a full content of information before it was separated out into all of the individual outcomes as it moved through time (or how about "multiverse space"?).
A cube of space really does hold its volume in information. But we have to divide by time. Particularly, the length of the time plane, because the rest of the time area has been lost to the other outcomes/universes/stacks (or whatever allows you to conceptualize it the best). This is speculative (obviously). I'd like to hear some feedback, as this explains a lot (to me anyway) if the concept is right. | <urn:uuid:e66d825e-9dbb-46b8-bcc8-a011f3e31b3a> | 2.71875 | 572 | Comment Section | Science & Tech. | 51.717293 | 95,537,540 |
occam (programming language)
Designed by: David May
Stable release: 2.1 (official), 2.5 (unofficial), 3 (not fully implemented) / 1988+
Influenced by: Communicating sequential processes
occam is a concurrent programming language that builds on the communicating sequential processes (CSP) process algebra and shares many of its features. It is named after the philosopher William of Ockham, for whom Occam's razor is named.
occam is an imperative procedural language (such as Pascal). It was developed by David May and others at Inmos (trademark INMOS), advised by Tony Hoare, as the native programming language for their transputer microprocessors, but implementations for other platforms are available. The most widely known version is occam 2; its programming manual was written by Steven Ericsson-Zenith and others at Inmos.
In the following examples indentation and formatting are critical for parsing the code: expressions are terminated by the end of the line, lists of expressions need to be on the same level of indentation. This feature, named the off-side rule, is also found in other languages such as Haskell and Python.
Communication between processes works through named channels. One process outputs data to a channel via ! while another one inputs data with ?. Input and output cannot proceed until the other end is ready to accept or offer data. (In the case where it cannot proceed, the process is often said to block on the channel. However, the program will neither spin nor poll; thus terms like wait, hang or yield may also convey the behaviour, in the sense that other independent processes are not blocked from running.) Examples (c is a variable):
keyboard ? c
screen ! c
SEQ introduces a list of expressions that are evaluated sequentially. This is not implicit as it is in most other programming languages. Example:
SEQ
  x := x + 1
  y := x * x
PAR begins a list of expressions that may be evaluated concurrently. Example:
PAR
  p()
  q()
ALT specifies a list of guarded commands. The guards are a combination of a boolean condition and an input expression (both optional). Each guard for which the condition is true and the input channel is ready is successful. One of the successful alternatives is selected for execution. Example:
ALT
  count1 < 100 & c1 ? data
    SEQ
      count1 := count1 + 1
      merged ! data
  count2 < 100 & c2 ? data
    SEQ
      count2 := count2 + 1
      merged ! data
  status ? request
    SEQ
      out ! count1
      out ! count2
This will read data from channels c1 or c2 (whichever is ready) and pass it into a merged channel. If countN reaches 100, reads from the corresponding channel will be disabled. A request on the status channel is answered by outputting the counts to out.
occam 1 (released 1983) was a preliminary version of the language which borrowed from David May's work on EPL and Tony Hoare's CSP. This supported only the VAR data type, which was an integral type corresponding to the native word length of the target architecture, and arrays of only one dimension.
occam 2 is an extension produced by Inmos Ltd in 1987 that adds floating-point support, functions, multi-dimensional arrays and more data types such as varying sizes of integers (INT16, INT32) and bytes.
With this revision, occam became a language capable of expressing useful programs, whereas occam 1 was more suited to examining algorithms and exploring the new language (however, the occam 1 compiler was written in occam 1, so there is an existence proof that reasonably sized, useful programs could be written in occam 1, despite its limitations).
occam 2.1 was the last of the series of occam language developments contributed by Inmos. Defined in 1994, it was influenced by an earlier proposal for an occam 3 language (also referred to as "occam91" during its early development) created by Geoff Barrett at Inmos in the early 1990s. A revised Reference Manual describing occam 3 was distributed for community comment, but the language was never fully implemented in a compiler.
occam 2.1 introduced several new features to occam 2, including:
- Named data types (DATA TYPE x IS y)
- Named records
- Packed records
- Relaxation of some of the type conversion rules
- New operators (e.g. BYTESIN)
- Channel retyping and channel arrays
- Ability to return fixed-length array from function.
For a full list of the changes see Appendix P of the Inmos occam 2.1 Reference Manual.
occam-π is the common name for the occam variant implemented by later versions of the Kent Retargetable occam Compiler (KRoC). The addition of the symbol π (pi) to the occam name is an allusion to KRoC occam including several ideas inspired by the pi-calculus. It contains several significant extensions to the occam 2.1 compiler, for example:
- Communicating sequential processes
- The XC Programming Language, which is based on occam but with C-style syntax.
- Concurrent programming languages
- List of concurrent and parallel programming languages
- Inmos (1995-05-12). occam 2.1 Reference Manual (PDF). SGS-Thomson Microelectronics Ltd. Inmos document 72 occ 45 03
- Inmos (1984). occam Programming Manual. Prentice-Hall. ISBN 0-13-629296-8.
- Ericsson-Zenith, Steven (1988). occam 2 Reference Manual. Prentice-Hall. ISBN 0-13-629312-3.
- Cook, Barry M; Peel, RMA (1999-04-11). "Occam on Field-Programmable Gate Arrays". In Barry M. Cook. Architectures, Languages and Techniques for Concurrent Systems. 22nd World Occam and Transputer User Group Technical Meeting. Keele, United Kingdom: IOS Press. p. 219. ISBN 90 5199 480 X. Retrieved 2016-11-28.
- Barrett, Geoff; Ericsson-Zenith, Steven (1992-03-31). "occam 3 Reference Manual" (PDF). Inmos. Retrieved 2008-03-24.
- Barnes, Fred; Welch, Peter (2006-01-14). "occam-pi: Blending the best of CSP and the pi-calculus". Retrieved 2006-11-24.
- Communicating Process Architectures 2007 – WoTUG-30. IOS Press. 2007. pp. 513 pages. ISBN 978-1-58603-767-3.
- Communicating Process Architectures 2006 – WoTUG-29. IOS Press. 2006. pp. 391 pages. ISBN 978-1-58603-671-3.
- Communicating Process Architectures 2005 – WoTUG-28. IOS Press. 2005. pp. 405 pages. ISBN 978-1-58603-561-7.
- Kerridge, Jon, ed. (1993). Transputer and Occam Research: New Directions. IOS Press. pp. 253 pages. ISBN 0-8247-0711-7.
- Roscoe, Andrew William; Hoare, Charles Antony Richard (1986). The Laws of Occam Programming. Programming Research Group, Oxford University.
- Egorov, A., Technical University – Sofia, (1983-2011) Записки по Компютърни архитектури [Notes on Computer Architectures]
- Information, compilers, editors and utilities at the WoTUG occam pages.
- Compilers, documentation, examples, projects and utilities at the Internet Parallel Computing Archive (no longer maintained).
- Occam books on Transputer.net.
- David May's Occam page.
- Fred Barnes' occam tutorial.
- The occam-pi language.
- Tock occam compiler – (translator from occam to C from Kent) a Haskell-based compiler for occam and related languages. | <urn:uuid:e78a4a8d-cc33-4c7b-9dcc-1c3739ac3ae8> | 3.5 | 1,738 | Knowledge Article | Software Dev. | 61.018601 | 95,537,550 |
TeachMeFinance.com - explain Pulse Length
Pulse Length --
The linear distance in range occupied by an individual pulse from a radar: h = c * t, where t is the duration of the transmitted pulse, c is the speed of light, and h is the length of the pulse in space. Note that in the radar equation the length h/2 is actually used for calculating pulse volume, because we are only interested in signals that arrive back at the radar simultaneously. This is also called a pulse width.
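As a quick worked example in code (the pulse duration below is only illustrative):

// Pulse length h = c * t.
const c = 3.0e8;        // speed of light in m/s (approximate)
const t = 1.0e-6;       // pulse duration: 1 microsecond, illustrative
const h = c * t;        // 300 m of space occupied by the pulse
console.log(h, h / 2);  // h/2 = 150 m is the length used in the radar equation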
Copyright © 2005 by Mark McCracken, All Rights Reserved. TeachMeFinance.com is an informational website, and should not be used as a substitute for professional medical, legal or financial advice. Information presented at TeachMeFinance.com is provided on an "AS-IS" basis. Please read the disclaimer for details. | <urn:uuid:61ebc5bd-ce23-4501-8676-5954cc081dcd> | 3.25 | 179 | Knowledge Article | Science & Tech. | 49.892843 | 95,537,596 |
Software Architecture: The 5 Patterns You Need to Know
Whether you're a software architect or a developer, it always pays to know the patterns used in a given architecture. Here are five of the most important ones.
When I was attending night school to become a programmer, I learned several design patterns: singleton, repository, factory, builder, decorator, etc. Design patterns give us a proven solution to existing and recurring problems. What I didn’t learn was that a similar mechanism exists on a higher level: software architecture patterns. These are patterns for the overall layout of your application or applications. They all have advantages and disadvantages. And they all address specific issues.
The layered pattern is probably one of the most well-known software architecture patterns. Many developers use it, without really knowing its name. The idea is to split up your code into “layers”, where each layer has a certain responsibility and provides a service to a higher layer.
There isn’t a predefined number of layers, but these are the ones you see most often:
- Presentation or UI layer
- Application layer
- Business or domain layer
- Persistence or data access layer
- Database layer
The idea is that the user initiates a piece of code in the presentation layer by performing some action (e.g. clicking a button). The presentation layer then calls the underlying layer, i.e. the application layer. Then we go into the business layer and finally, the persistence layer stores everything in the database. So higher layers are dependent upon and make calls to the lower layers.
You will see variations of this, depending on the complexity of the applications. Some applications might omit the application layer, while others add a caching layer. It’s even possible to merge two layers into one. For example, the ActiveRecord pattern combines the business and persistence layers.
As mentioned, each layer has its own responsibility. The presentation layer contains the graphical design of the application, as well as any code to handle user interaction. You shouldn’t add logic that is not specific to the user interface in this layer.
The business layer is where you put the models and logic that is specific to the business problem you are trying to solve.
The application layer sits between the presentation layer and the business layer. On the one hand, it provides an abstraction so that the presentation layer doesn’t need to know the business layer. In theory, you could change the technology stack of the presentation layer without changing anything else in your application (e.g. change from WinForms to WPF). On the other hand, the application layer provides a place to put certain coordination logic that doesn’t fit in the business or presentation layer.
Finally, the persistence layer contains the code to access the database layer. The database layer is the underlying database technology (e.g. SQL Server, MongoDB). The persistence layer is the set of code to manipulate the database: SQL statements, connection details, etc.
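To make the call chain concrete, here is a minimal TypeScript sketch of the layered pattern; all names (OrderRepository, OrderService, OrderApplication) are hypothetical, and the in-memory map merely stands in for the persistence and database layers:

// Persistence layer: hides the storage details from the layers above.
class OrderRepository {
  private orders = new Map<string, { id: string; total: number }>();
  save(order: { id: string; total: number }): void {
    this.orders.set(order.id, order); // stands in for an INSERT statement
  }
}

// Business layer: domain rules only, no UI or SQL concerns.
class OrderService {
  constructor(private repository: OrderRepository) {}
  placeOrder(id: string, total: number): void {
    if (total <= 0) throw new Error("Order total must be positive");
    this.repository.save({ id, total });
  }
}

// Application layer: thin coordination between presentation and business.
class OrderApplication {
  constructor(private service: OrderService) {}
  handlePlaceOrder(id: string, total: number): void {
    this.service.placeOrder(id, total);
  }
}

// Presentation layer: reacts to user input and only ever calls downward.
const app = new OrderApplication(new OrderService(new OrderRepository()));
app.handlePlaceOrder("order-1", 42); // e.g. triggered by a button click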
Pros:
- Most developers are familiar with this pattern.
- It provides an easy way of writing a well-organized and testable application.

Cons:
- It tends to lead to monolithic applications that are hard to split up afterward.
- Developers often find themselves writing a lot of code to pass through the different layers, without adding any value in these layers. If all you are doing is writing a simple CRUD application, the layered pattern might be overkill for you.

Ideal for:
- Standard line-of-business apps that do more than just CRUD operations
The microkernel pattern, or plug-in pattern, is useful when your application has a core set of responsibilities and a collection of interchangeable parts on the side. The microkernel will provide the entry point and the general flow of the application, without really knowing what the different plug-ins are doing.
An example is a task scheduler. The microkernel could contain all the logic for scheduling and triggering tasks, while the plug-ins contain specific tasks. As long as the plug-ins adhere to a predefined API, the microkernel can trigger them without needing to know the implementation details.
Another example is a workflow. The implementation of a workflow contains concepts like the order of the different steps, evaluating the results of steps, deciding what the next step is, etc. The specific implementation of the steps is less important to the core code of the workflow.
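Here is a rough TypeScript sketch of the task scheduler example; the Task interface and all names are hypothetical, and a real microkernel would typically discover and load plug-ins dynamically rather than registering them inline:

// The predefined API that every plug-in must adhere to.
interface Task {
  name: string;
  run(): void;
}

// The microkernel: it knows how to register and trigger tasks,
// but nothing about what the individual tasks actually do.
class Scheduler {
  private tasks: Task[] = [];
  register(task: Task): void {
    this.tasks.push(task);
  }
  runAll(): void {
    for (const task of this.tasks) {
      task.run(); // the implementation details live in the plug-in
    }
  }
}

// Plug-ins: interchangeable parts, possibly developed by separate teams.
const scheduler = new Scheduler();
scheduler.register({ name: "cleanup", run: () => console.log("Removing temp files") });
scheduler.register({ name: "report", run: () => console.log("Mailing the daily report") });
scheduler.runAll();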
Pros:
- This pattern provides great flexibility and extensibility.
- Some implementations allow for adding plug-ins while the application is running.
- Microkernel and plug-ins can be developed by separate teams.

Cons:
- It can be difficult to decide what belongs in the microkernel and what doesn't.
- The predefined API might not be a good fit for future plug-ins.

Ideal for:
- Applications that take data from different sources, transform that data and write it to different destinations
- Workflow applications
- Task and job scheduling applications
CQRS is an acronym for Command and Query Responsibility Segregation. The central concept of this pattern is that an application has read operations and write operations that must be totally separated. This also means that the model used for write operations (commands) will differ from the read models (queries). Furthermore, the data will be stored in different locations. In a relational database, this means there will be tables for the command model and tables for the read model. Some implementations even store the different models in totally different databases, e.g. SQL Server for the command model and MongoDB for the read model.
This pattern is often combined with event sourcing, which we’ll cover below.
How does it work exactly? When a user performs an action, the application sends a command to the command service. The command service retrieves any data it needs from the command database, makes the necessary manipulations and stores that back in the database. It then notifies the read service so that the read model can be updated. This flow can be seen below.
When the application needs to show data to the user, it can retrieve the read model by calling the read service, as shown below.
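A minimal sketch of that flow in Python, with invented order-handling names: the command service owns the write model, validates, and then notifies the read service, which maintains a denormalized read model for cheap lookups.

class ReadService:
    def __init__(self):
        self._read_model = {}  # denormalized, query-friendly data

    def on_order_placed(self, order_id, total):
        self._read_model[order_id] = {"id": order_id, "total": total}

    def get_order(self, order_id):
        return self._read_model[order_id]  # no joins, just a lookup

class CommandService:
    def __init__(self, read_service):
        self._write_store = {}  # the command-side tables
        self._read_service = read_service

    def place_order(self, order_id, total):
        if total <= 0:
            raise ValueError("An order total must be positive.")
        self._write_store[order_id] = total  # update the command model
        self._read_service.on_order_placed(order_id, total)  # sync the read model

reads = ReadService()
commands = CommandService(reads)
commands.place_order("o-42", 99.95)
print(reads.get_order("o-42"))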
Pros:
- Command models can focus on business logic and validation, while read models can be tailored to specific scenarios.
- You can avoid complex queries (e.g. joins in SQL), which makes the reads more performant.
Cons:
- Keeping the command and the read models in sync can become complex.
Ideal for:
- Applications that expect a high amount of reads
- Applications with complex domains
As I mentioned above, CQRS often goes hand in hand with event sourcing. This is a pattern where you don’t store the current state of your model in the database, but rather the events that happened to the model. So when the name of a customer changes, you won’t store the value in a “Name” column. You will store a “NameChanged” event with the new value (and possibly the old one too).
When you need to retrieve a model, you retrieve all its stored events and reapply them on a new object. We call this rehydrating an object.
A real-life analogy of event sourcing is accounting. When you add an expense, you don’t change the value of the total. In accounting, a new line is added with the operation to be performed. If an error was made, you simply add a new line. To make your life easier, you could calculate the total every time you add a line. This total can be regarded as the read model. The example below should make it more clear.
You can see that we made an error when adding Invoice 201805. Instead of changing the line, we added two new lines: first, one to cancel the wrong line, then a new and correct line. This is how event sourcing works. You never remove events, because they have undeniably happened in the past. To correct situations, we add new events.
Also, note how we have a cell with the total value. This is simply a sum of all values in the cells above. In Excel, it automatically updates so you could say it synchronizes with the other cells. It is the read model, providing an easy view for the user.
Event sourcing is often combined with CQRS because rehydrating an object can have a performance impact, especially when there are a lot of events for the instance. A fast read model can significantly improve the response time of the application.
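A minimal event-sourcing sketch in Python, reusing the NameChanged example from above. The event-store layout and the function names are invented; a real system would persist events durably and might snapshot state to speed up rehydration.

events = []  # the append-only event store

def record(event_type, data):
    events.append((event_type, data))  # events are added, never removed

class Customer:
    def __init__(self):
        self.name = None

    def apply(self, event_type, data):
        if event_type == "NameChanged":
            self.name = data["new_name"]

def rehydrate():
    """Rebuild the current state by replaying every stored event in order."""
    customer = Customer()
    for event_type, data in events:
        customer.apply(event_type, data)
    return customer

record("NameChanged", {"old_name": None, "new_name": "Alice"})
record("NameChanged", {"old_name": "Alice", "new_name": "Alicia"})  # a correction
print(rehydrate().name)  # -> Alicia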
Pros:
- This software architecture pattern can provide an audit log out of the box. Each event represents a manipulation of the data at a certain point in time.
Cons:
- It requires some discipline, because you can't just fix wrong data with a simple edit in the database.
- It's not a trivial task to change the structure of an event. For example, if you add a property, the database still contains events without that data. Your code will need to handle this missing data gracefully.
Ideal for applications that
- Need to publish events to external systems
- Will be built with CQRS
- Have complex domains
- Need an audit log of changes to the data
When you write your application as a set of microservices, you’re actually writing multiple applications that will work together. Each microservice has its own distinct responsibility and teams can develop them independently of other microservices. The only dependency between them is the communication. As microservices communicate with each other, you will have to make sure messages sent between them remain backwards-compatible. This requires some coordination, especially when different teams are responsible for different microservices.
A diagram can explain.
In the above diagram, the application calls a central API that forwards the call to the correct microservice. In this example, there are separate services for the user profile, inventory, orders, and payment. You can imagine this is an application where the user can order something. The separate microservices can call each other too. For example, the payment service may notify the orders service when a payment succeeds. The orders service could then call the inventory service to adjust the stock.
There is no clear rule of how big a microservice can be. In the previous example, the user profile service may be responsible for data like the username and password of a user, but also the home address, avatar image, favorites, etc. It could also be an option to split all those responsibilities into even smaller microservices.
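Here is a toy Python sketch of the central API described above. In a real deployment each microservice would be a separate process reached over HTTP or a message bus; in this sketch the services are plain objects, and all the names are invented.

class OrderService:
    def handle(self, request):
        return f"created an order for {request['sku']}"

class InventoryService:
    def handle(self, request):
        return f"adjusted stock for {request['sku']}"

class ApiGateway:
    """Forwards each incoming call to the correct microservice."""
    def __init__(self):
        self._routes = {
            "orders": OrderService(),
            "inventory": InventoryService(),
        }

    def dispatch(self, service_name, request):
        return self._routes[service_name].handle(request)

gateway = ApiGateway()
print(gateway.dispatch("orders", {"sku": "ABC-123"}))
print(gateway.dispatch("inventory", {"sku": "ABC-123"}))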
Pros:
- You can write, maintain, and deploy each microservice separately.
- A microservices architecture should be easier to scale, as you can scale only the microservices that need to be scaled. There's no need to scale the less frequently used pieces of the application.
- It's easier to rewrite pieces of the application because they're smaller and less coupled to other parts.
Cons:
- Contrary to what you might expect, it's actually easier to write a well-structured monolith at first and split it up into microservices later. With microservices, a lot of extra concerns come into play: communication, coordination, backward compatibility, logging, etc. Teams that miss the necessary skill to write a well-structured monolith will probably have a hard time writing a good set of microservices.
- A single action of a user can pass through multiple microservices. There are more points of failure, and when something does go wrong, it can take more time to pinpoint the problem.
Ideal for:
- Applications where certain parts will be used intensively and need to be scaled
- Services that provide functionality to several other applications
- Applications that would become very complex if combined into one monolith
- Applications where clear bounded contexts can be defined
I’ve explained several software architecture patterns, as well as their advantages and disadvantages. But there are more patterns than the ones I’ve laid out here. It is also not uncommon to combine several of these patterns. They aren’t always mutually exclusive. For example, you could have several microservices and have some of them use the layered pattern, while others use CQRS and event sourcing.
The important thing to remember is that there isn’t one solution that works everywhere. When we ask the question of which pattern to use for an application, the age-old answer still applies: “it depends.” You should weigh in on the pros and cons of a solution and make a well-informed decision.
Published at DZone with permission of Peter Morlion, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | <urn:uuid:6a4276f3-8554-479c-a2b7-4e7693722b2e> | 2.90625 | 2,656 | Listicle | Software Dev. | 44.515107 | 95,537,599 |
Using the perspective of the last few centuries and millennia, speakers in a press conference at the Fall Meeting of the American Geophysical Union in San Francisco will discuss the latest research involving climate reconstructions and different climate models.
The press conference features Caspar Ammann of the National Center for Atmospheric Research (NCAR), Boulder, Colo.; Drew Shindell of NASA's Goddard Institute for Space Studies, New York; and Tom Crowley of Duke University, Durham, N.C. The press conference is at 5 p.m. EST, Thursday, December 11 in the Moscone Convention Center West, Room 2012.
Changes in the sun's activity have been considered responsible for some part of past climatic variations. Although useful measurements of solar energy are limited to the last 25 years of satellite data, this record is not long enough to confirm potential trends in solar energy changes over time. Tentative connections between measured solar activity and proxies such as sunspots or the production of specific particles in the Earth's atmosphere (carbon-14 and beryllium-10) have been used to estimate past solar energy.
Rob Gutro | GSFC | <urn:uuid:c188284a-903d-4ddd-aa11-ca4a60d0422f> | 2.875 | 858 | Content Listing | Science & Tech. | 39.22 | 95,537,603 |
Turbulence and mixing
Think you understand turbulence and mixing in fluids? Here’s a thought experiment for you. When you’ve thoroughly done the thought experiment, try it in practice.
Water flowing down a pipe slowly isn’t turbulent. As the flow rate increases, it becomes turbulent. Mixing is increased by turbulence, right?
The water in the pipe from your hot water tank to your hot tap is cold when the tap hasn’t been used for a while. When you run the tap, you can run it slowly, a bit faster, or very fast. Will the temperature of the water coming out of the tap rise over longer or shorter periods in each case?
Actually, there’s a slightly better question than that. Rather than looking at the change of temperature versus time, look at the change of temperature versus volume of water delivered. Imagine you’re filling a series of thirty glasses of water, and measuring the temperature in each. Then draw a graph of temperature versus volume of water delivered. It takes longer to fill the thirty glasses with a slow-running tap, of course!
The diagram shows the results. Think about what’s going on, and see if you can work out which graph is the slow-running tap, and which the fast running one.
Then, ideally, do the experiment and see if you were right. I’ve had a classful of students do it, and most of them had got it right – but then, I’d been teaching them about turbulence. I’ve not tried it with an uninitiated class.
No, I’m not giving you the answer – do the experiment yourself! Or email me if you really want to...
But online, I’ve tried it on several nuclear engineers. And they got it wrong, and were so confident in their incorrect thinking that they couldn’t be bothered to do the experiment, which is worrying. Turbulence in fluids is an important topic for anyone trying to extract heat from a nuclear reactor, whether to drive a turbine in normal service, or to prevent a meltdown in an emergency.
Several of them also thought that bathwater consistently swirls down the plughole in the same direction in the northern hemisphere, and in the opposite direction in the southern hemisphere. It's not rocket science*, chaps – and it's an experiment you can do. Easily. And your misunderstanding of the science behind it is, again, rather worrying.
* Actually, rocket science isn’t really all that difficult either. Rocket technology rather more so. | <urn:uuid:8cf8b61a-7085-4208-808d-6ae9a7691b29> | 3.03125 | 543 | Personal Blog | Science & Tech. | 61.142736 | 95,537,606 |
Human beings produce a lot of energy that mostly goes to waste.
We are looking for new ways of producing power from human body heat, and new methods of converting that heat to useful ends have been introduced: we can now charge our mobiles and laptops with the help of our bodies' heat. 04-10-16
Thermoelectric generators: Yangliang Zhang is a mechanical and biomedical engineer and the inventor of these thermoelectric generators; he is working with the US Department of Energy and looking to advance them. Thermoelectric generators convert the waste heat of a vehicle's hot engine exhaust into electricity, and this electrical energy boosts the efficiency of the vehicle's accessories. The generators consist of a few semiconductor elements that produce a voltage when their two sides are at different temperatures; the result is an electric current, and the whole process releases no carbon dioxide.
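To give a feel for the scale involved, here is a toy order-of-magnitude calculation in Python of a thermoelectric module's output. The Seebeck coefficient, temperature difference, and internal resistance are invented but plausible module-level values, not figures from this article.

SEEBECK_V_PER_K = 0.05      # assumed net Seebeck coefficient of a module, V/K
DELTA_T_K = 10.0            # assumed hot-to-cold side temperature difference, K
INTERNAL_RESISTANCE = 5.0   # assumed module resistance, ohms

open_circuit_voltage = SEEBECK_V_PER_K * DELTA_T_K  # V = S * dT
max_power = open_circuit_voltage ** 2 / (4 * INTERNAL_RESISTANCE)  # matched load
print(f"{open_circuit_voltage:.2f} V open-circuit, about {max_power * 1000:.0f} mW")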
CONVERSION OF BODY HEAT TO ELECTRICITY BY A LIGHTWEIGHT PATCH
A Korean institute has recently developed a flexible thermoelectric patch that can generate electrical power from the heat of the human body, a discovery that has opened new paths for invention in this field. The patch is formed from a glass-fiber fabric, and it generates an electric current from the difference in surface temperature between our skin and the air.
PIEZOELECTRICITY
Another method is known as piezoelectricity, which means electricity produced through pressure. In this approach, the human heart has been used to produce energy through its pumping pressure; such devices use a tiny amount of power, around a millionth of a watt. Nanotechnology researchers have also produced power shirts with fibers containing gold and zinc oxide: when we move, the fibers rub against each other and produce a current.
HOW WE COULD GAIN POWER FROM HUMAN URINE
Generating power from human urine is one of the most surprising inventions I have ever seen. In the near future we may see methods in which urine applied to microbial fuel cells yields enough power to charge a mobile phone. The whole method works like this: bacteria are kept alive inside ceramic cylinders, and when urine passes over the cylinders, the microbial fuel cells feed the bacteria with the chemicals present in it; the result of the whole process is electricity. This is useful for charging a phone in remote areas where no electricity supply is available, or during load shedding. It has even been suggested that, in the near future, robots could gain power from human urine collected through public toilets.
HUMANS USED AS A LIVING BATTERY
Fujifilm and the Advanced Industrial Science and Technology institute (Japanese organizations) have recently created a sheet-type device that can capture human body heat and convert it into energy. The idea actually came from the film The Matrix, in which humans act as batteries. The sheet works by exploiting the difference between air temperature and human body temperature; it is small, but its performance is remarkable, and electricity is generated from that temperature difference.
THERMOELECTRIC WRISTWATCHES
Thermoelectric devices are taking a unique position and becoming very common in daily life. We rarely realize that our bodies can be an energy source, yet the human body can produce up to 116 watts. A thermoelectric wristwatch has been introduced by a company named Perpetual Power. The watch attaches to the wrist and converts human heat into useful energy that we can put to our desired purposes. Development of these watches is at an early stage, but further advances should follow. Human body heat is now quite useful for generating energy that can be utilized in many important ways. We never realized that one day our bodies could produce electricity, but science has proved it true through the work of scientists and biologists.
| <urn:uuid:c7cea756-0f94-4d77-9a54-a8a8ba6db144> | 2.984375 | 1,430 | Content Listing | Science & Tech. | 43.736438 | 95,537,608 |
What would a mole of mammal DNA look like? Smell like? Feel like?
How about a mole of plant DNA?
To have a mole of something you need a finite number of regular molecules. 6.022*10^23 to be exact. DNA comes in long chains or little snippets, and depending on its nucleotide makeup would come in different molecular weights even if it was of uniform length. So I don't know if you can even have a 'mole of DNA' in the general sense.
When it's extracted from cells in preparation for sequencing or PCR, it looks a lot like, well, snot.
In regards to looking like snot, I don't think there would be much difference between plant and animal DNA. Feels? Probably slimey.
So maybe I should have stated a mole of human DNA and a mole of cauliflower DNA. Because each species have very similar DNA and each cell in an individual entity such as human have the same length, size DNA but only choose to read a segment of it (This DNA was formed when the person first formed and became a zygote cell). How many moles of cells in a human? So it should be attainable.
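As a rough sanity check on the numbers in this thread, here is a back-of-envelope calculation in Python. The genome size, the per-base-pair molar mass, and the cell count are assumed round figures, not exact values.

AVOGADRO = 6.022e23
BP_PER_HAPLOID_GENOME = 3.1e9   # assumed human haploid genome size, base pairs
GRAMS_PER_MOL_PER_BP = 650.0    # assumed average molar mass of one base pair
CELLS_PER_BODY = 3.7e13         # assumed cell count of an adult human

# Treating one haploid genome's worth of DNA as the "molecule":
genome_molar_mass = BP_PER_HAPLOID_GENOME * GRAMS_PER_MOL_PER_BP  # g/mol
print(f"a mole of human genomes ~ {genome_molar_mass / 1e12:.1f} million tonnes")

# How close is one whole body to a mole of base pairs? (diploid cells)
bp_per_body = CELLS_PER_BODY * 2 * BP_PER_HAPLOID_GENOME
print(f"base pairs in one body ~ {bp_per_body / AVOGADRO:.2f} mol")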
DNA is a molecule of variable length. Some pieces of DNA are quite long, while some can be quite short. Thus, it makes no sense to talk about "a dozen DNA," nor does it make sense to talk about "a mole of DNA."
Either way, your question is just asking about the physical qualities of a large amount of pure DNA, and doesn't really depend on the interpretation of the word "mole."
Do most single type proteins in mass look like this?
So just say in a human, you say there are many different types of DNA? Is that because there are 23 pairs of different chromosomes and the DNA in them are different? Are all these different types present in the zygote cell? How many different types are there?
It's a cloudy whitish color.
There are 23 pairs of chromosomes, each of different length, and the code carried by each is indeed different. Furthermore, when you extract DNA, you necessarily damage it. What you end up with is not 23 pairs of nice tidy complete DNA strands -- you end up with a mush of millions of broken pieces of DNA of all different lengths. Then you use a gene sequencing machine (and a supercomputer) to fit the jigsaw puzzle back together again.
Isn't snot green by definition?
Only if you've got a cold, I think.
"green" means sinusitis. arildno - do you have some kind of perennial infection?
Not really. At other times, thiough, I would call it "icky-stuff-from-the-nose", rather than "snot".
The PBS documentary "Journey of Man" by Spencer Wells shows this in one scene. (great show, BTW)
Maybe dried snot. If there are no salts or proteins contaminating the preparation, it's actually darned hard to see when completely isolated. Usually just a clear-ish speck on the bottom of the tube that you can only be sure is there when you start reconstituting it with water and see where the water flow changes (in the 10-100 microgram range...how many moles would depend on the size of the DNA strand, which varies with species, but a LOT). Salts in your preparation will leave the pellet looking a bit cloudier...easier to see, but not pure (and then the folks that do the sequencing for you yell at you and send it back to be purified better :grumpy:).
Moonbear - the voice of experience?
That is after you have spun it down. When you lyse cells and the DNA is released, you get very long snotty fibers of DNA. Samples of genomic DNA that have a high concentration can be hard to pipet because of the snotty nature of the sample.
Oops, I realized that I've only dealt with plasmid DNA (to ultimately use as templates for making probes for in situ hybridization), and throw away the snotty genomic DNA in everything I do, so forgot about what that part looks like; I have better recollection of those stubborn pellets.
I'm really digging our highly intellectual discussions of snot. :rofl:
You would prefer disordered fragments of long chain complexes?
DNA snippets also have liquid crystal states, when in the DNA is at high concentration in water... those solutions are pretty in optical microscopy... there is some dependency of the optical microscopy on the helicity, temp, concentration, etc. I have some friends preparing a publication.
but yeah -- by itself... whitish snot. :yuck:
| <urn:uuid:2337f736-05f7-4d87-b136-9f36b8d095fb> | 3.109375 | 1,003 | Comment Section | Science & Tech. | 64.431972 | 95,537,621 |
Imagine a beautiful moonlit night. You're standing on a Bahamian beach watching the ocean waves. A mermaid appears on the beach and asks you a fateful question. You know that if you answer the question correctly, you'll win a visit to her underwater crystal palace in a blue hole cave, a lock of her hair, and good fortune throughout your life. But, if you answer incorrectly, your trip will be one way, pulled down into the gaping mouth of a blue hole, perhaps to feed "Lusca" (Palmer, 1986).
This is a Bahamian folk tale created from the mysteriousness of the Bahamas' blue holes, the underwater caverns that weave through the limestone banks of the Bahamas. In this presentation, I will discuss:
· the folk tale of "Lusca,"
· when and how the blue holes formed,
· the exploration of blue holes, and finally
· the flora and fauna that make their homes in Bahamian blue holes.
II. Folklore of Bahamian blue holes
1. Fishermen, farmers, and children of the Bahamas know this sea monster as the Lusca: a half octopus, half shark, or sometimes half eel, half squid (Palmer, 1987; Palmer, 1985, p. 73).
2. Lusca inhabits the blue holes and draws fishermen and their boats down as it inhales, then exhales the indigestible flotsam (Palmer, 1987).
3. Or some say that the Lusca shoots out his tentacles and captures boats and fishermen that way (Benjamin, 1970).
B. The causes of Lusca
1. "The appetite of the legendary creature" is the result of swift tidal changes that cause whirlpools on inflow, and cold, mushrooming mounds when the currents reverse and blow out of the caves (Palmer, 1987; Palmer, 1990).
2. These whirlpools surge and boil and hold the potential to pull unwary swimmers into the depths of the blue holes (Belleville, 1994).
III. When and how blue holes formed
· Formed by chemical actions occurring in the mixing zone
· Formed during the ice ages of the last million years or so
A. Bahama Banks
1. More than 5km thick and made of accumulated marine and wind-blown sediments (Palmer, 1986) that represent over a hundred million years of slow sediment accumulation in shallow seas (Palmer, 1985).
2. These horizontal layers of sediment form a limestone platform with a surface area greater than 100,000 square km (Palmer, 1986).
3. Several times in the banks' 150-million year history, it has been exposed and re-flooded as the levels of the oceans rose and fell in response to glacial activity (Palmer, 1986).
4. During these exposed dry spells, the surface of the banks was eroded, leading to the formation of many features typical of exposed limestone, such as caves (Palmer, 1986).
5. As the last glacial period ended, once again raising the level of the sea, the caves flooded, forming the blue holes of today (Benjamin, 1970).
6. Some of the holes are located inland, in freshwater lakes, and some are located in the offshore marine environments, creating somewhat different habitats.
B. Mixing zone
1. When sea levels were at their lowest, the entire Bahama bank was exposed to create a country notably more extensive than the scattered islands of today (Palmer, 1986).
2. During this time, underground freshwater existed in considerable quantities throughout the banks (Palmer, 1986).
3. Now, with current sea levels, only the largest islands have some underground freshwater resources, which create lens-shaped reservoirs (Palmer, 1986).
4. Freshwater sits atop a layer of denser saltwater that has saturated the rock beneath the island surface (Palmer, 1987). This fresh/saline water interface is known as the halocline.
5. Caves form along the base of these lenses, in the chemically aggressive mixing zone between fresh and saline waters (Palmer, 1986).
a. Tidal flow carries the limestone-saturated water out to sea, more freshwater flows down, and more limestone is dissolved (Palmer, 1987).
b. Bacteria, decomposing the organic debris in the freshwater lens, help this process along (Palmer, 1986).
c. The debris forms a layer floating on the denser sea water below-here is where the bacteria decompose the debris, creating a slightly acidic environment, poor in oxygen but rich in carbon dioxide (perfect for dissolving limestone) (Palmer, 1986).
d. A body-sized passage might take 10,000 years to form (Palmer, 1987).
e. As the ice ages came and went and sea levels changed, so did the position of the mixing zone, and caves formed at many different levels beneath the islands (Palmer, 1987), creating extensive horizontal systems of caves of considerable complexity (Palmer, 1986).
f. According to Palmer, the "underside of the Bahamas resemble[s] a gigantic Swiss cheese" (Palmer, 1987).
1. When the caves were above sea level (during the Pleistocene Epoch), conditions were often suitable for the formation of stalactites and stalagmites (Palmer, 1986; Belleville, 1994).
2. Speleothems are formed through mineralization and the slow dripping of water laden with calcium carbonate (Belleville, 1994).
3. Rainwater fell on the surface of the exposed Bahama Banks and dripped steadily down into the dry caves and formed galleries of stalagmites and stalactites (Palmer, 1985).
4. Within their crystal cores, speleothems contain a history of climatic change during the ice ages (Palmer, 1985).
5. Dating stalagmites gives a minimum possible age for the caves; dating their host rock gives a maximum one (Palmer, 1986).
IV. Blue hole exploration
· Story of blue hole exploration based on two islands, Andros (largest but one of the least developed of the Bahamas) and Grand Bahama (home to Freeport/Lucaya, one of the major tourist centers of the islands) (Palmer, 1990).
A. George Benjamin-1960's
1. Beginning in 1958, George Benjamin (died around 1994), a research chemist and amateur spelunker from Toronto, began exploring many of these phenomena for the first time (Belleville, 1994).
2. "Ever since my first encounter with these strange holes, I have felt irresistibly drawn toward their dark mouths. Everyone talked about the blue holes, but no one, apparently, had mustered either the equipment or the curiosity to explore them." (Benjamin, 1970)
3. On the island of Andros, he and a team of skilled divers charted several hundred potential blue hole sites and personally explored 54 holes around the island (Belleville, 1994).
4. Benjamin also became one of the first to explain tidal surges and vortexes in the mouth of the holes (Belleville, 1994).
B. Robert Palmer-1980's
1. Robert Palmer, a British cave diver and Geologist at Bristol University, first heard of the Bahamian blue holes and George Benjamin at the annual British caving conference in 1976 (Palmer, 1985).
2. Palmer, who was a novice cave diver, was intrigued by the blue holes, but worked on building his cave diving skills in Britain (Palmer, 1985).
3. It wasn't until 1981 that Palmer and a small British team made their first expedition to Andros (Palmer, 1985).
4. In 1983, Palmer and another team made an expedition to Grand Bahama and another one in 1984 (Palmer, 1985).
5. During this time, British and American cave divers made spectacular discoveries beneath other islands, showing that blue holes are widespread throughout the Bahamas and housed unique ecosystems (Palmer, 1987).
6. Beneath the interior of Grand Bahamas lie the Lucayan Caverns, possibly one of the world's most extensive underwater cave systems with over 10km of explored passages (Palmer, 1990).
a. It holds the distinction of being the world's only underwater cave National Park (presented to the Bahamas National Trust by Sir Jack Hayward in 1982) (Palmer, 1990).
C. Dangers of cave diving
1. Exploring underwater caves is not one of the easiest techniques of scientific research, nor one of the safest (Palmer, 1986).
2. Swimming deep into pitch black, enclosed caves, filled entirely with water, where passages twist and turn into disorienting mazes is not for the unsure, nor is it for someone who isn't extensively trained (Palmer, 1990).
3. Cave diving requires a lot of experience in both scuba diving and caving, and long periods of training with organizations, like the USA's National Association of Cave Divers (Palmer, 1990).
4. Between 1980 and 1990, over 200 cave divers died in Florida alone, showing that cave diving is a unique activity that demands respect (Palmer, 1990).
V. Flora and fauna of the Bahamian blue holes
A. Remipedia (found in inland caves)
1. An entirely new genus, order, and class of life (Palmer, 1990).
2. Tiny centipede-like crustacean with a graceful swimming motion discovered in the Lucayan Caverns by biologist Jill Yager in 1979 (Palmer, 1990; Palmer, 1986).
3. Named Remipedia meaning "oar-footed" (Palmer, 1990).
4. Its nearest known relatives became extinct over 150-million years ago when America was still linked to Europe and Africa, and the Atlantic had barely started to form (Palmer, 1990).
B. Lucifuga (Lucifuga speleotes) (found in marine caves)
1. A brotulid, one of a group of stumpy, eel-like fish whose marine relatives inhabit the inside of reefs and crevices in undersea cliffs (Palmer, 1986).
2. The only truly adapted cave fish, which generally inhabits the deeper saline region below the mixing zone (Palmer, 1987).
3. Their eyes have atrophied over the millennia to black spots beneath its pigmentless, pink skin (Palmer, 1986).
C. Inland caves (Lucayan Caverns and eastern Grand Bahama's Zodiac Caverns)
1. Inland caves are "anchialine," meaning they contain saline or brackish water and are subject to some tidal influence, but no direct connection with the sea (Palmer, 1986).
2. They are free from the strong tidal currents and animals of marine blue holes, making them extremely stable environments (Palmer, 1986).
3. The temperature fluctuates by less than one or two degrees Celsius over the year, and the current through the caves is negligible (Palmer, 1986).
4. Decaying organic debris from the forests above floats into these inland holes, and are decomposed by bacteria, creating an organic broth (Palmer, 1986).
5. The bacteria and the remains of the organic debris form the basic diet of the simplest of the cave crustacea: ostracods, copepods, and thermosbaenaceans (a little-known order of crustacean, found almost exclusively in caves) (Palmer, 1986).
6. A link or two higher on the food chain is small crustaceans and cirolanid isopods, feeding on sediments and weaker cave fauna (Palmer, 1986).
7. Next are the predators, such as Remipedes, with their sharp mandibles and many pairs of swift-swimming legs (Palmer, 1986).
8. The top of the food chain is cave-adapted fish, such as Lucifuga.
D. Marine caves (Andros Island's Conch Sound Blue Hole)
1. Currents carry food, plankton and organic debris, into the caves (Palmer, 1990).
2. These currents bring in food twice daily and encourage rich coral growth around the entrances, and nourish a marine community which stretches inside the caves (Palmer, 1990).
3. Corals, sponges, ascidians, anemones, hydroids, and bryozoans-all sessile organisms-grow in a colorful carpet of life on the walls, roof, and floor of the passages for long distances into the lightless tunnels (Palmer, 1990; Palmer, 1987).
4. They compete for space and access to the nourishing food of current flows (Palmer, 1987).
5. Sponges and hydroids stretch up to three times their normal lengths, streaming two to three feet into the nourishing flow (Palmer, 1987).
6. Hunters, including arrow crabs and shrimp, clean the cave walls of organic debris (Palmer, 1987).
7. The marine cave mouths are home for much of the wildlife that is found on the outer reef, but in comparatively shallow waters (Palmer, 1990).
8. Blue holes provide shelter for a host of reef fish, such as nurse and lemon sharks, which have been found sleeping inside blue holes (Palmer, 1990).
9. Fish are often found swimming upside down because they orient themselves to the nearest solid surface (Palmer, 1987), which is often the roof of the cave if the floor drops to several hundred meters below.
Bahamian blue holes are truly unique, underground waterworlds. Bahamian folklore has attempted to explain the mystery of these limestone caves that formed during the ice ages of the last million years. However, not until the intense exploration of these wondrous caves were scientists able to answer many questions and reveal an ecosystem of uniquely adapted flora and fauna.
Last Update: Wednesday, May 7, 2014 | <urn:uuid:87c761a7-1818-48bc-942a-ceb777ca71ab> | 3.5625 | 2,958 | Knowledge Article | Science & Tech. | 56.220068 | 95,537,623 |
DNA study of southern humpback finds calving ground loyalty drives population differences
Scientists conducting the first circum-global assessment of mitochondrial DNA variation in the Southern Hemisphere's humpback whales (Megaptera novaeangliae) have found that whales faithfully returning to calving grounds year after year play a major role in how populations form, according to WCS (Wildlife Conservation Society), the American Museum of Natural History, and a number of other contributing organizations.
The research results build on previous regional studies of genetic diversity and will help scientists to better understand how humpback whale populations evolve over time and how to best advise international management authorities.
The paper titled "First Circumpolar Assessment of Southern Hemisphere Humpback Whale Mitochondrial Genetic Variation at Multiple Scales and Implications for Management" now appears in the online version of Endangered Species Research.
"Exploring the relationships of humpback whales around the Southern Hemisphere has been a massive undertaking requiring years of work and collaboration by experts from more than a dozen countries," said Dr. Howard Rosenbaum, Director of WCS's Ocean Giants Program and lead author on the study. "Our findings give us insights into how fidelity to breeding and feeding destinations persist over many generations, resulting in differences between whale populations, and why some populations are more genetically differentiated from the rest. From these efforts, we are in better positions to inform actions and policies that will help protect Southern Hemisphere humpback whales across their range, as well as in the Arabian Sea."
In the largest study of its kind to date, researchers used mitochondrial DNA microsatellites from skin samples gathered from more than 3,000 individual humpback whales across the Southern Hemisphere and the Arabian Sea to examine how whale populations are related to one another, a question that is difficult to answer with direct observations of whales in their oceanic environment. Overall, the study's data from mitochondrial DNA — different from nuclear DNA in that it helps scientists trace maternal lineages — reveal that population structure in humpback whales is largely driven by female whales that return annually to the same breeding grounds and by the early experience of calves that accompany their mothers on their first round-trip migration to the feeding grounds. The persistence of return to these migratory destinations over generations, is known as 'maternally directed site fidelity'.
The occasional genetic interchange between populations also seemed to correlate with feeding grounds with high densities of krill, places where whales from different populations are likely to move vast distances and come into contact with other populations. The study also identified specific populations, those inhabiting the eastern South Pacific off of Colombia and a non-migratory population in the Arabian Sea, as more genetically distinct and isolated from other nearby populations and perhaps in need of additional management and conservation consideration.
"Our increased understanding of how whale populations are structured can help governments and inter-governmental organizations like the International Whaling Commission improve management decisions in the future," said Dr. C. Scott Baker of Oregon State University's Marine Mammal Institute and a member of the South Pacific Whale Research Consortium that contributed to the study.
The humpback whale reaches a body length of 50 feet and, as a largely coastal species, is popular with whale watch operations around the world. Before receiving international protection in 1966, humpback whales were targeted by commercial whaling vessels that nearly drove the species into extinction. This included more than 45,000 humpback whales taken illegally by the Soviet Union after World War II. Current threats to humpback whales include ship strikes, underwater noise, pollution, and entanglement in fishing gear.
These threats are particularly pertinent to humpback whales in the Arabian Sea, a genetically isolated population numbering fewer than 100 animals and currently listed on the IUCN's Red List of Threatened Species as "Endangered." WCS's research is done in collaboration with a number of regional and local partners in the Arabian Sea working on advocacy and conservation, notably the Environment Society of Oman, among others.
The authors of the study are: Howard C. Rosenbaum of WCS and AMNH (American Museum of Natural History); Francine Kershaw of Columbia University and the Natural Resources Defense Council; Martín Mendez of WCS; Cristina Pomilla of AMNH and the Wellcome Trust Sanger Institute, United Kingdom; Matthew S. Leslie of AMNH and the Smithsonian Institution; Ken P. Findlay of the Cape Peninsula University of Technology, South Africa; Peter B. Best of the University of Pretoria, South Africa; Timothy Collins of WCS; Michel Vely of Association Megaptera, France; Marcia H. Engel of Projeto Baleia Jubarte/Instituto Baleia Jubarte, Brazil; Robert Baldwin of the Five Oceans Environmental Services LLC, Sultanate of Oman; Gianna Minton of Megaptera Marine Conservation, the Netherlands; Michael Meÿer of Oceans and Coasts, Department of Environmental Affairs, South Africa; Lilian Flórez-González of Fundación Yubarta, Colombia; M. Michael Poole of the South Pacific Whale Research Consortium, Cook Islands; Nan Hauser of the South Pacific Whale Research Consortium and Cook Islands Whale Research, Cook Islands; Claire Garrigue of the South Pacific Whale Research Consortium (Cook Islands) and Opération Cétacés (New Caledonia); Muriel Brasseur of Edith Cowan University, Australia; John Bannister of the Western Australian Museum; Megan Anderson of Southern Cross University, Australia; Carlos Olavarría of the University of Auckland (New Zealand) and Centro de Estudios Avanzados en Zonas Aridas (Chile); and C. Scott Baker of the South Pacific Whale Research Consortium (Cook Islands) and Oregon State University. | <urn:uuid:7f1909ec-2de1-4e67-bf6d-41c2ed416f6f> | 3.921875 | 1,165 | News Article | Science & Tech. | 8.147434 | 95,537,714 |
For the first time scientists from the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), the Scottish Association for Marine Science (SAMS), the University of Southern Denmark, the University of Copenhagen (Denmark), the HGF-MPG Joint Research Group on Deep-Sea Ecology and Technology from the Max Planck Institute for Marine Microbiology (MPI Bremen, Germany), and the Alfred Wegener Institute for Polar and Marine Research (AWI Bremerhaven, Germany) successfully collected data directly at the bottom of the Mariana Trench located 2000 km East of the Philippines, in a depth of 11000 meters.
A sophisticated deep-diving autonomous lander has carried out a series of descents to the seafloor of the Challenger Deep, a canyon 10.9 km beneath the ocean surface. Here it performed detailed investigations of microbial processes occurring in the sediment. Such detailed science has never been carried out at these extreme depths. The work was carried out during an expedition with the Japanese research vessel Yokosuka (Cruise YK 10-16), with Prof. Hiroshi Kitazato (JAMSTEC) acting as cruise leader.
To better understand the global carbon cycle it is critical to know what role the oceans play in carbon sequestration. Deep ocean trenches make up only 2% of the seafloor but may be disproportionately important as a trap for carbon. The aim of this research was to measure the rate by which organic carbon is degraded at these extreme depths and to estimate from collected sediment samples how much organic carbon is retained in the trenches. The fraction of carbon retained versus degraded in the seabed is crucial to understand the marine carbon cycle and hence the climate of our planet.
The pressure at these great depths is extreme, so to investigate microbial processes in samples from such depths can be very difficult – bringing the organisms to the surface can radically affect them. Therefore scientists around Prof. R.N. Glud (SAMS and SDU) and Dr. F. Wenzhöfer (MPI and AWI) developed an instrument capable of performing the measurements directly on the seafloor at this great depth. Specially constructed sensors probed the sediment in small grids and mapped out the distribution of oxygen in the seabed, providing key insight into the rate at which organic carbon is degraded. To get the “robot” to operate at 10.9km depth was a great challenge. Equipment designed by the team was specially engineered for the mission to function at pressures in excess of 1000 atmospheres. The deployed deep-sea system was a joint effort of Japanese, Scottish, Danish and German scientists. During the cruise the scientist succeeded in performing detailed mapping of microbial activity using highly sophisticated, movable instrumentation and microsensor arrays.
Preliminary data from the measurements came as a surprise: they reveal that the turnover of carbon is much greater at the bottom of the trench than on the abyssal plain (6,000 metres down). This demonstrates that the seabed in the trenches acts as a trap for organic material and may therefore have high rates of carbon retention. Analyses of the recovered samples will be carried out by the research team and will reveal the rate at which sediment is accumulating at the bottom of the trench. The expedition is a good example of international teamwork, and there was a great sense of achievement in studying and bringing back data from the deepest part of the ocean. The researchers now expect that this information will help to answer some very important questions regarding carbon mineralisation and sequestration in the ocean trenches.
Dr. Manfred Schloesser | Max-Planck-Institut | <urn:uuid:ee9f0cc5-e295-4141-a9f8-eaf6eb7c78cd> | 3.3125 | 1,371 | Content Listing | Science & Tech. | 40.519116 | 95,537,726 |
object-oriented programming (redirected from Object-orientated programming)
Also found in: Dictionary, Thesaurus.
object-oriented programming, a modular approach to computer program (software) design. Each module, or object, combines data and procedures (sequences of instructions) that act on the data; in traditional, or procedural, programming the data are separated from the instructions. A group of objects that have properties, operations, and behaviors in common is called a class. By reusing classes developed for previous applications, new applications can be developed faster with improved reliability and consistency of design. The first object-oriented programs, written in the language Simula 67, were used extensively for modeling and simulation, primarily in Europe during the late 1960s and early 1970s. The technique was popularized in the United States during the following decade using the language SmallTalk and achieved its greatest prominence with the development of the object-oriented language C++ during the late 1980s and 1990s.
See P. W. Oman and T. G. Lewis, Milestones in Software Evolution (1990); T. Budd, An Introduction to Object-Oriented Programming (1991); P. Varhol, Object-Oriented Programming: The Software Development Revolution (1993); P. Coad and J. Nicola, OOP, Object-Oriented Programming (1993).
object-oriented programming
A computer-programming methodology that focuses on data items rather than processes. Traditional software development models assume a top-down approach. A functional description of a system is produced and then refined until a running implementation is achieved. Data structures (and file structures) are proposed and evaluated based on how well they support the functional models.
The object-oriented approach focuses first on the data items (entities, objects) that are being manipulated. The emphasis is on characterizing the data items as active entities which can perform operations on and for themselves. It then describes how system behavior is implemented through the interaction of the data items.
The essence of the object-oriented approach is the use of abstract data types, polymorphism, and reuse through inheritance.
Abstract data types define the active data items described above. A traditional data type in a programming language describes only the structure of a data item. An abstract data type also describes operations that may be requested of the data item. It is the ability to associate operations with data items that makes them active. The abstract data type makes operations available without revealing the details of how the operations are implemented, preventing programmers from becoming dependent on implementation details. The definition of an operation is considered a contract between the implementor of the abstract data type and the user of the abstract data type. The implementor is free to perform the operation in any appropriate manner as long as the operation fulfills its contract. Object-oriented programming languages give abstract data types the name class.
Polymorphism in the object-oriented approach refers to the ability of a programmer to treat many different types of objects in a uniform manner by invoking the same operation on each object. Because the objects are instances of abstract data types, they may implement the operation differently as long as they fulfill the agreement in their common contract.
A new abstract data type (class) can be created in object-oriented programming simply by stating how the new type differs from some existing type. A feature that is not described as different will be shared by the two types, constituting reuse through inheritance. Inheritance is useful because it replaces the practice of copying an entire abstract data type in order to change a single feature.
In the object-oriented approach, a class is used to define an abstract data type, and the operations of the type are referred to as methods. An instance of a class is termed an object instance or simply an object. To invoke an operation on an object instance, the programmer sends a message to the object.
Each class is a separate module and has a position in a "class hierarchy". Methods or code in one class can be passed down the hierarchy to a subclass or inherited from a superclass. This is called "inheritance".
A procedure call is described as invoking a method on an object (which effectively becomes the procedure's first argument), and may optionally include other arguments. The method name is looked up in the object's class to find out how to perform that operation on the given object. If the method is not defined for the object's class, it is looked for in its superclass and so on up the class hierarchy until it is found or there is no higher superclass.
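A minimal Python sketch of that lookup, with invented class names. Sending "speak" to a Dog finds no definition in Dog itself, so the search continues up the hierarchy to Animal; Parrot overrides the method, so the search stops there.

class Animal:
    def speak(self):
        return "some generic noise"

class Dog(Animal):  # no speak() of its own: inherited from the superclass
    pass

class Parrot(Animal):  # overrides speak()
    def speak(self):
        return "polly wants a cracker"

print(Dog().speak())     # found one level up, in Animal
print(Parrot().speak())  # found directly in Parrot
print([c.__name__ for c in Parrot.__mro__])  # Python's lookup order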
OOP started with SIMULA-67 around 1970 and became all-pervasive with the advent of C++, and later Java. Another popular object-oriented programming language (OOPL) is Smalltalk, a seminal example from Xerox's Palo Alto Research Center (PARC). Others include Ada, Object Pascal, Objective C, DRAGOON, BETA, Emerald, POOL, Eiffel, Self, Oblog, ESP, Loops, POLKA, and Python. Other languages, such as Perl and VB, permit, but do not enforce OOP.
FAQ. http://zgdv.igd.fhg.de/papers/se/oop/. http://cuiwww.unige.ch/Chloe/OOinfo.
Usenet newsgroup: news:comp.object.
object-oriented programming
Writing software that supports a model wherein the data and their associated processing (called "methods") are defined as self-contained entities called "objects." Object-oriented programming (OOP) languages, such as C++ and Java, provide a formal set of rules for creating and managing objects. The data are stored in a traditional relational database or in an object database if the data have a complex structure. See O-R mapping and object database.
There are three major features in object-oriented programming: encapsulation, inheritance and polymorphism.
Encapsulation Enforces Modularity
Encapsulation refers to the creation of self-contained modules that bind processing functions to the data. These user-defined data types are called "classes," and one instance of a class is an "object." For example, in a payroll system, a class could be Manager, and Pat and Jan could be two instances (two objects) of the Manager class. Encapsulation ensures good code modularity, which keeps routines separate and less prone to conflict with each other.
Inheritance Passes "Knowledge" Down
Classes are created in hierarchies, and inheritance allows the structure and methods in one class to be passed down the hierarchy. That means less programming is required when adding functions to complex systems. If a step is added at the bottom of a hierarchy, then only the processing and data associated with that unique step needs to be added. Everything else about that step is inherited. The ability to reuse existing objects is considered a major advantage of object technology.
Polymorphism Takes any Shape
Object-oriented programming allows procedures about objects to be created whose exact type is not known until runtime. For example, a screen cursor may change its shape from an arrow to a line depending on the program mode. The routine to move the cursor on screen in response to mouse movement would be written for "cursor," and polymorphism allows that cursor to take on whatever shape is required at runtime. It also allows new shapes to be easily integrated.
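A hedged sketch of the cursor example in Java (the shape classes are hypothetical):

```java
/** Sketch of runtime polymorphism: the concrete cursor shape is chosen at runtime. */
interface Cursor {
    void drawAt(int x, int y);
}

class ArrowCursor implements Cursor {
    public void drawAt(int x, int y) { System.out.println("arrow at " + x + "," + y); }
}

class LineCursor implements Cursor {
    public void drawAt(int x, int y) { System.out.println("line at " + x + "," + y); }
}

public class CursorDemo {
    public static void main(String[] args) {
        // The routine that moves the cursor is written once, against 'Cursor' only.
        Cursor cursor = Math.random() < 0.5 ? new ArrowCursor() : new LineCursor();
        cursor.drawAt(10, 20); // the actual drawing code is resolved at runtime
    }
}
```

The mouse-move routine depends only on the `Cursor` type; adding a new shape means adding a class, not editing the routine.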
| OOP | Traditional Programming |
| --- | --- |
| class | description of data + processing |
| object (instance) | actual data + processing |
| attribute | actual data (a field) |
| method | function that processes a particular structure |
| message | function call |
| instantiate | allocate a structure |
When information systems are modeled as objects, they can employ the powerful inheritance capability. Instead of building a table of employees with department and job information in separate tables, the type of employee is modeled. The employee class contains the data and the processing for all employees. Each subclass (manager, secretary, etc.) contains the data and processing unique to that person's job. Changes can be made globally or individually by modifying the class in question. | <urn:uuid:495ca63e-1230-40fc-bafd-3ab1f66e706f> | 3.5 | 1,735 | Structured Data | Software Dev. | 33.771132 | 95,537,738 |
Vertical distribution of infauna in sediments of a subestuary of central Chesapeake Bay
- Anson H. Hines, Kathryn L. Comtois
- Estuaries SCOPUS
- Coastal and Estuarine Research Federation in 1985
- Springer JSTOR
The vertical distribution of infauna was quantified in eight strata from 0–35 cm in sand and mud sediments of a lower mesohaline subestuary of Chesapeake Bay. Large numbers of small polychaetes, amphipods, and clams occurred in the upper 5 cm of both sediment types, whereas large clams (Macoma balthica in mud andMya arenaria in sand) extended down to 30 cm and comprised most of the biomass in their respective sediment types. There was extensive overlap of the species inhabiting both sediment types. Vertical stratification within and among species apparently reflected constraints on burrowing depth related to body size rather than resource partitioning among competitors. The maximal sediment penetration of 35 cm, which was exhibited byHeteromastus filiformis, was considerably less than the maximal penetration for deep burrowing species in some marine infaunal communities. Several species which burrowed deeper than 5 cm exhibited significant temporal shifts in their vertical distribution.
| <urn:uuid:cb5814c2-ed8c-4544-84cd-db7036d02d4b> | 2.8125 | 293 | Academic Writing | Science & Tech. | 14.628333 | 95,537,758 |
Binomial Theorem - Pascal's Triangle
Observe the pattern in the following binomial expansions. You should verify these by writing down the factors and multiplying them out. The number of factors is given by n:

(a + b)^1 = a + b
(a + b)^2 = a^2 + 2ab + b^2
(a + b)^3 = a^3 + 3a^2b + 3ab^2 + b^3
(a + b)^4 = a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4
* The first term in each expansion is a^n and the last term is b^n.
* As we work from left to right, the power of a decreases by 1 while the power of b increases by 1.
* Each term in the expansion has degree n. This means that the powers of a and b in each term add to n.
* There are n+1 terms in each expansion.
The coefficients in each expansion are known as the binomial coefficients and they form the following pattern, known as Pascal's Triangle:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

and so on. Each row begins and ends with 1. Each other number can be obtained by adding the two numbers in the line immediately above to the right and left. For example, in line four each of the 3's is obtained by adding 1+2 from immediately above. In line five we have 4=1+3, 6=3+3, 4=1+3. The rows are symmetrical.
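The addition rule translates directly into code. A minimal Java sketch (the row count is an arbitrary choice):

```java
/** Builds the first rows of Pascal's Triangle using the addition rule above. */
public class PascalTriangle {
    public static void main(String[] args) {
        int rows = 6;
        int[] prev = {1};                        // row one
        for (int r = 1; r <= rows; r++) {
            StringBuilder sb = new StringBuilder();
            for (int v : prev) sb.append(v).append(' ');
            System.out.println(sb.toString().trim());
            int[] next = new int[prev.length + 1];
            next[0] = 1;                         // each row begins with 1...
            next[next.length - 1] = 1;           // ...and ends with 1
            for (int i = 1; i < prev.length; i++)
                next[i] = prev[i - 1] + prev[i]; // sum of the two numbers above
            prev = next;
        }
    }
}
```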
| <urn:uuid:8b630c10-645e-49cf-92c2-b5f72c44dab5> | 3.796875 | 346 | Truncated | Science & Tech. | 64.161982 | 95,537,759 |
Groundhogs know physics! They ventilate their burrows by building a mound over one entrance, which is exposed to a stream of moving air; the other entrance, at ground level, opens onto stagnant air. How exactly does this construction ventilate the burrow?
According to Bernoulli's Equation, flowing air exerts less pressure than still or stagnant air. ...
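For background (this is standard fluid mechanics, not part of the excerpted solution), Bernoulli's equation along a streamline is

```latex
p + \tfrac{1}{2}\rho v^2 + \rho g h = \text{constant}
```

so the faster-moving air over the raised mound has lower pressure p than the stagnant air at the ground-level entrance, and that pressure difference drives a steady flow through the burrow.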
| <urn:uuid:2a55fa92-e611-4942-a99a-c51dfbae457e> | 3.359375 | 117 | Truncated | Science & Tech. | 66.208182 | 95,537,761 |
Series: The Moths and Butterflies of Great Britain and Ireland Volume: 4(2)
Edited By: A Maitland Emmet and John R Langmaid
277 pages, 6 col plates, b/w illus, maps
Comprises the Gelechiiae, a total of 160 species. There are genitalia drawings for each species, showing both sexes, and also representative figures of wing venation for each family, as well as other diagnostic characters.
| <urn:uuid:bcf7ea74-8dad-4ea5-a30b-7a797a5227c5> | 2.5625 | 184 | Product Page | Science & Tech. | 41.180385 | 95,537,766 |
Mathematical models about how mass moves in natural systems are used in various scientific fields such as to understand the global carbon and water cycles, or predicting the spread of contaminants or tracers in water bodies, soils, or organisms. These models, technically known as compartmental systems, are common in medical sciences, biology, and geosciences. Scientists from the Max Planck Institute for Biogeochemistry in Jena made a big step forward in this field by developing formulas and algorithms that help to describe the evolution of the age of particles in such systems when these are out of equilibrium.
Their findings, just published in the scientific journal Proceedings of the National Academy of Sciences (PNAS), extend the existing theory that so far was only available for systems resting in equilibrium. The new formulas and algorithms will allow much faster computations. In addition, they will also improve the investigation and understanding of non-linear dynamical systems, which describe many physical and biological processes in nature.
As matter enters a natural system, there is a constant replacement of the particles or atoms that are present in there. For example, a tree that fixes carbon from the atmosphere puts some of this carbon in its leaves, stems, and roots, but at the same time carbon is removed from these compartments by the respiratory activity of the tree.
“It is common for scientists to think about how much time carbon atoms spend in each of these compartments. In our general approach, we are interested in learning about the time it takes for atoms or particles to stay in the compartments and the time they need to travel across a system,” says Holger Metzler, first author of the study and PhD student at the Max Planck Institute for Biogeochemistry (MPI-BGC).
“In many cases we know, based on measurements, that there is a mix of ages in compartmental systems, but until now we did not have formulas to calculate the proportion of atoms in different ages and how this age structure changes over time as the system evolves,” Carlos Sierra, leader of the Theoretical Ecosystem Ecology group, further explains.
The new mathematical theory developed by the scientists at MPI-BGC in Jena focuses on describing the age structure of mass in a compartmental system. The new set of formulas allows scientists to compute the complete age structure of each individual compartment as well as of the entire system, and to observe how the system evolves over time. The researchers also developed a computer program to perform these complex computations.
“These new formulas and algorithms are highly versatile and can be used for a number of different scientific questions. We implemented them in open source software, which is publicly available to other scientists to corroborate our findings and use them in other scientific studies” illustrates Markus Müller who took an active role in the development of the software.
For example, the formulas can be used to calculate how long it would take to remove all the carbon dioxide from the atmosphere that is emitted by humans through combustion of fossil fuels. Furthermore, they can be applied to determine how much time it takes for a drug to be assimilated by an organism that is actively moving such as an athlete; or the time period needed to naturally degrade a contaminant dumped into a lake experiencing a drought. Many different questions of scientific or societal interest can now be tackled with the new mathematical and computational theory.
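As a toy illustration of the age-structure idea (this is not the authors' algorithm; the rate, inflow, and step counts below are arbitrary assumptions), one can track particle ages in a single well-mixed compartment by Monte Carlo:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Toy Monte Carlo of particle ages in one well-mixed compartment. */
public class CompartmentAges {
    public static void main(String[] args) {
        Random rng = new Random(42);
        double k = 0.1;     // fraction of the stock leaving per time step (assumed)
        int inflow = 50;    // particles entering per time step (assumed)
        List<Integer> ages = new ArrayList<>();

        for (int step = 0; step < 1000; step++) {
            ages.replaceAll(a -> a + 1);               // everyone gets one step older
            ages.removeIf(a -> rng.nextDouble() < k);  // well-mixed: anyone may leave
            for (int i = 0; i < inflow; i++) ages.add(0); // fresh input has age zero
        }
        double meanAge = ages.stream().mapToInt(Integer::intValue).average().orElse(0);
        // Discrete-time steady state predicts mean age (1-k)/k, which tends to the
        // continuous-time value 1/k as the step size shrinks.
        System.out.printf("simulated mean age %.1f vs (1-k)/k = %.1f%n", meanAge, (1 - k) / k);
    }
}
```

For a linear one-pool system this reproduces the known steady-state mean age; the new formulas deliver such quantities analytically for whole networks of compartments, including nonlinear and time-dependent ones.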
Metzler H., Müller M., Sierra C. (2018) Transit-time and age distributions for nonlinear time-dependent compartmental systems. Proceedings of the National Academy of Sciences doi:10.1073/pnas.1705296115
Metzler, H., & Sierra, C. A. (2018). Linear Autonomous Compartmental Models as Continuous-Time Markov Chains: Transit-Time and Age Distributions. Mathematical Geosciences, 50(1), 1–34. doi:10.1007/s11004-017-9690-1
Carlos Sierra, email@example.com, +49 (0)3641 57 6133
Holger Metzler, firstname.lastname@example.org
https://www.bgc-jena.mpg.de/TEE/index.html Webpage of the Research Group
Susanne Héjja | Max-Planck-Institut für Biogeochemie
| <urn:uuid:48ea8151-3c79-42a8-b3e7-be212125342d> | 3.59375 | 1,572 | Content Listing | Science & Tech. | 40.255551 | 95,537,820 |
Materials such as milk, paper, white paint and tissue are opaque because they scatter light, not because they absorb it. But no matter how great the scattering, light is always able to get through the material in question.
At least, according to the theory. Researchers Ivo Vellekoop and Allard Mosk of the University of Twente have now confirmed this with experiments. By shaping the waveform of light, they have succeeded in finding the predicted ‘open channels’ in material along which the light is able to move. The results will soon be published in Physical Review Letters and are already available on the authoritative websites: ScienceNOW and Physics Today.
In materials that have a disordered structure, incident light is scattered in every direction possible. In an opaque layer, so much scattering takes place that barely any light comes out ‘at the back’. However, even a material that causes a great deal of light scattering has channels along which light can propagate. This is only possible if the light meets strict preconditions so that the scattered light waves can reinforce one another on the way to the exit.
Always an open channel
By manipulating the waveform of light, Vellekoop and Mosk have succeeded in finding these open channels. They used an opaque layer of the white pigment zinc oxide, used by painters such as Van Gogh. Only a small part of the laser light that falls on the zinc oxide as a plane wave is transmitted. As every painter knows, the thicker the paint coating, the less light it lets through. Using information about the transmitted light to programme the laser, the researchers shaped the waveform into the optimal form for passing through the open channels.
To this end, parts of the incident wave were slowed down to allow the scattered light to interfere in precisely the right manner with other parts of the same wave. In this way, Vellekoop and Mosk increased the amount of light allowed through by no less than 44 percent. As theoreticians had predicted, open channels can always be found and transmission through them is, furthermore, independent of the thickness of the material concerned.
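A toy version of this segment-by-segment optimization (here maximizing intensity in a single output mode of a random medium; the coefficients, segment count, and phase resolution are invented for illustration):

```java
import java.util.Random;

/** Sequential phase-optimization sketch for wavefront shaping (toy model). */
public class WavefrontShaping {
    public static void main(String[] args) {
        Random rng = new Random(1);
        int n = 64;                       // number of controllable wave segments
        double[] tRe = new double[n], tIm = new double[n]; // random medium coefficients
        for (int i = 0; i < n; i++) { tRe[i] = rng.nextGaussian(); tIm[i] = rng.nextGaussian(); }

        double[] phase = new double[n];   // start with a flat (plane-wave) front
        System.out.printf("initial intensity: %.2f%n", intensity(tRe, tIm, phase));

        // Optimize one segment at a time: keep the phase that maximizes output.
        for (int i = 0; i < n; i++) {
            double bestPhi = 0, bestI = -1;
            for (int step = 0; step < 64; step++) {
                phase[i] = 2 * Math.PI * step / 64;
                double out = intensity(tRe, tIm, phase);
                if (out > bestI) { bestI = out; bestPhi = phase[i]; }
            }
            phase[i] = bestPhi;
        }
        System.out.printf("optimized intensity: %.2f%n", intensity(tRe, tIm, phase));
    }

    /** |sum_i t_i * exp(i*phi_i)|^2 at a single output mode. */
    static double intensity(double[] tRe, double[] tIm, double[] phase) {
        double re = 0, im = 0;
        for (int i = 0; i < tRe.length; i++) {
            double c = Math.cos(phase[i]), s = Math.sin(phase[i]);
            re += tRe[i] * c - tIm[i] * s;
            im += tRe[i] * s + tIm[i] * c;
        }
        return re * re + im * im;
    }
}
```

Running it shows the optimized intensity far above the initial plane-wave value, the same qualitative effect the experiment exploits.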
The results are highly remarkable: although the theoretical existence of open channels was acknowledged, so far manipulating the light such that the channels in materials could actually be found has been too complex. As a result of better light conductivity in opaque materials, it may in the future be easier to look into materials that have so far not divulged their secrets: for example in medical imaging technology. There is a significant parallel with the conductivity of electrons in extremely thin wires, such as those on semi-conductor chips. Electrons, which according to quantum mechanics behave as waves, move through these same open channels.
It is also conceivable that this research will yield more information about waveforms other than light, such as radio waves for mobile communication: can the range be improved by adjusting the waveform?
This research was carried out in the Complex Photonic Systems group of the University of Twente’s MESA+ Institute for Nanotechnology. It is financed by the Foundation for Fundamental Research on Matter (FOM) and by a Vidi grant from the Netherlands Organization for Scientific Research (NWO).
Wiebe van der Veen | alfa
| <urn:uuid:e80a0c2e-4972-4364-83b7-355dc6be86d0> | 3.4375 | 1,253 | Content Listing | Science & Tech. | 40.130109 | 95,537,821 |
Using these supernovae, the researchers have traced the expansion history of the universe with unprecedented accuracy and sharpened our knowledge of what it might be that is causing the mysterious acceleration of the expansion of the universe.
Background and outline
At the end of last century astronomers discovered the startling fact that the expansion of our universe is not slowing down, as all our previous understanding of gravity had predicted. Rather the expansion is speeding up. Nothing in conventional physics can explain such a result. It means that either the universe is made up of around 70% 'dark energy' (something that has a sort of anti-gravity) or our theory of gravity is flawed.
Now, as part of the international collaboration "ESSENCE", researchers at the Danish Dark Cosmology Centre have added a new piece to the puzzle. In two papers recently released they detail observations of supernovae (exploding stars) that allow them to trace the expansion history of the universe in unprecedented detail. ESSENCE is an extension of the original team that discovered the acceleration of the universe and these results push the limits of technology and knowledge, observing light from dying stars that was emitted almost half the age of the universe ago.
In a third paper, led by the Danish team and released this week, the many new theories that have been proposed to explain the acceleration of the universe are critically assessed in the face of this new data. Dr. Jesper Sollerman and Dr. Tamara Davis lead the team who show that despite the increased sophistication in cosmological models over the last century the best model to explain the acceleration remains one that was proposed by Einstein back in 1917. Although Einstein's reasoning at the time was flawed (he proposed the modification to his theory so it could support a static universe, because in those days everyone 'knew' the universe was not expanding), it may be that he was right all along.
The results include 60 new type Ia supernovae discovered on the Cerro-Tololo Interamerican Observatory 4m telescope in an ongoing survey that so far has lasted four years. In order to follow up these discoveries the team uses some of the biggest telescopes in the world: the 8.2m VLT (Very Large Telescope) run by the European Southern Observatory and the 6m Magellan telescope (both in Chile), the 8m Keck telescope and the 10m Gemini telescope (both in Hawaii). The ESSENCE team includes 38 top researchers from many different countries on four continents.
The primary aim of the experiment is to measure the 'dark energy' - the thing that is causing the acceleration of the universe - to better than 10%. The feature of this dark energy that we measure is its 'equation of state'. This also allows us to check whether our theory of gravity needs modification. So far it looks like our theory is correct and that the strange acceleration of the expansion of the universe can be explained by Einstein's 'cosmological constant'.
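For background (standard cosmology, not taken from the release): the equation of state relates pressure to energy density,

```latex
w = \frac{p}{\rho c^{2}}, \qquad \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right),
```

so accelerated expansion requires w < -1/3, and a cosmological constant corresponds to exactly w = -1. Measuring w to better than 10% is what discriminates a cosmological constant from the alternatives.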
In modern terms the cosmological constant is viewed as a quantum mechanical phenomenon called the 'energy of the vacuum'. In other words, the energy of empty space. It is this energy that is causing the universe to accelerate. The new data shows that none of the fancy new theories that have been proposed in the last decade are necessary to explain the acceleration. Rather, vacuum energy is the most likely cause and the expansion history of the universe can be explained by simply adding this constant background of acceleration into the normal theory of gravity.
Gertie Skaarup | EurekAlert!
| <urn:uuid:1a178074-4fed-46c6-961c-ffbcd40ba771> | 3.390625 | 1,346 | Content Listing | Science & Tech. | 38.641444 | 95,537,835 |
Keywords: gene expression, Gulf of Mexico, coastal plume, recycled production
Low salinity plumes of coastal origin are occasionally found far offshore, where they display a distinct color signature detectable by satellites. The impact of such plumes on carbon fixation and phytoplankton community structure in vertical profiles and on basin wide scales is poorly understood. On a research cruise in June 1999, ocean-color satellite-images (Sea-viewing Wide Field-of-view Sensor, SeaWiFS) were used in locating a Mississippi River plume in the eastern Gulf of Mexico. Profiles sampled within and outside of the plume were analyzed using flow cytometry, HPLC pigment analysis and primary production using C-14 incorporation. Additionally, RubisCO large subunit (rbcL) gene expression was measured by hybridization of extracted RNA using 3 full-length RNA gene probes specific for individual phytoplankton clades. We also used a combination of RT-PCR/PCR and TA cloning in order to generate cDNA and DNA rbcL clone libraries from samples taken in the plume. Primary productivity was greatest in the low salinity surface layer of the plume. The plume was also associated with high Synechococcus counts and a strong peak in Form IA rbcL expression. Form IB rbcL (green algal) mRNA was abundant at the subsurface chlorophyll maximum (SCM), whereas Form ID rbcL (chromophytic) expression showed little vertical structure. Phylogenetic analysis of cDNA libraries demonstrated the presence of Form IA rbcL Synechococcus phylotypes in the plume. Below the plume, 2 spatially separated and genetically distinct rbcL clades of Prochlorococcus were observed. This indicated the presence of the high- and low-light adapted clades of Prochlorococcus. A large and very diverse clade of Prymnesiophytes was distributed throughout the water column, whereas a clade of closely related prasinophytes may have dominated at the SCM. These data indicate that the Mississippi river plume may dramatically alter the surface picoplankton composition of the Gulf of Mexico, with Synechococcus displacing Prochlorococcus in the surface waters.
Marine Ecology - Progress Series, v. 251, p. 87-101.
Available at: http://works.bepress.com/john_paul/4/ | <urn:uuid:9e1346a8-d62d-4f9b-b302-630704fd6de9> | 2.640625 | 516 | Academic Writing | Science & Tech. | 24.469455 | 95,537,856 |
Species Detail - Field Slug (Deroceras (Deroceras) agreste) - Species information displayed is based on the dataset "All Ireland Non-Marine Molluscan Database".
Terrestrial Map - 10km: Distribution of the number of records within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records within each 50km grid square (WGS84).
Deroceras (Deroceras) agreste
Threatened Species: Data deficient
12 April (recorded in 1962)
10 December (recorded in 2005)
Conchological Society of Great Britain and Ireland, All Ireland Non-Marine Molluscan Database, National Biodiversity Data Centre, Ireland, Field Slug (Deroceras (Deroceras) agreste), accessed 19 July 2018, <https://maps.biodiversityireland.ie/Dataset/1/Species/123408> | <urn:uuid:c7895f21-e6e2-4168-8781-68ce32edd3bc> | 2.6875 | 211 | Structured Data | Science & Tech. | 14.261922 | 95,537,866 |
Visualization of Molecular Motions by MD Method
Various flows in microstructures have significant influences on the performance of so-called micromachines, the manufacturing processes of semiconductors, and so on. In some cases, such flows are very sensitive to the boundary conditions on the solid wall, namely, the behavior of gas molecules near the surface. In this paper, molecular motions are visualized by the molecular dynamics method, in which the substance is modeled as an aggregate of particles that simulate the molecules. A solid thin film consisting of monatomic molecules is formed, and a monatomic gas molecule is then made to collide with the surface. The molecular motions are well observed by this visualization, and the numerical results reveal that the scattering behavior of the gas molecule is neither specular nor diffuse reflection.
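To make the method concrete, here is a minimal 1-D velocity-Verlet sketch of a gas atom reflecting off a single fixed wall atom (Lennard-Jones parameters in reduced units; all values are illustrative assumptions, not the paper's model):

```java
/** 1-D velocity-Verlet sketch: a gas atom reflecting off a fixed wall atom. */
public class MdReflection {
    static final double EPS = 1.0, SIGMA = 1.0, MASS = 1.0; // reduced units (assumed)

    /** -dU/dr for U = 4*EPS*((s/r)^12 - (s/r)^6); positive = repulsive. */
    static double force(double r) {
        double sr6 = Math.pow(SIGMA / r, 6);
        return 24 * EPS * (2 * sr6 * sr6 - sr6) / r;
    }

    public static void main(String[] args) {
        double x = 5.0, v = -1.0, dt = 1e-3;   // start far away, moving toward the wall
        double a = force(x) / MASS;
        for (int step = 0; step < 20000; step++) {
            x += v * dt + 0.5 * a * dt * dt;   // velocity-Verlet position update
            double aNew = force(x) / MASS;
            v += 0.5 * (a + aNew) * dt;        // velocity-Verlet velocity update
            a = aNew;
        }
        System.out.printf("final x=%.3f v=%.3f (sign of v has flipped: reflection)%n", x, v);
    }
}
```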
Keywords: Incident Angle, Diffuse Reflection, Molecular Motion, Solid Wall, Molecular Dynamics Method
| <urn:uuid:17d5bccb-a0d4-49ff-81dc-a23eb4859849> | 2.609375 | 515 | Academic Writing | Science & Tech. | 41.272657 | 95,537,877 |
How the expanding forests of the European Arctic could result in MORE carbon dioxide being released
Global warming could be accelerated by new trees growing in the warming regions of the Arctic tundra, scientists have warned.
By stimulating decomposition rates in soils, the expansion of forest into tundra in arctic Sweden could result in the release of carbon dioxide to the atmosphere.
The Arctic is getting greener as plant growth spreads thanks to a warmer climate. It had previously been hoped that increased plant biomass could take up carbon dioxide from the atmosphere, slowing climate change.
Arctic circle: Rapa River Delta in Sarek National Park in autumn, when the birch trees turn yellow and the leaves of the blueberry bushes turn red
But this latest research suggests net losses of carbon are possible if the decomposition of the large carbon stocks in Arctic soils are stimulated.
This is important as Arctic soils currently store more carbon than is present in the atmosphere as carbon dioxide and thus have considerable potential to affect rates of climate change.
By measuring carbon stocks in vegetation and soils between tundra and neighbouring birch forest, it was shown that the two-fold greater carbon storage in plant biomass in the forest was more than outweighed by the smaller carbon stocks in forest soils.
Furthermore, using a methodology based on measuring the radiocarbon content of the carbon dioxide being released, researchers found that the birch trees appeared to be stimulating the decomposition of soil organic matter.
Thus, the research was able to identify a mechanism by which the birch trees can contribute directly to reducing carbon storage in soils.
Dr Iain Hartley of the University of Exeter, lead author of the paper, said: 'Our work indicates that greater plant biomass may not always translate into greater carbon storage at the ecosystem level.
'We need to better understand how the anticipated changes in the distribution of different plant communities in the Arctic affects the decomposition of the large carbon stocks in tundra soils if we are to be able to predict how arctic greening will affect carbon dioxide uptake or release in the future.'
Global warming: Birch trees in Sweden's Arctic regions appear to be stimulating the decomposition of soil organic matter, increasing the release of carbon dioxide
Dr Gareth Phoenix, of the University of Sheffield's Department Animal and Plant Sciences, who collaborated on the research, added: 'It shows that the encroachment of trees onto Arctic tundra caused by the warming may cause large release of carbon to the atmosphere, which would be bad for global warming.
'This is because tundra soil contains a lot of stored organic matter, due to slow decomposition, but the trees stimulate the decomposition of this material. So, where before we thought trees moving onto tundra would increase carbon storage it seems the opposite may be true. So, more bad news for climate change.'
It is yet to be seen whether this observed pattern is confined to certain soil conditions and colonising tree species, or whether the carbon stocks in the soils of other arctic or alpine ecosystems may be vulnerable to colonisation by new plant communities as the climate continues to warm.
| <urn:uuid:97a4cc97-413c-4700-bb2b-6064d021010f> | 3.609375 | 807 | Truncated | Science & Tech. | 17.94378 | 95,537,903 |
Java Performance Tuning
Tips April 2010
Thinking about How Much Fun It Is to Develop Multi-Threaded Apps (Page last updated April 2010, Added 2010-04-28, Author Kevin Farnham, Publisher java.net). Tips:
- The parallelisable part of a program can be speeded up with more cores, but the sequential part can't - which means the sequential code tends to dominate execution time the more parallel you make your code
- To improve execution speed of an application, you need to figure out ways or patterns to make more code execute in parallel.
- A multithreaded application can actually run slower than its purely sequential, single-threaded starting point, because you add the overhead of managing threads, potentially dividing the application's data into chunks that must be re-assembled after the parallel code has executed, and so on.
- Locks require a lot of care to implement and use properly, and they have a real cost: when a thread tries to acquire a lock that is already held, it is suspended and only awakened later when the lock becomes available, spending that time doing nothing.
- Compare-And-Swap is helpful in cases where there's something else a thread can work on if it finds that a locked resource is presently unavailable.
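A sketch of that last point with `java.util.concurrent` (the counter workload is invented):

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch: compare-and-swap instead of blocking on a lock (toy counter example). */
public class CasDemo {
    static final AtomicInteger counter = new AtomicInteger();

    /** One CAS attempt; returns false under contention so the caller can do other work. */
    static boolean tryIncrement() {
        int seen = counter.get();
        return counter.compareAndSet(seen, seen + 1);
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            int done = 0, contended = 0;
            while (done < 100_000) {
                if (tryIncrement()) done++;
                else contended++;   // a real app would do useful work here, then retry
            }
            System.out.println("contended attempts: " + contended);
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("final count: " + counter.get()); // always 200000: no lost updates
    }
}
```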
Billy Newport Discusses Parallel Programming in Java (Page last updated April 2010, Added 2010-04-28, Author Ryan Slobojan, Billy Newport, Publisher InfoQ). Tips:
- Non-blocking TCP is likely to be more efficient than a reliable transport implementation of UDP, but with the same capabilities.
- The cost of distributing work to another node can be orders of magnitude slower than just doing work on a local machine because of IO overheads. So you need to analyse the cost versus benefits to see if it is worth it.
- One of the reasons for using a data grid instead of something like Hadoop is to keep the data in memory so that it's fast. If you are going to keep all the data on disk you may as well use Hadoop.
- Hadoop is optimized for streaming data; it tries to read through a very large file but it's not loading the whole file up at once, it reads a block of records in, processes them, you get an output that is written to disk, you read the next block of records in. You are not paging but you are still working with a much bigger data set than you could in memory. A naïve programmer might read the whole file into RAM and then it could be paged by the OS, and the difference in performance would be massive.
The top Java EE best practices (Page last updated January 2007, Added 2010-04-28, Author Keys Botzum, Kyle Brown, Ruth Willenborg, Albert Wong , Publisher IBM). Tips:
- Reduce distributed communication in your application by using very large-grained "facade" objects that wrap logical subsystems and can accomplish useful business functions in a single method call (see the facade sketch after this list). This reduces network overhead, and also reduces the number of database calls by creating a single transaction context for the entire business function. The session facade should be a stateless session bean, with remote interfaces.
- EJB local interfaces provide performance optimization for co-located EJBs (local interfaces must be explicitly called by your application, requiring code changes and preventing the ability to later distribute the EJB without application changes). If you are certain the EJB call will always be local, take advantage of the optimization of local EJBs.
- For performance optimization, local interfaces can be added to the session facade.
- Use stateless session beans instead of stateful session beans (stateful solutions are not as scalable as stateless ones).
- Java EE application servers cannot load-balance requests to stateful beans but do load-balance stateless beans.
- Avoid stateful beans - to get stateless session beans user-specific state can be passed in as an argument or be retrieved as part of the EJB transaction from a persistent back-end store.
- Rely on two-phase commit transactions rather than developing your own transaction management - the container will almost always be better at transaction optimization - and can optimize for different deployments with no code changes.
- Store the minimum amount of state possible in HttpSessions - what you need for the current business transaction and no more. A good rule of thumb is under 4K.
- A common problem is in using HttpSessions to cache information that can be easily recreated - this is a very expensive decision forcing unnecessary serialization and writing of the data. Instead, use an in memory hash table to cache the data and just keep a key to the data in the session.
- Enable session persistence - the fault tolerance obtained by automatic failover of sessions by the application server is valuable for providing uninterrupted user servicing.
- Log all transition point activity (entering and exiting significant boundaries).
- One of the most common errors is memory leaks, nine times out of ten caused by forgetting to close a connection (JDBC most of the time) or return an object back into the pool.
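Two minimal sketches of the points above; all class, table, and method names are invented, not from the article. First, a coarse-grained stateless session facade that performs a whole business function in one call (assumes the EJB API on the classpath):

```java
import java.util.List;
import javax.ejb.Stateless;

/** Hypothetical coarse-grained facade: one remote call, one transaction. */
@Stateless
public class OrderFacade {
    public String placeOrder(String customerId, List<String> itemIds) {
        // In a real bean these steps would be local EJB/JPA calls, all running
        // inside the single container-managed transaction of this method:
        // validate(customerId); reserve(itemIds); persistOrder(...); bill(...);
        return "order-" + customerId + "-" + itemIds.size();
    }
}
```

Second, the connection-leak tip, using today's try-with-resources so the connection always returns to the pool:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerDao {
    private final DataSource ds;
    public CustomerDao(DataSource ds) { this.ds = ds; }

    public String findName(int id) throws SQLException {
        // All three resources are closed automatically, in reverse order,
        // whether or not an exception is thrown.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT name FROM customer WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}
```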
How better Caching helps Frankfurt's Airport Website to handle additional load caused by the Volcano (Page last updated April 2010, Added 2010-04-28, Author Andreas Grabner, Publisher DynaTrace). Tips:
- Too many resources on each page make for slow page downloads.
- Tell the browser to cache static or infrequently changing content.
- Ensure that the "expires" header is set correctly (far in the future), otherwise although the cached content is retrieved from the browser cache, the server is still queried on each retrieval, which costs almost as much as not having the content in the cache at all (see the filter sketch after this list).
- Use HTTP/1.1 and Connection: Keep-Alive.
- Gzip content for delivery.
- Minimize or preferably eliminate any redirects.
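A sketch of the caching-header advice as a Servlet filter (the one-year lifetime and the idea of mapping it to static paths only are arbitrary choices):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

/** Marks responses as cacheable far into the future; map it to static content only. */
public class StaticCacheFilter implements Filter {
    private static final long ONE_YEAR_MS = 365L * 24 * 60 * 60 * 1000;

    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse resp = (HttpServletResponse) res;
        // Far-future expiry: the browser reuses its cached copy without asking the server.
        resp.setDateHeader("Expires", System.currentTimeMillis() + ONE_YEAR_MS);
        resp.setHeader("Cache-Control", "public, max-age=31536000");
        chain.doFilter(req, res);
    }
}
```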
How MySpace Tested Their Live Site with 1 Million Concurrent Users (Page last updated March 2010, Added 2010-04-28, Author Dan Bartow, Publisher highscalability). Tips:
- Understand your application breaking points, define your capacity thresholds, and have a plan for when those thresholds are exceeded.
- Testing production infrastructure with actual anticipated load levels is the only way to understand how things will behave when peak traffic arrives.
- [The article describes the Amazon cloud resources acquired for running 1 million concurrent user requests.]
- The more you scale, the more you have to limit the statistics you collect to just the most relevant.
- For highest load generation, you probably have to stagger virtual user requests so that the load generator can spread resources across virtual users.
- For load testing across data centres, you need to generate requests from multiple geographically distinct sites so that point-of-presence servers service requests appropriately.
- For high traffic websites, testing in production is the only way to get an accurate picture of capacity and performance.
- Elastic scalability (dynamically moving resources to handle load changes) is becoming an increasingly important part of application architectures.
- Applications should be built so that critical business processes can be independently monitored and scaled.
- Keeping things loosely coupled has many benefits, and capacity and performance are quickly moving to the front of that list.
- Real-time monitoring is critical. In order to react to capacity or performance problems, you need real-time monitoring in place. This monitoring should tie in to your key business processes and functional areas, and needs to be as real time as possible.
- Performance testing online applications is about more than saturation. Opening threads and sockets that actually remain open while downloading or streaming content is where you eat up all of your capacity on a server by server basis. Downloading content takes time, and while content is downloading or streaming you have lost capacity to generate load.
- For performance testing you aren't just firing off requests to generate load and letting them go. You are recording massive amounts of performance data about every single user. How much time every hit took, bandwidth transferred, errors, and things of that nature.
What Second Life can teach your datacenter about scaling Web apps (Page last updated February 2010, Added 2010-04-28, Author Ian Wilkes, Publisher ars technica). Tips:
- Make sure all your code assumes that any component can be in any failure state at any time
- Version all interfaces such that they can safely communicate with newer and older modules
- Practice a high degree of automated fault recovery, auto-provision all resources
- Implement a working version rather than one that scales hugely - and change the implementation as it scales. A single up-front effort to achieve "right first time" is doomed to fail - and be very expensive too.
- Use the basic restraints to identify expected load (how many users, how many concurrent, how much work per concurrent user).
- Can the system be shut down at regular intervals?
- Developers can misunderstand how their code will affect the rest of the system, especially when centralized resources (e.g. databases) are abstracted (e.g. by ORM). Ask developers which resources their new feature consumes, and how much of them.
- Either load-test the system automatically or add load-testing and/or profiling hooks to internal interface layers.
- Beyond a certain size table, schema changes to MySQL may become impossible in production due to the time it takes. One solution is to create a new table each time the schema changes, and slowly migrate data to it while the system is running.
- When the database does become the bottleneck, one improvement is to partition the databases into horizontal slices of the data set (e.g. by user); another is to reduce the data going to and from the database.
- When a system is changing, the more heavily interchangeable the parts are, the more quickly the team can respond to failures or new demands. Standardise as much as possible and eliminate system specific dependencies.
- Applications can often have silent failures - only an error in the logs shows that something went wrong. Reporting statistics on error rates across the entire system allows you to identify where to target developer time to fix things - letting error rates get too high means you are likely having more and more problems until at some point the system becomes constantly unusable.
- Keep a close eye on batch jobs - they can easily spiral out of control in terms of their resource requirements, but as they run when staff are at a minimum, the problems may not be discovered until they cause serious outages.
- The frequency of alerts needing human intervention must be low and manageable or the system becomes unmaintainable (killed by its own success).
- Try to automatically handle failures rather than require human intervention.
| <urn:uuid:f799fbfe-f7c8-4837-9c03-eb3a7f64689b> | 2.53125 | 2,428 | Content Listing | Software Dev. | 37.728127 | 95,537,904 |
How Do We Protect Earth From Asteroids? Part 1 - Finding Them
Support us at: http://www.patreon.com/universetoday
More stories at: http://www.universetoday.com/
Follow us on Twitter: @universetoday
Like us on Facebook: https://www.facebook.com/universetoday
Google+ - https://plus.google.com/+universetoday/
Instagram - http://instagram.com/universetoday
Karla Thompson - @karlaii / https://www.youtube.com/channel/UCEIt...
Chloe Cain - @chloegwen2001
On the early morning of February 15, 2013, people living in the Chelyabinsk region of Russia awoke to one of the most powerful warnings in recent history. Anyone looking up saw an incredibly bright meteor streak across the sky, brighter than the Sun. Observers said they could even feel the heat of the object as it passed overhead.
Moments later, the shockwave arrived, smashing out windows across a huge region, sending almost 1,500 people to the hospital with various cuts and injuries. It was absolutely amazing that nobody died.
But what was it? According to astronomers, the Chelyabinsk meteor was probably a space rock measuring about 20 meters (or 60 feet across). It struck the Earth’s atmosphere going almost 20 kilometers per second, at such a low angle that it just detonated, raining down debris, but sparing the region the true devastation of this kind of an impact.
The Universe delivered a powerful warning that the Solar System is filled with rocks and debris left over from its formation. And those objects still continue to smash into the Earth.
In fact, one of the most terrifying things about the Chelyabinsk strike is this: the meteor was completely unknown to astronomers before it crashed into the atmosphere. The moment of impact was the moment of discovery.
Today I’m beginning a two part series all about the search for killer asteroids and comets. In part one, we’re going to talk about the risks we face. What kinds of objects are out there, how dangerous are they, and what kinds of observatories and programs are working to find the next impact event.
In part two, we’ll talk about defense. If we do find a potentially dangerous asteroid or comet, what can we do to prevent an impact? We’ll talk about the physics and engineering of moving asteroids, to make the Solar System safer.
It’s not a question of “if” an asteroid will smash into the Earth, it’s a question of “when”. In fact, material from space is impacting our atmosphere all the time. According to NASA, about 100 tonnes of rock and dust gets added to the Earth every day. Once a year, a car-sized chunk of space rock impacts the Earth, exploding as a bright fireball.
A Chelyabinsk-level event is thought to happen once every 60 years or so. In fact, there have been three other recorded events with that kind of energy release in the last century, including the 1908 Tunguska event.
Every 2,000 years or so, an object the size of a football field hits Earth, causing localized destruction. And every few million years, an object comes along that releases so much energy, it would threaten the existence of human civilization.
The problem of course, is that we don’t know when or where these events are going to happen.
And it’s this problem that astronomers are trying to solve first.
| <urn:uuid:9d70a00a-6da9-45ea-b534-aa9539cef138> | 3.203125 | 793 | Personal Blog | Science & Tech. | 60.113442 | 95,537,906 |
Do you know a quick way to check if a number is a multiple of two? How about three, four or six?
Where should you start, if you want to finish back where you started?
My two digit number is special because adding the sum of its digits to the product of its digits gives me my original number. What could my number be?
The clues for this Sudoku are the product of the numbers in adjacent squares.
Using the digits 1 to 9, the number 4396 can be written as the product of two numbers. Can you find the factors?
Can you find a way to identify times tables after they have been shifted up?
Gabriel multiplied together some numbers and then erased them. Can you figure out where each number was?
How many moves does it take to swap over some red and blue frogs? Do you have a method?
Play around with sets of five numbers and see what you can discover about different types of average...
Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?
A game that tests your understanding of remainders.
How many winning lines can you make in a three-dimensional version of noughts and crosses?
Imagine you were given the chance to win some money... and imagine you had nothing to lose...
Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit?
What is the smallest number of answers you need to reveal in order to work out the missing headers?
A country has decided to have just two different coins, 3z and 5z coins. Which totals can be made? Is there a largest total that cannot be made? How do you know?
The Tower of Hanoi is an ancient mathematical challenge. Working on the building blocks may help you to explain the patterns you notice.
Interior angles can help us to work out which polygons will tessellate. Can we use similar ideas to predict which polygons combine to create semi-regular solids?
Semi-regular tessellations combine two or more different regular polygons to fill the plane. Can you find all the semi-regular tessellations?
If you are given the mean, median and mode of five positive whole numbers, can you find the numbers?
A game for 2 or more people, based on the traditional card game Rummy. Players aim to make two 'tricks', where each trick has to consist of a picture of a shape, a name that describes that shape, and...
How many solutions can you find to this sum? Each of the different letters stands for a different number.
Six balls of various colours are randomly shaken into a trianglular arrangement. What is the probability of having at least one red in the corner?
What happens when you add a three digit number to its reverse?
Charlie and Abi put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you think?
Can you crack these cryptarithms?
How many different symmetrical shapes can you make by shading triangles or squares?
If you move the tiles around, can you make squares with different coloured edges?
Imagine you have an unlimited number of four types of triangle. How many different tetrahedra can you make?
7 balls are shaken in a container. You win if the two blue balls touch. What is the probability of winning?
Here is a machine with four coloured lights. Can you develop a strategy to work out the rules controlling each light?
Here is a machine with four coloured lights. Can you make two lights switch on at once? Three lights? All four lights?
In 15 years' time my age will be the square of my age 15 years ago. Can you work out my age, and when I had other special birthdays?
Can you do a little mathematical detective work to figure out which number has been wiped out?
Draw some isosceles triangles with an area of 9 cm² and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
Engage in a little mathematical detective work to see if you can spot the fakes.
Is this a fair game? How many ways are there of creating a fair game by adding odd and even numbers?
Generate three random numbers to determine the side lengths of a triangle. What triangles can you draw?
Move your counters through this snake of cards and see how far you can go. Are you surprised by where you end up?
A game in which players take it in turns to try to draw quadrilaterals (or triangles) with particular properties. Is it possible to fill the game grid?
Who said that adding, subtracting, multiplying and dividing couldn't be fun?
Use the differences to find the solution to this Sudoku.
Charlie likes tablecloths that use as many colours as possible, but insists that his tablecloths have some symmetry. Can you work out how many colours he needs for different tablecloth designs?
Can you find the values at the vertices when you know the values on the edges of these multiplication arithmagons?
Can you find the values at the vertices when you know the values on the edges?
A game in which players take it in turns to turn up two cards. If they can draw a triangle which satisfies both properties they win the pair of cards. And a few challenging questions to follow...
There are nasty versions of this dice game but we'll start with the nice ones...
A spider is sitting in the middle of one of the smallest walls in a room and a fly is resting beside the window. What is the shortest distance the spider would have to crawl to catch the fly?
A hexagon, with sides alternately a and b units in length, is inscribed in a circle. How big is the radius of the circle?
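One closed form consistent with the geometry is R = sqrt((a^2 + ab + b^2) / 3): the six central angles alternate between alpha and beta with 3*(alpha + beta) = 2*pi, and the chords satisfy a = 2R*sin(alpha/2) and b = 2R*sin(beta/2). Treat that formula as a conjecture to prove; the bisection below solves the angle condition numerically and compares the two answers:

```python
from math import asin, sqrt, pi

def radius(a, b):
    """Numerically solve 3*(alpha + beta) = 2*pi for the circumradius R,
    where a = 2R*sin(alpha/2) and b = 2R*sin(beta/2)."""
    lo, hi = max(a, b) / 2, 10 * (a + b)  # R is at least half the longest chord
    for _ in range(100):
        R = (lo + hi) / 2
        angle_sum = 6 * (asin(a / (2 * R)) + asin(b / (2 * R)))
        # a larger R makes the angles smaller, so steer angle_sum towards 2*pi
        lo, hi = (lo, R) if angle_sum < 2 * pi else (R, hi)
    return R

a, b = 1.0, 2.0
print(radius(a, b), sqrt((a*a + a*b + b*b) / 3))  # both are ~1.5275
```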
Can you find the hidden factors which multiply together to produce each quadratic expression? | <urn:uuid:94a8db0f-0623-47f5-8f93-b0294ed08d47> | 2.984375 | 1,301 | Content Listing | Science & Tech. | 68.22438 | 95,537,912 |
Several recent papers, including one in BMC Evolutionary Biology, examine the colonization history of house mice. As well as providing background for the analysis of mouse adaptation, such studies offer a perspective on the history of movements of the humans that accidentally transported the mice.
See research article: http://www.biomedcentral.com/1471-2148/10/325
Keywords: House Mouse, Colonization History, Northeastern Atlantic Ocean, Western Subspecies, Overland Route
Commensals of humans are likely to share the human global distribution. Being easily noticed, and surviving and reproducing well in environments that humans create, they also include some of the most favored model organisms. A prime example of this is the house mouse, Mus musculus, which is both the 'classic' mammalian model organism and a globally present commensal. Through its association with humans, the house mouse is even found in the remotest archipelagos, such as Kerguelen, a group of sub-Antarctic islands with a mean summer temperature as low as 8°C. It is the mouse populations inhabiting this inhospitable place that are the focus of a study by Hardouin et al. [1] in BMC Evolutionary Biology.
With all the genomic tools available, there is currently a scramble to study the genetics of adaptation in house mice. What better place to study that than Kerguelen? Here, human occupancy is restricted to the few inhabitants of a research station on the main island (Grande Terre). The mice on these islands live outdoors in extreme conditions and, in contrast to the typical seed-eating of house mice elsewhere, they feed primarily on invertebrates. To understand the adaptations for this exceptional lifestyle, it is important to know something about the history of the mice. In particular: where did the mice come from? Is it a genetically mixed population? Is the population young or old? Hardouin et al. investigated all these questions for Kerguelen house mice, which belong to the western subspecies Mus musculus domesticus.
The Kerguelen study
Hardouin et al.'s study involved 437 mice from Kerguelen, unprecedented coverage for an analysis of colonization history in such a small area. They found remarkable consistency in the mitochondrial DNA (mtDNA) sequences on Grande Terre and most of the surrounding small islands, suggesting that these populations are the product of a single relatively recent colonization (ultimately deriving from Europe). This fits with the recorded discovery of the archipelago in 1772 (by a Frenchman called Kerguelen-Trémarec) and settlement by mice either at the time or with subsequent human arrivals. Two of the other small islands in the archipelago (Cochons and Cimetière) may have been colonized in a second, separate introduction, as their mice belong to a different mtDNA lineage (also ultimately European). Over the archipelago as a whole there was no evidence of within-island heterogeneity in terms of mtDNA lineage. This is surprising given the large number of ships carrying mice that would have visited the islands (coming from many different places and therefore carrying mice of many different mtDNA lineages). These results are consistent with other data suggesting that mouse populations are resistant to secondary invasion by females (mtDNA is a maternally inherited marker). Presumably, newly arriving females coming into an established population are generally unable to survive or gain mates, and in consequence do not contribute to the population's gene pool. All of this means that mtDNA may be a very good marker for initial colonization by house mice within a given area.
Studies on European mice
Studies by ourselves and others have looked at the mtDNA lineages of house mice in northern Europe. A different lineage from those typically seen in Mediterranean Europe has been found further north in the area between Britain and Germany [2, 5]. The mouse mtDNA again matches a regional sphere of influence of Iron Age people [6], and, unlike other mouse mtDNA lineages, it appears that this Anglo-German lineage did not arrive in northern Europe by an overland route; instead it probably came along the Atlantic coast (Figure 1).
mtDNA studies suggest another pulse of detectable mouse colonizations during Viking times (Figure 1). Like the Phoenicians, the Vikings were impressive seafarers, carrying substantial cargoes ideal for stowaway mice, and there are mtDNA signals of maritime colonization events [2, 5, 7, 8].
How did Mus musculus domesticus get from Europe to Kerguelen? This subspecies had been in the right place at the right time to make use of the first storage of grain by Neolithic humans in the Fertile Crescent in the Middle East, and to adapt to changing human cultural practices. Good fortune struck again when the subspecies found itself in western Europe at the time that British, Dutch, French and Iberian seafarers were 'discovering', exploiting and taking settlers to the rest of the world. Kerguelen-Trémarec and his crew may have been the first humans to see the archipelago that now bears his name, but the colonization route of the first mice to arrive there is still uncertain, although their starting point was certainly western Europe (Figure 1).
Mice as a proxy for human history
It is intriguing how far the linkage between human history and mouse history may go. Jones et al. found a correlation between mouse genetic diversity and human population size (proportional to amount of mouse habitat) in discrete areas of the Faroe Islands in the northeastern Atlantic Ocean, another archipelago where house mice have been studied. This supports the expectation that the population genetics (in terms of genetic response to population expansions and contractions) of house mice is likely to reflect rather closely the population genetics of humans.
We have been considering how the history of humans impacts on the genetics of the house mouse, but that can be turned around. If the history of house mice is so intimately determined by humans, then the genetics of house mice may be useful to answer human historical questions; for example, the details of human affiliations in the Iron Age are sometimes imprecise - might house mice be able to indicate associations between Iron Age people from different geographical areas? House mice are equivalent to an artifact that an archeologist discovers and uses to determine human colonization or trading routes. The provenance of the mice is established from their DNA sequence, and that is a very powerful tool, given its extraordinary information content. Not only can the DNA sequence help to establish the source of the mice found in a particular place, but it can also be used to date the original colonization and subsequent population history (including secondary colonizations), following approaches used for human DNA. However, it is clear from all the recent papers considered here [1, 3, 5, 7, 8] that archeogenetics using house mice is at an early stage, and that, in particular, calibrations to generate an accurate mouse mtDNA molecular clock are urgently needed. Hardouin et al. comment that, for the mtDNA region analyzed, they found a much higher mutation rate than suggested by previous studies. Further work should follow up this finding and also use other subspecies to globalize the opportunities for applying mice as a proxy to study humans, following the lead of another recent paper.
SIG was funded by the scholarship SFRH/BD/21437/2005 from Fundação para a Ciência e a Tecnologia (Portugal), FJ and JBS by Cornell University (USA) and EPJ by the Carl Trygger Foundation (Sweden). We are grateful to Angela Douglas for her comments on the manuscript.
- 1.Hardouin EA, Chapuis JL, Stevens MI, van Vuuren BJ, Quillfeldt P, Scavetta RJ, Teschke M, Tautz D: House mouse colonization patterns on the sub-Antarctic Kerguelen Archipelago suggest singular primary invasions and resilience against re-invasion. BMC Evol Biol 2010, 10:325.
- 2.Searle JB, Jones CS, Gündüz İ, Scascitelli M, Jones EP, Herman JS, Rambau RV, Noble LR, Berry RJ, Giménez MD, Jóhannesdóttir F: Of mice and (Viking?) men: phylogeography of British and Irish house mice. Proc R Soc B 2009, 276:201-207. doi:10.1098/rspb.2008.0958.
- 3.Bonhomme F, Orth A, Cucchi T, Rajabi-Maham H, Catalan J, Boursot P, Auffray JC, Britton-Davidian J: Genetic differentiation of the house mouse around the Mediterranean basin: matrilineal footprints of early and late colonization. Proc Biol Sci 2010.
- 5.Jones EP, Jóhannesdóttir F, Gündüz İ, Richards MB, Searle JB: The expansion of the house mouse into North-western Europe. J Zool.
- 6.Cunliffe BW: Iron Age communities in Britain: an account of England, Scotland and Wales from the seventh century BC until the Roman conquest. 4th edition. London, UK: Routledge; 2004.
- 7.Jones EP, van der Kooij J, Solheim R, Searle JB: Norwegian house mice (Mus musculus musculus/domesticus): distributions, routes of colonization and patterns of hybridization. Mol Ecol.
- 8.Jones EP, Jensen JK, Magnussen E, Gregersen N, Hansen HS, Searle JB: A molecular characterization of the charismatic Faroe house mouse. Biol J Linn Soc.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | <urn:uuid:8cccd479-33db-43eb-bc5c-d08491d93e7a> | 3.390625 | 2,134 | Truncated | Science & Tech. | 33.96562 | 95,537,925 |
Abstract: Aircraft observations in a cold-air outbreak to the north of the United Kingdom are used to examine the boundary layer and cloud properties in an overcast mixed-phase stratocumulus cloud layer and across the transition to more broken open-cellular convection. The stratocumulus cloud is primarily composed of liquid drops with small concentrations of ice particles and there is a switch to more glaciated conditions in the shallow cumulus clouds downwind. The rapid change in cloud morphology is accompanied by enhanced precipitation with secondary ice processes becoming active and greater thermodynamic gradients in the subcloud layer. The measurements also show a removal of boundary layer accumulation mode aerosols via precipitation processes across the transition that are similar to those observed in the subtropics in pockets of open cells. Simulations using a convection-permitting (1.5-km grid spacing) regional version of the Met Office Unified Model were able to reproduce many of the salient features of the cloud field although the liquid water path in the stratiform region was too low. Sensitivity studies showed that ice was too active at removing supercooled liquid water from the cloud layer and that improvements could be made by limiting the overlap between the liquid water and ice phases. Precipitation appears to be the key mechanism responsible for initiating the transition from closed- to open-cellular convection by decoupling the boundary layer and depleting liquid water from the stratiform cloud.
Journal of the Atmospheric Sciences – American Meteorological Society
Published: Jul 15, 2017
| <urn:uuid:b5b22644-d698-40f0-89f2-576f388afa08> | 2.921875 | 439 | Truncated | Science & Tech. | 29.091439 | 95,537,927 |
Researching peatlands for restoration
Letting water back in and fertilizing the soil may help get peat plots back on track.
A faint buzz reverberates over open plots of harvested peat above a team of researchers in southeastern Manitoba. At first the sound could be confused with one of the province’s legendary mosquitoes hovering near the ear. But this mechanical pitch comes from a drone far overhead.
Pete Whittington, a hydrogeologist from Brandon University, stands on the ground with a remote control, taking photos using the aerial drone. Whittington and a group of fellow researchers from across the country are planning research in the area and the drone will allow the team to see the layout from above.
Stretching a few hundred metres out from the road are rectangular peat plots, peppered with chunks of decomposing wood and hemmed in by the dense vegetation of the boreal forest. From his bird’s eye view, a few bright green plots stick out where vegetation has been re-established by rewetting the site.
Neighbouring the plots are bogs and fens full of mosses, shrubs, sedge grasses and stunted trees. Underneath the dense vegetation are deep peat deposits that are full of water. So much you can wring it out of the spongy sphagnum moss.
When these plants die they are unable to fully decompose in the saturated ground. Over thousands of years, this slow accumulation of partially decomposed mosses forms peat – a carbon-rich soil found at every garden centre in the country.
But to harvest the peat, producers must drain these wetlands, and when they’ve harvested all the good stuff, dry, open plots remain. Exposed to oxygen, the remaining ancient plant matter finally breaks down. All around Whittington, the spent peat harvest sites are releasing a slow, invisible seep of carbon.
Though the carbon is invisible, the greenhouse effect is not. It’s been a hot, dry spring in Manitoba and climate change is on everyone’s lips. It’s why the scientists are here. They want to figure out how to help get this ecosystem back to work for the climate, so it can take carbon out of the atmosphere and bury it underground once more.
“We’re looking for ways to set these peat bogs back to a more natural state,” says Pascal Badiou with DUC’s Institute for Wetlands and Waterfowl Research. “We want to find a way to bring back the vegetation at these sites so they start storing that carbon again.”
Images from the drone show the researchers have a head start. Operations staff say when they built dams to direct water back into the bog, they built one in the wrong place. One retired field didn’t get rewetted, and it is noticeably less green than the others.
“That’s perfect!” says an excited Line Rochefort, a peatland expert from Université Laval’s Peatland Ecosystem Research Group. “Without knowing it, you’ve been running an experiment for us for nine years.”
The researchers will study how letting water back in and fertilizing the soil may help get the peat plots back on track. They'll see how this might help peat producers rehabilitate their expired plots. DUC will evaluate the effects of this rewetting and fertilization on streams and lakes downstream. It's part of a partnership with the Canadian Sphagnum Peat Moss Association to support conservation and sustainable practices for peat harvesting and restoration.
“Vast swathes of the boreal forest are peatlands,” says Badiou. “Their health and the health of the planet go hand-in-hand.”
Read These Stories Next
Habitat restoration project provides business and environmental benefits at Alberta cattle farm.
School field trips connect Alberta youngsters to nature. | <urn:uuid:72a49c91-1f0f-4fce-a65c-c631b6a8101b> | 3.90625 | 844 | Truncated | Science & Tech. | 53.667292 | 95,537,934 |