text stringlengths 174 655k | id stringlengths 47 47 | score float64 2.52 5.25 | tokens int64 39 148k | format stringclasses 24 values | topic stringclasses 2 values | fr_ease float64 -483.68 157 | __index__ int64 0 1.48M |
|---|---|---|---|---|---|---|---|
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of Resolved status.
Section: 23.9 [numeric.ops] Status: Resolved Submitter: Chris Jefferson Opened: 2011-01-01 Last modified: 2018-01-10
View all other issues in [numeric.ops].
View all issues with Resolved status.
The C++0x draft says std::accumulate uses: acc = binary_op(acc, *i).Eelis van der Weegen has pointed out, on the libstdc++ mailing list, that using acc = binary_op(std::move(acc), *i) can lead to massive improvements (particularly, it means accumulating strings is linear rather than quadratic). Consider the simple case, accumulating a bunch of strings of length 1 (the same argument holds for other length buffers). For strings s and t, s+t takes time length(s)+length(t), as you have to copy both s and t into a new buffer. So in accumulating n strings, step i adds a string of length i-1 to a string of length 1, so takes time i. Therefore the total time taken is: 1+2+3+...+n = O(n2) std::move(s)+t, for a "good" implementation, is amortized time length(t), like vector, just copy t onto the end of the buffer. So the total time taken is: 1+1+1+...+1 (n times) = O(n). This is the same as push_back on a vector. I'm trying to decide if this implementation might already be allowed. I suspect it might not be (although I can't imagine any sensible code it would break). There are other algorithms which could benefit similarly (inner_product, partial_sum and adjacent_difference are the most obvious). Is there any general wording for "you can use rvalues of temporaries"? The reflector discussion starting with message c++std-lib-29763 came to the conclusion that above example is not covered by the "as-if" rules and that enabling this behaviour would seem quite useful.
[ 2011 Bloomington ]
Moved to NAD Future. This would be a larger change than we would consider for a simple TC.
[2017-02 in Kona, LEWG responds]
Like the goal.
What is broken by adding std::move() on the non-binary-op version?
A different overload might be selected, and that would be a breakage. Is it breakage that we should care about?
We need to encourage value semantics.
Need a paper. What guidance do we give?
Use std::reduce() (uses generalized sum) instead of accumulate which doesn’t suffer it.
Inner_product and adjacent_difference also. adjacent_difference solves it half-way for “val” object, but misses the opportunity for passing acc as std::move(acc).
[2017-06-02 Issues Telecon]
Ville to encourage Eelis to write a paper on the algorithms in <numeric>, not just for accumulate.
Howard pointed out that this has already been done for the algorithms in <algorithm>
Status to Open; Priority 3
This was resolved by the adoption of P0616r0. | <urn:uuid:93893edf-2f1b-4e9c-a1d7-72bad7f788e9> | 2.53125 | 720 | Comment Section | Software Dev. | 65.023575 | 95,501,636 |
How many times will increase if the surface area of the cube if we triple length of its edge?
Leave us a comment of example and its solution (i.e. if it is still somewhat unclear...):
Showing 0 comments:
Be the first to comment!
To solve this example are needed these knowledge from mathematics:
Next similar examples:
- Cube 3
How many times will increase the volume of a cube if we double the length of its edge?
How many times increases the surface area of a cube with edge 23.4 cm if the length of the edge doubled?
- Cube edges
The sum of the lengths of the cube edges is 42 cm. Calculate the surface of the cube.
- Cube surface area
Wall of cube has content area 99 cm square. What is the surface of the cube?
- The cube - simple
Calculate the surface of the cube measuring 15 centimeters.
Sick Marcel already taken six tablets, which was a quarter of the total number of pills in the pack. How many pills were in the pack?
- Farmers 2
On Wednesday the farmers at the Grant Farm picked 2 barrels of tomatoes. Thursday, the farmers picked 1/2 as many tomatoes as on Wednesday. How many barrels of tomatoes did the farmers pick on Thursday?
Alley measured a meters. At the beginning and end are planted poplar. How many we must plant poplars to get the distance between the poplars 15 meters?
- Unknown number 11
That number increased by three equals three times itself?
Triangle whose sides are midpoints of sides of triangle ABC has a perimeter 45. How long is perimeter of triangle ABC?
- One-third 2
One-third of the people in a barangay petitioned the council to allow them to plant in vacant lots and another 1/5 of the people petitioned to have a regular garbage collection. What FRACTION of the barangay population made the petition?
Average of 7 numbers is 65. What is its sum?
The following numbers round to the thousandth:
The rectangle is divided into seven fields. On each box is to write just one of the numbers 1, 2 and 3. Mirek argue that it can be done so that the sum of the two numbers written next to each other was always different. Zuzana (Susan) instead argue that.
- Pizza 4
Marcus ate half pizza on monday night. He than ate one third of the remaining pizza on Tuesday. Which of the following expressions show how much pizza marcus ate in total?
The hotel has a p floors each floor has i rooms from which the third are single and the others are double. Represents the number of beds in hotel.
- Dropped sheets
Three consecutive sheets dropped from the book. The sum of the numbers on the pages of the dropped sheets is 273. What number has the last page of the dropped sheets? | <urn:uuid:70705570-981d-4f3e-a32b-1d4031ab2de5> | 3.453125 | 606 | Tutorial | Science & Tech. | 68.574659 | 95,501,641 |
University of Tokyo researchers discovered an increase in a helium isotope during a ten-year period before the 2014 Mount Ontake eruption in central Japan. The finding suggests this anomaly is related to activation of the volcano’s magma system and could be a valuable marker for long-term risk mitigation.
Possibility of long-term risk mitigation
University of Tokyo researchers discovered an increase in a helium isotope during a ten-year period before the 2014 Mount Ontake eruption in central Japan. The finding suggests that this helium isotope anomaly is related to activation of the volcano’s magma system and could be a valuable marker for the long-term risk mitigation concerning volcanic eruptions.
Small quantities of the isotope helium-3 are present in the mantle, while helium-4 is produced in the crust and mantle by radioactive decay. A higher ratio of helium-3 to helium-4 therefore indicates that a sample of helium gas originates from the mantle rather than the crust. Previous research suggested that variation of helium isotopic ratios over time in crater fumaroles and hot springs correlates well with volcanic activity.
However, helium anomalies reported in these studies were all related to magmatic eruptions, and not to hydro-volcanic or phreatic eruptions, caused when a heat source such as magma vaporizes water to steam. Because phreatic eruptions are highly local phenomena, they are extremely difficult to predict. Mount Ontake, which erupted unexpectedly on September 27, 2014 just before noon, is believed to have been a phreatic eruption, and resulted in 58 deaths with 5 still missing.
An international research group lead by Professor Yuji Sano at the Atmosphere and Ocean Research Institute, the University of Tokyo, found that prior to the 2014 eruption, the helium-3 to helium-4 ratio at the hot spring closest to the volcanic cone increased significantly from June 2003 to November 2014, while that at distant hot springs showed no significant change. In addition, the helium isotopic ratios of the closest hot spring remained constant from November 1981 to June 2000.
These findings suggest that helium anomalies are also associated with phreatic eruptions. The research group suggests that increased input of magmatic gas over a ten-year period resulted in the slow pressurization of the volcanic conduit and eventually lead to the eruption.
“We were aware that helium isotopic ratios of the closest hot spring increased significantly from June 2003 to July 2009. At that time we did not understand the reason behind it,” recalls Sano. He adds, “Our findings suggest that the anomaly was related to the 2014 eruption and may have been a precursor. Although this new research does not offer a way to predict an eruption in the short-term, it offers a guide that may be useful for long-term risk management and disaster mitigation.”
Global study of world's beaches shows threat to protected areas
19.07.2018 | NASA/Goddard Space Flight Center
NSF-supported researchers to present new results on hurricanes and other extreme events
19.07.2018 | National Science Foundation
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
19.07.2018 | Earth Sciences
19.07.2018 | Power and Electrical Engineering
19.07.2018 | Materials Sciences | <urn:uuid:a3fe3d6c-2320-431b-bcab-0d800c08c352> | 3.453125 | 1,211 | Content Listing | Science & Tech. | 38.806919 | 95,501,642 |
Geoscientists have long puzzled over the mechanism that created the Tibetan Plateau, but a new study finds that the landform's history may be controlled primarily by the strength of the tectonic plates whose collision prompted its uplift. Given that the region is one of the most seismically active areas in the world, understanding the plateau's geologic history could give scientists insight to modern day earthquake activity.
The new findings are published in the journal Nature Communications.
Even from space, the Tibetan Plateau appears huge. The massive highland, formed by the convergence of two continental plates, India and Asia, dwarfs other mountain ranges in height and breadth. Most other mountain ranges appear like narrow scars of raised flesh, while the Himalaya Plateau looks like a broad, asymmetrical scab surrounded by craggy peaks.
A topographic map of the area around the Tibetan Plateau, left, and the map view of the composite strong and weak Asian plate model, right. The composite plate strength model -- with the Asian plate stronger in the west (Tarim Basin) and weaker to the east -- results in a topography that is similar to what exists today.
Graphic courtesy of Lin Chen
"The asymmetric shape and complex subsurface structure of the Tibetan Plateau make its formation one of the most significant outstanding questions in the study of plate tectonics today," said University of Illinois geology professor and study co-author Lijun Liu.
In the classic model of Tibetan Plateau formation, a fast-moving Indian continental plate collides head-on with the relatively stationary Asian plate about 50 million years ago. The convergence is likely to have caused the Earth's crust to bunch up into the massive pile known as the Himalaya Mountains and Tibetan Plateau seen today, but this does not explain why the plateau is asymmetrical, Liu Said.
"The Tibetan Plateau is not uniformly wide," said Lin Chen, the lead author from the Chinese Academy of Sciences. "The western side is very narrow and the eastern side is very broad -- something that many past models have failed to explain."Many of those past models have focused on the surface geology of the actual plateau region, Liu said, but the real story might be found further down, where the Asian and Indian plates meet.
"There is a huge change in topography on the plateau, or the Asian plate, while the landform and moving speed of the Indian plate along the collision zone are essentially the same from west to east," Liu said. "Why does the Asian plate vary so much?"
To address this question, Liu and his co-authors looked at what happens when tectonic plates made from rocks of different strengths collide. A series of 3-D computational continental collision models were used to test this idea.
"We looked at two scenarios -- a weak Asian plate and a strong Asian plate," said Liu. "We kept the incoming Indian plate strong in both models."
When the researchers let the models run, they found that a strong Asian plate scenario resulted in a narrow plateau. The weak Asian plate model produced a broad plateau, like what is seen today.
"We then ran a third scenario which is a composite of the strong and weak Asian plate models," said Liu. "An Asian plate with a strong western side and weak eastern side results in an orientation very similar to what we see today."
This model, besides predicting the surface topography, also helps explain some of the complex subsurface structure seen using seismic observation techniques.
"It is exciting to see that such a simple model leads to something close to what we observe today," Liu said. "The location of modern earthquake activity and land movement corresponds to what we predict with the model, as well."
The Strategic Priority Research Program (B) of the Chinese Academy of Sciences, the National Key Research and Development Project and the National Natural Science Foundation of China supported this study.
To reach Lijun Liu, call 217-300-0378; email@example.com
Lois E Yoksoulian | EurekAlert!
Global study of world's beaches shows threat to protected areas
19.07.2018 | NASA/Goddard Space Flight Center
NSF-supported researchers to present new results on hurricanes and other extreme events
19.07.2018 | National Science Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:2be51b03-9363-4d7e-a0f5-7bec40a72164> | 3.953125 | 1,411 | Content Listing | Science & Tech. | 42.237572 | 95,501,655 |
System Html Template
HTML::Template::Extension::DOC are plugins for comments in template. SYNOPSIS use HTML::Template::Extension; my $text = qq | < HTML >< HEAD >< /HEAD >< BODY > < H1 >This is a template example...< /H1 > < TMPL_DOC >An example use of TMPL_DOC tag < /TMPL_DOC > The sum between 1+1 is: <...
|License: Freeware||Size: 15.36 KB||Download (82): HTML::Template::Extension::DOC Download|
HTML::Template::Extension::SLASH_VAR is a plugins for < /TMPL_VAR > syntax. SYNOPSIS use HTML::Template::Extension; my $text = qq | SLASH_VAR plugin example ======================== If all is ok you can read this here --> < TMPL_VAR NAME="test" >a placeholder< /TMPL_VAR > |; my...
|License: Freeware||Size: 15.36 KB||Download (73): HTML::Template::Extension::SLASH_VAR Download|
HTML::Template::Set is a HTML::Template extension that adds set support. SYNOPSIS in your HTML: < TMPL_SET NAME="handler" >apples_to_oranges< /TMPL_SET > < TMPL_SET NAME="title" >Apples Are Green< /TMPL_SET > < HTML > < HEAD > < TITLE >< TMPL_VAR NAME="title" >< /TITLE > < /HEAD > <...
|License: Freeware||Size: 9.22 KB||Download (72): HTML::Template::Set Download|
HTML::Template module attempts to make using HTML templates simple and natural. HTML::Template library extends standard HTML with a few new tags for variables, loops, if/else blocks and includes. A file written with HTML and these new tags is called a template. Using this module you fill in...
|License: Freeware||Size: 62.46 KB||Download (84): HTML::Template Download|
HTML::Template::Expr module provides an extension to HTML::Template which allows expressions in the template syntax. HTML::Template::Expr module is purely an addition--all the normal HTML::Template options, syntax, and behaviors will still work. Expression support includes comparisons, math...
|License: Freeware||Size: 18.43 KB||Download (118): HTML::Template::Expr Download|
HTML::Template::JIT is a just-in-time compiler for HTML::Template. Templates are compiled into native machine code using Inline::C. When using HTML::Template::JIT, the compiled code is stored to disk and reused on subsequent calls. HTML::Template::JIT is up to 8 times as fast as HTML::Template...
|License: Freeware||Size: 30.72 KB||Download (115): HTML::Template::JIT Download|
HTML::Template::HashWrapper provides a simple way to use arbitrary hash references (and hashref-based objects) with HTML::Template's associate option. new($ref) returns an object with a param() method which conforms to HTML::Template's expected interface: * param($key) returns the value of...
|License: Freeware||Size: 10.24 KB||Download (18): HTML::Template::HashWrapper Download|
HTPL is a PHP 5 HTML template system with the following features: hierarchical structure with inheritance, direct database access, application and session wide caching, multiple language support, extendable by user defined functions.
Platforms: Windows, Mac, Linux
|License: Freeware||Size: 272.6 KB||Download (28): HTPL - PHP 5 HTML Template System Download|
Anyone who has worked with web templates, knows how difficult it can be to see all the templates you've downloaded. That's what HTML Template Browser is for. Point it at a directory and it will find all the main pages of a template (index.html, default.htm, etc.) and list them in a browse window....
|License: Freeware||Size: 1.17 MB||Download (195): HTML Template Browser Download|
Clearsilver is a fast, powerful, and language-neutral HTML template system. In both static content sites and dynamic HTML applications, ClearSilver CGI Kit provides a separation between presentation code and application logic which makes working with your project easier. The design of...
|License: Freeware||Size: 583.68 KB||Download (74): ClearSilver CGI Kit Download|
HTML::DWT is a Perl module with DreamWeaver HTML Template. INSTALLATION Unzip/tar the archive: tar xvfz HTML-DWT-2.08 Create the makefile perl Makefile.PL Make the module (must have root access to install) make make test make install SYNOPSIS use HTML::DWT; $template = new...
|License: Freeware||Size: 17.41 KB||Download (86): HTML::DWT Download|
This module attempts to make using HTML templates simple and natural. It extends standard HTML with a few new HTML-esque tags - , , , , and . The file written with HTML and these new tags is called a template. It is usually saved separate from your script - possibly even created by someone else!...
Platforms: Windows, Mac, *nix, Perl, BSD Solaris
|License: Freeware||Download (28): HTML::Template Download|
This module extends HTML::Template::Pro to easily support methods and tags not implemented in parent module. All plugins live in the H::T::P::Extension namespace and you can built your own extension to support you prefered tags and functionality.
Platforms: Windows, Mac, BSD, Linux
|License: Freeware||Size: 13.43 KB||Download (28): Extension for HTML::Template::Pro module Download|
Template::Library::HTML is a template library for building basic HTML pages. NOTE: This documentation is incomplete and may be incorrect in places. The html template library is distributed as part of the Template Toolkit. It can be found in the templates sub-directory of the installation...
|License: Freeware||Size: 778.24 KB||Download (93): Template::Library::HTML Download|
cPhabCMS is a content management system that combines a flexible module based PHP application, with HTML template support for ease of use.
|License: Shareware||Cost: $0.00 USD||Size: 122.88 KB||Download (19): PhabCMS Download|
Template Lite is a very fast, small HTML template engine written in PHP. The engine supports most of the Smarty template engine functions and filters. Template Lite has many advantages over other template engines, as well as a few disadvantages. - Template Lite is a stripped down version of...
Platforms: Windows, Mac, *nix, PHP, BSD Solaris
|License: Freeware||Download (92): Template Lite Download|
KubeSupport is a free PHP project to manage your customer support features for easy to edit HTML Template files and language files. You are allowed to either open a new support ticket or check on existing tickets. Detailed manuals provided for you to solve whatever problem you may encounter....
|License: Freeware||Size: 327.68 KB||Download (19): KubeSupport Download|
Debian::Package::HTML is a Perl module that generates a webpage information (and Linda/Lintian checks) about a Debian binary or source package using HTML::Template SYNOPSIS use strict; use Debian::Package::HTML; my $package = Debian::Package::HTML->new(%packageHash);...
|License: Freeware||Size: 5.12 KB||Download (78): Debian::Package::HTML Download|
Apache2::PageKit is a MVCC web framework using mod_perl, XML and HTML::Template. SYNOPSIS In httpd.conf SetHandler perl-script PerlSetVar PKIT_ROOT /path/to/pagekit/files PerlSetVar PKIT_SERVER staging PerlHandler +Apache2::PageKit Apache2::PageKit->startup(/path/to/pagekit/files,...
|License: Freeware||Size: 122.88 KB||Download (71): Apache2::PageKit Download|
This incredible form has everything, it has its own autoresponse system, html format and what's more you can include a image signature in order to give a professional touch to your site.
Platforms: Not Applicable
|License: Commercial||Cost: $10.00 USD||Download (27): Html form with image signature Download| | <urn:uuid:8145940f-a566-4f8b-a18c-e781c5956fe9> | 2.671875 | 1,867 | Content Listing | Software Dev. | 59.450212 | 95,501,704 |
3D Printing – Applications for Space Exploration
Published: Last Edited:
Disclaimer: This essay has been submitted by a student. This is not an example of the work written by our professional essay writers. You can view samples of our professional work here.
Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UK Essays.
- Puneet Bhalla
3D Printing or Additive Manufacturing (AM) was first tested in 1983 by inventor Chuck Hull. Conventional "subtractive manufacturing" involves carving out items from a single block of material, whereas AM involves adding plastic or metal layer by layer according to a computer generated design to manufacture a product. Over the years a number of processes that differ in the method of depositing of layers and their binding have been developed. The technology in the earlier years did not evolve enough for it to find mainstream support and its use was restricted to production of computer generated models and prototype research. Advances in metallurgy, miniaturisation and processing have now made it a more viable competitor to conventional manufacturing. It is even being called the third industrial revolution.
Commercial enterprises having recognised the transformative potential of 3D printing, both in designing and manufacturing, are increasingly investing in it. It allows faster design iterations, providing flexibility for refinements and variations and produces more accurate 3D scaled models for testing. This helps in accelerating product development and manufacturing with corresponding cost benefits. It helps overcome constraints of conventional manufacturing and allows for more precision in manufacturing to produce more complex parts. The process allows for more cohesive structures and components can be constructed using much fewer parts, making them lighter, sturdier and more efficient. Large factories with their assembly lines can also be done away with. Existing parts can now be redesigned and designers can be more audacious in their pursuits, stepping beyond the constraints of conventional design and manufacturing, while seeking innovative solutions or entirely new capabilities. The manufacturing process requires less material, reduces wastage during production and is more energy efficient, making it potentially more environment friendly. Objects can be created on demand, thereby eliminating costs, logistical complexities and wastages related to surplus inventories. Initial printers were capable of handling single materials only but the multi-jet technology is allowing combining of materials to produce varied material properties – mechanical, thermal and chemical. Nanotechnology coupled with 3D printing promises exciting opportunities in the future. Already, availability of cheaper printers has made the power of designing and producing publicly available. This democratising of manufacturing has the potential to revolutionise innovation. Market researcher Gartner forecasts that worldwide spending on 3D printing will rise from $1.6 billion in 2015 to around $13.4 billion in 2018. Despite the excitement, there are experts who say that the technology might only evolve to supplement the conventional mass manufacturing methods that will continue to be faster and cheaper. They instead favour its suitability for niche and customised production.
Space exploration has always been costly due to its requirement of low volume, customised and at times unique components. 3D printing is being seen by the space industry as enabling to the development of future space infrastructure. Various R&D efforts both for ground based as also in orbit manufacturing are being supported with an aim to develop parts that could meet the stringent high performance and high reliability criteria required for space operations. NASA along with US rocket engine maker Aerojet Rocketdyne has successfully tested a rocket engine injector and an advanced rocket engine thrust chamber assembly using copper alloy materials, in different configurations. The components proved themselves in tests where they were subjected to pressures of up to 1,400 pounds per square inch and temperatures up to 6,000 degrees Fahrenheit to produce 20,000 pounds of thrust. NASA has claimed that 3D technology enabled designers to create more complex injectors while at the same time reducing the number of parts from 115 to just two. This resulted in more efficient processes and also provided better thermal resilience. While the traditionally constructed injectors cost about $10,000 each and took six months to build, the 3D printed versions cost less than $5,000 and reached the test stand in a matter of weeks. These tests have provided confidence in the technology and paved the way for its use in replacing other complex engine components.
Already, many small 3D produced parts are flying in space onboard US and European satellites and more are being developed. ESA and European Commission’s Additive Manufacturing Aiming Towards Zero Waste & Efficient Production of High-Tech Metal Products (AMAZE) project, has 28 European companies as partners that are looking at perfecting 3D printing of high quality metal components for aerospace applications. NASA is also evaluating using the technology for manufacturing composite CubeSats. China has also started investing in this technology and on its last manned space mission in 2013, their taikonauts occupied customised 3D printed seats. In December 2014, Chinese scientists have claimed to have produced a 3D printing machine, which could be used during space missions. Private companies the world over are investing heavily in the technology for aerospace applications.Japanese Space Agency JAXA along with Mitsubishi is working at producing 3D components for a new large-scale rocket that the two are expected to develop by 2020. Swiss company RUAG Space has built an antenna support for an Earth observation (EO) satellite that will replace a conventionally manufactured one after tests. The engine chamber of SuperDraco thruster to be used on the crew version of SpaceX’s Dragon spacecraft, capable of producing 16,000 pounds of thrust, is manufactured using 3D printing. A team of engineering students from the University of Arizona, with help from 3D printing company Solid Concepts, recently assembled a 3D printed rocket within a day and successfully tested it. Planetary Resources, a private company seeking space exploration and asteroid mining has collaborated with a company, 3D Systems for developing and manufacturing components for its ARKYD Series of spacecraft using its advanced 3D printing and digital manufacturing solutions.All these efforts are providing solutions that are cheaper, have lesser parts and have comparatively shorter developmental timelines.
In the future, the technology could be used for entire structure fabrication that would involve integrating many of the system’s geometries into structural elements during production. This would reduce the number of parts, eliminate most joints or welds, simplify the design and production, reduce the number of interfaces and make the system more efficient and safer. Such vehicles would better sustain the rigours of launch and space exploration. Integrated structures would even enable reconceptualising space architectures, impacting on their design, sizes and functionality.
The most exciting opportunity is 3D printing of objects in space – an idea that has the potential to cause a paradigm change in the way we look at space exploration. The concept has been debated for decades and NASA has also conducted some experiments since theSkylab space stationof the 1970s. In 2010, it collaborated with a US company Made in Space to develop and test a 3D printer that could operate in microgravity aboard the International Space Station. The microwave oven sized printer, previously tested on suborbital flights, was installed on board the station on 17 November. After two calibration tests, on 24 November 2014, on command from the ground controllers, the printer produced the first 3D object in microgravity. The object was a faceplate of the printer itself, demonstrating that the printer could make replacement parts for itself. Initial results have shown that layer bonding might be different in microgravity, but this would have to be substantiated by further testing on more such produced parts in the future. These parts will subsequently be returned to Earth where they will be compared with similar samples made by the same printer before launch and also analysed for effects of microgravity on them. This would help in evaluating the variance and possible advantages of additive manufacturing in space and in defining the roadmap for future developments. Meanwhile, Europe's POP3D Portable On-Board Printer designed and built in Italy is also scheduled for installation aboard the ISS next year.
Producing parts and structures in space potentially provides a host of benefits. Structures being constructed on Earth have to be built in an environment that is different from where they would operate. These parts also have to survive the vibrations and high ‘g’ stresses of launch. Freed from these constraints, novel space architectures, more optimised to the microgravity environment, can be imagined and developed. 3D printers in space would enable astronauts manufacture their own components and tools, undertake repairs, replace broken items and respond to evolving requirements without being dependent on support from Earth. This would bring down logistical requirements related to deployment of structures in space, while improving mission efficiency and reliability. NASA is even funding research into the possibility of making food in space using a 3D printer. This would overcome the current issues related to food shelf life, variety and nutritional requirements. It would be possible to have human missions of longer duration and venturing much further into space. Made In Space has an ongoing project R3DO that seeks to recycle 3D produced broken or redundant parts to create new ones, thereby helping reduce space waste. The technology in the future could be used for space based construction of large structures – even entire spacecraft in space.
Another concept being envisaged is the use of 3D printing for construction of large housing structures, roads and launch pads using the resources available in-situ on celestial bodies. Concrete houses being produced through 3D printing have already been demonstrated. Both NASA and ESA are exploring printing of objects using Regolith, the powdery substance that covers much of the surface of the moon. Besides the huge savings in cost and time, such habitats would be more suited to the local hazardous environment. The printers could either be controlled from Earth or make use of automation technology on robots or artificial intelligence. These capabilities would be a great step forward for human interplanetary exploration.
3D printing is making rapid strides and its applications are being recognised by industry. Scientists are working to smoothen out the inefficiencies and shortcomings of the processes as also evaluating potential opportunities. Developments in the space domain are promising but these would have to be put through rigorous testing before being cleared for regular use. Qualification and verification standards that would eventually be defined for this new industry would have to be more stringent for use in space. More complex printers will have to be devised for construction of larger parts. Currently, most construction is focussed on building frames and structures but in the future would also require manufacturing techniques to producing working electronic components. For production in space, bigger printers would bring forth issues of mass, volume and power requirements, each one of which is critical for space launch and operations. Some methods would also have to be devised to bring together the parts so produced. The new technology provides an avenue for space industries the world over to graduate to common standards of software as well as hardware. This would allow a larger pool of scientists and engineers coming together learning and benefiting from each other. At the same time, and the policy makers would also have to come up with requisite regulatory framework.
In India, 3D printing technology is still in its infancy and its penetration is low among industry is low. Most institutions continue to use it for producing 3D Computer Assisted Design (CAD) models and for prototype testing. Some global additive manufacturing companies have gained foothold in India through collaborations and there are some indigenous initiatives too. Isolated research is being undertaken by some private and public sector entities including the DRDO. Private companies are collaborating with some engineering institutions like IITs to promote research. There is also the Additive Manufacturing Society of India (AMSI) that seeks to promote 3D printing & Additive Manufacturing technologies. Applications for Defence and Aerospace are two important sectors that most companies are focussing on. ISRO chairman, after the successful Mars Orbiter Mission, mentioned 3D Printing as one of the technologies that he wishes to see Indian engineers build upon in the future. India has lagged behind in conventional manufacturing and metallurgy. It could leverage its advances in software technology and collaborate with international experts to initiate activities in this sunshine sector. While increased awareness and commercial benefits will drive industry to invest in the sector, space initiatives would require the government to play the vital supporting role while seeking participation from industry and academia. Investments would be required in planning and executing the supporting infrastructure required to enable fabrication processes, in creating knowledge and capabilities through education and training and for provision of adequate R&D facilities.
“From earphones to jet engines, 3D printing takes off”, 09 November, 2014
“3-D Printed Engine Parts Withstand Hot Fire Tests”, 14 November, 2014
TheAerojet Rocketdyne RS-25engine powered NASA’sSpace Shuttleand will power the upcoming Space Launch System (SLS), a heavy-lift, exploration-class rocket currently under development to take humans beyond Earth orbit and Mars.
Cite This Essay
To export a reference to this article please select a referencing stye below: | <urn:uuid:581ca05d-a939-402c-b415-40cfbfbf3011> | 3.34375 | 2,646 | Truncated | Science & Tech. | 21.354078 | 95,501,716 |
Boreidae, commonly called snow scorpionflies, or in the British Isles, snow fleas (no relation to the snow flea Hypogastrura nivicola) are a very small family of scorpionflies, containing only around 30 species, all of which are boreal or high-altitude species in the Northern Hemisphere. Recent research indicates the boreids are more closely related to fleas than to other scorpionflies, which renders the order Mecoptera paraphyletic if the order Siphonaptera is excluded from it.
These insects are small (typically 6 mm or less), with the wings reduced to bristles or absent, and they are somewhat compressed, so in fact some resemblance to fleas is noted. They are most commonly active during the winter months, towards the transition into spring, and the larvae typically feed on mosses. The adults will often disperse between breeding areas by walking across the open snow, thus the common name. The males use their bristle-like wings to help grasp the female while mating.
A snow scorpionfly is so adapted to its cold environment, just holding it in a human hand will kill it.
The cladogram of external relationships, based on a 2008 DNA and protein analysis, shows the family as a clade, sister to the Siphonaptera, and more distantly related to the Diptera (true flies) and Mecoptera (scorpionflies).
|part of Endopterygota||
This list is adapted from the World Checklist of extant Mecoptera species, and is complete as of 1997. The number of species in each genus is indicated in parentheses.
- Boreus (24) Latreille, 1816 (North America, Europe, Asia)
- Boreus hyemalis - also called the snow flea.
- Caurinus (2) Russell, 1979 (Oregon, Alaska)
- Hesperoboreus (2) Penny, 1977 (USA)
- Glacier flea
- Snow flies genus Chionea - a convergent genus of wingless crane flies
- Apteropanorpidae - another family of wingless scorpionflies
- Whiting, M. F. (2002). "Mecoptera is paraphyletic: multiple genes and phylogeny of Mecoptera and Siphonaptera". Zoologica Scripta. 31 (1): 93. doi:10.1046/j.0300-3256.2001.00095.x.
- Attenborough, David. (2005) Life in the Undergrowth. http://www.bbc.co.uk/sn/tvradio/programmes/lifeintheundergrowth/video.shtml
- Whiting, Michael F.; Whiting, Alison S.; Hastriter, Michael W.; Dittmar, Katharina (2008). "A molecular phylogeny of fleas (Insecta: Siphonaptera): origins and host associations". Cladistics. 24 (5): 1–31. doi:10.1111/j.1096-0031.2008.00211.x.
- Yeates, David K.; Wiegmann, Brian. "Endopterygota Insects with complete metamorphosis". Tree of Life. Retrieved 24 May 2016.
- Whiting, Michael F. (2002). "Mecoptera is paraphyletic: multiple genes and phylogeny of Mecoptera and Siphonaptera". Zoologica Scripta. 31 (1): 93–104. doi:10.1046/j.0300-3256.2001.00095.x.
- Wiegmann, Brian; Yeates, David K. (2012). The Evolutionary Biology of Flies. Columbia University Press. p. 5. ISBN 978-0-231-50170-5.
- Boreidae Archived 2004-01-11 at Archive.is
- Sikes, Derek; Jill Stockbridge (July 11, 2013). "Description of Caurinus tlagu, new species, from Prince of Wales Island, Alaska (Mecoptera, Boreidae, Caurininae)". ZooKeys. 316: 35–53. doi:10.3897/zookeys.316.5400. PMC . PMID 23878513.
|This Mecoptera related article is a stub. You can help Wikipedia by expanding it.| | <urn:uuid:7ef21140-8838-45db-a4cf-96c572e360c6> | 3.484375 | 941 | Knowledge Article | Science & Tech. | 60.40019 | 95,501,724 |
Genetic variation among endangered Irish red grouse (Lagopus lagopus hibernicus) populations: implications for conservation and management.
|Title:||Genetic variation among endangered Irish red grouse (Lagopus lagopus hibernicus) populations: implications for conservation and management.||Authors:||McMahon, Barry J.
Johansson, Magnus P.
Piertney, Stuart B.
|Permanent link:||http://hdl.handle.net/10197/4018||Date:||Jun-2012||Abstract:||Extant populations of Irish red grouse (Lagopus lagopus hibernicus) are both small and fragmented, and as such may have an increased risk of extinction through the effects of inbreeding depression and compromised adaptive potential. Here we used 19 microsatellite markers to assay genetic diversity across 89 georeferenced samples from putatively semi-isolated areas throughout the Republic of Ireland and we also genotyped 27 red grouse from Scotland using the same markers. The genetic variation within Ireland was low in comparison to previously published data from Britain and the sample of Scottish red grouse, and comparable to threatened European grouse populations of related species. Irish and Scottish grouse were significantly genetically differentiated (FST = 0.07, 95% CI = 0.04–0.10). There was evidence for weak population structure within Ireland with indications of four distinct genetic clusters. These correspond approximately to grouse populations inhabiting suitable habitat patches in the North West, Wicklow Mountains, Munster and Cork, respectively, although some admixture was detected. Pair-wise FST values among these populations ranged from 0.02 to 0.04 and the overall mean allelic richness was 5.5. Effective population size in the Munster area was estimated to be 62 individuals (95% CI = 33.6–248.8). Wicklow was the most variable population with an AR value of 5.4 alleles/locus. Local (Munster) neighbourhood size was estimated to 31 individuals corresponding to an average dispersal distance of 31 km. In order to manage and preserve Irish grouse we recommend that further fragmentation and destruction of habitats need to be prevented in conjunction with population management, including protection of the integrity of the existing population by refraining from augmenting it with individuals from mainland Britain to maximise population size.||Type of material:||Journal Article||Publisher:||Springer||Copyright (published version):||2012 Springer Science+Business Media B.V.||Keywords:||Ireland;Red grouse;Fragmented;Genetic diversity;Differentiation||DOI:||10.1007/s10592-011-0314-x||Language:||en||Status of Item:||Not peer reviewed|
|Appears in Collections:||Agriculture and Food Science Research Collection|
Show full item record
This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland. No item may be reproduced for commercial purposes. For other possible restrictions on use please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply. | <urn:uuid:c4ce3f1d-e21d-4e0f-ab0b-b8e174f50042> | 2.71875 | 659 | Academic Writing | Science & Tech. | 30.51712 | 95,501,750 |
How much light has been emitted by all galaxies since the cosmos began? After all, almost every photon (particle of light) from ultraviolet to far infrared wavelengths ever radiated by all galaxies that ever existed throughout cosmic history is still speeding through the Universe today.
This figure illustrates how energetic gamma rays (dashed lines) from a distant blazar strike photons of extragalactic background light (wavy lines) and produce pairs of electrons and positrons. The energetic gamma rays that are not attenuated by this process strike the upper atmosphere, producing a cascade of charged particles which make a cone of Èerenkov light that is detected by the array of imaging atmospheric Èerenkov telescopes on the ground.
Credit: Nina McCurdy and Joel R. Primack/UC-HiPACC; Blazar: Frame from a conceptual animation of 3C 120 created by Wolfgang Steffen/UNAM
If we could carefully measure the number and energy (wavelength) of all those photons—not only at the present time, but also back in time—we might learn important secrets about the nature and evolution of the Universe, including how similar or different ancient galaxies were compared to the galaxies we see today.
That bath of ancient and young photons suffusing the Universe today is called the extragalactic background light (EBL). An accurate measurement of the EBL is as fundamental to cosmology as measuring the heat radiation left over from the Big Bang (the cosmic microwave background) at radio wavelengths. A new paper, called "Detection of the Cosmic γ-Ray Horizon from Multiwavelength Observations of Blazars," by Alberto Dominguez and six coauthors, just published today by the Astrophysical Journal—based on observations spanning wavelengths from radio waves to very energetic gamma rays, obtained from several NASA spacecraft and several ground-based telescopes—describes the best measurement yet of the evolution of the EBL over the past 5 billion years.
Directly measuring the EBL by collecting its photons with a telescope, however, poses towering technical challenges—harder than trying to see the dim band of the Milky Way spanning the heavens at night from midtown Manhattan. Earth is inside a very bright galaxy with billions of stars and glowing gas. Indeed, Earth is inside a very bright solar system: sunlight scattered by all the dust in the plane of Earth's orbit creates the zodiacal light radiating across the optical spectrum down to long-wavelength infrared. Therefore ground-based and space-based telescopes have not succeeded in reliably measuring the EBL directly.
So, astrophysicists developed an ingenious work-around method: measuring the EBL indirectly through measuring the attenuation of—that is, the absorption of—very high energy gamma rays from distant blazars. Blazars are supermassive black holes in the centers of galaxies with brilliant jets directly pointed at us like a flashlight beam. Not all the high-energy gamma rays emitted by a blazar, however, make it all the way across billions of light-years to Earth; some strike a hapless EBL photon along the way. When a high-energy gamma ray photon from a blazar hits a much lower energy EBL photon, both are annihilated and produce two different particles: an electron and its antiparticle, a positron, which fly off into space and are never heard from again. Different energies of the highest-energy gamma rays are waylaid by different energies of EBL photons. Thus, measuring how much gamma rays of different energies are attenuated or weakened from blazars at different distances from Earth indirectly gives a measurement of how many EBL photons of different wavelengths exist along the line of sight from blazar to Earth over those different distances.
Observations of blazars by NASA's Fermi Gamma Ray Telescope spacecraft for the first time detected that gamma rays from distant blazars are indeed attenuated more than gamma rays from nearby blazars, a result announced on November 30, 2012, in a paper published in Science, as theoretically predicted.
Now, the big news—announced in today's Astrophysical Journal paper—is that the evolution of the EBL over the past 5 billion years has been measured for the first time. That's because looking farther out into the Universe corresponds to looking back in time. Thus, the gamma ray attenuation spectrum from farther distant blazars reveals how the EBL looked at earlier eras.
This was a multistep process. First, the coauthors compared the Fermi findings to intensity of X-rays from the same blazars measured by X-ray satellites Chandra, Swift, Rossi X-ray Timing Explorer, and XMM/Newton and lower-energy radiation measured by other spacecraft and ground-based observatories. From these measurements, Dominguez et al. were able to calculate the blazars' original emitted, unattenuated gamma-ray brightnesses at different energies.
The coauthors then compared those calculations of unattenuated gamma-ray flux at different energies with direct measurements from special ground-based telescopes of the actual gamma-ray flux received at Earth from those same blazars. When a high-energy gamma ray from a blazar strikes air molecules in the upper regions of Earth's atmosphere, it produces a cascade of charged subatomic particles. This cascade of particles travels faster than the speed of light in air (which is slower than the speed of light in a vacuum). This causes a visual analogue to a "sonic boom": bursts of a special light called Čerenkov radiation. This Čerenkov radiation was detected by imaging atmospheric Čerenkov telescopes (IACTs), such as HESS (High Energy Stereoscopic System) in Namibia, MAGIC (Major Atmospheric Gamma Imaging Čerenkov) in the Canary Islands, and VERITAS (Very Energetic Radiation Imaging Telescope Array Systems) in Arizona.
Comparing the calculations of the unattenuated gamma rays to actual measurements of the attenuation of gamma rays and X-rays from blazars at different distances allowed Dominquez et al. to quantify the evolution of the EBL—that is, to measure how the EBL changed over time as the Universe aged—out to about 5 billion years ago (corresponding to a redshift of about z = 0.5). "Five billion years ago is the maximum distance we are able to probe with our current technology," Domínguez said. "Sure, there are blazars farther away, but we are not able to detect them because the high-energy gamma rays they are emitting are too attenuated by EBL when they get to us—so weakened that our instruments are not sensitive enough to detect them." This measurement is the first statistically significant detection of the so-called "Cosmic Gamma Ray Horizon" as a function of gamma-ray energy. The Cosmic Gamma Ray Horizon is defined as the distance at which roughly one-third (or, more precisely, 1/e – that is, 1/2.718 – where e is the base of the natural logarithms) of the gamma rays of a particular energy have been attenuated.
This latest result confirms that the kinds of galaxies observed today are responsible for most of the EBL over all time. Moreover, it sets limits on possible contributions from many galaxies too faint to have been included in the galaxy surveys, or on possible contributions from hypothetical additional sources (such as the decay of hypothetical unknown elementary particles).
Links to this press release on the UC-HiPACC site—including an illustration—are http://hipacc.ucsc.edu/CGRH.html and http://hipacc.ucsc.edu/PressRelease/CGRH.html. Link to the paper in The Astrophysical Journal as accepted is http://arxiv.org/abs/1305.2162 (PDF of the full paper is at http://arxiv.org/pdf/1305.2162v1.pdf). Link to a related press release from the University of California, Riverside is http://ucrtoday.ucr.edu/14888. The article will also appear in the June 10, 2013 print edition of The Astrophysical Journal.
For further information, contact The Astrophysical Journal authors directly:Alberto Dominguez
Alberto Dominguez | EurekAlert!
Further reports about: > Big Bang > Cosmic > Deepwater Horizon > EBL > Earth's magnetic field > Milky Way > Riverside > Telescope > The Astrophysical Journal > Universe > X-ray microscopy > X-rays > black hole > gamma rays > ground-based telescopes > high-energy gamma rays > speed of light > speed|scan atlineCT-System > subatomic particle
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
Subaru Telescope helps pinpoint origin of ultra-high energy neutrino
16.07.2018 | National Institutes of Natural Sciences
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
19.07.2018 | Earth Sciences
19.07.2018 | Power and Electrical Engineering
19.07.2018 | Materials Sciences | <urn:uuid:866cef2c-42b8-4518-96bf-5020338959de> | 3.96875 | 2,441 | Content Listing | Science & Tech. | 37.156018 | 95,501,753 |
Physicists discover new quantum electronic material
>>With an atomic structure resembling a Japanese basketweaving pattern, “kagome metal” exhibits exotic, quantum behavior.
A motif of Japanese basketweaving known as the kagome pattern has preoccupied physicists for decades. Kagome baskets are typically made from strips of bamboo woven into a highly symmetrical pattern of interlaced, corner-sharing triangles.
If a metal or other conductive material could be made to resemble such a kagome pattern at the atomic scale, with individual atoms arranged in similar triangular patterns, it should in theory exhibit exotic electronic properties.
In a paper published today in Nature, physicists from MIT, Harvard University, and Lawrence Berkeley National Laboratory report that they have for the first time produced a kagome metal — an electrically conducting crystal, made from layers of iron and tin atoms, with each atomic layer arranged in the repeating pattern of a kagome lattice.
When they flowed a current across the kagome layers within the crystal, the researchers observed that the triangular arrangement of atoms induced strange, quantum-like behaviors in the passing current. Instead of flowing straight through the lattice, electrons instead veered, or bent back within the lattice.
This behavior is a three-dimensional cousin of the so-called Quantum Hall effect, in which electrons flowing through a two-dimensional material will exhibit a “chiral, topological state,” in which they bend into tight, circular paths and flow along edges without losing energy.
“By constructing the kagome network of iron, which is inherently magnetic, this exotic behavior persists to room temperature and higher,” says Joseph Checkelsky, assistant professor of physics at MIT. “The charges in the crystal feel not only the magnetic fields from these atoms, but also a purely quantum-mechanical magnetic force from the lattice. This could lead to perfect conduction, akin to superconductivity, in future generations of materials.”
To explore these findings, the team measured the energy spectrum within the crystal, using a modern version of an effect first discovered by Heinrich Hertz and explained by Einstein, known as the photoelectric effect.
“Fundamentally, the electrons are first ejected from the material’s surface and are then detected as a function of takeoff angle and kinetic energy,” says Riccardo Comin, an assistant professor of physics at MIT. “The resulting images are a very direct snapshot of the electronic levels occupied by electrons, and in this case they revealed the creation of nearly massless ‘Dirac’ particles, an electrically charged version of photons, the quanta of light.”
The spectra revealed that electrons flow through the crystal in a way that suggests the originally massless electrons gained a relativistic mass, similar to particles known as massive Dirac fermions. Theoretically, this is explained by the presence of the lattice’s constituent iron and tin atoms. The former are magnetic and give rise to a “handedness,” or chirality. The latter possess a heavier nuclear charge, producing a large local electric field. As an external current flows by, it senses the tin’s field not as an electric field but as a magnetic one, and bends away.
The research team was led by Checkelsky and Comin, as well as graduate students Linda Ye and Min Gu Kang in collaboration with Liang Fu, the Biedenharn Associate Professor of Physics, and postdoc Junwei Liu. The team also includes Christina Wicker ’17, research scientist Takehito Suzuki of MIT, Felix von Cube and David Bell of Harvard, and Chris Jozwiak, Aaron Bostwick, and Eli Rotenberg of Lawrence Berkeley National Laboratory.
“No alchemy required”
Physicists have theorized for decades that electronic materials could support exotic Quantum Hall behavior with their inherent magnetic character and lattice geometry. It wasn’t until several years ago that researchers made progress in realizing such materials.
“The community realized, why not make the system out of something magnetic, and then the system’s inherent magnetism could perhaps drive this behavior,” says Checkelsky, who at the time was working as a researcher at the University of Tokyo.
This eliminated the need for laboratory produced fields, typically 1 million times as strong as the Earth’s magnetic field, needed to observe this behavior.
“Several research groups were able to induce a Quantum Hall effect this way, but still at ultracold temperatures a few degrees above absolute zero — the result of shoehorning magnetism into a material where it did not naturally occur,” Checkelsky says.
At MIT, Checkelsky has instead looked for ways to drive this behavior with “instrinsic magnetism.” A key insight, motivated by the doctoral work of Evelyn Tang PhD ’15 and Professor Xiao-Gang Wen, was to seek this behavior in the kagome lattice. To do so, first author Ye ground together iron and tin, then heated the resulting powder in a furnace, producing crystals at about 750 degrees Celsius — the temperature at which iron and tin atoms prefer to arrange in a kagome-like pattern. She then submerged the crystals in an ice bath to enable the lattice patterns to remain stable at room temperature.
“The kagome pattern has big empty spaces that might be easy to weave by hand, but are often unstable in crystalline solids which prefer the best packing of atoms,” Ye says. “The trick here was to fill these voids with a second type of atom in a structure that was at least stable at high temperatures. Realizing these quantum materials doesn’t need alchemy, but instead materials science and patience.”
Bending and skipping toward zero-energy loss
Once the researchers grew several samples of crystals, each about a millimeter wide, they handed the samples off to collaborators at Harvard, who imaged the individual atomic layers within each crystal using transmission electron microscopy. The resulting images revealed that the arrangement of iron and tin atoms within each layer resembled the triangular patterns of the kagome lattice. Specifically, iron atoms were positioned at the corners of each triangle, while a single tin atom sat within the larger hexagonal space created between the interlacing triangles.
Ye then ran an electric current through the crystalline layers and monitored their flow via electrical voltages they produced. She found that the charges deflected in a manner that seemed two-dimensional, despite the three-dimensional nature of the crystals. The definitive proof came from the photoelectron experiments conducted by co-first author Kang who, in concert with the LBNL team, was able to show that the electronic spectra corresponded to effectively two-dimensional electrons.
“As we looked closely at the electronic bands, we noticed something unusual,” Kang adds. “The electrons in this magnetic material behaved as massive Dirac particles, something that had been predicted long ago but never been seen before in these systems.”
“The unique ability of this material to intertwine magnetism and topology suggests that they may well engender other emergent phenomena,” Comin says. “Our next goal is to detect and manipulate the edge states which are the very consequence of the topological nature of these newly discovered quantum electronic phases.”
Looking further, the team is now investigating ways to stabilize other more highly two-dimensional kagome lattice structures. Such materials, if they can be synthesized, could be used to explore not only devices with zero energy loss, such as dissipationless power lines, but also applications toward quantum computing.
“For new directions in quantum information science there is a growing interest in novel quantum circuits with pathways that are dissipationless and chiral,” Checkelsky says. “These kagome metals offer a new materials design pathway to realizing such new platforms for quantum circuitry.”
This research was supported in part by the Gordon and Betty Moore Foundation and the National Science Foundation.
Written by Jennifer Chu, MIT News Office | <urn:uuid:a308d72a-236b-469c-aec8-17d36175c32e> | 3.046875 | 1,711 | News Article | Science & Tech. | 22.120936 | 95,501,758 |
A little mouse called Delia lives in a hole in the bottom of a tree.....How many days will it be before Delia has to take the same route again?
My cube has inky marks on each face. Can you find the route it has taken? What does each face look like?
In the planet system of Octa the planets are arranged in the shape of an octahedron. How many different routes could be taken to get from Planet A to Planet Zargon?
I like to walk along the cracks of the paving stones, but not the outside edge of the path itself. How many different routes can you find for me to take?
Alice and Brian are snails who live on a wall and can only travel along the cracks. Alice wants to go to see Brian. How far is the shortest route along the cracks? Is there more than one way to go?
Find all the different shapes that can be made by joining five equilateral triangles edge to edge.
Nina must cook some pasta for 15 minutes but she only has a 7-minute sand-timer and an 11-minute sand-timer. How can she use these timers to measure exactly 15 minutes?
These activities lend themselves to systematic working in the sense that it helps if you have an ordered approach.
Can you use this information to work out Charlie's house number?
How could you put these three beads into bags? How many different ways can you do it? How could you record what you've done?
When intergalactic Wag Worms are born they look just like a cube. Each year they grow another cube in any direction. Find all the shapes that five-year-old Wag Worms can be.
How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six ...?
This task, written for the National Young Mathematicians' Award 2016, invites you to explore the different combinations of scores that you might get on these dart boards.
Lolla bought a balloon at the circus. She gave the clown six coins to pay for it. What could Lolla have paid for the balloon?
Can you order the digits from 1-3 to make a number which is divisible by 3 so when the last digit is removed it becomes a 2-figure number divisible by 2, and so on?
How many rectangles can you find in this shape? Which ones are differently sized and which are 'similar'?
This challenge, written for the Young Mathematicians' Award, invites you to explore 'centred squares'.
Can you rearrange the biscuits on the plates so that the three biscuits on each plate are all different and there is no plate with two biscuits the same as two biscuits on another plate?
The planet of Vuvv has seven moons. Can you work out how long it is between each super-eclipse?
Using all ten cards from 0 to 9, rearrange them to make five prime numbers. Can you find any other ways of doing it?
On a digital 24 hour clock, at certain times, all the digits are consecutive. How many times like this are there between midnight and 7 a.m.?
How many trapeziums, of various sizes, are hidden in this picture?
The Zargoes use almost the same alphabet as English. What does this birthday message say?
What is the smallest number of jumps needed before the white rabbits and the grey rabbits can continue along their path?
Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread?
Exactly 195 digits have been used to number the pages in a book. How many pages does the book have?
What could the half time scores have been in these Olympic hockey matches?
Make a pair of cubes that can be moved to show all the days of the month from the 1st to the 31st.
There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places.
Sitting around a table are three girls and three boys. Use the clues to work out were each person is sitting.
Can you create jigsaw pieces which are based on a square shape, with at least one peg and one hole?
Find the product of the numbers on the routes from A to B. Which route has the smallest product? Which the largest?
Place eight dots on this diagram, so that there are only two dots on each straight line and only two dots on each circle.
Put 10 counters in a row. Find a way to arrange the counters into five pairs, evenly spaced in a row, in just 5 moves, using the rules.
Seven friends went to a fun fair with lots of scary rides. They decided to pair up for rides until each friend had ridden once with each of the others. What was the total number rides?
This task, written for the National Young Mathematicians' Award 2016, focuses on 'open squares'. What would the next five open squares look like?
Can you work out the arrangement of the digits in the square so that the given products are correct? The numbers 1 - 9 may be used once and once only.
In a bowl there are 4 Chocolates, 3 Jellies and 5 Mints. Find a way to share the sweets between the three children so they each get the kind they like. Is there more than one way to do it?
There are 78 prisoners in a square cell block of twelve cells. The clever prison warder arranged them so there were 25 along each wall of the prison block. How did he do it?
Stuart's watch loses two minutes every hour. Adam's watch gains one minute every hour. Use the information to work out what time (the real time) they arrived at the airport.
A merchant brings four bars of gold to a jeweller. How can the jeweller use the scales just twice to identify the lighter, fake bar?
There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the jugs.
This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
You have two egg timers. One takes 4 minutes exactly to empty and the other takes 7 minutes. What times in whole minutes can you measure and how?
On a digital clock showing 24 hour time, over a whole day, how many times does a 5 appear? Is it the same number for a 12 hour clock over a whole day?
Katie had a pack of 20 cards numbered from 1 to 20. She arranged the cards into 6 unequal piles where each pile added to the same total. What was the total and how could this be done?
Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
You cannot choose a selection of ice cream flavours that includes totally what someone has already chosen. Have a go and find all the different ways in which seven children can have ice cream.
Investigate the different ways you could split up these rooms so that you have double the number.
How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this? | <urn:uuid:9c2d9527-5fc5-4533-8af2-0f592802fb41> | 2.859375 | 1,584 | Content Listing | Science & Tech. | 73.634104 | 95,501,765 |
" In this paper for the first time we compared spectra of the brain and Schumann electromagnetic waves. We argue that both modes of electromagnetic radiation: brain waves and Schumann waves can be analyzed with the help of the Planck formula. From our calculation we deduced the temperature of the Schumann and brain waves T= 10-10 K."
One of the data that they underline is that the energy density generated by lightning discharges along the narrow shell of 2km on the biosphere with earth radius is remarkably similar to that generated by action potentials within the brain. Moreover when this energy density is applied across the 3-5 mm thikness of the cerebral cortices the value is within the range of power of photon emissions near the skull when subjects engage in specific imagination and which is strongly correlated with the power of electroencephalographic (EEG) activity.
They also show that even though the current is much larger in a lightning stroke because of its absolute size by a factor of 1010, the infinitely less axon potential current is comparable per crosssectional area.
More scale-invariant similarities are found in this theoretical research.
Last modified on 14-Aug-16 | <urn:uuid:84654104-ba7e-46b7-ab74-08854bc67b05> | 2.96875 | 243 | Comment Section | Science & Tech. | 30.288636 | 95,501,773 |
Pinacoidal (1) |
(same H-M symbol)
a = 8.1768, b = 12.8768 |
c = 14.169 [Å]; α = 93.17°
β = 115.85°, γ = 92.22°; Z = 8
|Color||White, grayish, reddish|
|Crystal habit||Anhedral to subhedral granular|
|Cleavage||Perfect good poor |
|Fracture||Uneven to concoidal|
|Mohs scale hardness||6|
|Diaphaneity||Transparent to translucent|
|Optical properties||Biaxial (-)|
|Refractive index||nα = 1.573–1.577 nβ = 1.580–1.585 nγ = 1.585–1.590|
|Birefringence||δ = 0.012–0.013|
|2V angle||78° to 83°|
|Melting point||1553 °C|
Anorthite is the calcium-rich endmember of the plagioclase solid solution series, the other endmember being albite, NaAlSi3O8. Anorthite also refers to plagioclase compositions with more than 90 molecular percent of the anorthite endmember. At 1 atmosphere, anorthite melts at 1553 °C.
Anorthite is a rare compositional variety of plagioclase. It occurs in mafic igneous rock. It also occurs in metamorphic rocks of granulite facies, in metamorphosed carbonate rocks, and corundum deposits. Its type localities are Monte Somma and Valle di Fassa, Italy. It was first described in 1823. It is more rare in surficial rocks than it normally would be due to its high weathering potential in the Goldich dissolution series.
It also makes up much of the lunar highlands; the Genesis Rock is made of anorthosite, which is composed largely of anorthite. Anorthite was discovered in samples from comet Wild 2, and the mineral is an important constituent of Ca-Al-rich inclusions in rare varieties of chondritic meteorites. | <urn:uuid:2b9c8d12-ac1e-450b-baef-1e2d6c9e6496> | 2.921875 | 478 | Knowledge Article | Science & Tech. | 62.181022 | 95,501,803 |
NASA has released a stunning image mosaic of Saturn assembled from the images snapped by the Cassini probe during the final leg of its journey.
Cassini’s wide-angle camera acquired 42 red, green and blue images, covering the planet and its main rings from one end to the other, on September 13 and imaging scientists stitched these frames together to make a natural colour view, NASA said on Tuesday. The image is Cassini’s tribute to Saturn that had been its home for over 13 years.
“This ‘Farewell to Saturn’ will forevermore serve as a reminder of the dramatic conclusion to that wondrous time humankind spent in intimate study of our Sun’s most iconic planetary system,” said Carolyn Porco from the Space Science Institute in Boulder, Colorado.
Launched in 1997, the Cassini spacecraft orbited Saturn from 2004 to 2017. Cassini ended its journey with a dramatic plunge into Saturn’s atmosphere on September 15, returning unique science data until it lost contact with Earth.
“Cassini’s scientific bounty has been truly spectacular – a vast array of new results leading to new insights and surprises, from the tiniest of ring particles to the opening of new landscapes on Titan and Enceladus, to the deep interior of Saturn itself,” said Robert West from NASA’s Jet Propulsion Laboratory in Pasadena, California. | <urn:uuid:10b3b46a-59ae-4e5e-8d1a-f8def524dbc8> | 3.046875 | 290 | News Article | Science & Tech. | 28.941131 | 95,501,806 |
+44 1803 865913
By: Yair Ben-Dov
368 pages, no illustrations
A synthesis and catalogue of all the information published on eight families of scale insects (Hemiptera: Coccoidea) worldwide from 1758 to the present. Data is provided on their correct scientific names, common names, synonyms, taxonomy, host plants, distribution, natural enemies, biology, and economic importance.
This book will be a valuable compendium of biological and systematic information for zoologists, entomologists, crop protection specialists, quarantine officers, students studying entomology and related disciplines, and others who require information about scale insects for research and control projects.
There are currently no reviews for this book. Be the first to review this book!
Your orders support book donation projects
Your prompt attention has beaten almost every other material supplier hands down.
Search and browse over 110,000 wildlife and science products
Multi-currency. Secure worldwide shipping
Wildlife, science and conservation since 1985 | <urn:uuid:0db3bdbf-3b7a-4643-a934-59ec6a1cf35a> | 2.609375 | 209 | Product Page | Science & Tech. | 20.747973 | 95,501,808 |
By Alister Doyle, Environment Correspondent Alister Doyle, EnvironmentCorrespondent Mon Mar 16, 4:13 pm ET
Reuters An undated handout shows an illustration of a 50 ft (15 metre) long Jurassic era marine reptile crushing
OSLO (Reuters) A giant fossil sea monster found in the Arctic and known as "Predator X" had a bite that would make T-Rex look feeble, scientists said Monday.
The 50 ft (15 meter) long Jurassic era marine reptile had a crushing 33,000 lbs (15 tonnes) per square inch bite force, the Natural History Museum of Oslo University said of the new find on the Norwegian Arctic archipelago of Svalbard.
"With a skull that's more than 10 feet long you'd expect the bite to be powerful but this is off the scale," said Joern Hurum, an associate professor of vertebrate paleontology at the museum who led the international excavation in 2008.
"It's much more powerful than T-Rex," he said of the pliosaur reptile that would have been a top marine predator. Tyrannosaurus Rex was a top land carnivore among dinosaurs.
The scientists reconstructed the predator's head and estimated the force by comparing it with the similarly-shaped jaws of alligators in a park in Florida.
"The calculation is one of the largest bite forces ever calculated for any creature," the Museum said of the bite, estimated with the help of evolutionary biologist Greg Erickson from Florida State University.
Predator X's bite was more than 10 times more powerful than any modern animal and four times the bite of a T-Rex, it said of the fossil, reckoned at 147 million years old. Alligators, crocodiles and sharks all now have fearsome bites.
The teeth of the pliosaur, belonging to a new species, were a foot (30 cms) long. The scientists reconstructed the reptile from a partial skull and 20,000 fragments of skeleton.
The pliosaur, estimated to have weighed 45 tonnes, was similar to but had more massive bones than another fossil sea monster found on Svalbard in 2007, also estimated at 50 feet long and the largest pliosaur to date.
"It's not complete enough to say it's really bigger than 15 meters," Hurum said of the new fossil.
Hurum had said of the first fossil pliosaur that it was big enough to chomp on a small car. He said the bite estimates for the latest fossil forced a rethink.
"This one is more like it could crush a Hummer," he said. referring to General Motors' large sport utility vehicle.
Among other findings were that the pliosaur had a small thin brain shaped like that of a great white shark, according to scans by Patrick Druckenmiller of the University of Alaska.
Pliosaurs preyed upon squid-like animals, fish, and other marine reptiles. Predator X had four huge flippers to propel itself along, perhaps using just two at cruising speeds and the others for a burst of speed. | <urn:uuid:d53c34e6-8ff7-4a28-af20-75b21e817584> | 2.59375 | 632 | News Article | Science & Tech. | 47.765489 | 95,501,814 |
- Quantum info using BECs
- Exciton-polariton condensates and new quantum technologies
- Atom chips for quantum information
- Quantum information theory: entanglement and coherence
- Quantitative biology
- Relativistic quantum information
- BEC-BCS crossover of polaritons
- Novel light sources using exciton-polariton condensates
- Optimization using BECs
- Join us
We are listed on NYU Shanghai’s news page! Below is the article:
Because of the difficulty in measuring absolute mass, the kilogram is the last physical constant that is still defined in terms of a material prototype, in this case a block of an Iridium-Platinum alloy cast in 1875 and stored ever since in France. . This may all be about to change.
NYU Shanghai Prof. Tim Byrnes and Prof. Jonathan P. Dowling from Louisiana State University have developed a novel way of finding the mass of an atom — addressing an age old problem relating to the very definition of mass.
The new method of measuring the mass takes advantage of measuring the quantum properties of ultracold gases with vortices, or whirlpools, in them.
It is currently possible to accurately measure the ratio of the mass of atoms. Using a device called a Penning trap, scientists can very precisely measure how much heavier one atom is in comparison to another. However, a more challenging task of measuring the absolute mass of an atom can be done with only limited precision.
The research by Byrnes and Dowling show that by imaging the position and velocity distribution of an ultracold gas with vortices in them, it is possible to determine the masses of the atoms making up the gas.
The gas needs to be cooled down to extremely low temperatures — one billion times colder than room temperature – such that a new state of matter called a Bose-Einstein condensate is formed. Although this is even colder than deep space, this is now routinely done in many laboratories in the world.
The idea relies on the same basic effect as that seen in the quantum Hall effect, which allows for extremely precise calibration of resistance. In a similar way, the present discovery could lead to a redefinition of the kilogram by measuring the mass of atoms extremely precisely.See it on Shanghai NYU official website
- Professor Byrnes on Money Morning May 14, 2018
- USTC-LSU-Calgary-NYU Collaboration April 22, 2018
- Our work featured in ScienceDaily April 8, 2018
- Professor Byrnes Was Interviewed by Xinhua News March 26, 2018
- Our work featured in Kaleidoscope March 9, 2018
- Our work featured in Physics February 19, 2018
- APS TV interviewed Tim February 15, 2018
- Generalized Grover algorithm paper in PRL February 15, 2018
- Chandra and Ebube win BOCO Global Scholar January 15, 2018
- Chandra wins prize in outstanding paper competition! December 22, 2017
- NYU Shanghai Gazette features our interview September 10, 2017
- News article in Financial Times August 18, 2017
- Another news article in Financial Times July 18, 2017
- NYU-ECNU Institute of Physics at NYU Shanghai featured in Physics World July 10, 2017
- News article in Financial Times China March 28, 2017
- News article in Sina Finance March 28, 2017
- News article in Valor Economico March 28, 2017
- News article in Financial Times about China as a tech superpower March 21, 2017
- Tim interviewed by Pudong TV for 1000 talents award July 26, 2016
- Quantum Communications: Way to Protect Information Security June 22, 2016 | <urn:uuid:0f8b71b7-eed4-4afd-a0e8-aa9bebd03a8b> | 2.625 | 766 | News (Org.) | Science & Tech. | 27.984572 | 95,501,835 |
Darcy's law is an equation that describes the flow of a fluid through a porous medium. The law was formulated by Henry Darcy based on the results of experiments on the flow of water through beds of sand, forming the basis of hydrogeology, a branch of earth sciences.
- 1 Background
- 2 Description
- 3 Derivation
- 4 Additional forms of Darcy's law
- 5 Validity of Darcy's law
- 6 See also
- 7 References
Darcy's law was determined experimentally by Darcy. It has since been derived from the Navier–Stokes equations via homogenization. It is analogous to Fourier's law in the field of heat conduction, Ohm's law in the field of electrical networks, or Fick's law in diffusion theory.
One application of Darcy's law is to analyze water flow through an aquifer; Darcy's law along with the equation of conservation of mass are equivalent to the groundwater flow equation, one of the basic relationships of hydrogeology.
Morris Muskat first refined Darcy's equation for single phase flow by including viscosity in the single (fluid) phase equation of Darcy, and this change made it suitable for the petroleum industry. Based on experimental results worked out by his colleagues Wyckoff and Botset, Muskat and Meres also generalized Darcy's law to cover multiphase flow of water, oil and gas in the porous medium of a petroleum reservoir. The generalized multiphase flow equations by Muskat and others provide the analytical foundation for reservoir engineering that exists to this day.
Darcy's law, as refined by Morris Muskat, at constant elevation is a simple proportional relationship between the instantaneous discharge rate through a porous medium, the viscosity of the fluid and the pressure drop over a given distance.
The above equation for single phase (fluid) flow is the defining equation for absolute permeability (single phase permeability). The total discharge, Q (units of volume per time, e.g., m3/s) is equal to the product of the intrinsic permeability of the medium, κ (m2), the cross-sectional area to flow, A (units of area, e.g., m2), and the total pressure drop pb − pa (pascals), all divided by the viscosity, μ (Pa·s) and the length over which the pressure drop is taking place L (m).
The negative sign is needed because fluid flows from high pressure to low pressure. Note that the elevation head must be taken into account if the inlet and outlet are at different elevations. If the change in pressure is negative (where pa > pb), then the flow will be in the positive x direction. There have been several proposals for a constitutive equation for absolute permeability, and the most famous one is probably the Kozeny equation (also called Kozeny–Carman equation).
Dividing both sides of the equation by the area and using more general notation leads to
where q is the flux (discharge per unit area, with units of length per time, m/s) and ∇p is the pressure gradient vector (Pa/m). This value of flux, often referred to as the Darcy flux or Darcy velocity, is not the velocity which the fluid traveling through the pores is experiencing. The fluid velocity (v) is related to the Darcy flux (q) by the porosity (φ). The flux is divided by porosity to account for the fact that only a fraction of the total formation volume is available for flow. The fluid velocity would be the velocity a conservative tracer would experience if carried by the fluid through the formation.
- if there is no pressure gradient over a distance, no flow occurs (these are hydrostatic conditions),
- if there is a pressure gradient, flow will occur from high pressure towards low pressure (opposite the direction of increasing gradient — hence the negative sign in Darcy's law),
- the greater the pressure gradient (through the same formation material), the greater the discharge rate, and
- the discharge rate of fluid will often be different — through different formation materials (or even through the same material, in a different direction) — even if the same pressure gradient exists in both cases.
A graphical illustration of the use of the steady-state groundwater flow equation (based on Darcy's law and the conservation of mass) is in the construction of flownets, to quantify the amount of groundwater flowing under a dam.
Darcy's law is only valid for slow, viscous flow; fortunately, most groundwater flow cases fall in this category. Typically any flow with a Reynolds number less than one is clearly laminar, and it would be valid to apply Darcy's law. Experimental tests have shown that flow regimes with Reynolds numbers up to 10 may still be Darcian, as in the case of groundwater flow. The Reynolds number (a dimensionless parameter) for porous media flow is typically expressed as
where ρ is the density of water (units of mass per volume), v is the specific discharge (not the pore velocity — with units of length per time), d30 is a representative grain diameter for the porous media (often taken as the 30% passing size from a grain size analysis using sieves — with units of length), and μ is the viscosity of the fluid.
For stationary, creeping, incompressible flow, i.e. D(ρui)/ ≈ 0, the Navier–Stokes equation simplifies to the Stokes equation:
where μ is the viscosity, ui is the velocity in the i direction, gi is the gravity component in the i direction and p is the pressure. Assuming the viscous resisting force is linear with the velocity we may write:
where φ is the porosity, and κij is the second order permeability tensor. This gives the velocity in the n direction,
which gives Darcy's law for the volumetric flux density in the n direction,
The above equation is a governing equation for single phase fluid flow in a porous medium.
Additional forms of Darcy's law
Darcy's law in petroleum engineering
Another derivation of Darcy's law is used extensively in petroleum engineering to determine the flow through permeable media — the most simple of which is for a one-dimensional, homogeneous rock formation with a single fluid phase and constant fluid viscosity.
where Q is the flowrate of the formation (in units of volume per unit time), k is the permeability of the formation (typically in millidarcys), A is the cross-sectional area of the formation, μ is the viscosity of the fluid (typically in units of centipoise). ∂p/ represents the pressure change per unit length of the formation. This equation can also be solved for permeability and is used to measure it, forcing a fluid of known viscosity through a core of a known length and area, and measuring the pressure drop across the length of the core.
Almost all oil reservoirs have a water zone below the oil leg, and some have also a gas cap above the oil leg. When the reservoir pressure drops due to oil production, water flows into the oil zone from below, and gas flows into the oil zone from above (if the gas cap exists), and we get a simultaneous flow and immiscible mixing of all fluid phases in the oil zone. The operator of the oil field may also inject water (and/or gas) in order to improve oil production. The petroleum industry is therefore using a generalized Darcy equation for multiphase flow that was developed by Muskat et alios. Because Darcy's name is so widespread and strongly associated with flow in porous media, the multiphase equation is denoted Darcy's law for multiphase flow or generalized Darcy equation (or law) or simply Darcy's equation (or law) or simply flow equation if the context says that the text is discussing the multiphase equation of Muskat et alios. Multiphase flow in oil and gas reservoirs is a comprehensive topic, and one of many articles about this topic is Darcy's law for multiphase flow.
For flows in porous media with Reynolds numbers greater than about 1 to 10, inertial effects can also become significant. Sometimes an inertial term is added to the Darcy's equation, known as Forchheimer term. This term is able to account for the non-linear behavior of the pressure difference vs flow data.
where the additional term κ1 is known as inertial permeability.
The flow in the middle of a sandstone reservoir is so slow that Forchheimer's equation is usually not needed, but the gas flow into a gas production well may be high enough to justify use of Forchheimer's equation. In this case the inflow performance calculations for the well, not the grid cell of the 3D model, is based on the Forchheimer equation. The effect of this is that an additional rate-dependent skin appears in the inflow performance formula.
Some carbonate reservoirs have lots of fractures, and Darcy's equation for multiphase flow is generalized in order to govern both flow in fractures and flow in the matrix (i.e. the traditional porous rock). The irregular surface of the fracture walls and high flow rate in the fractures, may justify use of Forchheimer's equation.
For gas flow in small characteristic dimensions (e.g., very fine sand, nanoporous structures etc.), the particle-wall interactions become more frequent, giving rise to additional wall friction (Knudsen friction). For a flow in this region, where both viscous and Knudsen friction are present, a new formulation needs to be used. Knudsen presented a semi-empirical model for flow in transition regime based on his experiments on small capillaries. For a porous medium, the Knudsen equation can be given as
where N is the molar flux, Rg is the gas constant, T is the temperature, Deff
K is the effective Knudsen diffusivity of the porous media. The model can also be derived from the first-principle-based binary friction model (BFM). The differential equation of transition flow in porous media based on BFM is given as
This equation is valid for capillaries as well as porous media. The terminology of the Knudsen effect and Knudsen diffusivity is more common in mechanical and chemical engineering. In geological and petrochemical engineering, this effect is known as the Klinkenberg effect. Using the definition of molar flux, the above equation can be rewritten as
This equation can be rearranged into the following equation
Comparing this equation with conventional Darcy's law, a new formulation can be given as
This is equivalent to the effective permeability formulation proposed by Klinkenberg:
where b is known as the Klinkenberg parameter, which depends on the gas and the porous medium structure. This is quite evident if we compare the above formulations. The Klinkenberg parameter b is dependent on permeability, Knudsen diffusivity and viscosity (i.e., both gas and porous medium properties).
Darcy's law for short time scales
For very short time scales, a time derivative of flux may be added to Darcy's law, which results in valid solutions at very small times (in heat transfer, this is called the modified form of Fourier's law),
where τ is a very small time constant which causes this equation to reduce to the normal form of Darcy's law at "normal" times (> nanoseconds). The main reason for doing this is that the regular groundwater flow equation (diffusion equation) leads to singularities at constant head boundaries at very small times. This form is more mathematically rigorous, but leads to a hyperbolic groundwater flow equation, which is more difficult to solve and is only useful at very small times, typically out of the realm of practical use.
Brinkman form of Darcy's law
Another extension to the traditional form of Darcy's law is the Brinkman term, which is used to account for transitional flow between boundaries (introduced by Brinkman in 1949),
where β is an effective viscosity term. This correction term accounts for flow through medium where the grains of the media are porous themselves, but is difficult to use, and is typically neglected.
Validity of Darcy's law
Darcy's law is valid for laminar flow through sediments. In fine-grained sediments, the dimensions of interstices are small and thus flow is laminar. Coarse-grained sediments also behave similarly but in very coarse-grained sediments the flow may be turbulent. Hence Darcy's law is not always valid in such sediments. For flow through commercial circular pipes, the flow is laminar when Reynolds number is less than 2000 and turbulent when it is more than 4000, but in some sediments it has been found that flow is laminar when the value of Reynolds number is less than 1.
- Darcy, H. (1856). Les fontaines publiques de la ville de Dijon. Paris: Dalmont.
- Whitaker, S. (1986). "Flow in porous media I: A theoretical derivation of Darcy's law". Transport in Porous Media. 1: 3–25. doi:10.1007/BF01036523.
- Amin F. Zarandi, Krishna M. Pillai, Adam S. Kimmel, "Spontaneous imbibition of liquids in glass-fiber wicks. Part I: Usefulness of a sharp-front approach". American Institute of Chemical Engineers AIChE Journal. 63: 294–305, 2018. DOI:10.1002/aic.15965
- Bejan, A. (1984). Convection Heat Transfer. John Wiley & Sons.
- Cunningham, R. E.; Williams, R. J. J. (1980). Diffusion in Gases and Porous Media. New York: Plenum Press.
- Carrigy, N.; Pant, L. M.; Mitra, S. K.; Secanell, M. (2013). "Knudsen diffusivity and permeability of pemfc microporous coated gas diffusion layers for different polytetrafluoroethylene loadings". Journal of the Electrochemical Society. 160: F81–89. doi:10.1149/2.036302jes.
- Pant, L. M.; Mitra, S. K.; Secanell, M. (2012). "Absolute permeability and Knudsen diffusivity measurements in PEMFC gas diffusion layers and micro porous layers". Journal of Power Sources. 206: 153–160. doi:10.1016/j.jpowsour.2012.01.099.
- Kerkhof, P. (1996). "A modified Maxwell–Stefan model for transport through inert membranes: The binary friction model". Chemical Engineering Journal and the Biochemical Engineering Journal. 64: 319–343. doi:10.1016/S0923-0467(96)03134-X.
- Klinkenberg, L. J. (1941). "The permeability of porous media to liquids and gases". Drilling and Production Practice. American Petroleum Institute. pp. 200–213.
- Brinkman, H. C. (1949). "A calculation of the viscous force exerted by a flowing fluid on a dense swarm of particles". Applied Scientific Research. 1: 27–34. doi:10.1007/BF02120313.
- Jin, Y.; Uth, M.-F.; Kuznetsov, A. V.; Herwig, H. (2 February 2015). "Numerical investigation of the possibility of macroscopic turbulence in porous media: a direct numerical simulation study". Journal of Fluid Mechanics. 766: 76–103. Bibcode:2015JFM...766...76J. doi:10.1017/jfm.2015.9.
- Arora, K. R. (1989). Soil Mechanics and Foundation Engineering. Standard Publishers. | <urn:uuid:d3c312da-ad0c-4bba-89bb-5287dfc24e53> | 3.59375 | 3,398 | Knowledge Article | Science & Tech. | 53.527633 | 95,501,839 |
As an illustration, suppose that we are interested in the properties of a function f(n) as n becomes very large. If f(n) = n2 + 3n, then as n becomes very large, the term 3n becomes insignificant compared to n2. The function f(n) is said to be "asymptotically equivalent to n2, as n → ∞". This is often written symbolically as f(n) ~ n2, which is read as "f(n) is asymptotic to n2".
Formally, given functions f(x) and g(x), we define a binary relation
if and only if (de Bruijn 1981, §1.4)
The symbol ~ is the tilde. The relation is an equivalence relation on the set of functions of x; the functions f and g are said to be asymptotically equivalent. The domain of f and g can be any set for which the limit is defined: e.g. real numbers, complex numbers, positive integers.
The same notation is also used for other ways of passing to a limit: e.g. x → 0, x ↓ 0, |x| → 0. The way of passing to the limit is often not stated explicitly, if it is clear from the context.
Although the above definition is common in the literature, it is problematic if g(x) is zero infinitely often as x goes to the limiting value. For that reason, some authors use an alternative definition. The alternative definition, in little-o notation, is that f ~ g if and only if | <urn:uuid:4d606771-ec67-4b56-b87d-f1d8306177ef> | 2.9375 | 338 | Knowledge Article | Science & Tech. | 69.198124 | 95,501,880 |
At their peak, it is difficult to breathe without inhaling the bugs, which hatch and emerge from the lake in blizzard-like proportions. After their short adult life, their carcasses blanket the lake, and the dead flies confer so much nutrient on the surrounding landscape that the enhanced productivity can be measured by Earth-observing satellites.
Now, however, the midge Tanytarsus gracilentus and its periodic, sky-darkening hatches are giving scientists an opportunity to assess how the slightest environmental perturbation can tip the precarious balance of an ecosystem and push it into altered states with unknown consequences. Writing this week in the journal Nature, a team led by University of Wisconsin-Madison zoologist Anthony Ives describes an ecosystem population dynamics model built on the flies of Lake Myvatn, showing how even slight human-induced changes can irreversibly alter the balance of nature.
"If our model is correct, the magnitude of these cycles should be sensitive to even the smallest changes in the hydrology of the lake," explains Ives, who conducted the research in collaboration with Árni Einarsson and Arnthor Gardarsson of the University of Iceland, and Vincent A. A. Jansen of Royal Holloway, University of London.
The new study is important because it suggests the possibility of constructing powerful models that scientists can use to assess what may occur as a result of both natural changes and human-induced changes such as those linked to global warming.
"It doesn't take much noise to cause big changes in the pattern," says Ives of phenomena, natural or human-induced, that can tip the balance of an ecosystem. "Even small amounts of environmental noise cause very different biological processes to dominate. And even if you understand the causes, you can't predict the effects."
In short, the study implies that humans are very likely and unknowingly imposing profound, unpredictable and irreversible changes on ecosystems of all kinds with very little effort.
Lake Myvatn, which means "midge lake" in Icelandic, makes a perfect laboratory for studying such environmental change. The algae-munching midge Tanytarsus gracilentus alone makes up two-thirds of the herbivores in the lake's biomass and is an important food source for birds and fish. But the populations of the midge fluctuate dramatically: "They fluctuate in abundance by six orders of magnitude; in some years you hardly see any, while in others you have to fight not to inhale them," according to Árni Einarsson who directs the Myvatn Research Station.
"The odd thing about the Myvatn midges," Ives adds, "is that the fluctuations are not random, but neither are they regular."
The model developed by Ives and his colleagues reveals an exotic mathematical property known as "alternative dynamical states." In short, the midges of Myvatn can appear in cycles of great and regular abundance, or at stable high abundances, and natural variables or "noise" such as temperature or wind can unpredictably push the dynamics between these alternative patterns.
"A practical, and serious, implication of these dynamics is that they make midges potentially susceptible to even minor disturbances," says Ives. "The magnitude of the fluctuations could be highly sensitive to disturbances that affect how low the populations crash during the cycling phase. In the last 40 years, the fluctuations in midge populations seem to have become more extreme. "
So extreme, Einarsson notes, that the Lake Myvatn fishery, a resource used by local farmers for 1,000 years has collapsed. "The fluctuations in midge populations became so extreme that the fish populations couldn't cope during midge crashes. Basically, the fish ran out of food."
The model developed by Ives' team implicates dredging in the lake, an operation initiated in the 1960s and now abandoned that was coincident with changes in the fluctuation of midge populations.
"Our model suggests that this dredging could, in principle, have caused greater fluctuations in midge populations," according to the Wisconsin biologist.
Although there are only a few species in the case of Lake Myvatn, the fragility of their dynamics makes the lake's ecosystem and the forces at play a valuable model for understanding discrete ecosystems of all kinds.
"These forces involve few species," notes Ives, "yet they have huge ramifications. They become an important test bed for looking at ecosystems in general."
Anthony R. Ives | EurekAlert!
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
16.07.2018 | Physics and Astronomy
16.07.2018 | Transportation and Logistics
16.07.2018 | Agricultural and Forestry Science | <urn:uuid:f6606965-5078-4ed1-a8fe-6fe5aaba105f> | 3.625 | 1,607 | Content Listing | Science & Tech. | 34.534851 | 95,501,907 |
Designing Expression Plasmid Vectors in E. coli
The production of proteins is one of the main applications of genetic engineering in biotechnology. Even though standard cloning procedures are now routine and a large variety of host-vector systems for gene expression are available, difficulties are encountered when theoretical strategies are put into practice, so gene expression is still quite empirical. E. coli remains an important host system for the industrial production of proteins from cloned genes, and considerable lore has accumulated since the pioneering gene expression experiments. The extensive knowledge about E. coli’s physiology and genetics accounts for its preferential use as a host for gene expression. The inability of this organism to exert certain posttranslational modifications of proteins that lead to correct folding and activity represents its major drawback as a production organism.
KeywordsTranscription Initiation Ribosome Binding Site mRNA Synthesis Replicative Plasmid Strain Genotype
- 3.Gerhardt, P., Murray, R. G. E., Wood, W. A., and Krieg, N. R. (1994) Methods for General and Molecular Bacteriology American Society for Microbiology, ASM Press, Washington, DC.Google Scholar
- 4.Schwab, H. (1993) Principles of Genetic Engineering for E. coli, in Genetic Engineering of Microorganisms (Pulher, A., ed.), VCH Publishers, New York, pp. 1–53.Google Scholar
- 7.Georgiou, G. and Bowden, G. A. (1991) Inclusion body formation and the recovery of aggregated recombinant proteins, in Recombinant DNA Technology and Applications (Prokop, A., Bajpai, R. K., and Ho, C., eds.), McGraw Hill, New York, pp. 333–356.Google Scholar
- 8.Bogosian, G., Kane, J. F., Obukowicz, M. G., and Olins, P. O. (1991) Optimizing protein production in recombinant strains of E. coli, in Recombinant DNA Technology and Applications (Prokop, A., Bajpai, R. K., and Ho, C., eds.), McGraw Hill, New York, pp. 285–315.Google Scholar
- 9.Shuler, M. L. and Kargi, F. (1992) Bioprocess Engineering Basic Concepts Prentice Hall, Englewood Cliffs, NJ.Google Scholar | <urn:uuid:d2c17d45-3c4d-4e14-ac53-5c4f29d96621> | 2.703125 | 512 | Academic Writing | Science & Tech. | 50.981531 | 95,501,917 |
+44 1803 865913
Edited By: James Devillers and Minh-Ha Pham-Delegue
332 pages, 140 figs, 48 tabs
Honey Bees: Estimating the Environmental Impact of Chemicals is an updated account of the different strategies for assessing the ecotoxicity of xenobiotics against these social insects, which play a key role in both ecology and agriculture. In addition to the classical acute laboratory test, semi-field cage tests and full field funnel tests, new tests based mainly on behavioral responses are for the first time clearly described. Information on the direct and indirect effects on honey bees of radionuclides, heavy metals, pesticides, semi-volatile organic compounds and genetically modified plants is also presented.
There are currently no reviews for this book. Be the first to review this book!
Your orders support book donation projects
They [the books] arrived in wonderful condition and it was a joy to see how well they were protected.
Search and browse over 110,000 wildlife and science products
Multi-currency. Secure worldwide shipping
Wildlife, science and conservation since 1985 | <urn:uuid:09dac3ff-79cb-4459-9ebe-a2c3b67098ec> | 2.765625 | 230 | Product Page | Science & Tech. | 29.418256 | 95,501,934 |
Geochemistry of obsidian from Krasnoe Lake on the Chukchi Peninsula (Northeastern Siberia)
- 33 Downloads
This report considers features of the geochemical composition of obsidian from beach sediments of Krasnoe Lake along the lower course of the Anadyr River, as well as from lava–pyroclastic rocks constituting the lake coastal outcrops and the surrounding branches of Rarytkin Ridge. The two geochemical types of obsidian, for the first time distinguished and researched, correspond in their chemical composition to lavas and ignimbrite-like tuffs of rhyolites from the Rarytkin area. The distinguished types represent the final stage of acidic volcanism in the West Kamchatkan–Koryak volcanic belt. It was assumed that the accumulation of obsidian in coastal pebble beds was caused by the erosion of extrusive domes and pyroclastic flows. The geochemical studies of obsidian artifacts from archeological sites of the regions of the Sea of Okhotsk, the Kolyma River, and the Chukchi Peninsula along with the correlation of geological and archeological samples show that Krasnoe Lake was an important source of “archeological” obsidian in Northeastern Siberia.
Unable to display preview. Download preview PDF.
- 1.V. V. Nasedkin, Acid Volcanism and Water-Containing Glasses of the USSR North-East (Nauka, Moscow, 1983) [in Russian].Google Scholar
- 2.J. C. Cook, Arct. Anthropol. 32 (1), 92–100 (1995).Google Scholar
- 3.N. N. Dikov, Archeological Monuments in Kamchatka, Chukotka and Upper Kolyma (Nauka, Moscow, 1977) [in Russian].Google Scholar
- 4.A. Yoshitani, S. Slobodin, T. Tomoda, et al., Proc. Archaeol. Mus. Kokugakuin Univ., No. 29, 1–21 (2013).Google Scholar
- 5.S. B. Slobodin, in Ecology of Ancient and Traditional Societies (Vektor-Buk, Tyumen, 2007), pp. 136–140.Google Scholar
- 6.P. I. Fedorov, N. I. Filatova, and A. I. Dvoryankin, Tikhookean. Geol. 15 (3), 3–13 (1996).Google Scholar
- 9.K. G. Shirinyan, Tr. Lab. Paleovulkanol., No. 2, 200–210 (1963).Google Scholar | <urn:uuid:d2179537-e95b-4ba3-8098-a7949af2c7dd> | 2.5625 | 568 | Academic Writing | Science & Tech. | 58.385704 | 95,501,946 |
Does habitat availability determine geographical-scale abundances of coral-dwelling fishes?
- 623 Downloads
The role of local-scale processes in determining large-scale patterns of abundance is a key issue in ecology. To test whether habitat use determines local and large-scale patterns of abundance of obligate coral-dwelling fishes (genus Gobiodon), the author compared habitat availability with the abundance of four species, G. axillaris, G. brochus, G. histrio, and G. quinquestrigatus, among four locations, from the southern Great Barrier Reef to northern Papua New Guinea. Habitat availability, measured at tens of meters, explained 47–65% of the variation in abundance of these species among geographic locations spanning over 2,000 km. Therefore, local-scale patterns of habitat use appear to determine much larger-scale patterns of abundance in these habitat-specialist fish. The abundances of all species, except G. brochus, were also closely associated with particular exposure regimes, independently of the abundance of corals. Broad-scale habitat selection for reef types within locations can most easily explain this pattern. The abundances of all species, except G. brochus, also varied among geographic locations, independently of coral abundances. Therefore, the abundances of these species are influenced by either geographic variation in local-scale processes that was not measured, or additional processes acting at very large spatial scales.
Unable to display preview. Download preview PDF. | <urn:uuid:b558fbf2-35f3-41f0-b6d9-59d86e87b514> | 3.21875 | 305 | Truncated | Science & Tech. | 21.285611 | 95,501,956 |
Please forward this error screen to sharedip-1000 solved problems in classical mechanics pdf. Please forward this error screen to sharedip-1601531662.
This article is about the math prizes. For the technology prize, see Millennium Technology Prize.
This article needs additional citations for verification. Please help improve this article by adding citations to reliable sources.
Unsourced material may be challenged and removed. The Millennium Prize Problems are seven problems in mathematics that were stated by the Clay Mathematics Institute in 2000. Mills existence and mass gap. At present, the only Millennium Prize problem to have been solved is the Poincaré conjecture, which was solved by the Russian mathematician Grigori Perelman in 2003.
In dimension 2, a sphere is characterized by the fact that it is the only closed and simply-connected surface. The Poincaré conjecture states that this is also true in dimension 3. It is central to the more general problem of classifying all 3-manifolds. | <urn:uuid:072a8b72-81bf-4195-abf6-c6c0ad2a3abb> | 2.78125 | 203 | Knowledge Article | Science & Tech. | 42.081382 | 95,501,963 |
DLR Expression is the backbone of the DLR. It is a separate feature that you can use without involving the rest of the DLR. If you do use it with other DLR features, there are basically two usage scenarios. One scenario involves defining a language’s semantics in terms of DLR expressions. The other is defining the late binding logic of binders and dynamic objects. Don’t worry if these don’t make much sense to you right now. In this chapter, you will learn how to use DLR Expression by itself, while later chapter will cover the two usage scenarios. Once you get a good grasp of using DLR Expression by itself, you’ll be in a good position to use DLR Expression and the other DLR features together.
KeywordsFactory Method Abstract Syntax Expression Class Concrete Syntax Lambda Calculus
Unable to display preview. Download preview PDF. | <urn:uuid:5f4c0018-a38e-4401-a5c1-08b797feed7d> | 2.59375 | 189 | Truncated | Software Dev. | 56.669602 | 95,501,971 |
Although bacteria have no sensory organs in the classical sense, they are still masters in perceiving their environment. A research group at the University of Basel’s Biozentrum has now discovered that bacteria not only respond to chemical signals, but also possess a sense of touch. In their recent publication in “Science”, the researchers demonstrate how bacteria recognize surfaces and respond to this mechanical stimulus within seconds. This mechanism is also used by pathogens to colonize and attack their host cells.
Be it through mucosa or the intestinal lining, different tissues and surfaces of our body are entry gates for bacterial pathogens. The first few seconds – the moment of touch – are often critical for successful infections.
Some pathogens use mechanical stimulation as a trigger to induce their virulence and to acquire the ability to damage host tissue. The research group led by Prof. Urs Jenal, at the Biozentrum of the University of Basel, has recently discovered how bacteria sense that they are on a surface and what exactly happens in these crucial first few seconds.
Research focused only on chemical signals
In recent decades, research has made enormous progress in exploring how bacteria perceive and process chemical signals. “However, we have little knowledge of how bacteria read out mechanical stimuli and how they change their behavior in response to these cues,” says Jenal.
“Using the non-pathogenic Caulobacter as a model, our group was able to show for the first time that bacteria have a ‘sense of touch’. This mechanism helps them to recognize surfaces and to induce the production of the cell's own instant adhesive.”
How bacteria recognize surfaces and adhere to them
Swimming Caulobacter bacteria have a rotating motor in their cell envelope with a long protrusion, the flagellum. The rotation of the flagellum enables the bacteria to move in liquids. Much to the surprise of the researchers, the rotor is also used as a mechano-sensing organ. Motor rotation is powered by proton flow into the cell via ion channels. When swimming cells touch surfaces, the motor is disturbed and the proton flux interrupted.
The researchers assume that this is the signal that sparks off the response: The bacterial cell now boosts the synthesis of a second messenger, which in turn stimulates the production of an adhesin that firmly anchors the bacteria on the surface within a few seconds. “This is an impressive example of how rapidly and specifically bacteria can change their behavior when they encounter surfaces,” says Jenal.
Better understanding of infectious diseases
“Even though Caulobacter is a harmless environmental bacterium, our findings are highly relevant for the understanding of infectious diseases. What we discovered in Caulobacter also applies to important human pathogens,” says Jenal. In order to better control and treat infections, it is mandatory to better understand processes that occur during these very first few seconds after surface contact.
Isabelle Hug, Siddharth Deshpande, Kathrin S. Sprecher, Thomas Pfohl, Urs Jenal
Second messenger-mediated tactile response by a bacterial rotary motor
Science (2017). doi: 10.1126/science.aan5353
Prof. Dr. Urs Jenal, University of Basel, Biozentrum, Tel. +41 61 207 21 35, email: firstname.lastname@example.org
Dr. Katrin Bühler, University of Basel, Biozentrum, Communications, Tel. +41 61 207 09 74, email: email@example.com
Dr. Katrin Bühler | Universität Basel
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:dca4ca9b-1a81-4fb4-a9b4-8368b5afb101> | 3.765625 | 1,342 | Content Listing | Science & Tech. | 37.638173 | 95,501,973 |
Frogs and toads may look like small creatures but they play a rather large role in our environment, according to wildlife experts.
Which is why Virginia named 2015 the "Year of the Frog" and why FrogWatch USA is asking for citizen scientists to help study them.
"Amphibians are also very useful to humans, and are handy to have around," says Travis Land, herpetology curator at the Virginia Living Museum in Newport News. The museum participates in FrogWatch USA and Virginia's Year of the Frog with events now and into August.
"A large portion of an adult frog's diet is comprised of things we consider as pests. A single frog or toad eats thousands of crickets, slugs, flies, moths, grasshoppers, ants, earwigs and so on each year. Also, since tadpoles munch on algae, they help clear the water to allow light to reach the native vegetation growing underneath, which in turn supports other types of life.
"And lastly, one of the other important roles they play nowadays is as the proverbial 'canary in the coal mine.' Since their skin absorbs just about everything in the water, they're usually the first ones to get affected by contaminants or pollution. When you stop hearing their songs and stop seeing them in your local watersheds — that's a warning something may be wrong."
The Virginia Department of Game and Inland Fisheries has never done anything like Year of the Frog before, but the success of the campaign may lead to more years of awareness about specific wildlife species, according to John Kleopfer, herpetologist with the department.
"Although there has been significant wetland loses, which results in the loss of breeding habitat for frogs, frogs appear to be doing OK in Virginia," says Kleopfer. Even so, 50 percent of the world's frog species are imperiled, according to the Virginia Is for Frog's website at dgif.virginia.gov/wildlife/virginia-is-for-frogs.
Frogs vs. toads
Frogs and toads are very similar actually, according to Land. There are a few physical differences, especially for the ones found in Virginia.
Generally, "true toads" in Virginia can be identified by their rough, warty skin and large bumps on the side of their head, called parotoid glands — where they produce the poison they secrete.
"That is why you really shouldn't touch a toad with your bare hands, since the poisons they secrete can make people sick," he says.
"Toads also have shorter, stubbier legs they use to crawl, rather than hop."
Frogs, however, usually have smooth skin all over, and their feet are webbed or have toepads, depending on what type of frog it is, Land adds. Their legs are much larger and longer, allowing them to jump great distances.
"You still wouldn't want to touch one though; some frogs also have toxic skin secretions to ward off predators," he says.
Did you know?
Frogs or toads cannot give you warts, according to Land. Warts that develop on people's skin are actually caused by a type of virus, while the warts on a toad are bumps that help thicken the skin and offer the toad more protection.
And, while frogs and toads eat insects and other invertebrates, bullfrogs go after just about anything that moves, he continues, eating prey such as mice, voles, snakes and even birds.
Homeowners can help protect frogs by:
•Using nonchemical weed and pest controls in yards.
•Minimizing fertilizer use.
•Picking up pet waste because it contains bacteria and viruses harmful to wildlife.
How to attract frogs
•Nature water always attracts frogs, even if it is just a temporary puddle, according to Kory Steele of Newport News, president of the Virginia Herpetology Society —virginiaherpetologicalsociety.com. Frogs generally will find their way toward any newly created backyard koi ponds.
•To help toads eating bugs under your porch light, leave out a shallow dish of unchlorinated water when the weather is dry, he adds.
•Tree frogs have been known to take up residence inside of PVC pipes; cut sections and hang them in trees among the leaves will draw them in, says Steele.
•Create a toad abode, using an old clay pot turned upside-down, according to Land. Make sure there is an opening on the ground to one side, either by making a crack or propping up one side of the pot on rocks. The pot needs to be located in a cool, shady place that's not too exposed. Don't worry about making a floor for the abode — toads like to dig into the ground. Next, make sure there is a source of water around; you can provide one by leaving a saucer with water on the ground and refilling it regularly, or simply find a depression in the ground that retains water occasionally.
Meet five frogs
Here are five frogs you can encounter in Hampton Roads, courtesy the Virginia Living Museum:
•American Bullfrog (Lithobates catesbeianus). Largest native frog species in North America with average length 31/2- 6 inches. Coloration varies from green to brown, sometimes plain in coloration or with a darker "net-like" pattern on the back. Call is a deep "Rrrruummm" or "Jug-o-rummm," sounding similar to a bull. Eats anything it can try and fit in its mouth, including fish, small mammals, small reptiles (like lizards or snakes), insects, birds or other amphibians.
•Southern Leopard Frog (Lithobates sphenocephalus). Average length is 2-3 inches long. Coloration varies from green to brown, with several dark spots along the side and back. A distinct ridge extends from the eyes down along the back on either side of the body. Call is described as a "guttural laugh," followed by some croaks. Eats insects, worms, and other invertebrates. Travel on land to forage, but prefers wet areas with dense vegetation.
•Barking Treefrog (Hyla gratiosa). Largest North American tree frog species with average length 2- 21/2 inches. Coloration is bright green to dark brown, and sometimes present with dark circular spots on the back. May occasionally have a bright white stripe running along either side that can be complete or broken. Call is described as a small dog or hound barking, especially when they call in chorus. Eats mostly insects. Found in a small handful of counties in southeast Virginia. High concern for conservation.
•American Toad (Anaxyrus americanus). Average length is 2-31/2 inches. Color varies from brown, gray, rust red, or olive with warty skin. Dark spots on back that only have one to two warts per spot. Call is a long, extended trill — longer than the other toad species (20-30 seconds). Eats a range of insects and invertebrates. Often found near shallow bodies of water for breeding. Prefers areas with loose soil, leaf litter, logs or other spots to hide.
•Eastern Narrow-Mouthed Toad (Gastrophryne carolinensis). This small species is typically 0.9-1.3 inches in length. Coloration varies from gray, brown, to reddish. Skin is smooth, with a pointed snout, as well as a distinct fold of skin across its head. Call is a sharp, short "baaaa," like a sheep's call. Eats small insects like ants, termites, etc. Spends much of its time buried in the ground until significant rain events that trigger breeding.
Contact Kathy at email@example.com.
Froggy times and fun at Virginia Living Museum
•"Frogs: a Chorus of Colors" is one of its main attractions at the Virginia Living Museum through Sept. 7. The traveling exhibit, created by Peeling Productions, showcases more than 70 exotic frogs and toads in colorful exhibits.
•Frogs are Herptile, 10 a.m.-4 p.m. June 27. Learn about the Virginia Herpetological Society and the work they do to protect frogs — and lots of other animals too. Family-friendly presentations by Kory Steele about identifying "frog calls" — the different sounds frogs make — will be held throughout the day. Special crafts and activities also will be part of the fun.
•FrogWatch USA, 10 a.m.-4 p.m. July 25. Meet citizen scientist volunteers from FrogWatch USA and learn how they're making meaningful contributions to amphibian conservation by simply listening for and identifying the calls of frogs and toads. This information is then collected, combined and evaluated by research scientists who work to protect and preserve frogs and their habitats. Learn how you and your family can become "frog-friendly" citizen scientists. Crafts and activities, too. The Virginia Living Museum is participating in FrogWatch USA, which is a citizen science program created by the Association of Zoos and Aquariums to educate people about amphibians, and teach them how to identify native frog calls. "Listening to frog calls allows us to collect useful data on amphibian populations, and how they are changing over time," says Travis Land, herpetology curator at the living museum. The museum's chapter is the 103rd in the nation, with more being created each year. Other Hampton Roads area chapters include the Tidewater Chapter at the Virginia Zoo, Peninsula Master Naturalist Chapter and the Virginia Aquarium chapter. Learn more about FrogWatch at aza.org/frogwatch.
•Virginia is for ... Frogs, 10 a.m.-4 p.m. Aug. 8. Celebrate 2015 as "Virginia's Year of the Frog." Representatives from the Virginia Department of Game and Inland Fisheries help your learn about Virginia frog friends, their habitats and their survival challenges. Learn and fun includes take-home crafts and activities. Learn more about Virginia's Year of the Frog at dgif.virginia.gov/wildlife/virginia-is-for-frogs.
Note: The Virginia Living Museum is located at 524 J. Clyde Morris Blvd., Newport News. Museum admission is required for the event; for more information, call 595-1900 or visit thevlm.org.
Virginia is for Frogs
•Learn about Virginia's 27 species of frogs and toads at dgif.virginia.gov/wildlife/virginia-is-for-frogs.
•Learn what you can do to help save and protect frogs at savethefrogs.com. | <urn:uuid:8a738606-8d3c-4c26-8e04-c2c3d170751a> | 3.46875 | 2,278 | News Article | Science & Tech. | 59.664831 | 95,501,979 |
Some features of this site are not compatible with your browser. Install Opera Mini to
better experience this site.
Different Records, Same Warming Trend
This page contains archived content and is no longer being updated. At the time of publication, it represented the best available science. However, more recent observations and studies may have rendered some content obsolete.
Each year, scientists from several major institutions—NASA’s Goddard Institute for Space Studies (GISS), NOAA’s National Climatic Data Center (NCDC), the Japanese Meteorological Agency, and the Met Office Hadley Centre in the United Kingdom—tally the temperature data collected at stations around the world and make independent judgments about whether the year was warm or cool compared to previous years.
But how much does the ranking of a single year matter? Not all that much, said James Hansen, the director of NASA GISS. In his group’s analysis, 2010 differed from 2005 by less than 0.01°C (0.018 °F), a difference so small that the temperatures of the two years are almost indistinguishable, given the uncertainty of the calculation. Meanwhile, the third warmest year, 2009, is so close to 1998, 2002, 2003, 2006, and 2007 (the maximum difference between years is 0.03°C), that all six years are virtually tied.
What matters more than a yearly record from a single group is the longer trend, as shown in the plot at the top of this page. The four records are unequivocal: the world has warmed since 1880, and the last decade has been the warmest on record.
When we focus on the annual rankings, the differences between the temperature analyses can be confusing. For example, GISS previously ranked 2005 as the warmest, while the Met Office listed 1998. The discrepancy helped fuel a misconception that findings from the research groups varied sharply or contained large degrees of uncertainty. It also fueled a misconception that global warming had stopped in 1998.
“The official records vary slightly because of subtle differences in the way we analyze the data,” said Reto Ruedy, one of Hansen’s colleagues at GISS. “But they also agree extraordinarily well.”
All four records above show peaks and valleys in sync with each other. All show particularly rapid warming in the past few decades. And all show the last decade as the warmest.
The small discrepancies between the records are mostly due to the way scientists from each institution handle regions of the world where temperature-monitoring stations are scarce—parts of Africa, Antarctica, the Arctic, and the Amazon. For instance, GISS fills in the gaps (see the first global map above) with data from the nearest land stations. The Met Office analysis (second of the two global maps above) leaves areas of the Arctic Ocean out.
Both approaches pose problems. By not inferring data, the Met Office assumes that the areas without stations have a warming equal to that of the entire Northern Hemisphere—a value that satellite and field measurements suggest is too low, given the observed rate of Arctic sea ice loss. On the other hand, GISS’s approach may either overestimate or underestimate Arctic warming.
“There’s no doubt that estimates of Arctic warming are uncertain, and should be regarded with caution,” Hansen said. “Still, the rapid pace of Arctic ice retreat leaves little question that temperatures in the region are rising fast, perhaps faster than we assume in our analysis.”
The temperature records also differ slightly because the point of reference that each group uses to calculate global temperature is different. It is not possible to reliably calculate absolute global average surface temperatures, so scientists instead calculate a relative measure called a “temperature anomaly.” They compare average temperatures at any given time and place to a long-term average, or base period, for each area. GISS uses a base period of 1951 to 1980; the Met Office uses 1961 to 1990; the Japanese Meteorological Agency uses 1971 to 2000; and NCDC uses the entire 20th century. The graph at the top of this page shows the four surface temperature records aligned to a common baseline: the average global temperature from 1951–1980.
This means that numerical values of the temperature anomalies differ. But it does not change the magnitude of temperature changes over the past century. | <urn:uuid:a0d49fd8-f5e6-4aae-b575-da03368e2972> | 3.421875 | 897 | Knowledge Article | Science & Tech. | 41.332205 | 95,501,983 |
Torque is the tendency of a force to rotate an object about an axis. It is thought to twist an object. Torque is the measure of the turning force on an object such as a bolt of a flywheel. Torque is often denoted by the Greek letter tau.
The magnitude of torque depends on three quantities: the force applied, the length or the lever arm connecting the axis to the point of force application and the angle between the force vector and the lever arm. The equations of torque are as followed:
τ=r x F
Where τ is the torque vector, r is the displacement vector, F is the force vector, x denotes a cross product and ϴ is the angle between the force vector and the lever arm vector. The SI unit for torque is the newton metre (Nm).© BrainMass Inc. brainmass.com July 17, 2018, 11:53 am ad1c9bdddf | <urn:uuid:47956937-babc-4623-ba53-0193c94b2cc1> | 3.9375 | 193 | Knowledge Article | Science & Tech. | 71.141462 | 95,502,039 |
But on the remote UK overseas territory of Ascension Island, one of the world's largest green turtle populations is undergoing something of a renaissance.
Writing in the journal Biodiversity and Conservation, scientists from the University of Exeter and Ascension Island Government Conservation Department report that the number of green turtles nesting at the remote South Atlantic outpost has increased by more than 500 per cent since records began in the 1970s.
As many as 24,000 nests are now estimated to be laid on the Island's main beaches every year, making it the second largest nesting colony for this species in the Atlantic Ocean.
Lead author, Dr Sam Weber, said: "The increase has been dramatic. Whereas in the 1970s and 80's you would have been lucky to find 30 turtles on the Island's main nesting beach on any night, in 2013 we had more than 400 females nesting in a single evening".
The scientists' report comes as Ascension Island Government announces that it is committing a fifth of the territory's land area to biodiversity conservation. New legislation enacted by the Island's Governor, Mark Capes, on the 28th of July creates seven new nature reserves and wildlife sanctuaries that include the Island's three main turtle nesting beaches, along with globally-important seabird colonies that are home to more than 800,000 nesting seabirds.
The legislation was developed during a two year project run by Ascension Island Government and the University of Exeter to develop a national biodiversity action plan for the territory.
Dr Nicola Weber, Ascension Island Government's Head of Conservation, said: "The decision to give legal protection to our most iconic wildlife sites follows extensive public consultation and has received a high level of support from across of the community. It speaks volumes as to how seriously environmental stewardship is currently taken on the Island".
Dr Annette Broderick, who is leading the project for the University of Exeter, added: "I am delighted that these globally important nesting sites have been afforded protection. This has been a goal for many years and has been achieved as a result of the dedication of the AI Government team who have been working towards this for several years."
To trace the origins of the current turtle boom you need to go back to before the Second World War. Dr Broderick, who has been researching sea turtles on Ascension Island for the past 15 years and led the recent study, said: "Green turtles were an important source of food for those on the island and passing ships would take live turtles onboard to ensure fresh meat for their voyage. Ships returning to the UK would stock up with turtles for the Lords of the Admiralty, who had a penchant for turtle soup. Records show a dramatic decline in the number of turtles harvested each year as fewer and fewer came to nest and since the 1950s no turtles have been harvested. We are now seeing the population bounce back, although our models suggest we have not yet reached pre-harvest levels."
Turtles were legally protected on Ascension Island in 1944 and the population has never looked back. "Because sea turtles take so long to reach breeding age, we are only now beginning to see the results of conservation measures introduced decades ago", says Dr Weber. "It just goes to show how populations of large, marine animals can recover from human exploitation if we protect them over long enough periods."
Eleanor Gaskarth | Eurek Alert!
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
19.07.2018 | Materials Sciences
19.07.2018 | Earth Sciences
19.07.2018 | Life Sciences | <urn:uuid:3da476be-fe45-46ae-8eef-09e1f9d8c39e> | 3.265625 | 1,338 | Content Listing | Science & Tech. | 43.780265 | 95,502,064 |
Some time back I went back over a textbook I had from my Object Oriented Analysis and Design class. The book was authored by Grady Booch. The title of the book was Object-Oriented Analysis and Design with Applications. I have covered some ideas from this book in previous blog posts. This time I want to finish up some topics I read in the book.
A very popular term within the OO world is polymorphism. This means one name with many implementations. In other words, you have multiple related classes with the same function name. Then depending on the object upon which you are calling the method, you will get the right implementation executed.
There is another scenario where you have the same function name, but multiple implementations. This is where you have multiple functions within the same class that have the same name. However they are differentiated by the number and/or types of parameters that are passed in to the function. The official name for this behavior is overloading.
Another key idea in OO is that of aggregation. This is a big word to say that one object has another object or item as part of it. A synonym for this idea is containment. There are some lesser used OO topics that Booch mentions. One of them is that of a meta class, where the instances of the class are classes themselves. That does not sound familiar from my experience.
Some techniques for performing object oriented analysis and design are use cases and CRC cards. I have only hear about but never participated in CRC card analysis. Apparently you get together in a room and write down things on physical cards. I have done a lot of use case work in the past. However these days we skip that during out analysis phase. So do our requirements people.
Finally I read up on parametrized classes. These are classes whose behavior depends on parameters passed to the class. In C++, this idea is implemented as templates. I do have a little experience with templates in general. However I believe I want to beef up that experience.
Mysterious Double Instance Hampering Performance - I study the existing code base. Confer with a colleague. Then I determine the optimal plan to change the functionality to load only a slice of all the dat... | <urn:uuid:310b53aa-49b5-4133-a607-63b50a0e0401> | 2.53125 | 459 | Personal Blog | Software Dev. | 50.961985 | 95,502,077 |
Mars Science Lab update: What remains of Mars’ atmosphere is still dynamic
ANN ARBOR—Mars has lost much of its original atmosphere, but what’s left remains active, according to recent findings from NASA’s Mars rover Curiosity that involve a University of Michigan researcher.
Rover team members reported diverse findings today at the European Geosciences Union 2013 General Assembly, in Vienna, Austria. Evidence has strengthened this month that Mars lost much of its original atmosphere by a process of gas escaping from the top of the atmosphere.
Curiosity’s Sample Analysis at Mars (SAM) instrument analyzed an atmosphere sample last week using a process that concentrates selected gases. The results provided the most precise measurements ever made of isotopes of argon in the Martian atmosphere. Isotopes are variants of the same element with different atomic weights.
“We found arguably the clearest and most robust signature of atmospheric loss on Mars,” said Sushil Atreya, professor of atmospheric and space sciences at U-M and a SAM co-investigator.
SAM found that the Martian atmosphere has about four times as much of a lighter stable isotope (argon-36) compared to a heavier one (argon-38). This removes previous uncertainty about the ratio in the Martian atmosphere from 1976 measurements from NASA’s Viking project and from small volumes of argon extracted from Martian meteorites. The ratio is much lower than the solar system’s original ratio, as estimated from argon-isotope measurements of the sun and Jupiter. This points to a process at Mars that favored preferential loss of the lighter isotope over the heavier one.
Curiosity measures several variables in today’s Martian atmosphere with the Rover Environmental Monitoring Station (REMS), provided by Spain. While daily air temperature has climbed steadily since the measurements began eight months ago and is not strongly tied to the rover’s location, humidity has differed significantly at different places along the rover’s route. These are the first systematic measurements of humidity on Mars.
Trails of dust devils have not been seen inside Gale Crater, but REMS sensors detected many whirlwind patterns during the first hundred Martian days of the mission, though not as many as detected in the same length of time by earlier missions. “A whirlwind is a very quick event that happens in a few seconds and should be verified by a combination of pressure, temperature and wind oscillations and, in some cases, a decrease is ultraviolet radiation,” said Javier Gómez-Elvira, REMS principal investigator, of the Centro de Astrobiología, Madrid.
Dust distributed by the wind has been examined by Curiosity’s laser-firing Chemistry and Camera (ChemCam) instrument. Initial laser pulses on each target hit dust. The laser’s energy removes the dust to expose underlying material, but those initial pulses also provide information about the dust.
“We knew that Mars is red because of iron oxides in the dust,” said Sylvestre Maurice, ChemCam deputy principal investigator, of the Institut de Recherche en Astrophysique et Planétologie in Toulouse, France. “ChemCam reveals a complex chemical composition of the dust that includes hydrogen, which could be in the form of hydroxyl groups or water molecules.”
Possible interchange of water molecules between the atmosphere and the ground is studied by a combination of instruments on the rover, including the Dynamic Albedo of Neutrons (DAN), provided by Russia under the leadership of DAN Principal Investigator Igor Mitrofanov.
NASA’s Mars Science Laboratory Project is using Curiosity to investigate the environmental history within Gale Crater, a location where the project has found that conditions were long ago favorable for microbial life. Curiosity, carrying 10 science instruments, landed in August 2012 to begin its two-year prime mission. NASA’s Jet Propulsion Laboratory, a division of Caltech in Pasadena, manages the project for NASA’s Science Mission Directorate in Washington.
- MSL mission: http://www.nasa.gov/msl and http://mars.jpl.nasa.gov/msl
- Sushil Atreya: www-personal.umich.edu/~atreya/
- Source: NASA http://mars.jpl.nasa.gov/msl/news/whatsnew/index.cfm?FuseAction=ShowNews&NewsID=1461 | <urn:uuid:50e07d8b-d0a6-4b13-976a-fac63295cb96> | 3.453125 | 937 | News (Org.) | Science & Tech. | 32.844529 | 95,502,084 |
Caltech researchers uncover a mechanism for how fruit flies regulate their flight speed, using both vision and wind-sensing information from their antennae.
Due to its well-studied genome and small size, the humble fruit fly has been used as a model to study hundreds of human health issues ranging from Alzheimer's to obesity. However, Michael Dickinson, Esther M. and Abe M. Zarem Professor of Bioengineering at Caltech, is more interested in the flies themselves—and how such tiny insects are capable of something we humans can only dream of: autonomous flight. In a report on a recent study that combined bursts of air, digital video cameras, and a variety of software and sensors, Dickinson and his team explain a mechanism for the insect's "cruise control" in flight—revealing a relationship between a fly's vision and its wind-sensing antennae.
A tracing of the flies' flight trajectories as they explore in a wind tunnel, as seen from above. Each observation by the cameras is scaled according to flight speed, as if the animal was dribbling paint as it was flying; the longer the residence time, the larger the dot. Each trajectory is shown in a different color. The stars indicate when the flies were subjected to a brief gust of wind. These experiments revealed how the wind-sensing antennae stabilize the fly's visual flight controller.Credit: Sawyer Fuller/Caltech
The results were recently published in an early online edition of the Proceedings of the National Academy of Sciences.
Inspired by a previous experiment from the 1980s, Dickinson's former graduate student Sawyer Fuller (PhD '11) wanted to learn more about how fruit flies maintain their speed in flight. "In the old study, the researchers simulated natural wind for flies in a wind tunnel and found that flies maintain the same groundspeed—even in a steady wind," Fuller says.
Because the previous experiment had only examined the flies' cruise control in gentle steady winds, Fuller decided to test the limits of the insect's abilities by delivering powerful blasts of air from an air piston in a wind tunnel. The brief gusts—which reached about half a meter per second and moved through the tunnel at the speed of sound—were meant to probe how the fly copes if the wind is rapidly changing.
The flies' response to this dynamic stimulus was then tracked automatically by a set of five digital video cameras that recorded the fly's position from five different perspectives. A host of computers then combined information from the cameras and instantly determined the fly's trajectory and acceleration.
To their surprise, the Caltech team found that the flies in their experiments, unlike those in the previous studies, accelerated when the wind was pushing them from behind and decelerated when flying into a headwind. In both cases the flies eventually recovered to maintain their original groundspeed, but the initial response was puzzling, Fuller says. "This response was basically the opposite of what the fly would need to do to maintain a consistent groundspeed in the wind," he says.
In the past, researchers assumed that flies—like humans and most other animals—used their vision to measure their speed in wind, accelerating and decelerating their flight based on the groundspeed their vision detected. But Fuller and his colleagues were also curious about the in-flight role of the fly's wind-sensing organs: the antennae.
Using the fly's initial response to strong wind gusts as a marker, the researchers tested the response of each sensory mode individually. To investigate the role of wind sensation on the fly's cruise control, they delivered strong gusts of wind to normal flies, as well as flies whose antennae had been removed. The flies without antenna still increased their speed in the same direction as the wind gust, but they only accelerated about half as much as the flies whose antennae were still intact. In addition, the flies without antennae were unable to maintain a constant speed, dramatically alternating between acceleration and deceleration. Together, these results suggested that the antennae were indeed providing wind information that was important for speed regulation.
In order to test the response of the eyes separately from that of the antennae, Fuller and his colleagues projected an animation on the walls of the fly-tracking arena that would trick the eyes into thinking there was no speed increase, even though the antenna could feel the increased windspeed. When the researchers delivered strong headwinds to flies in this environment, the flies decelerated and were unable to recover to their original speed.
"We know that vision is important for flying insects, and we know that flies have one of the fastest visual systems on the planet," Dickinson says, "But this response showed us that as fast as their vision is, if they're flying too fast or the wind is blowing them around too quickly, their visual system reaches its limit and the world starts getting blurry." That is when the antennae kick in, he says.
The results suggest that the antennae are responsible for quickly sensing changes in windspeed—and therefore are responsible for the fly's initial deceleration in a headwind. The information received from the fly's eyes—which is processed much more slowly than information from the wind sensors on the antenna—is responsible for helping the fly regain its cruising speed.
"Sawyer's study showed that the fly can take another sensor—this little tiny antenna, which doesn't require nearly the amount of processing area within the brain as the eyes—and the fly is able to use that information to compensate for the fact that the information coming out of the eyes is a bit delayed," Dickinson says. "It's kind of a neat trick, using a cheap little sensor to compensate for the limitations of a big, heavy, expensive sensor."
Beyond learning more about the fly's wind-sensing capabilities, Fuller says that this information will also help engineers design small flying robots—creating a sort of man-made fly. "Tiny flying robots will take a lot of inspiration from flies. Like flies, they will probably have to rely heavily on vision to regulate groundspeed," he says.
"A challenge here is that vision typically takes a lot of computation to get right, just like in flies, but it's impossible to carry a powerful processor to do that quickly on a tiny robot. So they'll instead carry tiny cameras and do the visual processing on a tiny processor, but it will just take longer. Our results suggest that little flying vehicles would also do well to have fast wind sensors to compensate for this delay."
The work was published in a study titled "Flying Drosophila stabilize their vision-based velocity controller by sensing wind with their antennae." Other coauthors include former Caltech senior postdoc Andrew D. Straw, Martin Y. Peek (BS '06), and Richard Murray, Thomas E. and Doris Everhart Professor of Control and Dynamical Systems and Bioengineering at Caltech, who coadvised Fuller's graduate work. The study was supported by the Institute for Collaborative Biotechnologies through funding from the U.S. Army Research Office and by a National Science Foundation Graduate Fellowship.
Written by Jessica Stoller-Conrad
Deborah Williams-Hedges | Eurek Alert!
O2 stable hydrogenases for applications
23.07.2018 | Max-Planck-Institut für Chemische Energiekonversion
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
23.07.2018 | Materials Sciences
23.07.2018 | Information Technology
23.07.2018 | Health and Medicine | <urn:uuid:536a68d1-ce82-4691-8442-546a394a26e8> | 3.890625 | 2,060 | Content Listing | Science & Tech. | 43.410059 | 95,502,113 |
http://www.mongoliatravelguide.mn/?sakson=bitwiss-com&722=f4 http://www.mongoliatravelguide.mn/?sakson=bitwiss-com&722=f4 What are Atoms Made (Composed of)? – Describe Atomic Structure of an Atom & History of the Atom- A matter is any substance which has mass and occupies space. All the objects around you, this book, your pen or pencil and natural things, such as rocks, water, and plants are the subject of the universe. here here A http://blossomjar.com/pacinity/1125 http://blossomjar.com/pacinity/1125 matter is made up of atoms. The smallest indivisible particle of matter is femme rencontre homme gratuit belgique femme rencontre homme gratuit belgique an atom. The various scientist gave the theory about the complex structure of an atom.
source link source link What are Atoms Made of ?- (Atoms are Composed of)
A structure of an atom consists of several small particles. Three important particles are
- Proton and
These particles are main constituents of the atom we call as http://www.transportbudapesta.ro/?kdls=opzioni-binarie-miglior-broker-forum&045=83 http://www.transportbudapesta.ro/?kdls=opzioni-binarie-miglior-broker-forum&045=83 fundamental particles. The electrons present around the atom are attracted to the protons. These protons are present in an atomic nucleus. The force with which these electrons are attracted is an electromagnetic force. The protons and neutrons also attract each other in the nucleus. But this force is called as the nuclear force.
online dating seite kostenlos online dating seite kostenlos Except for hydrogen, all elements made up of atoms are having three subatomic particles electron, proton, and neutrons. Hydrogen is made up of one electron and one proton. It does not contain any neutron. The atoms of different elements are different in the number of protons, electrons, and neutrons. When these subatomic particles put together they give a structure of an atom.
source link source link What is Atomic Structure – Definition
An atom is the smallest part of the matter. An atom participates with its chemical properties. It is made up of electron, proton, and neutron. Atoms are very small. There size typical around a ten-billionth of a meter.
- Dalton, in 1808, proposed that atoms are very very small, inseparable particles.
- Atoms are tasteless and colourless.
- An atom is electrically neutral.
- The number of protons and neutrons together decide mass of atom.
- The number of atoms before and after the reaction remain same.
- Atom is the smallest particle of an element that participates in the chemical reaction.
- When the group of atoms gets together may be same or different types we call as a molecule.
- The size atom varies between .3 Å(1angstrom=10−10 m) to 3 Å.
- Atom is the smallest particle of an element that retains all its properties and enters into a chemical reaction.
T3 Total http://modernhomesleamington.co.uk/component/k2/itemlist/user/16740?format=feed è una società a responsabilità limitata in via G. Pianigiani 71, Roma. Aperti dal lunedì al venerdì: 9.00 – 13.00 / 15.00 -18.00. T3 Total http://modernhomesleamington.co.uk/component/k2/itemlist/user/16740?format=feed è una società a responsabilità limitata in via G. Pianigiani 71, Roma. Aperti dal lunedì al venerdì: 9.00 – 13.00 / 15.00 -18.00. History of the Atom
The concept of atomic structure was given by the three scientists. Out of which Neil Bohr successfully explained the atomic structure. A comparison between the different models proposed by the three scientist JJ Thomson, Rutherford and Neil Bohr is given below:
rencontre en ligne telefilm tf1 chart forex Ernest Rutherford Atomic Theory
Neils Bohr Atomic Theory
|An atom consists of a ball. positive charge(+) and electrons(-) are embedded like a seed.The number to the total positive charge equals to the negative charge due to which atom has no charge.It means atoms are electrically neutral||An atom consists of the positively charged nucleus at the center. Electrons revolve around the circular path with a very high speed.There exists an electrostatic force between the proton and the negative charge electron which keep the atom hold together. The number of protons is equalled to the number of negative electrons. Due to which atom is electrically neutral. Almost entire mass is in the nucleus. Most of the atom is empty space.||Bohr model of atom states that an atom is made up of three particles electron, proton, neutron. An electron has a negative charge, proton having a positive charge while the neutron has no charge. Electrons revolve in a fixed energy level donated as 1,2,3,4,5,6 or by letter K, L, M, N, O, P counted from the centre to outward.|
where to buy viagra online in canada purchase periactin Explain the Structure of an Atom (Basic structure of an Atom)
The structure has explained below:
- The atomic structure is made of smaller particles.These smaller particles are known as subatomic particles.
- The three subatomic particles are electron, proton, and neutron.
- An electron is having a negative charge, a proton has a charge of positive while neutron has no charge.
- The entire mass is in the nucleus as proton and neutrons are present in a small nucleus at the center.
- The electrons revolve around the nucleus.They have a very small mass.
- The nucleus is having a positive charge because of the presence of protons at the center.
- The electrons revolve around the nucleus at a very high speed in a fixed circular shell known as energy level or shell.
- An atom on a whole is electrically neutral it is because the number of electrons outside the nucleus is equal to the number of the proton inside the nucleus. Therefore, an atom on a whole has no positive or negative charge. So, Atom is electrically neutral.
pyridium 100 mg used strattera uk buy Atomic Structure of an Atom
- It can now be evidence at the end that atom consists of two parts: Nucleus and Extra-nuclear part.
- A nucleus is a central part where all the proton and neutrons are present.
- The negatively charged electrons revolve in a fixed orbit around the nucleus.The path is thus term as extra-nuclear part.
- Scientist now believes that In chemistry, atomic structure protons and neutrons lose their identity. They merge together to form a tiny dense ball-nucleus.
- Protons and neutrons are considered as composite particles with having three quarks each. So, they are not fundamental particles in the true sense.
- The constituents of the nucleus, both proton, and neutron, are often termed as nucleons.
- After the discovery of electron, protons, neutrons, many other important particles were also invented.These are positron, mesons, antiprotons etc.
- These particles are present within the nucleus.
- The positron or anti-electron has a charge same as an electron and has the same mass as an electron.
- Antiprotons are stable and have a short life period.
- cheap legal viagra online trental 800 mg positron: It is the positive counterpart of an electron, discovered by Anderson. On combining with electrons, produce γ-rays.
- Neutrino and antineutrino: This discovery was by scientist Fermi in 1934. Their mass is negligible and has no charge.
- π- mesons and μ- mesons: This discovery was by scientist Yukawa. Their mass is in between that of electron and proton.
Describe the Structure of an Atom
- The number of protons distinguishes the atom of one element from atoms of another element
- The atoms of the same element are the Identical in all respects, size, size, and mass.
- Atoms of various elements combine with a certain ratio to form a compound which is term as molecules.
- The atomic number of an element does not change during a chemical reaction.
- Proton and the neutrons have unit mass each, the atomic mass is numerically equal to the sum of proton and neutron.
- The atomic number of an element is donated by the letter Z.
- The atomic number tells us the number of proton in an atom of an element.
- It also tells the number of electrons in a normal atom.
- An electron has negligible mass and they have almost no contribution towards the weight of atom. However, the volume is mainly due to this part.
- The mass number is donated by the letter A.
- Hence, the mass number is the number of its nucleons.
Mass number = No. of protons + No. of neutrons.
Atomic Number and Mass Number
- In 92U238, atomic number, Z = 92 and mass number, A = 238.
- In 8O16, atomic number, Z = 8 and mass number, A =16.
This is all about the basics of What are Atoms Made (Composed of)? – Describe Atomic Structure of an Atom & History of the Atom.
if you like feel free to share with others. | <urn:uuid:4cc421c2-6b9d-4ce4-8666-009d77182514> | 3.3125 | 2,134 | Spam / Ads | Science & Tech. | 52.039948 | 95,502,119 |
How one Rural Fire Service takes protection of bark eating koalas very seriously by Di Thompson, NCC representative on...
Bushfire Volunteer, , audio, bushcare, Conferences, Cultural burning, Events, Fauna, Ferals, Flammability, Research, resources, Restoration, trials, weeds
2017 NCC Bushfire Conference Fire, Fauna & Ferals: from backyards to bush The Nature Conservation Council hosted their 11th...
Fire and Fauna ProtectionLand for Wildlife in South East Queensland has produced an informative note on the way planned fires can impact fauna. This note outlines how a range of species from the region respond to fires and provides advice for landholders managing fire on their properties. Link to the PDF file.
Does pyrodiversity really promote biodiversity? Local knowledge is important when managing fire for plant and animal conservation. An article from the Ecological Society of Australia notes:Read the full article on the ESA website.
- Some studies show that more plant and animal species live in landscapes with a high diversity of fire histories, while others show no such relationship.
- The variation in fire regimes that will promote plant and animal conservation depends on the type of ecosystem.
- Fire management will be most effective when it is guided by local knowledge of plants, animals and and the habitats they depend on.
African Lovegrass African Olive Asparagus audio Bitou bush BMAD bushcare Conferences Consortium Cultural burning Ecological burning EECs Events Fauna Featured Ferals flame weeder Flammability Grassy Woodlands how to Intervals Interview koalas Lantana mallee Monitoring Northern NSW privet Research resources Restoration Scholarship seasonailty seeding Setaria Site Support threatened species trials weeds wildfire | <urn:uuid:2964fee7-48b3-47f8-bccb-a8d97545317b> | 3.078125 | 350 | Content Listing | Science & Tech. | 17.999052 | 95,502,130 |
Anatomical features, scientists know, come and go. The animal kingdom is full of critters that have independently gained or lost similar features. Whales and snakes, for instance, have lost their legs. Winged flight evolved separately in birds, bats and pterosaurs at different times in evolutionary history.
Now, writing today (April 20, 2006) in the journal Nature, a team of scientists from the Howard Hughes Medical Institute (HHMI) at the University of Wisconsin-Madison, reveal the discovery of the molecular mechanisms that allow animals to switch genes on or off to gain or lose anatomical characteristics.
"Evolution can and does repeat itself," says Sean B. Carroll, a UW-Madison genetics professor and senior author of the new Nature report that describes how males of different fruit fly species have independently gained -- and repeatedly lost -- the wing spots that make them appealing to females.
"These spots have appeared and disappeared independently in different species at different times over the course of evolutionary history, and have been junked at least five times in one particular group," says Benjamin Prud’homme, a UW-Madison post-doctoral fellow working in Carroll’s lab and the lead author of the new study. "We have shown that each of these transitions corresponds with changes in how a certain gene is used."
The new study reveals how evolution occurs at the finest level of detail and explains the molecular mechanisms at work when animals lose or gain features. In the fruit fly, a gene known as "yellow" is responsible for the fly’s wing decoration.
"The gene is like a paintbrush," says Carroll. "But it needs instructions as to where to paint. Little switches embedded in DNA around the gene have the instructions. It is these switches that are evolving. The fly can lose a spot because of a very small change in his spot switch."
Known as "regulatory elements," the switches that govern gene activity are DNA sequences that act like toggles to turn genes on or off. Individual genes can have several switches, Carroll notes, each one devoted to controlling the gene in a different tissue or body part.
In the case of fruit flies, the changes in the switches’ activity are driven by the preferences of females. The flies meet on flowers and the male fly -- to put the female in the mood -- waves his wings and displays his conspicuous wing spots.
"Female preference is a strong force in the evolution of anatomy," explains Prud´homme. "This phenomenon -- sexual selection -- is all over the animal kingdom. It was one of Darwin’s great ideas."
Finding the same gene and the same processes at work -- molecular switch evolution -- in two distantly related species of fly is remarkable, according to Carroll, because it shows how and why evolution repeats itself.
"The funny thing is they came up with the same solution," Carroll says. "The big picture is that we are seeing the repetition of evolution -- in animals widely divergent in time and space -- at several key levels."
Sean B. Carroll | EurekAlert!
Barium ruthenate: A high-yield, easy-to-handle perovskite catalyst for the oxidation of sulfides
16.07.2018 | Tokyo Institute of Technology
The secret sulfate code that lets the bad Tau in
16.07.2018 | American Society for Biochemistry and Molecular Biology
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
16.07.2018 | Physics and Astronomy
16.07.2018 | Life Sciences
16.07.2018 | Earth Sciences | <urn:uuid:0d0629b1-77a1-4e15-a37d-80f4370a05fc> | 3.578125 | 1,276 | Content Listing | Science & Tech. | 41.441651 | 95,502,142 |
As sample quality and quantity is a crucial factor in non-invasive genetics, we focused on the improvement of sampling efficiency of glue hair traps. We invented an optimized hair trap with moveable parts which enhanced sampling of high-quality genetic material. With the aid of the optimized hair trap, we were able to remotely pluck a sufficient amount of hair bulbs from our study animal the common hamster (Cricetus cricetus) with a trapping success of 49.3% after one survey night. The number of collected hairs with bulbs ranged between 1 and 50, with an average of 20.7 +/- 14.8. Subsequently, the use of the hair trap in combination with a simplified laboratory routine allowed us to amplify species-specific microsatellites with an amplification success of 96.2% and ADO of 4.6%. This optimized trap may find usage for species identification or could be used as an instrument for long-term genetic monitoring of mammal populations.
Mendeley saves you time finding and organizing research
Choose a citation style from the tabs below | <urn:uuid:d439da6f-4ea9-4e0a-9f34-21d3015df035> | 2.890625 | 216 | Academic Writing | Science & Tech. | 44.208416 | 95,502,158 |
Species Detail - Single-dotted Wave (Idaea dimidiata) - Species information displayed is based on all datasets.
Terrestrial Map - 10kmDistribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50kmDistribution of the number of records recorded within each 50km grid square (WGS84).
insect - moth
21 May (recorded in 2006)
23 September (recorded in 2006)
National Biodiversity Data Centre, Ireland, Single-dotted Wave (Idaea dimidiata), accessed 18 July 2018, <https://maps.biodiversityireland.ie/Species/78693> | <urn:uuid:6349c6ff-1890-4354-8671-ce37b19c2f57> | 2.625 | 144 | Structured Data | Science & Tech. | 38.484519 | 95,502,171 |
IBS researchers report fundamental study of how graphene is hydrogenated
Adding hydrogen to graphene could improve its future applicability in the semiconductor industry, when silicon leaves off. Researchers at the Center for Multidimensional Carbon Materials (CMCM) (http://cmcm.ibs.re.kr/html/cmcm_en/), within the Institute for Basic Science (IBS) have recently gained further insight into this chemical reaction. Published in Journal of the American Chemical Society, these findings extend the knowledge of the fundamental chemistry of graphene and bring scientists perhaps closer to realizing new graphene-based materials.
Understanding how graphene can chemically react with a variety of chemicals will increase its utility. Indeed, graphene has superior conductivity properties, but it cannot be directly used as an alternative to silicon in semiconductor electronics because it does not have a bandgap, that is, its electrons can move without climbing any energy barrier. Hydrogenation of graphene opens a bandgap in graphene, so that it might serve as a semiconductor component in new devices.
▲ Hydrogenation (in red) of bilayer graphene via Birch-type reaction begins from the edges.
The images show a graphene flake before (a), two minutes (b), and eight minutes (c), after exposure to a solution of lithium and liquid ammonia (Birch-type reaction).
Graphene gets gradually hydrogenated starting from the edges. (Reprinted with permission from Zhang X et al, JACS, Copyright 2016 American Chemical Society)
While other reports describe the hydrogenation of bulk materials, this study focuses on hydrogenation of single and few-layers thick graphene. IBS scientists used a reaction based on lithium dissolved in ammonia, called the "Birch-type reaction", to introduce hydrogen onto graphene through the formation of C-H bonds.
The research team discovered that hydrogenation proceeds rapidly over the entire surface of single-layer graphene, while it proceeds slowly and from the edges in few-layer graphene. They also showed that defects or edges are actually necessary for the reaction to occur under the conditions used, because pristine graphene with the edges covered in gold does not undergo hydrogenation.
Using bilayer and trilayer graphene, IBS scientists also discovered that the reagents can pass between the layers, and hydrogenate each layer equally well. Finally, the scientists found that the hydrogenation significantly changed the optical and electric properties of the graphene.
"A primary goal of our Center is to undertake fundamental studies about reactions involving carbon materials. By building a deep understanding of the chemistry of single-layer graphene and a few layer graphene, I am confident that many new applications of chemically functionalized graphenes could be possible, in electronics, photonics, optoelectronics, sensors, composites, and other areas," notes Rodney Ruoff, corresponding author of this paper, CMCM director, and UNIST Distinguished Professor at the Ulsan National Institute of Science and Technology (UNIST).
Xu Zhang, Yuan Huang, Shanshan Chen, Na Yeon Kim, Wontaek Kim, David Schilter, Mandakini Biswal, Baowen Li, Zonghoon Lee, Sunmin Ryu, Christopher W. Bielawski, Wolfgang S. Bacsa & Rodney S. Ruoff. Birch-type hydrogenation of few-layer graphenes: products and mechanistic implications. J. Am. Chem. Soc. (2016), DOI: 10.1021/jacs.6b08625
- Media Contact
For further information or to request media assistance, please contact: Mr. Shi Bo Shim, Head of Department of Communications, Institute for Basic Science (+82-42-878-8189, firstname.lastname@example.org); Ms. Carol Kim, Global Officer, Department of Communications, Institute for Basic Science (+82-42-878-8133, email@example.com) or Letizia Diamante, Science Writer and Visual Producer (+82-42-878-8260, firstname.lastname@example.org)
- About the Institute for Basic Science (IBS)
IBS was founded in 2011 by the government of the Republic of Korea with the sole purpose of driving forward the development of basic science in South Korea It comprises a total of 50 research centers in all fields of basic science, including mathematics, physics, chemistry, life science, earth science and interdisciplinary science. IBS has launched 26 research centers as of September 2016. There are eight physics, one mathematics, six chemistry, eight life science, and three interdisciplinary research centers. | <urn:uuid:99f7676a-25b6-4c40-a55f-0d0ccb1e94d8> | 3.59375 | 959 | News (Org.) | Science & Tech. | 31.422339 | 95,502,181 |
Species Detail - Knot Grass (Acronicta rumicis) - Species information displayed is based on all datasets.
Terrestrial Map - 10kmDistribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50kmDistribution of the number of records recorded within each 50km grid square (WGS84).
insect - moth
26 April (recorded in 2010)
15 September (recorded in 2017)
National Biodiversity Data Centre, Ireland, Knot Grass (Acronicta rumicis), accessed 19 July 2018, <https://maps.biodiversityireland.ie/Species/79089> | <urn:uuid:7256f303-a919-4ac4-bfb4-67e70711b1e5> | 2.5625 | 142 | Structured Data | Science & Tech. | 40.458 | 95,502,182 |
A bedform is a depositional feature whose genesis is through the action of a forcing fluid (e.g., air or water) resulting in the movement of the granular material, manifesting into characteristic surface morphology indicative of the flow parameters in operation.
Deposits of sand and granule-size particles transported principally in saltation in flowing fluids organized into a regularly repeated pattern which forms on a solid surface because of the shearing action of a fluid. Bedforms can be aeolian or subaqueous, erosional, or depositional surface features. Fluids in which they form can be wind, water stream, or dense mixtures of particles and gases (e.g., volcanic or impact-induced base surges) (Wilson 1972; Greeley et al. 2006).
KeywordsFlow Depth Sand Wave Current Ripple Boundary Shear Stress Subaqueous Dune
- Allen JRL (1968) Current ripples. North-Holland, Amsterdam, p 433Google Scholar
- Allen JRL (1982) Sedimentary structures: their character and physical basis, developments in sedimentology, vol 30. Elsevier Science, Amsterdam, p 593Google Scholar
- Belderson RH, Johnson MA, Kenyon HN (1982) Bedforms. In: Stride AH (ed) Offshore tidal sands, processes and deposits. Chapman and Hall/Springer, London, pp 27–57Google Scholar
- Exner FM (1920) Zur physik der dünen. Akad Wiss Wien Math Naturwiss Klasse 129(2a):929–952Google Scholar
- Greeley R et al (2006) Gusev crater: wind-related features and processes observed by the Mars exploration rover spirit. J Geophys Res 111:E02S09. doi:10.1029/2005JE002491Google Scholar
- Hunter RE (1977) Terminology of cross-stratified sedimentary layer and climbing ripple structures. J Sediment Petrol 47:697–706Google Scholar
- Reading HG (1978) Sedimentary environments and facies. Elsevier, New YorkGoogle Scholar
- Rubin DW, Carter CL (2006) Cross-bedding, bedforms and paleocurrents, vol 1, 2nd edn, Concepts in sedimentology and paleontology. SEMP Society for Sedimentary Geology, Tulsa, p 187. ISBN 1-56576-128-6Google Scholar
- Tucker ME (2009) Sedimentary petrology: an introduction to the origin of sedimentary rocks. Wiley, Chichester, p 272Google Scholar
- Zimbelman JR, Williams SH (2008) Inferences about sand dunes on Mars derived from the analysis of two HiRISE images. Lunar Planet Sci Conf XXXIX, abstract #1699, HoustonGoogle Scholar | <urn:uuid:8724633d-789a-4942-8e59-67f390bec46e> | 3.390625 | 588 | Academic Writing | Science & Tech. | 43.60413 | 95,502,184 |
Search For A Black Hole - Earth Lab
In developing a Telescope which could finally reach the Galactic Center, Rheinhard Genzel's research has uncovered data which can only point to the existence of a Black Hole. Subscribe to Earth Lab for more fascinating science videos - http://bit.ly/SubscribeToEarthLab
Taken From Horizon: Who's Afraid Of A Big Black Hole
Watch more videos from Earth Lab:
Earth Lab Originals http://bit.ly/EarthLabOriginals
Best Of BBC Earth Videos http://bit.ly/TheBestOfBBCEarthVideos
The Doctors Are In The House http://bit.ly/TheDoctorsAreInTheHouse
Best Of Earth Unplugged Videos http://bit.ly/BestOfEarthUnpluggedVideos
Check out the other two channels in the BBC Earth network:
BBC Earth: http://bit.ly/BBCEarthYouTubeChannel
BBC Earth Unplugged: http://bit.ly/BBCEarthUnplugged
About BBC Earth Lab:
Welcome to BBC Earth Lab! Always wanted to know What the world’s strongest material ? Why trains can’t go uphill? Or How big our solar system really is? Well you’ve come to the right place. Here at BBC Earth Lab we answer all your curious questions about science in the world around you (and further afield too).
As well as our Earth Lab originals we'll also bring you the best science clips from the BBC archive including Forces of Nature with Brian Cox, James May's Things You Need To Know and plenty to keep the Docs away with Trust Me I’m A Doctor.
And if there’s a question you have that we haven’t yet answered let us know in the comments on any of our videos and it could be answered by one of our Earth Lab experts.
Subscribe for more: http://bit.ly/SubscribeToEarthLab
You can also find the BBC Earth community on Facebook, Twitter and Instagram.
Want to share your views with the team behind BBC Earth and win prizes? Join our fan panel here: https://tinyurl.com/YouTube-BBCEarth-FanPanel
This is a channel from BBC Worldwide who help fund new BBC programmes. | <urn:uuid:d455e0b9-d780-4d7a-8613-ff43f2558edf> | 3.09375 | 467 | Content Listing | Science & Tech. | 69.749177 | 95,502,185 |
why does light travels in a straight line
A straight line is 'In the eye of the beholder'. As far as light is concerned it travels in a straight line from point A to point B. However, for a distant observer the trajectory may be a bit curved. The reason is that the geometry of space is a bit warped near a massive gravitational source like a black hole or even the sun. One example of that is the verification of Einstein's theory of light from a distant star being bent by the sun's gravitational potential.
This was observed in 1919 during an eclipse of the sun. The general phenomenon is called 'Geodesic lines in curved spaces'. As a simple example in our spherical geometry of the earth, the latitudes of New York and Rome are very similar. But the shortest distance between them for an airplane is not on a direct east-west route but to travel a bit to the north-east for a while then curve back south.
If you stretch a string on a globe between these two points you find that the optimum flight trajectory takes you about 10 degrees north of the direct east-west flight path. Gravity also acts as a distortion of space, however the mathematics is a bit more complicated. (published on 11/20/2010)
When you have a light source like a candle, light travels in many straight lines in all directions.
Each light particle* travels in a straight line in a different direction. But there are so many of them that you can not perceive individual particles, so it seems like the light spreads uniformly. A light source where all light is emitted in the same direction is a. But there is one thing Isaac Newton wasn't aware of: Gravity can bend light (or rather the space through which the light travels) which causes light to travel in curves.
For more information, search for. Further, there are things like, and which cause light to change direction (but not really travel in curves. The straight line just makes a sudden change in direction) No, I am not going to go into wave-particle duality here
- Views: 570
why does light travel in straight lines
why do we have gravity on earth
why do shadows change length during the day
why do the stars appear to rotate around polaris
why is there a shadow on the moon
why do shadows move during the day
why light cannot escape a black hole | <urn:uuid:d49cc8df-1a19-4778-9195-89a04cc24d1f> | 3.75 | 489 | Personal Blog | Science & Tech. | 55.7825 | 95,502,198 |
This volume provides a comprehensive coverage of the principal extreme soil ecosystems of natural and anthropogenic origin. Extreme soils oppose chemical or physical limits to colonization by most soil organisms and present the microbiologist with exciting opportunities. Described here are fascinating environments, such as permafrost, saline, arid and geothermal soils, peatlands, subsurface geomaterial rich in sulfidic ore, Martian soils, hydrocarbon-contaminated hot desert and Antarctic soils, as well as fire-impacted, heavy-metal and radionuclide contaminated soils. Those environments lend themselves both to timely descriptions of colonizing organisms and their activities, and to thoughtful examination of community structure and microbial evolution. Extreme soils provide invaluable examples of microbial adaptations in coping with hostile habitats. Being home to a remarkable diversity, they are ideal models for scientific exploration and propose solutions to biotechnology and bioremediation challenges. | <urn:uuid:af31fabc-0211-4d9b-93a7-8cb2eb3fb70d> | 3.171875 | 182 | Truncated | Science & Tech. | -15.909167 | 95,502,200 |
Comment on Tutorial - FileReader and FileWriter example program in Java By Tamil Selvan
Comment Added by : Ramesh
Comment Added at : 2012-07-21 11:09:53
Comment on Tutorial : FileReader and FileWriter example program in Java By Tamil Selvan
Can any help me to get Second line,Second word in some file,by using the FileReader class methods.
Assume that ex.txt file is there :
hai how are you?
im Ramesh from bec.
i need output : Ramesh
- Data Science
- Cloud Computing
- Java Beans
- Mac OS X
- Office 365
- Tech Reviews
Subscribe to Tutorials
3. Thanks for the solution. I must say however that t
View Tutorial By: Pierre at 2009-11-19 20:16:35
4. HI your explanation is absolutely correct but I am
View Tutorial By: Anil R. Chinchawade at 2010-07-28 02:02:01
8. This solve my problem to use toString() method. Th
View Tutorial By: Simon at 2009-07-31 08:56:36
10. i am new to java programming plz please provide me
View Tutorial By: Mahammad musthaq at 2012-05-31 14:02:10 | <urn:uuid:ab7aac09-ec3d-4a1d-a8c5-0dc8fd448393> | 2.53125 | 279 | Comment Section | Software Dev. | 75.227078 | 95,502,210 |
We examine the importance of dispersed volcanic ash as a critical component of the aluminosilicate sediment entering the Nankai Trough, located south of Japan’s island of Honshu, via the subducting Philippine Sea plate. Multivariate statistical analyses of an extensive major, trace, and rare earth element data set from bulk sediment and discrete ash layers at Integrated Ocean Drilling Program (IODP) Sites C0011 and C0012 quantitatively determine the abundance and accumulation of multiple aluminosilicate inputs to the Nankai subduction zone. We identify the eolian input of continental material to both sites, and we further find that there are an additional three ash sources from Kyushu and Honshu, Japan and other regions. Some of these ash sources may themselves represent mixtures of ash inputs, although the final compositions appear statistically distinct. The dispersed ash comprises 38 ± 7 weight percent (wt%) of the bulk sediment at Site C0011, and 34 ± 4 wt% at Site C0012. When considering the entire sediment thickness at Site C0011, the dispersed ash component supplies 38000 ± 7000 g/cm2 of material to the Nankai subduction system, whereas Site C0012 supplies 20000 ± 3000 g/cm2. These values are enormous compared to the ~2500 g/cm2 (C0011) and ~1200 g/cm2 (C0012) of ash in the discrete ash layers. Therefore, the mass of volcanic ash and chemically equivalent alteration products (e.g., smectite) that are dispersed throughout the stratigraphic succession of bulk sediment appears to be up to 15–17 times greater than the mass of discrete ash layers. The composition of the dispersed ash component at Site C0011 appears linked to that of the discrete layers, and the mass accumulation rate for dispersed ash correlates best with discrete ash layer thickness. In contrast, at Site C0012 the mass accumulation rate for dispersed ash correlates better with the number of ash layers. Together, the discrete ash layers, dispersed ash, and clay-mineral assemblages present a complete record of volcanism and erosion of volcanic sources; and indicate that mass balances and subduction factory budgets should include the mass of dispersed ash for a more accurate assessment of volcanic contributions to large-scale geochemical cycling.
Sedimentary inputs to the Nankai subduction zone: The importance of dispersed ash
- Views Icon Views
- PDF LinkPDF
Rachel P. Scudder, Richard W. Murray, Steffen Kutterolf, Julie C. Schindlbeck, Michael B. Underwood, Kuo-Lung Wang; Sedimentary inputs to the Nankai subduction zone: The importance of dispersed ash. Geosphere doi: https://doi.org/10.1130/GES01558.1
Download citation file:
- Share Icon Share | <urn:uuid:da7fb6bd-b4dd-452a-8667-524054b94b1c> | 2.546875 | 595 | Academic Writing | Science & Tech. | 38.26413 | 95,502,227 |
Washington: Scientists have formulated a new algorithm to identify minor earthquakes which previously remained unknown the large ground motion measurement databases.
Even though microquakes are not life-threatening and don’t affect buildings or property much, their observation could help scientists better identify where bigger earthquakes could strike.
The algorithm, which is inspired by a popular song matching app called ‘Shazam’, is called Fingerprint and Similarity Thresholding, or FAST. Smaller quakes, which are not detected using conventional methods, and don’t even register as earthquakes, can be detected through this technology.
However, to utilize this process, scientists need to know what signal they are looking for beforehand. The technique is also a time-consuming one.
Places such as Oklahoma and Arkansas could greatly benefit from this technology, as they have seen a rise in the number of minor quakes, which have been linked to hydraulic fracturing, or ‘fracking’. FAST could help properly identify risk areas.
The FAST technology needs to be tested over long time periods with a number of seismic stations to effectively predict when and where bigger quakes would strike.
India is developing critical technologies for launching manned missions in space and preparing a document on it, a top official said on Saturday.
“Critical technologies are being developed for our human space programme as it is India’s dream to put a man in space. A mission document is in the making,” Indian Space Research Organisation (ISRO) Chairman K. Sivan told the media at an aerospace event here.
Citing the space agency’s successful maiden unmanned pad abort test on Thursday at its Sriharikota spaceport in Andhra Pradesh for the safe escape of the crew in an emergency, Sivan said that very complex technology was used for the trial, with a unique motor for fast-burning.
“The technology is very essential for our manned missions in the future, as the motor’s performance was very good. Using aerodynamics, the module was turned in a favourable direction to open the parachutes,” he said.
The state-run ISRO’s technology demonstrator is the first in a series of tests to qualify as a crew escape system, critical for a manned mission.
“We are only in the preparation stage. We need to develop much more. We are in the process of refining a document on the manned mission for review and interactions with stakeholders, including the Indian Air Force (IAF) and Hindustan Aeronautics Ltd (HAL),” said Sivan.
The crew escape system is an emergency escape measure designed to quickly pull the crew module along with the astronauts to a safe distance from the launch vehicle in the event of a launch abort.
The first pad abort test demonstrated the safe recovery of the crew module in case of any exigency at the launch pad,” ISRO said in a statement earlier.
Admitting that the scientists had to work on the next strategy for the manned mission testing, Sivan said ISRO’s work was two-pronged, with one on approved projects and the other for research and development (R&D).
“The pad abort test for the crew escape system is part of our R&D work,” he noted. The space agency also tested five new technologies during the pad abort test, as part of its strategy to develop long-term technologies.
“We and the government work on a three-year plan, with a seven-year strategy and a 15-year vision,” asserted Sivan.
Noting that space tourism would happen in the near future, the rocket scientist said it would take at least 15 years to develop the vehicle to go to space and return to the earth.
“We are not close to that. We need to work a lot towards achieving the dream of putting a man into space,” added Sivan.
After a five-hour countdown, the crew escape system lifted off with the 12.6 tonne simulated crew module from the spaceport and plunged into the sea (Bay of Bengal) 4 minutes and 19 seconds later with two parachutes, around 2.9 km away from Sriharikota, about 90km northeast of Chennai. | <urn:uuid:f0a3f257-817d-4af9-a43b-b1ee677122c7> | 3.125 | 889 | News Article | Science & Tech. | 42.387512 | 95,502,235 |
Bayesian probability(Redirected from Epistemic probability)
Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.
The Bayesian interpretation of probability can be seen as an extension of propositional logic that enables reasoning with hypotheses, i.e., the propositions whose truth or falsity is uncertain. In the Bayesian view, a probability is assigned to a hypothesis, whereas under frequentist inference, a hypothesis is typically tested without being assigned a probability.
Bayesian probability belongs to the category of evidential probabilities; to evaluate the probability of a hypothesis, the Bayesian probabilist specifies some prior probability, which is then updated to a posterior probability in the light of new, relevant data (evidence). The Bayesian interpretation provides a standard set of procedures and formulae to perform this calculation.
The term Bayesian derives from the 18th century mathematician and theologian Thomas Bayes, who provided the first mathematical treatment of a non-trivial problem of statistical data analysis, now known as the Bayesian inference.:131 Mathematician Pierre-Simon Laplace pioneered and popularised what is now called Bayesian probability.:97–98
Broadly speaking, there are two views on Bayesian probability that interpret the probability concept in different ways. According to the objectivist view, probability is a reasonable expectation that represents the state of knowledge, can be interpreted as an extension of logic, and its rules can be justified by Cox's theorem. According to the subjectivist view, probability quantifies a personal belief, and its rules can be justified by requirements of rationality and coherence following from the Dutch book argument or from the decision theory and de Finetti's theorem.
Bayesian methods are characterized by concepts and procedures as follows:
- The use of random variables, or more generally unknown quantities, to model all sources of uncertainty in statistical models including uncertainty resulting from lack of information (see also aleatoric and epistemic uncertainty).
- The need to determine the prior probability distribution taking into account the available (prior) information.
- The sequential use of Bayes' formula: when more data become available, calculate the posterior distribution using Bayes' formula; subsequently, the posterior distribution becomes the next prior.
- While for the frequentist a hypothesis is a proposition (which must be either true or false), so that the frequentist probability of a hypothesis is either 0 or 1, in Bayesian statistics the probability that can be assigned to a hypothesis can also be in a range from 0 to 1 if the truth value is uncertain.
Objective and subjective Bayesian probabilitiesEdit
Broadly speaking, there are two interpretations on Bayesian probability. For objectivists, interpreting probability as extension of logic, probability quantifies the reasonable expectation everyone (even a "robot") sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox's theorem. For subjectivists, probability corresponds to a personal belief. Rationality and coherence allow for substantial variation within the constraints they pose; the constraints are justified by the Dutch book argument or by the decision theory and de Finetti's theorem. The objective and subjective variants of Bayesian probability differ mainly in their interpretation and construction of the prior probability.
The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem in a paper titled "An Essay towards solving a Problem in the Doctrine of Chances". In that special case, the prior and posterior distributions were Beta distributions and the data came from Bernoulli trials. It was Pierre-Simon Laplace (1749–1827) who introduced a general version of the theorem and used it to approach problems in celestial mechanics, medical statistics, reliability, and jurisprudence. Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called "inverse probability" (because it infers backwards from observations to parameters, or from effects to causes). After the 1920s, "inverse probability" was largely supplanted by a collection of methods that came to be called frequentist statistics.
In the 20th century, the ideas of Laplace developed in two directions, giving rise to objective and subjective currents in Bayesian practice. Harold Jeffreys' Theory of Probability (first published in 1939) played an important role in the revival of the Bayesian view of probability, followed by works by Abraham Wald (1950) and Leonard J. Savage (1954). The adjective Bayesian itself dates to the 1950s; the derived Bayesianism, neo-Bayesianism is of 1960s coinage. In the objectivist stream, the statistical analysis depends on only the model assumed and the data analysed. No subjective decisions need to be involved. In contrast, "subjectivist" statisticians deny the possibility of fully objective analysis for the general case.
In the 1980s there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods and the consequent removal of many of the computational problems, and to an increasing interest in nonstandard, complex applications. While frequentist statistics remains strong (as seen by the fact that most undergraduate teaching is still based on it ), Bayesian methods are widely accepted and used, e.g., in the field of machine learning.
Justification of Bayesian probabilitiesEdit
The use of Bayesian probabilities as the basis of Bayesian inference has been supported by several arguments, such as Cox axioms, the Dutch book argument, arguments based on decision theory and de Finetti's theorem.
Richard T. Cox showed that Bayesian updating follows from several axioms, including two functional equations and a hypothesis of differentiability. The assumption of differentiability or even continuity is controversial; Halpern found a counterexample based on his observation that the Boolean algebra of statements may be finite. Other axiomatizations have been suggested by various authors with the purpose of making the theory more rigorous.
Dutch book approachEdit
The Dutch book argument was proposed by de Finetti; it is based on betting. A Dutch book is made when a clever gambler places a set of bets that guarantee a profit, no matter what the outcome of the bets. If a bookmaker follows the rules of the Bayesian calculus in the construction of his odds, a Dutch book cannot be made.
However, Ian Hacking noted that traditional Dutch book arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. For example, Hacking writes "And neither the Dutch book argument, nor any other in the personalist arsenal of proofs of the probability axioms, entails the dynamic assumption. Not one entails Bayesianism. So the personalist requires the dynamic assumption to be Bayesian. It is true that in consistency a personalist could abandon the Bayesian model of learning from experience. Salt could lose its savour."
In fact, there are non-Bayesian updating rules that also avoid Dutch books (as discussed in the literature on "probability kinematics" following the publication of Richard C. Jeffreys' rule, which is itself regarded as Bayesian). The additional hypotheses sufficient to (uniquely) specify Bayesian updating are substantial and not universally seen as satisfactory.
Decision theory approachEdit
A decision-theoretic justification of the use of Bayesian inference (and hence of Bayesian probabilities) was given by Abraham Wald, who proved that every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures. Conversely, every Bayesian procedure is admissible.
Personal probabilities and objective methods for constructing priorsEdit
Following the work on expected utility theory of Ramsey and von Neumann, decision-theorists have accounted for rational behavior using a probability distribution for the agent. Johann Pfanzagl completed the Theory of Games and Economic Behavior by providing an axiomatization of subjective probability and utility, a task left uncompleted by von Neumann and Oskar Morgenstern: their original theory supposed that all the agents had the same probability distribution, as a convenience. Pfanzagl's axiomatization was endorsed by Oskar Morgenstern: "Von Neumann and I have anticipated" the question whether probabilities "might, perhaps more typically, be subjective and have stated specifically that in the latter case axioms could be found from which could derive the desired numerical utility together with a number for the probabilities (cf. p. 19 of The Theory of Games and Economic Behavior). We did not carry this out; it was demonstrated by Pfanzagl ... with all the necessary rigor".
Ramsey and Savage noted that the individual agent's probability distribution could be objectively studied in experiments. The role of judgment and disagreement in science has been recognized since Aristotle and even more clearly with Francis Bacon. The objectivity of science lies not in the psychology of individual scientists, but in the process of science and especially in statistical methods, as noted by C. S. Peirce. Recall that the objective methods for falsifying propositions about personal probabilities have been used for a half century, as noted previously. Procedures for testing hypotheses about probabilities (using finite samples) are due to Ramsey (1931) and de Finetti (1931, 1937, 1964, 1970). Both Bruno de Finetti and Frank P. Ramsey acknowledge their debts to pragmatic philosophy, particularly (for Ramsey) to Charles S. Peirce.
The "Ramsey test" for evaluating probability distributions is implementable in theory, and has kept experimental psychologists occupied for a half century. This work demonstrates that Bayesian-probability propositions can be falsified, and so meet an empirical criterion of Charles S. Peirce, whose work inspired Ramsey. (This falsifiability-criterion was popularized by Karl Popper.)
Modern work on the experimental evaluation of personal probabilities uses the randomization, blinding, and Boolean-decision procedures of the Peirce-Jastrow experiment. Since individuals act according to different probability judgments, these agents' probabilities are "personal" (but amenable to objective study).
Personal probabilities are problematic for science and for some applications where decision-makers lack the knowledge or time to specify an informed probability-distribution (on which they are prepared to act). To meet the needs of science and of human limitations, Bayesian statisticians have developed "objective" methods for specifying prior probabilities.
Indeed, some Bayesians have argued the prior state of knowledge defines the (unique) prior probability-distribution for "regular" statistical problems; cf. well-posed problems. Finding the right method for constructing such "objective" priors (for appropriate classes of regular problems) has been the quest of statistical theorists from Laplace to John Maynard Keynes, Harold Jeffreys, and Edwin Thompson Jaynes. These theorists and their successors have suggested several methods for constructing "objective" priors (Unfortunately, it is not clear how to assess the relative "objectivity" of the priors proposed under these methods):
Each of these methods contributes useful priors for "regular" one-parameter problems, and each prior can handle some challenging statistical models (with "irregularity" or several parameters). Each of these methods has been useful in Bayesian practice. Indeed, methods for constructing "objective" (alternatively, "default" or "ignorance") priors have been developed by avowed subjective (or "personal") Bayesians like James Berger (Duke University) and José-Miguel Bernardo (Universitat de València), simply because such priors are needed for Bayesian practice, particularly in science. The quest for "the universal method for constructing priors" continues to attract statistical theorists.
Thus, the Bayesian statistician needs either to use informed priors (using relevant expertise or previous data) or to choose among the competing methods for constructing "objective" priors.
- Bertrand paradox—a paradox in classical probability
- De Finetti's game—a procedure for evaluating someone's subjective probability
- QBism—an interpretation of quantum mechanics based on subjective Bayesian probability
- An Essay towards solving a Problem in the Doctrine of Chances
- Monty Hall problem
- Cox, R. T. (1946). "Probability, Frequency, and Reasonable Expectation". American Journal of Physics. 14: 1–10. Bibcode:1946AmJPh..14....1C. doi:10.1119/1.1990764.
- Jaynes, E.T. (1986). "Bayesian Methods: General Background". In Justice, J. H. Maximum-Entropy and Bayesian Methods in Applied Statistics. Cambridge: Cambridge University Press.
- de Finetti, Bruno (2017). Theory of Probability: A critical introductory treatment. Chichester: John Wiley & Sons ltd. ISBN 9781119286370.
- Paulos, John Allen (5 August 2011). "The Mathematics of Changing Your Mind [by Sharon Bertsch McGrayne]". Book Review. New York Times. Retrieved 2011-08-06.
- Stigler, Stephen M. (March 1990). The history of statistics. Harvard University Press. ISBN 9780674403413.
- Cox, Richard T. (1961). The algebra of probable inference ([Repr.]. ed.). Baltimore, Md. : London: Johns Hopkins Press ; Oxford University Press [distributor]. ISBN 9780801869822.
- Dupré, Maurice J., Tipler, Frank J. New Axioms For Rigorous Bayesian Probability, Bayesian Analysis (2009), Number 3, pp. 599–606
- McGrayne, Sharon Bertsch. (2011). The Theory That Would Not Die, p. 10., p. 10, at Google Books
- Stigler, Stephen M. (1986) The history of statistics. Harvard University press. Chapter 3.
- Fienberg, Stephen. E. (2006) When did Bayesian Inference become "Bayesian"? Archived September 10, 2014, at the Wayback Machine. Bayesian Analysis, 1 (1), 1–40. See page 5.
- "The works of Wald, Statistical Decision Functions (1950) and Savage, The Foundation of Statistics (1954) are commonly regarded starting points for current Bayesian approaches"; "Recent developments of the so-called Bayesian approach to statistics" Marshall Dees Harris, Legal-economic research, University of Iowa. Agricultural Law Center (1959), p. 125 (fn. 52); p. 126. "This revolution, which may or may not succeed, is neo-Bayesianism. Jeffreys tried to introduce this approach, but did not succeed at the time in giving it general appeal." Annals of the Computation Laboratory of Harvard University 31 (1962), p. 180. "It is curious that even in its activities unrelated to ethics, humanity searches for a religion. At the present time, the religion being 'pushed' the hardest is Bayesianism." Oscar Kempthorne, 'The Classical Problem of Inference—Goodness of Fit', Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (1967), p. 235.
- Bernardo, J.M. (2005). "Reference analysis". Handbook of statistics. 25: 17–90.
- Wolpert, R.L. (2004) A conversation with James O. Berger, Statistical science, 9, 205–218
- Bernardo, José M. (2006) A Bayesian mathematical statistics primer. ICOTS-7
- Bishop, C.M. Pattern Recognition and Machine Learning. Springer, 2007
- Halpern, J. A counterexample to theorems of Cox and Fine, Journal of Artificial Intelligence Research, 10: 67–85.
- Hacking (1967, Section 3, page 316), Hacking (1988, page 124)
- Skyrms, Brian (1987-01-01). "Dynamic Coherence and Probability Kinematics". Philosophy of Science. 54 (1): 1–20. doi:10.1086/289350. JSTOR 187470.
- "Bayes' Theorem". stanford.edu. Retrieved 2016-03-21.
- Fuchs, Christopher A.; Schack, Rüdiger (2012-01-01). Ben-Menahem, Yemima; Hemmo, Meir, eds. Probability in Physics. The Frontiers Collection. Springer Berlin Heidelberg. pp. 233–247. arXiv: . doi:10.1007/978-3-642-21329-8_15. ISBN 9783642213281.
- van Frassen, B. (1989) Laws and Symmetry, Oxford University Press. ISBN 0-19-824860-1
- Wald, Abraham. Statistical Decision Functions. Wiley 1950.
- Bernardo, José M., Smith, Adrian F.M. Bayesian Theory. John Wiley 1994. ISBN 0-471-92416-4.
- Pfanzagl (1967, 1968)
- Morgenstern (1976, page 65)
- Stigler, Stephen M. (1978). "Mathematical statistics in the early States". Annals of Statistics. 6 (March): 239–265 esp. p. 248. doi:10.1214/aos/1176344123. JSTOR 2958876. MR 0483118.
- Galavotti, Maria Carla (1989-01-01). "Anti-Realism in the Philosophy of Probability: Bruno de Finetti's Subjectivism". Erkenntnis (1975-). 31 (2/3): 239–261. doi:10.1007/bf01236565. JSTOR 20012239.
- Galavotti, Maria Carla (1991-12-01). "The notion of subjective probability in the work of Ramsey and de Finetti". Theoria. 57 (3): 239–259. doi:10.1111/j.1755-2567.1991.tb00839.x. ISSN 1755-2567.
- Dokic, Jérôme; Engel, Pascal (2003). Frank Ramsey: Truth and Success. Routledge. ISBN 9781134445936.
- Davidson et al. (1957)
- "Karl Popper" in Stanford Encyclopedia of Philosophy
- Popper, Karl. (2002) The Logic of Scientific Discovery 2nd Edition, Routledge ISBN 0-415-27843-0 (Reprint of 1959 translation of 1935 original) Page 57.
- Peirce & Jastrow (1885)
- Bernardo, J. M. (2005). Reference Analysis. Handbook of Statistics 25 (D. K. Dey and C. R. Rao eds). Amsterdam: Elsevier, 17–90
- Berger, James O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics (Second ed.). Springer-Verlag. ISBN 0-387-96098-8.
- Bessière, Pierre; Mazer, E.; Ahuacatzin, J-M; Mekhnacha, K. (2013). Bayesian Programming. CRC Press. ISBN 9781439880326.
- Bernardo, José M.; Smith, Adrian F. M. (1994). Bayesian Theory. Wiley. ISBN 0-471-49464-X.
- Bickel, Peter J.; Doksum, Kjell A. (2001). Mathematical statistics, Volume 1: Basic and selected topics (Second (updated printing 2007) of the Holden-Day 1976 ed.). Pearson Prentice–Hall. ISBN 0-13-850363-X. MR 0443141.
- Davidson, Donald; Suppes, Patrick; Siegel, Sidney (1957). Decision-Making: An Experimental Approach. Stanford University Press.
- de Finetti, Bruno. "Probabilism: A Critical Essay on the Theory of Probability and on the Value of Science," (translation of 1931 article) in Erkenntnis, volume 31, September 1989.
- de Finetti, Bruno (1937) "La Prévision: ses lois logiques, ses sources subjectives," Annales de l'Institut Henri Poincaré,
- de Finetti, Bruno. "Foresight: its Logical Laws, Its Subjective Sources," (translation of the 1937 article in French) in H. E. Kyburg and H. E. Smokler (eds), Studies in Subjective Probability, New York: Wiley, 1964.
- de Finetti, Bruno (1974–5). Theory of Probability. A Critical Introductory Treatment, (translation by A.Machi and AFM Smith of 1970 book) 2 volumes. Wiley ISBN 0-471-20141-3, ISBN 0-471-20142-1
- DeGroot, Morris (2004) Optimal Statistical Decisions. Wiley Classics Library. (Originally published 1970.) ISBN 0-471-68029-X.
- Hacking, Ian (December 1967). "Slightly More Realistic Personal Probability". Philosophy of Science. 34 (4): 311–325. doi:10.1086/288169. JSTOR 186120. Partly reprinted in: Gärdenfors, Peter and Sahlin, Nils-Eric. (1988) Decision, Probability, and Utility: Selected Readings. 1988. Cambridge University Press. ISBN 0-521-33658-9
- Hajek, A. and Hartmann, S. (2010): "Bayesian Epistemology", in: Dancy, J., Sosa, E., Steup, M. (Eds.) (2001) A Companion to Epistemology, Wiley. ISBN 1-4051-3900-5 Preprint
- Hald, Anders (1998). A History of Mathematical Statistics from 1750 to 1930. New York: Wiley. ISBN 0-471-17912-4.
- Hartmann, S. and Sprenger, J. (2011) "Bayesian Epistemology", in: Bernecker, S. and Pritchard, D. (Eds.) (2011) Routledge Companion to Epistemology. Routledge. ISBN 978-0-415-96219-3 (Preprint)
- Hazewinkel, Michiel, ed. (2001) , "Bayesian approach to statistical problems", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
- Howson, C.; Urbach, P. (2005). Scientific Reasoning: the Bayesian Approach (3rd ed.). Open Court Publishing Company. ISBN 978-0-8126-9578-6.
- Jaynes E.T. (2003) Probability Theory: The Logic of Science, CUP. ISBN 978-0-521-59271-0 (Link to Fragmentary Edition of March 1996).
- McGrayne, SB. (2011). The Theory That Would Not Die: How Bayes' Rule Cracked The Enigma Code, Hunted Down Russian Submarines, & Emerged Triumphant from Two Centuries of Controversy. New Haven: Yale University Press. ISBN 9780300169690; OCLC 670481486
- Morgenstern, Oskar (1978). "Utility". In Andrew Schotter. Selected Economic Writings of Oskar Morgenstern. New York University Press. pp. 65–70. ISBN 978-0-8147-7771-8.
- Peirce, C.S. & Jastrow J. (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences. 3: 73–83.
- Pfanzagl, J (1967). "Subjective Probability Derived from the Morgenstern-von Neumann Utility Theory". In Martin Shubik. Essays in Mathematical Economics In Honor of Oskar Morgenstern. Princeton University Press. pp. 237–251.
- Pfanzagl, J.; V. Baumann & H. Huber (1968). "Events, Utility and Subjective Probability". Theory of Measurement. Wiley. pp. 195–220.
- Ramsey, Frank Plumpton (1931) "Truth and Probability" (PDF), Chapter VII in The Foundations of Mathematics and other Logical Essays, Reprinted 2001, Routledge. ISBN 0-415-22546-9,
- Stigler, SM. (1990). The History of Statistics: The Measurement of Uncertainty before 1900. Belknap Press/Harvard University Press. ISBN 0-674-40341-X.
- Stigler, SM. (1999) Statistics on the Table: The History of Statistical Concepts and Methods. Harvard University Press. ISBN 0-674-83601-4
- Stone, JV (2013). Download chapter 1 of book "Bayes’ Rule: A Tutorial Introduction to Bayesian Analysis", Sebtel Press, England.
- Winkler, RL (2003). Introduction to Bayesian Inference and Decision (2nd ed.). Probabilistic. ISBN 0-9647938-4-9. Updated classic textbook. Bayesian theory clearly presented. | <urn:uuid:be8b8520-68df-4550-a5fb-3dc392d8ad82> | 2.9375 | 5,402 | Knowledge Article | Science & Tech. | 46.822451 | 95,502,290 |
Editor rating: 8 / 10Victoria Sosa –– Its global scale, considering all plant communities from the Andean Biodiversity hotspots is important
Editor rating: 8 / 10Bruno Marino –– Climate change in Chile may affect historic habitats for the guanaco, however, these changes are uncertain. This study presents insights into perturbation of guanaco habitat and implications for conservation of the lineage. The study has implications for past, present and future guanaco studies.
Editor rating: 7 / 10Bruno Marino –– This report offers clear management strategy for vertical habitat heterogeneity, such as vegetation cover, for the conservation of breeding bird diversity in South Korea. The results of the study offer conservation options for the region given the potential pressures from anthropogenic enchroachment and climate change.
Editor rating: 8 / 10Anastazia Banaszak –– It addresses the effects of climate change on the distribution of a tropical species
Editor rating: 8 / 10Magnus Johnson –– This is an enigmatic species, long-lived species of which little is known.
Editor rating: 7 / 10Xavier Pochon –– This study developed a novel protocol using DNA metabarcoding for characterizing complex hard-bottom communities within marine protected areas of two Spanish National parks. This work provides a very interesting, novel, timely and thorough comparative analysis of COI and 18S markers in the context of marine biomonitoring, and offer a valuable baseline information for future biodiversity assessment of these complex benthic ecosystems.
Editor rating: 8 / 10Mark Costello –– It suggests that sea snakes are already changing their distribution in response to climate change.
Editor rating: 7 / 10Agus Santoso –– The vulnerability of Tufted Puffins has recently been raised by the mass mortality in 2016 which appeared to be linked to warming in the North Pacific. This present study projects that the nesting habitat may increasingly become unsuitable by 2050 under global warming, providing quantitative estimates and geographical distribution of the risk.
Editor rating: 7 / 10Jack Stanford –– Great contribution to the conservation of caves and cave faunas.
Editor rating: 9 / 10Luis Eguiarte –– Gunnera is a genus of plants that have fascinated scientist for a long time, in particular for the very large leaves of some species and because its symbiotic relationship with the nitrogen fixing cyanobacteria Nostoc. For a many years, some botanists suspected it to be a very old, primitive genus, perhaps basal in the phylogeny of the Angiosperms. While latter molecular phylogenies did not support this position, this paper shows that indeed Gunnera is an old genus, with a complex evolutionary and phylogeographic history, and a recent radiation in the Andes. All these new results are relevant for understanding why the Neotropics have so many plant species, more than any other similar region in the planet.
Discussing these articles | <urn:uuid:c1f84189-69aa-49bd-a95f-ba4287e8eca4> | 2.515625 | 610 | Content Listing | Science & Tech. | 10.493591 | 95,502,291 |
A programming language is a particular language programmers use to develop applications, scripts , or different set of instructions for computers to execute. C and C++ programming languages, probably the first ones you learned, have been around for a really long time. It additionally holds sizable market share in cell video games and software growth utilizing Android and enterprise web improvement world. There are statically typed versions, resembling Microsoft’s TypeScript or the JSX, that React uses.
You will discover more reasons to study these high 5 programming languages in this article. C++ is a robust language primarily based on C. It’s designed for programming techniques software program, however has additionally been used to construct video games/sport engines, desktop apps, cell apps, and web apps.
Because of this, if you happen to’re focused on changing into a developer, it is necessary to be well-versed in a lot of programming languages so that you will be versatile and adaptable – after which continue to study/master languages throughout your profession.
It is a class-primarily based, object-oriented programming language that’s constructed for portability and cross-platform application. This variation to cloud-based computing is then driving which programming language and platform is chosen as older techniques … | <urn:uuid:549bbe51-c321-442c-b4ff-c55243d48f2f> | 3.390625 | 260 | Truncated | Software Dev. | 27.422339 | 95,502,307 |
Hydrolysis of Halogenalkanes Essay Sample
- Pages: 7
- Word count: 1,683
- Rewriting Possibility: 99% (excellent)
- Category: chemistry
Get Full Essay
Get access to this section to get all help you need with your essay and educational issues.Get Access
Introduction of TOPIC
Aim and background reading
The aim of this experiment is to show how the rate of reaction of the halogenoalkanes changes in respect to the C-X bond, where the C is the carbon and the X is the halogen. This will occur through a nucleophilic attack. The halogenoalkanes undergo hydrolysis according to the following equation:
CnHn+1X + OH� � CnHn+1OH + X�
Nucleophilic attacks are a predominant type of chemical attack. It is a type of substitution reaction where a nucleophile breaks the bond between the carbon and in this case the halogen and removes the halogen to get a halide ion. There are 3 main types of nucleophilic reaction; one involves hydrolysis, which is the one being used in this experiment and involves an OH molecule, cyanide ions, which is not being used due to cyanide being extremely dangerous and the final nucleophilic reaction involves ammonia ions. This one is not used because it will just keep substituting the chemicals and you will end up with a huge range of compounds, most, if not all of which will not be needed. These products are called amines and an example of one would be CH3CH2NH2, which is ethylamine.
The three elements that will be used for this investigation are chlorine, bromine and iodine. Chlorine is a greenish yellow gas, which combines directly with nearly all elements. Chlorine is a respiratory irritant. The gas irritates the mucous membranes and the liquid burns the skin.
Bromine is the only liquid nonmetallic element. It is a member of the halogen group. It is a heavy, volatile, mobile, dangerous reddish-brown liquid. The red vapour has a strong disagreeable odour, resembling chlorine, and is irritating effect to the eyes and throat. It has a bleaching action. When spilled on the skin it produces painful sores. It is a serious health hazard, and maximum safety precautions should be taken when handling it.
Iodine is a bluish-black, lustrous solid. It volatilises at ambient temperatures into a pretty blue-violet gas with an irritating odour.
The bond enthalpies of the 4 most reactive halogens is as follows
Bond enthalpy (Kj mol -1 )
From the above table you can see that bond enthalpy decreases going down the group. This means that the weaker bonds will be more reactive with a nucleophile, and the C-Cl bond will be the hardest to break out of the bonds being tested as it has a higher value compared to the others. Halogenalkanes are classified as tertiary, secondary and primary. This depends upon the number of carbon atoms bonded to the carbon atom to which the halogen is shown below:
Their reactions are characterised by nucleophilic substitution of the halogen atom, owing to polarity of the carbon?halogen bond in which the electron-deficient carbon is susceptible to attack by an electron-rich species, namely a nucleophile.
With primary and secondary halogenoalkanes, the reaction is very slow at ordinary temperatures, but rapid with tertiary halogenoalkanes. Alkaline hydrolysis is much faster.
A one-step mechanism is proposed, in which both reactants are involved in the rate-determining step. (The curly arrows show the direction of movement of a pair of electrons.) As the OH- ion approaches the electron-deficient carbon atom donating a pa
ir of electrons, the halide ion moves away taking with it a pair of electrons. A transition state is
My prediction for which bond will react most vigorously with the nucleophile is C-I. This is because it has the lowest bond enthalpy and is also very polar because of the large difference of sizes between the carbon and iodine thus allowing the nucleophile to attack the bond much easier. C-Br will not be as easy to break as C-I because the molecule is not as polar and also has a higher bond enthalpy. C-Cl will therefore be the hardest bond to break because it is the least polar and has the highest bond enthalpy of all the bonds being tested.
Before carrying out this experiment a hazard analysis will need to be done. The equipment needed to ensure safety would be; goggles, lab coat, gloves and long hair tied back. Halogenalkanes are highly flammable. Alcohol’s are corrosive and can cause irritation to the skin.
* 3x Test tubes
* Test tube rack
* 2 x10 cm Measuring cylinder
* Bunsen burner
* Stop Watch
* Halogenalkanes, including 1-chlorobutane, 1-bromobutane, 1-iodobutane
* 0.02 M Silver Nitrate
* Test tube holder and pipette
1. Arrange three test tubes, into a test tube rack, in a row.
2. Using the pipette, add 3 drops of the Halogenalkanes in the following sequence:
1-chlorobutane 1-bromobutane 1-iodobutane
3. Using the measuring cylinder, measure out 2 cm3 of ethanol and add to each test tube (this acts as
3. Measure out another 2 cm3 of 0.02 M of Silver nitrate and pour into each test tube.
4. Measure out 2 cm3 of 1-chlorobutane.
5. Using the test tube holder, hold the test tube containing 1-chlorobutane, ethanol and silver nitrate to the Bunsen burner (with blue flame). Make sure that the solution boils then as quickly as possible add the 1-chlorobutane and time with the stopwatch provided and note the colour change of the precipitate formed.
6. Repeat the steps 4 to 5 using the other two Halogenalkanes and note the colour change of the precipitate formed.
The results should be recorded in the below table.
Colour change of precipitate
The level of accuracy of these results was fairly high. It was possible to gain a high level of accuracy, as the procedure was fairly simple to set up and carry out. There was little chance of the data getting mixed up as the results collected contained minute information. However, there could have been some sources of error that could have occurred during the investigation, that might have disrupted the accuracy of the results. One error could have been due to the way the drops were measured and inserted into the test tubes. The pipette used did not have a measurement scale, this would have caused the error as the amount poured into each test tube was not accurate. This may have altered the time limit either making the reaction occur faster than usual or slower. Another source of error would have been when the Halogenalkane were inserted into the test tube. The possibility of the solution hitting the side of the test tube and not running all the way down is high. This happening would have made the silver nitrate reaction occur faster than it should have been.
Measurements taken from the cylinder may not have been 100 % accurate as there could have been a possibility that when the solution was poured into the test tubes droplets of the solution may have remained within the cylinder. The timing of the Halogenalkanes may not be accurate as the time taken for the colour change to occur could have happened at the start of the experiment but had taken time to develop into a stronger denser colour. There may have been spillages when transferring the chemicals into the test tubes, which would result in a lower amount of Halogenalkanes being reacted. All these errors would have been caused by human error, as the equipment used was reliable. The experiment was only carried out once. This may suggest that the result obtained may not be reliable as errors may have occurred whilst recording the result.
To minimise errors and increase reliability I would have to revise the method and improve it to minimise the errors. The changes that would be made to make the method would be to make what should be done more specific, such as to say that the measurements taken from the cylinder should be poured into each test tube carefully (avoiding spillage’s) including the last drop. It would also help if the experiment were repeated at least five times to see whether three timings out of five are the same. It is it only then that the time should be noted as a result. This would not just make the experiment seem more complex it should hopefully increase levels of accuracy and reliabilty. | <urn:uuid:b8e5fc90-a09e-4fcd-a3e7-ce5615a073ff> | 3.125 | 1,876 | Academic Writing | Science & Tech. | 45.434958 | 95,502,329 |
Pluto's mean distance from the sun is 3.67 billion mi (5.91 billion km), and its period of revolution is about 248 years. Since Pluto has an orbit that is more elliptical and tilted than those of the planets (eccentricity .250, inclination 17°), at its closest point to the sun it passes inside the orbit of Neptune between 1979 and 1999 it was closer to the sun than Neptune was. It will remain farther from the sun for 220 years, when it will again pass inside Neptune's orbit.
Pluto's surface, as imaged by the New Horizons space probe during its flyby in 2015, is complex, with cratered areas and smooth icy areas as well as ice mountains, possible ice volcanoes, and evidence of glacial activity, indicating the planet is geologically active the surface consists largely of frozen nitrogen. Pluto is thought to have a rocky, silicate core surrounded by water ice the thin atmosphere contains nitrogen, carbon monoxide, and methane, and has discrete layers of haze. The surface temperature is estimated to be about −360°C (−218°C), a temperature at which most gases exist in the frozen state.
The existence of an unknown planet beyond the orbit of Neptune was first proposed by Percival Lowell on the basis of observed perturbations of the orbits of Uranus and Neptune. He began searching for such a planet in 1905, although he did not publish his calculations of its predicted position until 1914. Independent calculations were published by W. H. Pickering and others. In 1929, the search for a ninth planet was resumed at Lowell Observatory , and on Feb. 18, 1930, using photographic plates and a blink microscope , Clyde W. Tombaugh discovered an object whose motion was consistent with that of a transneptunian planet.
In 1978, American astronomers James Christy and Robert Harrington discovered the moon Charon. Together, Pluto and Charon may be considered to form a double dwarf planet system. Pluto's diameter is c.1,400 mi (2,300 km), Charon's is c.748 mi (1,203 km), and the radius of Charon's orbit is about 12,180 mi (19,600 km). Pluto and Charon orbit a common center of mass that lies between them, above the surface of Pluto, completing one orbit in about 6.4 earth days. Both keep the same side facing one another at all times because they rotate synchronously as they orbit.
Two smaller, more distant moons, Nix and Hydra, were reported in 2005 by American astronomers Hal Weaver and S. Alan Stern, and two more small moons, Kerebos and Styx, were reported in 2011 and 2012 by American astronomer Mark Showalter. The smaller moons are irregular shaped. Hydra, the largest, is about 33 mi (54 km) along its longest axis Nix, 27 mi (43 km) Kerebos, 7 mi (12 km) and Styx, 4 mi (7 km). The smaller moons orbit at roughly two to three times the distance of Charon, with Styx being the closest, Hydra the most distant, and Nix and Kerebos between them.
As an increasing number of Kuiper belt objects were discovered after 1992, many astronomers came to believe that Pluto, rather than being a planet, was really an unusually large and close Kuiper belt object. In 1999, however, the IAU reaffirmed that Pluto was a planet because of its size and its satellite, something no other transneptunian object was then known to have, but subsequent discoveries brought Pluto's status into question once again. One Kuiper belt object, now named Eris (and originally nicknamed Xena), whose orbit extends to roughly three times the distance of Pluto's, has an estimated diameter (1,500 mi/2,400 km) slightly larger than that of Pluto and also has a moon. It was the discovery of Eris in particular that ultimately led to Pluto's classification (2006) as a dwarf planet transneptunian dwarf planets are now classified as plutoids.
See W. Hoyt, Planets X and Pluto (1980) S. A. Stern and J. Mitton, Pluto and Charon (1999) B. W. Jones, Pluto (2010).
See more Encyclopedia articles on: Astronomy: General
Browse by Subject
- Earth and the Environment +-
- History +-
- Literature and the Arts +-
- Medicine +-
- People +-
- Philosophy and Religion +-
- Places +-
- Australia and Oceania
- Britain, Ireland, France, and the Low Countries
- Commonwealth of Independent States and the Baltic Nations
- Germany, Scandinavia, and Central Europe
- Latin America and the Caribbean
- Oceans, Continents, and Polar Regions
- Spain, Portugal, Italy, Greece, and the Balkans
- United States, Canada, and Greenland
- Plants and Animals +-
- Science and Technology +-
- Social Sciences and the Law +-
- Sports and Everyday Life +- | <urn:uuid:56b65c5d-0428-4e3b-ba12-ca15cc5aed5f> | 4.09375 | 1,050 | Knowledge Article | Science & Tech. | 51.547208 | 95,502,348 |
The B.C. Conservation Data Centre (CDC) maps known element occurrences (an area of land and/or water where a species or ecosystem is known to occur) of red- and blue-listed species and ecosystems. The CDC database includes the best available information and is updated on a regular basis.
**NEW! CDC iMap has been upgraded with new functionality! This includes a new Spatial Query tool, more basemap options and the ability to use the applications across all web browsers.
- Launch CDC iMap
- Launch CDC iMap (secure): for BC Government/BCeID users with ongoing access to CDC secure layers
Learn more about CDC element occurrence data, including publicly available and secured data:
How to Use CDC iMap
- **NEW! CDC iMap YouTube tutorial
- **NEW! CDC iMap Instructions (PDF)
- General iMap Reference Guides and Training Manual
If you do not find an element occurrence in your area of interest, this means there are none currently mapped in the CDC database. The best way to verify whether an area contains a species or ecosystem at risk is to have do a detailed assessment of the property during the appropriate season. | <urn:uuid:60903c5d-2f29-4dbc-b7bf-219f67b384ff> | 2.921875 | 243 | Customer Support | Science & Tech. | 44.35754 | 95,502,367 |
Message board for the users of flat assembler.
> Examples and Tutorials > Byte Count
Just another program I wrote to help me understand how to program in Linux
Any feedback is welcome I am still learning how to do assembly.
This program counts the number of times bytes 00-FF show up in a file, text or binary.
It uses sys_brk to allocate memory then the program reads file data to allocated memory.
We scan the bytes in allocated memory and count the number of times bytes 00-FF are in file then display info to screen.
Make terminal full screen because i use columns to display info.
usage on Linux command line: ./bytect -f filename
See post below to get most recent version....
Last edited by greco558 on 17 Jul 2017, 19:47; edited 3 times in total
|07 Jul 2017, 12:46||
Updated bytect program to process command line arguments.
-c set minimum decimal count of bytes to display
-l set lower Hex limit of byte 00-ff to display
-u set upper hex limit of byte 00-ff to display
-f followed by filename REQUIRED
You can use above switches to narrow displayed results.
Changed Header printed above display dump from a static message to
one that will change with parameters entered on command line.
Updated 07/17/2017: changed the way command line is processed so you can enter
switch and parameter with or without a space ex. -c20 or -c 20
Updated header in to show lower and upper range in HEX and
Count in decimal
|12 Jul 2017, 22:45||
Found a bug in bytect program after writing the same program in LUA.
If count of lets say hex byte 20 in file was greater than 256 the count would be off
because I was storing the count in a byte size mem location.
Fix was changing from byte size count storage to dword size and adding some
indexing to account for dword size count eg. EBX*4.
Attached is Bug Fixed version
|09 Mar 2018, 19:15||
< Last Thread | Next Thread >
Copyright © 2004-2018, Tomasz Grysztar.
Powered by rwasa. | <urn:uuid:bf5c38d3-ea59-4ede-ab06-1a60fa8e231c> | 2.796875 | 477 | Comment Section | Software Dev. | 66.470469 | 95,502,368 |
The units of matter that form all chemical substances are called atoms. The smallest atom, hydrogen, is approximately 2.7 billionths of an inch in diameter. Each type of atom, such as carbon, hydrogen, oxygen, and so on, is called a chemical element. A one- or two-letter symbol is used as a shorthand identification for each element. Although slightly more than 100 elements exist in the universe, only 24 are known to be essential for the structure and function of the human body. The chemical properties of atoms can be described in terms of three subatomic particles—protons, neutrons, and electrons. The protons and neutrons are confined to a very small volume at the center of an atom, the atomic nucleus, whereas the electrons revolve in orbits at various distances from the nucleus. This miniature solar-system model of an atom is an oversimplification, but it is sufficient to provide a conceptual framework for understanding the chemical and physical interactions of atoms.
Each of the subatomic particles has a different electric charge: Protons have one unit of positive charge, electrons have one unit of negative charge, and neutrons are electrically neutral. Since the protons are located in the atomic nucleus, the nucleus has a net positive charge equal to the number of protons it contains. The entire atom has no net electric charge, however, because the number of negatively charged electrons orbiting the nucleus is equal to the number of positively charged protons in the nucleus. | <urn:uuid:f9de3e17-e623-4731-aadb-a350ff7bbcb5> | 4.375 | 299 | Knowledge Article | Science & Tech. | 29.073281 | 95,502,377 |
12-Story Library Featured Editor Pick
Rivers are powerful and mysterious bodies of water that shape our lives in more ways than one. Finding a river’s source is difficult to define. Traditionally there are two ways geographers and explorers go about defining a rivers source: the most distant point upstream that provides the largest volume of water to the river, and the farthest point upstream on the longest tributary, or stream, of the river. Even when using these two definitions, geographers and explorers still can’t find a single source for the Amazon River.…
Note: Each post links to a current event story related to this 12-Story Library book. Our editors hand-select each story and write each post.
Marine and Freshwater Ecosystems News
Wiley: Aquatic Conservation: Marine and Freshwater Ecosystems: Table of Contents
Table of Contents for Aquatic Conservation: Marine and Freshwater Ecosystems. List of articles from both the latest and EarlyView issues.
Photo Gallery: Freshwater Plants and Animals
Video: Painted Turtle
A painted turtle swims through a freshwater lake.
Freshwater Food Web
Tweets from the Freshwater Blog
RT @freshwaterblog: New open-access book on key debates, approaches and directions in river management edited by Stefan Schmutz and Jan Sen…(about 2 weeks ago)
(about 2 weeks ago)
New open-access book on key debates, approaches and directions in river management edited by Stefan Schmutz and Jan… https://t.co/BCxP7unuKC(about 2 weeks ago)
RT @freshwaterblog: Reporting from MARS: multiple stressor science and management in European aquatic ecosystems The final MARS project re…(about 3 weeks ago) | <urn:uuid:c93871d4-5d7b-465a-84fc-1f728cfec7d0> | 3.671875 | 371 | Content Listing | Science & Tech. | 31.56419 | 95,502,386 |
Arabidopsis Thaliana has become to plant biology what Drosophila melanogaster and Caenorhabditis elegans are to animal biology. Arabidopsis is an angiosperm, a dicot from the mustard family (Brassicaceae). It is popularly known as thale cress or mouse-ear cress. While it has no commercial value - in fact is considered a weed - it has proved to be an ideal organism for studying plant development.
Fig. 184.108.40.206: Arabidopsis Thaliana
Some of its advantages as a model organism:
- It has one of the smallest genomes in the plant kingdom: 135 x 106 base pairs of DNA distributed in 5 chromosomes (2n = 10) and almost all of which encodes its 27,407 genes.
- Transgenic plants can be made easily using Agrobacterium tumefaciens as the vector to introduce foreign genes.
- The plant is small - a flat rosette of leaves from which grows a flower stalk 6–12 inches high.
- It can be easily grown in the lab in a relatively small space.
- Development is rapid. It only takes 5– 6 weeks from seed germination to the production of a new crop of seeds.
- It is a prolific producer of seeds (up to 10,000 per plant) making genetics studies easier.
- Mutations can be easily generated (e.g., by irradiating the seeds or treating them with mutagenic chemicals).
- It is normally self-pollinated so recessive mutations quickly become homozygous and thus expressed.
Other members of its family cannot self-pollinate. They have an active system of self-incompatibility. Arabidopsis, however, has inactivating mutations in the genes — SRK and SCR - that prevent self-pollination in other members of the family.
- However, Arabidopsis can easily be cross-pollinated to do genetic mapping and produce strains with multiple mutations.
Many of the findings about how plants work - described throughout these pages - were learned from studies with Arabidopsis. (Photo courtesy of Nicole Hanley Markelz of the Plant Genome Research Outreach Program at Cornell University.) | <urn:uuid:f83a4c24-4c91-49bc-b7cc-f9a7150edd01> | 3.640625 | 467 | Knowledge Article | Science & Tech. | 43.979942 | 95,502,396 |
posted by snow
A wave has a wavelength of 10mm and a frequency of 5 hertz what is the speed
2 mm/s thank you
Velocity = Frequency * Wavelength
V = 5 hertz * 10 mm
V = ?
I'll let you finish.
i need this too and assuming that the formula is as above, multiply em, then the answer would be 50 sum but ofc there is two answers with 50 in them with 2 different units so gr8 (((:
this site is super unhelpful bruh | <urn:uuid:30bc7310-842f-4052-b2e8-87f83bc1da77> | 2.796875 | 113 | Q&A Forum | Science & Tech. | 63.080976 | 95,502,415 |
Chute cut-off processes along a small alluvial channel: a case study of Sangra Khal, sub-tributary of Gour Nadi, West Bengal, India
- 129 Downloads
Meandering cutoff is a natural feature of low land alluvial river. It is an inseparable part of it and it is two types, such as neck cutoff and chute cutoff. Among them chute cutoff appears to be complicated and poorly understand. Here, we have observed and tried to describe the mechanisms of chute cutoff within uniform floodplain topography and its locational factors of development. Most of the studies had put their greater emphasis over cutoff formation on larger rivers and they owe it to nick point generation, swale development and embayment formation resulting from overbank flow. Our insitu observations of several years indicate that small agricultural stream with higher return period of over bank flow showing that embayment, nick point formation, weathering, mass wasting and earthworm’s activity are working together to form a chute cutoff. Although their relative degree of dominance in chute formation varies depending on the nature of return period of over bank flow, nature of flood plain and landuse, discharge, vegetation cover, meander properties and channel bank properties.
KeywordsMeander Chute cutoff Small alluvial river Knick point Embayment formation overbank flow earthworms
We cordially thanks to the Department of Geography, The University of Burdwan for the infrastructural assistance such as using of different kinds of Remote Sensing and GIS software, and writing software. We would also like to acknowledge Professor Sanat Kumar Guchhait for his auspicious comments and suggestions.
- Bandyopadhyay, S., Kar, N. K., Das, S., Sen, J. (2014). River systems and water resources of West Bengal: a review. Rejuvenation of surface water resources of india: potential, problems and prospects. Geol Soc India, 3:63–84.Google Scholar
- Crickmay CH (1960). Lateral activity in a river of northwestern Canada. J Geol, 377–391Google Scholar
- Dietrich WE, Smith JD, Dunne T (1979). Flow and sediment transport in a sand bedded meander. J Geol, 305–315Google Scholar
- Erskine W, McFadden C, Bishop P (1992). Alluvial cutoffs as indicators of former channel conditions. Earth Surf Process Landf 17(1), 23–37Google Scholar
- Gagliano SM, Howard PC (1984). The neck cutoff oxbow lake cycle along the Lower Mississippi River. In River Meandering, ASCE, pp. 147–158.Google Scholar
- Howard, AD (1992). Modeling channel migration and floodplain sedimentation in meandering streams. Lowland Floodplain Rivers: Geomorphol Perspect, 1–41.Google Scholar
- Mukhopadhyay S, Mukhopadhyay, M., Pal, S. (2012). Advance river geography. ACB PublicationsGoogle Scholar
- Ratzlaff, JR (1981). Development and cutoff of Big Bend meander, Brazos River, Texas. Texas J Sci, 33(2–4):121–129.Google Scholar | <urn:uuid:f6dfded2-4347-4425-9c20-a1e3401ae3bc> | 2.734375 | 679 | Academic Writing | Science & Tech. | 50.383339 | 95,502,426 |
An intense light pulse interacting with a weakly bound van der Waals cluster consisting of thousands of atoms can eventually lead to the explosion of the cluster and its complete disintegration. During this process, novel ionization mechanisms occur that are not observed in atoms. With a light pulse that is intense enough, many electrons are removed from their atoms that can move within the cluster, where they form a plasma with the ions on the nanometer scale, a so called nanoplasma. Due to collisions between the electrons, some of them may eventually gain sufficient energy to leave the cluster. A large part of the electrons, however, will remain confined to the cluster. It was theoretically predicted that electrons and ions in the nanoplasma recombine to form Rydberg atoms, however, an experimental proof of this hypothesis is still missing. Previous experiments were carried out at large scale facilities like free-electron lasers that have sizes from a few hundred meters to a few kilometers showing already surprising results such as the formation of very high charge states when an intense XUV pulse interacts with the cluster. However, the accessibility to such sources is strongly limited, and the experimental conditions are extremely challenging. The availability of intense light pulses in the extreme-ultraviolet range from an alternative source is therefore important to gain a better understanding of the various processes occurring in clusters and other extended systems such as biomolecules exposed to intense XUV pulses.
Scientists from the Max-Born-Institut have developed a new light source that is based on the process of high-order harmonic generation. In the experiment, an intense pulse in the extreme-ultraviolet range with a duration of 15 fs (1fs=10-15s) interacted with clusters consisting of argon or xenon atoms. In the current issue of Physical Review Letters (Vol. 112-073003 publ. 20 February 2014) http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.073003 Bernd Schütte, Marc Vrakking and Arnaud Rouzée present the results of these studies, which are in very good agreement with previously obtained results from free-electron lasers: the formation of a nanoplasma was inferred by measuring the kinetic energy distributions of electrons formed in the cluster ionization process, showing a characteristic plateau up to a maximum kinetic energy given by the kinetic energy resulting from photoionization of an individual atom. In collaboration with the theoreticians Mathias Arbeiter and Thomas Fennel from the University of Rostock, it was possible to numerically simulate the ionization processes in the cluster and to reproduce the experimental results. In addition, by using the velocity map imaging technique, a yet undiscovered distribution of very slow electrons was observed and attributed the formation of high-lying Rydberg atoms by electron-ion recombination processes during the cluster expansion. Since the binding energies of the electrons are very small, the DC detector electric field used in the experiment was strong enough to ionize these Rydberg atoms, leading to the emission of low energy electrons. This process is also known as frustrated recombination and could now be confirmed experimentally for the first time. The current findings may also explain why in recent experiments using intense X-ray pulses, high charge states up to Xe26+ were observed in clusters, although a large number of recombination processes is expected to take place. Moreover, the opportunity to carry out this type of experiment with a high-order harmonic source makes it possible in the future to perform pump-probe experiments in clusters and other extended systems with a time resolution down to the attosecond range.
Bernd Schütte, Mathias Arbeiter, Thomas Fennel, Marc J. J. Vrakking and Arnaud Rouzée, "Rare-gas clusters in intense extreme-ultraviolet pulses from a high-order harmonic source", Physical Review Letters 112, (2014)
Dr. Bernd Schütte, +49 (0)30 6392 1248
Prof. Marc J. J. Vrakking, +49 (0)30 6392 1200
Dr. Arnaud Rouzée, +49 (0)30 6392 1240
Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI)
Karl-Heinz Karisch | Forschungsverbund Berlin e.V.
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
Subaru Telescope helps pinpoint origin of ultra-high energy neutrino
16.07.2018 | National Institutes of Natural Sciences
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
19.07.2018 | Earth Sciences
19.07.2018 | Power and Electrical Engineering
19.07.2018 | Materials Sciences | <urn:uuid:4b7799a1-b152-4823-867d-6d873b656777> | 2.984375 | 1,545 | Content Listing | Science & Tech. | 38.38772 | 95,502,431 |
Thorarchaeota are a new archaeal phylum within the Asg ard superphylum, whose ancestors have been proposed to play possible ecological roles in cellular evolution. However, little is known about the lifestyles of these uncultured archaea. To
provide a better resolution of the ecological roles and metabolic capacity of Thorarchaeota, we obtained Thorarchaeota genomes reconstructed from metagenomes of different depth layers in mangrove and mudflat sediments. These genomes from deep anoxic layers suggest the presence of Thorarchaeota with the potential to degrade organic matter, fix inorganic carbon, reduce sulfur/sulfate and produce acetate. In particular, Thorarchaeota may be involved in ethanol production, nitrogen fixation, nitrite reduction, and arsenic detoxification. Inter estingly, these Thorarchaeotal genomes are inferred to
contain the tetrahydromethanopterin and tetrahydrofolate Wood–Ljungdahl (WL) pathways for CO2 reduction, and the latter WL pathway appears to have originated from bacteria. These archaea are predicted to be able to use various inorganic and organic carbon sources, possessing genes inferred to encode ribulose bisph osphate carboxy lase-like proteins (normally without RuBisCO activity) and a near-complete Calvin–Benson– Bassham cycle. The existence of eukaryotic selen ocysteine insertion sequences and many genes for proteins previously considered eukaryote-specific in Thorarchaeota genomes provide new insights into their evolutionary roles in the origin of eukaryo tic cellular complexity. Resolving the metabolic capacities of these enigmatic archaea and their origins will enhance our understanding of the origins of eukaryotes and their
roles in ecosystems.
Here is some of the press that covered the impacts of the storm on my lab, institute, lab members, and family.
New insights into microbial hydrocarbon cycling and metabolic interdependencies in hydrothermal sediments
Congrats Nina and Kiley!
This paper details the genetic diversity of these sediments and describes genomes belonging to a uncultured archaeal group (GoM-Arc1) that contain novel pathways for hydrocarbon cycling, related to ANME (anaerobic methane oxidizers).
In addition to Theionarchaea, this new paper that appears in ISME Journal also details a variety of archaeal genomes there were obtained from the White Oak River Estuary in North Carolina. This digram summarizes the ecological roles we have inferred from these genomes. This is important because NONE of these lineages have been grown in a laboratory, so having their genomes has significantly advanced our understanding of what they do in nature.
Genomic reconstruction of multiple lineages of uncultured benthic archaea suggests distinct biogeochemical roles and ecological niches
Asgard archaea illuminate the origin of eukaryotic cellular complexity
This week our new paper describing the discovery of 4 archaea phyla that are related to eukaryotes was published in Nature. These phyla are belong to the same branch of life and have been named after different Norse gods, Thor, Odin, Heimdall, and Loki. This is a collaboration with Thijs Ettema’s lab in Sweden. Last year we published the discovery of Thorarchaeota in ISME.
Genomic reconstruction of a novel, deeply branched sediment archaeal phylum with pathways for acetogenesis and sulfur reduction
This paper adds 2 additional phyla, Odinarchaeota and Heimdallarchaeota. The focus of this paper is further resolve the phylogenetic position of eukaryotes in this new superphylum. It also examines the presence of several new ESPs or eukaryotic signature proteins. These proteins were mostly thought to exist in eukaryotes, but these genomes contain a variety of them!
Press releases to accompany this study:
UT press release
The Atlantic article by Ed Yong
Uppsala press release
by Rachel Mackelprang & Olivia U. Mason
DNA-SIP metagenomic experimental strategy for the identification and characterization of hydrocarbon-degrading microorgansims from DWH oil spill deep-plume and surface-slick samples.
Roy Rodriguez Carrero has joined the lab as an REU student for the summer.
Roy is doing some exciting work looking at the diversity and metabolisms of deeply-branched Archaea in the estuaries in south TX. | <urn:uuid:9eafa64d-1bef-46d8-aaff-2b065cedf608> | 2.765625 | 961 | Content Listing | Science & Tech. | 6.373535 | 95,502,441 |
Current Event March 28, 2014
In Washington state the clean up effort is still underway after a large mud slide killed at least two dozen people. Landslides are hard to predict. Scientists can determine which hills are most vulnerable, but getting the information to people that could use is it difficult.
Current Event March 24, 2014
The drought didn't just close down ski resorts and impact agriculture— it also increased coffee bean prices due to a bean condition called "coffee disease rust." The widespread impact might even have customers at the local coffee chop feeling the price change.
Current Event March 21, 2014
Nevada's farms are few and far between, and the recent drought has not made survival easier. Some farms decided to "hack" the drought by adapting to the region's water shortage by growing better suited crops.
Current Event March 13, 2014
The same methane gas emitted from humans is also produced by the same bacteria that lives in old pizza crusts, curdled milk, and other discarded food. Scientists have found ways to convert the methane gas from old food into energy. Several cities are already converting waste into energy, listen to this story to learn how New York City is trying it out.
Current Event February 28, 2014
The Keystone XL Pipeline was initially labeled as a harmful source of carbon dioxide by leading scientist, Marcia McNutt. However, she recently switched her position and supports its construction — her reason? The pipeline will save the country money.
Current Event January 6, 2014
Animal manure creates the necessary nutrients, phosphorous and nitrogen, to help plants grow. However, water sources surrounding animal farms are also heavily polluted, mainly due to phosphorous in the water beds. Find out how farmers are trying to be economic about the environment. | <urn:uuid:c14dd818-9db0-4f1d-abdc-bc8597062158> | 2.828125 | 359 | Content Listing | Science & Tech. | 46.961502 | 95,502,450 |
Tags: discoveries, electronics, electrons, friction, harnessing static electricity, inventions, nanogenerators, portable electronics, science, self-powered devices, static charge, static electricity, static energy, thermionic emission, Thomas Edison, triboelectric nanogenerators, triboelectrification effect.
(Natural News) Do you remember your encounters with static electricity, e.g. every single time you walked across the carpet and grabbed the doorknob? Researchers from the Georgia Institute of Technology (Georgia Tech) are unraveling the process with the hopes of turning it into an energy source, reported a Newswise article.
According to Zhong Lin Wang, a Regent’s Professor at the university, his research team discovered how certain materials can store the energy from contact electrification at room temperature for hours.
“Our research showed that there’s a potential barrier at the surface that prevents the charges generated from flowing back to the solid where they were from or escaping from the surface after the contacting,” he explained.
Their study determined that electron transfer is the primary process for contact electrification between two non-organic solids.
“There has been some debate around contact electrification -- namely, whether the charge transfer occurs through electrons or ions and why the charges retain on the surface without a quick dissipation,” remarked Wang.
Old study on triboelectric nanogenerators serves new research
His team is building off their earlier study on triboelectric nanogenerators eight years ago. Those generators were constructed from materials that generated electrical charges whenever the device was moving.
Triboelectric generators could draw energy from traditional wind, the currents of the ocean, or even the vibrations made by sound. So long as the source made the generator move, it would produce electricity.
“Previously we just used trial and error to maximize this effect. But with this new information, we can design materials that have better performance for power conversion,” Wang said regarding the connection between his past and previous work.
For their current study on static electricity, the Georgia Tech team used a triboelectric nanogenerator to quantify the amount of static charge stored on surfaces while friction was taking place. They used titanium for the nanoscale-sized device, which gave it a high resistance to heat.
Their new method allowed real-time observation of the accumulated charges in real time. Because the generator was quite sturdy and heat-resistant, it could be used at temperatures of up to 572 degrees Fahrenheit.
Heat allows electrons to jump barriers
After analyzing the data, the researchers found a suitable theory that accounted for the behaviors of the triboelectric effect and static electricity.
The electron thermionic emission theory explained that heat induces electrons to flow from surfaces or over potential-energy barriers. The most common model is the way electrons flowed from a hot cathode into cooler vacuum inside a vacuum tube.
The latter example is also known as the Edison effect because Thomas Edison was the first to observe it while improving the incandescent bulb.
Wang’s team only realized that temperature played a big part in the triboelectric effect because they built the new nanogenerator out of heat-resistant material. (Related: MXene: A new miracle material with applications in energy storage, biology, electronics, medicine and more.)
“We never realized it was a temperature dependent phenomenon. But we found that when the temperature reaches about 300 Celsius [572 Fahrenheit], the triboelectric transfer almost disappears,” Wang reported.
Furthermore, the researchers studied the ability of the titanium surfaces of the generator to maintain a charge at temperatures ranging from about 176 degrees to 572 degrees Fahrenheit.
Based on these findings, they constructed a physics model that explained the triboelectrification effect.
“As the temperature rises, the energy fluctuations of electrons become larger and larger,” they reported.
“Thus, it is easier for electrons to hop out of the potential well, and they either go back to the material where they came from or emit into air,” concluded Wang and his team.
Read more news regarding scientific breakthroughs at Science.news. | <urn:uuid:87cc2f19-8123-4045-ae1f-8be360bdbae4> | 3.453125 | 870 | News Article | Science & Tech. | 21.677345 | 95,502,455 |
Svein Vaage Broadband Air Gun Study
Marine seismic exploration is a method for collecting geophysical data that offer an opportunity for a detailed look at the geological structure beneath the seabed. The product of a seismic survey can be either a two-dimensional or a three-dimensional image, which can then be used to identify potential areas for oil and gas exploration and production. Seismic imaging is analogous to the ultrasound technology that is commonly used in the medical profession for imaging the human body.
KeywordsSeismic Source Seismic Survey Fair Weather Condition Hydrophone Array Particle Velocity Measurement
- Mattsson A (2008) Single airgun and cluster measurement project. Exploration and Production Sound and Marine Life Joint Industry Programme Review Meeting, 28–30 October 2008, Houston, TX, p 22.Google Scholar | <urn:uuid:3224b7e4-0b08-4c4e-b00a-823e04ae8a34> | 3.125 | 168 | Academic Writing | Science & Tech. | 18.921313 | 95,502,471 |
Forbes and his colleagues found that a parasitic wasp (Diachasma alloeum) that attacks the apple maggot (Rhagoletis pomonella) has “formed new incipient species as a result of specializing on diversifying fly hosts, including the recently derived apple-infesting race of R. pomonella.”
The apple maggot, native to North America, shifted from its ancestral hawthorn host (Crataegus spp.), to introduced European apples less than 250 years ago. “The two populations,” Forbes said, “have since become partially reproductively isolated due to a number of host-related adaptations and are now distinct host races, on their way to becoming separate species.”
A host race is a group of organisms in the process of becoming a new species due to its close association with a particular host (plant or animal).
The research, “Sequential Sympatric Speciation Across Trophic Levels,” published Feb. 6 in the journal Science, provides insight into what Forbes calls “the tangled bank of life.”
“As new species form, they create new opportunities for others to exploit which, in turn, begets ever more new species,” he said. “And all this is happening right before our eyes in our own backyards.”
The scientists, led by Forbes, studied genetic differences, diapause length and fruit odor preferences for apple-associated populations of D. alloeum wasps collected from numerous Midwestern sites. They collected the same measurements for D. alloeum in hawthorn, blueberry and snowberry.
Their work showed that D. alloeum associated with apple has genetic, phenological and ehavioral differences that isolate it from the other wasp populations. “D. alloeum attacks only R. pomonella complex flies found on these four host plants,” Forbes said, “so we can be fairly confident that the apple wasp is derived from one of the other wasp populations and has rapidly evolved into this new race.”In its larval form, the apple maggot is a major pest of apples throughout the United States. Other Rhagoletis species attack cherries, walnuts and blueberries. The female apple maggot fly deposits her eggs in ripe fruit. The eggs then develop into larvae (maggots), which later leave the fruit to pupate and overwinter in the soil.
Forbes published the work in Science with co-authors Thomas H. Q. Powell and Jeffrey Feder, University of Notre Dame; Lukasz Stelinski of the University of Florida Citrus Research and Education Center; and James J. Smith, Department of Entomology, Michigan State University.
The research was funded by a National Science Foundation Doctoral Dissertation Improvement Grant and an American Museum of Natural History Theodore Roosevelt Fund Grant.Media contacts:
Kathy Keatley Garvey | EurekAlert!
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
19.07.2018 | Earth Sciences
19.07.2018 | Power and Electrical Engineering
19.07.2018 | Materials Sciences | <urn:uuid:5bcc8e10-52d1-49d5-a193-9d2cbeed5809> | 3.40625 | 1,284 | Content Listing | Science & Tech. | 39.496231 | 95,502,559 |
But instead of electronic circuitry, life relies on biochemical circuitry—complex networks of reactions and pathways that enable organisms to function. Now, researchers at the California Institute of Technology (Caltech) have built the most complex biochemical circuit ever created from scratch, made with DNA-based devices in a test tube that are analogous to the electronic transistors on a computer chip.
Engineering these circuits allows researchers to explore the principles of information processing in biological systems, and to design biochemical pathways with decision-making capabilities. Such circuits would give biochemists unprecedented control in designing chemical reactions for applications in biological and chemical engineering and industries. For example, in the future a synthetic biochemical circuit could be introduced into a clinical blood sample, detect the levels of a variety of molecules in the sample, and integrate that information into a diagnosis of the pathology.
"We're trying to borrow the ideas that have had huge success in the electronic world, such as abstract representations of computing operations, programming languages, and compilers, and apply them to the biomolecular world," says Lulu Qian, a senior postdoctoral scholar in bioengineering at Caltech and lead author on a paper published in the June 3 issue of the journal Science.
Along with Erik Winfree, Caltech professor of computer science, computation and neural systems, and bioengineering, Qian used a new kind of DNA-based component to build the largest artificial biochemical circuit ever made. Previous lab-made biochemical circuits were limited because they worked less reliably and predictably when scaled to larger sizes, Qian explains. The likely reason behind this limitation is that such circuits need various molecular structures to implement different functions, making large systems more complicated and difficult to debug. The researchers' new approach, however, involves components that are simple, standardized, reliable, and scalable, meaning that even bigger and more complex circuits can be made and still work reliably.
"You can imagine that in the computer industry, you want to make better and better computers," Qian says. "This is our effort to do the same. We want to make better and better biochemical circuits that can do more sophisticated tasks, driving molecular devices to act on their environment."
To build their circuits, the researchers used pieces of DNA to make so-called logic gates—devices that produce on-off output signals in response to on-off input signals. Logic gates are the building blocks of the digital logic circuits that allow a computer to perform the right actions at the right time. In a conventional computer, logic gates are made with electronic transistors, which are wired together to form circuits on a silicon chip. Biochemical circuits, however, consist of molecules floating in a test tube of salt water. Instead of depending on electrons flowing in and out of transistors, DNA-based logic gates receive and produce molecules as signals. The molecular signals travel from one specific gate to another, connecting the circuit as if they were wires.
Winfree and his colleagues first built such a biochemical circuit in 2006. In this work, DNA signal molecules connected several DNA logic gates to each other, forming what's called a multilayered circuit. But this earlier circuit consisted of only 12 different DNA molecules, and the circuit slowed down by a few orders of magnitude when expanded from a single logic gate to a five-layered circuit. In their new design, Qian and Winfree have engineered logic gates that are simpler and more reliable, allowing them to make circuits at least five times larger.
Their new logic gates are made from pieces of either short, single-stranded DNA or partially double-stranded DNA in which single strands stick out like tails from the DNA's double helix. The single-stranded DNA molecules act as input and output signals that interact with the partially double-stranded ones.
"The molecules are just floating around in solution, bumping into each other from time to time," Winfree explains. "Occasionally, an incoming strand with the right DNA sequence will zip itself up to one strand while simultaneously unzipping another, releasing it into solution and allowing it to react with yet another strand." Because the researchers can encode whatever DNA sequence they want, they have full control over this process. "You have this programmable interaction," he says.
Qian and Winfree made several circuits with their approach, but the largest—containing 74 different DNA molecules—can compute the square root of any number up to 15 (technically speaking, any four-bit binary number) and round down the answer to the nearest integer. The researchers then monitor the concentrations of output molecules during the calculations to determine the answer. The calculation takes about 10 hours, so it won't replace your laptop anytime soon. But the purpose of these circuits isn't to compete with electronics; it's to give scientists logical control over biochemical processes.
Their circuits have several novel features, Qian says. Because reactions are never perfect—the molecules don't always bind properly, for instance—there's inherent noise in the system. This means the molecular signals are never entirely on or off, as would be the case for ideal binary logic. But the new logic gates are able to handle this noise by suppressing and amplifying signals—for example, boosting a signal that's at 80 percent, or inhibiting one that's at 10 percent, resulting in signals that are either close to 100 percent present or nonexistent.
All the logic gates have identical structures with different sequences. As a result, they can be standardized, so that the same types of components can be wired together to make any circuit you want. What's more, Qian says, you don't have to know anything about the molecular machinery behind the circuit to make one. If you want a circuit that, say, automatically diagnoses a disease, you just submit an abstract representation of the logic functions in your design to a compiler that the researchers provide online, which will then translate the design into the DNA components needed to build the circuit. In the future, an outside manufacturer can then make those parts and give you the circuit, ready to go.
The circuit components are also tunable. By adjusting the concentrations of the types of DNA, the researchers can change the functions of the logic gates. The circuits are versatile, featuring plug-and-play components that can be easily reconfigured to rewire the circuit. The simplicity of the logic gates also allows for more efficient techniques that synthesize them in parallel.
"Like Moore's Law for silicon electronics, which says that computers are growing exponentially smaller and more powerful every year, molecular systems developed with DNA nanotechnology have been doubling in size roughly every three years," Winfree says. Qian adds, "The dream is that synthetic biochemical circuits will one day achieve complexities comparable to life itself."
The research described in the Science paper, "Scaling up digital circuit computation with DNA strand displacement cascades," is supported by a National Science Foundation grant to the Molecular Programming Project and by the Human Frontier Science Program.
Marcus Woo | EurekAlert!
Further reports about: > DNA > DNA molecule > DNA sequence > Science TV > biochemical pathway > biological system > building block > chemical process > chemical reaction > molecular signal > molecular structure > molecular system > programming language > signal molecule > single-stranded DNA > test tube
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:aea53ffe-a6c0-47e6-ae19-8009cc4a1a04> | 3.703125 | 2,047 | Content Listing | Science & Tech. | 32.157689 | 95,502,570 |
Prior probability(Redirected from Non-informative prior)
This article includes a list of references, related reading or external links, but its sources remain unclear because it lacks inline citations. (February 2008) (Learn how and when to remove this template message)
In Bayesian statistical inference, a prior probability distribution, often simply called the prior, of an uncertain quantity is the probability distribution that would express one's beliefs about this quantity before some evidence is taken into account. For example, the prior could be the probability distribution representing the relative proportions of voters who will vote for a particular politician in a future election. The unknown quantity may be a parameter of the model or a latent variable rather than an observable variable.
Bayes' theorem calculates the renormalized pointwise product of the prior and the likelihood function, to produce the posterior probability distribution, which is the conditional distribution of the uncertain quantity given the data.
Priors can be created using a number of methods.(pp27–41) A prior can be determined from past information, such as previous experiments. A prior can be elicited from the purely subjective assessment of an experienced expert. An uninformative prior can be created to reflect a balance among outcomes when no information is available. Priors can also be chosen according to some principle, such as symmetry or maximizing entropy given constraints; examples are the Jeffreys prior or Bernardo's reference prior. When a family of conjugate priors exists, choosing a prior from that family simplifies calculation of the posterior distribution.
- p is a parameter of the underlying system (Bernoulli distribution), and
- α and β are parameters of the prior distribution (beta distribution); hence hyperparameters.
An informative prior expresses specific, definite information about a variable. An example is a prior distribution for the temperature at noon tomorrow. A reasonable approach is to make the prior a normal distribution with expected value equal to today's noontime temperature, with variance equal to the day-to-day variance of atmospheric temperature, or a distribution of the temperature for that day of the year.
This example has a property in common with many priors, namely, that the posterior from one problem (today's temperature) becomes the prior for another problem (tomorrow's temperature); pre-existing evidence which has already been taken into account is part of the prior and, as more evidence accumulates, the posterior is determined largely by the evidence rather than any original assumption, provided that the original assumption admitted the possibility of what the evidence is suggesting. The terms "prior" and "posterior" are generally relative to a specific datum or observation.
Weakly informative priorsEdit
A weakly informative prior expresses partial information about a variable. An example is, when setting the prior distribution for the temperature at noon tomorrow in St. Louis, to use a normal distribution with mean 50 degrees Fahrenheit and standard deviation 40 degrees, which very loosely constrains the temperature to the range (10 degrees, 90 degrees) with a small chance of being below -30 degrees or above 130 degrees. The purpose of a weakly informative prior is for regularization, that is, to keep inferences in a reasonable range.
An uninformative prior or diffuse prior expresses vague or general information about a variable. The term "uninformative prior" is somewhat of a misnomer. Such a prior might also be called a not very informative prior, or an objective prior, i.e. one that's not subjectively elicited.
Uninformative priors can express "objective" information such as "the variable is positive" or "the variable is less than some limit". The simplest and oldest rule for determining a non-informative prior is the principle of indifference, which assigns equal probabilities to all possibilities. In parameter estimation problems, the use of an uninformative prior typically yields results which are not too different from conventional statistical analysis, as the likelihood function often yields more information than the uninformative prior.
Some attempts have been made at finding a priori probabilities, i.e. probability distributions in some sense logically required by the nature of one's state of uncertainty; these are a subject of philosophical controversy, with Bayesians being roughly divided into two schools: "objective Bayesians", who believe such priors exist in many useful situations, and "subjective Bayesians" who believe that in practice priors usually represent subjective judgements of opinion that cannot be rigorously justified (Williamson 2010). Perhaps the strongest arguments for objective Bayesianism were given by Edwin T. Jaynes, based mainly on the consequences of symmetries and on the principle of maximum entropy.
As an example of an a priori prior, due to Jaynes (2003), consider a situation in which one knows a ball has been hidden under one of three cups, A, B or C, but no other information is available about its location. In this case a uniform prior of p(A) = p(B) = p(C) = 1/3 seems intuitively like the only reasonable choice. More formally, we can see that the problem remains the same if we swap around the labels ("A", "B" and "C") of the cups. It would therefore be odd to choose a prior for which a permutation of the labels would cause a change in our predictions about which cup the ball will be found under; the uniform prior is the only one which preserves this invariance. If one accepts this invariance principle then one can see that the uniform prior is the logically correct prior to represent this state of knowledge. This prior is "objective" in the sense of being the correct choice to represent a particular state of knowledge, but it is not objective in the sense of being an observer-independent feature of the world: in reality the ball exists under a particular cup, and it only makes sense to speak of probabilities in this situation if there is an observer with limited knowledge about the system.
As a more contentious example, Jaynes published an argument (Jaynes 1968) based on Lie groups that suggests that the prior representing complete uncertainty about a probability should be the Haldane prior p−1(1 − p)−1. The example Jaynes gives is of finding a chemical in a lab and asking whether it will dissolve in water in repeated experiments. The Haldane prior gives by far the most weight to and , indicating that the sample will either dissolve every time or never dissolve, with equal probability. However, if one has observed samples of the chemical to dissolve in one experiment and not to dissolve in another experiment then this prior is updated to the uniform distribution on the interval [0, 1]. This is obtained by applying Bayes' theorem to the data set consisting of one observation of dissolving and one of not dissolving, using the above prior. The Haldane prior is an improper prior distribution (meaning that it does not integrate to 1) that puts 100% of the probability content at either p = 0 or at p = 1 if a finite number of observations have given the same result. Harold Jeffreys devised a systematic way for designing uninformative proper priors for e.g., Jeffreys prior p−1/2(1 − p)−1/2 for the Bernoulli random variable.[clarification needed Not sure everybody agrees with this assertion.]
Priors can be constructed which are proportional to the Haar measure if the parameter space X carries a natural group structure which leaves invariant our Bayesian state of knowledge (Jaynes, 1968). This can be seen as a generalisation of the invariance principle used to justify the uniform prior over the three cups in the example above. For example, in physics we might expect that an experiment will give the same results regardless of our choice of the origin of a coordinate system. This induces the group structure of the translation group on X, which determines the prior probability as a constant improper prior. Similarly, some measurements are naturally invariant to the choice of an arbitrary scale (e.g., whether centimeters or inches are used, the physical results should be equal). In such a case, the scale group is the natural group structure, and the corresponding prior on X is proportional to 1/x. It sometimes matters whether we use the left-invariant or right-invariant Haar measure. For example, the left and right invariant Haar measures on the affine group are not equal. Berger (1985, p. 413) argues that the right-invariant Haar measure is the correct choice.
Another idea, championed by Edwin T. Jaynes, is to use the principle of maximum entropy (MAXENT). The motivation is that the Shannon entropy of a probability distribution measures the amount of information contained in the distribution. The larger the entropy, the less information is provided by the distribution. Thus, by maximizing the entropy over a suitable set of probability distributions on X, one finds the distribution that is least informative in the sense that it contains the least amount of information consistent with the constraints that define the set. For example, the maximum entropy prior on a discrete space, given only that the probability is normalized to 1, is the prior that assigns equal probability to each state. And in the continuous case, the maximum entropy prior given that the density is normalized with mean zero and variance unity is the standard normal distribution. The principle of minimum cross-entropy generalizes MAXENT to the case of "updating" an arbitrary prior distribution with suitable constraints in the maximum-entropy sense.
A related idea, reference priors, was introduced by José-Miguel Bernardo. Here, the idea is to maximize the expected Kullback–Leibler divergence of the posterior distribution relative to the prior. This maximizes the expected posterior information about X when the prior density is p(x); thus, in some sense, p(x) is the "least informative" prior about X. The reference prior is defined in the asymptotic limit, i.e., one considers the limit of the priors so obtained as the number of data points goes to infinity. In the present case, the KL divergence between the prior and posterior distributions is given by
Here, is a sufficient statistic for some parameter . The inner integral is the KL divergence between the posterior and prior distributions and the result is the weighted mean over all values of . Splitting the logarithm into two parts, reversing the order of integrals in the second part and noting that does not depend on yields
The inner integral in the second part is the integral over of the joint density . This is the marginal distribution , so we have
Now we use the concept of entropy which, in the case of probability distributions, is the negative expected value of the logarithm of the probability mass or density function or Using this in the last equation yields
In words, KL is the negative expected value over of the entropy of conditional on plus the marginal (i.e. unconditional) entropy of . In the limiting case where the sample size tends to infinity, the Bernstein-von Mises theorem states that the distribution of conditional on a given observed value of is normal with a variance equal to the reciprocal of the Fisher information at the 'true' value of . The entropy of a normal density function is equal to half the logarithm of where is the variance of the distribution. In this case therefore where is the arbitrarily large sample size (to which Fisher information is proportional) and is the 'true' value. Since this does not depend on it can be taken out of the integral, and as this integral is over a probability space it equals one. Hence we can write the asymptotic form of KL as
where is proportional to the (asymptotically large) sample size. We do not know the value of . Indeed, the very idea goes against the philosophy of Bayesian inference in which 'true' values of parameters are replaced by prior and posterior distributions. So we remove by replacing it with and taking the expected value of the normal entropy, which we obtain by multiplying by and integrating over . This allows us to combine the logarithms yielding
This is a quasi-KL divergence ("quasi" in the sense that the square root of the Fisher information may be the kernel of an improper distribution). Due to the minus sign, we need to minimise this in order to maximise the KL divergence with which we started. The minimum value of the last equation occurs where the two distributions in the logarithm argument, improper or not, do not diverge. This in turn occurs when the prior distribution is proportional to the square root of the Fisher information of the likelihood function. Hence in the single parameter case, reference priors and Jeffreys priors are identical, even though Jeffreys has a very different rationale.
Reference priors are often the objective prior of choice in multivariate problems, since other rules (e.g., Jeffreys' rule) may result in priors with problematic behavior.[clarification needed A Jeffreys prior is related to KL divergence?]
Objective prior distributions may also be derived from other principles, such as information or coding theory (see e.g. minimum description length) or frequentist statistics (see frequentist matching). Such methods are used in Solomonoff's theory of inductive inference
Philosophical problems associated with uninformative priors are associated with the choice of an appropriate metric, or measurement scale. Suppose we want a prior for the running speed of a runner who is unknown to us. We could specify, say, a normal distribution as the prior for his speed, but alternatively we could specify a normal prior for the time he takes to complete 100 metres, which is proportional to the reciprocal of the first prior. These are very different priors, but it is not clear which is to be preferred. Jaynes' often-overlooked method of transformation groups can answer this question in some situations.
Similarly, if asked to estimate an unknown proportion between 0 and 1, we might say that all proportions are equally likely, and use a uniform prior. Alternatively, we might say that all orders of magnitude for the proportion are equally likely, the logarithmic prior, which is the uniform prior on the logarithm of proportion. The Jeffreys prior attempts to solve this problem by computing a prior which expresses the same belief no matter which metric is used. The Jeffreys prior for an unknown proportion p is p−1/2(1 − p)−1/2, which differs from Jaynes' recommendation.
Practical problems associated with uninformative priors include the requirement that the posterior distribution be proper. The usual uninformative priors on continuous, unbounded variables are improper. This need not be a problem if the posterior distribution is proper. Another issue of importance is that if an uninformative prior is to be used routinely, i.e., with many different data sets, it should have good frequentist properties. Normally a Bayesian would not be concerned with such issues, but it can be important in this situation. For example, one would want any decision rule based on the posterior distribution to be admissible under the adopted loss function. Unfortunately, admissibility is often difficult to check, although some results are known (e.g., Berger and Strawderman 1996). The issue is particularly acute with hierarchical Bayes models; the usual priors (e.g., Jeffreys' prior) may give badly inadmissible decision rules if employed at the higher levels of the hierarchy.
Let events be mutually exclusive and exhaustive. If Bayes' theorem is written as
then it is clear that the same result would be obtained if all the prior probabilities P(Ai) and P(Aj) were multiplied by a given constant; the same would be true for a continuous random variable. If the summation in the denominator converges, the posterior probabilities will still sum (or integrate) to 1 even if the prior values do not, and so the priors may only need to be specified in the correct proportion. Taking this idea further, in many cases the sum or integral of the prior values may not even need to be finite to get sensible answers for the posterior probabilities. When this is the case, the prior is called an improper prior. However, the posterior distribution need not be a proper distribution if the prior is improper. This is clear from the case where event B is independent of all of the Aj.
Statisticians sometimes use improper priors as uninformative priors. For example, if they need a prior distribution for the mean and variance of a random variable, they may assume p(m, v) ~ 1/v (for v > 0) which would suggest that any value for the mean is "equally likely" and that a value for the positive variance becomes "less likely" in inverse proportion to its value. Many authors (Lindley, 1973; De Groot, 1937; Kass and Wasserman, 1996) warn against the danger of over-interpreting those priors since they are not probability densities. The only relevance they have is found in the corresponding posterior, as long as it is well-defined for all observations. (The Haldane prior is a typical counterexample.[clarification needed])
Examples of improper priors include:
- Carlin, Bradley P.; Louis, Thomas A. (2008). Bayesian Methods for Data Analysis (Third ed.). CRC Press. ISBN 9781584886983.
- This prior was proposed by J.B.S. Haldane in "A note on inverse probability", Mathematical Proceedings of the Cambridge Philosophical Society 28, 55–61, 1932, doi:10.1017/S0305004100010495. See also J. Haldane, "The precision of observed values of small frequencies", Biometrika, 35:297–300, 1948, doi:10.2307/2332350, JSTOR 2332350.
- Jaynes (1968), pp. 17, see also Jaynes (2003), chapter 12. Note that chapter 12 is not available in the online preprint but can be previewed via Google Books.
- Christensen, Ronald; Johnson, Wesley; Branscum, Adam; Hanson, Timothy E. (2010). Bayesian Ideas and Data Analysis : An Introduction for Scientists and Statisticians. Hoboken: CRC Press. p. 69. ISBN 9781439894798.
- Rubin, Donald B.; Gelman, Andrew; John B. Carlin; Stern, Hal (2003). Bayesian Data Analysis (2nd ed.). Boca Raton: Chapman & Hall/CRC. ISBN 1-58488-388-X. MR 2027492.
- Berger, James O. (1985). Statistical decision theory and Bayesian analysis. Berlin: Springer-Verlag. ISBN 0-387-96098-8. MR 0804611.
- Berger, James O.; Strawderman, William E. (1996). "Choice of hierarchical priors: admissibility in estimation of normal means". Annals of Statistics. 24 (3): 931–951. doi:10.1214/aos/1032526950. MR 1401831. Zbl 0865.62004.
- Bernardo, Jose M. (1979). "Reference Posterior Distributions for Bayesian Inference". Journal of the Royal Statistical Society, Series B. 41 (2): 113–147. JSTOR 2985028. MR 0547240.
- James O. Berger; José M. Bernardo; Dongchu Sun (2009). "The formal definition of reference priors". Annals of Statistics. 37 (2): 905–938. arXiv: . doi:10.1214/07-AOS587.
- Jaynes, Edwin T. (Sep 1968). "Prior Probabilities" (PDF). IEEE Transactions on Systems Science and Cybernetics. 4 (3): 227–241. doi:10.1109/TSSC.1968.300117. Retrieved 2009-03-27.
- Jaynes, Edwin T. (2003). Probability Theory: The Logic of Science. Cambridge University Press. ISBN 0-521-59271-2.
- Williamson, Jon (2010). "review of Bruno di Finetti. Philosophical Lectures on Probability" (PDF). Philosophia Mathematica. 18 (1): 130–135. doi:10.1093/philmat/nkp019. Archived from the original (PDF) on 2011-06-09. Retrieved 2010-07-02. | <urn:uuid:7c000d99-352a-45b6-8bf2-2f5498895245> | 3.546875 | 4,269 | Knowledge Article | Science & Tech. | 44.570408 | 95,502,580 |
Rise in Carbon Emissions According to New Study
Carbon dioxide emissions are rising at a higher rate than any time since the dinosaurs walked the earth, scientists have found in a new study.
Published Monday in the journal Nature Geoscience, the findings reveal that our planet's carbon release rates are higher right now than they've been in millions of years, and that's especially bad for our oceans.
(MORE: Report Ties Climate Change to Extreme Weather Events)
“Given currently available records, the present anthropogenic carbon release rate is unprecedented during the past 66 million years,” the researchers wrote in the study.
The scientists looked at sediments off the New Jersey coast, as well as climate computer models, to gather more information about the organisms that lived and died during the period known as the Paleocene-Eocene Thermal Boundary (PETM). The PETM occurred about 56 million years ago, and during this period, average global temperatures rose as much as 14 degrees Fahrenheit, scientists believe.
One reason for the temperature change was that there was a massive release of carbon dioxide, and because the oceans are known to absorb much of the greenhouse gas, acidity levels can spike and marine life is in danger of a widespread extinction event in a short amount of time.
The scientists studied the PETM because it's the time period that closest resembles what's happening now with carbon dioxide emissions, Richard Zeebe, study co-author from the University of Hawaii at Manoa, told Mashable. But during the PETM, the study found annual carbon emissions were about 1.1 billion tons, spread over 4,000 years. Currently, carbon emissions are about 10 billion tons per year.
The study connected the dots, and the scientists' findings paint a bad picture for the future of marine life.
“Future ecosystem disruptions are likely to exceed the relatively limited extinctions observed at the PETM,” Zeebe told Reuters.
MORE ON WEATHER.COM: Illustrations of Sea Level Rise | <urn:uuid:c0cc133e-8d6b-4199-8c37-b4560c59681e> | 3.8125 | 414 | News Article | Science & Tech. | 32.917842 | 95,502,586 |
The map shows the risk of water flowing over land (runoff) carrying potential pollutants into water courses. It covers most of Scotland’s cultivated agricultural land area.
The runoff risk is shown in 3 classes: Low, Moderate or High.
|Low runoff risk||Soils can store large volumes of water or can allow water to quickly infiltrate and so surface runoff is limited.|
|Moderate runoff risk||Soils have a moderate capacity to store rainfall or to allow water to infiltrate. Soils will reach saturation under some circumstances, leading to runoff.|
|High runoff risk||Soils have a limited capacity to store rainfall or to allow water to infiltrate. The soil will quickly saturate, leading to rapid runoff.|
The digital dataset gives information on the likelihood of water running off the land, carrying potential pollutants with it. The runoff risk is given in 3 classes: Low, Moderate or High.
The risk of runoff depends on how easily water can drain away from the soil surface. It also depends on how much water the soil can store. These in turn depend on fundamental soil characteristics such as porosity and flow pathways through the soil.
Each of the soils in the Soil Map of Scotland (partial cover) dataset was first allocated to one of 29 Hydrology of Soil Type (HOST) classes and then the Standard Percentage Runoff for these classes was determined from Boorman et al. (1995). These runoff values were then allocated to one of 3 classes that reflected the likelihood of a soil becoming saturated leading to water flowing over the land. The three classes, Low, Moderate or High, equate to less than 20, 20 to 40 and more than 40 percent runoff. Where the soil map units were described as complexes (that is, more than one soil type if found in a soil map unit), the precautionary principle was applied and the soil at most risk of generating runoff was used to describe the whole map unit.
The map will be updated when new areas of digitised soil information become available.
You can click on the map, or insert a grid reference or post code, to find out the risk of runoff occurring at that point. You can also download the map data from the James Hutton Institute data download page.
Be aware: This map is produced at a fixed scale; zooming-in does not change the resolution of the map.
Please cite as: Lilly, A. and Baggaley N.J. 2018. Runoff risk map of Scotland (partial cover). James Hutton Institute, Aberdeen.
This work was partly funded by the Rural & Environment Science & Analytical Services Division of the Scottish Government.
Boorman, D.B., Hollis, J.M and Lilly, A. 1995. Hydrology of soil types: a hydrologically-based classification of the soils of the United Kingdom. Institute of Hydrology Report No.126. Institute of Hydrology, Wallingford.
Adobe Acrobat Reader is the free, trusted leader for reliably viewing, annotating and signing PDFs.
Download Adobe Acrobat Reader | <urn:uuid:599a2e24-34c5-4b73-9bbc-f079f7095f52> | 3.859375 | 632 | Knowledge Article | Science & Tech. | 51.046227 | 95,502,596 |
BaryonsBaryons are massive particles consisting of three quarks in the standard model. Protons, neutrons and the lambda, sigma, xi, and omega particles are baryons. Baryons are distinct from mesons in that mesons are composed of only two quarks. Baryons and mesons, being subject to the strong force, are all members of the family of hadrons. Baryons are fermions - having spin 1/2 and subject to the Pauli exclusion principle - while the mesons are bosons, with spin 1, and not subject to the exclusion principle.
All particle interactions obey the 'Law if Conservation of Baryon Number', in addition to the other conservation laws.
The conservation of baryon number is an important rule for interactions and decays of baryons. No known interactions violate conservation of baryon number.
Recent experimental evidence shows the existence of five-quark combinations which are being called pentaquarks. The pentaquark would be included in the classification of baryons,being a combination of an ordinary baryon plus a meson, having a 'net quark number' of three. | <urn:uuid:a9ea9074-18cb-449c-a3dc-74c5fb684183> | 3.765625 | 246 | Knowledge Article | Science & Tech. | 37.401865 | 95,502,597 |
Conduit has a cross section 482 mm. Maybe put it into 6 conductors with a cross section S2 $mm2?
Leave us a comment of example and its solution (i.e. if it is still somewhat unclear...):
Showing 0 comments:
Be the first to comment!
To solve this example are needed these knowledge from mathematics:
Next similar examples:
- Pipe cross section
The pipe has an outside diameter 1100 mm and the pipe wall is 100 mm thick. Calculate the cross section of this pipe.
The length of segment AB is 24 cm and the point M and N divided it into thirds. Calculate the circumference and area of this shape.
- 10 pieces
How to divide the circle into 10 parts (geometrically)?
Calculate area of the quatrefoil which is inscribed in a square with side 6 cm.
- Roof tiles
The roof has a trapezoidal shape with bases of 15 m and 10 m, height of roof is 4 meters. How many tiles will need if on 1 m2 should be used 8 tiles?
- Basket of fruit
In six baskets, the seller has fruit. In individual baskets, there are only apples or just pears with the following number of fruits: 5,6,12,14,23 and 29. "If I sell this basket," the salesman thinks, "then I will have just as many apples as a pear." Which
- Diameters of circles
How many percent of the area of a larger circle is a smaller circle if the smaller circle has a diameter 120 mm and a larger one has a diameter 300 mm?
- Circular lawn
Around a circular lawn area is 2 m wide sidewalk. The outer edge of the sidewalk is curb whose width is 2 m. Curbstone and the inner side of the sidewalk together form a concentric circles. Calculate the area of the circular lawn and the result round to 1
- The dice
What is the probability of events that if we throw a dice is rolled less than 6?
If you give me two antennas will be same. If you give me again your two antenna I have a 5× so many than you. How many antennas have both mans?
If Alena give Lenka 3 candy will still have 1 more candy. If Lenka give Alena 1 candy Alena will hame twice more than Lenka. How many candies have each of them?
- Three friends
The three friends spent 600 KC in a teahouse. Thomas paid twice as much as Paul. Paul a half less than Zdeněk. How many each paid?
- In the orchard
In the orchard, they planted 25 apple trees, 20 pears, 15 plums and 40 marbles. A strong late frost, however, destroyed a fifth of all new trees. Unfortunately, it was all the trees of one kind of fruit. What is the probability that the plums have died out
- Functions f,g
Find g(1) if g(x) = 3x - x2 Find f(5) if f(x) = x + 1/2
- One half
One half of ? is: ?
- 6 terms
Find the first six terms of the sequence. a1 = 7, an = an-1 + 6
- Theorem prove
We want to prove the sentense: If the natural number n is divisible by six, then n is divisible by three. From what assumption we started? | <urn:uuid:56a85a81-0937-41e5-8101-fb1093c4efba> | 3.5625 | 732 | Tutorial | Science & Tech. | 81.325713 | 95,502,603 |
In the quest for evidence of dark matter, physicist Andrea Pocar of UMass Amherst and his students have played a role in designing and building a part of the argon-based DarkSide-50 detector in Italy.
In researchers' quest for evidence of dark matter, physicist Andrea Pocar of the University of Massachusetts Amherst and his students have played an important role in designing and building a key part of the argon-based DarkSide-50 detector located underground in Italy's Gran Sasso National Laboratory.
Undergraduate Arthur Kurlej welding the grid electrode onto its support ring inside the radon-suppressed clean room in the underground laboratory at Gran Sasso, Italy. Kurlej's advisor, physics professor Andrea Pocar, says undergraduates were notably helpful in designing and making this component of the dark matter detector there, known as DarkSide-50.
Credit: UMass Amherst
This week, scientists from around the world who gathered at the University of California, Los Angeles, at the Dark Matter 2018 Symposium learned of new results in the search for evidence of the elusive material in Weakly Interacting Massive Particles (WIMPs) by the DarkSide-50 detector. WIMPs have been candidate dark matter particles for decades, but none have been found to date.
Pocar says the DarkSide detector has demonstrated the great potential of liquid argon technology in the search for so-called "heavy WIMPs," those with mass of about 100 to 10,000 times the mass of a proton. Further, he adds, the double-phase argon technique used by the DarkSide-50 detector has unexpected power in the search for "low-mass WIMPs," with only 1-10 times the mass of a proton.
He adds, "The component we made at UMass Amherst, with very dedicated undergraduates involved from the beginning, is working very well. It's exciting this week to see the first report of our success coming out at the symposium." His graduate student Alissa Monte, who has studied surface and radon-related backgrounds using DarkSide-50, will present a poster at the UCLA meeting.
Pocar says, "There is a vibrant community of researchers around the world conducting competing experiments in this 'low mass' WIMP area. Over the past two years we collected data for a measurement we didn't expect to be able to make. At this point we are in a game we didn't think we could be in. We are reporting the high sensitivity we have achieved with the instrument, which is performing better than expected." Sensitivity refers to the instrument's ability to distinguish between dark matter and background radiation.
Dark matter, Pocar explains, represents about 25 percent of the energy content of the universe and while it has mass that can be inferred from gravitational effects, physicists have great difficulty detecting and identifying it because it hardly interacts, if at all, with "regular" matter through other forces. "Dark matter doesn't seem to want to interact much at all with the matter we know about," the physicist notes.
The DarkSide-50 detector uses 50 kg (about 110 lbs.) of liquid argon in a vat, with a small pocket of argon gas at the top, Pocar explains, as a target to detect WIMPs. The researchers hope for a WIMP to hit the nucleus of an argon atom in the tank, which then can be detected by the ionization produced by the nuclear recoil in the surrounding argon medium. Some of the ionization signal, proportional to the energy deposited inside the detector, is collected by applying an electric field to the target, he explains.
A flash of light is also produced in the argon with ionization, Pocar says. For high-enough energy events, the light pulse is bright enough to be used to tell the difference in "signature" between a nuclear recoil like that induced by a WIMP, and electron recoils induced by background or environmental radioactivity.
Pocar's lab designed, made and installed one of the electrodes that apply the electric field. He says, "For low-mass WIMPs, the amount of energy transmitted to the nucleus of argon by a WIMP is incredibly tiny. It's like hitting a billiard ball with a slow ping-pong ball. But a key thing for us is that now with two years of data, we have an exquisite understanding of our detector and we understand all non-WIMP events very well. Once you understand your detector, you can apply all that understanding in search mode, and plan for follow-up experiments."
Cristiano Galbiati, spokesperson for the DarkSide project, said at this week's symposium, "This is the best way to start the adventure of the future experiment DarkSide-20k. The results of DarkSide-50 provide great confidence on our technological choices and on the ability to carry out a compelling discovery program for dark matter. If a detector technology will ever identify convincingly dark matter induced events, this will be it."
The DarkSide-50 apparatus and experiments are supported by the Italian National Institute for Nuclear Physics, which operates the Gran Sasso Laboratory, the U.S. National Science Foundation, and funding agencies at collaborating institutions in Brazil, China, France, Poland, Spain and Russia.
Janet Lathrop | EurekAlert!
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:1f0c9b0e-c929-403b-a77e-b5b0c3c45c0b> | 2.84375 | 1,693 | Content Listing | Science & Tech. | 41.013829 | 95,502,604 |
Frauenheim, Karin; Neumann, V; Thiel, Hjalmar; Türkay, Michael (1989): Benthos fauna in the North Sea from 2 surveys in 1986 and 1987. PANGAEA, https://doi.org/10.1594/PANGAEA.717149, Supplement to: Frauenheim, K et al. (1989): The distribution of the larger epifauna during summer and winter in the North Sea and its suitability for environmental monitoring. Senckenbergiana maritima, 20(3/4), 101-118, http://www.schweizerbart.de:80/pubs/books/sng/senckenber-190702003-desc.html
Always quote above citation when using data! You can download the citation in several formats below.
During two surveys in the North Sea, in summer 1986 and in winter 1987, larger epibenthos was collected with a 2 m beam trawl. The distributions of the species were checked for average linkage by means of the JACCARD-index cluster analysis. In summer two main clusters can be recognized. These are situated to the north and to the south of the Dogger Bank. In winter two main clusters may be recognized as well, but these clusters divide the North Sea into a western and an eastern part. We conclude, that these differences of epibenthos characteristics are correlated with seasonal changes in water body distributions.
Median Latitude: 57.446034 * Median Longitude: 1.619241 * South-bound Latitude: 51.203330 * West-bound Longitude: -4.500000 * North-bound Latitude: 61.616700 * East-bound Longitude: 10.666700
Date/Time Start: 1986-05-03T19:19:00 * Date/Time End: 1987-03-06T05:01:00
VA44_1/00 * Latitude: 56.670000 * Longitude: 2.983300 * Date/Time: 1986-05-03T19:19:00 * Elevation: -69.0 m * Location: North Sea * Campaign: VA44 (ZISCH) * Basis: Valdivia (1961) * Device: Multiple investigations (MULT)
VA44_100/00 * Latitude: 58.150000 * Longitude: 8.900000 * Date/Time: 1986-06-04T20:40:00 * Elevation: -470.0 m * Location: Skagerrak * Campaign: VA44 (ZISCH) * Basis: Valdivia (1961) * Device: Multiple investigations (MULT) | <urn:uuid:bf312fd8-309a-4b05-b236-d074efbbd26e> | 3.1875 | 572 | Academic Writing | Science & Tech. | 74.304211 | 95,502,632 |
Leiopathes glaberrima is a tall arborescent black coral species structuring important facies of the deep-sea rocky bottoms of the Mediterranean Sea that are severely stifled by fishing activities. At present, however, no morphological in vivo description, ecological characterization, age dating and evaluation of the possible conservation actions have ever been made for any population of this species in the basin. A dense coral population was reported during two Remotely Operated Vehicle (ROV) surveys conducted on a rocky bank off the SW coasts of Sardinia (Western Mediterranean Sea). L. glaberrima forms up to 2 m-tall colonies with a maximal observed basal diameter of nearly 7 cm. The radiocarbon dating carried out on a colony from this site with a 4 cm basal diameter revealed an approximately age of 2000 years. Considering the size-frequency distribution of the colonies in the area it is possible to hypothesize the existence of other millennial specimens occupying a supposedly very stable ecosystem. The persistence of this ecosystem is likely guaranteed by the heterogeneous rocky substrate hosting the black coral population that represents a physical barrier against the mechanical impacts acted on the surrounding muddy areas, heavily exploited as trawling fishing grounds. This favorable condition, together with the existence of a nursery area for catsharks within the coral ramifications and the occurrence of a meadow of the now rare soft bottom alcyonacean Isidella elongata in small surviving muddy inclaves, indicates that this ecosystem have to be considered a pristine Mediterranean deep-sea coral sanctuary that would deserve special protection.
|Titolo:||Persistence of pristine deep-sea coral gardens in the Mediterranean Sea (SW Sardinia)|
|Data di pubblicazione:||2015|
|Tipologia:||1.1 Articolo in rivista| | <urn:uuid:94f6e140-62b6-44a1-800c-0ab1d714dd7e> | 2.859375 | 371 | Knowledge Article | Science & Tech. | 7.193919 | 95,502,633 |
Hello World in Python
Python was created by Guido van Rossum.
The first version of Python as released in the year 1991.
Python is an interpreted language.
Python's design philosophy encourages code readability.
Python creates code blocks via indentation as opposed to using curly braces like other Programming languages.
Python is both Functional and Object Oriented Programming language. You can run a Python program without creating a class.
Python programs can be developed via an IDE like Eclipse IDE or IntelliJ PyCharms or BOA Constructor.
However for beginners it is recommended to start programming via a text editor or IDLE so that you can get a firm grasp of the concepts of the language.
We have started this training module with a hello world program as displayed above.
All programming training in cosmiclearn.com will be started with a hello world module.
You can use the left hand navigation to jump directly to a Python module you are interested in.
Alternative you can use previous and next links in a given module to go step by step. Happy learning! | <urn:uuid:c006f72c-dd59-4fea-825c-b1d4a0bffd91> | 3.34375 | 222 | Product Page | Software Dev. | 48.485 | 95,502,657 |
Corals are the building blocks of marine habitats and oxygen-giving marine organisms. They cover only about one percent of the ocean floor, but they have a huge effect on the health of the rest of the ocean and the planet. The Florida Aquarium is working to protect and restore oceans and raise awareness about the threats coral reefs face (increasing water temperature, pollution and overfishing).
According to a recent press release, Rachel Serafin, a coral biologist at The Florida Aquarium, spoke at the World Aquaculture Society Conference held recently in Las Vegas.
Serafin spoke about The Florida Aquarium’s unprecedented success in coral reproduction, and this year has been the Aquarium’s most successful year to date in coral reproduction. The Aquarium currently has roughly 600 coral juveniles that have survived and flourished from last August’s spawning event (when corals release eggs and sperm in the water at the same time to reproduce).
“Speaking at this conference, on a global stage, is a necessary step forward for coral restoration. Corals are not your typical cute, cuddly animal. They are often forgotten, but they are vital to the health of our oceans. Speaking at such a prestigious conference allows us to bring attention to this critical issue before it is too late and all our reefs are beyond repair,” said Serafin.
“Aquaculture is the rearing of aquatic animals such red drum or snook, and coral is no different,” said Serafin. “While some rear fish to replenish wild populations, we rear corals to help replenish those wild populations that are in dire need of our help.”
There are different types of coral aquaculture practices that The Florida Aquarium uses to aid in coral restoration, but genetic creation through sexual reproduction was the focus of Serafin’s presentation.
The Florida Aquarium is a leading partner during the annual staghorn coral spawn in the Florida Keys. The annual coral spawn gives corals their only chance to sexually reproduce on their own and build future coral reefs, and this process is vital for scientists to witness for coral research and conservation.
Every year, The Florida Aquarium and other partners dive 30 feet below the ocean’s surface in Tavernier Key, expertly collecting the spawn from the Coral Restoration Foundation’s coral nursery, and deliver them to teams aboard research boats. Those teams immediately begin the fertilization process using the bundles of eggs and sperm (gametes) and rush them to on-shore labs to maximize the development of embryos and ultimately free-swimming larvae. Some of the larvae were released to the wild, while others were brought back to grow at The Florida Aquarium’s Center for Conservation in Apollo Beach, FL and at other partner institutions.
Powered by WPeMatico | <urn:uuid:f5b9a535-9142-409b-ae67-c4432deb90db> | 3.5625 | 588 | News (Org.) | Science & Tech. | 29.917719 | 95,502,674 |
Species Detail - Little Grebe (Tachybaptus ruficollis) - Species information displayed is based on all datasets.
Terrestrial Map - 10kmDistribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50kmDistribution of the number of records recorded within each 50km grid square (WGS84).
Protected Species: Wildlife Acts || Threatened Species: Birds of Conservation Concern || Threatened Species: Birds of Conservation Concern >> Birds of Conservation Concern - Amber List
1 January (recorded in 2017)
31 December (recorded in 2010)
National Biodiversity Data Centre, Ireland, Little Grebe (Tachybaptus ruficollis), accessed 21 July 2018, <https://maps.biodiversityireland.ie/Species/10020> | <urn:uuid:97aa7509-ab97-4668-b2b7-5fc44f593c24> | 2.65625 | 178 | Structured Data | Science & Tech. | 23.693158 | 95,502,682 |
Coastal & Marine Communities
Enhancing the Performance of Marine Reserves in Estuaries: Just Add Water. 2017. Gilby, B.L. (University of the Sunshine Coast, Queensland, Australia, email@example.com), A.D. Olds, N.A. Yabsley, R.M. Connolly, P.S. Maxwell and T.A. Schlacher. Biological Conservation 210:1–7. dx.doi.org/10.1016/j.biocon.2017.03.027
Marine reserves are an important tool for coastal conservation. Their implementation in estuaries, one of the world's most impacted and arguably important ecosystems, are seldom tested. Establishing reserves usually comes with a cost, especially for commercial and recreational fisheries. In order to avoid confrontation, reserves are often placed in "residual" locations, which in turn reduce their effectiveness. In this study, Gilby and colleagues evaluated the effectiveness of an estuarine reserve network in Eastern Australia. They sampled six no-take estuarine reserves in residual locales and compared abiotic (i.e., size, tidal estuarine width range, habitat composition) and biotic characteristics (fish assemblage, abundance and composition) to other nearby non-reserve estuaries where commercial fishing was permitted. Results showed that no-take reserves established in less impactful places did not positively affect fish abundance. Usually, reserves fail in promoting fish abundance and diversity if the habitats they support have little or no ecological value for fish, or if they represent only a small subset of the abiotic conditions present at a regional scale. The reserves analyzed here were consistently smaller, had higher proportions of intertidal sand flat cover, and were less connected to the ocean than the fished areas. The authors conclude that placing reserves in small estuaries does not aid in the protection of commercially important fish species; conservation efforts could be better directed elsewhere.
Reintroduction of a Dioecious Aquatic Macrophyte (Stratiotes aloides L.) Regionally Extinct in the Wild. Interesting Answers from Genetics. 2017. Orsenigo S., R. Gentili (University of Milano-Bicocca, Milano, Italy, firstname.lastname@example.org), A.J.P. Smolders, A. Efremov, G. Rossie, N.M.G. Ardenghi, S. Citterio and T. Abeli. Aquatic Conservation: Marine and Freshwater Ecosystems 27:10–23. doi: 10.1002/aqc.2626
A common restoration practice for locally extant species is to introduce individuals from nearby populations. Recreating the population's genetic make-up is difficult at best and can limit establishment due to founder effects. This is more important in the case of dioecious plants, where sex ratio is as critical as inbreeding potential. Stratiotes aloides (water soldiers) is a dioecious aquatic plant, threatened in Europe and currently extinct in Italy due to high nitrogen loads from agricultural runoff. S. aloides stands support high macroinvertebrate diversity and are considered a keystone species supporting several endangered vertebrate and invertebrate species. Two remnant populations in Italy exist (kept by amateur botanists), composed solely of asexually reproductive females. In this study, Orsenigo and colleagues used DNA analyses to characterize genetic patterns of these populations to determine if females were genetically diverse enough to maintain a healthy population. They sampled six natural populations throughout Europe and one ex situ population cultivated from Botanical Garden of Berlin specimen in order to determine which males were suitable for reintroduction. An unexpected level of high genetic variation in the female-only populations was reported. Investigators speculate that founder effects were absent due to increased gene flow from floating propagules or by the presence of hermaphrodite or tetraploidy plants. In terms of genetic similarity, Italian populations clustered with Romanian and Dutch populations, making these the best sources for male introduction in Italy. This study highlights the use of genetics in determining the best population selection for species reintroduction. The authors conclude that practitioners must consider sex ratio and interpopulation ecological differences prior to population restoration. [End Page 266]
Algal Subsidies Enhance Invertebrate Prey for Threatened Shorebirds: A Novel Conservation Tool on Ocean Beaches? 2017. Schlacher, T.A. (The University of the Sunshine Coast, Maroochydore, Australia, email@example.com), B.M. Hutton, B.L. Gilby, N. Porch, G.S. Maguire, B. Maslo, R.M. Connolly, A.D. Olds and M.A. Weston. Estuarine, Coastal and Shelf Science 191:28–38. dx.doi.org/10.1016/j.ecss.2017.04.004
Anthropogenic activities cause declines in numerous shore bird species... | <urn:uuid:28e8e539-5a02-4381-b21a-7fa87db2dc9b> | 2.71875 | 1,060 | Content Listing | Science & Tech. | 34.043693 | 95,502,697 |
Washington: NASA`s Solar Dynamics Observatory (SDO) satellite has spotted what astronomers say is one of the largest sunspots to have appeared on the Sun in years and is
likely to shoot solar flares towards the Earth by next week.
The massive sunspot, called AR1339, has been estimated to be about 80,000km long and 40,000km wide -- almost eight times as big as the Earth, SpaceWeather.com reported.
The spacecraft`s photos of the giant sunspot, spotted on November 3, show the solar region as it comes into view on the northeastern edge, or limb, of the Sun.
The sunspot behemoth isn`t yet facing our planet, but is expected to shoot an X-Class solar flare towards us when it will move in Earth`s direction next week, astronomers said.
X-class solar flares on low scales are not too dangerous for the planet, but if one of these sunspots shoots a high end X-class number toward us, it could put most of the modern
society world into the dark ages.
The sunspot is actually a group of nearby darkened spots on the Sun, some of which are individually wider than planet Earth. It appears when intense magnetic activity ramps up on the sun, blocking the flow of heat through the process of convection, which causes areas of the sun`s surface to cool down. These isolated areas then appear dimmer than the
surrounding area, creating a dark spot, LiveScience reported.
The intense magnetic activity around sunspots can often cause solar flares, which are large releases of energy that can actually brighten up the Sun. Flares are also accompanied by flows of charged particles out into space, called coronal mass ejections, which can wreak havoc on satellites and power grids on Earth if they head our way.
The National Oceanic and Atmospheric Administration (NOAA) has predicted a 50 per cent chance of medium-class M solar flares over the next 24 hours due to this sunspot.
As the sunspot turns towards the Earth in the coming days, we may be in for a greater chance of these ejections, the scientists said. | <urn:uuid:dce39624-e3cf-491c-a3e0-001b1b14a3a6> | 3.4375 | 449 | News Article | Science & Tech. | 48.711023 | 95,502,707 |
TagsAeolis Mons Arizona State University ASU atmosphere Beautiful Mars Cape Byron Cape Tribulation channels clouds craters Curiosity dunes dust Endeavour Crater Gale Crater High Resolution Imaging Science Experiment HiRISE Malin Space Science Systems Marathon Valley MARCI Mars Color Imager Mars Exploration Rover Mars Odyssey Mars Reconnaissance Orbiter Mars Science Laboratory mass wasting MER Mount Sharp MRO MSL MSSS Murray Formation NASA Opportunity sand dunes Stimson Formation storms THEMIS THEMIS Image of the Day Thermal Emission Imaging System University of Arizona Vera Rubin Ridge volcanics weather wind
- CRISM: Compact Reconnaissance Imaging Spectrometer for Mars
- CTX: Context Camera
- HiRISE: High Resolution Imaging Science Experiment
- MARSIS: Mars Advanced Radar for Subsurface and Ionosphere Sounding
- SHARAD: Shallow Radar
- THEMIS: Thermal Emission Imaging System
- All Mars missions list
- Mars 2020 Rover
- Mars Atmosphere and Volatile Evolution Mission (MAVEN)
- Mars Exploration Rovers (MER)
- Mars Express (MEX)
- Mars Odyssey
- Mars Orbiter Mission (MOM) / Mangalyaan
- Mars Reconnaissance Orbiter (MRO)
- Mars Science Laboratory (MSL)
Tag Archives: dust devil tracks
THEMIS Image of the Day, July 2, 2018. Today’s VIS image shows dust devil tracks in Utopia Planitia. The tracks occur where dust devils have scoured the fine materials off the underlying surface. In some cases dust devils can create … Continue reading
A circular structure in lava in southwestern Elysium Planitia. Beautiful Mars series.
Well-preserved 4-kilometer impact crater on the floor of Hellas Planitia. Why do we say “well-preserved?” Mainly due to the fact that the rim of the crater is still very visible. Beautiful Mars series.
At around 2,200 kilometers in diameter, Hellas Planitia is the largest visible impact basin in the Solar System, and hosts the lowest elevations on Mars’ surface as well as a variety of landscapes. This image covers a small central portion … Continue reading
Within a crater in the Southern Highlands. This is a very ancient crater, long filled-in. You can see polygonal patterns in the terrain, mainly due to the contraction of subsurface ice as the seasons change. Beautiful Mars series.
Dunes and slopes. Beautiful Mars series.
The jagged saw-tooth dichotomy, over a grainy texture, seen in this close-up image, reminds us of a scene from an old silent horror movie. Stark and unnerving, like that time between dusk and darkness, as the campfire burns out…was that … Continue reading
Sliding ice block dunes. Beautiful Mars series.
THEMIS Image of the Day, June 21, 2017. Do you see what I see? Look out, a barrage of bullets is headed our way! (THEMIS Art #130) More THEMIS Images of the Day by geological topic.
Valleys on east rim and wall of Darwin Crater imaged by Viking — Viking Image 571B01. Beautiful Mars series. | <urn:uuid:4a3368bf-2772-4bcd-94dd-699538f658d3> | 2.640625 | 666 | Content Listing | Science & Tech. | 34.206652 | 95,502,720 |
Authors: Ilija Barukčić
The deterministic relationship between cause and effect is deeply connected with our understanding of the physical sciences and their explanatory ambitions. Though progress is being made, the lack of theoretical predictions and experiments in quantum gravity makes it difficult to use empirical evidence to justify a theory of causality at quantum level in normal circumstances, i. e. by predicting the value of a well-confirmed experimental result. For a variety of reasons, the problem of the deterministic relationship between cause and effect is related to basic problems of physics as such. Despite the common belief, it is a remarkable fact that a theory of causality should be consistent with a theory of everything and is because of this linked to problems of a theory of everything. Thus far, solving the problem of causality can help to solve the problems of the theory of everything (at quantum level) too.
Comments: 19 Pages. (C) Ilija Barukčić, Jever, Germany, 2015. Published by: International Journal of Applied Physics and Mathematics vol. 6, no. 2, pp. 45-65, 2016. http://dx.doi.org/10.17706/ijapm.2016.6.2.45-65
Unique-IP document downloads: 152 times
Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website.
Add your own feedback and questions here:
You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful. | <urn:uuid:1b92e1d5-eefe-4d5b-8321-2eb3509514d0> | 2.53125 | 403 | Academic Writing | Science & Tech. | 47.746154 | 95,502,721 |
Researchers working at the U.S. Department of Energy's (DOE) SLAC National Accelerator Laboratory have used powerful X-rays to help decipher how certain natural antibiotics defy a longstanding set of chemical rules – a mechanism that has baffled organic chemists for decades.
Their result, reported today in Nature, details how five carbon atoms and one oxygen atom in the structure of lasalocid, a natural antibiotic produced by bacteria in soil, can link into a six-membered ring through an energetically unfavorable chemical reaction. Unlocking this chemical pathway could enable scientists to synthesize many important chemicals currently found only in nature.
"Our study has a broad implication because the six-membered ring is a common structural feature found in hundreds of drug molecules produced by nature," said the study's principal investigator, Chu-Young Kim of the National University of Singapore. "We have actually analyzed the genes of six other organisms that produce similar drugs and we are now confident that the chemical mechanism we have uncovered applies to these other organisms as well."
According to "Baldwin's Rules for Ring Closure," which govern the way these rings form, this compound should contain a five-membered ring instead of the observed six-membered ring.
The solution to the molecular mystery depended in large part on a deeper understanding of the unique protein Lsd19, the catalyst that enables the formation of lasalocid's rings. To determine the protein's atomic structure, the researchers hit frozen crystals of Lsd19 with X-rays from SLAC's Stanford Synchrotron Radiation Lightsource and observed how the crystals diffracted the X-rays passing through. "You need atomic-level detail of the crystal's structure to understand what's really happening," said co-author Irimpan Mathews, a staff scientist at SLAC.
"The bugs have taught us a valuable chemistry lesson," Kim said. "With a new understanding of how nature synthesizes the six-membered rings, chemists may be able to develop novel methods that will enable us to produce these drugs with ease in the chemical laboratory. Alternatively, protein engineers may be able to use our results to develop a biofactory where these drugs are mass produced using a fermentation method. Either method will make more effective and more affordable drugs available to the public."
Kim's group has moved on to their next challenge: investigating how nature synthesizes the anti-cancer drug echinomycin. In the meantime, "The knowledge we have generated will help researchers in academia and industry to develop new methods for biological production of important polyether drugs," he said. "We are not talking about the distant future."
Additional authors included Kinya Hotta, Xi Chen, Hao Li and Kunchithapadam Swaminathan of the National University of Singapore, Robert S. Paton of Oxford University, Atsushi Minami and Hideaki Oikawa of Hokkaido University, Kenji Watanabe of the University of Shizuoka and Kendall N. Houk of the University of California Los Angeles. The research was supported by the Royal Commission for the Exhibition of 1851 and Fulbright-AstraZeneca Research Fellowship, the Japan Society for the Promotion of Science, the National Institutes of Health, the Ministry of Education, Culture, Sports, Science & Technology in Japan, and the National University of Singapore Life Sciences Institute Young Investigator Award. The Stanford Synchrotron Radiation Lightsource is supported by DOE's Office of Science and its Structural Molecular Biology program is also supported by the National Institutes of Health.
SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the U.S. Department of Energy Office of Science. To learn more, please visit http://www.slac.stanford.edu.
DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://www.science.energy.gov
Pollen taxi for bacteria
18.07.2018 | Technische Universität München
Biological signalling processes in intelligent materials
18.07.2018 | Albert-Ludwigs-Universität Freiburg im Breisgau
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
18.07.2018 | Life Sciences
18.07.2018 | Life Sciences
18.07.2018 | Information Technology | <urn:uuid:75dd18bb-7e77-4381-b2a8-289e57298c27> | 3.328125 | 1,486 | Content Listing | Science & Tech. | 33.373543 | 95,502,741 |
My name is Lê and I believe that the greatest challenge in education is to make science and math appealing.
This is why I aim at bringing enthusiasm and excitement to the readers’ learning experience.
Science4All is also available in French.
By Scott McKinney | Updated:2016-01 | Views: 7766
Over 300 years ago, a mathematician named Fermat discovered a subtle property about prime numbers. In the 1970’s, three mathematicians at MIT showed that his discovery could be used to formulate a remarkably powerful method for encrypting information to be sent online. The RSA algorithm, as it is known, is used to secure ATM transactions, online business, banking, and even electronic voting. Surprisingly, it’s not too difficult to understand, so let’s see how it works. | <urn:uuid:9dac29ad-81f2-4c88-a137-25cda697d179> | 2.890625 | 168 | About (Org.) | Science & Tech. | 43.867244 | 95,502,751 |
The Microsoft Build Engine is a platform for building applications. This engine, which is also known as MSBuild, provides an XML schema for a project file that controls how the build platform processes and builds software. Visual Studio uses MSBuild, but it doesn't depend on Visual Studio. By invoking msbuild.exe on your project or solution file, you can orchestrate and build products in environments where Visual Studio isn't installed.
Visual Studio uses MSBuild to load and build managed projects. The project files in Visual Studio (.csproj, .vbproj, .vcxproj, and others) contain MSBuild XML code that executes when you build a project by using the IDE. Visual Studio projects import all the necessary settings and build processes to do typical development work, but you can extend or modify them from within Visual Studio or by using an XML editor.
For information about MSBuild for C++, see MSBuild (Visual C++).
The following examples illustrate when you might run builds by using an MSBuild command line instead of the Visual Studio IDE.
Visual Studio isn't installed.
You want to use the 64-bit version of MSBuild. This version of MSBuild is usually unnecessary, but it allows MSBuild to access more memory.
You want to run a build in multiple processes. However, you can use the IDE to achieve the same result on projects in C++ and C#.
You want to modify the build system. For example, you might want to enable the following actions:
Preprocess files before they reach the compiler.
Copy the build outputs to a different place.
Create compressed files from build outputs.
Do a post-processing step. For example, you might want to stamp an assembly with a different version.
You can write code in the Visual Studio IDE but run builds by using MSBuild. As another alternative, you can build code in the IDE on a development computer but use an MSBuild command line to build code that's integrated from multiple developers.
You can use Team Foundation Build to automatically compile, test, and deploy your application. Your build system can automatically run builds when developers check in code (for example, as part of a Continuous Integration strategy) or according to a schedule (for example, a nightly Build Verification Test build). Team Foundation Build compiles your code by using MSBuild. For more information, see Build and release.
This topic provides an overview of MSBuild. For an introductory tutorial, see Walkthrough: Using MSBuild.
Use MSBuild at a command prompt
To run MSBuild at a command prompt, pass a project file to MSBuild.exe, together with the appropriate command-line options. Command-line options let you set properties, execute specific targets, and set other options that control the build process. For example, you would use the following command-line syntax to build the file MyProj.proj with the
Configuration property set to
MSBuild.exe MyProj.proj /property:Configuration=Debug
For more information about MSBuild command-line options, see Command-line reference.
Before you download a project, determine the trustworthiness of the code.
MSBuild uses an XML-based project file format that's straightforward and extensible. The MSBuild project file format lets developers describe the items that are to be built, and also how they are to be built for different operating systems and configurations. In addition, the project file format lets developers author reusable build rules that can be factored into separate files so that builds can be performed consistently across different projects in the product.
The following sections describe some of the basic elements of the MSBuild project file format. For a tutorial about how to create a basic project file, see Walkthrough: Creating an MSBuild project file from scratch.
Properties represent key/value pairs that can be used to configure builds. Properties are declared by creating an element that has the name of the property as a child of a PropertyGroup element. For example, the following code creates a property named
BuildDir that has a value of
<PropertyGroup> <BuildDir>Build</BuildDir> </PropertyGroup>
You can define a property conditionally by placing a
Condition attribute in the element. The contents of conditional elements are ignored unless the condition evaluates to
true. In the following example, the
Configuration element is defined if it hasn't yet been defined.
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
Properties can be referenced throughout the project file by using the syntax $(<PropertyName>). For example, you can reference the properties in the previous examples by using
For more information about properties, see MSBuild properties.
Items are inputs into the build system and typically represent files. Items are grouped into item types, based on user-defined item names. These item types can be used as parameters for tasks, which use the individual items to perform the steps of the build process.
Items are declared in the project file by creating an element that has the name of the item type as a child of an ItemGroup element. For example, the following code creates an item type named
Compile, which includes two files.
<ItemGroup> <Compile Include = "file1.cs"/> <Compile Include = "file2.cs"/> </ItemGroup>
Item types can be referenced throughout the project file by using the syntax @(<ItemType>). For example, the item type in the example would be referenced by using
In MSBuild, element and attribute names are case-sensitive. However, property, item, and metadata names are not. The following example creates the item type
comPile, or any other case variation, and gives the item type the value "one.cs;two.cs".
<ItemGroup> <Compile Include="one.cs" /> <comPile Include="two.cs" /> </ItemGroup>
Items can be declared by using wildcard characters and may contain additional metadata for more advanced build scenarios. For more information about items, see Items.
Tasks are units of executable code that MSBuild projects use to perform build operations. For example, a task might compile input files or run an external tool. Tasks can be reused, and they can be shared by different developers in different projects.
The execution logic of a task is written in managed code and mapped to MSBuild by using the UsingTask element. You can write your own task by authoring a managed type that implements the ITask interface. For more information about how to write tasks, see Task writing.
MSBuild includes common tasks that you can modify to suit your requirements. Examples are Copy, which copies files, MakeDir, which creates directories, and Csc, which compiles Visual C# source code files. For a list of available tasks together with usage information, see Task reference.
A task is executed in an MSBuild project file by creating an element that has the name of the task as a child of a Target element. Tasks typically accept parameters, which are passed as attributes of the element. Both MSBuild properties and items can be used as parameters. For example, the following code calls the MakeDir task and passes it the value of the
BuildDir property that was declared in the earlier example.
<Target Name="MakeBuildDirectory"> <MakeDir Directories="$(BuildDir)" /> </Target>
For more information about tasks, see Tasks.
Targets group tasks together in a particular order and expose sections of the project file as entry points into the build process. Targets are often grouped into logical sections to increase readability and to allow for expansion. Breaking the build steps into targets lets you call one piece of the build process from other targets without copying that section of code into every target. For example, if several entry points into the build process require references to be built, you can create a target that builds references and then run that target from every entry point where it's required.
Targets are declared in the project file by using the Target element. For example, the following code creates a target named
Compile, which then calls the Csc task that has the item list that was declared in the earlier example.
<Target Name="Compile"> <Csc Sources="@(Compile)" /> </Target>
In more advanced scenarios, targets can be used to describe relationships among one another and perform dependency analysis so that whole sections of the build process can be skipped if that target is up-to-date. For more information about targets, see Targets.
Use MSBuild in Visual Studio
Visual Studio uses the MSBuild project file format to store build information about managed projects. Project settings that are added or changed by using the Visual Studio interface are reflected in the .*proj file that's generated for every project. Visual Studio uses a hosted instance of MSBuild to build managed projects. This means that a managed project can be built in Visual Studio or at a command prompt (even if Visual Studio isn't installed), and the results will be identical.
For a tutorial about how to use MSBuild in Visual Studio, see Walkthrough: Using MSBuild.
By using Visual Studio, you can compile an application to run on any one of several versions of the .NET Framework. For example, you can compile an application to run on the .NET Framework 2.0 on a 32-bit platform, and you can compile the same application to run on the .NET Framework 4.5 on a 64-bit platform. The ability to compile to more than one framework is named multitargeting.
These are some of the benefits of multitargeting:
You can develop applications that target earlier versions of the .NET Framework, for example, versions 2.0, 3.0, and 3.5.
You can target frameworks other than the .NET Framework, for example, Silverlight.
You can target a framework profile, which is a predefined subset of a target framework.
If a service pack for the current version of the .NET Framework is released, you could target it.
Multitargeting guarantees that an application uses only the functionality that's available in the target framework and platform.
For more information, see Multitargeting.
|Walkthrough: Creating an MSBuild project file from scratch||Shows how to create a basic project file incrementally, by using only a text editor.|
|Walkthrough: Using MSBuild||Introduces the building blocks of MSBuild and shows how to write, manipulate, and debug MSBuild projects without closing the Visual Studio IDE.|
|MSBuild concepts||Presents the four building blocks of MSBuild: properties, items, targets, and tasks.|
|Items||Describes the general concepts behind the MSBuild file format and how the pieces fit together.|
|MSBuild properties||Introduces properties and property collections. Properties are key/value pairs that can be used to configure builds.|
|Targets||Explains how to group tasks together in a particular order and enable sections of the build process to be called on the command line.|
|Tasks||Shows how to create a unit of executable code that can be used by MSBuild to perform atomic build operations.|
|Conditions||Discusses how to use the
|Advanced concepts||Presents batching, performing transforms, multitargeting, and other advanced techniques.|
|Logging in MSBuild||Describes how to log build events, messages, and errors.|
|Additional resources||Lists community and support resources for more information about MSBuild.|
Links to topics that contain reference information.
Glossary Defines common MSBuild terms. | <urn:uuid:ccec01f8-b35a-47bb-b750-69415e617387> | 3.140625 | 2,438 | Documentation | Software Dev. | 43.475562 | 95,502,793 |
Computational principles underlying the recognition of acoustic signals in insects
Many animals produce pulse-like signals during acoustic communication. These signals exhibit structure on two time scales: they consist of trains of pulses that are often broadcast in packets—so called chirps. Temporal parameters of the pulse and of the chirp are decisive for female preference. Despite these signals being produced by animals from many different taxa (e.g. frogs, grasshoppers, crickets, bushcrickets, flies), a general framework for their evaluation is still lacking. We propose such a framework, based on a simple and physiologically plausible model. The model consists of feature detectors, whose time-varying output is averaged over the signal and then linearly combined to yield the behavioral preference. We fitted this model to large data sets collected in two species of crickets and found that Gabor filters—known from visual and auditory physiology—explain the preference functions in these two species very well. We further explored the properties of Gabor filters and found a systematic relationship between parameters of the filters and the shape of preference functions. Although these Gabor filters were relatively short, they were also able to explain aspects of the preference for signal parameters on the longer time scale due to the integration step in our model. Our framework explains a wide range of phenomena associated with female preference for a widespread class of signals in an intuitive and physiologically plausible fashion. This approach thus constitutes a valuable tool to understand the functioning and evolution of communication systems in many species.
KeywordsPerceptual decision making Insect Song Linear-nonlinear model Gabor filter
We thank Klaus-Gerhardt Heller for valuable discussions.
- Alexander, R.D. (1957). The song relationships of four species of ground crickets (Orthoptera: Gryllidae: Nemobius). Ohio Journal of Science, 57(3), 153–163.Google Scholar
- Bush, S.L., & Schul, J. (2005). Pulse-rate recognition in an insect: evidence of a role for oscillatory neurons. Journal of Comparative Physiology A: Sensory Neural, and Behavioral Physiology, 192, 1–9.Google Scholar
- Gerhardt, C.H., & Huber, F. (2002). Acoustic Communication in Insects and Anurans. Chicago: University of Chicago Press.Google Scholar
- Mitchell, M. (1998). An introduction to genetic algorithms (complex adaptive systems) (3rd printing ed.). A Bradford Book.Google Scholar
- Weissman, D.B., Gray, D.A., Pham, H.T., Tijssen, P. (2012). Billions and billions sold: Pet-feeder crickets (Orthoptera: Gryllidae), commercial cricket farms, an epizootic densovirus, and government regulations make for a potential disaster. Zootaxa, 3504, 67–88.Google Scholar | <urn:uuid:dd91cc49-3a62-42e8-ae3e-46b145bba6b0> | 3.125 | 608 | Academic Writing | Science & Tech. | 38.559353 | 95,502,810 |
- Referenced in 8127 articles
- traditional programming languages, such as C/C++ or Java™. You can use MATLAB for a range...
- Referenced in 2077 articles
- ILOG® CPLEX® offers C, C++, Java, .NET, and Python libraries that solve linear programming...
- Referenced in 1360 articles
- CompCert compiler certification project or Java Card EAL7 certification in industrial context), the formalization...
- Referenced in 496 articles
- possible in languages such as C++ or Java. The language provides constructs intended to enable...
- Referenced in 178 articles
- Java Modeling Language (JML) is a behavioral interface specification language that can be used ... specify the behavior of Java modules. It combines the design by contract approach of Eiffel...
- Referenced in 129 articles
- model-check properties of concurrent Java software. The Bandera Tool Set is an integrated collection ... designed to facilitate experimentation with model-checking Java source code. Bandera takes as input Java ... values of variables and internal states of Java lock objects. par In this tutorial paper ... simple concurrent Java program to illustrate the functionality of the main components of Bandera...
- Referenced in 111 articles
- Model checking JAVA programs using JAVA PathFinder. The paper describes a translator called JAVA PATHFINDER ... which translates from JAVA to PROMELA, the modeling language of the SPIN model checker ... translates a given JAVA program into a PROMELA model, which then can be model checked ... using SPIN. The JAVA program may contain assertions, which are translated into similar assertions...
- Referenced in 122 articles
- aspect-oriented extension to the Java. AspectJ TM is a simple and practical aspect-oriented ... extension to Java TM . With just a few new constructs, AspectJ provides support for modular ... crosscutting implementation, comprising pointcuts, advice, and ordinary Java member declarations. AspectJ code is compiled into ... standard Java bytecode. Simple extensions to existing Java development environments make it possible to browse...
- Referenced in 131 articles
- Extended Static Checker for Java version 2 (ESC/Java2) is a programming tool that attempts ... common run-time errors in JML-annotated Java programs by static analysis of the program...
- Referenced in 76 articles
- Featherweight Java: A minimal core calculus for Java and GJ. Several recent studies have introduced ... lightweight versions of Java: reduced languages in which complex features like threads and reflection ... assignment) to obtain a small calculus, Featherweight Java, for which rigorous proofs are not only ... possible but easy. Featherweight Java bears a similar relation to Java as the lambda-calculus...
- Referenced in 144 articles
- algorithms. The codes in Matlab, C and Java for them could be found...
- Referenced in 77 articles
- basic structure of an environment for proving JAVA programs annotated with JML specifications. Our method ... translator of our own, which reads the JAVA files and produces specifications ... representation of the JAVA semantics of the JAVA program into WHY’s input language...
- Referenced in 127 articles
- offers interfaces to C, C++, Fortran, Java, AMPL, AIMMS, GAMS, MPL, Mathematica, MATLAB Microsoft Excel...
- Referenced in 123 articles
- Modula-3, and another checker for Java. The aim of ESC is to increase software...
- Referenced in 114 articles
- download and include in your applications, Java applets which will generate matrices in your...
- Referenced in 54 articles
- release 2.2 system description. Sat4j is a java library for solving boolean satisfaction and optimization ... Minimally Unsatisfiable Subset (MUS) problems. Being in Java, the promise ... solve those problems (a SAT solver in Java is about 3.25 times slower than ... featured, robust, user friendly, and to follow Java design guidelines and code conventions (checked using...
- Referenced in 91 articles
- intermediate language for the verification of C, Java, or Ada programs. Why3 is a complete...
- Referenced in 89 articles
- Eiffel later found their way into Java, C#, and other languages. New language design ideas...
- Referenced in 89 articles
- TAPENADE can be utilized as a server (JAVA servlet), which runs at INRIA Sophia-Antipolis... | <urn:uuid:2d50cb0a-e039-487a-af57-8ac8590eccc4> | 2.640625 | 988 | Content Listing | Software Dev. | 40.137117 | 95,502,822 |
Unit 4 Beyond Visible Light. EM Spectrum pic. Part 1 What do you know?. What have we learned so far about how energy travels from the sun to the Earth?
EM Spectrum pic
Electromagnetic SpectrumWhat do you see in this chart?Waves to the left of visible light get ___________.Waves to the right get ______________.How does the amount of energy relate to frequency?
Sir Frederick William Herschel
1. Draw and label a picture of you experiment set up.
2. Make a data chart to record temperature differences/changes.
3. Describe the patterns or changes in the temperature data.
4. What can you conclude from you data? Include you claim,
evidence, and reasoning.
Claim: A statement that answers the investigation question
Evidence: Scientific data from investigation or research that is appropriate and sufficient to support the claim.
Reasoning: Scientific principles are cited or stated that link the evidence and claim. Shows why the data supports the claim and why it makes sense.
Write your conclusion on the What We Think Chart.
Evidence: The data shows that after _____ of time the thermometer in the violet read ____, yellow _____, red_____ and Infrared____________. The evidence shows that there is light beyond the red area because there was an increase in temperature in the area beyond the red area.
Reasoning:: Light energy is transformed to heat when it interacts with matter. The increase in temperature outside the red of the visible spectrum is evidence of light converting to heat energy. The addition of heat energy can be inferred from the increase in temperature.
Stop video at 2:10
Discuss the experiment.
Why is Ozone in the atmosphere important? | <urn:uuid:110dac81-30ec-4fa1-8a64-e9313315e01f> | 3.828125 | 357 | Truncated | Science & Tech. | 57.565271 | 95,502,829 |
We’ve already seen our share of winter storms, severe weather, cold outbreaks, flooding and droughts so far in 2018. But there are some weather events every year that are downright strange, and this year is no exception.
The events we consider strange are weather phenomena happening repeatedly in one place, in a place where you wouldn’t think they would occur or during an unusual time of year.
Some are phenomena you may not find in a Weather 101 textbook.
Freezing Rain in Florida
Just after New Year’s Day, Winter Storm Grayson blanketed Tallahassee, Florida, with its first measurable snow since 1989, and the first January such occurrence, there, in records dating to 1885.
That’s eye-catching enough. What was even more bizarre was seeing an ice accumulation map involving the Sunshine State.
Up to a quarter inch of ice accumulation was measured in Lake City, and light icing on elevated surfaces was reported as far south as Levy County.
A Horseshoe Cloud
While the nor’easter parade was hammering the East Coast, a bizarre cloud was captured in video over Nevada in early March.
As meteorologist Jonathan Belles explained, this rare horseshoe vortex is fleeting, lasting only minutes, when a relatively flat cloud moves over a column of rising air, which also gives the cloud some spin.
A State Record Hailstone
Alabama’s notorious history of severe weather, particularly tornadoes, is well documented. On March 19, however, it was a hailstone that captured meteorologists’ attention.
One softball-size hailstone near Cullman, Alabama, was found to set a new state record, more than 5 inches in diameter.
Please like, share and tweet this article.
Pass it on: Popular Science | <urn:uuid:1f61673a-ee48-4cb0-afbf-9e8cc69724d5> | 2.796875 | 376 | Listicle | Science & Tech. | 39.99 | 95,502,843 |
The seconds between the warning of an impending earthquake and the moment the quake hits can be the difference between life or death. In that time, automatic brakes can halt trains; people can duck for cover or rush for safety. But current warning systems aren’t always where they are needed, and scientists don’t fully understand what determines the size and location of earthquakes. Nearly 10,000 people were killed in earthquakes in 2015, the majority from the devastating Nepal quake. The federal government estimates that earthquakes cause $5.3 billion in damage per year to buildings in the U.S.
Ground-based sensors help warn of quakes, but they have their limits. Now, a group of researchers at Columbia University are taking measurement somewhere new: underwater. They’re designing a system that could lead to faster warnings for people living near areas affected by underwater earthquakes and tsunamis. If they succeed, they could help reduce the damage caused by these natural disasters and save many lives.
I recently visited a laboratory at Columbia’s Lamont-Doherty Earth Observatory, in Rockland County, New York, where a technician was testing pieces of the boxy, three-foot-long underwater seismometers under a microscope. The lab’s floor-to-ceiling shelves were stacked with bright yellow and orange parts that will have to endure crushing pressures on the ocean floor at depths of thousands of feet for years at a time.
The networks of land-based earthquake monitors around the world warn of quakes by watching for changes in pressure and seismic signals. Underwater sensors could more accurately locate underwater earthquakes than ground-based networks, says Spahr Webb, the Lamont-Doherty researcher leading the project, because “the system is designed to be deployed over the top of a large earthquake and faithfully record the size and location of both the earthquake and the tsunami. … By installing pressure and seismic sensors offshore you get a much more accurate determination of location and depth of a nearby earthquake.”
Webb pointed out the crab-like shape of a thick steel shell that is designed to prevent the seismometers from being pried from the sea floor by fishing trawl nets. “Keeping these things where they belong is the key,” he told me.
When they are launched about a year from now, 10 to 15 seismometers will be carefully lowered by a crane from a ship to the seabed. Similar to the land-based monitors, they will contain sensitive pressure sensors and accelerometers to measure and separate out seismic and oceanic signals. These sensors will monitor subduction zones, the areas where one plate of the earth’s crust slides under another. An earthquake produces a tsunami at a subduction zone when an underwater plate snaps back like a giant spring after it is forced out of position by the collision of an adjacent plate.
According to Webb, the land-based seismometers monitoring the regions that produce the largest tsunamis are sometimes more than 100 miles away, which hinders speed and accuracy. “A big motivation for the offshore observations is the size of the tsunami from any given earthquake has a large uncertainty based on land observations alone,” says Webb. In Japan, after the devastating 2011 earthquake, an expensive cable with numerous sensors was installed offshore to speed up warnings and boost accuracy. Now the Columbia seabed-based seismometers will obtain data in regions of the globe with similar tsunami hazards as Japan to augment land-based early warning systems.
The project is not alone. Columbia’s seismometer system is just one of a wide array of new earthquake-monitoring technologies that are being developed. “There are many exciting techniques coming online,” says Elizabeth Cochran, a geophysicist with the U.S. Geological Survey.
While the ocean depths offer opportunities to monitor quakes close to their source, for instance, watching from space could provide a wider view. Scientists at University College London have proposed launching several small satellites to look for signs of earthquakes using electromagnetic and infrared sensors. So far, experiments have proven that the concept works, but a problem has kept the project from getting off the ground: Electromagnetic and infrared signals are emitted by all sorts of things, natural as well as man-made.
Dhiren Kataria, one of the leaders of the proposed project, which has been dubbed TwinSat, hopes that using a large enough number of satellites should allow researchers to separate out the seismic from the non-seismic events. Multiple satellites would also provide extensive global coverage, because each would orbit the earth every 90 minutes, he adds.
The TwinSat team has previously failed to get funding from the U.K. Space Agency, but it plans to resubmit its proposal in the next few months. If approved, the team could launch its satellites within three years, Kataria claims. To keep costs low, the satellites are designed to be small and use some off-the-shelf commercial components.
Another approach researchers are using is turning cell phones into science instruments. The app MyShake constantly runs a phone’s motion sensors to analyze how it’s shaking around. If the movement fits the vibrational profile of an earthquake, the app relays this information along with the phone’s GPS coordinates to the app’s creators, the seismological laboratory at the University of California, Berkeley, for analysis.
While the app’s not intended to replace traditional seismic sensor networks like those run by the U.S. Geological Survey, says Richard Allen, the seismological laboratory’s director, it could provide faster and more accurate warnings through vast amounts of crowd-sourced data. More than 250,000 people have downloaded the app since it debuted a year ago.
Quicker warnings like these can be used improve safety by being incorporated right into existing infrastructure. San Francisco’s Bay Area Rapid Transit has integrated Allen’s earthquake warnings into its system so that trains automatically slow when they receive a signal that an earthquake will hit. The system relies on the fact that the electronic signals from monitoring stations travel faster than seismic waves, giving the brakes time to act. “I can push out the warning before many people can feel the tremors,” Allen says.
Even better than faster earthquake warnings would be a way to predict quakes. Researchers at Los Alamos National Laboratory are using artificial intelligence to simulate earthquakes so that they can forecast when they will occur. But Cochran of the USGS doubts it will ever be possible to reliably predict quakes. “Earthquakes are very complex,” she says. “It’s hard to predict such chaotic systems.”
We want to hear what you think. Submit a letter to the editor or write to firstname.lastname@example.org. | <urn:uuid:85897ed1-fc81-4c08-bee7-1d719ce77995> | 3.96875 | 1,398 | Truncated | Science & Tech. | 39.352839 | 95,502,859 |
Geometric Complexes and Polyhedra
Topology is an abstraction of geometry; it deals with sets having a structure which permits the definition of continuity for functions and a concept of “closeness” of points and sets. This structure, called the “topology” on the set, was originally determined from the properties of open sets in Euclidean spaces, particularly the Euclidean plane.
KeywordsEuclidean Plane Algebraic Topology Klein Bottle Simple Closed Curve Homology Theory
Unable to display preview. Download preview PDF. | <urn:uuid:34c72023-acb5-467a-bdc4-3400c6775004> | 2.84375 | 117 | Truncated | Science & Tech. | 17.236842 | 95,502,868 |
time - get time in seconds
time_t time(time_t *tloc);
() returns the time as the number of seconds since the Epoch,
1970-01-01 00:00:00 +0000 (UTC).
is non-NULL, the return value is also stored in the memory
pointed to by tloc
On success, the value of time in seconds since the Epoch is returned. On error,
is returned, and errno
- tloc points outside your accessible address space
(but see BUGS).
- On systems where the C library time() wrapper
function invokes an implementation provided by the vdso(7) (so that
there is no trap into the kernel), an invalid address may instead trigger
a SIGSEGV signal.
SVr4, 4.3BSD, C89, C99, POSIX.1-2001. POSIX does not specify any error
POSIX.1 defines seconds since the Epoch
using a formula that approximates
the number of seconds between a specified time and the Epoch. This formula
takes account of the facts that all years that are evenly divisible by 4 are
leap years, but years that are evenly divisible by 100 are not leap years
unless they are also evenly divisible by 400, in which case they are leap
years. This value is not the same as the actual number of seconds between the
time and the Epoch, because of leap seconds and because system clocks are not
required to be synchronized to a standard reference. The intention is that the
interpretation of seconds since the Epoch values be consistent; see
POSIX.1-2008 Rationale A.4.15 for further rationale.
On Linux, a call to time
() with tloc
specified as NULL cannot fail
with the error EOVERFLOW
, even on ABIs where time_t
is a signed
32-bit integer and the clock ticks past the time 2**31 (2038-01-19 03:14:08
UTC, ignoring leap seconds). (POSIX.1 permits, but does not require, the
error in the case where the seconds since the Epoch will not
fit in time_t
.) Instead, the behavior on Linux is undefined when the
system time is out of the time_t
range. Applications intended to run
after 2038 should use ABIs with time_t
wider than 32 bits.
Error returns from this system call are indistinguishable from successful
reports that the time is a few seconds before
the Epoch, so the C
library wrapper function never sets errno
as a result of this call.
argument is obsolescent and should always be NULL in new code.
is NULL, the call cannot fail.
On some architectures, an implementation of time
() is provided in the
This page is part of release 4.16 of the Linux man-pages
description of the project, information about reporting bugs, and the latest
version of this page, can be found at | <urn:uuid:e1ac3a7e-c85a-47e0-81cb-fceb21d5a666> | 3.3125 | 648 | Documentation | Software Dev. | 59.21175 | 95,502,874 |
Three million cubic kilometers of ice won’t wash into the ocean overnight, but researchers have been tracking increasing melt rates since at least 1979. Last summer, however, the melt was so large that similar events show up in ice core records only once every 150 years or so over the last four millennia.
“In July 2012, a historically rare period of extended surface melting raised questions about the frequency and extent of such events,” says Ralf Bennartz, professor of atmospheric and oceanic sciences and scientist at the University of Wisconsin–Madison‘s Space Science and Engineering Center. “Of course, there is more than one cause for such widespread change. We focused our study on certain kinds of low-level clouds.”
In a study to be published in the April 4 issue of the journal Nature, Bennartz and collaborators describe the moving parts that led to the melt, which was observed from the ICECAPS experiment funded by the National Science Foundation and run by UW–Madison and several partners atop the Greenland ice sheet.
“The July 2012 event was triggered by an influx of unusually warm air, but that was only one factor,” says Dave Turner, physical scientist at the National Oceanic and Atmospheric Administration’s National Severe Storms Laboratory. “In our paper we show that low-level clouds were instrumental in pushing temperatures up above freezing.”
Low-level clouds typically reflect solar energy back into space, and snow cover also tends to bounce energy from the sun back from the Earth’s surface.
Under particular temperature conditions, however, clouds can be both thin enough to allow solar energy to pass through to the surface and thick enough to “trap” some of that heat even if it is turned back by snow and ice on the ground.
While low, thin cloud cover is just one element within a complex interaction of wind speed, turbulence and humidity, the extra heat energy trapped close to the surface can push temperatures above freezing.
That is exactly what happened in July 2012 over large parts of the Greenland ice sheet, and similar conditions may help answer climate conundrums elsewhere.
“We know that these thin, low-level clouds occur frequently,” Bennartz says. “Our results may help to explain some of the difficulties that current global climate models have in simulating the Arctic surface energy budget.”
Current climate models tend to underestimate the occurrence of the clouds ICECAPS researchers found, limiting those models’ ability to predict cloud response to Arctic climate change and possible feedback like spiking rates of ice melt.
By using a combination of surface-based observations, remote sensing data, and surface energy-balance models, the study not only delineates the effect of clouds on ice melting, but also shows that this type of cloud is common over both Greenland and across the Arctic, according to Bennartz.
“Above all, this study highlights the importance of continuous and detailed ground-based observations over the Greenland ice sheet and elsewhere,” he says. “Only such detailed observations will lead to a better understanding of the processes that drive Arctic climate. ”
NOAA’s Earth System Research Laboratory and the Department of Energy’s Atmospheric Radiation Measurement program contributed to the work at NSF’s Summit Station, supporting collaborating scientists Matt Shupe of the University of Colorado Boulder, ICECAPS principal investigator Von Walden of the University of Idaho, Konrad Steffen of the Swiss Federal Institute for Forest, Snow and Landscape Research, UW–Madison’s Nate Miller and Mark Kulie, and graduate students Claire Pettersen (UW-Madison) and Chris Cox (Idaho).
— Mark Hobson, 608-263-3373, email@example.com
Mark Hobson | Newswise
Global study of world's beaches shows threat to protected areas
19.07.2018 | NASA/Goddard Space Flight Center
NSF-supported researchers to present new results on hurricanes and other extreme events
19.07.2018 | National Science Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:5d600ba2-451f-4f8d-b564-40db97793f10> | 4.1875 | 1,362 | Content Listing | Science & Tech. | 35.731271 | 95,502,894 |
Dating back to the first century, scientists, philosophers and reporters have noted the occasional occurrence of "bright nights," when an unexplained glow in the night sky lets observers see distant mountains, read a newspaper or check their watch.
A new study accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union, uses satellite data to present a possible explanation for these puzzling historical phenomena.
The authors suggest that when waves in the upper atmosphere converge over specific locations on Earth, it amplifies naturally occurring airglow, a faint light in the night sky that often appears green due to the activities of atoms of oxygen in the high atmosphere. Normally, people don't notice airglow, but on bright nights it can become visible to the naked eye, producing the unexplained glow detailed in historical observations.
Few, if any, people observe bright nights anymore due to widespread light pollution, but the new findings show that they can be detected by scientists and may still be noticeable in remote areas. Bright airglow can be a concern for astronomers, who must contend with the extra light while making observations with telescopes.
"Bright nights do exist, and they're part of the variability of airglow that can be observed with satellite instruments," said Gordon Shepherd, an aeronomer at York University in Toronto, Canada, and lead author of the new study.
A historical mystery
Historical accounts of bright nights go back centuries. Pliny the Elder described bright nights, saying, "The phenomenon commonly called 'nocturnal sun', i.e. a light emanating from the sky during the night, has been seen during the consulate of C. Caecilius and Cn. Papirius (~ 113 BCE), and many other times, giving an appearance of day during the night."
European newspapers and the scientific literature also carried observations of these events in 1783, 1908 and 1916.
"The historical record is so coherent, going back over centuries, the descriptions are very similar," Shepherd said.
Modern observations of bright nights from Earth are practically nonexistent. Even devoted airglow researchers like Shepherd and his colleagues have never seen a true bright night with their eyes. But even before the advent of artificial lighting, bright nights were rare and highly localized.
"Bright nights have disappeared," Shepherd said. "Nobody sees them, nobody talks about them or records them any longer, but they're still an interesting phenomenon."
Shepherd knew of the historical observations and could see bright night events reflected in airglow data from the Wind Imaging Interferometer (WINDII), an instrument once carried by NASA's Upper Atmosphere Research Satellite (1991-2005), but he couldn't explain why the phenomena occurred.
He and his co-author, Youngmin Cho, a research associate at York University, searched for mechanisms that would cause airglow to increase to visible levels at specific locations.
Airglow comes from emissions of different colors of light from chemical reactions in the upper reaches of the atmosphere. The green portion of airglow occurs when light from the sun splits apart molecular oxygen into individual oxygen atoms. When the atoms recombine, they give off the excess energy as photons in the green part of the visible light spectrum, giving the sky a greenish tinge.
To find factors that would cause peaks in airglow and create bright nights, the researchers searched two years of WINDII data for unusual airglow profiles, ruling out meteors and aurora, which have their own distinct signatures. They identified 11 events where WINDII detected a spike in airglow levels that would be visible to the human eye, two of which they describe in detail in the study.
Finally, the researchers matched up the events with the ups and downs of zonal waves, large waves in the upper atmosphere that circle the globe and are impacted by weather. When the peaks of certain waves aligned, they produced bright night events that could last for several nights at a specific location. These events were four to 10 times brighter than normal airglow and could be responsible for the bright nights observed throughout history.
"This [study] is a very clear, new approach to the old enigma of what makes some night skies so remarkably bright, and the answer is atmospheric dynamics," said Jürgen Scheer, an aeronomer at Instituto de Astronomía y Física del Espacio in Buenos Aires, who was not connected to the study. "We now have a good idea which dynamical phenomena are behind [airglow] events of extreme brightness."
Observing a bright night
From their data, the researchers estimate that at a specific location, visible bright nights occur only once per year, and their observation would rely on a sky watcher looking from a remote location on a clear, moonless night with dark-adjusted eyes. Shepherd estimates that a bright night occurs somewhere on Earth, at different longitudes, on about 7 percent of nights.
If an astronomer wanted to experience a bright night personally, Shepherd suspects that scientists could predict their occurrence if they monitored the waves continuously, so that they could calculate when their peaks would align.
The next challenge will be to reproduce the observed convergence of these waves through modeling and to consider the effects of other types of waves in the atmosphere, Scheer said.
"Maybe it's an almost dead question," Shepherd said. "I'm having the last word before it dies."
Explore further: Beautiful green 'airglow' spotted by aurora hunters – but what is it?
G. G. Shepherd et al, WINDII Airglow Observations of Wave Superposition and the Possible Association with Historical ?Bright Nights?, Geophysical Research Letters (2017). DOI: 10.1002/2017GL074014 | <urn:uuid:969be8e4-5e97-4223-8112-c318a715e8a5> | 3.859375 | 1,177 | News Article | Science & Tech. | 39.272344 | 95,502,910 |
A new analysis of dust from the comet Wild 2, collected in 2004 by NASA's Stardust mission, has revealed an oxygen isotope signature that suggests an unexpected mingling of rocky material between the center and edges of the solar system.
Despite the comet's birth in the icy reaches of outer space beyond Pluto, tiny crystals collected from its halo appear to have been forged in the hotter interior, much closer to the sun.
The result, reported in the Sept. 19 issue of the journal Science by researchers from Japan, NASA and the University of Wisconsin-Madison, counters the idea that the material that formed the solar system billions of years ago has remained trapped in orbits around the sun. Instead, the new study suggests that cosmic material from asteroid belts between Mars and Jupiter can migrate outward in the solar system and mix with the more primitive materials found at the fringes.
"Observations from this sample are changing our previous thinking and expectations about how the solar system formed," says UW-Madison geologist Noriko Kita, an author of the paper.
The Stardust mission captured Wild 2 dust in hopes of characterizing the raw materials from which our solar system coalesced. Since the comet formed more than 4 billion years ago from the same primitive source materials, its current orbit between Mars and Jupiter affords a rare opportunity to sample material from the farthest reaches of the solar system and dating back to the early days of the universe. These samples, which reached Earth in early 2006, are the first solid samples returned from space since Apollo.
"They were originally hoping to find the raw material that pre-dated the solar system," explains Kita. "However, we found many crystalline objects that resemble flash-heated particles found in meteorites from asteroids."
In the new study, scientists led by Tomoki Nakamura, a professor at Kyushu University in Japan, analyzed oxygen isotope compositions of three crystals from the comet's halo to better understand their origins. He and UW-Madison scientist Takayuki Ushikubo analyzed the tiny grains — the largest of which is about one-thousandth of an inch across — with a unique ion microprobe in the Wisconsin Secondary Ion Mass Spectrometer (Wisc-SIMS) laboratory, the most advanced instrument of its kind in the world.
To their surprise, they found oxygen isotope ratios in the comet crystals that are similar to asteroids and even the sun itself. Since these samples more closely resemble meteorites than the primitive, low-temperature materials expected in the outer reaches of the solar system, their analysis suggests that heat-processed particles may have been transported outward in the young solar system.
"This really complicates our simple view of the early solar system," says Michael Zolensky, a NASA cosmic mineralogist at the Johnson Space Center in Houston.
"Even though the comet itself came from way out past Pluto, there's a much more complicated history of migration patterns within the solar system and the material originally may have formed much closer to Earth," says UW-Madison geology professor John Valley. "These findings are causing a revision of theories of the history of the solar system."
Noriko Kita | EurekAlert!
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:62b6bee4-65a5-4b63-b9b0-4d968f52c6ef> | 3.65625 | 1,234 | Content Listing | Science & Tech. | 36.567981 | 95,502,916 |
A CREEPY skull-shaped space rock is due to haunt the world again next year
The asteroid named 2015 TB145 first past by 300,000 miles from Earth on October 31 2015.
Owing to its timing and skull-like appearance it was nicknamed the “Halloween asteroid” at the time.
But astronomers think the skeletal rock will be revisiting us at some time in mid-November next year.
During it’s flyby two years ago space boffins were able to become more familiar with its characteristics.
They managed to pin down it measurements to around 2,100 feet across and plot its orbital path.
As they examined it, NASA was able to categorise it as a “potentially hazardous asteroid”.
Scientists also managed to work out that it most likely completes a rotation every three hours as it shoots through space.
The incredibly dark stone also reflects just 5 or 6 per cent of the sunlight that hits it.
Pablo Santos-Sanz, an astrophysicist at the Institute of Astrophysics of Andalusia in Spain, said in a statement earlier this week: “This means that it is very dark – only slightly more reflective than charcoal.”
MOST READ IN TECH
TOTAL ECLIPSE FROM THE STARTTonight's total eclipse is visible in Ireland - here's when and how long we have to view amazing sight
Although the next fly-past won’t be as impressive as the last, since it will be passing by about 100 times further away, astronomers are still looking forward to a second look at the skull-shaped rock.
Santos-Sanz added: “Although this approach shall not be so favourable, we will be able to obtain new data that could help improve our knowledge of this mass and other similar masses that come close to our planet.”
We pay for your stories! Do you have a story for The Sun Online news team? Email us at firstname.lastname@example.org or call 0207 782 4368 . We pay for videos too. Click here to upload yours. | <urn:uuid:2d4c6f5c-d357-4bc3-b067-26dda5177074> | 2.609375 | 438 | Truncated | Science & Tech. | 59.252095 | 95,502,942 |
Researchers from North Carolina State University have found that applying a small electric field results in faster formation of ceramic products during manufacture at lower temperatures, and enhances the strength of the ceramic itself.
A team of Duke University chemists has perfected a simple way to make tiny copper nanowires in quantity. The cheap conductors are small enough to be transparent, making them ideal for thin-film solar cells, flat-screen TVs and computers, and flexible displays.
The mystery surrounding what happens when bubbles collide has finally been busted. And knowing how bubbles bounce apart and fuse together could improve the quality of ice-cream and champagne as well as increase efficiency in the mining industry.
Metal mirrors made with extremely high precision and exactly positioned are the key elements of modern telescopes. A new production technique enables complex optical surfaces to be manufactured with excellent trueness of shape and hitherto unattained positional accuracy.
Light-emitting diodes are gaining ground: They are now being used as background lighting for displays. But the manufacturing of complex LED optics is still complex and expensive. A new technology is revolutionizing production: Large-scale LED components can now be manufactured cost-effectively.
Counterfeit products create losses in the billions each year. Beside the economic damages, all too often additional risks arise from the poor materials and shoddy workmanship of 'knock-off artists'. Yet with the aid of fluorescing dyes, materials can be individually tagged and identified with certainty.
On December 2 - 3, 2010 experts from all over the world will gather for the 7th NanoMed conference in Berlin to discuss the state of the art in biomedical applications of nanotechnology. This year's Focus Topic is Nanotechnology-Enabled Diagnosis and Treatment of Cancer.
The Beilstein-Institut, a non-profit foundation, launches a scientific journal in the area of nanotechnology and nanoscience. The 'Beilstein Journal of Nanotechnology' is an Open Access Journal, which is globally available and publishes the latest research results and reviews. Publishing in this Journal is offered without any fees for authors and readers. The call for papers starts on June 1, 2010.
When the first warm rays of springtime sunshine trigger a burst of new plant growth, it's almost as if someone flicked a switch to turn on the greenery and unleash a floral profusion of color. Opening a window into this process, scientists have deciphered the structure of a molecular 'switch' much like the one plants use to sense light. | <urn:uuid:62374414-c88f-4eac-883b-27aaa3d2b21e> | 2.71875 | 512 | Content Listing | Science & Tech. | 31.355401 | 95,502,981 |
The pyrolysis of anisole (C6H5OCH3) using a hyperthermal nozzle
We have investigated the pyrolysis of anisole (C6H5OCH3), a model compound for methoxy functional groups in lignin. An understanding of the pyrolysis of this simple compound can provide valuable insight into the mechanisms for the thermal decomposition of biomass. Our emphasis in this study is the formation of polynuclear aromatic hydrocarbons (PAHs) and in particular we investigate the formation of naphthalene. The route to the formation of naphthalene from anisole follows the simple unimolecular decomposition of anisole, which leads to the phenoxy radical and then cyclopentadienyl radical. This chemical pathway has been demonstrated before, but the subsequent reaction of two cyclopentadienyl radicals to give naphthalene has only been the subject of theoretical investigations. We have used matrix isolation FTIR spectroscopy together with photoionization time-of-flight (TOF) mass spectrometry to identify intermediates in this reaction mechanism. Using this technique, we have trapped phenoxy and cyclopentadienyl radicals and measured their IR spectra. The formation of these species is confirmed in our TOF mass spectrometer. We have also identified the formation of 9,10-dihydrofulvalene, the adduct from the reaction of two cyclopentadienyl radicals. Finally, we have used molecular beam mass spectrometry (MBMS) and factor analysis to demonstrate the formation of naphthalene from the pyrolysis of anisole.
Friderichsen, A. V., Shin, E. J., Evans, R. J., Nimlos, M. R., Dayton, D., & Ellison, G. B. (2001). The pyrolysis of anisole (C6H5OCH3) using a hyperthermal nozzle. Fuel, 80(12), 1747-1755. DOI: 10.1016/S0016-2361(01)00059-X | <urn:uuid:2752a8f6-3bd8-4626-9f91-7d73b5069dae> | 2.59375 | 449 | Academic Writing | Science & Tech. | 31.31945 | 95,502,983 |
Space dust which created human life found in another solar system
By DAVID DERBYSHIRE
Last updated at 11:19 05 January 2008
Scientists believe they have spotted the chemicals in a disc of red dust around a star 220 light years away, raising the tantalising prospect that man is not alone in the universe.
It is the first time these socalled building blocks of life have been detected outside our solar system.
The dust was found by the Hubble space telescope around a young star, known as HR 4796A.
It lies one thousand million million miles away in Centaurus, a constellation visible mainly from the southern hemisphere.
The star is just eight million years old - making it a relative baby in the timescale of planets and solar systems.
It is 20 times brighter than the Sun and is in the late stages of planet formation.
The dust would have been created in the collisions of comets and asteroids orbiting the star.
An analysis of the dust by scientists revealed it was red. The wavelength of the light scattered off the dust suggests it contained large organic carbon molecules called tholins.
These molecules are believed to have existed on the primitive Earth billions of years ago and may have created the 'biomolecules' that make up all living things.
Tholins no longer form naturally on today's Earth, where they would be quickly destroyed by oxygen in the atmosphere.
However, they have been detected elsewhere in the solar system including in comets and on Saturn's moon Titan, where they give the atmosphere a reddish tinge.
This is the first time tholins have been found in another star system.
Many experts believe tholincarrying comets and other small clumps of dust and gas sowed the seeds of life on Earth more than four billion years ago.
They could be doing the same for newly formed planets orbiting HR 4796A.
Dr John Debes, who led the team of U.S. astronomers from the Carnegie Institution in Washington DC, said: "Astronomers are just beginning to look for planets around stars much different from the Sun.
"HR 4796A is twice as massive, nearly twice as hot as the Sun, and 20 times more luminous.
"Studying this system provides new clues to understanding the different conditions under which planets form and, perhaps, life can evolve."
The findings are reported in the latest issue of the Astrophysical Journal Letters.
Most watched News videos
- Model Annabelle Neilson walks the catwalk in 2010 fashion show
- Comedian is forced to move her scooter from disability space on train
- The terrifying moment a plane comes crashing down in South Africa
- Boris Johnson attacks Theresa May over Brexit 'fog of self-doubt'
- 'Africa won the world cup': Trevor Noah mocks France World Cup win
- Brutal bat attack caught on surveillance video in the Bronx
- Drowned woman and child found next to survivor clinging to wreck
- Piglets brutally killed by having their heads slammed on floor
- Macron's security advisor IMPERSONATES police to beat protestors
- Trump's daughter grasps her Secret Service agent's hand
- 'I won't go anywhere near children': Sir Cliff Richard
- Police release video of Stormy Daniels' arrest outside strip club | <urn:uuid:737e3cba-aee9-4f86-94e9-89d59dbedc97> | 3.375 | 686 | Truncated | Science & Tech. | 43.197119 | 95,502,985 |
The western burrowing owl (Athene cunicularia hypugaea) doesn’t ask for much. Commonly found alongside concrete culverts and medians, the burrowing owl requires only a few basic ingredients to survive Californian urban settings: open, well-drained soil; short, sparse vegetation; and underground burrows.
But Bay Area biologists say even those simple needs are threatened by a new land exchange between the city of Dublin and the U.S. Army. Under the exchange, 189 acres of open grassland now part of the Camp Parks Reserve Forces Training Area will be converted into six major development projects, including new military facilities, residential homes, and commercial properties. The developments threaten one of the few remaining colonies of burrowing owls in Alameda County.
“Trying to preserve the burrowing owl has been the most frustrating aspect of my career as a conservationist to date,” said Craig Breon, past director of Santa Clara Valley Audubon Society, in an email. “It wouldn’t take too much to save these guys, and we’re just not willing to do it.”
In 2003, the Center for Biological Diversity and its allies were denied a petition to protect the California population of burrowing owls under the California Endangered Species Act. The petition showed that breeding owls had lost an estimated 60 percent of their former range in California from the 1980s to the early 1990s, but the decision by the state Department of Fish & Game (DFG) stated that owl populations in the Imperial and San Joaquin Valley were healthy.
Jeff Miller, conservation advocate at the Center of Biological Diversity and one of the petition’s authors, said the decision assumes that there will be repopulation of declining or extirpated populations by Imperial and San Joaquin Valley populations — but no source populations have been found to exist.
“Once lost,” Miller said, “forever lost.”
Breon, who also helped author the petition, said opposition to listing the owl as endangered is a result of the owls preferring valley grasslands that are highly desirable habitat for development.
Sandra Menzel, a wildlife biologist with resource consulting group Albion Environmental, said that in the Bay Area, populations of burrowing owls were and still are rapidly declining.
To try and help the remaining owls, biologists have studied a number of development mitigation strategies. The most likely strategy for Camp Parks, transplanting or relocating owls to another habitat, has one of the worst outcomes.
In one recent paper, Lynne Trulio, a professor of environmental studies at San José State University, followed 27 relocations in Northern California and found that 63 percent of the owls disappeared, 26 percent flew back to their original site, and 7 percent bred successfully in the new habitat; the rest were subject to predation or unsuccessful breeding.
In the Camp Parks area, SunCal, one of the nation’s largest land developers, is planning six major development projects within the 189 acres, including an elementary school and 2,000-home neighborhood.
SunCal representative Joe Aguirre told the news site Around Dublin that SunCal plans to construct habitat away from the project to direct the owls to viable habitat. A draft environmental impact report released in June does not specify the habitat or location.
Dublin Mayor Tim Sbranti said development will not only provide a connection between the eastern and western parts of Dublin, but will include a 30-acre city park. Sbranti said open-space corridors within the city park will allow long-term preservation by preventing future indeterminate development.
“The development at Camp Parks is a win-win for everyone – the Army gets new facilities in exchange for their land, the city will become connected rather than divided by the base, and the owls will be provided trails linking an open space corridor [in the city park] to the adjacent grasslands,” Sbranti said.
Menzel said managing undisturbed habitat adjacent to the project provides the best hope for burrowing owls at Camp Parks. The “Burrowing Owl Survey Protocol and Mitigation Guidelines” states that if off-site mitigation is necessary, at least 9.75 acres of suitable burrowing owl habitat per pair or single bird should be preserved.
In California, burrowing owls feed mostly on crickets and meadow voles, and require the presence of the California ground squirrel (Otospermophilus beecheyi). Despite their name, burrowing owls do not dig burrows but benefit from the efforts and abandoned burrows of California ground squirrels.
The larger the area of habitat, the easier the owl lives. Burrowing owls are generalist eaters, so higher biodiversity means more options. The owls have an ingenious method of passive prey capture: After lining the entrance of their burrows with large mammal dung, the owls sit and wait for dung beetles to arrive then snatch up the pilfering insects as they try to roll away the “treasure.”
“They are a hoot to observe,” Menzel said. “It is an absolute privilege to have them in the neighborhood and any local extinction is an immense loss.”
Emily Moskal is a Bay Nature editorial intern.
Like this article?
Help Bay Nature tell more stories about nature in the Bay Area
Make a tax deductible donation to Bay Nature today!
Most recent in Urban Nature
Do these large, wild, fearsome fish predators prefer our built-up shoreline bristling with apartments, cargo ships, and manufacturing equipment? And what does it mean if they do?
Urban Nature | Wildlife | <urn:uuid:8b150bda-1130-4fd9-afc2-a2863b84870f> | 3.109375 | 1,189 | News Article | Science & Tech. | 32.290504 | 95,502,988 |