Acidity (pH) and its changes play an important role in many physiological processes, including protein folding, and can act as indicators of cancer. In the journal Angewandte Chemie, American researchers have now introduced an unconventional pH sensor that makes it possible to monitor changes in pH values in living cells over longer periods of time, with previously unobtainable spatial resolution. This is possible through the combination of fluorescent nanocrystals with mobile molecular “arms” that can fold or unfold depending on the pH of their environment.
Endosomes, cell organelles that play a role in transport within cells, experience a considerable drop in their pH value as they mature. This was observed by the team working with Moungi G. Bawendi at the Massachusetts Institute of Technology (MIT) in Cambridge (USA) by using a new nanoscopic pH sensor and a fluorescence microscope.
The secret to their success lies in the unconventional design of their sensor: A mobile molecular “arm” connects a green fluorescent nanocrystal to a red fluorescent dye. The nanocrystals are particles of semiconductor materials that readily transfer the light energy they absorb to fluorescent dyes through a non-radiative mechanism (fluorescence resonance energy transfer, or FRET). This causes the dye to fluoresce—as long as both of the FRET partners are close enough to each other.
The distance between the nanocrystal and the dye is controlled by folding and unfolding of the molecular arm on the nano pH sensor—and this motion is pH-dependent. The arm consists of one piece of double-stranded and one piece of single-stranded DNA. As the concentration of H+ ions increases, so does the tendency to form a “triple strand”, in which the single strand fits into the groove of the double strand, causing the arm to fold. This “arm movement” takes place in the physiologically important range around pH 7 and is very sensitive to the slightest change.
At higher pH values, the arm is stretched out and the FRET partners are too far away from each other for energy transfer to occur. The nanocrystal emits green fluorescence and the dye does not fluoresce. As the pH gets lower, the arm folds enough to allow FRET energy transfer. The green fluorescence of the nanocrystal decreases and the dye begins to glow red. Because this technique measures the ratio of green to red fluorescence instead of an absolute value, variations in intensity make no difference. The sensor thus has an internal reference.
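The ratiometric principle described above can be made concrete with a short sketch. The Förster radius and arm lengths below are illustrative values, not figures from the paper; the point is only that the green-to-red ratio depends on the arm length but cancels out the illumination intensity, which is the sensor's "internal reference" property.

```python
# Hypothetical sketch of ratiometric FRET readout. R0 and the arm
# lengths are illustrative, not values from the paper.

def fret_efficiency(r, r0=5.0):
    """Fraction of absorbed energy transferred to the dye at donor-acceptor
    distance r (nm), using the standard Foerster relation."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def green_red_ratio(arm_length_nm, excitation_intensity=1.0, r0=5.0):
    """Ratio of donor (green) to acceptor (red) emission.

    The excitation intensity cancels out of the ratio -- the 'internal
    reference' property of a ratiometric sensor."""
    e = fret_efficiency(arm_length_nm, r0)
    green = excitation_intensity * (1.0 - e)   # residual donor emission
    red = excitation_intensity * e             # sensitized acceptor emission
    return green / red

# Same ratio regardless of how bright the illumination is:
assert abs(green_red_ratio(4.0, 1.0) - green_red_ratio(4.0, 0.1)) < 1e-9

# Folded arm (short distance) -> more red; extended arm -> more green.
assert green_red_ratio(3.0) < green_red_ratio(8.0)
```

Note that green/red works out to exactly (r/R0)^6, so the measured ratio maps directly onto the arm's conformational state.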
In this type of sensor, the actual “pH tester” and the optical signaling device are two separate components. By replacing the pH tester with a molecular arm that responds to a different analyte it should be possible to use the same principle and the same optical signaling device to build sensors for other target molecules.
Nanocrystal fluorophores have caused much excitement in several fields because of their attractive optical properties. Nanocrystals offer superior properties compared to traditional molecular fluorophores, in particular in biology, where they can help reveal the inner workings of the cell. However, converting these new nanomaterials into fluorescent sensors has proven difficult. The concept of harnessing a molecular conformational change to create a sensor is new for nanocrystal sensors and could prove a general solution to the problem of making sensors from nanocrystals.
Author: Moungi G. Bawendi, Massachusetts Institute of Technology, Cambridge (USA), http://nanocluster.mit.edu/people.php
Title: Conformational Control of Energy Transfer: A Mechanism for Biocompatible Nanocrystal-Based Sensors
Angewandte Chemie International Edition, Permalink to the article: http://dx.doi.org/10.1002/anie.201207181
Scientists at USC and Lawrence Berkeley National Lab have discovered a new route by which a proton (a hydrogen atom that lost its electron) can move from one molecule to another – a basic component of countless chemical and biological reactions.
"This is a radically new way by which proton transfer may occur," said Anna Krylov, professor of chemistry at the USC Dornsife College of Letters, Arts and Sciences. Krylov is a co-corresponding author of a paper on the new process that was published online by Nature Chemistry on March 18.
Krylov and her colleagues demonstrated that protons are not obligated to travel along hydrogen bonds, as previously believed. The finding suggests that protons may move efficiently in stacked systems of molecules, which are common in plant biomass, membranes, DNA and elsewhere.
Armed with the new knowledge, scientists may be able to better understand chemical reactions involving catalysts, how biomass (plant material) can be used as a renewable fuel source, how melanin (which causes skin pigmentation) protects our bodies from the sun's rays, and how DNA becomes damaged.
"By better understanding how these processes operate at the molecular level, scientists will be able to design new catalysts, better fuels, and more efficient drugs," Krylov said.
Hydrogen atoms are often shared between two molecules, forming a so-called hydrogen bond. This bond determines structures and properties of everything from liquid water to the DNA double helix and proteins.
Hydrogen bonds also serve as pathways by which protons may travel from one molecule to another, like a road between two houses. But what happens if there's no road?
To find out, Krylov and fellow corresponding author Musahid Ahmed of Lawrence Berkeley National Laboratory created a system in which two molecules were stacked on top of each other, without hydrogen bonds between them. Then they ionized one of the molecules to coax a proton to move from one place to another.
Ahmed and Krylov discovered that when there's no straight road between the two houses, the houses (molecules) can rearrange themselves so that their front doors are close together. In that way, the proton can travel from one to the other with no hydrogen bond – and with little energy. Then the molecules return to their original positions.
"We've come up with the picture of a new process," Krylov said.
This research was performed under the auspices of the iOpenShell Center and supported by the US Department of Energy, the Defense Threat Reduction Agency, and the National Science Foundation.
Robert Perkins | EurekAlert!
Is there life on Mars? It’s possible, but it may not be Martian, say scientists. New research, published in the open access journal BMC Microbiology, suggests that conditions on Mars are capable of supporting dormant bacteria, known as endospores. This raises concern about future attempts to detect Martian life forms because endospores originating on Earth could potentially hitch a ride to Mars and survive on its surface.
Soil on Mars is thought to be rich in oxidising chemicals that are known to destroy life. The high levels of ultraviolet radiation on the surface of the planet make it unlikely that any organism could survive. Ronald Crawford and colleagues from the University of Idaho have investigated whether bacterial endospores can exist in Mars’s hostile environment.
Endospores are a survival form of bacteria, formed when they find themselves in an unfavourable environment, and are perhaps the most resilient life form on Earth. They are resistant to extreme temperatures, most disinfectants, radiation, drying, and can survive for thousands of years in this dormant state. There is even evidence that they can survive in the vacuum of space. Given the possibility of endospores hitching a lift on spacecraft bound for Mars, Ronald Crawford and his colleagues investigated whether endospores could survive in a simulated Martian environment.
Gordon Fletcher | BioMed Central Limited
A specimen of the chromodorid nudibranch Glossodoris sedna moving across the coral reef at the base of the Conch Wall off the coast of Key Largo in Florida, USA.
This is an introduced species, likely carried across the ocean in ships' ballast. It has adapted well to Florida, and there are at least three well-known populations over a 100-mile area.
Locally this species is known as the Red-tipped Sea Goddess.
· Date: Sun August 12, 2007 · Reference ID: /22007-05-14_03-53-54 ·
The Acoustic Bubble
This volume deals with the interaction of acoustic fields with bubbles in liquids. The principles of cavitation (the generation of bubbles in liquids by rapid pressure changes, such as those introduced by ultrasound) are expounded. When cavity bubbles implode they produce shock waves in the liquid, and components can be damaged by cavitation if it is induced by turbulent flow. These phenomena have important implications, particularly in underwater acoustics (the fastest-growing field in acoustics research). Later chapters concentrate on cavitation due to ultrasound. This interdisciplinary research should be of interest to those engaged in research from sonochemistry to the sensitization of explosives. The physical processes involved are explained both by analogy and by formulation. In this way, the concepts should be accessible to those of lesser mathematical ability.
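A standard entry point to the bubble acoustics covered in this book is the Minnaert resonance frequency of a gas bubble in a liquid. The sketch below uses that well-known formula with illustrative property values for an air bubble in water; it is not drawn from the book itself.

```python
import math

def minnaert_frequency(radius_m, p0=101_325.0, rho=998.0, gamma=1.4):
    """Resonance frequency (Hz) of a gas bubble in a liquid.

    Standard Minnaert formula: f0 = (1 / (2*pi*R)) * sqrt(3*gamma*p0 / rho),
    with ambient pressure p0 (Pa), liquid density rho (kg/m^3), and gas
    polytropic index gamma. Surface tension and damping are neglected,
    so it is only a fair approximation for bubbles well above ~10 um.
    """
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

# A 1 mm air bubble in water rings at roughly 3 kHz -- the familiar
# "babbling brook" rule of thumb f0 * R ~ 3 kHz.mm.
f0 = minnaert_frequency(1e-3)
assert 3000 < f0 < 3600
```

The inverse scaling with radius is why smaller cavitation bubbles couple to higher ultrasonic frequencies.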
- Hardback | 613 pages
- 175.26 x 248.92 x 38.1mm | 1,202.01g
- 01 Mar 1994
- Elsevier Science Publishing Co Inc
- Academic Press Inc
- San Diego, United States
Table of contents
The sound field; cavitation inception and fluid dynamics; the freely-oscillating bubble; the forced bubble; effects and mechanisms.
"One can only effervesce with praise upon perusing The Acoustic Bubble, by T.G. Leighton. Here is a volume that is both comprehensive and readable--that informs the expert and the novice...The reader is well served by the author's careful attention to detail, his extensive table of symbols (14 pages!), his excellent use of photographs and graphics, an extraordinarily comprehensive bibliography, and, most of all, by his lucid physical descriptions of phenomena not only having to do with bubbles but also with a variety of physical systems that are related to foundations of the sciences that one must understand if one is to apply them to the bubble problem...The Acoustic Bubble is not only interesting reading, it is an extraordinarily useful reference for those working in or planning to work in the field...There's no trouble with this Bubble." --Journal of the Acoustical Society of America
"The strength of this book lies in its effortless and enjoyable coverage of a wide swathe of rather intimidating literature and its unified mathematical approach...undoubtedly a major contribution to the field by an accomplished teacher, scientist and enthusiast." --Physics in Medicine and Biology
"There is no doubt that this is the most comprehensive and accessible text on acoustic cavitation available...This is a required reference for scientists and, in particular, medical physicists interested in becoming acquainted with the mathematical description of bubble dynamics and the wealth of associated physical phenomena. It is user friendly, thorough in its treatment of the theory and encyclopaedic in its coverage of the experimental literature. I have no doubts that it will become a standard text." --Scope (The Journal of the IPSM)
The diffraction relation between a plane and another plane that is both tilted and translated with respect to the first one is revisited. The derivation of the result becomes easier when the impulse function over a surface is used as a tool. Such an approach converts the original 2D problem to an intermediate 3D problem and thus allows utilization of easy-to-interpret Fourier transform properties due to rotation and translation. An exact solution for the scalar monochromatic propagating waves case when the propagation direction is restricted to be in the forward direction is presented.
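The tilted-plane result in this abstract builds on the angular spectrum of plane waves. As background, here is a minimal sketch of the ordinary parallel-plane (zero-tilt) special case, restricted to forward-propagating waves as in the abstract; the tilted-plane generalization additionally rotates the frequency coordinates, which this sketch does not implement. Grid and wavelength values are illustrative.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a monochromatic scalar field between two parallel planes.

    Decompose the field into plane waves with an FFT, multiply each
    component by its propagation phase exp(i*kz*z), and transform back.
    Evanescent components (kz imaginary) are suppressed, keeping only
    propagating waves.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies (cycles/m)
    fx, fy = np.meshgrid(fx, fx)
    arg = (1.0 / wavelength) ** 2 - fx**2 - fy**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.where(arg > 0.0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Propagating by z and then by -z recovers the original field (up to
# the evanescent components, which are negligible on this coarse grid).
rng = np.random.default_rng(0)
u0 = rng.standard_normal((64, 64))
u1 = angular_spectrum_propagate(u0, wavelength=0.5e-6, dx=5e-6, z=1e-3)
u2 = angular_spectrum_propagate(u1, wavelength=0.5e-6, dx=5e-6, z=-1e-3)
assert np.allclose(u2, u0)
```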
The oxides of iron, aluminium, and manganese
THE oxides (including the hydroxides and hydrous oxides) of iron and aluminium are, along with silica, the most common accessory minerals in clays; manganese oxides, on the other hand, occur more sporadically. Nevertheless, it is appropriate here to consider these oxides together, since there are many fundamental resemblances not only in composition and structure, but also in occurrence and origin. The crystalline oxides and hydroxides of iron and aluminium are well-defined and distinctive in character but those of manganese are more poorly-defined and the validity of certain reputed species is as yet by no means certain: in consequence, only well-defined species can be considered. All three metals also apparently form oxides with too low a degree of order to diffract X-rays or even electrons: these will be termed amorphous. Structural relationships between many of these oxides have been discussed by Rooksby (1961) and thermal characteristics by Mackenzie (1957). | <urn:uuid:fecd53e6-afd9-4b45-8d10-980f0c4f91c5> | 3.375 | 223 | Knowledge Article | Science & Tech. | 3.229698 | 95,540,471 |
Huddling rats act like 'super-organisms': Study reveals how rodents keep warm by shape-shifting into one terrifying mass
- Rats in a huddle rotate so those on the edge are brought into the warmth
- Behaviour causes the rats to act like a centrally-controlled, larger creature
- 'Rat super-organism' relies on a combination of selfishness and sacrifice
- Model shows how each rat must sacrifice some of its own heat to make sure the group has a balanced temperature
Like penguins, rats huddle together when it's cold, and separate when it's warm.
They also rotate, so that the rats on the outer edge are brought into the warmth of the centre before being moved back out again.
This behaviour, new research suggests, causes the rats to act like a terrifying, self-organising 'super-organism.'
Scientists as the University of Sheffield say the huddled mass resembles the actions of a larger, centrally-controlled creature that can shape-shift to retain heat.
But rather than a centralised brain, this 'rat super-organism' relies on a combination of selfishness and sacrifice.
The researchers created a model which shows how each rat must sacrifice some of its own heat to make sure the group has a balanced temperature.
Lead author Jonathan Glancy at Sheffield University explains: 'Our model describes the huddle as a self-organising system.
'[It] reveals how complex group behaviours can emerge from very simple interactions between animals.'
Huddling is an important example of a self-organising behaviour with a clear evolutionary advantage, because animals that can coordinate their movements to keep warm are more likely to survive.
Future experiments could test the accuracy of the model, and shed light on how evolution might take advantage of useful tricks like huddling.
This work, published in the journal PLOS Computational Biology, is part a study into how huddling equations could be used to coordinate movement patterns in teams of cooperating robots.
It could also be used to create 'bio hybrid' teams of robots and animals.
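To illustrate how rotation can emerge from simple local rules, here is a toy huddle model. It is a hypothetical sketch, not the researchers' published model: rats are ordered from core to edge, edge rats lose heat, core rats are warmed, and any rat that gets too cold burrows to the core, displacing the others outward.

```python
def simulate_huddle(n_rats=10, steps=200):
    """Toy huddle: rats ordered from core (index 0) to edge (index n-1).

    Edge rats lose heat; core rats are warmed by the group (the
    'sacrifice' of heat described in the article). Whenever a rat's
    temperature drops below a threshold it burrows to the core,
    displacing the others one slot toward the edge -- producing the
    rotation described above. Parameter values are illustrative.
    """
    temps = {r: 37.0 for r in range(n_rats)}
    order = list(range(n_rats))              # order[0] = core, order[-1] = edge
    edge_visits = {r: 0 for r in range(n_rats)}
    for _ in range(steps):
        for pos, rat in enumerate(order):
            exposure = pos / (n_rats - 1)    # 0 at the core, 1 at the edge
            temps[rat] += 0.5 * (1 - exposure) - 0.8 * exposure
            temps[rat] = min(temps[rat], 37.0)   # cap at body temperature
        edge_visits[order[-1]] += 1
        coldest = min(order, key=lambda r: temps[r])
        if temps[coldest] < 35.0:            # too cold: burrow to the core
            order.remove(coldest)
            order.insert(0, coldest)
    return edge_visits

visits = simulate_huddle()
# The huddle rotates: every rat takes a turn on the cold edge.
assert all(v > 0 for v in visits.values())
```

No rat is told to rotate; the cycling falls out of each individual's selfish response to getting cold, which is the "self-organising" point of the study.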
RATS COULD ONE DAY BE BIGGER THAN COWS, CLAIMS STUDY
Rats could grow to the size of cows or even bigger as they evolve to fill vacant ecological niches
Giant rats, the size of cows or even bigger, could one day fill a 'significant chunk' of Earth's emptying ecospace.
The terrifying scenario could become a reality as super-adaptable rats take advantage of larger mammals becoming extinct, an expert predicts.
'Animals will evolve, over time, into whatever designs will enable them to survive and to produce offspring,' said geologist Dr Jan Zalasiewicz, from the University of Leicester, following a study that was published last year.
For instance, in the Cretaceous Period, when the dinosaurs lived, there were mammals, but these were very small, rat and mouse-sized, because dinosaurs occupied the larger ecological niches.
Only once the dinosaurs were out of the way did these mammals evolve into many different forms.
'Given enough time, rats could probably grow to be at least as large as the capybara, the world's largest living rodent, which can reach 80 kilos (176 lb). If the ecospace was sufficiently empty, then they could get larger still.'
The Earth’s climate has, during its long history, oscillated between warm periods and cool periods, but the extreme ranges have remained within biological limits (Lovelock, 1982). Such oscillations have been attributed to a wide variety of natural phenomena: perturbations in the Earth’s major orbital parameters, changes in the solar energy flux, changes in atmospheric gas components, changes in particulate matter within the atmosphere such as volcanic dust, changes in ocean currents such as with El Niño, changes within biological life itself such as described by the Gaia hypotheses, changes in the distribution of continents and oceans, and even combinations of several of these phenomena. The surface of the Earth is slowly but constantly being changed by geologic processes, and local weather and climate change accordingly. The major cool periods of the past were, however, associated with times when there was land in a polar or subpolar position; warm periods were associated with times when land masses were in temperate or equatorial positions.
The space allotted for this short chapter does not permit discussion of a “normal” climate of the Earth, or even if the Earth has one. This short synthesis deals only with broad generalities, and gives a small sample of the various methods used to decipher our climatic changes during the Quaternary. Literature on this subject is vast and grows at a fast pace, but most of it concerns the Holocene and Wisconsin paleoclimatology; very little of it deals with pre-Wisconsin Quaternary.
The terrestrial system
Areas of Composite Shapes
Find the areas of combined shapes made up of one or more simple polygons and circles
This is level 2: using letters to show how the areas of composite shapes are calculated.
If the area of [the first shape] is a and the area of [the second shape] is b, then the area of this compound shape is 2a - b. Find the simplest way of expressing the areas of these shapes:
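The same idea, expressing a composite area in terms of the simple areas it is built from, can be sketched in a few lines. The shapes below (a rectangle with a semicircle cut out) are illustrative examples, not the shapes pictured in the exercise.

```python
import math

def rectangle_area(width, height):
    return width * height

def semicircle_area(radius):
    return math.pi * radius ** 2 / 2

def composite_area(width, height, radius):
    """Rectangle with a semicircular hole: composite area = a - b."""
    return rectangle_area(width, height) - semicircle_area(radius)

# Two shapes of area a overlapping in a region of area b cover 2a - b,
# the kind of expression asked for in the exercise above.
a, b = 16.0, 4.0
assert 2 * a - b == 28.0

# Rectangle 10 x 6 minus a semicircle of radius 3:
assert abs(composite_area(10, 6, 3) - (60 - math.pi * 4.5)) < 1e-9
```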
Try your best to answer the questions above. Type your answers into the boxes provided leaving no spaces. As you work through the exercise regularly click the "check" button. If you have any wrong answers, do your best to do corrections but if there is anything you don't understand, please ask your teacher for help.
When you have got all of the questions correct you may want to print out this page and paste it into your exercise book. If you keep your work in an ePortfolio you could take a screen shot of your answers and paste that into your Maths file.
This web site contains over a thousand free mathematical activities for teachers and pupils. Click here to go to the main page which links to all of the resources available.
Please contact me if you have any suggestions or questions.
Mathematicians are not the people who find Maths easy; they are the people who enjoy how mystifying, puzzling and hard it is. Are you a mathematician?
Comment recorded on the 3 October 'Starter of the Day' page by Mrs Johnstone, 7Je:
"I think this is a brilliant website as all the students enjoy doing the puzzles and it is a brilliant way to start a lesson."
Comment recorded on the 9 May 'Starter of the Day' page by Liz, Kuwait:
"I would like to thank you for the excellent resources which I used every day. My students would often turn up early to tackle the starter of the day as there were stamps for the first 5 finishers. We also had a lot of fun with the fun maths. All in all your resources provoked discussion and the students had a lot of fun."
There are answers to this exercise, but they are only available to teachers, tutors and parents who have logged in to their Transum subscription on this computer.
A Transum subscription unlocks the answers to the online exercises, quizzes and puzzles. It also provides the teacher with access to quality external links on each of the Transum Topic pages and the facility to add to the collection themselves.
Subscribers can manage class lists, lesson plans and assessment data in the Class Admin application and have access to reports of the Transum Trophies earned by class members.
If you would like to enjoy ad-free access to the thousands of Transum resources, receive our monthly newsletter, unlock the printable worksheets and see our Maths Lesson Finishers then sign up for a subscription now:Subscribe
Learning and understanding Mathematics, at every level, requires learner engagement. Mathematics is not a spectator sport. Sometimes traditional teaching fails to actively involve students. One way to address the problem is through the use of interactive activities and this web site provides many of those. The Go Maths page is an alphabetical list of free activities designed for students in Secondary/High school.
Are you looking for something specific? An exercise to supplement the topic you are studying at school at the moment perhaps. Navigate using our Maths Map to find exercises, puzzles and Maths lesson starters grouped by topic.
If you found this activity useful don't forget to record it in your scheme of work or learning management system. The short URL, ready to be copied and pasted, is as follows:
Do you have any comments? It is always useful to receive feedback and helps make this free resource even more useful for those learning Mathematics anywhere in the world. Click here to enter your comments.
© Transum Mathematics :: This activity can be found online at:
Level 1 - A reminder of the area formulas for basic shapes
Level 2 - Using letters to show how the areas of composite shapes are calculated
Level 3 - Composite shapes made up of rectangles
Level 4 - Composite shapes made up of quadrilaterals, triangles and circles
Level 5 - Real life composite area questions from photographs
Level 6 - Some puzzling composite area questions designed to challenge
Exam Style questions are in the style of GCSE or IB/A-level exam paper questions and worked solutions are available for Transum subscribers. | <urn:uuid:c71dd936-d9b1-4744-a8eb-5797c0f72be8> | 4.1875 | 912 | Tutorial | Science & Tech. | 46.06343 | 95,540,527 |
Nicholas A. Dembsey
Jonathan R. Barnett
Brian J. Savilonis
Kathy A. Notarianni
The Cone Calorimeter has been widely used as a bench-scale apparatus for various purposes. The retainer frame (edge frame) was originally designed to reduce unrepresentative edge burning of specimens. In general, the frame has been used in most Cone tests without an adequate understanding of its effect. One-dimensional (1D) conditions are essential for estimating the thermal properties of materials, and it has been implicitly assumed that heat conduction in the Cone Calorimeter is 1D with the current specimen preparation. However, this assumption has not been explicitly corroborated to date. The first objective of this study was to evaluate the heat transfer behavior of a Cone specimen by examining its three-dimensional (3D) heat conduction. It is essential to understand the role of wall lining materials when they are exposed to a fire from an ignition source. Full-scale test methods permit an assessment of the performance of a wall lining material, but fire growth models have been developed because of the high cost of full-scale testing. These models require heat flux maps from the ignition burner flame as input data. Work to date has been impeded by a lack of detailed spatial characterization of these heat flux maps owing to the use of limited instrumentation. Accurate and detailed heat flux maps from the ignition burner are therefore essential to increase the power of fire modeling. An infrared camera can provide high spatial resolution for surface temperature. The second objective of this study was to develop a heat flux mapping procedure for a room test burner flame to a wall configuration, using surface temperature information taken from an infrared camera. A prototype experiment was performed using the ISO 9705 test burner to demonstrate the developed procedure. The results of the experiment allow the heat flux and spatial resolutions of the method to be determined and compared to the methods currently available.
Worcester Polytechnic Institute
Fire Protection Engineering
All authors have granted to WPI a nonexclusive royalty-free license to distribute copies of the work. Copyright is held by the author or authors, with all rights reserved, unless otherwise noted. If you have any questions, please contact email@example.com.
Choi, K. (2005). 3D Thermal Mapping of Cone Calorimeter Specimen and Development of a Heat Flux Mapping Procedure Utilizing an Infrared Camera. Retrieved from https://digitalcommons.wpi.edu/etd-dissertations/57
temperature measurement, heat flux maps, Cone Calorimeter, three-dimensional heat conduction, fire growth models, retainer frame, ceramic fiberboard, edge effect, one-dimensional heat conduction, heat flux mapping procedure, infrared camera, specimen preparation, edge frame, one-dimensional heat conduction model, thermal properties, heat transmission, heat conduction, walls, fire testing, cone calorimeters, infrared photography
One hundred and seventy-three years ago, the last two Great Auks, Pinguinus impennis, ever reliably seen were killed. Their internal organs can be found in the collections of the Natural History Museum of Denmark, but the location of their skins has remained a mystery. In 1999, Great Auk expert Errol Fuller proposed a list of five potential candidate skins in museums around the world. Here we take a palaeogenomic approach to test which, if any, of Fuller's candidate skins likely belong to either of the two birds. Using mitochondrial genomes from the five candidate birds (housed in museums in Bremen, Brussels, Kiel, Los Angeles, and Oldenburg) and the organs of the last two known individuals, we partially solve the mystery that has occupied Great Auk scholars for generations and make new suggestions as to the whereabouts of the still-missing skins of these two birds.
Longer ice-free seasons increase the risk of nest depredation by polar bears for colonial breeding birds in the Canadian Arctic
- Proceedings. Biological sciences / The Royal Society
- Published over 4 years ago
Northern polar regions have warmed more than other parts of the globe potentially amplifying the effects of climate change on biological communities. Ice-free seasons are becoming longer in many areas, which has reduced the time available to polar bears (Ursus maritimus) to hunt for seals and hampered bears' ability to meet their energetic demands. In this study, we examined polar bears' use of an ancillary prey resource, eggs of colonial nesting birds, in relation to diminishing sea ice coverage in a low latitude region of the Canadian Arctic. Long-term monitoring reveals that bear incursions onto common eider (Somateria mollissima) and thick-billed murre (Uria lomvia) nesting colonies have increased greater than sevenfold since the 1980s and that there is an inverse correlation between ice season length and bear presence. In surveys encompassing more than 1000 km of coastline during years of record low ice coverage (2010-2012), we encountered bears or bear sign on 34% of eider colonies and estimated greater egg loss as a consequence of depredation by bears than by more customary nest predators, such as foxes and gulls. Our findings demonstrate how changes in abiotic conditions caused by climate change have altered predator-prey dynamics and are leading to cascading ecological impacts in Arctic ecosystems.
Unmanned aerial vehicles (UAVs) provide an opportunity to rapidly census wildlife in remote areas while removing some of the hazards. However, wildlife may respond negatively to the UAVs, thereby skewing counts. We surveyed four species of Arctic cliff-nesting seabirds (glaucous gull Larus hyperboreus, Iceland gull Larus glaucoides, common murre Uria aalge and thick-billed murre Uria lomvia) using a UAV and compared censusing techniques to ground photography. An average of 8.5% of murres flew off in response to the UAV, but >99% of those birds were non-breeders. We were unable to detect any impact of the UAV on breeding success of murres, except at a site where aerial predators were abundant and several birds lost their eggs to predators following UAV flights. Furthermore, we found little evidence for habituation by murres to the UAV. Most gulls flew off in response to the UAV, but returned to the nest within five minutes. Counts of gull nests and adults were similar between UAV and ground photography; however, the UAV detected up to 52.4% more chicks because chicks were camouflaged and invisible to ground observers. UAVs provide a less hazardous and potentially more accurate method for surveying wildlife. We provide some simple recommendations for their use.
High flight costs, but low dive costs, in auks support the biomechanical hypothesis for flightlessness in penguins
- Proceedings of the National Academy of Sciences of the United States of America
- Published about 5 years ago
Flight is a key adaptive trait. Despite its advantages, flight has been lost in several groups of birds, notably among seabirds, where flightlessness has evolved independently in at least five lineages. One hypothesis for the loss of flight among seabirds is that animals moving between different media face tradeoffs between maximizing function in one medium relative to the other. In particular, biomechanical models of energy costs during flying and diving suggest that a wing designed for optimal diving performance should lead to enormous energy costs when flying in air. Costs of flying and diving have been measured in free-living animals that use their wings to fly or to propel their dives, but not both. Animals that both fly and dive might approach the functional boundary between flight and nonflight. We show that flight costs for thick-billed murres (Uria lomvia), which are wing-propelled divers, and pelagic cormorants (Phalacrocorax pelagicus) (foot-propelled divers), are the highest recorded for vertebrates. Dive costs are high for cormorants and low for murres, but the latter are still higher than for flightless wing-propelled diving birds (penguins). For murres, flight costs were higher than predicted from biomechanical modeling, and the oxygen consumption rate during dives decreased with depth at a faster rate than estimated biomechanical costs. These results strongly support the hypothesis that function constrains form in diving birds, and that optimizing wing shape and form for wing-propelled diving leads to such high flight costs that flying ceases to be an option in larger wing-propelled diving seabirds, including penguins.
Tufted Puffin (Fratercula cirrhata) populations have experienced dramatic declines since the mid-19th century along the southern portion of the species range, leading citizen groups to petition the United States Fish and Wildlife Service (USFWS) to list the species as endangered in the contiguous US. While there remains no consensus on the mechanisms driving these trends, population decreases in the California Current Large Marine Ecosystem suggest climate-related factors, and in particular the indirect influence of sea-surface temperature on puffin prey. Here, we use three species distribution models (SDMs) to evaluate projected shifts in habitat suitable for Tufted Puffin nesting for the year 2050 under two future Intergovernmental Panel on Climate Change (IPCC) emission scenarios. Ensemble model results indicate warming marine and terrestrial temperatures play a key role in the loss of suitable Tufted Puffin nesting conditions in the California Current under both business-as-usual (RCP 8.5) and moderated (RCP 4.5) carbon emission scenarios, and in particular, that mean summer sea-surface temperatures greater than 15 °C are likely to make habitat unsuitable for breeding. Under both emission scenarios, ensemble model results suggest that more than 92% of currently suitable nesting habitat in the California Current is likely to become unsuitable. Moreover, the models suggest a net loss of greater than 21% of suitable nesting sites throughout the entire North American range of the Tufted Puffin, regardless of emission-reduction strategies. These model results highlight continued Tufted Puffin declines-particularly among southern breeding colonies-and indicate a significant risk of near-term extirpation in the California Current Large Marine Ecosystem.
Detailed information acquired using tracking technology has the potential to provide accurate pictures of the types of movements and behaviors performed by animals. To date, such data have not been widely exploited to provide inferred information about the foraging habitat. We collected data using multiple sensors (GPS, time depth recorders, and accelerometers) from two species of diving seabirds, razorbills (Alca torda, N = 5, from Fair Isle, UK) and common guillemots (Uria aalge, N = 2 from Fair Isle and N = 2 from Colonsay, UK). We used a clustering algorithm to identify pursuit and catching events and the time spent pursuing and catching underwater, which we then used as indicators for inferring prey encounters throughout the water column and responses to changes in prey availability of the areas visited at two levels: individual dives and groups of dives. For each individual dive (N = 661 for guillemots, 6214 for razorbills), we modeled the number of pursuit and catching events, in relation to dive depth, duration, and type of dive performed (benthic vs. pelagic). For groups of dives (N = 58 for guillemots, 156 for razorbills), we modeled the total time spent pursuing and catching in relation to time spent underwater. Razorbills performed only pelagic dives, most likely exploiting prey available at shallow depths as indicated by the vertical distribution of pursuit and catching events. In contrast, guillemots were more flexible in their behavior, switching between benthic and pelagic dives. Capture attempt rates indicated that they were exploiting deep prey aggregations. The study highlights how novel analysis of movement data can give new insights into how animals exploit food patches, offering a unique opportunity to comprehend the behavioral ecology behind different movement patterns and understand how animals might respond to changes in prey distributions.
Which factors shape animals' migration movements across large geographical scales, how different migratory strategies emerge between populations, and how these may affect population dynamics are central questions in the field of animal migration that only large-scale studies of migration patterns across a species' range can answer. To address these questions, we track the migration of 270 Atlantic puffins Fratercula arctica, a red-listed, declining seabird, across their entire breeding range. We investigate the role of demographic, geographical, and environmental variables in driving spatial and behavioral differences on an ocean-basin scale by measuring puffins' among-colony differences in migratory routes and day-to-day behavior (estimated with individual daily activity budgets and energy expenditure). We show that competition and local winter resource availability are important drivers of migratory movements, with birds from larger colonies or with poorer local winter conditions migrating further and visiting less-productive waters; this in turn led to differences in flight activity and energy expenditure. Other behavioral differences emerge with latitude, with foraging effort and energy expenditure increasing when birds winter further north in colder waters. Importantly, these ocean-wide migration patterns can ultimately be linked with breeding performance: colony productivity is negatively associated with wintering latitude, population size, and migration distance, which demonstrates the cost of competition and migration on future breeding and the link between non-breeding and breeding periods. Our results help us to understand the drivers of animal migration and have important implications for population dynamics and the conservation of migratory species.
Microplastics have been reported everywhere around the globe. With very limited human activities, the Arctic is distant from major sources of microplastics. However, microplastic ingestions have been found in several Arctic marine predators, confirming their presence in this region. Nonetheless, existing information for this area remains scarce, thus there is an urgent need to quantify the contamination of Arctic marine waters. In this context, we studied microplastic abundance and composition within the zooplankton community off East Greenland. For the same area, we concurrently evaluated microplastic contamination of little auks (Alle alle), an Arctic seabird feeding on zooplankton while diving between 0 and 50 m. The study took place off East Greenland in July 2005 and 2014, under strongly contrasted sea-ice conditions. Among all samples, 97.2% of the debris found were filaments. Despite the remoteness of our study area, microplastic abundances were comparable to those of other oceans, with 0.99 ± 0.62 m(-3) in the presence of sea-ice (2005), and 2.38 ± 1.11 m(-3) in the nearby absence of sea-ice (2014). Microplastic rise between 2005 and 2014 might be linked to an increase in plastic production worldwide or to lower sea-ice extents in 2014, as sea-ice can represent a sink for microplastic particles, which are subsequently released to the water column upon melting. Crucially, all birds had eaten plastic filaments, and they collected high levels of microplastics compared to background levels with 9.99 and 8.99 pieces per chick meal in 2005 and 2014, respectively. Importantly, we also demonstrated that little auks took more often light colored microplastics, rather than darker ones, strongly suggesting an active contamination with birds mistaking microplastics for their natural prey. 
Overall, our study stresses the great vulnerability of Arctic marine species to microplastic pollution in a warming Arctic, where sea-ice melting is expected to release vast volumes of trapped debris.
Pair collaborative behavior may play an important role in avian reproduction. However, evidence for this mainly comes from certain ecological groups (e.g. passerines). We studied the coordination of parents in foraging and its effect on food provisioning rate and chick growth in a small seabird, the Dovekie (Little auk, Alle alle). The species exhibits a dual foraging strategy, in which provisioning adults make foraging trips of short duration (mean ~2 h; to provide food for the chick) and long duration (mean ~13 h; mainly for adult self-maintenance, although food is also brought to the chick). We expected that offspring would benefit if parents coordinated their foraging patterns, one making short trips while the other performs a long one. We examined this hypothesis using Monte Carlo randomization tests on field data collected during observations of individually marked birds. We found that parents did indeed adjust provisioning, making their long and short trips in an alternating pattern with respect to each other. Furthermore, we found that a higher level of coordination is associated with lower variability in the duration of inter-feeding intervals, although this does not affect chick growth. Nevertheless, our results provide compelling evidence of coordinated behavior between breeding partners.
Breeding density, fine-scale tracking and large-scale modeling reveal the regional distribution of four seabird species
- Ecological applications : a publication of the Ecological Society of America
- Published about 1 year ago
Population-level estimates of species' distributions can reveal fundamental ecological processes and facilitate conservation. However, these may be difficult to obtain for mobile species, especially colonial central-place foragers (CCPFs; e.g. bats, corvids, social insects), because it is often impractical to determine the provenance of individuals observed beyond breeding sites. Moreover, some CCPFs, especially in the marine realm (e.g. pinnipeds, turtles and seabirds) are difficult to observe because they range 10s to 10,000s km from their colonies. It is hypothesized that the distribution of CCPFs depends largely on habitat availability and intraspecific competition. Modeling these effects may therefore allow distributions to be estimated from samples of individual spatial usage. Such data can be obtained for an increasing number of species using tracking technology. However, techniques for estimating population-level distributions using the telemetry data are poorly developed. This is of concern because many marine CCPFs, such as seabirds, are threatened by anthropogenic activities. Here, we aim to estimate the distribution at sea of four seabird species, foraging from approximately 5500 breeding sites in Britain and Ireland. To do so, we GPS-tracked a sample of 230 European shags Phalacrocorax aristotelis, 464 black-legged kittiwakes Rissa tridactyla, 178 common murres Uria aalge and 281 razorbills Alca torda from 13, 20, 12 and 14 colonies respectively. Using Poisson point process habitat use models, we show that distribution at sea is dependent on: (i) density-dependent competition among sympatric conspecifics (all species) and parapatric conspecifics (kittiwakes and murres); (ii) habitat accessibility and coastal geometry, such that birds travel further from colonies with limited access to the sea; and (iii) regional habitat availability. 
Using these models, we predict space use by birds from unobserved colonies and thereby map the distribution at sea of each species at both the colony and regional level. Space use by all four species' British breeding populations is concentrated in the coastal waters of Scotland, highlighting the need for robust conservation measures in this area. The techniques we present are applicable to any CCPF.
The two black holes are only about 1,500 light-years apart in the heart of Cygnus A.
NASA revealed a stunning image of the Crab Nebula, a remnant of a supernova explosion. The image combines data from many different instruments.
Mice lose their fear of territorial rivals when a tiny piece of their brain is neutralized, a new study reports.
The study adds to evidence that primal fear responses do not depend on the amygdala, long a favored target of fear researchers, but on an obscure corner of the primeval brain.
A group of neuroscientists led by Larry Swanson of the University of Southern California studied the brain activity of rats and mice exposed to cats, or to rival rodents defending their territory.
Both experiences activated neurons in the dorsal premammillary nucleus, part of an ancient brain region called the hypothalamus.
Swanson's group then made tiny lesions in the same area. Those rodents behaved far differently.
"These animals are not afraid of a predator," Swanson said. "It's almost like they go up and shake hands with a predator."
Lost fear of cats in rodents with such lesions has been observed before. More important for studies of social interaction, the study replicated the finding for male rats that wandered into another male's territory.
Instead of adopting the usual passive pose, the intruder frequently stood upright and boxed with the resident male, avoided exposing his neck and back, and came back for more even when losing.
"It's amazing that these lesions appear to abolish innate fear responses," said Swanson, who added: "The same basic circuitry is found in primates and people that we find in rats and mice."
The study was slated for online publication the week of March 9 in Proceedings of the National Academy of Sciences.
Swanson predicted that his group's findings would shift some research away from the amygdala, a major target of fear studies for the past 30 years.
"This is a new perspective on what part of the brain controls fear," he said.
He explained that most amygdala studies have focused on a different type of fear, which might more accurately be called caution or risk aversion.
In those studies, animals receive an electric shock to their feet. When placed in the same environment a few days later, they display caution and increased activity of the amygdala.
But the emotion experienced in that case may differ from the response to a physical attack.
"We're not just dealing with one system that controls all fear," Swanson said.
Swanson and collaborators have been studying the role of the hypothalamus in the fear response since 1992.
Because of its role in basic survival functions such as feeding, reproduction and the sleep-wake cycle, the hypothalamus seems a plausible candidate for fear studies.
Yet, said Swanson, "nobody's paid any attention to it."
The PNAS study is the most recent of several by Swanson on fear and the hypothalamus. The few other researchers in the area include Newton Canteras of the University of Sao Paulo in Brazil, who collaborated with Swanson on the PNAS study, as well as Robert and Caroline Blanchard of the University of Hawaii.
Carl Marziali | EurekAlert!
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Furu Mienis studied the development of carbonate mounds dominated by cold-water corals in the Atlantic Ocean at depths of six hundred to a thousand metres. These reefs can be found along the eastern continental slope from Morocco to Norway, on the Mid-Atlantic Ridge and on the western continental slope along the east coast of Canada and the United States. Mienis studied the area to the west of Ireland along the edges of the Rockall Trough.
In her research Mienis analysed environmental factors like temperature, current speed and flow direction of seawater, as these determine the growth of cold-water corals and the carbonate mounds. The measurements were made using bottom landers, observatories placed on the seabed from the NIOZ oceanographic research vessel 'Pelagia' and brought back to the surface a year later.

Food highways down to the deep
This research was funded by the Netherlands Organisation for Scientific Research (NWO) and the European Science Foundation (ESF).
Kim van den Wijngaard | alfa
NASA’s Juno spacecraft capped a five-year journey to Jupiter on Monday with a do-or-die engine burn that looped it into orbit to probe the origins of the biggest planet in the solar system and how it affected the rise of life on Earth, the US space agency said.
Juno fired its main engine for 35 minutes beginning at 11:18 a.m. EDT/0318 Tuesday GMT, slowing the spacecraft so it could be captured by the planet’s gravity.
Once in position to begin its 20-month science mission, Juno will fly in egg-shaped orbits, each one lasting 14 days, to learn if Jupiter has a dense core beneath its clouds and map its massive magnetic field.
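As a rough cross-check of those orbit figures, Kepler's third law gives the semi-major axis of a 14-day orbit around Jupiter. This is an illustrative back-of-the-envelope sketch; the gravitational parameter below is a standard textbook value for Jupiter's GM, not a number from the mission.

```python
import math

# Kepler's third law: a = (mu * T^2 / (4*pi^2))**(1/3) for a two-body orbit.
# MU_JUPITER is an assumed standard value for Jupiter's GM, in m^3/s^2.
MU_JUPITER = 1.26687e17

def semi_major_axis_m(period_s):
    """Semi-major axis (meters) of an orbit with the given period (seconds)."""
    return (MU_JUPITER * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

a = semi_major_axis_m(14 * 86400)   # one 14-day science orbit
print(round(a / 1e9, 2))            # ~1.67 (billion meters, i.e. ~1.7 million km)
```

A semi-major axis of roughly 1.7 million kilometers, about 24 Jupiter radii, is simply the average of each egg-shaped loop's near and far points; the eccentricity itself is not fixed by the period alone.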
The probe also will hunt for water in Jupiter’s thick atmosphere, a key yardstick for figuring out how far away from the sun the gas giant formed. Jupiter, which could hold 1,300 Earths, orbits five times farther from the sun than Earth, but it may have started out elsewhere and migrated, jostling its smaller sibling planets as it moved.
Earth and Mars were positioned at the right distance from the sun for liquid surface water, which is believed to be necessary for life. Scientists have been studying Mars to figure out why the planet lost its water.
Jupiter’s immense gravity also diverts many asteroids and comets from potentially catastrophic collisions with Earth and the rest of the inner solar system.
NASA expects Juno to be in position for its first close-up images of Jupiter on Aug. 27, the same day its science instruments are turned on for a test run.
Juno is only the second spacecraft to orbit Jupiter, following NASA’s 1995-2003 Galileo mission.
According to Ars Technica, Bill Gerstenmaier, NASA's associate administrator for human exploration and operations, has announced that the space agency is looking at moving on from the International Space Station and beginning to send humans into cislunar space, the region of space near the Moon. The agency does not have enough money both to operate the ISS and to continue exploring deeper space, so it hopes the private sector will take over. If not, the ISS will be left to fall into the Pacific Ocean.
NASA says it would like to see the private space industry “take over” low-Earth orbit, although it acknowledges that any successor space station or orbiting module will be far smaller than the $140 billion space station, a collaboration between 15 countries. The message from NASA to the US industry is simple: we’re serious about the commercialization of low-Earth orbit, we have this marvelous facility available with unique capabilities, and we want you to use the heck out of it.
No one really knows what innovations may come from working in microgravity. Financially, does it make sense to fabricate delicate nanostructures or grow protein crystals there? Other opportunities for commercial development include space tourism, industry, and marketing. Some of these ideas will work, and some will fail. Companies should find out now by experimenting on the station while it exists, NASA officials say, and while the space agency is paying for most of the costs.
How on earth are busy nerve cells supposed to pick out and respond to relevant signals amidst all that information overload?
Somehow neurons do manage to accomplish the daunting task, and they do it with more finesse than anyone ever realized, new research by University of Michigan mathematician Daniel Forger and coauthors demonstrates. Their findings---which not only add to basic knowledge about how neurons work, but also suggest ways of better designing the brain implants used to treat diseases such as Parkinson's disease---were published July 7 in the online, open-access journal PLoS Computational Biology.
Forger and coauthors David Paydarfar at the University of Massachusetts Medical School and John Clay at the National Institute of Neurological Disorders and Stroke studied neuronal excitation using mathematical models and experiments with that most famous of neuroscience study subjects, the squid giant axon---a long arm of a nerve cell that controls part of the water jet propulsion system in squid.
Among the key findings: Neurons are quite adept at their job. "They can pick out a signal from hundreds of other, similar signals," said Forger, an associate professor of mathematics in the College of Literature, Science and the Arts and a research assistant professor of computational medicine and bioinformatics at the U-M Medical School.
Neurons discriminate among signals based on the signals' "shape," (how a signal changes over time), and Forger and coauthors found that, contrary to prior belief, a neuron's preference depends on context. Neurons are often compared to transistors on a computer, which search for and respond to one specific pattern, but it turns out that neurons are more complex than that. They can search for more than one signal at the same time, and their choice of signal depends on what else is competing for their attention.
"We found that a neuron can prefer one signal---call it signal A---when compared with a certain group of signals, and a different signal---call it signal B---when compared with another group of signals," Forger said. This is true even when signal A and signal B aren't at all alike.
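The shape-dependence the authors describe can be illustrated with a toy leaky integrate-and-fire model. This is only a sketch with made-up parameters, not the Hodgkin-Huxley squid-axon model used in the study: two inputs deliver the same total charge, but only the briefly concentrated one drives the integrator over threshold.

```python
def spikes(current, dt=0.1, tau=5.0, v_thresh=1.0):
    """Count threshold crossings of a leaky integrator driven by `current` (arbitrary units)."""
    v, n = 0.0, 0
    for i_t in current:
        v += dt * (-v / tau + i_t)   # leaky integration (Euler step)
        if v >= v_thresh:
            n += 1
            v = 0.0                  # reset after a spike
    return n

pulse = [0.5] * 50 + [0.0] * 450     # brief, strong input
spread = [0.05] * 500                # same total charge, spread out in time
print(spikes(pulse), spikes(spread)) # → 1 0
```

Real neurons are far richer than this caricature, which is precisely the paper's point: which of two equal-charge signals a neuron "prefers" can even flip depending on what else it is being compared against.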
The findings could contribute in two main ways to the design and use of brain implants in treating neurological disorders.
"First, our results determine the optimal signals to stimulate a neuron," Forger said. "These signals are much more effective and require less battery power than what is currently used." Such efficiency would translate into less frequent surgery to replace batteries in patients with brain implants.
"Second, we found that the optimal stimulus is context-dependent," he said, "so the best signal will differ, depending on the part of the brain where the implant is placed."
The research was funded by the Air Force Office of Scientific Research and the National Institutes of Health.

More information: PLoS Computational Biology, http://www.ploscompbiol.org/home.action
Nancy Ross-Flanigan | Newswise Science News
Physicists at the National Institute of Standards and Technology (NIST) have measured relativistic time dilation at a more down-to-earth scale of 33 centimeters, or about 1 foot, demonstrating, for instance, that you age faster when you stand a couple of steps higher on a staircase.
Described in the Sept. 24 issue of Science,* the difference is much too small for humans to perceive directly—adding up to approximately 90 billionths of a second over a 79-year lifetime—but may provide practical applications in geophysics and other fields.
The NIST researchers also observed another aspect of relativity—that time passes more slowly when you move faster—at speeds comparable to a car travelling about 20 miles per hour, a more comprehensible scale than previous measurements made using jet aircraft.
NIST scientists performed the new “time dilation” experiments by comparing operations of a pair of the world’s best experimental atomic clocks. The nearly identical clocks are each based on the “ticking” of a single aluminum ion as it vibrates between two energy levels over a million billion times per second. One clock keeps time to within 1 second in about 3.7 billion years (see NIST announcement from Feb. 4, 2010, “NIST’s Second ‘Quantum Logic Clock’ Based on Aluminum Ion is Now World’s Most Precise Clock” at http://www.nist.gov/physlab/div847/logicclock_020410.cfm) and the other is close behind in performance. The clocks are precise and stable enough to reveal slight differences that could not be seen until now.
The NIST experiments test two predictions of Einstein’s theories of relativity. First, when two clocks are subjected to unequal gravitational forces due to their different elevations above the surface of the Earth, the higher clock—experiencing a smaller gravitational force—runs faster. Second, when an observer is moving, a stationary clock’s tick appears to last longer, so the clock appears to run slow. Scientists refer to this as the “twin paradox,” in which a twin sibling who travels on a fast-moving rocket ship would return home younger than the other twin.
In one set of experiments, scientists raised one of the clocks by jacking up the laser table to a height one-third of a meter (about a foot) above the second clock. Sure enough, the higher clock ran at a slightly faster rate than the lower clock, exactly as predicted.
The second set of experiments examined the effects of altering the physical motion of the ion in one clock. The ions are almost completely motionless during normal clock operations. NIST scientists tweaked the one ion so that it gyrated back and forth at speeds equivalent to several meters per second. That clock ticked at a slightly slower rate than the second clock, as predicted by relativity.
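Both effects follow from the standard low-order formulas: a fractional rate increase of gh/c² for a raised clock and a fractional slowdown of v²/(2c²) for a moving one. A quick sketch with assumed everyday values (g = 9.81 m/s²; the 79-year lifetime quoted above) reproduces the magnitudes involved:

```python
# Relativistic rate shifts at everyday scales, using the weak-field
# approximation g*h/c^2 and the low-speed approximation v^2/(2*c^2).
# Assumed values: g = 9.81 m/s^2; 79-year lifetime as quoted in the article.
G = 9.81            # gravitational acceleration at Earth's surface, m/s^2
C = 299_792_458.0   # speed of light, m/s

def gravitational_shift(height_m):
    """Fractional rate increase of a clock raised by height_m."""
    return G * height_m / C**2

def velocity_shift(speed_mps):
    """Fractional rate decrease of a clock moving at speed_mps."""
    return speed_mps**2 / (2 * C**2)

lifetime_s = 79 * 365.25 * 24 * 3600
print(gravitational_shift(0.33))               # ~3.6e-17
print(gravitational_shift(0.33) * lifetime_s)  # ~9e-8 s: about 90 billionths of a second
print(velocity_shift(9.0))                     # a few m/s: ~4.5e-16
```

Driving at about 20 mph (roughly 9 m/s) slows a clock by parts in 10^16, an effect only clocks of this stability can resolve.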
Such comparisons of super-precise clocks eventually may be useful in geodesy, the science of measuring the Earth and its gravitational field, with applications in geophysics and hydrology, and possibly in space-based tests of fundamental physics theories, suggests physicist Till Rosenband, leader of NIST’s aluminum ion clock team.
The research was supported in part by the Office of Naval Research. For more details, see the NIST Sept. 23, 2010, announcement, “NIST Pair of Aluminum Atomic Clocks Reveal Einstein’s Relativity at a Personal Scale” at http://www.nist.gov/public_affairs/releases/aluminum-atomic-clock_092310.cfm.
* C.W. Chou, D.B. Hume, T. Rosenband and D.J. Wineland. Optical clocks and relativity. Science. Sept. 24, 2010
Laura Ost | Newswise Science News
New analysis of Pluto's atmosphere explains why New Horizons spacecraft measured temperatures much colder than predicted
The gas composition of a planet's atmosphere generally determines how much heat gets trapped in the atmosphere. For the dwarf planet Pluto, however, the predicted temperature based on the composition of its atmosphere was much higher than actual measurements taken by NASA's New Horizons spacecraft in 2015.
A new study published November 16 in Nature proposes a novel cooling mechanism controlled by haze particles to account for Pluto's frigid atmosphere.
"It's been a mystery since we first got the temperature data from New Horizons," said first author Xi Zhang, assistant professor of Earth and planetary sciences at UC Santa Cruz. "Pluto is the first planetary body we know of where the atmospheric energy budget is dominated by solid-phase haze particles instead of by gases."
The cooling mechanism involves the absorption of heat by the haze particles, which then emit infrared radiation, cooling the atmosphere by radiating energy into space. The result is an atmospheric temperature of about 70 Kelvin (minus 203 degrees Celsius, or minus 333 degrees Fahrenheit), instead of the predicted 100 Kelvin (minus 173 Celsius, or minus 280 degrees Fahrenheit).
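The size of the required haze cooling can be illustrated with a toy gray-body energy balance, in which the haze radiates away a fraction f of the absorbed heating so that the gas temperature scales as (1 - f)^(1/4) relative to the gas-only prediction. The numbers are illustrative only and do not come from the paper's radiative model:

```python
# Toy gray-body balance (illustrative): if haze radiates a fraction f of the
# absorbed heating directly to space, the gas equilibrium temperature scales
# as T * (1 - f)**0.25 relative to the gas-only prediction.
def cooled_temperature(t_gas_only_k, haze_fraction):
    return t_gas_only_k * (1.0 - haze_fraction) ** 0.25

def haze_fraction_needed(t_gas_only_k, t_observed_k):
    """Fraction of the heating the haze must radiate to reach t_observed_k."""
    return 1.0 - (t_observed_k / t_gas_only_k) ** 4

f = haze_fraction_needed(100.0, 70.0)   # predicted 100 K vs. measured ~70 K
print(round(f, 2))                      # → 0.76
```

In this toy picture the haze would have to carry off roughly three-quarters of the atmospheric heating budget to pull 100 K down to 70 K, which is why the excess infrared emission should be bright enough for a telescope to detect.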
According to Zhang, the excess infrared radiation from haze particles in Pluto's atmosphere should be detectable by the James Webb Space Telescope, allowing confirmation of his team's hypothesis after the telescope's planned launch in 2019.
Extensive layers of atmospheric haze can be seen in images of Pluto taken by New Horizons. The haze results from chemical reactions in the upper atmosphere, where ultraviolet radiation from the sun ionizes nitrogen and methane, which react to form tiny hydrocarbon particles tens of nanometers in diameter. As these tiny particles sink down through the atmosphere, they stick together to form aggregates that grow larger as they descend, eventually settling onto the surface.
"We believe these hydrocarbon particles are related to the reddish and brownish stuff seen in images of Pluto's surface," Zhang said.
The researchers are interested in studying the effects of haze particles on the atmospheric energy balance of other planetary bodies, such as Neptune's moon Triton and Saturn's moon Titan. Their findings may also be relevant to investigations of exoplanets with hazy atmospheres.
Zhang's coauthors are Darrell Strobel, a planetary scientist at Johns Hopkins University and co-investigator on the New Horizons mission, and Hiroshi Imanaka, a scientist at NASA Ames Research Center in Mountain View, who studies the chemistry of haze particles in planetary atmospheres. This research was funded by NASA.
Tim Stephens | EurekAlert!
Scientific Name: Balaenoptera bonaerensis Burmeister, 1867
Taxonomic Notes: Until the 1990s, only one species of Minke Whale was recognized, the Antarctic Minke Whale Balaenoptera bonaerensis being regarded as conspecific with the Common Minke Whale (B. acutorostrata). Most of the scientific literature prior to the late 1990s uses the name B. acutorostrata for all Minke Whales including Antarctic Minke Whales. Since 2000, the International Whaling Commission (IWC) Scientific Committee has recognized Antarctic Minke Whales as the separate species B. bonaerensis, while all Northern Hemisphere Minke Whales and all Southern Hemisphere "dwarf" Minke Whales are regarded as B. acutorostrata (IWC 2001). This has been followed by management and treaty bodies, such as the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). This is based on genetic and morphological evidence that the two Minke Whale species, which are partially sympatric in the Southern Hemisphere, are distinct species (Rice 1998, Wada et al. 1991, Pastene et al. 1994, Pastene et al. 2007).
Red List Category & Criteria: Near Threatened ver 3.1
Assessor(s): Cooke, J.G., Zerbini, A.N. & Taylor, B.L.
Reviewer(s): Reeves, R., Jackson, J. & Brownell Jr., R.L.
The Antarctic Minke Whale was previously listed as Data Deficient pending clarification of abundance and trends (Reilly et al. 2008). The IWC Scientific Committee has since accepted circumpolar population estimates of about 500,000 based on surveys conducted during 1993-2004 (IWC 2013). The population was estimated to have declined by 31% relative to the previous circumpolar surveys (1986-1991) but imprecision in the abundance estimates means that the decline is not statistically significant. In addition, an unknown proportion of the population would have been in unsurveyed pack ice habitat at the time of the surveys (Williams et al. 2014). The imprecise abundance and unknown proportion of whales in pack ice contributes to an overall lack of confidence in status determination based on decline rate. Because the decline may not have ceased and its causes are not understood, Red List criterion A2b for Vulnerable, for which the decline threshold is 30%, could apply. Given an estimated generation time of 22 years (Taylor et al. 2007), the time window for application of the A2 criterion would be 1952-2018. In the absence of a decline estimate for the whole period, and lacking understanding of the cause of the suspected decline, the Antarctic Minke Whale is classified as Near Threatened (NT), approaching Criterion A2b, following the Red List Guidelines because the species represents a case where, considering all available evidence, Least Concern, NT, and Vulnerable are equally plausible.
The Antarctic Minke Whale is considered a Southern Hemisphere species, although there are records north of the equator from Suriname (de Boer 2015) and occasional vagrants as far as the Arctic (Glover et al. 2010). In summer they are abundant throughout the Antarctic south of 60°S, occurring in greatest densities near the ice edge, and to some extent within the pack ice and in polynyas. Particularly high densities have been observed in some years in high Antarctic areas such as Prydz Bay, the Weddell Sea, and the Ross Sea (Kasamatsu et al. 1998). Although Common Minke Whales have been found in the Antarctic as far south as 65°S they are much less common there than Antarctic Minke Whales (Branch and Butterworth 2001), such that all “Minke Whale” abundance estimates south of 60°S can for practical purposes be treated as estimates of Antarctic Minke Whale abundance.
The winter distribution is not well known. Some Minke Whales remain in the Antarctic in winter (Ensor 1989). Following the unambiguous association of the "bio-duck" call with Antarctic Minke Whales (Risch et al. 2014), it has become easier to detect their presence in winter, and it appears they remain abundant year-round in at least some areas, such as the western Antarctic Peninsula (Dominella and Širović 2016). There is a wintering area off Costinha, Brazil (7°S), where Minke Whales, almost exclusively Antarctic Minke Whales, were the target of a whaling operation during 1964-85, with the peak abundance in October (da Rocha and Braga 1982). Minke Whales were also seen (and small numbers caught) off Durban, South Africa: the seasonal distribution was bimodal, with peaks in April/May and September/October, suggestive of migration past the area (Best 1982). There are occasional records from Peru (VanWaerebeek and Reyes 1994).
Migratory connections between wintering and feeding grounds are poorly known. The recovery of two Minke Whales marked in the Antarctic in Area II at 62° and 69°S (Buckland and Duff 1989) by the whaling station in the Costinha demonstrates that at least some individuals from Brazil migrate to the Antarctic. In addition, one whale accidentally marked with a Discovery mark at 28°S, 154°W and recovered at 73°S, 167°W (Horwood 1990) and another instrumented with a satellite transmitter near the Antarctic Peninsula and tracked to ~15°S, 100°W (Gales et al. 2013) provide evidence of similar north-south movements in the Pacific.
Native: Antarctica; Argentina; Australia; Brazil; Chile; French Southern Territories; Namibia; New Zealand; Peru; South Africa; South Georgia and the South Sandwich Islands; Suriname; Uruguay
FAO Marine Fishing Areas: Atlantic – southwest; Atlantic – southeast; Atlantic – western central; Indian Ocean – western; Indian Ocean – eastern; Indian Ocean – Antarctic; Pacific – southeast; Pacific – southwest; Pacific – Antarctic
As for other baleen whales, the IWC’s management of Antarctic Minke Whales has been based on six Areas, I through VI, which are longitudinal pie slices 50°–70° wide. The population structure is poorly known, but recent analyses suggest a genetic distinction between whales in the Indian Ocean sector of the Antarctic (west of 165°E) and the Pacific Ocean sector (east of this line) with presumably some overlap (Pastene and Goto 2016). With the exception of the two marked whales mentioned above, the relationship between the Antarctic distribution and putative breeding areas is largely unknown.
The IWC Scientific Committee in 2012 agreed upon abundance estimates totalling 720,000 (95% confidence interval (CI) 512,000-1,012,000) for the period 1986-91 and 515,000 (95% CI 361,000-733,000) for the period 1993-2002, with a 31% decline between the means of the two periods. However, the confidence intervals of the two estimates overlap and the IWC report listed a number of factors that could affect the comparison (IWC 2013). The Committee did not feel able to produce reliable estimates from the 1979-85 data. The Committee noted substantial inter-annual variability in the estimates over and above what would be expected from sampling variance, which is suggestive of genuine fluctuations in distribution (IWC 2015). The Committee has to date been unable to identify a definite cause for the decline, but has considered population models that are capable of reproducing the decline given certain assumptions (IWC 2015). Some evidence suggests that the pre-whaling population of Antarctic Minke Whales was lower than recent abundance (Mori and Butterworth 2006), while other evidence points to pre-whaling populations similar to or greater than recent abundance (Ruegg et al. 2010).
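The criterion A2 arithmetic behind the listing can be sketched as follows. The ~10-year gap between survey-period midpoints is an assumed round figure, and, as the assessment stresses, the true long-term trend is uncertain:

```python
# Red List criterion A2 sketch (illustrative; the assessment treats the
# long-term decline as uncertain). Generation time from Taylor et al. (2007).
GENERATION_YEARS = 22

def a2_window(assessment_year):
    """Three-generation window over which criterion A2 declines are assessed."""
    return assessment_year - 3 * GENERATION_YEARS, assessment_year

def annualized_decline(total_decline, years):
    """Constant annual rate equivalent to `total_decline` over `years` years."""
    return 1.0 - (1.0 - total_decline) ** (1.0 / years)

print(a2_window(2018))                         # → (1952, 2018)
print(round(annualized_decline(0.31, 10), 3))  # → 0.036 per year
```

Whether a decline of that size persisted over the full 66-year window is unknown, which is why the listing settles on Near Threatened rather than Vulnerable.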
Current Population Trend: Unknown

Habitat and Ecology:
While in the Antarctic, Minke Whales feed almost exclusively on euphausiids (krill), primarily Euphausia superba, but also E. crystallorophias, E. frigida, and Thysanoessa macrura (Tamura and Konishi 2009). Observed densities of Minke Whales are highest near the edge of the pack ice, but they also occur within the pack ice (Williams et al. 2014). It is not known whether Antarctic Minke Whales feed to any significant extent while outside the Antarctic on their wintering grounds or migration routes. Best (1982) found a very low level of feeding, almost entirely on euphausiids, by Antarctic Minke Whales taken in winter off Durban, South Africa. Antarctic Minke Whales may themselves be an important prey for type-A Killer Whales, Orcinus orca (Pitman and Ensor 2003).
The Antarctic Minke Whale is considered pagophilic (ice-loving) in the sense of being better able than the larger baleen whales to use habitat with high pack ice densities. The proportion of the population found within the pack ice is not well known but has been estimated at 10-50% in Area IV (southeast Indian Ocean sector) in summer (Kelly et al. 2014).
Antarctic Minke Whales reach sexual maturity at about 7-8 years of age and the generation time is estimated to be 22 years (Taylor et al. 2007).
Continuing decline in area, extent and/or quality of habitat: Unknown
Generation Length (years): 22

Use and Trade:
Antarctic Minke Whales are hunted under special permits issued by the government of Japan for scientific purposes according to Article VIII of the International Convention for Regulation of Whaling. Products from whales taken under special permits are sold only on the Japanese domestic market. The only international trade (in the sense defined by CITES) involves Introduction from the Sea (CITES 2017).
Whaling of Antarctic Minke Whales has not been as intensive as for the larger baleen whales. Substantial catches, apart from some experimental catches in the late 1960s, have been made by pelagic expeditions only since 1971, following depletion of the larger baleen whales. Nearly 100,000 Minke Whales were taken by pelagic whaling expeditions in the Antarctic during 1972-87, in addition to over 14,000 taken from the Brazilian land station at Costinha during 1964–85 and over 1,100 off South Africa during 1968-75 (Allison 2017). Since 1987, pelagic whaling continued under special permit at a reduced level. Nearly 11,000 Minke Whales were taken under such permits during 1987-2014. Catches were suspended for the 2014/15 season following a ruling by the International Court of Justice but resumed from the 2015/16 season with an annual catch target of 333 whales (IWC 2017).
Sea ice cover in the Antarctic is predicted to decline by 50% in winter and 30% in summer (Cavanagh et al. 2017) and there is concern that this could negatively impact species such as Antarctic Minke Whales for which areas with sea ice constitute a substantial part of their habitat.
Antarctic Minke Whales were subject to IWC catch limits soon after exploitation started. Catch limits for commercial whaling became zero from 1986 with the coming into effect of the IWC moratorium on commercial whaling. The summer range of Antarctic Minke Whales is also nominally protected by the IWC Southern Ocean Sanctuary, adopted in 1994, which prohibits catches south of a boundary located mainly at 40°S. Neither the moratorium nor the sanctuary provision applies to takes of whales under Special Permits issued by IWC member governments. Such catches continued from 1987 until 2014 when the International Court of Justice ordered a stop to the permit programme on the grounds that it was not for purposes of scientific research (Clapham 2015). Catches resumed from the 2015/16 season under a new programme (IWC 2017).
Antarctic Minke Whales are listed on Appendix I of the Convention on International Trade in Endangered Species (CITES), but this does not apply to products landed in Japan because the party holds a reservation on this species under CITES. Japan also holds a reservation on the IWC Sanctuary provision and therefore is not bound by it. The species is listed in Appendix II of the Convention on the Conservation of Migratory Species of Wild Animals.
Citation: Cooke, J.G., Zerbini, A.N. & Taylor, B.L. 2018. Balaenoptera bonaerensis. The IUCN Red List of Threatened Species 2018: e.T2480A50350661. Downloaded on 19 July 2018.
Federal Institute for Geoscience & Natural Resources, Germany (BGR)
Project INDEX: Indian Ocean Exploration on polymetallic sulfides
For its exploration work in the Indian Ocean, BGR uses a deep-towed high-resolution MBESS called HOMESIDE. This BGR-developed tool acquires seafloor topography, seafloor backscatter and water-column data to depths of 6,000 m. While hunting for inactive sulphide mineralization on the seafloor at 3,000 to 4,500 m depth, sites with active hydrothermal exhalation are also discovered by chance.
Please click here to see the provided movie, which gives an impression of the differences in data quality and shows the possibilities of a deep-towed system. Note that the plumes mapped in the water column data are not reflections of a water-gas contact; they are detected because of slightly different densities (suspension and temperature) between the seawater and the exhalation. The data in the movie were collected with MBESS units from different manufacturers, in different formats and at different quality levels. BGR uses QPS Fledermaus, FMGeocoderToolbox, FMMidwater and, recently, Qimera to visualize and analyze the acquired data with great success.
Special thanks to Dr Ralf Freitag and Dr Kai Schumann, BGR for this content.
Do you have a QPS story to share? If so, please email us! We would love to highlight your work using our software as a News Item or Client Spotlight. | <urn:uuid:e744212a-d336-4e68-a38a-f378066f669d> | 2.75 | 327 | News (Org.) | Science & Tech. | 36.262608 | 95,540,773 |
Imagine a world where any surface could be coated with solar cells, converting sunlight and even the glow of light bulbs into small amounts of usable energy. This is the goal of a new startup called Ubiquitous Energy. The company hopes to develop affordable, transparent coatings and films that could harvest light energy when applied to windows or the screens of e-readers or tablet devices. One way to use the technology might be in electrochromic windows that turn from clear to dark when the sun is brightest.
The trick is the way the company’s photovoltaics take up light: They collect wavelengths in the ultraviolet and infrared portion of the spectrum but let visible light pass through. Traditional solar cells, in contrast, collect light in the ultraviolet and visible regions and therefore can’t be made completely transparent.
“It’s definitely an interesting approach if the cost of such cells can be low enough and the stability of the materials is sufficient,” says Zhenan Bao, a professor of chemical engineering at Stanford University, who is not affiliated with the startup. He adds that by collecting infrared and ultraviolet light, the technology is filtering the parts of the spectrum people generally want windows to keep out anyway.
Miles Barr, president and chief technology officer of Ubiquitous Energy, says the transparent solar cells are made of various organic layers, deposited one at a time on top of a glass or film. This process could easily be integrated into thin-film deposition systems found in industrial processing. Many modern windows, for instance, have some sort of coating for solar control or insulation; Barr envisions his company’s solar cells being manufactured and added similarly. Ubiquitous Energy, which was spun out of the lab of Vladimir Bulović, a professor of electrical engineering at MIT, hasn’t yet announced plans for products or pricing.
A paper published in Applied Physics Letters in 2011 described the company’s spectrally selective approach: Prototypes made of organic materials yielded slightly less than 2% efficiencies and a visible transparency of about 70%. (Building windows usually require transparencies from 55% to 90%, while mobile electronic displays require 80% to 90%.) Barr says his team is nudging both efficiency and transparency numbers upward.
While the company is still in the research and development phase, it is exploring various materials and designs for products. “We’re getting a catalog of device structures and ingredients for higher-efficiency devices that can power more power-hungry devices or offset energy for buildings,” says Barr. “Once you hit 10% efficiency, a lot of applications open up.” The company hopes to achieve efficiencies greater than 10% at “high visible transparency.”
There are other types of see-through solar cells, but many of them still harvest some light in the visible range and therefore don’t have the transparency potential of an approach that ignores visible light. These materials achieve semitransparency when they are sparsely deposited over a surface or when the photovoltaic devices are thinned to allow more visible light to pass through.
“Current photovoltaic technology extensively utilizes the ultraviolet-visible range, but not much in the infrared range,” says Shenqiang Ren, a professor of chemistry at the University of Kansas, who is unaffiliated with the company. “Under solar radiation, there is about 45% radiant energy from infrared [light].”
As Ubiquitous Energy seeks to improve the efficiency of its solar cells, Barr explains, it is looking at two standard ways to collect more light. The first is optimizing the design of its semiconductor materials. Its current materials include molecular dyes that have selective absorption peaks in the ultraviolet and near-infrared parts of the spectrum; Barr says the company is developing materials that also gather energy deeper into the infrared. The second involves nanoscale engineering and tweaking optical interference within the device to improve light absorption—tricks employed to improve the efficiency of nontransparent solar cells in the past. “There are a lot of knobs you can tune to boost performance,” he says.
Image courtesy of Flickr, DBduo Photography
This article was originally published at MIT Technology Review.
TUNICATA : PHLEBOBRANCHIA : Ascidiidae (SEA SQUIRTS)
Description: A tall solitary sea squirt usually found in clumps and attached by its base. The body is oval with a fluted oral siphon at the top and an upward-directed atrial siphon 1/3 of the way down the side of the body. The test is grey and semi-transparent, usually covered with lightly adhering detritus, filamentous algae, etc. There is a series of lighter marks around the edge of each siphon, and the large oral tentacles can easily be seen inside the oral siphon in expanded animals. Typical size 50-100mm.
Habitat: Usually in shallow sheltered sites, harbours, sea loughs, etc., attached to shells or pebbles on mud, or on silty rock if this is present. Often abundant.
Distribution: All around the British Isles and from Norway to the Mediterranean. Scarce in the North Sea.
Similar Species: Ciona intestinalis always has yellow marks around the siphons, and is different in shape and consistency. Ascidia spp. have firmer tests. Ascidiella scabra is usually smaller and more squat, with both siphons more or less on a level.
Key Identification Features:
Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record : World Register of Marine Species.
Picton, B.E. & Morrow, C.C. (2016). Ascidiella aspersa (O F Müller, 1776). [In] Encyclopedia of Marine Life of Britain and Ireland. http://www.habitas.org.uk/marinelife/species.asp?item=ZD1410 Accessed on 2018-07-23.
Copyright © National Museums of Northern Ireland, 2002-2015
The Euclidean Algorithm and Irrational Numbers
April 26, 2010
Bailey Hall 207
Refreshments will be served at 4:15 in Bailey 204
The Euclidean Algorithm is a procedure for determining the greatest common divisor of two positive integers. Irrational numbers are real numbers that cannot be expressed as the ratio of two integers. These two ideas certainly do not seem to be related. We shall explore a rather surprising historical connection between these ideas. This exploration will include a quick tour of ancient Greek mathematics.
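As a small illustration of the first idea (this sketch is illustrative and not part of the talk abstract), Euclid's algorithm fits in a few lines of Python. The sequence of quotients it produces along the way is the continued-fraction expansion of the ratio, which is finite exactly when the ratio is rational; that observation is the thread behind the historical connection the talk explores.

```python
def euclid_gcd(a: int, b: int) -> int:
    """Greatest common divisor via Euclid's algorithm (repeated remainders)."""
    while b:
        a, b = b, a % b
    return a

def continued_fraction(p: int, q: int) -> list[int]:
    """Quotients produced by the algorithm applied to the ratio p/q.

    For any ratio of two integers this list is finite, because the
    remainders strictly decrease.  Applied to incommensurable
    (irrational) magnitudes, the analogous geometric process never
    terminates.
    """
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

print(euclid_gcd(252, 105))        # -> 21
print(continued_fraction(45, 16))  # -> [2, 1, 4, 3]
```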
London: A chance discovery by a team of researchers including physicists from the Tata Institute of Fundamental Research in Mumbai has provided experimental evidence that stars may generate sound.
The study of fluids in motion -- now known as hydrodynamics -- goes back to the Egyptians, so it is not often that new discoveries are made. However when examining the interaction of an ultra-intense laser with a plasma target, the team observed something unexpected.
John Pasley from University of York realised that in the trillionth of a second after the laser strikes, plasma flowed rapidly from areas of high density to more stagnant regions of low density, in such a way that it created something like a traffic jam.
Plasma piled up at the interface between the high and low density regions, generating a series of pressure pulses: a sound wave. However, the sound generated was at such a high frequency that it would have left even bats and dolphins struggling!
With a frequency of nearly a trillion hertz, the sound generated was not only unexpected, but was also at close to the highest frequency possible in such a material -- six million times higher than that which can be heard by any mammal!
"One of the few locations in nature where we believe this effect would occur is at the surface of stars. When they are accumulating new material, stars could generate sound in a very similar manner to that which we observed in the laboratory," explained Pasley.
So the stars might be singing -- but, since sound cannot propagate through the vacuum of space, no one can hear them.
The technique used to observe the sound waves in the lab works very much like a police speed camera.
It allows the scientists to accurately measure how fluid is moving at the point that is struck by the laser on timescales of less than a trillionth of a second.
"It was initially hard to determine the origin of the acoustic signals, but our model produced results that compared favourably with the wavelength shifts observed in the experiment," concluded Alex Robinson from the Plasma Physics Group at York. | <urn:uuid:c2acc0a1-43d8-465b-9f3b-778b6b14ddac> | 3.578125 | 413 | News Article | Science & Tech. | 33.986988 | 95,540,810 |
Protecting Earth From Killer Asteroids: NEOWISE Detects 114 Near Earth Objects, 10 Potentially Hazardous
The danger of potentially hazardous near-Earth objects is well known and NASA reactivated the WISE telescope in September 2013 to help in the tracking of known objects and the detection of those heretofore unknown potentially hazardous asteroids — commonly referred to by the media as “killer asteroids” — that have gone undetected. This week NASA announced that the repurposed telescope, dubbed NEOWISE, had found 114 previously unknown near-Earth objects, and nearly a hundred of those have been discovered in just the last year.
Space described the large number of detected near-Earth objects — and the data gathered on nearly 600 more asteroids, comets and other space objects already catalogued — as a “treasure trove,” noting that the NEOWISE (Near-Earth Object Wide-field Survey Explorer) mission has only been in operation for three-and-a-half years. Of the 114 unknown near-Earth objects NEOWISE has found, 97 of them have been discovered in just the last 12 months.
“NEOWISE is not only discovering previously uncharted asteroids and comets, but it is [also] providing excellent data on many of those already in our catalog,” NEOWISE principal investigator Amy Mainzer of NASA’s Jet Propulsion Laboratory (which oversees the mission) in Pasadena said in a statement. “It is also proving to be an invaluable tool in the refining and perfecting of techniques for near-Earth object discovery and characterization by a space-based infrared observatory.”
Near-Earth Objects (NEOs), as defined by NASA, are asteroids and comets that have been guided “by the gravitational attraction of the planets in our solar system into orbits that allow them to enter Earth’s neighborhood.”
Potentially Hazardous Asteroids (PHAs) are objects that could cause significant regional damage on impact if they were to proceed unabated to Earth. They are at least 140 meters (359 feet) in diameter. Congress made finding large asteroids a priority in 2005 when it passed the George E. Brown, Jr. Near-Earth Object Survey Act, which gave authorization to NASA to “establish a program to detect, track, catalogue, and characterize the physical characteristics of near-Earth asteroids and comets equal to or greater than 100 meters in diameter in order to assess the threat of such near-Earth objects in striking the Earth.”
The law charged NASA to carry out its mission in order to “provide warning and mitigation of the potential hazard” presented by the PHAs.
Thus far, according to NASA statistics, it is estimated that over 90 percent of the near-Earth objects larger than one kilometer have been discovered. The NEO Program is now engaged in finding what they believe is 90 percent of the NEO population — those objects larger than 140 meters.
Currently, most of the near-Earth objects detected are picked up by the NEO surveys Catalina Sky Survey in Tucson, Arizona, and the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS) in Hawaii.
In hard numbers, catalogued NEOs hit the 15,000 plateau in October, according to a NASA statement issued at the time.
Using NEOWISE data, NASA estimates that there are between 3,200 to 4,700 PHAs. Of them, only about 20 to 30 percent have been discovered.
“While no known NEO currently poses a risk of impact with Earth over the next 100 years,” said NASA Planetary Defense Officer Lindley Johnson, “we’ve found mostly the larger asteroids, and we have a lot more of the smaller but still potentially hazardous ones to find.”
The operative term in the statement, of course, is “known.” But regardless if the potentially hazardous space rock is known or unknown, there is no planetary defense system in place to defend the Earth from a potential strike by one.
Scientists have been attempting to issue a call to arms for years, all to no avail, the danger lacking in apparent urgency. This past December at a conference in San Francisco, NASA scientist Joseph Nuth told the gathering, according to The Inquisitr, that Earth was overdue for a “dinosaur killer” asteroid and, if one were to be discovered that would arrive inside a couple years, there was “not a hell of a lot we can do about it at the moment.”
[Featured Image by muratart/Shutterstock] | <urn:uuid:060c9e74-555b-4177-8937-b19c50edfddd> | 3.296875 | 948 | News Article | Science & Tech. | 33.204766 | 95,540,824 |
Using population data, plot-scale vegetation analyses and satellite imagery, the ecologists from the Australian Antarctic Division (AAD), the University of Tasmania, Blatant Fabrications Pty Ltd and Stellenbosch University found that after cats were eradicated from Macquarie in 2000, the island's rabbit population increased so much that its vegetation has been devastated.
According to the study's lead author, Dr Dana Bergstrom of the Australian Antarctic Division: “Satellite images show substantial island-wide rabbit-induced vegetation change. By 2007, impacts on some protected valleys and slopes had become acute. We estimate that nearly 40% of the whole island area had changed, with almost 20% having moderate to severe change.”
Rabbits were introduced to Macquarie Island in 1878 by sealing gangs. After reaching large numbers, the rabbits became the main prey of cats, which had been introduced 60 years earlier. Because the rabbits were causing catastrophic damage to the island's vegetation, Myxomatosis and the European rabbit flea (which spreads the Myxoma virus) were introduced in 1968. As a result, rabbit numbers fell from a peak of 130,000 in 1978 to less than 20,000 in the 1980s and vegetation recovered. However, with fewer rabbits as food, the cats began to eat the island's native burrowing birds, so a cat eradication programme began in 1985. Since the last cat was killed in 2000, Myxomatosis failed to keep rabbit numbers in check; their numbers bounced back and in little over six years rabbits substantially altered large areas of the island.
According to Bergstrom: “Increased rabbit herbivory has caused substantial damage at both local and landscape scales including changes from complex vegetation communities, to short, grazed lawns or bare ground.”
Invasive species can cause large-scale changes to ecosystems, including species extinctions and – in extreme cases – ecosystem “meltdown”. As a result, control or eradication of invasive alien species is widely undertaken. However, important lessons must be learned from events on Macquarie Island, say the authors.
“Our study shows that between 2000 and 2007 there has been widespread ecosystem devastation and decades of conservation effort compromised. The lessons for conservation agencies globally is that interventions should be comprehensive, and include risk assessments to explicitly consider and plan for indirect effects, or face substantial subsequent costs. On Macquarie Island, this cost will be around A$24 million,” says Bergstrom.
The changes documented in this study are a rare example of so-called “trophic cascades” - the knock-on effects of changes in one species' abundance across several links in the food web. “This study is one of only a handful which demonstrate that theoretically plausible trophic cascades associated with invasive species removal not only do take place, but can also result in rapid and detrimental changes to ecosystems, so negating the direct benefits of the removal of the target species,” Bergstrom says.
Macquarie Island (34 km long x 5 km wide) is an oceanic island in the Southern Ocean, 1,500 km south-east of Tasmania and approximately halfway between Australia and the Antarctic continent. Low-lying, with a cool, maritime climate, it is covered with tundra-like vegetation. It was inscribed as a World Heritage Site in 1997 because of its geological significance – it is the only place on Earth where rocks from the Earth’s mantle (6 km below the ocean floor) are being actively exposed above sea-level.
Becky Allen | alfa
Techniques of Integration
The purpose of this chapter is to teach you certain basic tricks to find indefinite integrals. It is of course easier to look up integral tables, but you should have a minimum of training in standard techniques.
Keywords: Inverse Function, Fourier Coefficient, Partial Fraction, Hyperbolic Sine, Root Sign
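As an illustrative worked example of the kind of trick the chapter teaches (this example is not an excerpt from the chapter itself), the partial-fraction technique named in the keywords handles integrals such as:

```latex
\int \frac{dx}{x^{2}-1}
  = \frac{1}{2}\int \left( \frac{1}{x-1} - \frac{1}{x+1} \right) dx
  = \frac{1}{2}\,\ln\left|\frac{x-1}{x+1}\right| + C
```

Here the integrand is split into simpler fractions whose antiderivatives are known, which is the general pattern of the method.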
Impaired function of a receptor that regulates release of a mood elevating hormone in the brain may be responsible for causing depression, anxiety and cardiovascular disorders, according to a Yale study in Pharmacogenetics and Genomics.
The genetic variant that causes this malfunction is nearly 15 times as prevalent in African-Americans as Caucasians and might explain why African-Americans have a higher rate of congestive heart failure, according to the first author, Alexander Neumeister, M.D., associate professor in the Department of Psychiatry at Yale School of Medicine.
In his study of 29 healthy African-Americans, 11 had the genetic variant, nine were carriers of the genetic variant, and nine were not carriers. Those who tested positive for the genetic variant had elevated norepinephrine levels, heart rate and blood pressure, suggesting impaired receptor function. The effects were more pronounced after the subjects were given a medication that blocked the action of the receptor, causing a sustained release of norepinephrine.
Jacqueline Weaver | EurekAlert!
General Chemistry/Print version
This is the print version of General Chemistry.
A Free Online Textbook
A three-dimensional representation of an atomic 4f orbital.
About General Chemistry
General Chemistry is an introduction to the basic concepts of chemistry, including atomic structure and bonding, chemical reactions, and solutions. Other topics covered include gases, thermodynamics, kinetics and equilibrium, redox, and chemistry of the elements.
It is assumed that the reader has basic scientific understanding. Otherwise, minimal knowledge of chemistry is needed prior to reading this book.
Beyond General Chemistry
- Organic Chemistry - Chemistry studies focusing on the carbon atom and compounds.
- Inorganic Chemistry - Chemistry studies focusing on salts, metals, and other compounds not based on carbon.
- Biochemistry - Chemistry studies of or relating to living organisms.
This is a wiki textbook. Anyone from around the world can read it, as well as write it! All of the content in the book is covered by the GNU Free Documentation License, which means it is guaranteed to remain free and open.
Authors and Significant Contributors
Chemistry is Everywhere
The modern human experience places a large emphasis upon the material world. From the day of our birth to the day we die, we are frequently preoccupied with the world around us. Whether struggling to feed ourselves, occupying ourselves with modern inventions, interacting with other people or animals, or simply meditating on the air we breathe, our attention is focused on different aspects of the material world. In fact only a handful of disciplines—certain subsets of religion, philosophy, and abstract math—can be considered completely unrelated to the material world. Everything else is somehow related to chemistry, the scientific discipline which studies the properties, composition, and transformation of matter.
Branches of Chemistry
Chemistry itself has a number of branches:
- Analytical chemistry seeks to determine the composition of substances.
- Biochemistry is the study of chemicals found in living things (such as DNA and proteins).
- Inorganic chemistry studies substances that do not contain carbon.
- Organic chemistry studies carbon-based substances. Carbon, as described in more detail in this book, has unique properties that allow it to make complex chemicals, including those of living organisms. An entire field of chemistry is devoted to substances with this element.
- Physical chemistry is the study of the physical properties of chemicals, which are characteristics that can be measured without changing the composition of the substance.
Chemistry as a discipline is based on a number of other fields. Because it is a measurement-based science, math plays an important role in its study and usage. A proficiency in high-school level algebra should be all that is needed in this text, and can be obtained from a number of sources. Chemistry itself is determined by the rules and principles of physics. Basic principles from physics may be introduced in this text when necessary.
Why Study Chemistry?
There are many reasons to study chemistry. It is one pillar of the natural sciences necessary for detailed studies in the physical sciences or engineering. The principles of biology and psychology are rooted in the biochemistry of the animal world, in ways that are only now beginning to be understood. Modern medicine is firmly rooted in the chemical nature of the human body. Even students without long-term aspirations in science find beauty in the infinite possibilities that originate from the small set of rules found in chemistry.
Chemistry has the power to explain everything in this world, from the ordinary to the bizarre. Why does iron rust? What makes propane such an efficient, clean burning fuel? How can soot and diamond be so different in appearance, yet so similar chemically? Chemistry has the answer to these questions, and so many more. Understanding chemistry is the key to understanding the world as we know it.
This Book: General Chemistry
An introduction to the chemical world is set forth in this text. The units of study are organized as follows.
- Properties of Matter: An explanation of the most fundamental concept in chemistry: matter.
- Atomic Structure: While technically in the domain of physics, atoms determine the behavior of matter, making them a necessary starting point for any discussion of chemistry.
- Compounds and Bonding: Chemical bonding is introduced, which explains how less than one hundred naturally-occurring elements can combine to form all the different compounds that fill our world.
- Chemical Reactions: Things get interesting once chemical reactions start making and breaking bonds.
- Aqueous Solutions: Substances dissolved in water have special properties. This is when acids and bases are introduced.
- Phases of Matter: A detailed look at the organization of substances, with particular focus on gases.
- Chemical Equilibria: Chemical reactions don't go on forever. Equilibrium is the balance that reactions seek to achieve.
- Chemical Kinetics: Kinetics explain why it takes years for an iron nail to rust, but only a split second for a hydrogen-filled hot air balloon to explode.
- Thermodynamics: Two things decide which reactions can occur and which cannot: heat and chaos, or enthalpy and entropy, as they are called in thermodynamics.
- Chemistries of Various Elements: An exploration of the elements that make up all substances. Includes an introduction to nuclear chemistry and carbon, the essence of organic chemistry.
Basic Properties of Matter
What is Matter?
Matter is defined as anything that occupies space and has mass.
Mass is a measure of an object's inertia. It is proportional to weight: the more mass an object has, the more weight it has. However, mass is not the same as weight. Weight is a force created by the action of gravity on a substance, while mass is a measure of an object's resistance to a change in motion. Mass is measured by comparing the substance of interest to a standard kilogram called the International Prototype Kilogram (IPK). The IPK is a metal cylinder whose height and diameter both equal 39.17 millimeters, made of an alloy of 90% platinum and 10% iridium. Thus the standard kilogram is defined, and all other masses are a comparison to this kilogram. When atomic masses are measured in a mass spectrometer, a different internal standard is used. The take-home lesson regarding mass is that it is a relative quantity, judged by comparison to a standard.
Volume is a measure of the amount of space occupied by an object. Volume can be measured directly with equipment designed using graduations marks or indirectly using length measurements depending on the state (gas, liquid, or solid) of the material. A graduated cylinder, for example, is a tube that can hold a liquid which is marked and labeled at regular intervals, usually every 1 or 10 mL. Once a liquid is placed in the cylinder, one can read the graduation marks and record the volume measurement. Since volume changes with temperature, graduated equipment has limits to the precision with which one can read the measurement. Solid objects that have regular shape can have their volume calculated by measuring their dimensions. In the case of a box, its volume equals length times width times height.
It is particularly interesting to note that measuring differs from calculating a specific value. While mass and volume can both be determined directly, relative either to a defined standard or to graduation marks on glassware, computing other values from measurements is not considered measuring. For example, once you have measured the mass and volume of a liquid directly, you can calculate its density by dividing the mass by the volume. This is an indirect determination of density. Interestingly enough, density can also be measured directly if an experiment is set up that compares it to a standard.
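To make the arithmetic concrete, here is a minimal Python sketch of the indirect density determination described above; the sample mass and volume values are hypothetical:

```python
# Density is calculated (not measured directly) from two direct
# measurements: mass and volume. The sample values below are hypothetical.

def density(mass_g, volume_mL):
    """Return density in g/mL from a measured mass and volume."""
    return mass_g / volume_mL

# A liquid sample: 25.0 g measured on a balance, 20.0 mL read from
# a graduated cylinder.
rho = density(25.0, 20.0)
print(rho)  # 1.25 g/mL
```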
Another quantity of matter directly or indirectly determined is the amount of substance. This can either represent a counted quantity of objects (e.g. three mice or a dozen bagels) or the indirectly determined number of particles of a substance being dealt with, such as how many atoms are contained in a sample of a pure substance. The latter quantity is described in terms of moles. One mole is specifically defined as the number of particles in 12 grams of the isotope carbon-12. This number is 6.02214078(18) × 10²³ particles.
- Mass: the kilogram (kg). Also, the gram (g) and milligram (mg).
- 1 kg = 1000 g
- 1000 mg = 1 g
- Volume: the liter (L) and milliliter (mL). Also, cubic centimeters (cc) and cubic meters (m³).
- 1 cc = 1 mL
- 1000 mL = 1 L
- 1000 L = 1 m³
- Amount: the mole (mol).
- 1 mol = 6.02214078(18) × 10²³ particles
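The conversion factors above lend themselves to a short Python sketch; the sample quantities being converted are hypothetical:

```python
# Conversion factors from the list above.
G_PER_KG = 1000           # 1 kg = 1000 g
MG_PER_G = 1000           # 1000 mg = 1 g
ML_PER_L = 1000           # 1000 mL = 1 L
L_PER_M3 = 1000           # 1000 L = 1 m^3
AVOGADRO = 6.02214078e23  # particles per mole

# Example conversions (sample values chosen for illustration):
mass_g = 2.5 * G_PER_KG      # 2.5 kg -> 2500 g
volume_L = 350 / ML_PER_L    # 350 mL -> 0.35 L
particles = 0.5 * AVOGADRO   # half a mole of particles
print(mass_g, volume_L, particles)
```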
Atoms, Elements, and Compounds
The fundamental building block of matter is the atom.
Any atom is composed of a little nucleus surrounded by a "cloud" of electrons. In the nucleus there are protons and neutrons.
However, the term "atom" just refers to a building block of matter; it doesn't specify the identity of the atom. It could be an atom of carbon, or an atom of hydrogen, or any other kind of atom.
This is where the term "element" comes into play. When an atom is defined by the number of protons contained in its nucleus, chemists refer to it as an element. All elements have a very specific identity that makes them unique from other elements. For example, an atom with 6 protons in its nucleus is known as the element carbon. When speaking of the element fluorine, chemists mean an atom that contains 9 protons in its nucleus.
- Atom: A fundamental building block of matter composed of protons, neutrons, and electrons.
- Element: A uniquely identifiable atom recognized by the number of protons in the nucleus.
Although an element is strictly defined as a uniquely identifiable atom, when we speak of, say, "five elements", we do not usually mean five individual atoms of the same type (having the same number of protons in their nucleus). We mean five types of atoms. A sample might contain 10, or 100, or any number of atoms, each belonging to one of those five types. In this sense it is more precise to think of an element as a type of atom. To refer to five individual atoms with six protons each, one would instead say "five carbon atoms" or "five atoms of carbon".
It is important to note that if the number of protons in the nucleus of an atom changes, so does the identity of that element. If we could remove a proton from nitrogen (7 protons), it is no longer nitrogen. We would, in fact, have to identify the atom as carbon (6 protons). Remember, elements are unique and are always defined by the number of protons in the nucleus. The Periodic Table of the Elements shows all known elements organized by the number of protons they have.
An element is composed of the same type of atom; elemental carbon contains any number of atoms, all having 6 protons in their nuclei. In contrast, compounds are composed of different types of atoms. More precisely, a compound is a chemical substance that consists of two or more elements. A carbon compound contains some carbon atoms (with 6 protons each) and some other atoms with different numbers of protons.
Compounds have properties different from the elements that created them. Water, for example, is composed of hydrogen and oxygen. Hydrogen is an explosive gas and oxygen is a gas that fuels fire. Water has completely different properties, being a liquid that is used to extinguish fires.
The smallest representative for a compound (which means it retains characteristics of the compound) is called a molecule. Molecules are composed of atoms that have "bonded" together. As an example, the formula of a water molecule is "H2O": two hydrogen atoms and one oxygen atom.
Properties of Matter
Properties of matter can be divided in two ways: extensive/intensive, or physical/chemical.
States of Matter
One important physical property is the state of matter. Three are common in everyday life: solid, liquid, and gas. The fourth, plasma, is observed in special conditions such as the ones found in the sun and fluorescent lamps. Substances can exist in any of the states. Water is a compound that can be liquid, solid (ice), or gas (steam).
Solids have a definite shape and a definite volume. Most everyday objects are solids: rocks, chairs, ice, and anything with a specific shape and size. The molecules in a solid are close together and connected by intermolecular bonds. Solids can be amorphous, meaning that they have no particular structure, or they can be arranged into crystalline structures or networks. For instance, soot, graphite, and diamond are all made of elemental carbon, and they are all solids. What makes them so different? Soot is amorphous, so the atoms are randomly stuck together. Graphite forms parallel layers that can slip past each other. Diamond, however, forms a crystal structure that makes it very strong.
Liquids have a definite volume, but they do not have a definite shape. Instead, they take the shape of their container to the extent they are indeed "contained" by something such as beaker or a cupped hand or even a puddle. If not "contained" by a formal or informal vessel, the shape is determined by other internal (e.g. intermolecular) and external (e.g. gravity, wind, inertial) forces. The molecules are close, but not as close as a solid. The intermolecular bonds are weak, so the molecules are free to slip past each other, flowing smoothly. A property of liquids is viscosity, the measure of "thickness" when flowing. Water is not nearly as viscous as molasses, for example.
Gases have no definite volume and no definite shape. They expand to fill the size and shape of their container. The oxygen that we breathe and steam from a pot are both examples of gases. The molecules are very far apart in a gas, and there are minimal intermolecular forces. Each molecule is free to move in any direction. Gases undergo effusion and diffusion. Effusion occurs when a gas seeps through a small hole, and diffusion occurs when a gas spreads out across a room. If someone leaves a bottle of ammonia on a desk, and there is a hole in it, eventually the entire room will reek of ammonia gas, due to effusion and diffusion. These properties of gases arise because the molecules are not bonded to each other.
|Technically, a gas is called a vapor if the substance is normally a solid or liquid at standard temperature and pressure (STP). STP is 0 °C and 1.00 atm of pressure. This is why we refer to water vapor rather than water gas.|
- In gases, intermolecular forces are very weak, so molecules move randomly, colliding with one another and with the walls of their container, thereby exerting pressure on the container. When a gas gives out heat, its internal molecular energy decreases; eventually the point is reached at which the gas liquefies.
Changes in Matter
There are two types of change in matter: physical change and chemical change. As the names suggest, physical changes affect physical properties, and chemical changes affect chemical properties.
Chemical changes are also known as chemical reactions. The "ingredients" of a reaction are the reactants, and the end results are called the "products". The change from reactants to products can be signified by an arrow. Chemical changes are mostly irreversible.
- A Chemical Reaction
Reactants → Products
Note that the numbers of reactants and products don't necessarily have to be the same. However, the number of each type of atom must remain constant. This is called the Law of Conservation of Matter. It states that matter can never be created or destroyed, only changed and rearranged. If a chemical reaction begins with 17 moles of carbon atoms, it must end with 17 moles of carbon atoms. They may be bonded into different molecules, or in a different state of matter, but they cannot disappear.
When changes occur, energy is often transformed. However, like atoms, energy cannot disappear. This is called the Law of Conservation of Energy. A simple example would be putting ice cubes into a soft drink. The ice cubes get warmer as the drink gets colder, because energy cannot be created or destroyed, only transferred. Note that energy can be "released" or "stored" by making and breaking bonds. When a plant converts the energy from sunlight into food, that energy is stored in the chemical bonds within the sugar molecules.
Chemical change or Physical change?
Physical changes do not cause a substance to become a fundamentally different substance. Chemical changes, on the other hand, cause a substance to change into something entirely new. Chemical changes are typically irreversible, but that is not always the case. It is easier to understand the difference between physical and chemical changes with examples.
|State changes are physical.||Phase changes are when you melt, freeze, boil, condense, sublimate, or deposit a substance. They do not change the nature of the substance unless a chemical change occurs along with the physical change.|
|Cutting, tearing, shattering, and grinding are physical.||These may be irreversible, but the result is still composed of the same molecules. When you cut your hair, that is a physical change, even though you can't put the hair back on your head again.|
|Mixing together substances is physical.||For example, you could mix salt and pepper, dissolve salt in water, or mix molten metals together to produce an alloy.|
|Gas bubbles forming is chemical.||Not to be confused with bubbles from boiling, which would be physical (a phase change). Gas bubbles indicate that a chemical reaction has occurred.|
|Precipitates forming is chemical.||When dissolved substances are mixed, and a cloudy precipitate appears, there has been a chemical change.|
|Rotting, burning, cooking, and rusting (for example) are chemical.||The resulting substances are entirely new chemical compounds. For instance, wood becomes ash and heat; iron becomes rust; sugar ferments into alcohol.|
|Changes of color or release of odors (i.e. release of a gas) might be chemical.||As an example, the element chromium shows different colors when it is in different compounds, but a single chromium compound will not change color on its own without some sort of reaction.|
|Release/absorption of energy (heat, light, sound) is generally not easily categorized.||Hot/cold packs involve dissolving a salt in water to change its temperature (more on that in later chapters); popping popcorn is mostly physical (but not completely).|
Classification of Matter
Matter can be classified by its state.
- Solids have a set volume and shape. The intermolecular forces of attraction in a solid are very strong.
- Liquids have a set volume but change shape. The intermolecular forces of attraction in a liquid are weaker than in a solid.
- Gases have neither definite volume nor shape. The intermolecular forces of attraction in a gas are negligible.
- Plasmas are usually a gaseous state of matter in which part or all of the atoms or molecules are dissociated to form ions.
Matter can also be classified by its chemical composition.
- An element is a pure substance made up of atoms with the same number of protons. As of 2011, 118 elements have been observed, 92 of which occur naturally. Carbon (C), Oxygen (O), Hydrogen (H) are examples of elements. The periodic table is a tabular representation of the known elements.
- A compound consists of two or more chemical elements that are chemically bonded together. Water (H2O) and table sugar (C12H22O11) are examples of chemical compounds. The ratio of the elements in a compound is always the same. For example in water, the number of H atoms is always twice the number of O atoms.
- A mixture consists of two or more substances (element or compound) mixed together without any chemical bond. Salad is a good example. A mixture can be separated into its individual components by mechanical means.
Types of Mixtures
There are many kinds of mixtures. They are classified by the behavior of the phases, or substances that have been mixed.
A homogeneous mixture is uniform, which means that any given sample of the mixture will have the same composition. Air, sea water, and carbonation dissolved in soda are all examples of homogeneous mixtures, or solutions. No matter what sample you take from the mixture, it will always be composed of the same combination of phases. Chocolate chip ice cream is not homogeneous—one spoonful taken might have two chips, and then another spoonful might have several chips.
An example for a homogeneous mixture is a solution. The substance that gets dissolved is the solute. The substance that does the dissolving is the solvent. Together they make a solution. If you stir a spoonful of salt into a glass of water, salt is the solute that gets dissolved. Water is the solvent. The salty water is now a solution, or homogeneous mixture, of salt and water.
When different gases are mixed, they always form a solution. The gas molecules quickly spread out into a uniform composition.
A heterogeneous mixture is not uniform. Different samples may have different compositions, like the example of chocolate chip ice cream. Concrete, soil, blood, and salad are all examples of heterogeneous mixtures.
When sand gets kicked up in a pond, it clouds the water. Because sand is denser than water, it eventually sinks to the bottom and settles out, and is no longer mixed into the water. This is an example of a suspension. Suspensions are heterogeneous mixtures that will eventually settle. They are usually, but not necessarily, composed of phases in different states of matter. Italian salad dressing has three phases: the water, the oil, and the small pieces of seasoning. The seasonings are solids that will sink to the bottom, and the oil and water are liquids that will separate.
What exactly is toothpaste? We can't exactly classify it by its state of matter. It has a definite shape and volume, like a solid. But then you squeeze the tube, and it flows almost like a liquid. And then there's jelly, shaving cream, smoke, dough, and Silly Putty...
These are examples of colloids. A colloid is a heterogeneous mixture of two substances of different phases. Shaving cream and other foams are gas dispersed in liquid. Jello, toothpaste, and other gels are liquid dispersed in solid. Dough is a solid dispersed in a liquid. Smoke is a solid dispersed in a gas.
|Colloids differ from suspensions in that they will not settle.|
Colloids consist of two phases: a dispersed phase inside of a continuous medium.
The Tyndall Effect
The Tyndall effect distinguishes colloids from solutions. In a solution, the particles are so fine that they will not scatter light. This is not true for a colloid. If you shine light through a solution, the beam of light will not be visible. It will be visible in a colloid. For instance, if you have ever played with a laser pointer, you have seen the Tyndall effect. You cannot see the laser beam in air (a solution), but if you shine it into a mist, the beam is visible. Clouds look white (or gray), as opposed to blue, because of the Tyndall effect - the light is scattered by the small droplets of suspended water.
Methods for Separating Mixtures
Because there is no chemical bonding in a mixture, the phases can be separated by mechanical means. In a heterogeneous mixture like a salad, the pieces can easily be picked out and separated. It is as simple as sifting through the salad and picking out all the tomatoes and radishes, for example. However, many mixtures contain particles that are too small, liquids, or too many particles to be separated manually. We must use more sophisticated methods to separate the mixture.
Imagine you have a sandbox, but there are bits of broken glass in it. All you would need is some sort of filter. The sand particles are much smaller than the glass chips, so a mesh filter would let sand pass but stop the glass. Filtration is used in all sorts of purification methods. Some filters, like dialysis tubing, are such fine filters that water can pass, but dissolved glucose cannot.
|Filtration works with particles that are significantly different in size, like sand and rock, or water and glucose.|
If you were given a glass of saltwater, could you drink it? Sure, if you distill it first. Distillation is the boiling of a mixture to separate its phases. Salt is a solid at room temperature, and water is a liquid. Water will boil far before salt even begins to melt. So separating the two is as simple as boiling the water until all that remains is the solid salt. If desired, the water vapor can be collected, condensed, and used as a source of pure water.
Distillation can also be used if two liquids are mixed but have different boiling points. Separation of several liquids with similar boiling points can be achieved using fractionation.
Centrifugation and Sedimentation
These processes rely on differences in density. In a medical lab, blood often goes into a centrifuge. A centrifuge is a machine that spins a sample at fairly high rates of speed. Red blood cells are much denser than the watery substance (called plasma, but it's not the plasma state of matter) that makes up blood. As a result of the spinning, the denser phases move outward and the less dense phases move inward, towards the axis of rotation. Then, the red blood cells can be separated from the plasma.
Sedimentation is similar, but it happens when particles of different densities have settled within a liquid. If a jar of muddy water is left to settle, the heaviest particles sink to the bottom first. The lightest particles sink last and form a layer on top the heavier particles. You may have seen this effect in a bottle of salad dressing. The seasonings sink to the bottom, the water forms a lower layer, and the oil forms an upper layer. The separate phases can be skimmed out. To return it to a mixture, simply shake it up to disturb the layers.
The differences in substances' properties can be exploited to allow separation. Consider these examples:
- A mixture of sand and iron filings can be separated by magnet.
- Salt and sand can be separated by solution (sand will not dissolve in water, salt will)
- Helium can be separated from a mixture with hydrogen by combustion: hydrogen is flammable, but helium is not. (This is a very dangerous operation, since hydrogen in the presence of oxygen is highly explosive.)
There are countless other ways to separate mixtures. For instance, gel electrophoresis is used to separate different sized pieces of DNA. They are placed into gel, and an electric current is applied. The smaller pieces move faster and separate from the larger pieces.
Chromatography separates phases dissolved in liquid. If you want to see an example, take a strip of paper and draw a dot on it with a colored marker. Dip the strip into water, and wait a while. You should see the ink separate into different colors as they spread out from the dot.
Numbers Used to Describe Atoms
The Atomic number is the number of protons in the nucleus of an atom. This number determines the element type of the atom. For instance, all neon atoms have exactly ten protons. If an atom has ten protons, then it must be neon. If an atom is neon, then it must have ten protons.
The atomic number is sometimes denoted Z. Continuing with the example of neon, Z = 10.
The Neutron number is the number of neutrons in the nucleus of an atom. Remember that neutrons have no electric charge, so they do not affect the chemistry of an element. However, they do affect the nuclear properties of the element. For instance, Carbon-12 has six neutrons, and it is stable. Carbon-14 has eight neutrons, and it happens to be radioactive. Despite these differences, both forms of carbon behave the same way when forming chemical compounds.
The neutron number is sometimes denoted N.
The Mass number is the sum of protons and neutrons in an atom. It is denoted A. To find the mass number of an atom, remember that A = Z + N. The mass number of an atom is always an integer. Because the number of neutrons can vary among different atoms of the same element, there can be different mass numbers of a given element. Look back to the example of carbon. Carbon-14 has a mass number of 14, and Carbon-12 has a mass number of 12. Every carbon atom must have six protons, so Carbon-14 has eight neutrons and Carbon-12 has six neutrons.
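The relation A = Z + N can be turned into a tiny Python sketch, using the carbon isotopes from the text:

```python
# Mass number: A = Z + N. Given any two of the three, the third follows.

def neutron_count(mass_number, atomic_number):
    """N = A - Z."""
    return mass_number - atomic_number

Z_CARBON = 6  # every carbon atom has 6 protons
print(neutron_count(12, Z_CARBON))  # carbon-12 -> 6 neutrons
print(neutron_count(14, Z_CARBON))  # carbon-14 -> 8 neutrons
```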
|Elements with the same atomic number but different atomic masses are isotopes.|
Isotopes of the same element have nearly identical chemical properties (because they have the same number of protons and electrons). Their only difference is the number of neutrons, which changes their nuclear properties like radioactivity.
There is a convenient way of writing the numbers that describe atoms. It is easiest to learn by examples.
|Keep in mind that any of the three numbers written around the element symbol are optional, but they should be written if it is important to the given situation. The charge number is left off if the atom has zero charge (equal number of protons and electrons). The mass number and atomic number are usually left off.|
|This is how we write fluorine-19: ¹⁹₉F. The atomic number is written below and the mass number above, to the left of the element's symbol from the periodic table of the elements.|
|This example shows carbon-12, written ¹²C. Notice how the atomic number is missing. You know which element it is because of the C, so there is no need to write the number of protons. The atomic number is rarely written because the element symbol implies how many protons there are.|
|The last example, ²⁵₁₂Mg²⁺, shows both the atomic number and mass number, along with a charge. The charge is the difference between the number of protons and the number of electrons. You can read more about charge, protons, and electrons later on. From the example, you can see that this magnesium atom would have 12 protons, 13 neutrons, and 10 electrons. Its mass is 25 (12 p + 13 n) and its charge is +2 (12 p − 10 e).|
- Exercise for the reader!
Try writing the symbol for an atom with seven protons, seven neutrons, and eight electrons. You will need to look up its symbol on the periodic table.
The mass of an atom is measured in atomic mass units (amu). An atom's mass can be found by summing the number of protons and neutrons. By definition, 12 amu equals the atomic mass of carbon-12. Protons and neutrons have an approximate mass of 1 amu, and electrons have a negligible mass.
|There is a difference between an atom's mass number and an element's atomic mass. The mass number measures the number of protons and neutrons in the nucleus of a particular atom. The atomic mass measures the average mass of all atoms for an element. For example, a carbon atom might have a mass number of 12 or 14 (or something else), but carbon in general has a mass of 12.011 amu.|
Usually, a pure element is made up of a number of isotopes in specific ratios. Because of this, the measured atomic mass of carbon is not exactly 12: it is an average of the masses of all the isotopes, with the more common ones contributing more to the measured atomic mass. By convention, relative atomic masses are given without units.
Pretend that the element Wikibookium has two isotopes. The first has a mass number of 104, and the second has a mass number of 107. Suppose that 75% of the naturally occurring atoms are of the first isotope, and the rest are of the second. The average atomic mass is calculated as
0.75 × 104 + 0.25 × 107 = 104.75
In this case, a bunch of Wikibookium atoms would have an average mass of 104.75 amu, but each individual atom might have a mass number of 104 or 107. Keep in mind that all of the atoms would have the same number of protons. Their masses are different because of the number of neutrons.
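The same weighted-average calculation can be written as a short Python sketch, using the hypothetical Wikibookium isotopes from the text:

```python
# Average atomic mass: weight each isotope's mass number by its
# fractional natural abundance. "Wikibookium" is the hypothetical
# element from the text.

def average_atomic_mass(isotopes):
    """isotopes: list of (mass_number, fractional_abundance) pairs."""
    return sum(mass * abundance for mass, abundance in isotopes)

wikibookium = [(104, 0.75), (107, 0.25)]
print(average_atomic_mass(wikibookium))  # 104.75
```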
A mole is defined as the amount of a substance that contains as many particles as there are atoms in 12 g of carbon-12; that number of particles is known as Avogadro's number, which equals 6.022 × 10²³. Moles are not very confusing: if you have a dozen atoms, you have 12. If you have a mole of atoms, you have 6.022 × 10²³. Why is this ridiculously large number important? It can be used to convert between atomic mass units and grams. One mole of carbon-12 is exactly 12 grams, by definition. Similarly, one mole of any element has a mass in grams equal to that element's atomic mass. The atomic mass is thus the number of grams per mole of the element.
There are 128.2 g of rubidium (atomic mass = 85.47 amu). How many atoms are there?
(128.2 g) / (85.47 g/mol) = 1.5 mol
(1.5 mol) × (6.022 × 10²³ atoms/mol) = 9.03 × 10²³ atoms of rubidium
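The two-step conversion above (grams → moles → atoms) can be sketched as:

```python
# Converting a mass of rubidium to a number of atoms, following the
# worked example above. The numeric values are taken from the text.
AVOGADRO = 6.022e23   # particles per mole

mass_g = 128.2        # grams of rubidium
molar_mass = 85.47    # g/mol (atomic mass of Rb)

moles = mass_g / molar_mass
atoms = moles * AVOGADRO
print(round(moles, 2))  # 1.5
print(atoms)            # about 9.03e23
```

Dividing by the molar mass converts grams to moles; multiplying by Avogadro's number converts moles to individual atoms.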
Moles are also important because every 22.4 liters of gas contain 1 mole of gas molecules at standard temperature and pressure (STP, 0 °C and 1 atmosphere of pressure). Avogadro proposed the underlying principle, that equal volumes of gases at the same temperature and pressure contain equal numbers of molecules. (That's why it's his number.) A container filled with fluorine gas would have to be 22.4 L in volume to hold one mole of F2 molecules. Knowing this fact allows you to determine the number of moles of a gas from the volume of its container and, if you also weigh the gas, the mass of a single molecule. This holds true for every gas.
Why every single gas? Atoms and molecules are tiny. The volume of a gas is mostly empty space, so the molecules themselves occupy an insignificantly small volume. As you will eventually learn, this is why there is always one mole of gas molecules in every 22.4 liters at STP.
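The molar-volume fact above can be put to work numerically. A minimal sketch (the 11.2 L container volume is a hypothetical value chosen for illustration; the molar mass of F2 is 2 × 19.0 g/mol):

```python
# Using the molar volume of a gas at STP (22.4 L/mol) to find the mass
# of gas in a container. The container volume is a hypothetical example.
MOLAR_VOLUME_STP = 22.4   # L/mol
molar_mass_f2 = 38.0      # g/mol (2 x 19.0 for the two fluorine atoms)

volume_l = 11.2           # hypothetical container volume in liters
moles = volume_l / MOLAR_VOLUME_STP
mass_g = moles * molar_mass_f2
print(moles, mass_g)  # 0.5 19.0
```

Half a molar volume holds half a mole, regardless of which gas fills the container.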
History of Atomic Structure
Why Is The History Of The Atom So Important?
It is fundamental to the understanding of science that science is a process of trial and improvement, representing the best knowledge available at the time rather than an unerring oracle of truth. Nowhere is the development of an idea and its refinement through testing shown more clearly than in our understanding of atomic structure.
The Greek Theorists
The earliest known proponent of anything resembling modern atomic theory was the ancient Greek thinker Democritus. He proposed the existence of indivisible atoms as a response to the arguments of Parmenides, and the paradoxes of Zeno.
Parmenides argued against the possibility of movement, change, and plurality on the premise that something cannot come from nothing. Zeno attempted to prove Parmenides' point by a series of paradoxes based on difficulties with infinite divisibility.
In response to these ideas, Democritus posited the existence of indestructible atoms that exist in a void. Their indestructibility provided a retort to Zeno, and the void allowed him to account for plurality, change, and movement. It remained for him to account for the properties of atoms, and how they related to our experiences of objects in the world.
Democritus proposed that atoms possessed few actual properties, with size, shape, and mass being the most important. All other properties, he argued, could be explained in terms of the three primary properties. A smooth substance, for instance, might be composed of primarily smooth atoms, while a rough substance is composed of sharp ones. Solid substances might be composed of atoms with numerous hooks, by which they connect to each other, while the atoms of liquid substances possess far fewer points of connection.
Democritus proposed 5 points to his theory of atoms. These are:
- All matter is composed of atoms, which are bits of matter too small to be seen. These atoms CANNOT be further split into smaller portions.
- There is a void, which is empty space between atoms.
- Atoms are completely solid.
- Atoms are homogeneous, with no internal structure.
- Atoms are different in: their sizes, their shapes, and their weights.
Empedocles proposed that there were four elements, air, earth, water, and fire, and that everything else was a mixture of these. This belief was very popular in the Middle Ages and gave rise to the practice of alchemy. Alchemy was based on the belief that since everything was made of only four elements, one mixture could be transmuted into another by adjusting its proportions. For example, it was believed that lead could be made into gold.
Alchemy's problem was exposed by Antoine Lavoisier when he heated metallic tin in a sealed flask. A grayish ash appeared on the surface of the melting tin, which Lavoisier heated until no more ash formed. After the flask cooled, he inverted it and opened it underwater. He discovered the water rose one-fifth of the way into the glass, leading Lavoisier to conclude that air itself is a mixture, with one-fifth of it having combined with the tin, yet the other four-fifths did not, showing that air was not an element.
Lavoisier repeated the experiment again, substituting mercury for tin, and found that the same happened. Yet after heating gently, he found that the ash released the air, showing that the experiment could be reversed. He concluded that the ash was a compound of the metal and oxygen, which he proved by weighing the metal and the ash, and showing that their combined weight was greater than that of the original metal.
Lavoisier then stated that fire was not an element; combustion was instead a chemical reaction between a fuel and oxygen.
Modern atomic theory was born with Dalton when he published his theories in 1803. His theory consists of five important points, which are considered to be mostly true today: (from Wikipedia)
- Elements are composed of tiny particles called atoms.
- All atoms of a given element are identical.
- The atoms of a given element are different from those of any other element; the atoms of different elements can be distinguished from one another by their respective relative weights.
- Atoms of one element can combine with atoms of other elements to form chemical compounds; a given compound always has the same relative numbers of types of atoms.
- Atoms cannot be created, divided into smaller particles, or destroyed in a chemical process; a chemical reaction simply changes the way atoms are grouped together.
We now know that elements have different isotopes, which have slightly different weights. Also, nuclear reactions can divide atoms into smaller parts (but nuclear reactions aren't really considered chemical reactions). Otherwise, his theory still stands today.
In the late 1800s, the Russian scientist Dmitri Mendeleev was credited with creating one of the first organized periodic tables. Organizing the elements by atomic weight, he cataloged each of the 63 elements discovered at the time. Aside from atomic weight, he also arranged his table to group the elements according to their known properties.
While writing a series of textbooks, Mendeleev realized he was running out of space to treat each element individually. He began to regularly "linewrap" the elements onto the next line, creating what is now called the periodic table of the elements. Using his table, he predicted the existence of then-undiscovered elements, such as "eka-aluminum" and "eka-silicon" (gallium and germanium), according to the patterns he had found. His predictions were a success, proving his table to be exceptionally accurate. Later theories, those of the electrons around the atom, explain why elements in the same group have similar chemical properties. Chemists would later organize the elements by atomic number, not atomic weight, giving rise to the modern Periodic Table of Elements.
Discovery of the Electron
In 1897 the British physicist J.J. Thomson discovered the electron. Thomson conducted a number of experiments using a cathode ray tube, which is made by sealing two electrodes into a glass tube, connecting them to a voltage supply, and pumping out the air. When a high voltage of about 15,000 V is applied at low pressure, a beam of radiation is emitted from the negative electrode (the cathode) toward the positive electrode (the anode). These beams are called cathode rays. Three observations established their nature: (1) when the rays were directed into a gold-leaf electroscope, it acquired a negative charge; (2) a freely moving paddle wheel placed in the path of the rays was set spinning, showing that the rays carry momentum; (3) the rays were deflected by both magnetic and electric fields. Thomson found that cathode rays travel in straight lines unless they are bent by electric or magnetic fields, and that they bend away from a negatively charged plate. He concluded that the rays are made of negatively charged particles; today we call them electrons. Thomson found that he could produce cathode rays using electrodes of various materials, and concluded that electrons are found in all atoms and are over a thousand times lighter than protons. Finally, by applying a magnetic field of known strength to bend the beam and then adding an electric field to return the beam to its original position, Thomson determined the charge-to-mass ratio (e/m) of the electron to be about −1.7 × 10⁸ coulombs per gram.
The "Plum Pudding" Atomic Model
Soon after the discovery of the electron, Thomson began speculating on the nature of the atom. He suggested a "plum pudding" model. In this model the bits of "plum" were the electrons which were floating around in a "pudding" of positive charge to match that of the electrons and make an electrically neutral atom. A modern illustration of this idea would be a chocolate chip cookie, with the chips representing negatively charged electrons and the dough representing positive charge.
Ernest Rutherford is known for his famous gold foil experiment in 1911. Alpha particles, which are heavy and positively charged (actually, helium nuclei, but that's beside the point), were fired at a very thin layer of gold. Most of the alpha particles passed straight through, as expected. According to the plum pudding model all of the particles should have slowed as they passed through the "pudding", but none should have been deflected. Surprisingly, a few alpha particles were deflected back the way they came. He stated that it was "as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you."
The results of the experiment allowed Rutherford to conclude that the plum pudding model was wrong:
- Atoms have a nucleus, very small and dense, containing the positive charge and most of the atom's mass.
- The atom consists of mostly empty space.
- The electrons are attracted to the nucleus, but remain far outside it.
Bohr created his own model of the atom, improving on Rutherford's. Bohr used an equation developed by Rydberg that provided a mathematical relationship between the visible spectral lines of the hydrogen atom. The equation requires that the wavelengths emitted from the hydrogen atom be related to the difference of two integers. Bohr theorized that these integers represent "shells" or "orbits" in which electrons travel around the nucleus, each with a certain energy level. The energy of an orbit is proportional to its distance from the nucleus. An atom will absorb and release photons that have a specific amount of energy. The energy is the result of an electron jumping to a different shell. Starting with Rydberg's equation, along with Planck and Einstein's work on the relationship between light and energy, Bohr was able to derive an equation to calculate the energy of each orbit in the hydrogen atom. The Bohr model depicts the atom as a nucleus with electrons orbiting around it at specific distances. This model is also referred to as the Planetary Model.
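The Rydberg relationship that Bohr explained can be sketched numerically. This is an illustrative sketch, not the book's own notation: the function name is mine, R is the standard Rydberg constant, and the Balmer series (electrons dropping to the n = 2 shell, which produces hydrogen's visible spectral lines) is used as the example.

```python
# Wavelengths of the hydrogen Balmer series from the Rydberg equation,
#   1/lambda = R * (1/n1^2 - 1/n2^2),
# which Bohr interpreted as an electron dropping from shell n2 to n1.
R = 1.097e7  # Rydberg constant, in 1/m

def balmer_wavelength_nm(n2):
    """Wavelength emitted when an electron drops from shell n2 to n1 = 2."""
    inv_wavelength = R * (1 / 2**2 - 1 / n2**2)
    return 1e9 / inv_wavelength  # convert meters to nanometers

for n2 in (3, 4, 5):
    print(n2, round(balmer_wavelength_nm(n2), 1))
# n2 = 3 gives about 656 nm (red); n2 = 4 about 486 nm (blue-green)
```

Each integer pair produces one spectral line, which is exactly the pattern Bohr's shells were invented to explain.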
Robert Millikan is credited with the "Oil Drop Experiment", in which the value of the electron's charge was determined. He created an apparatus in which he could spray oil drops that would settle into a beam of X rays. The X rays caused the oil drops to become charged with electrons. Each oil droplet fell between a positively charged plate and a negatively charged plate which, when the proper voltage was applied, caused the droplet to remain still. Millikan measured the diameter of each individual oil drop using a microscope.
Millikan was able to calculate the mass m of each oil droplet because he knew the density of the oil and the droplet's diameter. For a droplet held still, the downward gravitational force F = mg is balanced by the upward electrical force F_E = qE, where m is the mass of the individual oil droplet, g is the acceleration due to gravity, q is the droplet's charge, and E is the electric field between the plates. Rearranging to q = mg/E, Millikan was able to find the value of the charge on each droplet, and from there the charge of the electron, e = −1.602 × 10⁻¹⁹ coulombs.
The X rays, however, did not always produce an oil drop with only one negative charge. Thus, the droplet charges Millikan obtained may have looked like this: −8.0 × 10⁻¹⁹ C, −6.4 × 10⁻¹⁹ C, −4.8 × 10⁻¹⁹ C, and −3.2 × 10⁻¹⁹ C.
Millikan found that these values all had a common divisor: 1.6 × 10⁻¹⁹ coulomb. He concluded that different values occurred because the droplets acquired different numbers of electrons, charges of −5, −4, −3, and −2 in this example. Thus, he stated that the common divisor, 1.6 × 10⁻¹⁹ coulomb, was the charge of the electron.
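Millikan's common-divisor reasoning can be sketched in a few lines. The droplet values below are illustrative, not Millikan's actual data; each is an integer multiple of a candidate elementary charge:

```python
# Sketch of Millikan's reasoning: several measured droplet charges turn
# out to be integer multiples of one common value, the electron charge.
# The droplet values are illustrative, not Millikan's real measurements.
droplet_charges = [-8.0e-19, -6.4e-19, -4.8e-19, -3.2e-19]  # coulombs

e = 1.6e-19  # candidate elementary charge, in coulombs
multiples = [round(q / -e) for q in droplet_charges]
print(multiples)  # [5, 4, 3, 2]
```

Every measured charge divides cleanly by the same value, which is exactly the signature of a smallest indivisible unit of charge.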
Before learning about subatomic particles, some basic properties should be understood.
Particles may be electrically charged. Charge is a property which defines the force that a particle will exert on other charged particles. There is a well known saying that applies perfectly: "Opposites attract." (Likewise, like charges repel.) Positive charges and negative charges will attract each other and come together. Two positive or two negative charges will push each other away.
|Not all particles have charge.|
The amount of charge a particle has is measured in coulombs, but it is more conveniently expressed in terms of an integer. For instance, a helium ion that has two fewer electrons than usual has a charge of +2, and a bromide ion with one more electron than usual has a charge of −1. (This may seem backwards, but remember that an electron has a negative charge.) Notice that charge applies not only to subatomic particles, but also to ions and other things as well. Always remember to specify whether a charge is positive or negative. Unlike ordinary numbers, we always write the plus sign for positive charges to avoid confusion with a negative charge.
|It may be useful to understand Coulomb's Law. It explains the electromagnetic force: F = kq₁q₂/r². q₁ and q₂ are the charges on the two particles, r is the distance between them, and k is a constant. So, if the distance between two particles is doubled, the force will be reduced four times. Double the charge would mean double the force, be it attractive or repulsive. Coulomb's Law will be especially important when understanding periodic trends. However, there is no need to solve it exactly. Just remember the relationships between the variables.|
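The scaling relationships in Coulomb's law are easy to verify numerically. A minimal sketch (the microcoulomb charges and 1 m separation are arbitrary illustrative values; k is the real Coulomb constant):

```python
# Coulomb's law, F = k * q1 * q2 / r^2: doubling the distance quarters
# the force, and doubling one charge doubles the force.
K = 8.99e9  # Coulomb's constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Force between two point charges (newtons)."""
    return K * q1 * q2 / r**2

f1 = coulomb_force(1e-6, 1e-6, 1.0)
f2 = coulomb_force(1e-6, 1e-6, 2.0)  # distance doubled
f3 = coulomb_force(2e-6, 1e-6, 1.0)  # one charge doubled
print(f1 / f2)  # 4.0  (force reduced four times)
print(f3 / f1)  # 2.0  (force doubled)
```

There is no need to memorize the constant; the proportionalities are what matter for periodic trends.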
Mass is the measure of inertia. From a subatomic point of view, mass can also be understood in terms of energy, but that does not concern us when dealing with chemistry. Mass for particles, atoms, and molecules is not measured in grams, as with ordinary substances. Instead, it is measured in atomic mass units, or amu. For more information about mass and amu, read the previous chapters on properties of matter.
At the center of each atom lies the nucleus. It is incredibly small: if you were to take the average atom (itself minuscule in size) and expand it to the size of a football stadium, then the nucleus would be about the size of a marble. It is, however, astoundingly dense: despite occupying only a tiny fraction of the atom's volume, it contains nearly all of the atom's mass. The nucleus almost never changes under normal conditions, remaining constant throughout chemical reactions. Nuclei are themselves made up of a pair of smaller and denser particles, the proton and the neutron. These particles are collectively dubbed nucleons.
Protons have a charge of +1 and a mass of 1 amu. They are often represented by a p⁺.
Protons will be important when learning about acids and bases—they are the essence of acid. Remember that the number of protons in an atom is its atomic number, and defines what element it will be. The number of protons in a nucleus ranges from one to over a hundred.
Consider the element hydrogen. Its atomic number is 1, so it has one proton and one electron. If it loses its electron to become a positive ion, it is simply a lone proton. Thus, a proton is the nucleus of a hydrogen atom, and a proton is a hydrogen ion. Therefore, a proton can be written as H⁺ or p⁺, both symbols for a hydrogen ion.
Neutrons have no charge and a mass of 1 amu. A neutron is slightly heavier than a proton, but the difference is insignificant. Neutrons are often written n⁰.
Unlike the protons, neutrons cannot exist outside the nucleus indefinitely as they become unstable and break down. Within one nucleus there can be many protons and neutrons all in close proximity to one another. The number of neutrons in a nucleus ranges from zero to over a hundred.
You may wonder why neutrons exist. They have no charge, so can they do anything? The answer is yes—neutrons are very important. Remember that opposites attract and likes repel. If so, then how can several protons stay clumped together in the dense nucleus of an atom? It would seem as if the protons would repel and scatter the nucleus. However, there is a strong nuclear force that holds the nucleus together. This incredible force causes nucleons to attract each other with much greater strength than the electric force can repel them, but only over extremely short distances.
A delicate balance exists between the number of protons and neutrons. Protons, which are attracted to one another via the strong force but simultaneously repelled by their electromagnetic charges, cannot exist in great numbers within the nucleus without the stabilizing action of neutrons, which are attracted via the strong force but are not charged. Conversely, neutrons lend their inherent instability to the nucleus and too many will destabilize it.
|Neutrons are the reason for isotopes, or atoms with the same number of protons but different masses. The masses are different because of different numbers of neutrons. Isotopes of a given element have almost identical chemical properties (like color, melting point, reactions, etc.), but they have different nuclear properties. Some isotopes are stable, others are radioactive. Different isotopes decay in different ways.|
Lastly, neutrons are very important in nuclear reactions, such as those used in power plants. Neutrons act like a bullet that can split an atom's nucleus. Because they have no charge, neutrons are neither attracted nor repelled by atoms and ions.
The Electron Cloud
Surrounding the dense nucleus is a cloud of electrons. Electrons have a charge of −1 and a mass of about 0 amu. That does not mean they are massless. Electrons do have mass, but it is so small that it has almost no effect on the overall mass of an atom. An electron has approximately 1/1800 the mass of a proton or neutron. Electrons are written e⁻.
Electrons orbit the outside of a nucleus, unaffected by the strong nuclear force. They define the chemical properties of an atom because virtually every chemical reaction deals with the interaction or exchange of the outer electrons of atoms and molecules.
Electrons are attracted to the nucleus of an atom because they are negative and the nucleus (being made of protons and neutrons) is positive. Opposites attract. However, electrons don't fall into the nucleus. They orbit around it at specific distances because the electrons have a certain amount of energy. That energy prevents them from getting too close, as they must maintain a specific speed and distance. Changes in the energy levels of electrons cause different phenomena such as spectral lines, the color of substances, and the creation of ions (atoms with missing or extra electrons).
Atoms will always have equal numbers of protons and electrons, so their overall charge is zero. Atoms are neutral. Ions, on the other hand, are atoms that have gained or lost electrons and now have unequal numbers of protons and electrons. If there are extra electrons, the ion is negatively charged. If there are missing electrons, the ion is positively charged, because the protons now outnumber the electrons.
Valence electrons (the outermost electrons) are responsible for an atom's behavior in chemical bonds. The core electrons are all of the electrons not in the outermost shell, and they rarely get involved. An atom will attempt to fill its valence shell, which for most atoms means holding eight valence electrons (as explained in the next chapter), so atoms will form chemical bonds to share, give, or take the electrons they need. Sodium, for example, is very likely to give up its one valence electron so that its outer shell is empty (the shell underneath it is full). Chlorine is very likely to take an electron because it has seven and wants eight. When sodium and chlorine are mixed, they exchange electrons and create sodium chloride (table salt). As a result, both elements have full valence shells, and a very stable compound is formed.
Introduction to Quantum Theory
Introduction to Quantum Mechanics
In the late 19th century, many physicists believed that they had made great progress in physics, and there wasn't much more that needed to be discovered. The classical physics at the time was widely accepted in the scientific community. However, by the early 20th century, physicists discovered that the laws of classical mechanics break down in the atomic world, and experiments such as the photoelectric effect completely contradict the laws of classical physics. As a result of these crises, physicists began to construct new laws of physics which would apply to the atomic world; these theories would be collectively known as quantum mechanics. Quantum mechanics, in some ways, completely changed the way physicists view the universe, and it also marked the end of the idea of a clockwork universe (the idea that the universe was predictable).
Electromagnetic radiation (ER) is a form of energy that sometimes acts like a wave, and other times acts like a particle. Visible light is a well-known example. All forms of ER have two inversely proportional properties: wavelength and frequency. Wavelength is the distance from one wave peak to the next, which can be measured in meters. Frequency is the number of wave peaks observed in a given point during a second. The unit for frequency is hertz.
Since wavelength and frequency are inversely related, their product always equals a constant: 3.0 × 10⁸ m/s, represented by the letter c and better known as the speed of light. This relationship is written mathematically as c = λν, with the Greek letter λ (lambda) representing wavelength and ν (nu) representing frequency.
The wavelength and frequency of any specific occurrence of ER determine its position on the electromagnetic spectrum.
As you can see, visible light is only a tiny fraction of the spectrum.
The energy of a single particle of an electromagnetic wave (called a photon) is given by E = hν, where h is Planck's constant and ν is the frequency. Energy is directly proportional to frequency: doubling the frequency will double the energy.
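The two relationships c = λν and E = hν chain together: from a wavelength you can get a frequency, and from a frequency a photon energy. A minimal sketch, using green light (roughly 520 nm) as an illustrative example:

```python
# Relating wavelength, frequency, and photon energy:
#   c = lambda * nu   and   E = h * nu
C = 3.0e8      # speed of light, m/s
H = 6.626e-34  # Planck's constant, J*s

wavelength = 520e-9          # green light, in meters (illustrative value)
frequency = C / wavelength   # hertz
energy = H * frequency       # joules per photon

print(frequency)  # about 5.8e14 Hz
print(energy)     # about 3.8e-19 J
```

Halving the wavelength doubles the frequency and therefore doubles the photon energy, which is why ultraviolet light is more energetic than visible light.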
The Discovery of the Quantum
So far we have only discussed the wave characteristics of energy. However, the wave model cannot account for something known as the photoelectric effect. This effect is observed when light focused on certain metals apparently causes electrons to be emitted. (Photoelectric or solar panels work on this principle.)
For each metal, it was found that there is a minimum frequency of electromagnetic radiation that must be shone on it in order for it to emit electrons. This conflicted with the earlier idea that the energy of light was linked only to its intensity. Under that theory, the effect of light should be cumulative: dim light should add up, little by little, until it causes electrons to be emitted. Instead, there is a clear-cut minimum frequency of light below which no electrons are emitted, no matter the intensity.
The implication of this is that the energy of light is tied to frequency, and furthermore that it is quantized, meaning that it carries "packets" of energy in discrete amounts. These packets are more commonly referred to as photons. This observation led to the discovery of the minimum amount of energy that could be gained or lost by an atom. Max Planck named this minimum amount the quantum, plural "quanta", meaning "how much". One photon of light carries exactly one quantum of energy.
The Quantum Model
It turns out that photons are not the only things that act like waves and particles. Electrons, too, have this characteristic, known as wave-particle duality. Electrons can be thought of as waves of a certain length; thus they can only form a circle around the nucleus at distances that fit a whole number of wavelengths. Of course, this brings up a problem: are electrons particles in a specific location, or waves in a general area? Werner Heisenberg tried using photons to locate electrons. But when a photon strikes an electron, the electron changes velocity and moves to an excited state. As a result, it is impossible to precisely measure the velocity and location of an electron at the same time. This is known as the observer effect. It is frequently confused with the Heisenberg Uncertainty Principle, which goes even further, saying that there are fundamental limits to how precisely both the position and momentum of a particle can even be known. This is because an electron cannot exhibit both its wave and particle properties at the same time when interacting with its surroundings: its momentum is tied to its wave properties, while its position is tied to its particle properties. The Heisenberg uncertainty principle is a kind of scientific dilemma: the more you know about something's momentum, the less you know about its position, and vice versa. The significance of this uncertainty is that you can never know exactly where an atom's electrons are, only where they are most likely to be.
On the tiny scale of an atom, the particle model of an electron does not accurately describe its properties. An electron tends to act more like a water wave than a billiard ball. At any one moment in time the ball is in some definite place; it is also moving in some definite direction at a definite speed. This is certainly not true for waves or electrons in general. The Heisenberg uncertainty principle states that the exact position and momentum of an electron cannot be simultaneously determined. This is because electrons simply don't have a definite position, and direction of motion, at the same time!
One way to try to understand this is to think of an electron not as a particle but as a wave. Think of dropping a stone into a pond. The ripples start to spread out from that point. We can answer the question "Where is the wave?" with "It's where you plonked the stone in". But we can't answer the question "What direction is the wave moving?" because it's moving in all directions. It's spreading out. Now think of a wave at the seaside. We know the direction of motion. It's straight in towards the beach. But where is the wave? We can't pinpoint an exact location. It's all along the water.
The Wave Function
If we can never know exactly where an electron is, then how do we know about the way they orbit atoms? Erwin Schrödinger developed the Quantum Mechanical model, which describes the electron's behavior in a given system. It can be used to calculate the probability of an electron being found at a given position. You don't know exactly where the electron is, but you know where it is most likely and least likely to be found. In an atom, the wave function can be used to model a shape, called an orbital, which contains the area an electron is almost certain to be found inside.
In the following sections, we will learn about the shells, subshells, and orbitals that the electrons are in. Try not to get confused; it can be difficult. Understanding this information will help you to learn about bonding, which is very important.
Each electron orbiting in an atom has a set of four numbers that describe it. Those four numbers, called quantum numbers, describe the electron's orbit around the nucleus. Each electron in an atom has a unique set of numbers, and the numbers will change if the electron's orbit is altered. Examples are if bonding occurs, or an electron is energized into a higher-energy orbit. In the next chapter, we will learn the meaning of those four values.
|Keep in mind that the pictures of the orbitals you will soon see show the area in which the electron is most likely to be, not its exact orbit. It's like a picture of a sprinkler watering a lawn, and the electrons are drops of water. You know the general area of the water, but not the exact location of each droplet. In the orbital pictures, you know the general area the electron could be in, but not its exact path. This is a result of the Uncertainty Principle.|
The Quantum Atom
The Quantum Numbers
These four numbers are used to describe the location of an electron in an atom.
|Principal Quantum Number|
|Angular Momentum Quantum Number|
|Magnetic Quantum Number|
|Spin Quantum Number|
Principal Quantum Number (n)
Determines the shell the electron is in. The shell is the main factor that determines the energy of the electron (higher n corresponds to higher energy) and the size of the orbital (higher n means a greater possible distance from the nucleus). The row an element occupies on the periodic table tells how many shells it has: helium (n = 1), neon (n = 2), argon (n = 3), etc. Note that each shell contains different subshells, as described below; for example, argon uses the n = 1, n = 2, and n = 3 shells, for that total of 3.
Angular Momentum Quantum Number (l)
Also known as the azimuthal quantum number. Determines the subshell the electron is in. Each subshell has a unique shape and a letter name. The s orbital is shaped like a sphere and occurs when l = 0. The p orbitals (there are three) are shaped like teardrops and occur when l = 1. The d orbitals (there are five) occur when l = 2. The f orbitals (there are seven) occur when l = 3. (By the way, when l = 4 the orbitals are "g orbitals", but they and the l = 5 "h orbitals" can safely be ignored in general chemistry.) The number of subshells in a shell equals the principal quantum number: in the nth shell, l runs from 0 to n − 1. For example, in the n = 2 shell, the subshells are the 2s subshell (l = 0) and the 2p subshell (l = 1). You will learn how to determine the number of orbitals in each subshell in the next section.
This number also gives information as to what the angular node of an orbital is. A node is defined as a point on a standing wave where the wave has minimal amplitude. When applied to chemistry this is the point of zero-displacement and thus where no electrons are found. In turn angular node means the planar or conical surface in which no electrons are found or where there is no electron density. The models shown on this page show the most simple representations of these orbitals and their nodes. More accurate, but more complex depictions are not necessary for the scope of this book.
Here are pictures of the orbitals. Keep in mind that they do not show the actual path of the electrons, due to the Heisenberg Uncertainty Principle. Instead, they show the volume where the electron is most likely to occur, i.e. the probability amplitude is largest. The two colors represent two signs (phases) of the wave function (the choice is arbitrary). Each of the depicted orbitals is a superposition of two opposite m quantum numbers (see below).
|ml||0||-1 and 1||-2 and 2||-3 and 3|
|S orbital →|
|P orbitals →|
|D orbitals →|
|F orbitals →|
Magnetic Quantum Number (ml)
The magnetic quantum number determines the orbital in which the electron lies. The number of orbitals in each subshell can be calculated as 2l + 1. ml determines how rapidly the complex phase increases around the z-axis. Without a magnetic field, these orbitals all have the same energy; they are degenerate and can be combined into different shapes and spatial orientations. Orbitals in a subshell that share the same energy are called degenerate orbitals. This simply means that the orbitals in each p subshell all have the same energy level. The differences in shape and orientation of the higher subshells are not important during general chemistry, and the orbitals in the same higher subshell are still degenerate regardless of shape differences.
Spin Quantum Number (ms)
Determines the spin of the electron. +½ corresponds to the up arrow in an electron configuration box. If there is only one electron in an orbital (one arrow in one box), then it is always considered +½. The second arrow, or down arrow, is considered -½. Every orbital can contain one "spin up" electron, and one "spin down" electron.
Let's examine the quantum numbers of electrons from a magnesium atom, 12Mg. Remember that each list of numbers corresponds to (n, l, ml, ms).
|Two s electrons:||(1, 0, 0, +½)||(1, 0, 0, -½)|
|Two s electrons:||(2, 0, 0, +½)||(2, 0, 0, -½)|
|Six p electrons:||(2, 1, -1, +½)||(2, 1, -1, -½)||(2, 1, 0, +½)||(2, 1, 0, -½)||(2, 1, 1, +½)||(2, 1, 1, -½)|
|Two s electrons:||(3, 0, 0, +½)||(3, 0, 0, -½)|
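The table above can be reproduced programmatically. This sketch fills the subshells 1s, 2s, 2p, 3s in order, which is sufficient for magnesium's 12 electrons (the function name and the pairing order within the p subshell are our own conventions; the final closed-subshell result is the same either way):

```python
# Sketch: list the quantum numbers (n, l, ml, ms) for the 12 electrons
# of magnesium, filling 1s, 2s, 2p, 3s (sufficient for Z <= 12).

from fractions import Fraction

def mg_quantum_numbers():
    electrons = []
    for n, l in [(1, 0), (2, 0), (2, 1), (3, 0)]:   # 1s, 2s, 2p, 3s
        for ml in range(-l, l + 1):                  # 2l + 1 orbitals
            for ms in (Fraction(1, 2), Fraction(-1, 2)):  # two spins each
                electrons.append((n, l, ml, ms))
    return electrons

qn = mg_quantum_numbers()
print(len(qn))  # 12
print(qn[0])    # (1, 0, 0, Fraction(1, 2))
```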
The Periodic Table
Notice a pattern on the periodic table. Different areas, or blocks, have different types of electrons. The two columns on the left make the s-block. The six columns on the right make the p-block. The large area in the middle (transition metals) makes the d-block. The bottom portion makes the f-block (Lanthanides and Actinides). Each row introduces a new shell (aka energy level). Basically, the row tells you how many shells of electrons there will be, and the column tells you which subshells will occur (and which shells they occur in). The value of ml can be determined by some of the rules we will learn in the next chapter. The value of ms doesn't really matter as long as there are no repeating values in the same orbital.
|To see the pattern better, take a look at this periodic table.|
Shells and Orbitals
Each shell is subdivided into subshells, which are made up of orbitals. Each subshell has a characteristic orbital shape and is named by a letter: s, p, d, and f. In a one-electron atom (e.g. H, He+, Li2+, etc.) the energy of every orbital within a particular shell is identical. However, when there are multiple electrons, they interact and split the subshells into slightly different energies. Within any particular shell, the energy of the orbitals depends on the angular momentum quantum number: s, p, d, and f in order of lowest to highest energy. No two subshells in the same shell have the same energy, although the orbitals within a given subshell remain degenerate.
The s orbital
The simplest orbital in the atom is the 1s orbital. It has no radial or angular nodes: the 1s orbital is simply a sphere of electron density. (A node is a point where the electron probability is zero.) As with all orbitals, the number of radial nodes increases with the principal quantum number (i.e. the 2s orbital has one radial node, the 3s has two, etc.). Because the angular momentum quantum number is 0, there is only one choice for the magnetic quantum number: there is only one s orbital per shell. The s orbital can hold two electrons, as long as they have different spin quantum numbers. S orbitals are involved in bonding.
The p orbitals
Starting from the 2nd shell, there is a set of p orbitals. The angular momentum quantum number of the electrons confined to p orbitals is 1, so each orbital has one angular node. There are 3 choices for the magnetic quantum number, which indicates 3 differently oriented p orbitals. Finally, each orbital can accommodate two electrons (with opposite spins), giving the p orbitals a total capacity of 6 electrons.
The p orbitals all have two lobes of electron density pointing along each of the axes. Each one is symmetrical along its axis. The notation for a p orbital indicates which axis it points along, i.e. px points along the x axis, py along the y axis, and pz up and down the z axis. Note that although pz corresponds to the ml = 0 orbital, px and py are actually mixtures of the ml = -1 and ml = 1 orbitals. The p orbitals are degenerate: they all have the same energy. P orbitals are very often involved in bonding.
The d orbitals
The first set of d orbitals is the 3d set. The angular momentum quantum number is 2, so each orbital has two angular nodes. There are 5 choices for the magnetic quantum number, which gives rise to 5 different d orbitals. Each orbital can hold two electrons (with opposite spins), giving the d orbitals a total capacity of 10 electrons.
Note that all the d orbitals have four lobes of electron density, except for the dz2 orbital, which has two opposing lobes and a doughnut of electron density around the middle. The d orbitals can be further subdivided into two smaller sets. The dx2-y2 and dz2 orbitals point directly along the x, y, and z axes; they form the eg set. On the other hand, the lobes of the dxy, dxz and dyz all line up in the quadrants, with no electron density on the axes; these three orbitals form the t2g set. In most cases the d orbitals are degenerate, but sometimes they can split, with the eg and t2g subsets having different energies. Crystal Field Theory predicts and accounts for this. D orbitals are sometimes involved in bonding, especially in inorganic chemistry.
The f orbitals
The first set of f orbitals is the 4f subshell. There are 7 possible magnetic quantum numbers, so there are 7 f orbitals. Their shapes are fairly complicated, and they rarely come up when studying chemistry. There are 14 f electrons because each orbital can hold two electrons (with opposite spins).
Filling Electron Shells
When an atom or ion receives electrons into its orbitals, the orbitals and shells fill up in a particular manner.
You may consider an atom as being "built up" from a naked nucleus by gradually adding to it one electron after another, until all the electrons it will hold have been added. Much as one fills up a container with liquid from the bottom up, the orbitals of an atom are filled from the lowest energy orbitals to the highest energy orbitals.
Orbitals with the lowest principal quantum number (n) have the lowest energy and fill up first in smaller atoms. Larger atoms with more subshells will seem to fill "out of order", as the other factors influencing orbital energy become important. Within a shell there may be several orbitals with the same principal quantum number, and in that case more specific rules must be applied. For example, the three p orbitals of a given shell all occur at the same energy level. So how are they filled? Because all three p orbitals have the same energy, we can fill any of px, py, or pz first; it is merely a convention, for simplicity, to fill px first, then py, then pz (you could equally fill them from right to left). The Aufbau principle states that atomic orbitals are filled with electrons in order of increasing energy level.
According to Hund's rule, orbitals of the same energy are each filled with one electron before filling any with a second. Also, these first electrons have the same spin.
This rule is sometimes called the "bus seating rule". As people load onto a bus, each person takes his or her own seat, sitting alone. Only after all the seats have been filled will people start doubling up.
Pauli Exclusion principle
No two electrons can have all four quantum numbers the same. What this translates to in terms of our picture of orbitals is that each orbital can only hold two electrons, one "spin up" (+½) and one "spin down" (-½).
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, 8s.
Although this list looks confusing, there is an easy way to remember it: subshells fill in order of increasing n + l, and when two subshells tie, the one with the smaller n fills first. (This is often drawn as a diagonal-arrow diagram, read line by line from the top right end to the bottom left of each line.)
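The ordering above can be generated with the Madelung (n + l) rule just described. A small Python sketch (the names are our own):

```python
# Sketch: generate the subshell filling order using the Madelung rule:
# subshells fill in order of increasing n + l, ties broken by smaller n.

LETTERS = "spdfg"  # letters for l = 0 .. 4 (enough for real elements)

def filling_order(max_n=8):
    pairs = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    pairs.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))  # (n + l, then n)
    return [f"{n}{LETTERS[l]}" for n, l in pairs if l < len(LETTERS)]

print(" ".join(filling_order()[:20]))
# 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p 8s
```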
Understanding the above rules and diagrams will allow you to determine the electron configuration of almost any atom or ion.
How to Write the Electron Configuration of an Isolated Atom
Electron-configuration notation is relatively straightforward. An isolated calcium atom 20Ca, for example, would have a ground-state configuration of 1s22s22p63s23p64s2. Other configurations like 1s22s22p63s23p64s14p1 are possible, but these excited states have a higher energy. They are not stable and generally only exist for a brief moment.
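A configuration string like the one above can be built mechanically from the filling order. This sketch applies the aufbau order alone, so it ignores the stability exceptions (Cr, Cu, ...) covered later in this section; the constants and function name are our own illustration:

```python
# Sketch: build a ground-state configuration string by the aufbau order alone.
# NOTE: ignores the half-full/full subshell exceptions (Cr, Cu, ...).

ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d", "5p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}  # electrons per subshell type

def configuration(z):
    """Return the aufbau configuration for an atom with z electrons."""
    parts = []
    for sub in ORDER:
        if z <= 0:
            break
        take = min(z, CAPACITY[sub[-1]])  # fill up to the subshell capacity
        parts.append(f"{sub}{take}")
        z -= take
    return "".join(parts)

print(configuration(20))  # 1s22s22p63s23p64s2  (calcium)
```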
The ground state configuration for Ca could be abbreviated by using the preceding noble gas (the elements found all the way on the right of the periodic table) as [Ar]4s2, where Ar is argon.
Noble gases have very stable configurations and are extremely reluctant to lose or gain electrons. Noble gas atoms are also the only ones regularly found as isolated atoms in the ground state. Atoms of other elements all undergo bonding under the conditions that we live under, and this affects the orbitals that the outermost electrons are in. In that sense the electron configurations of the other elements are somewhat hypothetical: to encounter an isolated atom of, say, tungsten (W), we would have to first vaporize a metal that boils at 5800 K. However, knowing atomic configurations is useful because it helps us to understand how and why atoms bond, i.e. why and how they change the configuration of their outer valence electrons.
Rule of Stability
A subshell is particularly stable if it is half full or full. Given two configurations, the atom would "choose" the more stable one.
Example: In the configuration Cu: [Ar]4s23d9, copper's d subshell is just one electron away from stability; therefore, one electron from the s subshell jumps into the d subshell: [Ar]4s13d10. This way the d subshell is full, and therefore stable, and the s subshell is half full, which is also stable.
Another example: chromium has a configuration of [Ar]4s13d5, although you would expect to see four d electrons instead of five. This is because an s electron has jumped into the d subshell, giving the atom two half-full subshells, which is much more stable than a d subshell with only four electrons.
The stability rule applies to atoms in the same group as chromium and copper.
If one of these atoms has been ionized, that is, it loses an electron, it will come from the s orbital rather than the d orbital. For instance, the configuration of Cu+ is [Ar]4s03d10. If more electrons are removed, they will come from the d orbital.
Magnetism is a well-known effect. Chances are, you have magnets on your refrigerator. As you already know, only certain elements are magnetic. Electron configurations help to explain why.
Diamagnetism is actually a very weak repulsion to magnetic fields. All elements have diamagnetism to some degree. It occurs when there are paired electrons.
Paramagnetism is an attraction to external magnetic fields. It is also very weak. It occurs whenever there is an unpaired electron in an orbital.
Both diamagnetism and paramagnetism are responses of spins acting independently from each other. This leads to rather weak repulsion and attraction respectively. However, when they are located in a solid they may also interact with each other and respond collectively and that can lead to rather different properties:
Ferromagnetism is the permanent magnetism that we encounter in our daily lives. It occurs when all the unpaired spins in a solid couple and tend to align themselves in the same direction, leading to a strong attraction when exposed to a magnetic field. At room temperature this occurs in only three elements: iron (Fe), nickel (Ni), and cobalt (Co). Gadolinium (Gd) is a borderline case: it loses its ferromagnetism above 20 °C, where the spins start to act alone. However, there are many alloys and compounds that exhibit strong ferromagnetic coupling; the strongest is Nd2Fe14B.
Antiferromagnetism is also a permanent magnetism in which unpaired spins align, but they do so in opposite directions. The result is that the material does not react very strongly to a magnetic field at all. Chromium (Cr) is an example.
Ferrimagnetism is a combination of ferro- and antiferromagnetism: unpaired spins align partly in opposite directions, but the compensation is not complete, so the material is still strongly attracted to a magnetic field. Magnetite (Fe3O4) is such a substance. It was the first material studied for its magnetic properties, and it may well be the one sitting on your fridge.
Periodicity and Electron Configurations
Blocks of the Periodic Table
The Periodic Table does more than just list the elements. The word periodic means that within each row, or period, there is a pattern of characteristics in the elements. This is because the elements are listed in part by their electron configuration. The alkali metals and alkaline earth metals have one and two valence electrons (electrons in the outer shell) respectively; they lose electrons to form bonds easily and are thus very reactive. These elements make up the s-block of the periodic table. The p-block, on the right, contains common non-metals such as chlorine and oxygen. The noble gases, in the right-most column, almost never react, since their eight valence electrons (two for helium) make them very stable. The halogens, directly to the left of the noble gases, readily gain electrons and react with metals. The s and p blocks make up the main-group elements, also known as representative elements. The d-block, which is the largest, consists of transition metals such as copper, iron, and gold. The f-block, on the bottom, contains rarer metals including uranium. Elements in the same group or family have the same configuration of valence electrons, making them behave in chemically similar ways.
Causes for Trends
There are certain phenomena that cause the periodic trends to occur. You must understand them before learning the trends.
Effective Nuclear Charge
The effective nuclear charge is the amount of positive charge acting on an electron. It is the number of protons in the nucleus minus the number of electrons in between the nucleus and the electron in question. The nucleus attracts the electron, but other electrons in lower shells repel it (opposites attract, likes repel).
The shielding (or screening) effect is similar to effective nuclear charge. The core electrons repel the valence electrons to some degree. The more electron shells there are (a new shell for each row in the periodic table), the greater the shielding effect is. Essentially, the core electrons shield the valence electrons from the positive charge of the nucleus.
When two electrons are in the same shell, they will repel each other slightly. This effect is mostly canceled out due to the strong attraction to the nucleus, but it does cause electrons in the same shell to spread out a little bit. Lower shells experience this effect more because they are smaller and allow the electrons to interact more.
Coulomb's law is an equation that determines the amount of force with which two charged particles attract or repel each other. It is F = kq1q2/r2, where q1 and q2 are the amounts of charge, r is the distance between the charges, and k is a constant. For atoms, the charges are typically described as integer multiples (positive for protons, negative for electrons) of the elementary charge e, which is 1.6022 x 10-19 coulombs. You can see that doubling the distance would quarter the force.
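To make the inverse-square dependence concrete, here is a sketch that evaluates Coulomb's law for a proton and an electron separated by one Bohr radius (the constants are standard physical values; the function name is our own):

```python
# Sketch: Coulomb's law F = k * |q1 * q2| / r**2 for a proton and an electron
# separated by one Bohr radius.

K = 8.98755e9        # Coulomb constant, N m^2 / C^2
E = 1.6022e-19       # elementary charge, C
BOHR = 5.29177e-11   # Bohr radius, m

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return K * abs(q1 * q2) / r ** 2

f = coulomb_force(E, -E, BOHR)
print(f"{f:.2e} N")  # about 8.2e-08 N
# Doubling the distance quarters the force:
print(coulomb_force(E, -E, 2 * BOHR) / f)  # 0.25
```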
Trends in the Periodic table
Most of the elements occur naturally on Earth. However, all elements beyond uranium (number 92) are called trans-uranium elements and never occur outside of a laboratory. Most of the elements occur as solids or gases at STP. STP is standard temperature and pressure, which is 0° C and 1 atmosphere of pressure. There are only two elements that occur as liquids at STP: mercury (Hg) and bromine (Br).
Bismuth (Bi) is the last stable element on the chart. All elements after bismuth are radioactive and decay into more stable elements. Some elements before bismuth are radioactive, however.
Leaving out the noble gases, atomic radii are larger on the left side of the periodic chart and are progressively smaller as you move to the right across the period. As you move down the group, radii increase.
Atomic radii decrease along a period due to greater effective nuclear charge. Atomic radii increase down a group due to the shielding effect of the additional core electrons, and the presence of another electron shell.
For nonmetals, ions are bigger than atoms, as the ions have extra electrons. For metals, it is the opposite.
Extra electrons (negative ions, called anions) cause additional electron-electron repulsions, making them spread out farther. Fewer electrons (positive ions, called cations) cause fewer repulsions, allowing them to be closer.
|Ionization energy is the energy required to strip an electron from the atom (when in the gas state).|
Ionization energy is also a periodic trend within the periodic table organization. Moving left to right within a period or upward within a group, the first ionization energy generally increases. As the atomic radius decreases, it becomes harder to remove an electron that is closer to a more positively charged nucleus.
Ionization energy decreases going left across a period because there is a lower effective nuclear charge keeping the electrons attracted to the nucleus, so less energy is needed to pull one out. It also decreases going down a group due to the shielding effect. Remember Coulomb's law: as the distance between the nucleus and the electrons increases, the force decreases with the square of the distance.
It is considered a measure of the tendency of an atom or ion to surrender an electron, or the strength of the electron binding; the greater the ionization energy, the more difficult it is to remove an electron. The ionization energy may be an indicator of the reactivity of an element. Elements with a low ionization energy tend to be reducing agents and form cations, which in turn combine with anions to form salts.
|Electron affinity is the opposite of ionization energy. It is the energy released when an electron is added to an atom.|
Electron affinity is highest in the upper right of the periodic table and lowest in the lower left. However, electron affinity is actually negative for the noble gases. They already have a complete valence shell, so there is no room in their orbitals for another electron. Adding an electron would require creating a whole new shell, which takes energy instead of releasing it. Several other elements have extremely low electron affinities because they are already in a stable configuration, and adding an electron would decrease stability.
Electron affinity occurs due to the same reasons as ionization energy.
Electronegativity is how much an atom attracts electrons within a bond. It is measured on a scale with fluorine at 4.0 and francium at 0.7. Electronegativity decreases from upper right to lower left.
Electronegativity decreases because of atomic radius, shielding effect, and effective nuclear charge in the same manner that ionization energy decreases.
Metallic elements are shiny, usually gray or silver colored, and good conductors of heat and electricity. They are malleable (can be hammered into thin sheets), and ductile (can be stretched into wires). Some metals, like sodium, are soft and can be cut with a knife. Others, like iron, are very hard. Non-metallic atoms are dull, usually colorful or colorless, and poor conductors. They are brittle when solid, and many are gases at STP. Metals give away their valence electrons when bonding, whereas non-metals take electrons.
The metals are towards the left and center of the periodic table—in the s-block, d-block, and f-block . Poor metals and metalloids (somewhat metal, somewhat non-metal) are in the lower left of the p-block. Non-metals are on the right of the table.
Metallic character increases from right to left and top to bottom. Non-metallic character is just the opposite. This is because of the other trends: ionization energy, electron affinity, and electronegativity.
Octet Rule and Exceptions
The octet rule refers to the tendency of atoms to prefer to have eight electrons in the valence shell. When atoms have fewer than eight electrons, they tend to react and form more stable compounds. When discussing the octet rule, we do not consider d or f electrons. Only the s and p electrons are involved in the octet rule, making it useful for the representative elements (elements not in the transition metal or inner-transition metal blocks). An octet corresponds to an electron configuration ending with s2p6.
Atoms will react to reach the most stable state possible. A complete octet is very stable because all orbitals will be full. Atoms with greater stability have less energy, so a reaction that increases the stability of the atoms will release energy in the form of heat or light. Reactions that decrease stability must absorb energy, typically cooling their surroundings.
The other tendency of atoms is to maintain a neutral charge. Only the noble gases (the elements on the right-most column of the periodic table) have zero charge with filled valence octets. All of the other elements have a charge when they have eight electrons all to themselves. The result of these two guiding principles is the explanation for much of the reactivity and bonding that is observed within atoms: atoms seek to share electrons in a way that minimizes charge while fulfilling an octet in the valence shell.
|The noble gases rarely form compounds. They have the most stable configuration (full octet, no charge), so they have no reason to react and change their configuration. All other elements attempt to gain, lose, or share electrons to achieve a noble gas configuration.|
The formula for table salt is NaCl. It is the result of Na+ ions and Cl- ions bonding together. If sodium metal and chlorine gas mix under the right conditions, they will form salt. The sodium loses an electron, and the chlorine gains that electron. In the process, a great amount of light and heat is released. The resulting salt is mostly unreactive — it is stable. It won't undergo any explosive reactions, unlike the sodium and chlorine that it is made of.
Why? Referring to the octet rule, atoms attempt to get a noble gas electron configuration, which is eight valence electrons. Sodium has one valence electron, so giving it up would result in the same electron configuration as neon. Chlorine has seven valence electrons, so if it takes one it will have eight (an octet). Chlorine has the electron configuration of argon when it gains an electron.
The octet rule could have been satisfied if chlorine gave up all seven of its valence electrons and sodium took them. In that case, both would have the electron configurations of noble gases, with a full valence shell. However, their charges would be much higher: Na7- and Cl7+, which is much less stable than Na+ and Cl-. Atoms are more stable when they have no charge, or a small charge.
There are exceptions to the octet rule.
The main exception to the rule is hydrogen, which is at its lowest energy when it has two electrons in its valence shell. Helium (He) is similar in that it, too, only has room for two electrons in its only valence shell.
Hydrogen and helium have only one electron shell. The first shell has only one s orbital and no p orbital, so it holds only two electrons. Therefore, these elements are most stable when they have two electrons. You will occasionally see hydrogen with no electrons, but H+ is much less stable than hydrogen with one or two electrons.
Lithium, with three protons and electrons, is most stable when it gives up an electron.
Less Than an Octet
Other notable exceptions are aluminum and boron, which can function well with six valence electrons. Consider BF3: the boron shares its three electrons with three fluorine atoms. The fluorine atoms follow the octet rule, but boron has only six electrons. Although atoms with less than an octet may be stable, they will usually attempt to form a fourth bond to get eight electrons. BF3 is stable, but it will form BF4- when possible. Most elements to the left of the carbon group have so few valence electrons that they are in the same situation as boron: they are electron deficient. Electron-deficient elements often show metallic rather than covalent bonding.
More Than an Octet
In Period 3, the elements on the right side of the periodic table have empty d orbitals. The d orbitals may accept electrons, allowing elements like sulfur, chlorine, silicon and phosphorus to have more than an octet. Compounds such as PCl5 and SF6 can form. These compounds have 10 and 12 electrons around their central atoms, respectively.
|Xenon hexafluoride uses d-electrons to form more than an octet. This compound shows another exception: a noble gas compound.|
Some elements, notably nitrogen, have an odd number of electrons and will form somewhat stable compounds. Nitric oxide has the formula NO. No matter how electrons are shared between the nitrogen and oxygen atoms, there is no way for nitrogen to have an octet; it will have seven electrons instead. A molecule with an unpaired electron is called a free radical, and radicals are highly reactive, so reactive that many of them exist only for a fraction of a second. As radicals go, NO and NO2 are actually remarkably stable. At low temperatures, NO2 reacts with itself to form N2O4, its dimer, which is not a radical.
|Nitrogen dioxide has an unpaired electron. (Note the positive charge above the N).|
Compounds and Bonding
Overview of Bonding
Introduction to Bonding
Put simply, chemical bonds join atoms together to form more complex structures (like molecules or crystals). Bonds can form between atoms of the same element, or between atoms of different elements. There are several types of chemical bonding, which have different properties and give rise to different structures.
In many cases, atoms try to react to form valence shells containing eight electrons. The octet rule describes this tendency, but it also has many exceptions.
- Ionic bonding occurs between positive ions (cations) and negative ions (anions). In an ionic solid, the ions arrange themselves into a rigid crystal lattice. NaCl (common salt) is an example of an ionic substance. In ionic bonding there is an attractive force established between large numbers of positive cations and negative anions, such that a neutral lattice is formed. This attraction between oppositely-charged ions is collective in nature and called ionic bonding.
- Covalent bonds are formed when the orbitals of two non-metal atoms physically overlap and share electrons with each other. There are two types of structures to which this can give rise: molecules and covalent network solids. Methane (CH4) and water (H2O) are examples of covalently bonded molecules, and glass is a covalent network solid.
- Metallic bonding occurs between atoms that have few electrons compared to the number of accessible orbitals. This is true for the vast majority of chemical elements. In a metallically bonded substance, the atoms' outer electrons are able to move around freely - they are delocalised to form an 'electron pool'. Iron is a metallically bonded substance.
Chemical bonding is one of the most crucial concepts in the study of chemistry. In fact, the properties of materials are basically defined by the type and number of atoms they contain and how they are bonded together.
So far, you have seen examples of intramolecular bonds. These bonds connect atoms into molecules or whole crystals. There are also intermolecular bonds that connect molecules into larger substances. These are also called intermolecular forces, or IMF. IMF are weaker than intramolecular bonds, and since they do not permanently join two molecules or ions, it is generally considered incorrect to refer to them as bonds. A substance will not always have both IMF and intramolecular bonds. In the case of ionic crystals (like salt) or covalent networks (like diamond), the solid is made of a network of intramolecular bonds connecting all the component atoms or ions in a repeating pattern, with no separate units to be attracted to each other by IMF. In the case of metallic bonding, the atoms are all interconnected into one large piece of metal, but the electrons move freely rather than being confined to the static bonds of a crystal lattice or covalent network.
What determines the type of bond formed between two elements? There are two ways of classifying elements to determine the bond formed: by electronegativity, or by metallic/non-metallic character.
Electronegativity is a property of atoms which is reflected in the layout of the periodic table of the elements. Electronegativity is greatest in the elements in the upper right of the table (e.g., fluorine), and lowest in the lower left (e.g., francium).
Electronegativity is a relative measure of how strongly an atom will attract the electrons in a bond. Although bonds are the result of atoms sharing their electrons, the electrons can be shared unequally. The more electronegative atom in a bond will have a slight negative charge, and the less electronegative atom will have a slight positive charge. Overall, the molecule may have no charge, but the individual atoms will. This is a result of the electronegativity—by attracting the electrons in a bond, an atom gains a slight negative charge. Of course, if two elements have equal electronegativity, they will share the electrons equally.
Metallic elements have low electronegativity, and non-metallic elements have high electronegativity. If two elements are close to each other on the periodic table, they will have similar electronegativities.
Electronegativity is measured on a variety of scales, the most common being the Pauling scale. Created by chemist Linus Pauling, it assigns 4.0 to fluorine (the highest) and 0.7 to francium (the lowest).
Non-polar covalent bonds occur when there is equal or near-equal sharing of electrons between the two bonded atoms. This should make sense because covalent bonds are the sharing of electrons between two atoms. Molecules such as Cl2, H2 and F2 are good examples. Typically, a difference in electronegativity between 0.0 and 0.4 indicates a non-polar covalent bond.
Polar covalent bonds occur when there is unequal sharing of the electrons between the atoms. Molecules such as NH3 and H2O are examples of this. The typical rule is that bonds with an electronegativity difference between 0.5 and 1.7 are considered polar. The electrons are still being shared between two atoms, but one atom attracts the electrons more than the other.
Ionic bonding occurs when there is a complete transfer of the electrons in the bond. This type of bonding does not lead to the formation of molecules, but rather consists of a stacking of a great many ions, such that an overall neutral lattice is formed. Substances such as NaCl and MgCl2 are examples. Generally, electronegativity differences of 1.8 or greater lead to ionic bonding. The electronegativity difference is so great that one atom can attract the electrons enough to "take" them from the other atom.
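The three cutoffs above can be turned into a small classifier. The Pauling electronegativities below are standard tabulated values, but the function and its cutoff handling are our own illustration (and the cutoffs themselves are conventions, not sharp physical boundaries):

```python
# Sketch: classify a bond from the electronegativity difference,
# using the conventional cutoffs quoted in the text (0.4 and 1.8).

# Pauling electronegativities for a few elements:
EN = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "F": 3.98,
      "Na": 0.93, "Mg": 1.31, "Cl": 3.16}

def bond_type(a, b):
    """Classify the bond between elements a and b by their EN difference."""
    diff = abs(EN[a] - EN[b])
    if diff <= 0.4:
        return "non-polar covalent"
    if diff < 1.8:
        return "polar covalent"
    return "ionic"

print(bond_type("H", "H"))    # non-polar covalent
print(bond_type("H", "O"))    # polar covalent
print(bond_type("Na", "Cl"))  # ionic
```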
When drawing diagrams of bonds, we indicate covalent bonds with a line, and we may mark the partial charges with the symbols δ+ and δ-. Consider hydrogen fluoride (HF) as an example.
The δ+ goes over the less electronegative atom. In hydrogen fluoride, the fluorine attracts the electrons in the covalent bond more than the hydrogen does. Fluorine therefore carries a slight negative charge (δ-), and hydrogen a slight positive charge (δ+). Overall, hydrogen fluoride is neutral.
What are ions?
Ions are atoms or molecules which are electrically charged. Cations are positively charged and anions carry a negative charge. Ions form when atoms gain or lose electrons. Since electrons are negatively charged, an atom that loses one or more electrons will become positively charged; an atom that gains one or more electrons becomes negatively charged.
Description of Ionic Bonding
Ionic bonding is the attraction between positively- and negatively-charged ions. These oppositely charged ions attract each other to form ionic networks (or lattices). Electrostatics explains why this happens: opposite charges attract and like charges repel. When many ions attract each other, they form large, ordered, crystal lattices in which each ion is surrounded by ions of the opposite charge. Generally, when metals react with non-metals, electrons are transferred from the metals to the non-metals. The metals form positively-charged ions and the non-metals form negatively-charged ions. The smallest unit of an ionic compound is the formula unit, but this unit merely reflects the ratio of ions that leads to neutrality of the whole crystal, e.g. NaCl or MgCl2. One cannot distinguish individual NaCl or MgCl2 molecules in the structure.
It is however possible that the stacking consists of molecular ions like NH4+ and NO3- in ammonium nitrate. In such structures the ions are charged molecules rather than charged atoms.
Example ionic compounds: sodium chloride (NaCl), potassium nitrate (KNO3).
Ionically bonded substances typically have the following characteristics.
- High melting point (solid at room temperature)
- Hard but brittle (can shatter)
- Many dissolve in water
- Conductors of electricity when dissolved or melted
In general, the forces keeping the lattice together depend on the product of the charges of the ions it consists of. Comparing, for example, NaCl ((+1)×(−1)) with MgO ((+2)×(−2)) shows that magnesium oxide is held together much more strongly, roughly four times, than sodium chloride. This is why sodium chloride has a much lower melting point and also dissolves much more easily in a solvent like water than magnesium oxide does.
Ionic bonding occurs when metals and non-metals chemically react. As a result of its low ionization energy, a metal atom is not destabilized very much if it loses electrons and becomes positively charged, leaving a complete valence shell. As its electron affinity is rather large, a non-metal is stabilized rather strongly by gaining electrons to complete its valence shell and become negatively charged. When metals and non-metals react, the metals lose electrons by transferring them to the non-metals, which gain them. The total process, a small loss plus a large gain, leads to a net lowering of the energy. Consequently, ions are formed, which instantly attract each other, leading to ionic bonding.
For instance, in the reaction of Na (sodium) and Cl (chlorine), each Cl atom takes one electron from a Na atom. Therefore each Na becomes a Na+ cation and each Cl atom becomes a Cl- anion. Due to their opposite charges, they attract each other and are joined by millions and millions of other ions to form an ionic lattice. The lattice energy that results from this massive collective stacking further stabilizes the new compound. The formula (ratio of positive to negative ions) in the lattice is NaCl, i.e. there are equal numbers of positive and negative charges ensuring neutrality.
The charges must balance because otherwise the repulsion between the majority charges would become prohibitive. In the case of magnesium chloride, the magnesium atom gives up two electrons to become stable. Note that it is in the second group, so it has two valence electrons. The chlorine atom can only accept one electron, so there must be two chlorine ions for each magnesium ion. Therefore, the formula for magnesium chloride is MgCl2. If magnesium oxide were forming, the formula would be MgO because oxygen can accept both of magnesium's electrons.
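The charge-balancing rule above (swap the magnitudes of the two charges to get the subscripts, then reduce to the simplest ratio) can be sketched in a few lines of Python. `ionic_formula_subscripts` is a hypothetical helper written for this illustration, not part of any chemistry library:

```python
from math import gcd

def ionic_formula_subscripts(cation_charge, anion_charge):
    """Smallest whole-number counts of cations and anions
    that give a neutral formula unit."""
    cations = abs(anion_charge)   # each anion's charge must be cancelled
    anions = abs(cation_charge)
    d = gcd(cations, anions)      # reduce to the simplest ratio
    return cations // d, anions // d

print(ionic_formula_subscripts(+2, -1))  # Mg2+ and Cl- -> (1, 2), MgCl2
print(ionic_formula_subscripts(+2, -2))  # Mg2+ and O2- -> (1, 1), MgO
print(ionic_formula_subscripts(+3, -2))  # Al3+ and O2- -> (2, 3), Al2O3
```

The `gcd` reduction is what turns Mg2+ with O2- into MgO rather than Mg2O2.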
It should also be noted that some atoms can form more than one ion. This usually happens with the transition metals. For instance Fe (iron) can become Fe2+ (called iron(II) or -by an older name- ferrous). Fe can also become Fe3+ (called iron(III) or -sometimes still- ferric).
Ionic bonding typically occurs in reactions between a metal and non-metal, but there are also certain molecules called polyatomic ions that undergo ionic bonding. Within these polyatomic ions, there can be covalent (or polar) bonding, but as a unit it undergoes ionic bonding. There are countless polyatomic ions, but you should be familiar with the most common ones. You would be well advised to memorize these ions.
Covalent bonds create molecules, which can be represented by a molecular formula. For a chemical such as the simple sugar glucose (C6H12O6), the ratios of atoms have a common multiple, and thus the empirical formula is CH2O. Note that two molecules with the same empirical formula are not necessarily the same compound.
Formation of Covalent Bonds
Covalent bonds form between two atoms which have incomplete octets — that is, their outermost shells have fewer than eight electrons. They can share their electrons in a covalent bond. The simplest example is water (H2O). Oxygen has six valence electrons (and needs eight) and the hydrogens have one electron each (and need two). The oxygen shares two of its electrons with the hydrogens, and the hydrogens share their electrons with the oxygen. The result is a covalent bond between the oxygen and each hydrogen. The oxygen has a complete octet and the hydrogens have the two electrons they each need.
When atoms move closer, their orbitals change shape, releasing energy. However, there is a limit to how close the atoms can get: too close, and the nuclei repel each other.
One way to think of this is a ball rolling down into a valley. It will settle at the lowest point. As a result of this potential energy "valley", there is a specific bond length for each type of bond. Also, there is a specific amount of energy, measured in kilojoules per mole (kJ/mol) that is required to break the bonds in one mole of the substance. Stronger bonds have a shorter bond length and a greater bond energy.
The Valence Bond Model
One useful model of covalent bonding is called the Valence Bond model. It states that covalent bonds form when atoms share electrons with each other in order to complete their valence (outer) electron shells. They are mainly formed between non-metals.
An example of a covalently bonded substance is hydrogen gas (H2). A hydrogen atom on its own has one electron—it needs two to complete its valence shell. When two hydrogen atoms bond, each one shares its electron with the other so that the electrons move about both atoms instead of just one. Both atoms now have access to two electrons: they become a stable H2 molecule joined by a single covalent bond.
Double and Triple Bonds
Covalent bonds can also form between other non-metals, for example chlorine. A chlorine atom has 7 electrons in its valence shell—it needs 8 to complete it. Two chlorine atoms can share 1 electron each to form a single covalent bond. They become a Cl2 molecule.
Oxygen can also form covalent bonds, however, it needs a further 2 electrons to complete its valence shell (it has 6). Two oxygen atoms must share 2 electrons each to complete each other's shells, making a total of 4 shared electrons. Because twice as many electrons are shared, this is called a 'double covalent bond'. Double bonds are much stronger than single bonds, so the bond length is shorter and the bond energy is higher.
Furthermore, nitrogen has 5 valence electrons (it needs a further 3). Two nitrogen atoms can share 3 electrons each to make a N2 molecule joined by a 'triple covalent bond'. Triple bonds are stronger than double bonds. They have the shortest bond lengths and highest bond energies.
Electron Sharing and Orbitals
Carbon, contrary to the trend, does not share four electrons to make a quadruple bond. The reason for this is that the fourth pair of electrons in carbon cannot physically move close enough to be shared. The valence bond model explains this by considering the orbitals involved.
Recall that electrons orbit the nucleus within a cloud of electron density (orbitals). The valence bond model works on the principle that orbitals on different atoms must overlap to form a bond. There are several different ways that the orbitals can overlap, forming several distinct kinds of covalent bonds.
The Sigma Bond
The first and simplest kind of overlap is when two s orbitals come together. It is called a sigma bond (sigma, or σ, is the Greek equivalent of 's'). Sigma bonds can also form between two p orbitals that lie pointing towards each other. Whenever you see a single covalent bond, it exists as a sigma bond. When two atoms are joined by a sigma bond, they are held close to each other, but they are free to rotate like beads on a string.
The Pi Bond
The second, and equally important kind of overlap is between two parallel p orbitals. Instead of overlapping head-to-head (as in the sigma bond), they join side-to-side, forming two areas of electron density above and below the molecule. This type of overlap is referred to as a pi (π, from the Greek equivalent of p) bond. Whenever you see a double or triple covalent bond, it exists as one sigma bond and one or two pi bonds. Due to the side-by-side overlap of a pi bond, there is no way the atoms can twist around each other as in a sigma bond. Pi bonds give the molecule a rigid shape.
Pi bonds are weaker than sigma bonds since there is less overlap. Thus, two single bonds are stronger than a double bond, and more energy is needed to break two single bonds than a single double bond.
Consider a molecule of methane: a carbon atom attached to four hydrogen atoms. Each atom is satisfying the octet rule, and each bond is a single covalent bond.
Now look at the electron configuration of carbon: 1s²2s²2p². In its valence shell, it has two s electrons and two p electrons. It would not be possible for the four electrons to make equal bonds with the four hydrogen atoms (each of which has one s electron). We know, by measuring bond length and bond energy, that the four bonds in methane are equal, yet carbon has electrons in two different orbitals, which should overlap with the hydrogen 1s orbital in different ways.
To solve the problem, hybridization occurs. Instead of one s orbital and three p orbitals, the orbitals mix to form four identical orbitals, each with 25% s character and 75% p character. These hybrid orbitals are called sp3 orbitals. Observe:
Now these orbitals can overlap with hydrogen 1s orbitals to form four equal bonds. Hybridization may also involve d orbitals in atoms that have them, allowing up to sp3d2 hybridization.
Metallic bonds occur among metal atoms. Whereas ionic bonds join metals to non-metals, metallic bonding joins a bulk of metal atoms. A sheet of aluminum foil and a copper wire are both places where you can see metallic bonding in action.
When metallic bonds form, the s and p electrons delocalize. Instead of orbiting their atoms, they form a "sea of electrons" surrounding the positive metal ions. The electrons are free to move throughout the resulting network. The delocalized nature of the electrons explains a number of unique characteristics of metals:
- Metals are good conductors of electricity: the sea of electrons is free to flow, allowing electrical currents.
- Metals are ductile (able to be drawn into wires) and malleable (able to be hammered into thin sheets): as the metal is deformed, local bonds are broken but quickly reformed in a new position.
- Metals are gray and shiny: photons (particles of light) cannot penetrate the metal, so they bounce off the sea of electrons.
- Gold is yellow and copper is reddish-brown: there is actually an upper limit to the frequency of light that is reflected. It is too high to be visible in most metals, but not in gold and copper.
- Metals have very high melting and boiling points: metallic bonding is very strong, so the atoms are reluctant to break apart into a liquid or gas.
Metallic bonds can occur between different elements. A mixture of two or more metals is called an alloy. Depending on the relative sizes of the atoms being mixed, two different kinds of alloys can form: substitutional alloys, in which atoms of similar size replace one another in the lattice, and interstitial alloys, in which much smaller atoms fill the gaps between larger ones.
The resulting mixture will have a combination of the properties of both metals involved.
Covalent molecules are bonded to other atoms by electron pairs. Being mutually negatively charged, the electron pairs repel the other electron pairs and attempt to move as far apart as possible in order to stabilize the molecule. This repulsion causes covalent molecules to have distinctive shapes, known as the molecule's molecular geometry. There are several different methods of determining molecular geometry. A scientific model, called the VSEPR (valence shell electron pair repulsion) model can be used to qualitatively predict the shapes of molecules. Within this model, the AXE method is used in determining molecular geometry by counting the numbers of electrons and bonds related to the center atom(s) of the molecule.
The VSEPR model is by no means a perfect model of molecular shape! It is simply a system which explains the known shapes of molecular geometry as discovered by experiment. This can allow us to predict the geometry of similar molecules, making it a fairly useful model. Modern methods of quantitatively calculating the most stable (lowest energy) shapes of molecules can take several hours of supercomputer time, and is the domain of computational chemistry.
Table of Geometries
|Bonded Atoms||No Lone Pairs||One Lone Pair||Two Lone Pairs|
|3||Trigonal Planar||Trigonal Pyramidal||T-Shaped|
|4||Tetrahedral||See-saw||Square Planar|
|5||Trigonal Bipyramidal||Square Pyramidal||
The hybridization is determined by how many "things" are attached to the central atom. Those "things" can be other atoms or non-bonding pairs of electrons. The number of groups is how many atoms or electron pairs are bonded to the central atom. For example, methane (CH4) is tetrahedral-shaped because the carbon is attached to four hydrogens. Ammonia (NH3) is not trigonal planar, however. It is trigonal pyramidal because it is attached to four "things": the three hydrogens and a non-bonding pair of electrons (to fulfill nitrogen's octet).
Consider a simple covalent molecule, methane (CH4). Four hydrogen atoms surround a carbon atom in three-dimensional space. Each CH bond consists of one pair of electrons, and these pairs try to move as far away from each other as possible (due to electrostatic repulsion). You might think this would lead to a flat shape, with each hydrogen atom 90° apart. However, in three dimensions, there is a more efficient arrangement of the hydrogen atoms. If each hydrogen atom is at a corner of a tetrahedron centered around the carbon atom, they are separated by about cos-1(-1/3) ≈ 109.5°—the maximum possible.
To align four orbitals in this tetrahedral shape requires the reformation of one s and three p orbitals into an sp3 orbital.
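As a quick check of the geometry, the ideal angles quoted in this chapter follow directly from the arccosine expressions for equally spaced electron groups. A short Python sketch, not tied to any chemistry library:

```python
import math

# Ideal angles between n equally spaced electron groups:
tetrahedral = math.degrees(math.acos(-1/3))      # 4 groups
trigonal_planar = math.degrees(math.acos(-1/2))  # 3 groups
linear = math.degrees(math.acos(-1))             # 2 groups

print(round(tetrahedral, 1))      # 109.5
print(round(trigonal_planar, 1))  # 120.0
print(round(linear, 1))           # 180.0
```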
Lone Electron Pairs
The VSEPR model treats lone electron pairs in a similar way to bonding electrons. In ammonia (NH3), for example, there are three hydrogen atoms and one lone pair of electrons surrounding the central nitrogen atom. Because there are four groups, the electron pairs in ammonia adopt a tetrahedral arrangement, but unlike in methane, the angle between the hydrogen atoms is slightly smaller, 107.3°. This can be explained by theorizing that lone electron pairs take up more space physically than bonding pairs. This is a reasonable theory: in a bond, the electron pair is distributed over two atoms, whereas a lone pair is located on only one. Because it is bigger, the lone pair forces the other electron pairs together.
Testing this assumption with water provides further evidence. In water (H2O) there are two hydrogen atoms and two lone pairs, again making four groups in total. The electron pairs repel each other into a tetrahedral shape. The angle between the hydrogen atoms is 104.5°, which is what we expect from our model. The two lone pairs both push the bonds closer together, giving a smaller angle than in ammonia.
Linear and Planar Shapes
In some molecules, there are less than four pairs of valence electrons. This occurs in electron deficient atoms such as boron and beryllium, which don't conform to the octet rule (they can have 6 and 4 valence electrons respectively). In boron trifluoride (BF3), there are only three electron pairs which repel each other to form a flat plane. Each fluorine atom is separated by cos-1(-1/2) = 120°. A different set of hybrid orbitals is formed in this molecule: the 2s and two 2p orbitals combine to form three sp2 hybrid orbitals. The remaining p orbital is empty and sits above and below the plane of the molecule.
Beryllium, on the other hand, forms only two pairs of valence electrons. These repel each other at cos-1(-1) = 180°, forming a linear molecule. An example is beryllium chloride, which has two chlorine atoms situated on opposite sides of a beryllium atom. This time, one 2s and one 2p orbital combine to form two sp hybrid orbitals. The two remaining p orbitals sit above and to the side of the beryllium atom (they are empty).
Bent vs. Linear
Some molecules have a bent shape, others a linear shape. Both kinds have two groups attached to the central atom, so the difference lies in how many non-bonding pairs the central atom has.
Take a look at sulfur dioxide (SO2) and carbon dioxide (CO2). Both have two oxygen atoms attached with double covalent bonds. Carbon dioxide is linear, and sulfur dioxide is bent. The difference is in their valence shells. Carbon has four valence electrons, sulfur has six. When they bond, carbon has no non-bonding pairs, but sulfur has one.
Five and Six Groups
Recall that some elements, especially sulfur and phosphorus, can bond with five or six groups. The hybridization is sp3d or sp3d2, with a trigonal bipyramidal or octahedral shape respectively. When there are non-bonding pairs, other shapes can arise (see the above chart).
How The Shapes Look
The yellow groups are non-bonding electron pairs. The white groups are bonded atoms, and the pink is the central atom. This is referred to as the AXE method; A is the central atom, X's are bonded atoms, and E's are non-bonding electron pairs.
Covalent bonds can be polar or non-polar, and so can the overall compound depending on its shape. When a bond is polar, it creates a dipole, a pair of charges (one positive and one negative). If they are arranged in a symmetrical shape, so that they point in opposite directions, they will cancel each other. For example, since the four hydrogens in methane (CH4) are facing away from each other, there is no overall dipole and the molecule is non-polar. In ammonia (NH3), however, there is a negative dipole at the nitrogen, due to the asymmetry caused by the non-bonding electron pair. The polarity of a compound determines its intermolecular bonding abilities.
Polar and Non-Polar Shapes
When a molecule has a linear, trigonal planar, tetrahedral, trigonal bipyramidal, or octahedral shape, it will be non-polar. These are the shapes with no non-bonding lone pairs (e.g. methane, CH4). But if some bonds are polar while others are not, there will be an overall dipole, and the molecule will be polar (e.g. chloroform, CHCl3).
The other shapes (those with non-bonding pairs) will be polar (e.g. water, H2O), unless, of course, all the covalent bonds are non-polar, in which case there would be no dipoles to begin with.
When two polar molecules are near each other, they will arrange themselves so that the negative and positive sides line up. There will be an attractive force holding the two molecules together, but it is not nearly as strong a force as the intramolecular bonds. This is how many types of molecules bond together to form large solids or liquids.
Certain chemicals with hydrogen in their chemical formula have a special type of intermolecular bond, called hydrogen bonds. Hydrogen bonds will occur when a hydrogen atom is attached to an oxygen, nitrogen, or fluorine atom. This is because there is a large electronegativity difference between hydrogen and fluorine, oxygen, and nitrogen. Thus, molecules such as HF, H2O, and NH3 are extremely polar molecules with very strong dipole-dipole forces. As a result of the high electronegativities of fluorine, oxygen, and nitrogen, these elements will pull the electrons almost completely away from the hydrogen. The hydrogen becomes a bare proton sticking out from the molecule, and it will be strongly attracted to the negative side of any other polar molecules. Hydrogen bonding is an extreme type of dipole-dipole bonding. These forces are weaker than intramolecular bonds, but are much stronger than other intermolecular forces, causing these compounds to have high boiling points.
Silicon dioxide forms a covalent network. Unlike carbon dioxide (with double bonds), silicon dioxide forms only single covalent bonds. As a result, the atoms bond covalently into a large network: these bonds are very strong (being covalent) and there is no distinction between individual molecules and the overall network. Covalent networks also hold diamonds together. Diamonds are made of nothing but carbon, and so is soot. Unlike soot, diamonds have covalent networks, making them very hard and crystalline.
Van der Waals forces
Van der Waals, or London dispersion forces are caused by temporary dipoles created when electron locations are lopsided. The electrons are constantly orbiting the nucleus, and by chance they could end up close together. The uneven concentration of electrons could make one side of the atom more negatively-charged than the other, creating a temporary dipole. As there are more electrons in an atom, and the shells are farther away from the nucleus, these forces become stronger.
Van der Waals forces explain how nitrogen can be liquified. Nitrogen gas is diatomic; its equation is N2. Since both atoms have the same electronegativity, there is no dipole and the molecule is non-polar. If there are no dipoles, what would make the nitrogen atoms stick together to form a liquid? Van der Waals forces are the answer. They allow otherwise non-polar molecules to have attractive forces. These are by far the weakest forces that hold molecules together.
Melting and Boiling Points
When comparing two substances, their melting and boiling points may be questioned. To determine which substance has the higher melting or boiling point, you must decide which one has the strongest intermolecular force. Metallic bonds, ionic bonds, and covalent networks are very strong, as they are actually intramolecular forces. These substances have the highest melting and boiling points because they only separate into individual molecules when the powerful bonds have been broken. Breaking these intramolecular forces requires great amounts of heat energy.
Substances with hydrogen bonding, an intermolecular force, will have much higher melting and boiling points than those that have only ordinary dipole-dipole intermolecular forces. Non-polar molecules have the lowest melting and boiling points, because they are held together by the weak van der Waals forces.
If you need to compare the boiling points of two metals, the metal with the larger atomic radius will have weaker bonding, due to the lower concentration of charge. When comparing boiling points of non-polar substances, like the noble gases, the one with the largest radius will have the highest boiling point, because it has the most potential for van der Waals forces.
Ionic compounds can be compared using Coulomb's Law. Look for substances with high ionic charges and low ionic radii.
Some compounds have common names, like water for H2O. However, there are thousands of other compounds that are uncommon or have multiple names. Also, the common name is usually not recognized internationally. What looks like water to you might look like agua or vatten to someone else. To allow chemists to communicate without confusion, there are naming conventions to determine the systematic name of a chemical.
Naming Ions and Ionic Compounds
Ions are atoms that have lost or gained electrons. Note that in a polyatomic ion, the ion itself is held together by covalent bonds. Monoatomic cations (positive) are named the same way as their element, and they come first when naming a compound. Monoatomic anions (negative) have the suffix -ide and come at the end of the compound's name.
- Examples of ionic compounds
- NaCl - Sodium chloride
- MgCl2 - Magnesium chloride
- Ca3N2 - Calcium nitride
Polyatomic ions have special names. Many of them contain oxygen and are called oxyanions. When different oxyanions are made of the same element but have a different number of oxygen atoms, prefixes and suffixes are used to tell them apart. The chlorine family of ions is an excellent example.
The -ate suffix is used on the most common oxyanion (like sulfate SO42- or nitrate NO3-). The -ite suffix is used on the oxyanion with one less oxygen (like sulfite SO32- or nitrite NO2-). Sometimes there can be a hypo- prefix, meaning one less oxygen than the -ite. There is also a per- prefix, meaning one more oxygen than the -ate.
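The hypo-/-ite/-ate/per- pattern is regular enough to express as a small function. This is a sketch for illustration only (`oxyanion_name` is not a standard routine, and it assumes you already know which oxygen count takes the -ate suffix):

```python
def oxyanion_name(base, n_oxygen, n_most_common):
    """Name an oxyanion from its oxygen count, relative to the
    most common ion of that element (which takes the -ate suffix)."""
    delta = n_oxygen - n_most_common
    if delta == 1:
        return "per" + base + "ate"   # one more oxygen than the -ate
    if delta == 0:
        return base + "ate"           # the most common oxyanion
    if delta == -1:
        return base + "ite"           # one less oxygen than the -ate
    if delta == -2:
        return "hypo" + base + "ite"  # one less oxygen than the -ite
    raise ValueError("outside the hypo-/-ite/-ate/per- range")

# Chlorine family: ClO3- (3 oxygens) is the most common, chlorate.
for n in (1, 2, 3, 4):
    print(f"ClO{n}-", oxyanion_name("chlor", n, 3))
```

Running it reproduces the chlorine family: hypochlorite, chlorite, chlorate, perchlorate.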
Occasionally, you will see a bi- prefix. This is an older prefix; it means the compound can both take up and lose a proton. It is often replaced with the word hydrogen. In either case, the oxyanion will have a hydrogen in it, decreasing its charge by one. For instance, there is carbonate (CO32-) and bicarbonate or hydrogen carbonate (HCO3-).
One last prefix you may find is thio-. It means an oxygen has been replaced with a sulfur within the oxyanion. Cyanate is OCN-, and thiocyanate is SCN-.
- Examples of polyatomic ions
- NH4Cl - Ammonium chloride
- K(HCO3) - Potassium hydrogen carbonate
- AgNO3 - Silver nitrate
- CuSO3 - Copper (II) sulfite
In the last example, copper had a roman numeral 2 after its name because the transition metals can have more than one charge. The charge on the ion must be known, so it is written out for ions that have more than one common charge. Silver always has a charge of 1+, so it isn't necessary (but not wrong) to name its charge. Zinc always has a charge of 2+, so you don't have to name its charge either. Aluminum will always have a charge of 3+. All other metals (except the Group 1 and 2 elements) must have roman numerals to show their charge.
Common polyatomic ions that you should know are listed in the following table
Further explanation of the roman numerals is in order. Many atoms (especially the transition metals) are capable of ionizing in more than one way. The name of an ionic compound must make it very clear what the exact chemical formula is. If you wrote "copper chloride", it could be CuCl or CuCl2 because copper can lose one or two electrons when it forms an ion. The charge must be balanced, so there would be one or two chloride ions to accept the electrons. To be correct, you must write "copper(II) chloride" if you want CuCl2 and "copper(I) chloride" if you want CuCl. Keep in mind that the roman numerals refer to the charge of the cation, not how many anions are attached.
Common metal ions are listed below and should be learned:
- Lead(II)/Plumbous (most common) - Pb2+
- Mercury(I) (note: mercury(I) is a polyatomic ion) - Hg22+
There are two systems of naming molecular compounds. The first uses prefixes to indicate the number of atoms of an element that are in the compound. If the substance is binary (containing only two elements), the suffix -ide is added to the second element. Thus water is dihydrogen monoxide. A prefix is not necessary for the first element if there is only one, so SF6 is 'sulfur hexafluoride'. The prefix system is used when both elements are non-metallic.
If the last letter of the prefix is an a and the first letter of the element is a vowel, the a is dropped. That makes V2O5 divanadium pentoxide (instead of pentaoxide). Similar dropping occurs with mono- and elements beginning with o, as in the case of monoxide. This does not, however, happen with di- and tri-: a molecule containing three iodine atoms would be named triiodide.
The second system, the stock system, uses oxidation numbers to represent how the electrons are distributed through the compound. This is essentially the roman numeral system that has already been explained, but it applies to non-ionic compounds as well. The most electronegative component of the molecule has a negative oxidation number that depends on the number of pairs of electrons it shares. The less electronegative part is assigned a positive number. In the stock system, only the cation's number is written, and in Roman numerals. The stock system is used when there is a metallic element in the compound. In the case of V2O5, it could also be called vanadium(V) oxide. Knowing that oxygen's charge is always -2, we can determine that there are five oxygens and two vanadiums if we were given the name without the formula.
If an acid is a binary compound, it is named as hydro[element]ic acid. If it contains a polyatomic ion, then it is named [ion name]ic acid if the ion ends in -ate. If the ion ends in -ite then the acid will end in -ous. These examples should help.
- Examples of acid names
- HCl - Hydrochloric acid
- HClO - Hypochlorous acid
- HClO2 - Chlorous acid
- HClO3 - Chloric acid
- HClO4 - Perchloric acid
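These naming rules can be summarized in a short sketch; `acid_name` is an illustrative helper (not a standard routine) that maps an anion's suffix to the acid name, given the anion name and its root:

```python
def acid_name(anion, base):
    """Acid name from the anion's suffix, per the rules above.
    'anion' is the full anion name, 'base' its root (e.g. 'chlor')."""
    if anion.endswith("ide"):
        return "hydro" + base + "ic acid"      # binary acid
    if anion.endswith("ate"):
        prefix = "per" if anion.startswith("per") else ""
        return prefix + base + "ic acid"       # -ate -> -ic
    if anion.endswith("ite"):
        prefix = "hypo" if anion.startswith("hypo") else ""
        return prefix + base + "ous acid"      # -ite -> -ous

print(acid_name("chloride", "chlor"))      # hydrochloric acid
print(acid_name("chlorate", "chlor"))      # chloric acid
print(acid_name("hypochlorite", "chlor"))  # hypochlorous acid
```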
Formulas and Numbers
Calculating Formula Masses
The calculation of a compound's formula mass (the mass of its molecule or formula unit) is straightforward. Simply add the individual mass of each atom in the compound (found on the periodic table). For example, the formula mass of glucose (C6H12O6) is 180 amu.
Molar masses are just as easy to calculate. The molar mass is equal to the formula mass, except that the unit is grams per mole instead of amu.
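Adding atomic masses is a mechanical task well suited to a few lines of code. The sketch below uses approximate atomic masses and a hypothetical `formula_mass` helper:

```python
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999}  # g/mol, approximate

def formula_mass(composition):
    """Sum of atomic masses; composition maps symbol -> atom count."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

glucose = {"C": 6, "H": 12, "O": 6}
print(round(formula_mass(glucose)))  # 180
```

The same number serves as the formula mass in amu or the molar mass in g/mol.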
Calculating Percentage Composition
Percentage composition is the relative mass of one substance in a compound compared to the whole. For example, in methane (CH4), the percentage mass of hydrogen is 25% because hydrogen makes up a total of 4 amu out of 16 amu overall.
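The methane figure can be reproduced with a line of arithmetic (using the integer masses from the example):

```python
# Mass percent of hydrogen in methane, CH4
mass_C, mass_H = 12, 1          # approximate atomic masses (amu)
total = mass_C + 4 * mass_H     # 16 amu for the whole molecule
percent_H = 4 * mass_H / total * 100
print(percent_H)  # 25.0
```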
Using Percentage Composition
Percentage composition can be used to find the empirical formula of a compound, which shows the ratios of elements in the compound. However, this is not the same as the molecular formula. For example, many sugars have the empirical formula CH2O, which could correspond to a molecular formula of CH2O, C2H4O2, C6H12O6, etc.
- To find the empirical formula from percentage composition, follow these procedures for each element.
- Convert from percentage to grams (for simplicity, assume a 100 g sample).
- Divide by the element's molar mass to find moles.
- Simplify to lowest whole-number ratio.
For example, a compound is composed of 75% carbon and 25% hydrogen by mass. Find the empirical formula.
- 75g C / (12 g/mol C) = 6.25 mol C
- 25g H / (1 g/mol H) = 25 mol H
- 6.25 mol C / 6.25 = 1 mol C
- 25 mol H / 6.25 = 4 mol H
Thus the empirical formula is CH4.
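The three-step procedure (grams, then moles, then the simplest ratio) translates directly into code. `empirical_ratios` is a hypothetical helper written for this sketch, and the final rounding assumes the ratios come out near whole numbers:

```python
def empirical_ratios(percents, molar_masses):
    """Empirical-formula subscripts from percentage composition.
    Assumes a 100 g sample, so percentages become grams."""
    moles = {el: percents[el] / molar_masses[el] for el in percents}
    smallest = min(moles.values())
    # Divide through by the smallest mole count and round.
    return {el: round(n / smallest) for el, n in moles.items()}

# 75% carbon, 25% hydrogen by mass:
print(empirical_ratios({"C": 75, "H": 25}, {"C": 12, "H": 1}))
# {'C': 1, 'H': 4}, i.e. CH4
```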
Calculating Molecular Formula
If you find the empirical formula of a compound and its molar/molecular mass, then you can find its exact molecular formula. Remember that the molecular formula is always a whole-number multiple of the empirical formula. For example, a compound with the empirical formula HO has a molecular mass of 34.0 amu. Since HO would only be 17.0 amu, which is half of 34.0, the molecular formula must be H2O2.
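Finding the whole-number multiple is a single division plus a sanity check; `molecular_multiplier` is an illustrative name, not a library function:

```python
def molecular_multiplier(empirical_mass, molecular_mass):
    """How many empirical-formula units fit in one molecule."""
    k = molecular_mass / empirical_mass
    # The ratio should be (nearly) a whole number.
    assert abs(k - round(k)) < 0.05, "masses are inconsistent"
    return round(k)

print(molecular_multiplier(17.0, 34.0))  # 2 -> HO becomes H2O2
```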
- Exercise for the reader
An unknown substance must be identified. Lab analysis has found that the substance is composed of 80% fluorine and 20% nitrogen with a molecular mass of 71 amu. What is the empirical formula? What is the molecular formula?
The word stoichiometry derives from two Greek words: stoicheion (meaning "element") and metron (meaning "measure"). Stoichiometry deals with calculations about the masses (sometimes volumes) of reactants and products involved in a chemical reaction. It is a very mathematical part of chemistry, so be prepared for lots of calculator use.
Jeremias Benjamin Richter (1762-1807) was the first to lay down the principles of stoichiometry. In 1792 he wrote: "Die Stöchyometrie (Stöchyometria) ist die Wissenschaft, die quantitativen oder Massenverhältnisse zu messen, in welchen die chymischen Elemente gegen einander stehen." [Stoichiometry is the science of measuring the quantitative proportions or mass ratios in which chemical elements stand to one another.]
Your Tool: Dimensional Analysis
Luckily, almost all of stoichiometry can be solved relatively easily using dimensional analysis. Dimensional analysis is just using units, instead of numbers or variables, to do math, usually to see how they cancel out. For instance, it is easy to see that:
It is this principle that will guide you through solving most of the stoichiometry problems (chemical reaction problems) you will see in General Chemistry. Before you attempt to solve a problem, ask yourself: what do I have now? where am I going? As long as you know how many (units) per (other units), this will make stoichiometry significantly easier.
Moles to Mass
How heavy is 1.5 mol of lead? How many moles in 22.34 g of water? Calculating the mass of a sample from the number of moles it contains is quite simple. We use the molar mass (mass of one mole) of the substance to convert between mass and moles. When writing calculations, we denote the molar mass of a substance by an upper case "M" (e.g. M(Ne) means "the molar mass of neon"). As always, "n" stands for the number of moles and "m" indicates the mass of a substance. To find the solutions to the two questions we just asked, let's apply some dimensional analysis:
1.5 mol Pb × (207.2 g Pb / 1 mol Pb) = 310.8 g Pb
Can you see how the units cancel to give you the answer you want? All you needed to know was that you had 1.5 mol Pb (lead), and that 1 mol Pb weighs 207.2 grams. Thus, multiplying 1.5 mol Pb by 207.2 g Pb and dividing by 1 mol Pb gives you 310.8 g Pb, your answer.
Mass to Moles
But we had one more question: "How many moles are in 22.34 g of water?" This is just as easy:
Where did the 18 g H2O come from? We looked at the periodic table and simply added up the atomic masses of two hydrogens and an oxygen to get the molecular weight of water. This turned out to be 18, and since all the masses on the periodic table are given with respect to 1 mole, we knew that 1 mol of water weighed 18 grams. This gave us the relationship above, which is really just (again) watching units cancel out!
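As a quick sketch (not part of the original text), the two conversions above look like this in Python, using the molar masses from the text:

```python
# Convert between moles and mass with the molar mass M: m = n * M, n = m / M.
M_Pb = 207.2   # g/mol, lead (from the periodic table)
M_H2O = 18.0   # g/mol, water (2 * 1 for the hydrogens plus 16 for the oxygen)

# Moles to mass: how heavy is 1.5 mol of lead?
m_Pb = 1.5 * M_Pb                  # mol * (g/mol) -> g
print(round(m_Pb, 1), "g Pb")      # 310.8 g Pb

# Mass to moles: how many moles are in 22.34 g of water?
n_H2O = 22.34 / M_H2O              # g / (g/mol) -> mol
print(round(n_H2O, 2), "mol H2O")  # 1.24 mol H2O
```

The divisions and multiplications mirror the unit cancellation exactly: each conversion factor is written so the unwanted unit divides out.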
Calculating Molar Masses
Before we can do these types of calculations, we first have to know the molar mass. Fortunately, this is not difficult: the molar mass of an element is numerically equal to its atomic weight. A table of atomic weights can be used to find the molar masses of the elements (this information is often included in the periodic table). For example, the atomic weight of oxygen is 16.00 amu, so its molar mass is 16.00 g/mol.
For species with more than one element, we simply add up the atomic weights of each element to obtain the molar mass of the compound. For example, sulfur trioxide gas is made up of sulfur and oxygen, whose atomic weights are 32.06 and 16.00 respectively.
The procedure for more complex compounds is essentially the same. Aluminium carbonate, for example, contains aluminium, carbon, and oxygen. To find the molar mass, we have to be careful to find the total number of atoms of each element. Three carbonate ions each containing three oxygen atoms gives a total of nine oxygens. The atomic weights of aluminium and carbon are 26.98 and 12.01 respectively.
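The same bookkeeping can be done programmatically. Here is a minimal sketch (atomic weights copied from the examples above) that sums atomic weights over the atom counts of a formula:

```python
# Molar mass of a compound = sum over elements of (atomic weight * atom count).
ATOMIC_WEIGHT = {"S": 32.06, "O": 16.00, "Al": 26.98, "C": 12.01}  # g/mol

def molar_mass(atoms):
    """atoms: dict mapping element symbol to its count in the formula."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in atoms.items())

print(molar_mass({"S": 1, "O": 3}))           # SO3: about 80.06 g/mol
print(molar_mass({"Al": 2, "C": 3, "O": 9}))  # Al2(CO3)3: about 233.99 g/mol
```

Note the aluminium carbonate call already uses the total of nine oxygens (three carbonate ions, three oxygens each), just as the text describes.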
The empirical formula of a substance is the simplest ratio of the number of moles of each element in a compound. The empirical formula is ambiguous, e.g. the formula CH could represent CH, C2H2, C3H3 etc. These latter formulae are called molecular formulae. It follows that the molecular formula is always a whole number multiple of the empirical formula for a compound.
Calculating the empirical formula is easy if the relative amounts of each element in the compound are known. For example, if a sample contains 1.37 mol oxygen and 2.74 mol hydrogen, we can calculate the empirical formula. A good strategy to use is to divide all amounts given by the smallest non-integer amount, then multiply by whole numbers until the simplest ratio is found. We can make a table showing the successive ratios.
H: 2.74   O: 1.37   (divide both by 1.37 to get H: 2, O: 1)
The empirical formula of the compound is H2O.
Here's another example. A sample of piperonal contains 1.384 mol carbon, 1.033 mol hydrogen and 0.519 mol oxygen.
C: 1.384   H: 1.033   O: 0.519   (divide each by 0.519)
C: 2.666   H: 1.990   O: 1   (multiply each by 3 to get C: 8, H: 6, O: 3)
The empirical formula of piperonal is C8H6O3.
Converting from Masses
Often, we are given the relative composition by mass of a substance and asked to find the empirical formula. These masses must first be converted to moles using the techniques outlined above. For example, a sample of ethanol contains 52.1% carbon, 13.2% hydrogen, and 34.7% oxygen by mass. Hypothetically, 100g of this substance will contain 52.1 g carbon, 13.2 g hydrogen and 34.7 g oxygen. Dividing these by their respective molar masses gives the amount in moles of each element (as we learned above). These are 4.34 mol, 13.1 mol, and 2.17 mol respectively.
C: 4.34   H: 13.1   O: 2.17   (divide each by 2.17 to get C: 2, H: 6, O: 1)
The empirical formula of ethanol is C2H6O.
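The divide-by-smallest procedure is mechanical enough to script. Here is a sketch for the ethanol data above; the final rounding assumes the ratios come out close to whole numbers, so awkward values like 2.666 would still need the multiply-up step shown for piperonal:

```python
# Empirical formula from percent composition: treat the percentages as
# grams in a 100 g sample, convert to moles, divide by the smallest amount.
mass_percent = {"C": 52.1, "H": 13.2, "O": 34.7}
molar = {"C": 12.01, "H": 1.01, "O": 16.00}   # g/mol

moles = {el: mass_percent[el] / molar[el] for el in mass_percent}
smallest = min(moles.values())
subscripts = {el: round(n / smallest) for el, n in moles.items()}
print(subscripts)  # {'C': 2, 'H': 6, 'O': 1}, i.e. C2H6O
```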
As mentioned above, the molecular formula for a substance gives the actual number of atoms of each element in one molecule. This is always a whole number multiple of the empirical formula. To calculate the molecular formula from the empirical formula, we need to know the molar mass of the substance. For example, the empirical formula for benzene is CH, and its molar mass is 78.12 g/mol. Divide the actual molar mass by the mass of the empirical formula, 13.02 g/mol, to determine the multiple of the empirical formula, "n". The molecular formula equals the empirical formula multiplied by "n".
This shows that the molecular formula for benzene is 6 times the empirical formula of CH. The molecular formula for benzene is C6H6.
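As a sketch, the benzene calculation reduces to one division and one rounding:

```python
# Molecular formula = empirical formula * n, where
# n = (molar mass of compound) / (mass of one empirical formula unit).
empirical_unit = 12.01 + 1.01    # one CH unit weighs 13.02 g/mol
compound = 78.12                 # benzene, g/mol

n = round(compound / empirical_unit)
print(n)  # 6, so benzene is (CH) * 6 = C6H6
```

The same division answers the fluorine/nitrogen problem at the start of the chapter once its empirical formula is known.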
Solving Mass-Mass Equations
A typical mass-mass equation will give you an amount in grams and ask for another answer in grams.
To solve a mass-mass problem, follow these steps:
- Balance the equation if it is not already.
- Convert the given quantity to moles.
- Multiply by the molar ratio of the demanded substance over the given substance.
- Convert the demanded substance into grams.
For example, given the equation , find out how many grams of silver (Ag) will result from 43.0 grams of copper (Cu) reacting.
- Convert the given quantity to moles.
- Multiply by the molar ratio of the demanded substance and the given substance.
- Convert the demanded substance to grams.
|Notice how dimensional analysis applies to this technique. All units will cancel except for the desired one (grams of silver, in this case).|
To solve a stoichiometric problem, you need to know what you already have and what you want to find. Everything in between is basic algebra.
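The four steps can be followed in code. The balanced equation for the copper/silver example is not reproduced above, so the sketch below assumes the usual single-replacement reaction Cu + 2 AgNO3 → Cu(NO3)2 + 2 Ag, i.e. a 2:1 mole ratio of silver to copper:

```python
M_Cu = 63.55    # g/mol, copper
M_Ag = 107.87   # g/mol, silver

grams_Cu = 43.0                    # the given quantity
moles_Cu = grams_Cu / M_Cu         # step 2: convert given quantity to moles
moles_Ag = moles_Cu * 2 / 1        # step 3: molar ratio (2 mol Ag per 1 mol Cu)
grams_Ag = moles_Ag * M_Ag         # step 4: convert demanded substance to grams
print(round(grams_Ag, 1), "g Ag")  # about 146 g of silver
```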
- Key Terms
- Molar mass: mass (in grams) of one mole of a substance.
- Empirical formula: the simplest ratio of the number of moles of each element in a compound
- Molecular formula: the actual ratio of the number of moles of each element in a compound
In general, all you have to do is keep track of the units and how they cancel, and you will be on your way!
Chemical equations are a convenient, standardized system for describing chemical reactions. They contain the following information.
- The type of reactants consumed and products formed
- The relative amounts of reactants and products
- The electrical charges on ions
- The physical state of each species (e.g. solid, liquid)
- The reaction conditions (e.g. temperature, catalysts)
The final two points are optional and sometimes omitted.
Anatomy of an Equation
Hydrogen gas and chlorine gas will react vigorously to produce hydrogen chloride gas. The equation above illustrates this reaction. The reactants, hydrogen and chlorine, are written on the left and the product (hydrogen chloride) on the right. The large number 2 in front of HCl indicates that two molecules of HCl are produced for each molecule of hydrogen gas and each molecule of chlorine gas consumed. The 2 in subscript below H indicates that there are two hydrogen atoms in each molecule of hydrogen gas. Finally, the (g) symbols subscript to each species indicate that they are gases.
Species in a chemical reaction is a general term used to mean atoms, molecules or ions. A species can contain more than one chemical element (HCl, for example, contains hydrogen and chlorine). Each species in a chemical equation is written:
E is the chemical symbol for the element, x is the number of atoms of that element in the species, y is the charge (if it is an ion) and (s) is the physical state.
The symbols in parentheses (written in subscript after each species) indicate the physical state of each reactant or product. (In ACS style the state symbol is typeset at the baseline, without a size change.)
- (s) means solid
- (l) means liquid
- (g) means gas
- (aq) means aqueous solution (i.e. dissolved in water)
For example, ethyl alcohol would be written C2H6O(l) because each molecule contains 2 carbon, 6 hydrogen and 1 oxygen atom. A magnesium ion would be written Mg2+ because it has a double positive ("two plus") charge. Finally, an ammonium ion would be written NH4+ because each ion contains 1 nitrogen and 4 hydrogen atoms and has a charge of 1+.
The numbers in front of each species have a very important meaning—they indicate the relative amounts of the atoms that react. The number in front of each species is called a coefficient. In the above equation, for example, one H2 molecule reacts with one Cl2 molecule to produce two molecules of HCl. This can also be interpreted as moles (i.e. 1 mol H2 and 1 mol Cl2 produces 2 mol HCl).
It is important that the Law of Conservation of Mass is not violated. There must be the same number of each type of atom on either side of the equation. Coefficients are useful for keeping the same number of atoms on both sides:
If you count the atoms, there are four hydrogens and two oxygens on each side. The coefficients allow us to balance the equation; without them the equation would have the wrong number of atoms. Balancing equations is the topic of the next chapter.
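Conservation of mass can be checked mechanically by counting atoms on each side (coefficient times subscript). A small sketch for 2 H2 + O2 → 2 H2O:

```python
from collections import Counter

def atom_totals(side):
    """side: list of (coefficient, {element: subscript}) pairs."""
    totals = Counter()
    for coefficient, formula in side:
        for element, subscript in formula.items():
            totals[element] += coefficient * subscript
    return totals

reactants = [(2, {"H": 2}), (1, {"O": 2})]              # 2 H2 + O2
products = [(2, {"H": 2, "O": 1})]                      # 2 H2O
print(atom_totals(reactants) == atom_totals(products))  # True: balanced
```

Both sides tally four hydrogens and two oxygens, so the equation is balanced.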
Occasionally, other information about a chemical reaction will be supplied in an equation (such as temperature or other reaction conditions). This information is often written to the right of the equation or above the reaction arrow. A simple example would be the melting of ice.
Reactions commonly involve catalysts, which are substances that speed up a reaction without being consumed. Catalysts are often written over the arrow. A perfect example of a catalyzed reaction is photosynthesis. Inside plant cells, a substance called chlorophyll converts sunlight into food. The reaction is written:
|This is the equation for burning methane gas (CH4) in the presence of oxygen (O2) to form carbon dioxide and water: CO2 and H2O respectively. Notice the use of coefficients to obey the Law of Conservation of Matter.|
|This is a precipitation reaction in which dissolved lead cations and iodide anions combine to form a solid yellow precipitate of lead iodide (an ionic solid).|
|These two equations involve a catalyst. They occur one after another, using divanadium pentoxide to convert sulfur dioxide into sulfur trioxide. If you look closely, you can see that the vanadium catalyst is involved in the reaction, but it does not get consumed. It is both a reactant and a product, but it is necessary for the reaction to occur, making it a catalyst.|
|If we add both equations together, we can cancel out terms that appear on both sides. The resulting equation is much simpler and self-explanatory (although the original pair of equations is more accurate in describing how the reaction proceeds).|
|The last example demonstrates another important principle of chemical equations: they can be added together. Simply list all reactants from the equations you are adding, then list all products. If the same term appears on both sides, it is either a catalyst, an intermediate product, or it is not involved in the reaction. Either way, it can be canceled out from both sides (if the coefficients are equal). Try adding the first two vanadium equations together and see if you can cancel out terms to get the final equation.|
Chemical equations are useful because they give the relative amounts of the substances that react in a chemical reaction.
In some cases, however, we may not know the relative amounts of each substance that reacts. Fortunately, we can always find the correct coefficients of an equation (the relative amounts of each reactant and product). The process of finding the coefficients is known as balancing the equation.
During a chemical reaction, atoms are neither created nor destroyed. The same atoms are present before and after a reaction takes place; they are just rearranged. This is called the Law of Conservation of Matter, and we can use this law to help us find the right coefficients to balance an equation.
For example, assume in the above equation that we do not know how many moles of ammonia gas will be produced:
From the left side of this equation, we see that there are 2 atoms of nitrogen in the one N2 molecule (2 atoms per molecule × 1 molecule), and 6 atoms of hydrogen in the 3 H2 molecules (2 atoms per molecule × 3 molecules). Because of the Law of Conservation of Matter, there must also be 2 nitrogen atoms and 6 hydrogen atoms on the right side. Since each molecule of the resulting ammonia gas (NH3) contains 1 atom of nitrogen and 3 atoms of hydrogen, 2 molecules are needed to obtain 2 atoms of nitrogen and 6 atoms of hydrogen.
|This chemical equation shows the compounds being consumed and produced; however, it does not appropriately deal with the quantities of the compounds. There appear to be two oxygen atoms on the left and only one on the right. But we know that there should be the same number of atoms on both sides. This equation is said to be unbalanced, because the numbers of atoms on each side differ.|
|To make the equation balanced, add coefficients in front of each molecule as needed. The 2 in front of hydrogen on the left indicates that twice as many atoms of hydrogen are needed to react with a certain number of oxygen atoms. The coefficient 1 is not written, since it is assumed in the absence of any coefficient.|
|Now, let's consider a similar reaction between hydrogen and nitrogen.|
|Typically, it is easiest to balance all pure elements last, especially hydrogen. First, by placing a two in front of ammonia, the nitrogens are balanced.|
|This leaves 6 moles of atomic hydrogen in the products and only two moles in the reactants. A coefficient of 3 is then placed in front of the hydrogen to give a fully balanced reaction.|
|When balancing a reaction, only coefficients can be changed because changing a subscript would give a different reaction.|
Tricks in balancing certain reactions
A combustion reaction is a reaction between a carbon chain (basically, a molecule consisting of carbons, hydrogen, and perhaps oxygen) with oxygen to form carbon dioxide and water, plus heat. Combustion reactions could get very complex:
Fortunately, there is an easy way to balance these reactions.
First, note that the carbon in C6H6 can only appear on the product side in CO2. Thus, we can write a coefficient of 6 in front of CO2.
Next, note that the hydrogen in C6H6 can only go to H2O. Thus, we put a 3 in front of H2O.
We have 15 oxygen atoms on the product side, so there must be 15/2 O2 molecules on the reactant side. To make this coefficient an integer, we multiply all coefficients by 2.
|Note: Fractions are technically allowed as coefficients, but they are generally avoided. Multiply all coefficients by the denominator to remove a fraction.|
|As reactions become more complex, they become more difficult to balance. For example, the combustion of butane (lighter fluid).|
|Once again, it is better to leave pure elements until the end, so first we'll balance carbon and hydrogen. Oxygen can then be balanced after. It is easy to see that one mole of butane will produce four moles of carbon dioxide and five moles of water.|
|Now there are 13 oxygen atoms on the right and two on the left. The odd number of oxygens prevents balancing with elemental oxygen. Because elemental oxygen is diatomic, this problem comes up in nearly every combustion reaction. Simply double every species except for oxygen to get an even number of oxygen atoms in the product.|
|The carbon and hydrogens are still balanced, and now there are an even number of oxygens in the product. Finally, the reaction can be balanced.|
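For any simple hydrocarbon CxHy, this bookkeeping generalizes: x CO2, y/2 H2O, and (x + y/4) O2, doubling every coefficient when the O2 count comes out fractional. A sketch using Python's exact fractions:

```python
from fractions import Fraction

def combustion_coefficients(x, y):
    """Balance CxHy + O2 -> CO2 + H2O; returns [fuel, O2, CO2, H2O]."""
    o2 = Fraction(4 * x + y, 4)                   # x + y/4 molecules of O2
    coeffs = [Fraction(1), o2, Fraction(x), Fraction(y, 2)]
    scale = o2.denominator                        # clears any fraction
    return [int(c * scale) for c in coeffs]

print(combustion_coefficients(6, 6))    # benzene: [2, 15, 12, 6]
print(combustion_coefficients(4, 10))   # butane:  [2, 13, 8, 10]
```

Both results match the worked examples above: 2 C6H6 + 15 O2 → 12 CO2 + 6 H2O, and 2 C4H10 + 13 O2 → 8 CO2 + 10 H2O.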
|Keep in mind that every equation must always be balanced. If it's not balanced, it is incorrect.|
Limiting Reactants and Percent Yield
When chemical reactions occur, the reactants undergo change to create the products. The coefficients of the chemical equation show the relative amounts of substance needed for the reaction to occur. Consider the combustion of propane:
For every one mole of propane, there must be five moles of oxygen. For every one mole of propane combusted, there will be three moles of carbon dioxide and four moles of water produced (along with much heat). If a propane grill is burning, there will be a very large amount of oxygen available to react with the propane gas. In this case, oxygen is the excess reactant. There is so much oxygen that the exact amount doesn't matter—it will not run out.
On the other hand, there is not an unlimited amount of propane. It will run out far before the oxygen runs out, making it a limiting reactant. The amount of propane available will decide how far the reaction will go.
If there are three moles of hydrogen, and one mole of oxygen, which is the limiting reactant? How much product is created?
Twice as much hydrogen than oxygen is required. However, there is more than twice as much hydrogen. Thus hydrogen is the excess reactant and oxygen is the limiting reactant. If the reaction proceeds to completion, all of the oxygen will be used up, and one mole of hydrogen will remain. You can imagine this situation like this:
The reactant that is left over after the reaction is complete is called the "excess reactant". Often, you will want to figure out how much of the excess reactant is left after the reaction is complete. To do this, first use mole ratios to determine how much of the excess reactant is used up in the reaction.
Here are the ratios that need to be used:
Usually, less product is made than theoretically possible. The actual yield is lower than the theoretical yield. To compare the two, one can calculate the percent yield: percent yield = (actual yield ÷ theoretical yield) × 100%.
The percent yield tells us how far the reaction actually went.
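The hydrogen/oxygen example and the percent-yield formula can be sketched together. The 1.8 mol "actual yield" below is an invented measurement, purely for illustration:

```python
# 2 H2 + O2 -> 2 H2O, starting from 3 mol H2 and 1 mol O2.
available = {"H2": 3.0, "O2": 1.0}   # mol on hand
coefficient = {"H2": 2, "O2": 1}     # from the balanced equation

# The reactant with the smallest (available / coefficient) quotient limits.
extent = {r: available[r] / coefficient[r] for r in available}
limiting = min(extent, key=extent.get)
theoretical = extent[limiting] * 2   # 2 mol H2O per unit of reaction extent
print(limiting, theoretical)         # oxygen limits; at most 2.0 mol H2O

actual = 1.8                         # mol H2O collected (hypothetical)
print(round(100 * actual / theoretical), "% yield")  # 90 % yield
```

As the text describes, oxygen runs out first, 2 mol of water is the theoretical yield, and 3 − 2 = 1 mol of hydrogen remains as excess reactant.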
Types of Chemical Reactions
|Synthesis reactions always yield one product. Reversing a synthesis reaction will give you a decomposition reaction.|
The general form of a synthesis reaction is A + B → AB. Synthesis reactions "put things together".
|This is the most well-known example of a synthesis reaction—the formation of water via the combustion of hydrogen gas and oxygen gas.|
|Another example of a synthesis reaction is the formation of sodium chloride (table salt).|
Because of the very high reactivities of sodium metal and chlorine gas, this reaction releases a tremendous amount of heat and light energy. Recall that atoms release energy as they become stable, and consider the octet rule when determining why this reaction has such favorable features.
These are the opposite of synthesis reactions, with the format AB → A + B. Decomposition reactions "take things apart". Just as synthesis reactions can only form one product, decomposition reactions can only start with one reactant. Unstable compounds decompose quickly without outside assistance; more stable compounds require an input of energy (such as heat, light, or electricity) to decompose.
|One example is the electrolysis of water (passing water through electrical current) to form hydrogen gas and oxygen gas.|
|Hydrogen peroxide slowly decomposes into water and oxygen because it is somewhat unstable. The process is sped up by the energy from light, so hydrogen peroxide is often stored in dark containers to slow down the decomposition.|
|Carbonic acid is the carbonation that is dissolved in soda. It decomposes into carbon dioxide and water, which is why an opened drink loses its fizz.|
|Decomposition, aside from happening spontaneously in unstable compounds, occurs under three conditions: thermal, electrolytic, and catalytic. Thermal decomposition occurs when a substance is heated. Electrolytic decomposition, as shown above, is the result of an electric current. Catalytic decomposition happens because a catalyst breaks apart a substance.|
Single Displacement Reactions
A single displacement reaction, also called single replacement, is a reaction in which one element is substituted for another element in a compound. The starting materials are always a pure element, such as zinc metal or hydrogen gas, plus an aqueous compound. When a displacement reaction occurs, a new aqueous compound and a different pure element are generated as products. Its format is AB + C → AC + B. For example, adding hydrochloric acid to zinc will cause a gas to bubble out:
Double Displacement Reactions
In these reactions, two compounds swap components, in the format AB + CD → AD + CB
This is also called an "exchange" reaction. For example:
HCl + NaOH → NaCl + H2O
A precipitation reaction occurs when an ionic substance comes out of solution and forms an insoluble (or slightly soluble) solid. The solid which comes out of solution is called a precipitate. This can occur when two soluble salts (ionic compounds) are mixed and form an insoluble one—the precipitate.
|An example is lead nitrate mixed with potassium iodide, which forms a bright yellow precipitate of lead iodide.|
|Note that the lead iodide is formed as a solid. The previous equation is written in molecular form, which is not the best way of describing the reaction. Each of the species really exists in solution as individual ions, not bonded to each other (as they are in potassium iodide crystals). If we write the above as an ionic equation, we get a much better idea of what is actually happening.|
|Notice the like terms on both sides of the equation. These are called spectator ions because they do not participate in the reaction. They can be ignored, and the net ionic equation is written.|
In the solution, there exist both lead and iodide ions. Because lead iodide is insoluble, they spontaneously crystallise and form the precipitate.
In simple terms, an acid is a substance which can lose a H+ ion (i.e. a proton) and a base is a substance which can accept a proton. When equal amounts of an acid and base react, they neutralize each other, forming species which aren't as acidic or basic.
|For example, when hydrochloric acid and sodium hydroxide react, they form water and sodium chloride (table salt).|
|Again, we get a clearer picture of what's happening if we write a net ionic equation.|
Acid-base reactions often happen in aqueous solution, but they can also occur in the gaseous state. Acids and bases will be discussed in much greater detail in the acids and bases section.
Combustion, better known as burning, is the combination of a substance with oxygen. The products are carbon dioxide, water, and possibly other waste products. Combustion reactions release large amounts of heat. C3H8, better known as propane, undergoes combustion. The balanced equation is:
Combustion is similar to a decomposition reaction, except that oxygen and heat are required for it to occur. If there is not enough oxygen, the reaction may not occur. Sometimes, with limited oxygen, the reaction will occur, but it produces carbon monoxide (CO) or even soot. In that case, it is called incomplete combustion. If the substances being burned contain atoms other than hydrogen and oxygen, then waste products will also form. Coal is burned for heating and energy purposes, and it contains sulfur. As a result, sulfur dioxide is released, which is a pollutant. Coal with lower sulfur content is more desirable, but more expensive, because it will release less of the sulfur-based pollutants.
Organic reactions occur between organic molecules (molecules containing carbon and hydrogen). Since there is a virtually unlimited number of organic molecules, the scope of organic reactions is very large. However, many of the characteristics of organic molecules are determined by functional groups—small groups of atoms that react in predictable ways.
Another key concept in organic reactions is Lewis basicity. Parts of organic molecules can be electrophilic (electron-loving) or nucleophilic (nucleus-loving, i.e. attracted to positive charge). Nucleophilic regions have an excess of electrons—they act as Lewis bases—whereas electrophilic areas are electron deficient and act as Lewis acids. The nucleophilic and electrophilic regions attract and react with each other. Organic reactions are beyond the scope of this book, and are covered in more detail in Organic Chemistry. However, most organic substances can undergo replacement reactions and combustion reactions, as you have already learned.
Redox is an abbreviation of reduction/oxidation reactions. This is exactly what happens in a redox reaction: one species is reduced and another is oxidized. Reduction involves a gain of electrons and oxidation involves a loss, so a redox reaction is one in which electrons are transferred between species. Reactions where something is "burnt" (burning means being oxidised) are examples of redox reactions. However, redox reactions also occur in solution, which is very useful and forms the basis of electrochemistry.
Redox reactions are often written as two half-reactions showing the reduction and oxidation processes separately. These half-reactions are balanced (by multiplying each by a coefficient) and added together to form the full equation. When magnesium is burnt in oxygen, it loses electrons (it is oxidised). Conversely, the oxygen gains electrons from the magnesium (it is reduced).
Redox reactions will be discussed in greater detail in the redox section.
Energy Changes in Chemical Reactions
Exothermic and Endothermic Reactions
The release of energy in chemical reactions occurs when the reactants have higher chemical energy than the products. The chemical energy in a substance is a type of potential energy stored within the substance. This stored chemical potential energy is the heat content or enthalpy of the substance.
If the enthalpy decreases during a chemical reaction, a corresponding amount of energy must be released to the surroundings. Conversely, if the enthalpy increases during a reaction, a corresponding amount of energy must be absorbed from the surroundings. This is simply the Law of Conservation of Energy.
An exothermic reaction is a chemical reaction that releases more energy than it absorbs; the excess energy is given off to the surroundings, usually as heat.
An endothermic reaction is a chemical reaction that absorbs more energy than it releases.
You are already familiar with enthalpy: melting ice is endothermic and freezing water is exothermic.
|When methane burns in air, the heat given off equals the decrease in enthalpy that occurs as the reactants are converted to products.|
|When ammonium nitrate is dissolved in water, energy is absorbed and the water cools. This concept is used in "cold packs".|
Because reactions release or absorb energy, they affect the temperature of their surroundings. Exothermic reactions heat up their surroundings while endothermic reactions cool them down. The study of enthalpy, along with many other energy-related topics, is covered in the Thermodynamics Unit.
Think about the combustion of methane. It releases enough heat energy to cause a fire. However, the reaction does not occur automatically. When methane and oxygen are mixed, an explosion does not instantly occur. First, the methane must be ignited, usually with a lighter or matchstick. This reveals something about reactions: they will not occur unless a certain amount of activation energy is added first. In this sense, all reactions absorb energy before they begin, but the exothermic reactions release even more energy. This can be explained with a graph of potential energy:
This graph shows an exothermic reaction because the products are at a lower energy than the reactants (so heat has been released). Before that can happen, the energy must actually increase. The amount of energy added before the reaction can complete is the activation energy, symbolized Ea.
Predicting Chemical Reactions
Types of Reactions
There are several guidelines that can help you predict what kind of chemical reaction will occur between a mixture of chemicals.
- Several pure elements mixed together may undergo a synthesis reaction.
- A single compound may undergo a decomposition reaction. It often forms water or hydrogen gas.
- A pure element mixed with an ionic compound may undergo a single replacement reaction.
- Two different ionic compounds are very likely to undergo a double replacement reaction.
- An organic compound (containing carbon and hydrogen) can usually react with oxygen in a combustion reaction.
However, not all elements will react with each other. To better predict a chemical reaction, knowledge of the reactivity series is needed.
When combining two chemicals, a single- or double-replacement reaction doesn't always happen. This can be explained by a list known as the reactivity series, which lists elements in order of reactivity. The higher on the list an element is, the more elements it can replace in a single- or double-replacement reaction. When deciding if a replacement reaction will occur, look up the two elements in question. The higher one will replace the lower one.
Elements at the very top of the series are so reactive that they can replace hydrogen from water. This explains the explosive reaction between sodium and water:
Elements in the middle of the list will react with acids (but not water) to produce a salt and hydrogen gas. Elements at the bottom of the list are mostly nonreactive.
Elements near the top of the list will corrode (rust, tarnish, etc.) in oxygen much faster than those at the bottom of the list.
The Reactivity Series
- Red: elements that react with water and acids to form hydrogen gas, and with oxygen.
- Orange: elements that react very slowly with water but strongly with acids.
- Yellow: elements that react with acid to form hydrogen gas, and with oxygen.
- Grey: elements that react with oxygen (tarnish).
- White: elements that are often found pure; relatively nonreactive.
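The replacement rule amounts to an index comparison within the series. The sketch below uses an abbreviated, commonly quoted ordering of the series (the colour-coded table is the full list):

```python
# Part of the reactivity series, most reactive first.
SERIES = ["K", "Na", "Ca", "Mg", "Al", "Zn", "Fe", "Pb", "H", "Cu", "Ag", "Au"]

def will_replace(free_element, bound_element):
    """A free element replaces an element that sits below it in the series."""
    return SERIES.index(free_element) < SERIES.index(bound_element)

print(will_replace("Zn", "H"))   # True: zinc displaces hydrogen from acid
print(will_replace("Cu", "H"))   # False: copper does not react with acid
print(will_replace("Na", "H"))   # True: sodium even displaces hydrogen from water
```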
Oxidation states are used to determine the degree of oxidation or reduction that an element has undergone when bonding. The oxidation states of all atoms within a compound sum to the compound's overall charge: zero for a neutral compound, or the charge of the ion for a polyatomic ion.
|Remember: gaining electrons is reduction; losing electrons is oxidation.|
The oxidation state of an atom within a molecule is the charge it would have if the bonding were completely ionic, even though covalent bonds do not actually result in charged ions.
Method of notation
Oxidation states are written above the element or group of elements that they belong to (when drawing the molecule), or written with roman numerals in parenthesis when naming the elements.
|aluminum(III), an ion|
Determining oxidation state
For single atoms or ions
Because oxidation numbers simply count the electrons gained or lost, calculating them for single atoms and ions is easy.
|The oxidation state of a monatomic ion is the same as its charge. Pure elements always have an oxidation state of zero.|
Notice that the oxidation states of ionic compounds are simple to determine.
For larger molecules
|Remember that all the individual oxidation states must add up to the charge on the whole substance.|
Although covalent bonds do not result in charges, oxidation states are still useful. They label the hypothetical transfer of electrons if the substance were ionic. Determining the oxidation states of atoms in a covalent molecule is very important when analyzing "redox" reactions. When substances react, they may transfer electrons when they form the products, so comparing the oxidation states of the products and reactants allows us to keep track of the electrons.
|for hydrogen chloride|
|for the chlorite ion|
(notice the overall charge)
|Oxidation states do not necessarily represent the actual charges on an atom in a molecule. They are simply numbers that indicate what the charges would be if that atom had gained or lost the electrons involved in the bonding. For example, CH4 is a covalent molecule—the C has no charge nor does the H, however the molecule can be assigned a −4 oxidation state for the C and a +1 oxidation state for the H's.|
Determining Oxidation States
The determination of oxidation states is based on knowing which elements can have only one oxidation state other than the elemental state and which elements are able to form more than one oxidation state other than the elemental state. Let's look at some of the "rules" for determining the oxidation states.
1. The oxidation state of an element is always zero.
2. For metals, the charge of the ion is the same as the oxidation state. The following metals form only one ion: Group IA, Group IIA, Group IIIA (except Tl), Zn2+, Cd2+.
3. For monatomic anions and cations, the charge is the same as the oxidation state.
4. Oxygen in a compound is −2, unless a peroxide is present. The oxidation state of oxygen in the peroxide ion, O22−, is −1.
5. For compounds containing polyatomic ions, use the overall charge of the polyatomic ion to determine the charge of the cation.

Here is a convenient method for determining oxidation states. Basically, you treat the charges in the compound as a simple algebraic expression. For example, let's determine the oxidation states of the elements in the compound KMnO4. Applying rule 2, we know that the oxidation state of potassium is +1. We will assign "x" to Mn for now, since manganese may have several oxidation states. There are 4 oxygens at −2, and the overall charge of the compound is zero: K (+1), Mn (x), O4 (4 × −2).

The algebraic expression generated is: 1 + x − 8 = 0

Solving for x gives the oxidation state of manganese: x − 7 = 0, so x = +7. Thus in KMnO4, K is +1, Mn is +7, and each O is −2.
Suppose the species under consideration is a polyatomic ion. For example, what is the oxidation state of chromium in the dichromate ion, Cr2O72−?

As before, assign the oxidation state for oxygen, which is known to be −2. Since the oxidation state for chromium is not known, and two chromium atoms are present, assign the algebraic value 2x to chromium: Cr2 (2x), O7 (7 × −2).

Set up the algebraic equation and solve for x. Since the overall charge of the ion is −2, the expression is set equal to −2 rather than 0: 2x + 7(−2) = −2

Solve for x: 2x − 14 = −2, so 2x = 12 and x = +6
Each chromium in the ion has an oxidation state of +6. Let's do one last example, where a polyatomic ion is involved. Suppose you need to find the oxidation state of all atoms in Fe2(CO3)3. Here two atoms, iron and carbon, have more than one possible oxidation state. What happens if you don't know the oxidation state of carbon in carbonate ion? In fact, knowledge of the oxidation state of carbon is unnecessary. What you need to know is the charge of carbonate ion (−2). Set up an algebraic expression while considering just the iron ions and the carbonate ions: Fe2 (2x), (CO3)3 (3 × −2), so 2x − 6 = 0, giving 2x = 6 and x = +3.

Each iron ion in the compound has an oxidation state of +3. Next consider the carbonate ion independent of the iron(III) ion: C (x), O3 (3 × −2), overall charge −2, so x − 6 = −2 and x = +4.
The oxidation state of carbon is +4 and each oxygen is -2.
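The algebra in these worked examples can be captured in a few lines of code. This is an illustrative sketch (the function name and input format are our own, not part of the text): it sums the known oxidation states and solves for the single unknown, exactly as above.

```python
from fractions import Fraction

def unknown_oxidation_state(known, unknown_count, overall_charge):
    """Solve sum(known states) + unknown_count * x = overall_charge for x.

    `known` is a list of (oxidation_state, atom_count) pairs for the
    atoms whose states are already assigned by the rules.
    """
    known_total = sum(state * count for state, count in known)
    return Fraction(overall_charge - known_total, unknown_count)

# Mn in KMnO4: one K at +1, four O at -2, neutral compound
assert unknown_oxidation_state([(+1, 1), (-2, 4)], 1, 0) == 7

# Cr in Cr2O7^2-: seven O at -2, overall charge -2
assert unknown_oxidation_state([(-2, 7)], 2, -2) == 6

# Fe in Fe2(CO3)3: three carbonate ions at -2, neutral compound
assert unknown_oxidation_state([(-2, 3)], 2, 0) == 3

# C in CO3^2-: three O at -2, overall charge -2
assert unknown_oxidation_state([(-2, 3)], 1, -2) == 4
```

Each assertion mirrors one of the examples above: +7 for Mn, +6 for Cr, +3 for Fe, and +4 for C.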
Determining oxidation states is not always easy, but there are many guidelines that can help. The guidelines in this table are listed in order of importance. The highest oxidation state that any element can reach is +8, as in XeO4.
|Element||Usual Oxidation State|
|Fluorine||Fluorine, being the most electronegative element, will always have an oxidation of -1 (except when it is bonded to itself in F2, when its oxidation state is 0).|
|Hydrogen||Hydrogen always has an oxidation of +1, -1, or 0. It is +1 when it is bonded to a non-metal (e.g. HCl, hydrochloric acid). It is -1 when it is bonded to metal (e.g. NaH, sodium hydride). It is 0 when it is bonded to itself in H2.|
|Oxygen||Oxygen is usually given an oxidation number of -2 in its compounds, such as H2O. The exception is in peroxides (O2-2) where it is given an oxidation of -1. Also, in F2O oxygen is given an oxidation of +2 (because fluorine must have -1), and in O2, where it is bonded only to itself, the oxidation is 0.|
|Alkali Metals||The Group 1A metals always have an oxidation of +1, as in NaCl. The Group 2A metals always have an oxidation of +2, as in CaF2. There are some rare exceptions that don't need consideration.|
|Halogens||The other halogens (Cl, Br, I, At) usually have an oxidation state of −1. When a halogen is bonded to itself, its oxidation state is 0. However, they can also have +1, +3, +5, or +7. Looking at the family of chlorine species, you can see each oxidation state: Cl2 (0), Cl− (−1), ClO− (+1), ClO2− (+3), ClO3− (+5), ClO4− (+7).|
|Nitrogen||Nitrogen (and the other Group 5A elements, such as phosphorus, P) often have -3 (as in ammonia, NH3), but may have +3 (as in NF3) or +5 (as in phosphate, PO43-).|
|Carbon||Carbon can take any oxidation state from -4, as in CH4, to +4, as in CF4. It is best to find the oxidation of other elements first.|
In general, the more electronegative element has the negative number. Using a chart of electronegativities, you can determine the oxidation state of any atom within a compound.
Oxidation states are another periodic trend. They seem to repeat a pattern across each period.
Redox reactions are chemical reactions in which elements are oxidized and reduced.
|Losing electrons is oxidation. Gaining electrons is reduction.|
Specifically, at the most basic level one element gets oxidized by losing, or donating, electrons to the oxidizing agent. In doing so, the oxidizing agent gets reduced by accepting the electrons lost, or donated, by the reducing agent (i.e. the element getting oxidized).
If it seems as though there are two separate things going on here, you are correct: redox reactions can be split into two half-reactions, one dealing with oxidation, the other, reduction.
Oxidation Is Loss. Reduction Is Gain
Lose Electrons Oxidation. Gain Electrons Reduction
|Fe + Cu2+ → Fe2+ + Cu. This is the complete reaction. Iron is oxidized, thus it is the reducing agent. Copper is reduced, making it the oxidizing agent.|
|Fe → Fe2+ + 2e−. This is the oxidation half-reaction.|
|Cu2+ + 2e− → Cu. This is the reduction half-reaction.|
When the two half-reactions are summed, the result is:
|Fe + Cu2+ + 2e− → Fe2+ + Cu + 2e−. If you cancel out the electrons on both sides, you get the original equation.|
Balancing Redox Equations
In a redox reaction, all electrons must cancel out. If you are adding two half-reactions with unequal numbers of electrons, then the equations must be multiplied by a common denominator. This process is similar to balancing regular equations, but now you are trying to balance the electrons between two half-reactions.
The electrons don't completely cancel out. There is one electron more on the left. However, if you double all terms in the first half-reaction, then add it to the second half-reaction, the electrons will cancel out completely. That means the half-reactions for this redox reaction are actually:
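The multiplication step amounts to finding the least common multiple of the electron counts in the two half-reactions. A minimal sketch (the half-reaction pair in the comment is illustrative, not the example from the text):

```python
from math import lcm

def balance_factors(e_oxidation, e_reduction):
    """Smallest whole-number multipliers that equalize the electrons
    between an oxidation and a reduction half-reaction."""
    m = lcm(e_oxidation, e_reduction)
    return m // e_oxidation, m // e_reduction

# e.g. Al -> Al3+ + 3e-  paired with  Cu2+ + 2e- -> Cu  (illustrative pair):
# double the reduction... no: triple the reduction, double the oxidation? Run it:
ox_factor, red_factor = balance_factors(3, 2)
print(ox_factor, red_factor)  # -> 2 3
```

Multiplying the 3-electron half-reaction by 2 and the 2-electron half-reaction by 3 gives 6 electrons on each side, which then cancel completely.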
Balancing Redox Equations in an Acidic or Basic Solution
If a reaction occurs in an acidic or basic environment, the redox equation is balanced as follows:
- Write the oxidation and reduction half reactions, but with the whole compound, not just the element that is reduced/oxidized.
- Balance both reactions for all elements except oxygen and hydrogen.
- If the oxygen atoms are not balanced in either reaction, add water molecules to the side missing the oxygen.
- If the hydrogen atoms are not balanced, add hydrogen ions until the hydrogen atoms are balanced.
- Multiply the half reactions by the appropriate number (so that they have equal numbers of electrons).
- Add the two equations to cancel out the electrons, as in the previous method, and the equation is balanced!
If the reaction occurs in a basic environment, proceed as if it is in an acid environment, but, after step 4, for each hydrogen ion added, add a hydroxide ion to both sides of the equation. Then, combine the hydroxide ions and hydrogen ions to form water. Then, cancel all the water molecules that appear on both sides.
Redox Reactions (review)
Redox (shorthand for reduction/oxidation reaction) describes all chemical reactions in which atoms have their oxidation number (oxidation state) changed.
This can be either a simple redox process such as the oxidation of carbon to yield carbon dioxide, or the reduction of carbon by hydrogen to yield methane (CH4), or it can be a complex process such as the oxidation of sugar in the human body through a series of very complex electron transfer processes.
The term redox comes from the two concepts of reduction and oxidation. It can be explained in simple terms:
- Oxidation describes the loss of electrons by a molecule, atom, or ion
- Reduction describes the gain of electrons by a molecule, atom, or ion
However, these descriptions (though sufficient for many purposes) are not truly correct. Oxidation and reduction properly refer to a change in oxidation number—the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number. In practice, the transfer of electrons will always cause a change in oxidation number, but there are many reactions which are classed as "redox" even though no electron transfer occurs (such as those involving covalent bonds).
Electrochemistry is a branch of chemistry that deals with the flow of electricity by chemical reactions. The electrons in a balanced half-reaction show the direct relationship between electricity and the specific redox reaction. Electrochemical reactions are either spontaneous, or nonspontaneous. A spontaneous redox reaction generates a voltage itself. A nonspontaneous redox reaction occurs when an external voltage is applied. The reactions that occur in an electric battery are electrochemical reactions.
- Three components of an electrochemical reaction
- A solution where redox reactions may occur (solutions are substances dissolved in liquid, usually water)
- A conductor for electrons to be transferred (such as a metal wire)
- A conductor for ions to be transferred (usually a salt bridge) e.g. filter paper dipped in a salt solution.
An electrolysis experiment forces a nonspontaneous chemical reaction to occur. This is achieved when two electrodes are submersed in an electrically conductive solution, and the electrical voltage applied to the two electrodes is increased until electrons flow. The electrode receiving the electrons, or where the reduction reactions occur, is called the cathode. The electrode which supplies the electrons, or where the oxidation reactions occur, is called the anode.
A molten salt is an example of something that may be electrolyzed because salts are composed of ions. When the salt is in its solid state, the ions are not able to move freely. However, when the salt is heated enough until it melts (making it a molten salt), the ions are free to move. This mobility of the ions in the molten salt makes the salt electrically conductive. In the electrolysis of a molten salt, for example melted NaCl, the cation of the salt (in this case Na+) will be reduced at the cathode, and the anion of the salt (in this case Cl−) will be oxidized at the anode:
- Cathode reaction: Na+ + e− → Na
- Anode reaction: 2Cl− → Cl2 + 2e−
Aqueous solutions of salts can be electrolyzed as well because they are also electrically conductive. In aqueous solutions, there is an additional reaction possible at both the cathode and the anode:
- Cathode: 2H2O + 2e− → H2 + 2OH− (reduction of water)
- Anode: 2H2O → 4H+ + O2 + 4e− (oxidation of water)
With the addition of these two reactions, there are now two possible reactions at each electrode. At the cathode, either the reduction of the cation or the reduction of water will occur. At the anode, either the oxidation of the anion or the oxidation of water will occur. The following rules determine which reaction takes place at each electrode:
- Cathode: If the cation is a very active metal, water will be reduced. Very active metals include Li, Na, K, Rb, Cs, Ca, Sr, and Ba. If the cation is an active or inactive metal, the cation will be reduced.
- Anode: If the anion is a polyatomic ion, water will generally be oxidized. Specifically, sulfate, perchlorate, and nitrate ions are not oxidized; water will oxidize instead. Chloride, bromide, and iodide ions will be oxidized. If the anion in one salt is oxidized in an aqueous electrolysis, that same anion will also be oxidized in any other salt.
The energy of a spontaneous redox reaction is captured using a galvanic cell. The following parts are necessary to make a galvanic cell:
- Two half cells
- Two electrodes
- One electrically conductive wire
- One salt bridge
- One device, usually an ammeter or a voltmeter
A galvanic cell is constructed as shown in the image to the right. The two half-reactions are separated into two half cells. All of the reactants in the oxidation half-reaction are placed in one half cell (the anode), and all the reactants of the reduction half-reaction are placed in the other half cell (the cathode). If the half-reaction contains a metal, the metal serves as the electrode for that half cell. Otherwise, an inert electrode made of platinum, silver, or gold is used. The electrodes are connected with a wire which allows the flow of electrons. The electrons always flow from the anode to the cathode. The half cells are connected by a salt bridge which allows the ions in the solution to move from one half cell to the other, so that the reaction can continue. Since the overall reaction is spontaneous, the electrons will move spontaneously through the outer circuitry, from which the energy can be extracted. The energy harnessed is useful because it can be used to do work. For example, if an electrical component such as a light bulb is attached to the wire, it will receive power from the flowing electrons.
Consistent results from a galvanic cell depend on three variables: pressure, temperature, and concentration. Thus, chemists defined a standard state for galvanic cells. The standard state for the galvanic cell is a pressure of 1.00 atmosphere (atm) for all gases, a temperature of 298 kelvin (K), and concentrations of 1.00 molar (M) for all soluble compounds.
Voltage is a measure of spontaneity of redox reactions, and it can be measured by a voltmeter. If the voltage of a reaction is positive, the reaction occurs spontaneously, but when negative, it does not occur spontaneously.
To compute the voltage of a redox equation, split the equation into its oxidation component and its reduction component. Then, look up the voltage of each component in a standard electrode potential table. This table lists the voltage for the reduction equation; the oxidation reaction's voltage is the negative of the corresponding reduction equation's voltage. To find the overall equation's voltage, add the standard voltages of the two half-reactions.
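As a sketch of this procedure (the function name and dictionary layout are our own; the potentials are the usual textbook values for the copper and zinc half-reactions):

```python
# Standard reduction potentials (volts), as tabulated in a
# standard electrode potential table.
E_REDUCTION = {
    "Cu2+ + 2e- -> Cu": 0.34,
    "Zn2+ + 2e- -> Zn": -0.76,
}

def cell_voltage(reduction_half, oxidation_half):
    """E(cell) = E(reduction half) - E(oxidation half as tabulated),
    since reversing a half-reaction negates its voltage."""
    return E_REDUCTION[reduction_half] - E_REDUCTION[oxidation_half]

# Classic Daniell cell: Zn is oxidized, Cu2+ is reduced.
v = cell_voltage("Cu2+ + 2e- -> Cu", "Zn2+ + 2e- -> Zn")
print(round(v, 2))  # -> 1.1
```

The positive result (+1.10 V) means the reaction is spontaneous, as described above.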
Types of Solutions
A solution is a homogeneous mixture, composed of solvent(s) and solute(s). A solvent is any substance which allows other substances to dissolve in it, and is therefore usually present in the greater amount. Solutes are the substances dissolved in the solvent. Note that when a solute dissolves in a solvent, no chemical bonds form between the solvent and solute.
Solutions have variable composition, unlike pure compounds whose composition is fixed. For example, a 500 mL solution of lemonade can consist of 70% water, 20% lemon juice, and 10% sugar. There can also be a 500 mL solution of lemonade consisting of 60% water, 25% lemon juice, and 15% sugar.
|The most common type of solution is an aqueous solution, which is a solution where water is the solvent. Aqueous solutions are quite important. For example, acids and bases exist typically as aqueous solutions.|
When two liquids can be readily combined in any proportions, they are said to be miscible. An example would be alcohol and water. Either of the two can totally dissolve each other in any proportion. Two liquids are defined as immiscible if they will not form a solution, such as oil and water. Solid solutes in a metallic solvent are known as alloys. Jewelers' gold is an example of an alloy. Gold is too soft in its pure form, so other metals are dissolved in it. Jewelers may use 14-karat gold, which is about 58% gold by mass, the rest being other metals.
Variables Affecting Solubility
|Surface area||More surface area gives more opportunity for solute-solvent contact||Powdered sugar will dissolve in water faster than rock candy.|
|Temperature||Solids are more soluble in hot solvents, gases are more soluble in cold solvents||Sugar dissolves more readily in hot water, but CO2 dissolves better in cold soda than warm soda.|
|Polarity||Non-polar compounds dissolve in non-polar solvents, and polar compounds dissolve in polar solvents. If one liquid is polar, and the other isn't, they are immiscible.||Alcohol and water are both polar, and they are miscible. Oil is non-polar and is immiscible in water.|
|Pressure||Gases dissolve better under higher pressure, due to greater forces pushing the gas molecules into the solvent.||Leaving the cap off a soda bottle will let the carbonation out.|
|Agitation||If a solution is agitated by stirring or shaking, there is an increase in kinetic motion and contact of particles. Therefore, the rate of solubility increases.||Everyone knows to stir their coffee after adding sugar.|
|In general, the rate of solubility is dependent on the intermolecular forces between substances. The maxim "like dissolves like" will help you remember that substances must both be polar or non-polar to dissolve.|
Dissolving at the Molecular Level
- The forces between the particles in the solid must be broken. This is an endothermic process called dissociation.
- Some of the intermolecular forces between the particles in the liquid must also be broken. This is endothermic.
- The particles of the solid and the particles in a liquid become attracted. This is an exothermic process called solvation.
Example: Dissolving NaCl
When sodium chloride is added to water, it will dissolve. Water molecules are polar, and sodium chloride is ionic (which is very polar). The positive ends of the water molecules (the hydrogens) will be attracted to the negative chloride ions, and the negative ends of the water molecules (the oxygens) will be attracted to the positive sodium ions. The attractions are strong enough to separate sodium from chloride, so the solute dissociates, or breaks apart. The solute is then spread throughout the solvent. The polar water molecules prevent the ions from reattaching to each other, so the salt stays in solution.
- When a solution can hold no more solute, it is said to be saturated. This occurs when there is an equilibrium between the dissolved and undissolved solute.
- If more solute can be added, the solution is unsaturated.
- If a solution has more solute than is normally possible, due to the lowering or heightening of temperature, it is said to be supersaturated. If disturbed, the solution will rapidly form solid crystals.
- Solubility is the measure of how many grams of solute can dissolve in 100 grams of solvent (or in the case of water, solute per 100 milliliters.)
Sometimes, compounds form crystals with a specific amount of water in them. For example, copper(II) sulfate is written as CuSO4 • 5H2O. For every mole of copper(II) sulfate, there are five moles of water attached. The atoms are arranged in a crystal lattice. Even if dried, the compound will still be hydrated. It will not feel moist, but there are water molecules within the crystal structure of the solid.
Intense heat will release the water from the compound. Its color may change, indicating a chemical change. When the anhydrous compound is dissolved in water, it will become hydrated again.
Heats of Solution
Some chemicals change temperature when dissolved. This is due to a release or absorption of heat. The specific change is known as the heat of solution, measured in kJ/mol.
Some substances break up into ions and conduct electricity when dissolved. These are called electrolytes. All ionic compounds are electrolytes. Nonelectrolytes, on the other hand, do not conduct electricity when dissolved. Electrolytes are the reason that tap water conducts electricity. Tap water contains salts and other ions. If you have purified water, you will find that it does not conduct electricity at all. Upon dissolving some salt, it conducts electricity very well. The presence of ions allows electrons to move through the solution, and electricity will be conducted.
Solubility Practice Questions
1. In a mixture of 50 mL of benzene and 48 mL of octane,
- a) which substance is the solute?
- b) would these two substances form a solution?
2. Solutions are formed by physical changes, not chemical reactions. Using this principle, name two ways in which solutes can be separated from solvents.
3. Three different clear, colourless liquids were gently heated in an evaporating dish. Liquid A left a white residue, liquid B left no residue, and liquid C left water. Identify each liquid solution as a pure substance or a solution.
4. Compare three bottles of soda. Bottle A was stored at room temperature (25°C), bottle B was stored at 10°C, and bottle C was stored at 30°C.
- a) If you wanted a fizzy drink, which bottle would you choose?
- b) If you wanted to change the gas pressure of bottle C to that of bottle B, what could you do?
Properties of Solutions
The concentration of a solution is the measure of how much solute is dissolved in a given amount of solution. A solution is concentrated if it contains a large amount of solute, or dilute if it contains a small amount.
Molarity is the number of moles of solute per liter of solution. It is abbreviated with the symbol M, and is sometimes used as a unit of measurement, e.g. a 0.3 molar solution of HCl. In that example, there would be 0.3 moles of HCl for every liter of solution (not solvent).
|Molarity is by far the most commonly used measurement of concentration.|
Molality is the number of moles of solute per kilogram of solvent. It is abbreviated with the symbol m (lowercase), and is sometimes used as a unit of measurement, e.g. a 0.3 molal solution of HBr. In that example, there would be 0.3 moles of HBr for every kilogram of water (or whatever the solvent was).
|One kilogram of water is one liter of water (near room temperature), but the molality is not the same as the molarity. The molarity is the ratio of solute to solution, whereas the molality is the ratio of solute to solvent.|
The mole fraction is simply the moles of one component divided by the total moles in the solution. As an example, you dissolve one mole of NaCl into three moles of water. Remember that the NaCl will dissociate into its ions, so there are now five moles of particles: one mole Na+, one mole Cl-, and three moles water. The mole fraction of sodium is 0.2, the mole fraction of chloride is 0.2, and the mole fraction of water is 0.6.
The mole fraction is symbolized with the Greek letter χ (chi), which is often written simply as an X.
|The sum of all mole fractions for a solution must equal 1.|
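The bookkeeping in the NaCl example can be checked with a few lines of code (the function name is our own):

```python
def mole_fractions(moles):
    """Mole fraction of each species: its moles divided by total moles."""
    total = sum(moles.values())
    return {species: n / total for species, n in moles.items()}

# 1 mol NaCl dissociates into 1 mol Na+ and 1 mol Cl- in 3 mol water.
x = mole_fractions({"Na+": 1, "Cl-": 1, "H2O": 3})
print(x)  # -> {'Na+': 0.2, 'Cl-': 0.2, 'H2O': 0.6}

# The mole fractions must sum to 1.
assert abs(sum(x.values()) - 1) < 1e-12
```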
Dilution is adding solvent to a solution to obtain a less concentrated solution. Perhaps you have used dilution when running a lemonade stand. To cut costs, you could take a half-full jug of rich, concentrated lemonade and fill it up with water. The resulting solution would have the same total amount of sugar and lemon juice, but double the total volume. Its flavor would be weaker due to the added water.
The key concept is that the amount of solute is constant before and after the dilution process. The concentration is decreased (and volume increased) only by adding solvent.
|Thus, the number of moles of solute before and after dilution are equal: n1 = n2.|
|By the definition of molarity, the moles of solute are n = M × V.|
|Substituting the second equation into the first gives the dilution equation: M1V1 = M2V2.|
To determine the amount of solvent (usually water) that must be added, you must know the initial volume and concentration, and the desired concentration. Solving for the final volume in the above equation will give you the total volume of the diluted solution. Subtracting the initial volume from the total volume will determine the amount of pure solvent that must be added.
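A minimal sketch of that calculation, assuming the standard dilution equation M1V1 = M2V2 (the concentrations and volumes below are illustrative numbers):

```python
def final_volume(m1, v1, m2):
    """Dilution equation M1 V1 = M2 V2, solved for the final volume V2."""
    return m1 * v1 / m2

# Dilute 250 mL (0.250 L) of 2.0 M HCl down to 0.50 M:
v2 = final_volume(2.0, 0.250, 0.50)   # total volume of diluted solution, L
solvent_to_add = v2 - 0.250           # pure water to add, L
print(v2, solvent_to_add)  # -> 1.0 0.75
```

The moles of solute (0.50 mol) are the same before and after; only the volume changes.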
When ionic compounds dissolve in water, they separate into ions. This process is called dissociation. Note that because of dissociation, there are more moles of particles in the solution containing ions than there would be with the solute and solvent separated.
If you have two glasses of water, and you dissolve salt into one and sugar into the other, there will be a big difference in concentration. The salt will dissociate into its ions, but sugar (a molecule) will not dissociate. If the salt were NaCl, the concentration would be double that of the sugar. If the salt were MgCl2, the concentration would be triple (there are three ions).
Not all ionic compounds are soluble. Some ionic compounds have so much attractive force between their anions and cations that they will not dissociate. These substances are insoluble and will not dissolve. Instead, they clump together as a solid in the bottom of solution. Many ionic compounds, however, will dissociate in water and dissolve. In these cases, the attractive force between ion and water is greater than that between cation and anion. There are several rules to help you determine which compounds will dissolve and which will not.
- Solubility Rules
- All compounds with Group 1 ions or ammonium ions are soluble.
- Nitrates, acetates, and chlorates are soluble.
- Compounds containing a halogen are soluble, except those with fluorine, silver, or mercury; those with lead are soluble only in hot water.
- Sulfates are soluble, except when combined with silver, lead, calcium, barium, or strontium.
- Carbonates, sulfides, oxides, silicates, and phosphates are insoluble, except those covered by rule #1.
- Hydroxides are insoluble, except when combined with calcium, barium, or strontium, or as covered by rule #1.
Sometimes, when two different ionic compounds are dissolved, they react, forming an insoluble precipitate. Predicting these reactions requires knowledge of the activity series and the solubility rules. These reactions can be written with all ions, or without the spectator ions (the ions that don't react, present on both sides of the reaction), a format known as the net ionic equation.
For example, silver nitrate is soluble, but silver chloride is not soluble (see the above rules). Mixing silver nitrate into sodium chloride would cause a cloudy white precipitate to form. This happens because of a double replacement reaction.
When solutes dissociate (or if a molecule ionizes), the solution can conduct electricity. Compounds that readily form ions, thus being good conductors, are known as strong electrolytes. If only a small amount of ions are formed, electricity is poorly conducted, meaning the compound is a weak electrolyte.
|A strong electrolyte will dissolve completely. All ions dissociate. A weak electrolyte, on the other hand, will partially dissociate, but some ions will remain bonded together.|
Some properties are the same for all solute particles regardless of what kind. These are known as the colligative properties. These properties apply to ideal solutions, so in reality, the properties may not be exactly as calculated. In an ideal solution, there are no forces acting between the solute particles, which is generally not the case.
All liquids have a tendency for their surface molecules to escape and evaporate, even if the liquid is not at its boiling point. This is because the average energy of the molecules is too small for evaporation, but some molecules could gain above average energy and escape. Vapor pressure is the measure of the pressure of the evaporated vapor, and it depends on the temperature of the solution and the quantities of solute. More solute will decrease vapor pressure.
|The vapor pressure of a solution is given by Raoult's Law: P = χsolvent P°, where χsolvent is the mole fraction of the solvent and P° is the vapor pressure of the pure solvent. Notice that the vapor pressure equals that of the pure solvent when there is no solute (χsolvent = 1). If χsolvent = 0, there would be no vapor pressure at all. This could only happen if there were no solvent, only solute. A solid solute has no vapor pressure.|
|If two volatile substances (both have vapor pressures) are in solution, Raoult's Law is still used: P = χA P°A + χB P°B. In this case, Raoult's Law is essentially a linear combination of the vapor pressures of the two substances. Two liquids in solution both have vapor pressures, so this equation must be used.|
The second equation shows the relationship between the solvents. If two liquids were mixed exactly half-and-half, the vapor pressure of the resulting solution would be exactly halfway between the vapor pressures of the two solvents.
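The half-and-half case can be verified with a short sketch of Raoult's Law for two volatile liquids (the function name and the pure-solvent pressures are our own illustrative choices):

```python
def raoult_two_liquids(x_a, p_a_pure, p_b_pure):
    """Total vapor pressure of a two-liquid solution by Raoult's Law:
    P = x_A * P_A(pure) + x_B * P_B(pure), with x_B = 1 - x_A."""
    return x_a * p_a_pure + (1 - x_a) * p_b_pure

# A half-and-half mixture sits exactly halfway between the two
# pure-liquid vapor pressures (pressures here are made-up round numbers):
print(raoult_two_liquids(0.5, 20.0, 40.0))  # -> 30.0
```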
Another relation is Henry's Law, which shows the relationship between gas and pressure. It is given by Cg = k Pg, where C is concentration and P is pressure. As the pressure goes up, the concentration of gas in solution must also increase. This is why soda cans release gas when they are opened: the decrease in pressure results in a decrease in concentration of CO2 in the soda.
- Exercise for the reader
At 50 °C the vapor pressure of water is 11 kPa and the vapor pressure of ethanol is 30 kPa. Determine the resulting vapor pressure if a solution contains 75% water and 25% ethanol (by moles, not mass).
Boiling Point Elevation
A liquid reaches its boiling point when its vapor pressure is equal to the pressure of the atmosphere around it. Because the presence of solute lowers the vapor pressure, the boiling point is raised. The boiling point increase is given by: ΔTb = Kb m, where m is the molality of the solution.
The reduced vapor pressure increases the boiling point of the liquid only if the solute itself is non-volatile, meaning it doesn't have a tendency to evaporate. For every mole of non-volatile solute per kilogram of solvent, the boiling point increases by a constant amount, known as the molal boiling-point constant (Kb). Because this is a colligative property, Kb is not affected by the kind of solute.
Freezing Point Depression
A liquid reaches its freezing temperature when its vapor pressure is equal to that of its solid form. Because the presence of the solute lowers the vapor pressure, the freezing point is lowered. The freezing point depression is given by: ΔTf = Kf m, where m is the molality of the solution.
Again, this equation works only for non-volatile solutes. The temperature of the freezing point decreases by a constant amount for every one mole of solute added per kilogram of solvent. This constant (Kf) is known as the molal freezing-point constant.
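A sketch of both colligative equations for water, using the standard textbook constants Kb = 0.512 °C·kg/mol and Kf = 1.86 °C·kg/mol (the function names are our own):

```python
# Molal constants for water (standard textbook values, in degrees C * kg / mol):
KB_WATER = 0.512
KF_WATER = 1.86

def boiling_point_elevation(kb, molality):
    """Delta-Tb = Kb * m, for a non-volatile solute."""
    return kb * molality

def freezing_point_depression(kf, molality):
    """Delta-Tf = Kf * m, for a non-volatile solute."""
    return kf * molality

# One mole of sugar per kilogram of water:
print(boiling_point_elevation(KB_WATER, 1.0))    # -> 0.512
print(freezing_point_depression(KF_WATER, 1.0))  # -> 1.86
```

So a 1 molal sugar solution boils about 0.5 °C higher and freezes about 1.9 °C lower than pure water.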
If you studied biology, you would know that osmosis is the movement of water through a membrane. If two solutions of different molarity are placed on opposite sides of a semipermeable membrane, then water will travel through the membrane to the side with higher molarity. This happens because the water molecules are "attached" to the solute particles, so they cannot travel through the membrane. As a result, the water on the side with lower molarity (fewer solute particles) can more easily travel through the membrane than the water on the other side.
The pressure of this osmosis is given by the equation Π = MRT, where Π (pi) is the osmotic pressure, M is molarity, R is the gas constant, and T is the temperature in kelvin.
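A minimal sketch of the osmotic pressure equation (R here is the gas constant in L·atm/(mol·K); the concentration and temperature are illustrative):

```python
R_L_ATM = 0.08206  # gas constant, L * atm / (mol * K)

def osmotic_pressure(molarity, temp_kelvin, r=R_L_ATM):
    """Osmotic pressure = M * R * T, in atm when M is mol/L."""
    return molarity * r * temp_kelvin

# A 0.10 M solution at 298 K:
pi = osmotic_pressure(0.10, 298)
print(round(pi, 2))  # -> 2.45
```

Even a dilute solution exerts a substantial osmotic pressure, which is why osmosis matters so much in biology.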
Electrolytes and Colligative Properties
When one mole of table salt is added to water, the colligative effects are double those that would have occurred if sugar were added instead. This is because the salt dissociates, forming twice as many particles as sugar would. The extent of this dissociation, called the van't Hoff factor, describes how many particles are dissolved into the solution, and it must be multiplied into the boiling point elevation, freezing point depression, or vapor pressure lowering equations.
|Sugar is a covalent molecule. No dissociation occurs when dissolved.|
|Table salt is an ionic compound and a strong electrolyte. Total dissociation occurs when dissolved, doubling the effects of colligative properties.|
|Magnesium bromide is also ionic. The colligative effects will be tripled.|
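The ideal factors for the three examples above can be plugged straight into the freezing point depression equation (Kf = 1.86 °C·kg/mol for water; ideal, complete dissociation is assumed, which the next paragraph qualifies):

```python
# Ideal van't Hoff factors: particles produced per formula unit dissolved.
IDEAL_I = {"sugar": 1, "NaCl": 2, "MgBr2": 3}

def freezing_depression_with_i(i, kf, molality):
    """Delta-Tf = i * Kf * m, scaling the colligative effect by the
    number of dissolved particles (ideal dissociation assumed)."""
    return i * kf * molality

# 1 molal solutions in water (Kf = 1.86 degrees C * kg / mol):
for solute, i in IDEAL_I.items():
    print(solute, freezing_depression_with_i(i, 1.86, 1.0))
```

The salt produces double, and magnesium bromide triple, the depression caused by the non-dissociating sugar.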
Though extremely useful for calculating the general Van't Hoff Factor, this system of calculation is slightly inaccurate when considering ions. This is because when ions are in solution, they may interact and clump together, lessening the effect of the Van't Hoff factor. In addition, more strongly charged ions may have a smaller effect. For example, CaO would be less effective as an electrolyte than NaCl.
Acids and Bases
Acid-Base Reaction Theories
Acids and bases are everywhere. Some foods contain acid, like the citric acid in lemons and the lactic acid in dairy. Cleaning products like bleach and ammonia are bases. Chemicals that are acidic or basic are an important part of chemistry.
|You may need to refresh your memory on naming acids.|
Several different theories explain what composes an acid and a base. The first scientific definition of an acid was proposed by the French chemist Antoine Lavoisier in the eighteenth century. He proposed that acids contained oxygen, although he did not know the dual composition of acids such as hydrochloric acid (HCl). Over the years, much more accurate definitions of acids and bases have been created.
The Swedish chemist Svante Arrhenius published his theory of acids and bases in 1887. It can be simply explained by these two points:
- Arrhenius Acids and Bases
- An acid is a substance which dissociates in water to produce one or more hydrogen ions (H+).
- A base is a substance which dissociates in water to produce one or more hydroxide ions (OH-).
Based on this definition, you can see that Arrhenius acids must be soluble in water. Arrhenius acid-base reactions can be summarized with three generic equations:
|An acid will dissociate in water producing hydrogen ions.|
|A base (usually containing a metal) will dissociate in water to produce hydroxide ions.|
|Acids and bases will neutralize each other when mixed. They produce water and an ionic salt, neither of which are acidic or basic.|
The Arrhenius theory is simple and useful. It explains many properties and reactions of acids and bases. For instance, mixing hydrochloric acid (HCl) with sodium hydroxide (NaOH) results in a neutral solution containing table salt (NaCl).
However, the Arrhenius theory is not without flaws. There are many well-known bases, such as ammonia (NH3), that do not contain the hydroxide ion. Furthermore, acid-base reactions are observed in solutions that do not contain water. To resolve these problems, there is a more advanced acid-base theory.
The Brønsted-Lowry theory was proposed in 1923. It is more general than the Arrhenius theory—all Arrhenius acids/bases are also Brønsted-Lowry acids/bases (but not necessarily vice versa).
- Brønsted-Lowry Acids and Bases
- An acid is a substance from which a proton (H+ ion) can be removed. Essentially, an acid donates protons to bases.
- A base is a substance to which a proton (H+) can be added. Essentially, a base accepts protons from acids.
Acids that can donate only one proton are monoprotic, and acids that can donate more than one proton are polyprotic.
These reactions demonstrate the behavior of Brønsted-Lowry acids and bases:
|An acid (in this case, hydrochloric acid) will donate a proton to a base (in this case, water is the base). The acid loses its proton and the base gains it.|
|Water is not necessary. In this case, hydrochloric acid is still the acid, but ammonia acts as the base.|
|The same reaction is happening, but now in reverse. What was once an acid is now a base (HCl → Cl-) and what was once a base is now an acid (NH3 → NH4+). This concept is called conjugates, and it will be explained in more detail later.|
|Two examples of acids (HCl and H3O+) mixing with bases (NaOH and OH-) to form neutral substances (NaCl and H2O).|
|A base (sodium hydroxide) will accept a proton from an acid (ammonia). A neutral substance is produced (water), which is not necessarily a part of every reaction. Compare this reaction to the second one. Ammonia was a base, and now it is an acid. This concept, called amphoterism, is explained later.|
The Brønsted-Lowry theory is by far the most useful and commonly-used definition. For the remainder of General Chemistry, you can assume that any acids/bases use the Brønsted-Lowry definition, unless stated otherwise.
The Lewis definition is the most general theory, having no requirements for solubility or protons.
- Lewis Acids and Bases
- An acid is a substance that accepts a lone pair of electrons.
- A base is a substance that donates a lone pair of electrons.
Lewis acids and bases react to create an adduct, a compound in which the acid and base have bonded by sharing the electron pair. Lewis acid/base reactions are different from redox reactions because there is no change in oxidation state.
Amphoterism and Water
Substances capable of acting as either an acid or a base are amphoteric. Water is the most important amphoteric substance. It can ionize into hydroxide (OH-, a base) or hydronium (H3O+, an acid). By doing so, water is
- Increasing the H+ or OH- concentration (Arrhenius),
- Donating or accepting a proton (Brønsted-Lowry), and
- Accepting or donating an electron pair (Lewis).
|A bare proton (H+ ion) cannot exist in water. It will attach to the nearest water molecule through a coordinate covalent bond, creating the hydronium ion (H3O+). Although many equations and definitions may refer to the "concentration of H+ ions", that is a misleading abbreviation. Technically, there are no H+ ions, only hydronium (H3O+) ions. Fortunately, the number of hydronium ions formed is exactly equal to the number of hydrogen ions, so the two can be used interchangeably.|
Water will dissociate very slightly (which further explains its amphoteric properties).
|The presence of hydrogen ions indicates an acid, whereas the presence of hydroxide ions indicates a base. Being neutral, water dissociates into both equally.|
|This equation is more accurate—hydrogen ions do not exist in water because they bond to form hydronium.|
Another common example of an amphoteric substance is ammonia. Ammonia is normally a base, but in some reactions it can act like an acid.
|Ammonia acts as a base. It accepts a proton to form ammonium.|
|Ammonia also acts as an acid. Here, it donates a proton to form amide.|
Ammonia's amphoteric properties are not often seen because ammonia typically acts like a base. Water, on the other hand, is completely neutral, so its acid and base behaviors are both observed commonly.
Conjugate Acids and Bases
In all the theories, the products of an acid-base reaction are related to the initial reactants of the reaction. For example, in the Brønsted-Lowry theory, this relationship is the difference of a proton between a reactant and product. Two substances which exhibit this relationship form a conjugate acid-base pair.
- Brønsted-Lowry Conjugate Pairs
- An acid that has donated its proton becomes a conjugate base.
- A base that has accepted a proton becomes a conjugate acid.
|Hydroiodic acid reacts with water (which serves as a base). The conjugate base is the iodide ion and the conjugate acid is the hydronium ion. The acids are written in red, and the bases are written in blue. One conjugate pair is written in bold and the other conjugate pair is in italics.|
|Ammonia (basic) reacts with water (the acid). The conjugate acid is ammonium and the conjugate base is hydroxide. Again, acids are written in red, and the bases are written in blue. The conjugate pairs are distinguished with matching fonts.|
Strong and Weak Acids/Bases
A strong acid is an acid which dissociates completely in water. That is, all the acid molecules break up into ions and solvate (attach) to water molecules. Therefore, the concentration of hydronium ions in a strong acid solution is equal to the concentration of the acid.
The majority of acids are weak acids, which dissociate only partially. Typically, only about 1% of the molecules in a 0.1 mol/L weak acid solution dissociate in water. Therefore, the concentration of hydronium ions in a weak acid solution is always less than the concentration of the dissolved acid.
Strong bases and weak bases do not require additional explanation; the concept is the same.
|The conjugate of a strong acid/base is very weak. The conjugate of a weak acid/base is not necessarily strong.|
This explains why, in all of the above example reactions, the reverse chemical reaction does not occur. The stronger acid/base will prevail, and the weaker one will not contribute to the overall acidity/basicity. For example, hydrochloric acid is strong, and upon dissociation chloride ions are formed. Chloride ions are a weak base, but the solution is not basic because the acidity of HCl is overwhelmingly stronger than basicity of Cl-.
|Although the other halogens make strong acids, hydrofluoric acid (HF) is a weak acid. Despite being weak, it is incredibly corrosive—hydrofluoric acid dissolves glass and metal!|
Most acids and bases are weak. You should be familiar with the most common strong acids and assume that any other acids are weak.
|HCl, HBr, HI|Hydrohalic acids|
Within a series of oxyacids, the acids with the greatest number of oxygen atoms are the strongest. For example, nitric acid (HNO3) is strong, but nitrous acid (HNO2) is weak. Perchloric acid (HClO4) is stronger than chloric acid (HClO3), which is stronger than the weak chlorous acid (HClO2). Hypochlorous acid (HClO) is the weakest of the four.
Common strong bases are the hydroxides of Group 1 and most Group 2 metals. For example, potassium hydroxide and calcium hydroxide are some of the strongest bases. You can assume that any other bases (including ammonia and ammonium hydroxide) are weak.
|Acids and bases that are strong are not necessarily concentrated, and weak acids/bases are not necessarily dilute. Concentration has nothing to do with the ability of a substance to dissociate. Furthermore, polyprotic acids are not necessarily stronger than monoprotic acids.|
Properties of Acids and Bases
Now that you are aware of the acid-base theories, you can learn about the physical and chemical properties of acids and bases. Acids and bases have very different properties, allowing them to be distinguished by observation.
Indicators are made with special chemical compounds that react slightly with an acid or base, changing color to reveal whether a solution is acidic or basic. A common indicator is litmus paper. Litmus paper turns red in acidic conditions and blue in basic conditions. Phenolphthalein is colorless in acidic and neutral solutions, but it turns purple once the solution becomes basic. It is useful when attempting to neutralize an acidic solution; once the indicator turns purple, enough base has been added.
A less informative method is to test for conductivity. Acids and bases in aqueous solutions will conduct electricity because they contain dissolved ions. Therefore, acids and bases are electrolytes. Strong acids and bases will be strong electrolytes. Weak acids and bases will be weak electrolytes. This affects the amount of conductivity.
However, acids react with many metals, so testing conductivity may not always be practical.
|The following is for informative purposes only. Do not sniff, touch, or taste any acids or bases as they may result in injury or death.|
The physical properties of acids and bases are opposites.
These properties are very general; they may not be true for every single acid or base.
Another warning: if an acid or base is spilled, it must be cleaned up immediately and properly (according to the procedures of the lab you are working in). If, for example, a solution of sodium hydroxide is spilled, the water will begin to evaporate. Sodium hydroxide does not evaporate, so the concentration of the base steadily increases until it becomes damaging to the surrounding surfaces.
Acids will react with bases to form a salt and water. This is a neutralization reaction. The products of a neutralization reaction are much less acidic or basic than the reactants were. For example, sodium hydroxide (a base) is added to hydrochloric acid.
This is a double replacement reaction.
|Acids react with metal to produce a metal salt and hydrogen gas bubbles.|
|Acids react with metal carbonates to produce water, CO2 gas bubbles, and a salt.|
|Acids react with metal oxides to produce water and a salt.|
Bases are typically less reactive and violent than acids. They do still undergo many chemical reactions, especially with organic compounds. A common reaction is saponification: the reaction of a base with fat or oil to create soap.
1. Name the compounds formed by the following combinations, and identify each as an acid or base:
- a) Br + H
- b) 2H + SO3
- c) K + H
- d) 2H + SO6
- e) 3H + P2
- f) H + BrO100
- g) Na + Cl
2. What are the conjugate acids and bases of the following:
- a) water
- b) ammonia
- c) bisulfate ion
- d) zinc hydroxide
- e) hydrobromic acid
- f) nitrite ion
- g) dihydrogen phosphate ion
3. In a conductivity test, 5 different solutions were set up with light bulbs. The following observations were recorded:
- Solution A glowed brightly.
- Solution B glowed dimly.
- Solution C glowed dimly.
- Solution D did not glow.
- Solution E glowed brightly.
- a) Which solution(s) could contain strong bases?
- b) Which solution(s) could contain weak acids?
- c) Which solution(s) could contain ions?
- d) Which solution(s) could contain pure water?
- e) Based solely on these observations, would it be possible to distinguish between acidic and basic solutions?
4. Identify the conjugate base and conjugate acid in the following equations:
- a) HCl + H2O → H3O+ + Cl-
- b) HClO + H2O → ClO- + H3O+
- c) CH3CH2NH2 + H2O → CH3CH2NH3+ + OH-
5. Identify these bases as Arrhenius, Brønsted-Lowry, or both.
- a) strontium hydroxide
- b) butyllithium (C4H9Li)
- c) ammonia
- d) potassium hydroxide
- e) potassium iodide
6. Based on the Brønsted-Lowry Theory of Acids and Bases, would you expect pure water to have no dissolved ions whatsoever? Explain, using a balanced chemical equation.
Titration and pH
Ionization of Water
Water is a very weak electrolyte. It will dissociate into hydroxide and hydronium ions, although only in a very small amount. Because pure water is completely neutral, it always dissociates in equal amounts of both hydroxide and hydronium. Once acidic or basic substances have been added to pure water, the concentration of the ions will change. Regardless of which acid-base theory is used, acids and bases all have one important thing in common:
- All acids increase the H+ concentration of water.
- All bases increase the OH- concentration of water.
Furthermore, the concentration of hydrogen ions multiplied by the concentration of hydroxide ions is a constant. This constant is known as the ionization constant of water, or Kw. At room temperature it equals 10^-14 mol²/L². Thus:
Kw = [H+][OH-] = 1.0 × 10^-14
In a neutral solution, the concentrations of H+ and OH- are both equal to 10^-7 mol/L. Using the above equation, the concentration of one ion can be determined if the concentration of the other ion is known. This equation further demonstrates the relationship between acids and bases: as the acidity (H+) increases, the basicity (OH-) must decrease.
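The inverse relationship between the two ion concentrations can be checked numerically. This short Python sketch (the function name is my own) divides Kw by one concentration to get the other:

```python
import math

KW = 1.0e-14  # ionization constant of water at room temperature, mol^2/L^2

def other_ion(concentration):
    """Given [H+], return [OH-] (or vice versa); their product is always Kw."""
    return KW / concentration

# In neutral water, both ions are 10^-7 mol/L:
assert math.isclose(other_ion(1e-7), 1e-7)

# Acidifying to [H+] = 10^-3 mol/L forces [OH-] down to about 10^-11 mol/L:
print(other_ion(1e-3))  # ≈ 1e-11
```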
The pH Scale
To measure the acidity or basicity of a substance, the pH scale is employed.
- The pH Scale
- A completely neutral substance has a pH of 7.
- Acids have a pH below 7.
- Bases have a pH above 7.
pH usually ranges between 0 and 14, but it can be any value. Battery acid, for example, has a negative pH because it is so acidic.
Definition of pH
The pH scale is mathematically defined as:
pH = -log[H+]
Substances that release protons or increase the concentration of hydrogen ions (or hydronium ions) will lower the pH value.
There is also a less common scale, the pOH scale. It is defined as:
pOH = -log[OH-]
Substances that absorb protons or increase the concentration of hydroxide ions will lower the pOH value.
The sum of pH and pOH is always 14 at room temperature:
pH + pOH = 14
A strong acid or strong base will completely dissociate in water, so the concentration of the acid/base is equal to the concentration of H+ or OH-. If you know the concentration of the acid or base, then you can simply plug that number into the pH or pOH formula. The sum of pH and pOH will always equal 14 at room temperature, so you can interconvert these two values.
If you know the H+ concentration and need to know the OH- concentration (or vice versa), use the definition of Kw above. The product of the two ion concentrations will always equal 10-14 at room temperature.
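A minimal Python sketch of these conversions, assuming complete dissociation of a strong acid or base. The helper names and example concentrations are illustrative, not from the text:

```python
import math

def pH(h_conc):
    """pH = -log10([H+])"""
    return -math.log10(h_conc)

def pOH(oh_conc):
    """pOH = -log10([OH-])"""
    return -math.log10(oh_conc)

# Strong acids dissociate completely, so [H+] equals the acid concentration.
# For 0.01 mol/L HCl:
acid_pH = pH(0.01)        # ≈ 2.0
acid_pOH = 14 - acid_pH   # ≈ 12.0, since pH + pOH = 14 at room temperature

# For 0.001 mol/L NaOH (a strong base), work from pOH to pH:
base_pOH = pOH(0.001)     # ≈ 3.0
base_pH = 14 - base_pOH   # ≈ 11.0
```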
Titration is the controlled addition of a solution of known concentration (the standard solution) to another solution in order to determine the second solution's concentration. One solution is acidic and the other is basic. An indicator is added to the mixture, chosen so that it changes color when chemically equivalent amounts of acid and base have been mixed. This is known as the equivalence point. The equivalence point does not necessarily occur at pH 7.0.
Once the equivalence point has been reached, the unknown concentration can be determined mathematically.
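At the equivalence point, the moles of standard solution delivered are related to the moles of unknown by the reaction's mole ratio, which gives the unknown concentration directly. A hedged Python sketch of that calculation (the function name and example volumes are my own, not from the text):

```python
def titration_concentration(std_molarity, std_volume, unknown_volume,
                            mole_ratio=1.0):
    """Concentration of the unknown solution at the equivalence point.

    mole_ratio is moles of standard per mole of unknown; the volumes may be
    in any unit, as long as both use the same one.
    """
    moles_std = std_molarity * std_volume
    return moles_std / (mole_ratio * unknown_volume)

# 25.0 mL of 0.100 M NaOH standard neutralizes 20.0 mL of HCl (1:1 ratio):
print(titration_concentration(0.100, 25.0, 20.0))  # 0.125 (mol/L)
```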
1) 5.00 g of NaOH are dissolved to make 1.00 L of solution.
- a) What is the concentration of H+?
- b) What is the pH?
Buffer systems are systems in which there is a significant (and nearly equivalent) amount of a weak acid and its conjugate base—or a weak base and its conjugate acid—present in solution. This coupling provides a resistance to change in the solution's pH. When strong acid is added, it is neutralized by the conjugate base. When strong base is added, it is neutralized by the weak acid. However, too much acid or base will exceed the buffer's capacity, resulting in significant pH changes.
|Consider an arbitrary weak acid, HA, and its conjugate base, A-, in equilibrium.|
|The addition of a strong acid will cause only a slight change in pH due to neutralization.|
|Likewise, the addition of a strong base will cause only a slight change in pH.|
Buffers are useful when a solution must maintain a specific pH. For example, blood is a buffer system because the life processes in a human only function within a specific pH range of 7.35 to 7.45. When, for example, lactic acid is released by the muscles during exercise, buffers within the blood neutralize it to maintain a healthy pH.
Making a Buffer
Once again, let's consider an arbitrary weak acid, HA, which is present in a solution. If we introduce a salt of the acid's conjugate base, say NaA (which will provide the A- ion), we now have a buffer solution. Ideally, the buffer would contain equal amounts of the weak acid and conjugate base.
Instead of adding NaA, what if a strong base such as NaOH were added? In that case, the hydroxide ions would neutralize some of the weak acid, creating water and A- ions. Conversely, if the solution contained only A- ions and a strong acid like HCl were added, the acid would neutralize some of them and create HA.
As you can see, there are three ways to create a buffer from a weak acid: mix the weak acid with a salt of its conjugate base, partially neutralize the weak acid with a strong base, or partially neutralize a salt of the conjugate base with a strong acid. The same three methods apply to a weak base and its conjugate acid. All six combinations create nearly equal amounts of a weak acid and its conjugate base, or a weak base and its conjugate acid.
Buffers and pH
To determine the pH of a buffer system, you must know the acid's dissociation constant, Ka (or Kb for a base). This value determines the strength of an acid (or base). It is explored more thoroughly in the Equilibrium unit, but for now it suffices to say that it is simply a measure of strength for acids and bases. The dissociation constants for acids and bases are determined experimentally.
|The pKa of an acid is the negative logarithm of its acid dissociation constant. This is analogous to pH (the negative logarithm of the H+ concentration).|
The Henderson-Hasselbalch equation allows the calculation of a buffer's pH. It is:
pH = pKa + log([A-]/[HA])
For a buffer created from a base, the equation is:
pOH = pKb + log([HB+]/[B])
Using these equations requires determining the ratio of base to acid in the solution.
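A small Python sketch of the Henderson-Hasselbalch calculation. The pKa of acetic acid (≈ 4.74) is a standard literature value used here only for illustration:

```python
import math

def buffer_pH(pKa, conj_base_conc, weak_acid_conc):
    """Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
    return pKa + math.log10(conj_base_conc / weak_acid_conc)

# Equal amounts of weak acid and conjugate base: the pH equals the pKa.
print(buffer_pH(4.74, 0.10, 0.10))  # 4.74

# Doubling the conjugate base raises the pH by log10(2):
print(buffer_pH(4.74, 0.20, 0.10))  # ≈ 5.04
```

Note how the pH depends only on the ratio of base to acid, which is why diluting a buffer leaves its pH (ideally) unchanged.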
Reactions of Acids and Bases
To summarize the properties and behaviors of acids and bases, this chapter lists and explains the various chemical reactions that they undergo. You may wish to review chemical equations and types of reactions before attempting this chapter.
The following reactions are net ionic equations. In other words, spectator ions are not written. If an ion does not partake in the reaction, it is simply excluded. The spectator ions can be found because they occur on both the reactant and the product side of the equation. Cross them out and rewrite the equation without them. Of course, an ion cancels only where its coefficients on the two sides are equal.
Canceling out the spectator ions explains the net in net ionic equations. The ionic part means that dissolved compounds are written as ions instead of compounds. Acids, bases, and salts are all ionic, so they are written as separate ions if they have dissociated.
- Net Ionic Equations
- Soluble salts are written as ions.
- e.g.: Na+ + Cl-
- Solids, liquids, and gases are written as compounds.
- e.g.: NaCl(s), H2O(l), HCl(g)
- Strong acids and strong bases are written as ions (because they dissociate almost completely).
- e.g.: H+ + NO3-
- Weak acids and weak bases are written as compounds (because they barely dissociate).
- e.g.: HNO2
As an example, sodium bicarbonate (NaHCO3) would be written as Na+ and HCO3- because the salt will dissociate, but the bicarbonate will not dissociate (it's a weak acid).
When an acid and a base react, they form a neutral substance, often water and a salt.
First, let's examine the neutralization of a strong acid with a strong base.
|Solid potassium hydroxide is added to an aqueous solution of hydrochloric acid. Notice how the solid is written as a compound, but the acid is written as ions because it dissociates.|
|The hydrogen ions will react with hydroxide ions to form water.|
|Ignoring spectator ions, this is the net ionic equation.|
|Whenever a strong acid neutralizes a strong base, the net ionic equation is always H+ + OH- → H2O.|
Now, let's see some examples involving weak acids and weak bases.
|Excess hydrochloric acid is added to a solution of sodium phosphate. Phosphoric acid is weak, so the phosphate ions will react with hydrogen ions. The result is a solution with some, but much less, hydrogen ions, so it is much closer to neutral than either of the original reactants.|
|Equimolar amounts of sodium phosphate and hydrochloric acid are mixed. Notice the difference between this reaction and the previous one.|
|A strong base is added to a solution of calcium bicarbonate. (Bicarbonate is a weak acid.)|
|A strong acid is added to a solution of calcium bicarbonate. Gas bubbles appear.|
Many reactions result in the formation of gas bubbles or a solid precipitate that will make the solution cloudy. The last equation brings up an interesting application. Many rocks and minerals contain calcium carbonate or calcium bicarbonate. To identify these rocks, geologists can perform the "acid test". A drop of acid is applied, and the presence of gas bubbles indicates carbonate.
|Many ions and compounds are amphoteric. They can react with (or behave as) acids and bases. Bicarbonate is a good example, as you can see by comparing the last two equations above.|
Here are more examples of neutralization reactions.
|Solid ammonium chloride crystals are dissolved into a solution of sodium hydroxide. The smell of ammonia is detected.|
|Ammonia gas is bubbled through a solution of hydrochloric acid. This reaction is essentially the opposite of the previous. In that reaction, ammonium ions react with base to form ammonia gas. In this reaction, ammonia gas reacts with acid to form ammonium ions.|
|Ammonia (a weak base) reacts with acetic acid (also weak). The resulting solution is nearly neutral, but it will be slightly basic because ammonia is stronger than acetic acid.|
|Hydrogen sulfide gas is bubbled into a strong base.|
|A strong acid is added to the above result, and hydrogen sulfide gas is released.|
Literally, an anhydride is a substance that does not contain water. In acid-base chemistry, it is a substance that reacts with water to form an acid or base. Anhydrides are usually gases that dissolve into water and react to form an acid or base. They can also be solids that react with water.
|Gaseous dinitrogen pentoxide is bubbled through water to form nitric acid.|
|Dinitrogen trioxide is mixed with water to form nitrous acid.|
The main difference between those two equations is the fact that nitrous acid is weak and thus does not dissociate, whereas nitric acid is strong and dissociates into ions.
Here are a few more examples of anhydride reactions.
|Solid potassium oxide is added to water to form a strong base.|
|Phosphorus(V) oxide powder is mixed into water to form a weak acid.|
It is important to remember which acids are strong and which are weak. Review this if necessary.
|Anhydrides can also react with acidic or basic solutions. In that case, you may find it helpful to first determine the anhydride's reaction with water, then determine that reaction with the acid or base.|
For example, sulfur dioxide gas (acidic anhydride) is bubbled through a solution of calcium hydroxide (basic).
|First, determine the reaction of the anhydride with water.|
|Then, determine the reaction of the acid and base. This is a double replacement reaction.|
|Add the two reactions together.|
|Cancel out spectators. Also, calcium hydroxide should be ionized (but calcium sulfite is a solid precipitate). This is the final net ionic equation.|
Here are more examples.
|Calcium oxide crystals (basic anhydrides) are added to a strong acid. Notice that it does not matter what the acid is (nitric, sulfuric, etc.) because it is strong and this reaction only requires the hydrogen ions. In other words, the anions of the strong acid are spectators and are not written.|
|Excess sulfur dioxide gas is bubbled into a dilute solution of strong base. The base is the limiting reactant.|
|Sulfur dioxide gas is bubbled into an excess of basic solution.|
Remember that water is involved in these reactions, but it is not written if it occurs on both sides of the equation.
|Anhydrides can undergo neutralization reactions, even without the presence of water.|
|Solid calcium oxide (basic anhydride) is exposed to carbon dioxide gas subliming from dry ice (acidic anhydride). The resulting solid is a salt.|
|Solid calcium oxide is exposed to a stream of sulfur trioxide gas. The resulting solid is a neutral salt.|
A salt of a weak acid and strong base dissociates and reacts in water to form OH-. A salt of a strong acid and weak base dissociates and reacts in water to form H+. This process is called hydrolysis.
In this first example, aluminum nitrate is dissolved in water.
|First, the salt dissociates in the water. It isn't necessary to write H2O in this reaction.|
|Now, at least one of the ions will react with water. You know that nitric acid is strong, so the nitrate ion will not take an H+ ion from water. Instead, the aluminum ion will react with water, releasing a hydrogen ion.|
|This is the net ionic equation. The resulting solution is acidic.|
The solution is acidic not because nitric acid is strong, but because aluminum hydroxide is a weak base, so the aluminum ion hydrolyzes and releases hydrogen ions.
Here is an easier example.
|First, the salt dissociates. Again, the H2O need not be written.| | <urn:uuid:86183f1d-325d-4114-90fb-d03248305a96> | 3.953125 | 49,690 | Content Listing | Science & Tech. | 47.002553 | 95,540,919 |
Oceans occupy some 70% of the Earth’s surface and extend from the North Pole to the shores of Antarctica. There are great differences in the character of the oceans, and these differences are of fundamental importance, both in the geography of the oceans themselves, and to the climatic patterns of the whole Earth. The surface of the ocean is differentiated into regions, or zones, with different hydrologic properties resulting from unevenness in the action of solar radiation and other phenomena of the climate of the atmosphere.
Keywords: Trade Wind, Hydrologic Region, Oceanic Type, Polar Division, Hydrologic Zone
Released on December 30, 2007
Although originally designed to measure atmospheric water vapor and temperature profiles for weather forecasting, data from the Atmospheric Infrared Sounder (AIRS) instrument on NASA's Aqua spacecraft are now also being used by scientists to observe atmospheric carbon dioxide. Scientists from NASA; the National Oceanic and Atmospheric Administration; the European Center for Medium-Range Weather Forecasts; the University of Maryland, Baltimore County; Princeton University, Princeton, New Jersey; and the California Institute of Technology (Caltech), Pasadena, Calif., are using several different methods to measure the concentration of carbon dioxide in the mid-troposphere (about eight kilometers, or five miles, above the surface). This visualization shows Aqua/AIRS mid-tropospheric carbon dioxide from July 2003. Low concentrations, 360 ppm, are shown in blue and high concentrations, 385 ppm, are shown in red. Notice that despite carbon dioxide's high degree of mixing, the regional patterns of atmospheric sources and sinks are still apparent in mid-troposphere carbon dioxide concentrations. This pattern of high carbon dioxide in the Northern Hemisphere (North America, Atlantic Ocean, and Central Asia) is consistent with model predictions.
Tick Genome Reveals Secrets of a Successful Bloodsucker
News Feb 09, 2016
With tenacity befitting their subject, an international team of nearly 100 researchers toiled for a decade and overcame tough technical challenges to decipher the genome of the blacklegged tick (Ixodes scapularis).
“Ticks spread more different kinds of infectious microbes to people and animals than any other arthropod group,” said NIAID Director Anthony S. Fauci, M.D. “The spiral-shaped bacterium that causes Lyme disease is perhaps the best known microbe transmitted by ticks; however, ticks also transmit infectious agents that cause human babesiosis, anaplasmosis, tick-borne encephalitis and other diseases. The newly assembled genome provides insight into what makes ticks such effective disease vectors and may generate new ways to lessen their impact on human and animal health.”
Catherine A. Hill, Ph.D., of Purdue University, headed the team of investigators. Aside from the logistical challenges of coordinating activities of dozens of workers across many time zones, the researchers’ focus was a creature that is extremely difficult to maintain and that lives a long time — up to two years in the wild and nine months in the lab, Dr. Hill noted. Ixodes ticks have three blood-feeding life stages, and during each one, they feed on a different vertebrate animal. During feeding, ticks ingest blood for hours or days at a time. After mating, adult female ticks rapidly imbibe a large blood meal during which they expand hugely. “Because genes may switch on or off depending on the life stage of the tick, we needed to culture and collect ticks at each stage for analysis. This was not easy to do,” said Dr. Hill.
Another challenge was the sheer size of the tick genome — some 2.1 billion DNA base pairs — and expansive regions where sequences are repeated. “The degree of DNA repetition — approximately 70 percent of the total — made assembling the full genome in the correct order very difficult,” Dr. Hill said. In the end, the team determined the order and sequence of about two-thirds of the total genome. “We determined the sequence for 20,486 protein-coding genes,” she said, “of which 20 percent may be unique to ticks. Those tick-specific genes are like guideposts that say ‘start here’ as we look for new ways to counter infectious ticks.”
Although the latest research represents just a first look at the tick genome, the scientists have already identified genes and protein families that shed light on why Ixodes ticks succeed so well as parasites and hint at the reasons they excel at spreading pathogens, Dr. Hill noted. For example, compared with other blood-feeders, ticks have many more proteins devoted to consuming, concentrating and detoxifying their iron-containing food. Although mosquitoes — which quickly siphon up relatively small amounts of blood through a tube-like mouthpiece — have several proteins dedicated to blood digestion, ticks have many more proteins involved in this process. Other genes code for proteins that help ticks concentrate the blood and rapidly excrete excess water that accompanies large blood meals. Still other genes allow ticks to quickly expand their stiff outer coats to accommodate a 100-fold increase in total body size during blood feeding.
Other peculiarities of the tick’s lifestyle reflected in the genome include genes associated with the multifaceted sensory systems that the parasite uses when “questing” for a host during each of its separate blood-feeding stages. Compared with mosquitoes, ticks appear to have fewer genes used to detect hosts, and, unlike a mosquito’s “smell” receptors, ticks may use “taste” receptors to locate their food sources.
Each of the newly identified proteins is a potential target for new, tick-specific interventions, explained Dr. Hill. “The genome gives us a code book to the inner workings of ticks. With it, we can now begin to hack their system and write a counter-script against them.”
In an effort to explain variations in Lyme disease prevalence across the United States, the team also examined genetic diversity within and among I. scapularis populations gathered from five states in the Northeast and Midwest and three in the South. Some have speculated that ticks in the Northeast and Midwest spread the bacteria that cause Lyme disease more easily than those in the South, or that the two populations perhaps comprise separate species. The genetic analysis showed that there is only one species of I. scapularis, said Dr. Hill, but subtle genetic differences were detected, and these may help explain some of the variance in the ability of populations to transmit disease and, therefore, affect disease prevalence.
Dr. Hill admits to a grudging admiration for her eight-legged subjects. “I find them almost endearing in the way they stick so firmly to the business of parasitizing their hosts. They are persistent and resilient. In a way, our team took a page from the tick’s book in working together over so many years until we achieved our goal.”
House Plants as an Early Warning System? (News)
Researchers are exploring the future of houseplants as aesthetically pleasing and functional sirens of home health. The idea is to genetically engineer house plants to serve as subtle alarms that something is amiss in our home and office environments.
Identical Twin Study Shows Impact of a Lifetime of Exercise on Fitness (News)
When it comes to being fit, are genes or lifestyle more important? Researchers removed the nature part of the equation by studying a pair of identical twins who had taken radically different fitness paths over three decades. One became an Ironman triathlete while the other remained relatively sedentary over the last 30 years. | <urn:uuid:9e1ccf70-1302-457d-8fa5-f613fe1a799e> | 3.53125 | 1,203 | Truncated | Science & Tech. | 40.207059 | 95,540,927 |
Assessing Watershed-Wildfire Risks on National Forest System Lands in the Rocky Mountain Region of the United States
Abstract: Wildfires can cause significant negative impacts to water quality with resultant consequences for the environment and human health and safety, as well as incurring substantial rehabilitation and water treatment costs. In this paper we will illustrate how state-of-the-art wildfire simulation modeling and geospatial risk assessment methods can be brought to bear to identify and prioritize at-risk watersheds for risk mitigation treatments, in both pre-fire and post-fire planning contexts. Risk assessment results can be particularly useful for prioritizing management of hazardous fuels to lessen the severity and likely impacts of future wildfires, where budgetary and other constraints limit the amount of area that can be treated. Specifically we generate spatially resolved estimates of wildfire likelihood and intensity, and couple that information with spatial data on watershed location and watershed erosion potential to quantify watershed exposure and risk. For a case study location we focus on National Forest System lands in the Rocky Mountain Region of the United States. The Region houses numerous watersheds that are critically important to drinking water supplies and that have been impacted or threatened by large wildfires in recent years. Assessment results are the culmination of a broader multi-year science-management partnership intended to have direct bearing on wildfire management decision processes in the Region. Our results suggest substantial variation in the exposure of and likely effects to highly valued watersheds throughout the Region, which carry significant implications for prioritization. In particular we identified the San Juan National Forest as having the highest concentration of at-risk highly valued watersheds, as well as the greatest amount of risk that can be mitigated via hazardous fuel reduction treatments. To conclude we describe future opportunities and challenges for management of wildfire-watershed interactions.
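The coupling of burn probability, fire intensity, and watershed susceptibility described in the abstract is, at its core, an expected-loss calculation. The sketch below uses invented probabilities and response scores (it is not the authors' model) to show how erosion potential scales a pixel's expected annual value change:

```python
# Hypothetical annual burn probabilities by fire-intensity class and
# relative watershed response (negative = loss) at each class.
# All numbers are invented for illustration.
burn_prob = {"low": 0.006, "moderate": 0.003, "high": 0.001}
response = {"low": -0.05, "moderate": -0.30, "high": -0.80}

def expected_net_value_change(erosion_potential):
    """Expected annual change in watershed value for one pixel:
    sum over intensity classes of p_i * response_i, scaled by the
    watershed's erosion potential."""
    return sum(burn_prob[i] * response[i] * erosion_potential
               for i in burn_prob)

# Watersheds with higher erosion potential show larger expected losses,
# so they rank higher for hazardous-fuel treatment.
for erosion in (0.5, 1.0, 2.0):
    print(erosion, expected_net_value_change(erosion))
```

Ranking watersheds by this quantity is what makes the "greatest amount of risk that can be mitigated" comparison in the abstract possible.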
Thompson, M.P.; Scott, J.; Langowski, P.G.; Gilbertson-Day, J.W.; Haas, J.R.; Bowne, E.M. Assessing Watershed-Wildfire Risks on National Forest System Lands in the Rocky Mountain Region of the United States. Water 2013, 5, 945-971.
Nutrient contribution of nonpoint source runoff in the Las Vegas Valley
A Geographic Information System (GIS) based non-point source runoff model is developed for the Las Vegas Valley, Nevada, to estimate the nutrient loads during the years 2000 and 2001. The estimated nonpoint source loads are compared with current wastewater treatment facilities loads to determine the non-point source contribution of total phosphorus (TP), total nitrogen (TN), and total suspended solids (TSS) on a monthly and annual time scale. An innovative calibration procedure is used to estimate the pollutant concentrations for different land uses based on available water quality data at the outlet. Results indicate that the pollutant concentrations are higher for the Las Vegas Valley than previously published values for semi-arid and arid regions. The total TP and TN loads from nonpoint sources are approximately 15 percent and 4 percent, respectively, of the total load to the receiving water body, Lake Mead. The TP loads during wet periods approach the permitted loads from the wastewater treatment plants that discharge into Las Vegas Wash. In addition, the GIS model is used to track pollutant loads in the stream channels for one of the subwatersheds. This is useful for planning the location of Best Management Practices to control nonpoint pollutant loads.
Geographic information systems; Lake Mead; Las Vegas Valley; Modeling; Nonpoint source pollution; Storm water management; Surface water hydrology; Urban run-off; Wastewater treatment; Watershed management
Desert Ecology | Environmental Sciences | Fresh Water Studies
Piechota, T. C. Nutrient contribution of nonpoint source runoff in the Las Vegas Valley. Journal of the American Water Resources Association, 40(6).
Americans are spending their lives farther from forests than they did at the end of the 20th century and, contrary to popular wisdom, the change is more pronounced in rural areas than in urban settings.
A study published Feb. 22 in the journal PLOS ONE says that between 1990 and 2000, the average distance from any point in the United States to the nearest forest increased by 14 percent—or about a third of a mile. And while the distance isn't insurmountable for humans in search of a nature fix, it can present challenges for wildlife and have broad effects on ecosystems.
Dr. Giorgos Mountrakis, an associate professor in the ESF Department of Environmental Resources and co-author of the study, called the results "eye-opening."
"Our study analyzed geographic distribution of forest losses across the continental U.S. While we focused on forests, the implications of our results go beyond forestry," Mountrakis said.
The study overturned conventional wisdom about forest loss, the researcher noted. The amount of forest attrition—the complete removal of forest patches—is considerably higher in rural areas and in public lands. "The public perceives the urbanized and private lands as more vulnerable," said Mountrakis, "but that's not what our study showed. Rural areas are at a higher risk of losing these forested patches.
"Patches of forests are important to study because they serve a lot of unique ecoservices," Mountrakis said, citing bird migration as one example. "You can think of the forests as little islands that the birds are hopping from one to the next."
"Typically we concentrate more on urban forest," said Sheng Yang, an ESF graduate student and co-author of the study, "but we may need to start paying more attention—let's say for biodiversity reasons—in rural rather than urban areas. Because the urban forests tend to receive much more attention, they are better protected."
Forest dynamics are an integral part of larger ecosystems and have the potential to significantly affect water chemistry, soil erosion, carbon sequestration patterns, local climate, biodiversity distribution, and human quality of life, Mountrakis said.
Using forest maps over the entire continental United States, researchers compared satellite data from the 1990s with data from 2000. "We did a statistical analysis starting with forest maps from 1990 and compared it to forests in 2000," said Mountrakis.
The study looked at the loss of forest by calculating the distance to the nearest forest from every area in the landscape, Mountrakis said. The loss of a smaller isolated forest could have a greater environmental impact than losing acreage within a larger forest.
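The per-pixel "distance to nearest forest" metric described above can be sketched on a toy raster with a multi-source breadth-first search. This is a simplified stand-in for illustration, not the study's actual implementation, which used national forest maps:

```python
from collections import deque

def nearest_forest_distance(grid):
    """Multi-source BFS over a binary raster (1 = forest, 0 = not).
    Returns the per-cell distance, in 4-neighbour steps, to the
    nearest forest cell."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                dist[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

forest_1990 = [[1, 0, 0],
               [0, 0, 0],
               [0, 0, 1]]
forest_2000 = [[1, 0, 0],
               [0, 0, 0],
               [0, 0, 0]]   # the small isolated patch was lost

d90 = nearest_forest_distance(forest_1990)
d00 = nearest_forest_distance(forest_2000)
print(d90[2][2], d00[2][2])   # 0 4: losing one small patch raises distance sharply
```

The toy example mirrors the paper's point: removing a small isolated patch raises nearest-forest distances over a wide area, while losing the same acreage inside a large forest barely moves the metric.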
The study also found distance to the nearest forest is considerably greater in western forests than eastern forests.
"So if you are in the western U.S. or you are in a rural area or you are in land owned by a public entity, it could be federal, state or local, your distance to the forest is increasing much faster than the other areas," he said. "The forests are getting further away from you."
"Distances to nearest forest are also increasing much faster in less forested landscapes. This indicates that the most spatially isolated—and therefore important—forests are the ones under the most pressure," said Yang.
The loss of these unique forests proposes a different set of side effects, Mountrakis said, "for local climate, for biodiversity, for soil erosion. This is the major driver—we can link the loss of the isolated patches to all these environmental degradations."
Along with research into the drivers behind the loss of forests, Mountrakis expects the differing geographic distributions and differences in land ownership and urbanization levels will initiate new research and policy across forestry, ecology, social science and geography.
This work was supported by the National Urban and Community Forestry Advisory Council and the McIntire-Stennis program, U.S. Forest Service.
The proteins used in this study, lysozyme, ribonuclease, myoglobin and α-lactalbumin, have similar molecular masses (ca 15 000 dalton) and similar dimensions, but they differ with respect to isoelectric point and structure stability. Sequential and competitive adsorption experiments have been performed in a systematic way using sorbent surfaces of varying hydrophobicity and charge density. Adsorbed and desorbed amounts were calculated from changes in the protein solution concentrations as determined by FPLC. In addition, electrophoretic mobilities of bare and protein-covered sorbent particles were measured. Sequential and competitive adsorption of lysozyme, ribonuclease and myoglobin are primarily influenced by electrostatic interactions. With α-lactalbumin an extra factor, related to the low structure stability of this protein, contributes to preferential adsorption. As a rule, displacement of pre-adsorbed protein by a secondly supplied different protein is facilitated by the hydrophilicity of the sorbent surface. © 1990.
Analogous structures are structures that serve similar purposes yet are found in species from different evolutionary lines. The study of analogous structures is a type of anatomical comparison between two different species, used to gain evidence for convergent evolution. How are analogous structures used as evidence for convergent evolution, and what are some examples? Let's take a look.
Evidence for evolution comes in many different forms, from fossils, DNA sequences, and the discipline of developmental biology among other sources. Anatomical comparisons between species of animals is one of the most common ways that scientists determine the evolutionary history of different animals. There are different types of evolutionary patterns, convergent evolution and divergent evolution among them. It’s important to understand the differences between the two types of evolution, as it gives context to the difference between analogous structures and another form of anatomical comparison known as homologous structures.
Convergent Evolution Vs. Divergent Evolution
Convergent evolution refers to the phenomenon where different species become more similar to one another over time. A reason this may occur is that species which live in similar environments are often subject to the same evolutionary pressures, and thus evolve to occupy the same or highly similar ecological niches.
One example of convergent evolution is the similarity between Asian fork-tailed sunbirds and North American hummingbirds, which look extremely similar despite coming from different evolutionary lineages. The birds were subjected to similar environmental pressures and became more similar due to living in similar environments.
The opposite of convergent evolution is divergent evolution, where one evolutionary lineage splits apart over time to give rise to different species. Divergent evolution is often caused by shifts in environmental pressures, which may occur due to changes in the environment or a species migrating to a new area. If a species migrates to a new area and is forced to fill a new ecological niche, a new species may evolve rather quickly. This form of evolution is sometimes referred to as adaptive radiation. An example of divergent evolution is found in a family of fish, the Characidae. Many different lines of Characidae evolved from a common lineage over the years, with the teeth and jaws of the fish changing to adapt to the food supplies present in new environments. Both tetras and piranhas are examples of the divergent evolution witnessed in the Characidae.
As previously mentioned, analogous structures are structures within the bodies of living things that fulfill a similar role even though they come from different evolutionary lineages. The most frequent cause of analogous structures is convergent evolution, where organisms are subjected to similar environmental pressures. This can occur in different areas of the world, it doesn’t have to be the same area. All that is required for a structure to be an analogous structure is that the structure has evolved due to similar selection pressures found in similar environments, creating a need for the different species to fill the same niche in the different areas.
The general process of natural selection doesn’t change based on geographical location, so despite the different areas, if the environment is similar the same kinds of adaptations will be favored for preservation and passed down to the offspring of individuals with those adaptations. This process continues until most of the population of animals has that adaptation.
Significant adaptations can end up changing the structure of the species, body parts may transform, be lost, gained, or shifted around in the process of evolution, depending on what function the body part plays. Body parts that are unnecessary or of mild help in a new environment may shrink or be lost entirely while body parts that prove useful in the new environment may grow.
The study of analogous structures has proven immensely beneficial in uncovering the evolutionary lineages of species. Early taxonomies of species, such as Carolus Linnaeus’ attempt, often grouped animals into groups based on superficial characteristics (similar looking animals). This created groupings which were incorrect when they were compared to evolutionary groupings based off of analogous structures, which proved that species didn’t have to be related to look similar.
It is important to remember that analogous structures don’t necessarily represent similar evolutionary paths. An analogous structure may have evolved under one set of circumstances long ago, while its counterpart structure on a different organism may be fairly new in comparison. Analogous structures may also shift and transform through different stages, looking quite different from one another before they end up looking similar. This means that analogous structures aren’t necessarily evidence for a common ancestor between two species, and in fact it is more likely that the structures merely arose under similar circumstances and that the organisms are hardly related at all.
Examples Of Analogous Structures
One of the most notable examples of analogous structures is human and octopus eyes. The eyes of humans and the eyes of octopi are very similar for the most part, with the only substantial difference being that the eye of an octopus doesn’t have a blindspot like the human eye does. Yet octopi and humans aren’t very related and are located far away from one another on the phylogenetic tree.
Dolphins and sharks are another notable example of analogous structures and convergent evolution. The two animals share many features, including their overall body shape, coloration and fin placement. Yet dolphins are mammals while sharks are fish, meaning that in terms of evolutionary lineage dolphins share more in common with rats than sharks. This is supported by DNA evidence.
Wings are one of the most common examples of analogous structures, as many organisms have wings yet evolved them independently. Insects, birds, and bats all have wings, yet bats are mammals, more closely related to other mammals than to insects or birds.
Remember that the conceptual opposite of analogous structures is homologous structures, which exist in animals that share a common ancestry yet differ in function from one another. Homologous structures arise through divergent evolution from a common ancestor.
Why is SpacecraftAzimuth 180 degrees out for two nearly identical LROC R images?
I have been trying to understand the output of campt. One of the stumbling blocks has been SpacecraftAzimuth. The documentation is ambiguous so I have been trying to understand it by trial and error. I thought I had it figured out until I found two nearly identical images with totally different SpacecraftAzimuth values:
When I run campt on LROC image M1223450772R, I get a SpacecraftAzimuth of 179.871. When I run it on M1218738444R for the same lat lon, I get 1.564.
M1223450772R and M1218738444R are nearly identical in all other respects except for sun angle and 2 months separation in time. Does SpacecraftAzimuth have something to do with sun angle?
#1 Updated by Aaron Curtis 5 months ago
My new theory is that SpacecraftAzimuth is constrained to the plane of the image and described by three points: 1) At the same line as the requested GroundPoint, at the highest sample value (right edge of image). 2) The requested GroundPoint. 3) Some projection of the spacecraft position onto the image which is different than the GroundPoint... maybe the lat / lon projection?
If this theory is right then the reason it goes from 180 to 1 is because point 3) has moved from slightly left of point 1) to slightly right of it.
Would be great if somebody can confirm that I've guessed correctly about these points.
#2 Updated by Lynn Weller 5 months ago
I can't give you the affirmative answer you are looking for, but maybe the attached slide will help until someone with a better understanding can get to this post. The slide is from an ISIS beginner's workshop. I have access to it in house, but it seems it ought to be online somewhere (under our Wiki section), but I'm not seeing this particular workshop there. I'll see if someone can add it since it has a ton of useful information.
#4 Updated by Aaron Curtis 5 months ago
Thanks Tammy, that image confirms my newfound understanding and explains the numbers. The glossary entries for all of these azimuth things really should be re-written for clarity. To begin with, get rid of the "... angle from a point of origin to the direction of the .." sentences. They are very confusing and lead one to waste time wondering how you can have an angle from a point to a direction (you can't). Also, I presume the Sun Azimuth glossary entry should be renamed to SubSolarAzimuth.
I would recommend the definition of SpacecraftAzimuth be written:
The angle between points A, B, and C in the plane of the image, measured clockwise if looking from the spacecraft direction, where:
* A is the point on the image co-linear with the spacecraft and the center of the planet. (Actually, is it a point on the image or the ground surface? I'm not quite sure.)
* B is the requested image or ground point of interest
* C is the point at the right hand edge of the image (maximum sample) halfway down the image (number of lines / 2)
The other Azimuth glossary definitions could be similarly improved. | <urn:uuid:8eede1dc-d3c1-461c-a0cf-bbaa1c7663ea> | 2.625 | 699 | Q&A Forum | Science & Tech. | 60.502374 | 95,541,038 |
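If the three-point interpretation above is right (it remains a guess, not confirmed ISIS behavior), the azimuth reduces to a signed angle between two vectors in (sample, line) image coordinates. A minimal sketch:

```python
import math

def azimuth_deg(a, b, c):
    """Clockwise angle at vertex B, measured from ray B->C to ray B->A,
    in (sample, line) image coordinates. Returns 0..360 degrees.
    The clockwise sign convention assumes lines increase downward;
    this is a guess at ISIS's convention, not confirmed behavior."""
    v_ref = (c[0] - b[0], c[1] - b[1])   # toward the right-edge reference point
    v_sc = (a[0] - b[0], a[1] - b[1])    # toward the projected spacecraft point
    cross = v_ref[0] * v_sc[1] - v_ref[1] * v_sc[0]
    dot = v_ref[0] * v_sc[0] + v_ref[1] * v_sc[1]
    return math.degrees(math.atan2(cross, dot)) % 360.0

b = (100.0, 100.0)   # ground point of interest (sample, line)
c = (200.0, 100.0)   # right-edge point at the same line

# Projected spacecraft point just left of B: angle near 180.
print(azimuth_deg((50.0, 100.2), b, c))    # ~179.77
# Just right of B: angle near 0, mirroring the 179.871 -> 1.564 jump.
print(azimuth_deg((150.0, 100.2), b, c))   # ~0.23
```

For a near-nadir look the projected spacecraft point sits very close to the ground point, so a tiny shift across it flips the angle between roughly 180 and roughly 0, exactly the behavior seen in the two nearly identical images.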
Scott Montgomery, Lecturer, University of Washington: Yet if the roughly $3.5 trillion invested in renewable power since 2000 had all backed fission, I believe the advances in that technology would have led all remaining coal- and oil-fired power plants to have disappeared from the face of the Earth by now. And if that same money had instead backed fusion, perhaps a working reactor would now exist.
peakoil.com: This is the third of three reports about the claims by representatives and proponents of the International Thermonuclear Experimental Reactor (ITER). The International Thermonuclear Experimental Reactor (ITER) is the largest and most expensive science experiment on Earth today. Public outreach for the experimental fusion reactor, under the direction of Laban Coblentz, the head of the ITER communications office, has led journalists and the public to believe that, when completed, the reactor will produce 10 times more power than goes into it. It will do no such thing.
Henri Bonet, Engineer and Nuclear Physicist, World Council On Isotopes: Many people believe that our work in the nuclear field is not ethical, relaying Greenpeace and other Environmentalist’s accusations. What to respond? Thriving in Radiation: Many ethical issues are related to both protection against radiation and exploitation of radiation as a means for improving quality of human life. Nuclear techniques are used in a significant number of applications outside nuclear energy production. Examples include not only the medical sector, but also agriculture and food processing, modern industry including materials development, environmental protection, space exploration, arts & science, public security… All together, these various sectors economically dwarf that of energy production.
Tom Tamarkin: In the early 1970s a course of action was developed by the United States Government which should have provided the American citizens and the entire world with such an unlimited, inexpensive, source of energy . . . controlled nuclear fusion energy . . . on-line and powering the electrical grids world wide by 2005. Unfortunately this did not happen. This article takes a critical look at why this science and technology was incorrectly discredited by politicians and the scientifically lay. | <urn:uuid:278aadaa-4b5b-4d72-91bd-22ce4e4c5966> | 3.109375 | 433 | Content Listing | Science & Tech. | 26.682696 | 95,541,042 |
This chapter gets back into the language itself and covers how objects behave in C++/CLI. You’ll learn a bit more about value types and reference types, including some of the implications of having a unified type system. You’ll also see how to work with objects on the managed heap as though they were stack-allocated variables, complete with the assurance that they will be cleaned up when they go out of scope. You’ll look at tracking references and object dereferencing and copying. You’ll also explore the various methods of passing parameters in C++/CLI and look at how to use C++/CLI types as return values.
Keywords: Object Type; Reference Type; Primitive Type; Garbage Collector; Native Reference
In an unusual but useful example of cellular flip-flop, a new research study demonstrates that multiple cell types have the ability to temporarily switch into renin-secreting cells when they are needed to stabilize blood pressure. The research, published in the May issue of Developmental Cell, demonstrates that the recruited cells are direct descendants of cells that expressed renin at one time during development.
Renin is a hormone released into the blood by specialized cells in the walls of kidney blood vessels. Renin is released in response to sodium depletion and/or low blood pressure in the blood vessels of the kidneys and it plays a major role in regulating blood pressure generally in the body. Adult mammals can increase circulating renin, when necessary, by increasing the number of renin-synthesizing cells. Dr. R. Ariel Gomez from the University of Virginia and colleagues examined whether the ability of adult cells to synthesize renin was dependent on the cells original lineage. The researchers generated mice with a genetic marker that allowed visualization of renin-expressing cells even after the cell had differentiated into a non-renin-secreting cell type. Experimental manipulations known to recruit renin-expressing cells demonstrated that adult cells that were descendants of renin cells retained the capability to make renin when more of the hormone was required to stabilize blood pressure.
The researchers conclude that specific subpopulations of apparently differentiated cells are "held in reserve" to repeatedly respond by de-differentiating and expressing renin in response to stress and then re-differentiating when the crisis has passed. According to Dr. Gomez, "The experiments confirm that recruitment of renin-expressing cells is determined by the developmental history of the cells, which retain the memory to re-express the renin gene under physiological stress. The mice we have generated should be extremely valuable to delete genes specifically in the renin-expressing cell and therefore determine the precise cellular function of those genes independently of systemic influences."
Heidi Hardman | EurekAlert!
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
Completing Genome Sequence of Model Bacterium to Help Improve Grass-Crop Harvests
News Aug 18, 2014
TGAC, together with the Universidad Nacional de Rio Cuarto (UNRC) and the Instituto de Agrobiotecnología Rosario (INDEAR), Argentina, plus other European partners, has completed the genome sequence of a model strain of the soil bacterium Azospirillum brasilense Az39 to improve plant health and nutrition in agriculture.
One of the main characteristics of the Azospirillum bacterium that aids plant health is its ability to produce plant-growth regulators. By sequencing the genome of the bacterium’s model strain, Azospirillum brasilense (Az39), the potential mechanisms responsible for growth improvement can be unravelled.
Azospirillum brasilense (Az39) was isolated from surface-sterilised wheat seedlings in Marcos Juárez, Argentina. The strain was selected for use as a fertiliser based on its ability to increase the yields of grass crops, such as wheat and maize, under agronomic conditions. As the most-studied soil bacterium that encourages plant growth, Azospirillum brasilense is credited with major improvements in the growth and yield productivity of more than a hundred plant species.
To sequence the genome, TGAC performed the optical mapping analysis with an OpGen Argus whole genome mapper to validate the final genome assembly. Bernardo Clavijo, Project Leader of the Bioinformatics Algorithms Development Group at TGAC, explains the analytic process: “Optical mapping is a technology that produces a restriction map of a genome, which is essentially a list of distances at which a known sequence occurs within the DNA. Knowing where this tag lies allows you to anchor shorter assembled sequences from a sequencing experiment and confirm their validity, pretty much like how a few small clouds and their position on an image guide would give you the necessary hints to assemble an otherwise elusive blue sky in a jigsaw puzzle.”
David Baker, Platforms and Pipelines Team Leader at TGAC, said: “Whole Genome Mapping is a powerful tool in validating sequencing of small genomes. We are privileged here at TGAC to have the equipment and expertise to run these types of samples. From receipt of sample material to alignment of the optical maps takes just several days’ work.”
Azospirillum brasilense is a bacterium found in the rhizospheres of several grasses. Rhizospheres are the narrow regions of soil that are directly influenced by root secretions and associated soil microorganisms. Microbial inoculants, also known as soil inoculants, are used for agricultural enhancement, employing valuable endophytes (microbes) to encourage plant health. Many of the microbes involved form mutually beneficial relationships with the target crops; while microbial inoculants are applied to improve plant nutrition, they can also be used to promote plant growth by stimulating plant hormone production. Research into the benefits of inoculants in agriculture extends beyond their capacity as biofertilizers: microbial inoculants can also encourage resistance of crop species to several common crop diseases.
The paper, titled “Complete Genome Sequence of the Model Rhizosphere Strain Azospirillum brasilense Az39, Successfully Applied in Agriculture”, is published in Genome Announcements (genomeA), a journal of the American Society for Microbiology.
Analytical Tool Predicts Disease-Causing Genes
Predicting genes that can cause disease due to the production of truncated or altered proteins that take on a new or different function, rather than those that lose their function, is now possible thanks to an international team of researchers that has developed a new analytical tool to effectively and efficiently predict such candidate genes.
Single Gene Change in Gut Bacteria Alters Host Metabolism
Scientists have found that deleting a single gene in a particular strain of gut bacteria causes changes in metabolism and reduced weight gain in mice. The research provides an important step towards understanding how the microbiome – the bacteria that live in our body – affects metabolism.
Gotta Sample 'Em All! Underwater Pokéball Captures Ocean Life
A new device developed by Wyss Institute researchers safely traps delicate sea creatures inside a folding polyhedral enclosure and lets them go without harm using a novel, origami-inspired design. The ultimate aim is to allow the sea creatures to be (gently) analyzed in high detail.
In astronomy, an observation method called spectroscopy extends humanity’s reach into the cosmos. Through spectroscopy, astronomers can study different wavelengths of light coming from very distant objects in the universe, from single stars to massive galaxies, and determine their chemical composition. The technology may, one day, uncover life-giving molecules in the atmosphere of an exoplanet.
It’s a very cool thing, so it’s unfortunate that spectroscopy’s name makes it sound like an uncomfortable medical procedure. Here’s another way to think of the method: It’s the Leonardo DiCaprio of astronomy instruments, the Inception version, who insists, again and again, that we go deeper into the unknown.
Recently, one international team of astronomers spent two years heeding Leo’s call. The team, led by Roland Bacon of the Centre de Recherche Astrophysique de Lyon in France, used a spectroscopy instrument called MUSE, installed on European Southern Observatory’s Very Large Telescope in Chile. They pointed the instrument at a small patch of sky known as the Hubble Ultra-Deep Field. The field is our deepest view of the cosmos, a photograph of the universe as it was 13 billion years ago. The Hubble Space Telescope captured the view in 2004 after spending several months absorbing the light from the earliest galaxies.
The survey ended up measuring the properties of 1,600 faint galaxies, which the astronomers say is 10 times as many as have been recorded using other ground-based telescopes over the years. The survey includes 72 galaxies that have never been detected before, not even by Hubble. Altogether, the data have produced the deepest spectroscopic observations ever made, according to the team. Their findings are described in 10 papers published Wednesday in Astronomy & Astrophysics.
“When we started the project, I did not expect that we would be so successful to detect so many galaxies,” Bacon said in an email.
The 72 galaxies were hidden from earlier spectroscopy studies because they shine in only one color of light called Lyman-alpha. These galaxies, known as Lyman-alpha emitters, are young, star-producing factories. The dynamics of Lyman-alpha emitters are still poorly understood, and Bacon said studying them “must tell us something about the star formation in the early universe, a key ingredient for galaxy formation.”
The survey also found that the presence of halos of hydrogen gas around galaxies is a pretty common phenomenon in the early universe. Observing these halos is key to understanding the fundamentals of galaxy formation, Bacon said.
ESO has produced a visual tour of this deeper view of the cosmos, constructed using MUSE measurements of the distances of the galaxies from Earth. The video captures the data better than a single composite image could. Traveling through the field of view feels like staring up at falling snow:
The new survey demonstrates how spectroscopy can spice up the usual ways astronomers study distant galaxies, said Massimo Stiavelli, an astronomer at the Space Telescope Science Institute in Baltimore. Instruments like MUSE have the capacity to provide information about the chemical compositions of every galaxy in their path, whether they can be detected in visible wavelengths of light or not.
“We tend to take an image, identify objects that we think are promising, and then take spectra of these objects,” Stiavelli said. “By being prejudiced by images, we would be missing some objects.”
The MUSE survey provides a tiny hint of what’s to come, when NASA launches its next space telescope, the James Webb, in 2019, kicking off a veritable spectroscopy party. The Webb, an infrared-light observatory, will be capable of measuring the spectra of some of the most distant exoplanets, stars, and galaxies in the universe.
Stiavelli said he has no doubt Webb will find the 72 galaxies the MUSE team discovered, and see them in even greater clarity. When that happens, Webb will unseat MUSE—as well as pretty much every other similar tool on or around the planet—as the Leonardo DiCaprio of astronomy instruments.
“This is a very meaningful appetizer of things we should be able to do with James Webb,” Stiavelli said.
We want to hear what you think. Submit a letter to the editor or write to email@example.com.
Download this PDF: Abu Dhabi Blue Carbon Demonstration Project Brochure
The coastal and marine environments of Abu Dhabi are diverse and include mangrove forests, saltmarshes, sabkha, intertidal mudflats with cyanobacterial mats and extensive subtidal seagrass meadows.
Project field surveys have discovered an unusual potential blue carbon ecosystem, and one that is unique to the Gulf states. Cyanobacterial (blue-green algal) mats associated with areas of sheltered intertidal mud are the present day representation of the earliest known forms of life identified in rock records, dating back 3.2 billion years. Primary production can be very high, but carbon storage may be highly variable depending upon soil conditions.
Mangroves are the most visible Blue Carbon ecosystem, occupying some 68 square kilometres along the UAE coast. A patch of mangrove forest in the east of Abu Dhabi was the first to be intentionally planted in the UAE and dates back to 1966. Over the following decades this practice was expanded along the Emirate’s coast, with particular success in sheltered locations adjacent to existing stands. A single species – the grey mangrove (Avicennia marina), locally known as Qurm – is found in Abu Dhabi. The dense and complex structure of old natural stands provides a rich environment for fish and other species.
On higher ground away from the water’s edge, in areas of extremely high salinity (2-4 times greater than seawater), lie coastal sabkha: extensive saltflats, occasionally flooded by extreme high tides, that are hostile to all but the hardiest forms of life. Although sabkha was considered a fringe blue carbon habitat at the onset of the project, assessments indicate negligible carbon storage, and sabkha is therefore no longer considered a Blue Carbon ecosystem in this project.
Less visible, and beneath the waterline, Abu Dhabi hosts one of the world’s most expansive complex of seagrass meadows, supporting the second largest population of dugongs found anywhere. The meadows are populated by mosaics of three seagrass species (Halodule uninervis, Halophila ovalis, and Halophila stipulacea), which are found near the shore and around islands down to a depth of 8 metres. Like saltmarshes and mangroves, many seagrasses accumulate carbon within soils through the production and storage of root material.
Saltmarshes are relatively rare in the Emirate, found locally at high tidal elevations and behind mangrove stands. The vegetation consists of specialist salt-tolerant dwarf shrubs of the goosefoot (Chenopodiaceae) and caltrop (Zygophyllaceae) families, as well as the desert hyacinth (Cistanche tubulosa), favoured in eastern traditional medicine.
This suggests that terrestrial ecosystems and the oceans have a much greater capacity to absorb CO2 than had been previously expected.
The results run contrary to a significant body of recent research which expects that the capacity of terrestrial ecosystems and the oceans to absorb CO2 should start to diminish as CO2 emissions increase, letting greenhouse gas levels skyrocket. Dr Wolfgang Knorr at the University of Bristol found that in fact the trend in the airborne fraction since 1850 has only been 0.7 ± 1.4% per decade, which is essentially zero.
The strength of the new study, published online in Geophysical Research Letters, is that it rests solely on measurements and statistical data, including historical records extracted from Antarctic ice, and does not rely on computations with complex climate models.
This work is extremely important for climate change policy, because emission targets to be negotiated at the United Nations Climate Change Conference in Copenhagen early next month have been based on projections that already have a weakening carbon sink factored in. Some researchers have cautioned against this approach, pointing at evidence that suggests the sink has already started to decrease.
So is this good news for climate negotiations in Copenhagen? “Not necessarily”, says Knorr. “Like all studies of this kind, there are uncertainties in the data, so rather than relying on Nature to provide a free service, soaking up our waste carbon, we need to ascertain why the proportion being absorbed has not changed”.
Another result of the study is that emissions from deforestation might have been overestimated by between 18 and 75 per cent. This would agree with results published last week in Nature Geoscience by a team led by Guido van der Werf from VU University Amsterdam. They re-visited deforestation data and concluded that emissions have been overestimated by at least a factor of two.
Please contact Cherry Lewis for further information.
Cherry Lewis | EurekAlert!
Global study of world's beaches shows threat to protected areas
19.07.2018 | NASA/Goddard Space Flight Center
NSF-supported researchers to present new results on hurricanes and other extreme events
19.07.2018 | National Science Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences
The roll-call of top climate researchers includes five University of East Anglia scientists: Prof Corinne Le Quéré (also of the British Antarctic Survey), Prof Andrew Watson, Dr Dorothee Bakker, Dr Erik Buitenhuis and Dr Nathan Gillett.
The signatories warn that if immediate action is not taken, many millions of people will be at risk from extreme events such as heat waves, drought, floods and storms, with coasts and cities threatened by rising sea levels, and many ecosystems, plants and animal species in serious danger of extinction.
The researchers, who include many of the world’s most acclaimed climate scientists, have issued the ‘Bali Climate Declaration by Scientists’ in which they call on government negotiators from the 180 nations represented at the meeting to recognize the urgency of taking action now. They say the world may have as little as 10 years to start reversing the global rise in emissions.
Prof Le Quéré said: “Climate change is unfolding very fast. There is only one option to limit the damages: stabilise the concentration of CO2 and other greenhouse gases in the atmosphere.
“There is no time to waste. I urge the negotiators in Bali to stand up to the challenge and set strong binding targets for the benefit of the world population.”
The Bali Declaration emphasises the current scientific consensus that long-term greenhouse gas concentrations need to be stabilised at a level well below 450ppm CO2e (450 parts per million measured in carbon dioxide equivalent).
Building on the urgency of the recent Synthesis Report of the Intergovernmental Panel on Climate Change (IPCC) released on 17 November in Valencia, Spain, the declaration calls on governments to reduce emissions “by at least 50 per cent below their 1990 levels by the year 2050”.
The Bali Declaration endorses the latest scientific consensus that every effort must be made to keep increases in the globally averaged surface temperature to below 2 degrees C. The scientists say that “to stay below 2 degrees C, global emissions must peak and decline in the next 10 to 15 years”.
The critical reductions in global emissions of greenhouse gases and the atmospheric stabilisation target highlighted in the Bali Declaration place a tremendous responsibility on the United Nations Framework Convention on Climate Change meeting in Bali.
Negotiations at Bali must start the process of reaching a new global agreement that sets strong and binding targets and includes the vast majority of the nations of the world. The Bali Declaration concludes:
“As scientists, we urge the negotiators to reach an agreement that takes these targets as a minimum requirement for a fair and effective global climate agreement.”
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
Links (or hyperlinks) are HTML elements that let us open other pages, jump from one document to another, or jump to another point on the same page. They are very important in creating web pages.
- General Syntax:
<a href="url" title="Link title">Link text</a>
- "a" - is the HTML element to create links
- "href" - specifies the destination of a link
- "url" - (Uniform Resource Locator) is the address of the link (the page that will be opened).
For example: https://coursesweb.net/html/
- "title" - specifies a title for the hyperlink (a hidden text that only appears when the mouse is positioned over the link)
- "Link text" - is the text that appears in the web page, which must be clicked on. This text can be replaced with an image:
<a href="url" title="Link title"><img src="image_address" alt="Title" /></a>
When you move the cursor over a link in a Web page, the arrow will turn into a little hand.
You can create two types of hyperlinks:
1. Hyperlink to another document (called external links).
2. A bookmark inside a document, by using the "id" attribute (called internal links).
These types of links open external documents. The URL address added in the "href" attribute can be of two types:
1) Absolute path - the URL contains the domain name too (the "http" and "https" protocols can be omitted).
<a href="//coursesweb.net/index.php" title="Free courses">Vist CoursesWeb.net</a>
2) Relative path - the URL contains only the name of the document (and the path to the folder if the document is in another directory).
<!-- The index.php page is in the same folder -->
- Relative path is used only with documents which are on the same server.
<a href="index.php" title="Free courses">Home</a>
<!-- The page.html is up one directory -->
<a href="../page.html" title="Free courses">Text</a>
<!-- The page.html is in a sub-directory -->
<a href="folder/page.html" title="Free courses">Text</a>
An internal link allows you to jump to another section on the same web page, so it basically scrolls the page up or down to the desired location.
To create an internal link you must follow these steps:
- Write the following code to the target, a bookmark which marks the location where you jump and is on the same web page.
- <a id="indice"> </a>
- the "id" attribute indicates the target for the link
- the "indice" can be any word, will be used in the "href" attribute
- Create the link, anywhere in the page, using for "href" attribute the same "indice" specified in the "id" attribute.
- <a href="#indice">Link text</a>
<a href="#next1">Next section</a>
<p>Lots of paragraphs<br />
<a id="next1"> </a>
<p>Here is the next section.</p>
Bookmarks are not displayed, they are invisible to the reader.
• You can combine the external hyperlinks with internal links, to jump to a certain section of another document, where you have added the bookmark.
<a href="page_url#bookmark" title="A title">Link text</a>
• The <a> tag can have a target attribute, which specifies where to open the linked document.
There are 4 special "target" values:
- target="_top" - will load the link in the entire window, removing any frames.
- target="_blank" - will load the link in a new window or tab; the old window will remain open.
- target="_self" - will load the link in the same window (this is the default behaviour).
- target="_parent" - will load the link in the parent frame from which the current page was opened; if no parent frame exists, the link will open in the current window.
The next example will open the linked document in a new browser window or a new tab:
<a href="//marplo.net/" title="Free courses" target="_blank">Free courses</a>
Leaf litter accumulation during fire exclusion and increases in tree density in postsettlement southwestern Pinus ponderosa forests may limit the establishment of understory vegetation. We performed an experiment in P. ponderosa forests of northern Arizona to ascertain plant community responses to forest-floor scarification and Oi removal on thirty-six 100-m² plots overlaid on an existing ecological restoration experiment that involved tree thinning and prescribed burning. Contrasting with findings from many other forest types, forest-floor treatments had no effect on community diversity or composition during the 2-year experiment. Sørensen similarities were as high as 97% between posttreatment years within treatments, and successional vectors also provided little indication that treatments may appreciably affect longer-term successional trajectories. Lack of response to these fairly drastic treatments is surprising given these forests’ exceptionally heavy Oi horizons and large proportions of conifer litter. Apparently shading, belowground competition for water or nutrients, or other tree-associated factors more strongly limit understory communities than does leaf litter. Based on sparse A-horizon seed banks averaging … m⁻² and limited aboveground vegetation, we hypothesize that seed shortages, particularly for native perennials, also partly precluded a treatment response. Because extensive unvegetated areas at these restoration sites may be colonized by exotics, conservative management strategies could include testing the seeding or outplanting of desirable native species as an option for filling unoccupied microsites. Reporting of “no treatment effect” experiments such as this one is important to avoid biasing meta-analyses, as is future research to clarify combinations of factors limiting understory communities.
Increased understanding of these limiting factors may lead to identification of other treatments that promote recovery of native species during ecological restoration in this region.
Arizona; Forest management; Ground flora; Leaf litter; Leaf-mold; O horizon; Plant diversity; Plant litter; Ponderosa pine; Seed limitation; Soil horizons; Soil seed bank; Species diversity; Understory plants; Understory vegetation
Environmental Sciences | Forest Biology | Forest Management | Forest Sciences | Plant Sciences
Abella, S. R., & Covington, W. W. Forest-floor treatments in Arizona ponderosa pine restoration ecosystems: No short-term effects on plant communities. Western North American Naturalist, 67(1).
Unit system: SI derived unit
Named after: Henri Becquerel
In SI base units: s−1
The becquerel (symbol: Bq) is the SI derived unit of radioactivity. One becquerel is defined as the activity of a quantity of radioactive material in which one nucleus decays per second. The becquerel is therefore equivalent to an inverse second, s−1. The becquerel is named after Henri Becquerel, who shared a Nobel Prize in Physics with Pierre and Marie Curie in 1903 for their work in discovering radioactivity.
As with every International System of Units (SI) unit named for a person, the first letter of its symbol is uppercase (Bq). However, when an SI unit is spelled out in English, it should always begin with a lowercase letter (becquerel)—except in a situation where any word in that position would be capitalized, such as at the beginning of a sentence or in material using title case.
1 Bq = 1 s−1
A special name was introduced for the reciprocal second (s−1) to represent radioactivity, in order to avoid potentially dangerous mistakes with prefixes. For example, 1 µs−1 could be taken to mean 10⁶ disintegrations per second: 1·(10⁻⁶ s)⁻¹ = 10⁶ s−1. Other names considered were hertz (Hz), a special name already in use for the reciprocal second, and fourier (Fr). The hertz is now only used for periodic phenomena. Whereas 1 Hz is 1 cycle per second, 1 Bq is 1 aperiodic radioactivity event per second.
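The scale of the prefix hazard described above can be made concrete with a short arithmetic sketch (plain Python, no radioactivity library involved): reading "1 µs−1" as one event per microsecond, rather than the intended one-millionth of an event per second (1 µBq), inflates the figure by twelve orders of magnitude.

```python
# Misreading hazard: "1 µs⁻¹" parsed as one event per microsecond
# versus the intended 10⁻⁶ events per second (1 µBq).
per_microsecond = 1 / 1e-6   # 1·(10⁻⁶ s)⁻¹ = 10⁶ events per second
microbecquerel = 1e-6        # 1 µBq = 10⁻⁶ events per second

# Ratio between the two readings of the same written symbol
ratio = per_microsecond / microbecquerel
print(f"misreading factor: {ratio:.0e}")
```

A dedicated unit name with an unambiguous prefix rule is what removes this risk.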
The gray (Gy) and the becquerel (Bq) were introduced in 1975. Between 1953 and 1975, absorbed dose was often measured in rads. Decay activity was measured in curies before 1946 and often in rutherfords between 1946 and 1975.
Like any SI unit, Bq can be prefixed; commonly used multiples are kBq (kilobecquerel, 10³ Bq), MBq (megabecquerel, 10⁶ Bq, equivalent to 1 rutherford), GBq (gigabecquerel, 10⁹ Bq), TBq (terabecquerel, 10¹² Bq), and PBq (petabecquerel, 10¹⁵ Bq). For practical applications, 1 Bq is a small unit; therefore, the prefixes are common. For example, the roughly 0.0169 g of potassium-40 present in a typical human body produces approximately 4,400 disintegrations per second, or 4.4 kBq of activity. The global inventory of carbon-14 is estimated to be 8.5×10¹⁸ Bq (8.5 EBq, 8.5 exabecquerel). The nuclear explosion in Hiroshima (an explosion of 16 kt or 67 TJ) is estimated to have produced 8×10²⁴ Bq (8 YBq, 8 yottabecquerel).
For a given mass m (in grams) of an isotope with atomic mass ma (in g/mol) and a half-life t1/2 (in s), the amount of radioactivity can be calculated as A = (m/ma)·NA·ln 2/t1/2, with NA = 6.022 141 79(30)×10²³ mol⁻¹ the Avogadro constant. Since m/ma is the number of moles (n), the radioactivity can also be written as A = n·NA·ln 2/t1/2.
For instance, on average each gram of potassium contains 0.000117 gram of 40K (all other naturally occurring isotopes are stable), which has a half-life of 1.277×10⁹ years = 4.030×10¹⁶ s and an atomic mass of 39.964 g/mol, so the amount of radioactivity associated with a gram of potassium is 30 Bq.
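The potassium figure can be checked numerically. The sketch below is only illustrative; all input values are the ones quoted in the text above.

```python
import math

# Activity of the ⁴⁰K in one gram of natural potassium:
#   A = (m / ma) · NA · ln 2 / t_half
AVOGADRO = 6.02214179e23   # Avogadro constant, mol⁻¹
m_k40 = 0.000117           # grams of ⁴⁰K per gram of natural potassium
atomic_mass = 39.964       # atomic mass of ⁴⁰K, g/mol
t_half = 4.030e16          # half-life in seconds (1.277×10⁹ years)

moles = m_k40 / atomic_mass
activity_bq = moles * AVOGADRO * math.log(2) / t_half
print(f"activity of 1 g potassium ≈ {activity_bq:.1f} Bq")  # ≈ 30 Bq
```

The result agrees with the 30 Bq per gram stated above.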
The following table shows radiation quantities in SI and non-SI units.
|Quantity|Unit|Symbol|Definition|Year|SI equivalent|
|Activity (A)|curie|Ci|3.7 × 10¹⁰ s−1|1953|3.7 × 10¹⁰ Bq|
|Activity (A)|rutherford|Rd|10⁶ s−1|1946|1,000,000 Bq|
|Exposure (X)|röntgen|R|esu / 0.001293 g of air|1928|2.58 × 10⁻⁴ C/kg|
|Fluence (Φ)|(reciprocal area)|m−2| |1962|SI|
|Absorbed dose (D)|erg|erg⋅g−1| |1950|1.0 × 10⁻⁴ Gy|
|Absorbed dose (D)|rad|rad|100 erg⋅g−1|1953|0.010 Gy|
|Dose equivalent (H)|röntgen equivalent man|rem|100 erg⋅g−1|1971|0.010 Sv|
|Dose equivalent (H)|sievert|Sv|J⋅kg−1 × WR|1977|SI|
(d) The hertz is used only for periodic phenomena, and the becquerel is used only for stochastic processes in activity referred to a radionuclide.
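The non-SI units in the table convert to SI by fixed multiplicative factors. A minimal sketch (the dictionary and helper function below are illustrative, not part of any standard library):

```python
# Fixed conversion factors, non-SI → SI, taken from the table above.
TO_SI = {
    "Ci": 3.7e10,   # curie → becquerel
    "Rd": 1.0e6,    # rutherford → becquerel
    "rad": 0.010,   # rad → gray
    "rem": 0.010,   # rem → sievert
}

def to_si(value: float, unit: str) -> float:
    """Convert a value in a legacy radiation unit to its SI equivalent."""
    return value * TO_SI[unit]

print(to_si(1, "Ci"))     # one curie, in becquerels
print(to_si(100, "rad"))  # 100 rad, in grays
```

For example, one curie is 3.7×10¹⁰ Bq and 100 rad is 1 Gy, matching the table.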
Sunday, 9 July 2017
Why frogs thrived after the dinosaurs were wiped out
Frogs around the world should be grateful for the forces that wiped out dinosaurs 66 million years ago.
That's according to new research from scientists in the United States and China that suggests whatever caused the mass extinction paved the way for the proliferation of frogs.
While frogs have been around for more than 200 million years, new research suggests that three main modern frog lineages — about 88 per cent of the living species of frogs — began to thrive shortly after the extinction event that signalled the end of all non-avian dinosaurs at the end of the Cretaceous Period.
While we know that about 80 per cent of the world's species were killed off in the mass extinction, it is not known whether that toll extended to frogs, as few fossilized frog remains have been found.
However, the researchers of this new study say that whether or not many frog species became extinct, the event gave rise to the frogs we know today.
"Maybe there was some extinction that happened there," says David Blackburn, co-author of the paper published in the journal Proceedings of the National Academy of Sciences.
"At the very least, what happened afterwards was that it seems like there must have been rapid diversification, where we had many new lineages evolve," said Blackburn, who is also associate curator of amphibians and reptiles at the Florida Museum of Natural History.
Oxygen and iron are the most and the fourth most abundant elements by mass in the earth’s crust, in which their respective proportions amount to 46.6 and 5.0 percent. It is therefore not surprising that compounds made up of these two elements are common in nature. The oxides, oxyhydroxides and a (single) hydroxide of trivalent iron are often grouped together under the general term “iron oxides”, and the occurrence of nine such iron oxides sensu lato has so far been registered in materials formed on the earth’s surface. As Schwertmann and Taylor (1989) put it “The iron oxides ... are present in most soils of the different climatic regions in one or more of their mineral forms and at variable levels of concentration”. The same is true for many sediments, the other major group of materials formed in the weathering environment.
Keywords: Iron Oxide, Mössbauer Spectroscopy, Quadrupole Splitting, Hyperfine Field, Magnetic Hyperfine Field
Innovative and unique solutions are being devised throughout the national park system to adapt to climate change in coastal parks. The 24 case studies in this document describe efforts at national park units in a variety of settings to prepare for and respond to climate change impacts that can take the form of either an event or a trend. Examples of these impacts include increased storminess, sea level rise, shoreline erosion, melting sea ice and permafrost, ocean acidification, warming temperatures, groundwater inundation, precipitation, and drought. The adaptation efforts described here include historic structure preservation, archeological surveys, baseline data collection and documentation, habitat restoration, engineering solutions, redesign and relocation of infrastructure, and development of broad management plans that consider climate change. Each case study also includes a point of contact for park managers to request additional information and insight.
These case studies initially were developed by park managers as part of a NPS-led coastal adaptation to climate change training hosted by Western Carolina University in May 2012. The case studies format follows the format created for EcoAdapt’s Climate Adaptation Knowledge Exchange (CAKE) database that identified a list of adaptation strategies. All case studies were updated and modified in September 2013 and March 2015 in response to a growing number of requests from coastal parks and other coastal management agencies looking for examples of climate change adaptation strategies for natural and cultural resources and assets along their ocean, lacustrine, and riverine coasts. | <urn:uuid:d7093ee9-d525-4e86-84a2-6e6d8aefb144> | 3.359375 | 288 | Knowledge Article | Science & Tech. | 0.289101 | 95,541,234 |
In a fascinating example of vocal mimicry, researchers from the Wildlife Conservation Society (WCS) and UFAM (Federal University of Amazonas) have documented a wild cat species imitating the call of its intended victim: a small, squirrel-sized monkey known as a pied tamarin. This is the first recorded instance of a wild cat species in the Americas mimicking the calls of its prey.
The extraordinary behavior was recorded by researchers from the Wildlife Conservation Society and UFAM in the Amazonian forests of the Reserva Florestal Adolpho Ducke in Brazil. The observations confirmed what until now had been only anecdotal reports from Amazonian inhabitants of wild cat species—including jaguars and pumas—actually mimicking primates, agoutis, and other species in order to draw them within striking range.
The observations appear in the June issue of Neotropical Primates. The authors of the paper include: Fabiano de Oliveira Calleia of Projeto Sauim-de-Coleira/UFAM; Fabio Rohe of the Wildlife Conservation Society; and Marcelo Gordo of Projeto Sauim-de-Coleira/UFAM.
"Cats are known for their physical agility, but this vocal manipulation of prey species indicates a psychological cunning which merits further study," said WCS researcher Fabio Rohe.
Researchers first recorded the incident in 2005 when a group of eight pied tamarins were feeding in a ficus tree. They then observed a margay emitting calls similar to those made by tamarin babies. This attracted the attention of a tamarin "sentinel," which climbed down from the tree to investigate the sounds coming from a tangle of vines called lianas. While the sentinel monkey started vocalizing to warn the rest of the group of the strange calls, the monkeys were clearly confounded by these familiar vocalizations, choosing to investigate rather than flee. Four other tamarins climbed down to assess the nature of the calls. At that moment, a margay emerged from the foliage walking down the trunk of a tree in a squirrel-like fashion, jumping down and then moving towards the monkeys. Realizing the ruse, the sentinel screamed an alarm and sent the other tamarins fleeing.
While this specific instance of mimicry was unsuccessful, researchers were amazed at the ingenuity of the hunting strategy.
"This observation further proves the reliability of information obtained from Amazonian inhabitants," said Dr. Avecita Chicchón, director of the Wildlife Conservation Society's Latin America Program. "This means that accounts of jaguars and pumas using the same vocal mimicry to attract prey--but not yet recorded by scientists--also deserve investigation."
WCS is currently monitoring populations of the pied tamarin—listed as "Endangered" on the IUCN's Red List—and is seeking financial support to continue the study, which aims to protect this and other species from extinction. Next to Madagascar, the Amazon has the highest diversity of primates on Earth.
These behavioral insights also are indications of intact Amazon rainforest habitat. WCS works throughout the Amazon to evaluate the conservation importance of these rainforests, which have become increasingly threatened by development.
John Delaney | EurekAlert!
By: John Erickson
320 pages, 239 illus, tabs
Throughout history, awesome floods, devastating earthquakes, disastrous volcanoes and other catastrophic activities have demonstrated nature's inexplicable power. This title provides up-to-date coverage of the processes that produce natural catastrophes. An introductory chapter details the dynamic forces at work beneath the Earth's crust, while the subsequent nine chapters - each dedicated to a single type of disaster - explain causes and myriad effects of these disturbances, enhanced with numerous examples. Photographs and line drawings depict the carnage wrought by these natural disasters, as well as what is happening to the Earth's surface while they occur. Chapters include coverage of: earthquakes; volcanic eruptions; earth movements; catastrophic collapse; floods; dust storms; glaciers; impact cratering; and mass extinctions.
An excellent geological resource. - School Library Journal "Erickson masterfully...intrigue[s] the reader with...thorough accounts of earthquakes, volcanoes, and tidal waves..." - Booklist "[a] straightforward, informative, concise, and simple style...Introductory science and environmental students as well as general readers will benefit most from this book...A work of wide appeal." - CHOICE
Over the course of history, we’ve discovered hundreds of thousands of asteroids. But how do astronomers discover these bits of rock and metal? How many have they found? And how do they tell asteroids apart? Carrie Nugent shares the story of the very first asteroid ever discovered and explains how asteroid hunters search for these celestial bodies.
Lesson by Carrie Nugent, directed by TED-Ed, animation by Reza Riahi.
Vision includes the frontier of perception, and involves cortical interpretation; thus requiring an “eye” and the neurological machinery for processing. But photoreception is a far different matter and is much more common.
There are many creatures that have extraocular photoreception, even beyond the parietal eye, discussed in the BJO March 2005 essay. Most plants have photosynthesis and, in some sense, photoreception. But, there is more to it than that.
Living creatures are divided into three great domains, bacteria, Archaea (including the extremophiles), and eucarya or eucaryotes. The bacteria and Archaea are both prokaryotes, meaning they lack a nucleus. The eucaryotes include all single celled nucleated organisms, such as the protists, and all metazoa, or multicellular organisms. All three domains contain creatures that use photoreceptive compounds that include vitamin A aldehyde, or retinal. The opsins, however, are different. Bacteria and Archaea have opsins that resemble those in eucaryotes, but are not identical. All metazoan opsins, however, are very similar suggesting that these proteins must be very old.
The sun represents the ultimate source for most, but not all, of the earth’s energy. This source is finitely renewable and the usable energy is concentrated in a narrow spectrum including the visible and nearly visible spectrum. Most living organisms have fashioned their lives around this spectrum.
Evolutionarily, photoreception occurred very early, suggesting that the critical molecules required for this process, including the opsins and retinal, may have preceded life. Opsins are part of a family of transmembrane proteins that may have begun as membrane transporters. Early life had some form of photoreception and the opsins are almost certainly the molecule used. Eyes came later, and essentially have been a way to organise that sensory input. Organisms within two of the domains, bacteria and Archaea, still retain these opsins within their cell membranes.
Eyes, then, are not the only way to perceive and utilise the visible spectrum, as plants bear witness. Most invertebrates, especially those without eyes, have some mechanism of extraocular photoreception. Coral, for example, is photoreceptive, and even has Pax-6-like gene, the so called master eye gene. Many invertebrates have novel anatomical placement of extraocular photoreceptors. For example, some butterflies have photoreceptors in their genitalia allowing the male to confirm penetration. Marine invertebrates as diverse as sea squirts, anemones, sea urchins, and annelids (worms) all have extraocular photoreception. But even vertebrates, further distanced from the prokaryotes, can have extraocular photoreception. Amphibians, lizards, and snakes, including the banded krait on this month’s cover, have all been shown to have such abilities.
Snakes are believed to have evolved approximately 150–95 million years ago, and arose from within the true lizard group. The first snake probably evolved from an aquatic or semi-aquatic species, although this remains controversial. Once the Serpentes evolved, however, they slithered onto a terrestrial lifestyle and became quite successful. But just as these reptiles had begun colonising the land, some of their own headed back to the sea.
Although the eye of the yellow lipped sea krait, Laticauda colubrina, has never been examined, the eye of a close relative, Pelamis paturis, has. These snakes have an all-cone retina that is thin and non-vascularised with a high concentration of horizontal cells and large degree of summation indicating excellent motion perception, at the sacrifice of visual acuity. Unfortunately, the cornea of P paturis was not examined, so we don’t know if the cornea is flattened compared to its terrestrial relatives, but it is likely that it is, given that other aquatic animals such as fish have rather flat corneas. The cornea and water have approximately the same index of refraction so the cornea loses its refractive ability under water.
All sea snakes are predators with a neurotoxic venom that is remarkably efficient. Found in the Coral Sea, L colubrina, preys mainly on eels, although it will take fish. This rear fanged snake is not particularly aggressive, although it is highly venomous. Once the prey succumbs, the meal is swallowed whole.
In the ocean, one either eats or is eaten, and that is true for these snakes as they may be prey for sharks, and predatory birds such as the white bellied sea eagle. Avoidance of predators, then, is at least as important as prey capture.
An agile swimmer with a paddle-like tail, as seen on the cover, L colubrina and other sea snakes suffer from not knowing exactly where their tail is, and would benefit from a set of eyes behind them.
The sea snakes may have just that. They possess cutaneous photoreception in their tails so that they may pull their tails completely under rocks or into crevices to avoid detection and predation (Zimmerman et al, Copeia 1990:860–2). Although the exact mechanism is unknown, thermal recognition is not part of this process. Sea snakes’ tails recognise light under water where the thermal difference would not be significant.
Photonic response probably preceded life. Photoreception, sight, and vision followed sequentially with further evolution as these sensory inputs became more organised. But, extraocular photoreception continues well into recent evolutionary changes and will continue to part of the lives of many creatures. There seems to be no end in sight.
Photographs by the author, and thanks to Daniel Zorra, MD, for his assistance.
Time-lapse photo sequence of WISE 0855−0714's movement across the sky, using images captured by the WISE and Spitzer telescopes.
Epoch J2000 Equinox J2000
|Right ascension||08h 55m 10.83s|
|Declination||−07° 14′ 42.5″|
|Evolutionary stage||Sub-brown dwarf|
|Spectral type||Class Y2|
|Apparent magnitude (J)||25.00 ± 0.53|
|Proper motion (μ)|| RA: −8118 ± 8 mas/yr |
Dec.: 680 ± 7 mas/yr
|Parallax (π)||449 ± 8 mas|
|Distance||7.27 ± 0.13 ly|
WISE 0855−0714 (full designation WISE J085510.83−071442.5) is a sub-brown dwarf 2.23 ± 0.04 parsecs (7.27 ± 0.13 light-years) from Earth, the discovery of which was announced in April 2014 by Kevin Luhman using data from the Wide-field Infrared Survey Explorer (WISE). As of 2014, WISE 0855−0714 has the third-highest proper motion (8118 ± 8 mas/yr) after Barnard's Star (10,300 mas/yr) and Kapteyn's Star (8,600 mas/yr) and the fourth-largest parallax (449 ± 8 mas) of any known star or brown dwarf, meaning it is the fourth-closest extrasolar system to the Sun. It is also the coldest object of its type found in interstellar space, having a temperature in the range 225 to 260 K (−48 to −13 °C; −55 to 8 °F).
The WISE object was detected in March 2013, and follow-up observations were taken by the Spitzer Space Telescope and the Gemini North telescope. The name WISE J085510.83−071442.5 includes the coordinates and indicates that the object is located in the constellation Hydra.
Distance and proper motion
Based on direct observations, WISE 0855−0714 has a large parallax, which specifically relates to its distance from the Solar System. This phenomenon results in a distance of 7.27 ± 0.13 light-years, with a small margin of error due to the strength of the parallax effect and the clarity of observations. WISE 0855−0714's proper motion across the sky is also directly observed over time, causing it to stand out in the observations, but the proper motion is itself a combination of its speed in the galactic neighborhood relative to the Solar System as well as its proximity to the Solar System. If it were moving exactly as fast but farther away, if it were moving more slowly but closer, or if it were moving more quickly near to the Sun but moving at a high angle towards or away from the Sun, it would have a smaller proper motion.
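The parallax-to-distance conversion behind these figures is simply d [pc] = 1 / π [arcsec]. A minimal sketch using the measured parallax (this is an illustration, not code from the source):

```python
PC_IN_LY = 3.26156                       # light-years per parsec

parallax_mas = 449                       # measured parallax in milliarcseconds
distance_pc = 1000.0 / parallax_mas      # d [pc] = 1 / parallax [arcsec]
distance_ly = distance_pc * PC_IN_LY

print(f"{distance_pc:.2f} pc = {distance_ly:.2f} ly")   # ≈ 2.23 pc ≈ 7.26 ly
```

This recovers the quoted distance of about 2.23 pc (7.27 ly) to within the stated uncertainty.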
Its luminosity in different bands of the thermal infrared, in combination with its absolute magnitude (known because of its known distance), was used to place it in the context of different models; the best characterization of its brightness was in the W2 band at 4.6 μm, at an apparent magnitude of 13.89 ± 0.05, though it was brighter deeper into the infrared. Infrared images taken with the Magellan Baade Telescope suggest evidence of water clouds.
As of 2003, the International Astronomical Union considers an object with a mass above 13 MJup, capable of fusing deuterium, to be a brown dwarf. A lighter object that orbits another object is considered a planet. So far this WISE object is alone, though it could be a rogue planet, something first identified in 2004 in the case of Cha 110913-773444.
Combining its luminosity, distance, and mass it is estimated to be the coldest-known brown dwarf, with a modeled effective temperature of 225 to 260 K (−48 to −13 °C; −55 to 8 °F), depending on the model.
- Clavin, Whitney; Harrington, J. D. (25 April 2014). "NASA's Spitzer and WISE Telescopes Find Close, Cold Neighbor of Sun". NASA.gov. Archived from the original on 26 April 2014.
- "WISEA J085510.74-071442.5". SIMBAD. Centre de données astronomiques de Strasbourg. Retrieved 15 May 2017.
- Luhman, Kevin L.; Esplin, Taran L. (September 2016). "The Spectral Energy Distribution of the Coldest Known Brown Dwarf". The Astronomical Journal. 152 (2). 78. arXiv: . Bibcode:2016AJ....152...78L. doi:10.3847/0004-6256/152/3/78.
- Luhman, Kevin L. (21 April 2014). "Discovery of a ~250 K Brown Dwarf at 2 pc from the Sun". The Astrophysical Journal Letters. 786 (2): L18. arXiv: . Bibcode:2014ApJ...786L..18L. doi:10.1088/2041-8205/786/2/L18.
- Faherty, Jacqueline K.; Tinney, C. G.; Skemer, Andrew; Monson, Andrew J. (August 2014). "Indications of Water Clouds in the Coldest Known Brown Dwarf". Astrophysical Journal Letters. arXiv: . Bibcode:2014ApJ...793L..16F. doi:10.1088/2041-8205/793/1/L16.
- "Working Group on Extrasolar Planets: Definition of a "Planet"". Working Group on Extrasolar Planets of the International Astronomical Union. 28 February 2003. Retrieved 28 April 2014.
- Papadopoulos, Leonidas (28 April 2014). "Between the Planet and the Star: A New Ultra-Cold, Sub-Stellar Object Discovered Close to Sun". AmericaSpace.com. Retrieved 28 April 2014.
- Beichman, C.; Gelino, Christopher R.; et al. (2014). "WISE Y Dwarfs As Probes of the Brown Dwarf-Exoplanet Connection". The Astrophysical Journal. 783 (2): 68. arXiv: . Bibcode:2014ApJ...783...68B. doi:10.1088/0004-637X/783/2/68. (Note: WISE 0855−0714 is not mentioned in this paper; it is about other Y-type objects discovered by WISE.)
- Luhman, Kevin L.; Esplin, Taran L. (2014). "A New Parallax Measurement for the Coldest Known Brown Dwarf". The Astrophysical Journal. 796 (1): 6. arXiv: . Bibcode:2014ApJ...796....6L. doi:10.1088/0004-637X/796/1/6.
- Wright, Edward L.; Mainzer, Amy; et al. (2014). "NEOWISE-R Observation of the Coolest Known Brown Dwarf". The Astronomical Journal. 148 (5): 82. arXiv: . Bibcode:2014AJ....148...82W. doi:10.1088/0004-6256/148/5/82.
- WISE J0855-0714 at Solstation.com | <urn:uuid:b2b2c975-7ddc-4250-82ff-c3d0e32cbc9c> | 2.8125 | 1,599 | Knowledge Article | Science & Tech. | 83.620967 | 95,541,275 |
Counting statistics of cosmic rays is explored using a LabVIEW based computer program on a computer connected to a National Instruments data acquisition unit. The program collects the interval times between successive coincident pulses from a stack of plastic scintillator detectors. Analysis of the data set allows students to see the distribution of N-pulse intervals for various values of N; this is the Erlang distribution. With N = 1, the Erlang distribution is exponential with a characteristic time equal to mean count interval t. When N is increased, the distribution becomes peaked at Nt with a fractional width proportional to 1/√N. This is an instance of the central limit theorem. Students may also examine the data set according to the distribution of numbers of pulses recorded for a series of fixed-length intervals, which for random pulses follows the Poisson distribution. Again, as the length of the interval increases, the distribution conforms to the central limit theorem: it becomes normal with a well-defined mean and width, both related to mean and width of the underlying distribution.
The software also allows students to simulate pulse-intervals that follow a uniform distribution (e.g., any real number between 0 and 1 has equal probability) or a Gaussian one. In these cases, one can see that the counts per fixed interval length do not follow the Poisson distribution, although all types become normal at longer intervals or greater numbers of pulses per interval length. This serves to drive home the point that cosmic ray counts are truly a Poisson process and also to illustrate the significant power of the central limit theorem-- that regardless of the underlying probability distribution function, when N becomes large the distribution becomes normal with a well-defined mean and width.
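The Poisson-process behaviour described above is easy to reproduce outside LabVIEW. The following short simulation (a hypothetical sketch, not the authors' program) sums N exponential inter-arrival times to form Erlang-distributed N-pulse intervals and checks that the mean approaches Nt and the fractional width approaches 1/√N:

```python
import math
import random
import statistics

random.seed(42)
t_mean = 1.0        # mean interval between pulses (arbitrary units)
N = 100             # pulses summed per interval
trials = 20_000

# An N-pulse interval is a sum of N exponential inter-arrival times,
# i.e. an Erlang distribution with mean N*t and fractional width 1/sqrt(N).
intervals = [sum(random.expovariate(1.0 / t_mean) for _ in range(N))
             for _ in range(trials)]

mean = statistics.fmean(intervals)
frac_width = statistics.stdev(intervals) / mean

print(f"mean of N-pulse intervals: {mean:.2f} (expect {N * t_mean})")
print(f"fractional width: {frac_width:.3f} (expect {1 / math.sqrt(N):.3f})")
```

Changing `random.expovariate` to a uniform or Gaussian generator lets one repeat the exercise described in the abstract: the fixed-interval counts then stop being Poisson, but the central limit theorem still drives the sums toward a normal distribution.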
Presented at the 2013 AAPT Summer Meeting in Portland, Oregon. W36: Advanced Labs Workshop
Using a dynamic optical technique and settling column apparatus, natural organic matter floc structural characteristics were monitored and evaluated over a one year period to monitor the seasonal variation in floc structure at optimum coagulation dose and pH. The results show that flocs changed seasonally with different growth rates, size, response to shear and settling rate. Autumn and summer flocs were shown to be larger and less resistant to floc breakage when compared to the other seasons, suggesting reduced floc strength. Floc strength was observed to increase with smaller median floc size. The results of the settling tests indicated that the autumnal flocs were of a more open structure which helped to explain why they settled faster. In summary, the autumnal flocs had significantly different floc characteristics although it was difficult to relate the floc structure with the incoming water characteristics.
Research Article|December 01 2004
Characterising natural organic matter flocs
Water Science and Technology: Water Supply (2004) 4 (4): 79-87.
P. Jarvis, B. Jefferson, S.A. Parsons; Characterising natural organic matter flocs. Water Science and Technology: Water Supply 1 December 2004; 4 (4): 79–87. doi: https://doi.org/10.2166/ws.2004.0064
The investigation in this report aimed at providing photophysical evidence that the long-lived triplet excited state plays an important role in the non- single-exponential photobleaching kinetics of fluorescein in microscopy. Experiments demonstrated that a thiol-containing reducing agent, mercaptoethylamine (MEA or cysteamine), was the most effective, among other commonly known radical quenchers or singlet oxygen scavengers, in suppressing photobleaching of fluorescein while not reducing the fluorescence quantum yield. The protective effect against photobleaching of fluorescein in the bound state was also found in microscopy. The antibleaching effect of MEA led to a series of experiments using time-delayed fluorescence spectroscopy and nanosecond laser flash photolysis. The combined results showed that MEA directly quenched the triplet excited state and the semioxidized radical form of fluorescein without affecting the singlet excited state. The triplet lifetime of fluorescein was reduced upon adding MEA. It demonstrated that photobleaching of fluorescein in microscopy is related to the accumulation of the long-lived triplet excited state of fluorescein and that by quenching the triplet excited state and the semioxidized form of fluorescein to restore the dye molecules to the singlet ground state, photobleaching can be reduced.
Mendeley saves you time finding and organizing research
Choose a citation style from the tabs below | <urn:uuid:a9dd08d7-cf10-45d5-a589-5658b4408027> | 2.65625 | 319 | Academic Writing | Science & Tech. | 1.780769 | 95,541,339 |
Edited By: Paul L Hancock and Brian J Skinner
1174 pages, 16 plates, 200 b/w illus, 450 line illus
Provides complete coverage of the earth sciences, including geology, geophysics, volcanology, geochemistry, geodesy, geomorphology, soil science, palaeontology, glaciology, oceanography, climatology, meteorology, environmental questions, resource development, and the history of earth sciences.
| <urn:uuid:84e98098-7ae6-455e-9eb7-1b6c7bf5265a> | 2.5625 | 171 | Product Page | Science & Tech. | 18.390292 | 95,541,341 |
London: Near-Earth objects (NEOs) — asteroids and comets that may pose a hazard to life on Earth — are destroyed very far from the Sun rather than ending their existence in a dramatic plunge into the solar body, a new study suggests.
An asteroid is classified as an NEO when its smallest distance from the Sun during an orbit is less than 1.3 times the average Earth-Sun distance. The vast majority of NEOs originate in the doughnut-shaped main asteroid belt between the orbits of Mars and Jupiter.
An international team of researchers from Finland, France, the US and the Czech Republic used the properties of almost 9,000 NEOs detected in about 100,000 images acquired over about eight years by the Catalina Sky Survey (CSS) in Arizona to construct a new population model.
The team produced the best-ever model of the NEO population by combining information about CSS's selection effects with the CSS data and theoretical models of the orbit distributions of NEOs that originate in different parts of the main asteroid belt.
But they noticed that their model had a problem — it predicted that there should be almost 10 times more objects on orbits that approach the Sun to within 10 solar diameters. The team then spent a year verifying their calculations before they came to the conclusion that the problem was not in their analysis but in their assumptions of how the solar system works.
Dr Mikael Granvik, a research scientist at the University of Helsinki, hypothesised that their model would better match the observations if NEOs are destroyed close to the Sun but long before an actual collision. The team tested this idea and found an excellent agreement between the model and the observed population of NEOs when they eliminated asteroids that spend too much time within about 10 solar diameters of the Sun.
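The two thresholds in the article — the NEO classification cut at a smallest Sun distance of 1.3 AU, and the near-Sun destruction cut at roughly 10 solar diameters — can be expressed as a simple filter over orbital elements. This is a sketch; the conversion of 10 solar diameters into AU uses an approximate solar diameter that is my assumption, not a figure from the article.

```python
AU_KM = 149_597_870.7           # one astronomical unit in km
SOLAR_DIAMETER_KM = 1_391_400   # approximate solar diameter (assumed value)

NEO_PERIHELION_AU = 1.3                           # classification threshold from the article
DESTRUCTION_AU = 10 * SOLAR_DIAMETER_KM / AU_KM   # ~10 solar diameters, about 0.093 AU

def perihelion(a_au, e):
    """Smallest Sun distance for semi-major axis a (AU) and eccentricity e."""
    return a_au * (1.0 - e)

def classify(a_au, e):
    q = perihelion(a_au, e)
    if q < DESTRUCTION_AU:
        return "destroyed near Sun"
    if q < NEO_PERIHELION_AU:
        return "NEO"
    return "not an NEO"
```

For example, an orbit with a = 2.5 AU and e = 0.98 dips inside the destruction distance, while the same semi-major axis with e = 0.6 gives an ordinary NEO orbit.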
"The discovery that asteroids must be breaking up when they approach too close to the Sun was surprising and that's why we spent so much time verifying our calculations," said Dr Robert Jedicke from the University of Hawai'i Institute for Astronomy.
The team's discovery helps to explain several other discrepancies between observations and predictions of the distribution of small objects in our solar system. Meteors, commonly known as shooting stars, are tiny bits of dust and rock that are dislodged from the surfaces of asteroids and comets that then end their lives burning up as they enter our atmosphere.
The study suggests that the parent objects were completely destroyed when they came too close to the Sun, leaving behind streams of meteors but no parent NEOs. They also found that darker asteroids are destroyed farther from the Sun than brighter ones, explaining an earlier discovery that NEOs that approach closer to the Sun are brighter than those that keep their distance from the Sun.
Updated Date: Feb 18, 2016 12:10 PM | <urn:uuid:a690f3e7-5017-44eb-833d-93a6dfe9ad30> | 3.984375 | 565 | News Article | Science & Tech. | 37.34006 | 95,541,360 |
A group of scientists at The Scripps Research Institute, at the University of California in San Diego, and at the Oregon Hearing Research Center and Vollum Institute at Oregon Health & Science University have discovered a key molecule that is part of the machinery that mediates the sense of hearing.
In a paper that will appear in an upcoming issue of the journal Nature, the team reports that a protein called cadherin 23 is part of a complex of proteins called "tip links" that are on hair cells in the inner ear. These hair cells are involved in the physiological process called mechanotransduction, a phenomenon in hearing in which physical cues (sound waves) are transduced into electrochemical signals and communicated to the brain. The tip link is believed to have a central function in the conversion of physical cues into electrochemical signals.
"In humans, there are mutations in [the gene] cadherin 23 that cause deafness as well as Usher syndrome, the leading cause of deaf-blindness," says Associate Professor Ulrich Mueller, Ph.D., who is in the Department of Cell Biology at The Scripps Research Institute and is a member of Scripps Researchs Institute for Childhood and Neglected Diseases.
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
| <urn:uuid:de206376-f20a-400d-ae74-ebceda526464> | 3.546875 | 830 | Content Listing | Science & Tech. | 37.960742 | 95,541,361 |
The ‘solar power tree’ developed by the science ministry
The solar tree is a solar panel arrangement that draws on natural design to maximise power output with minimal land requirement.
A ‘solar power tree’ developed by the science ministry harnesses solar energy for producing electricity with an innovative vertical arrangement of solar cells.
Almost akin to the architecture of a tree, with a central trunk and solar panels acting like large leaves, it reduces the land required compared with a conventional solar photovoltaic layout on one hand, while keeping the land's character intact on the other. Even cultivable land can be utilised for solar energy harnessing along with farming at the same time.
The innovation finds its viability both in rural and urban areas. Science & Technology and Earth Sciences Minister Dr Harsh Vardhan noted that producing one megawatt of solar power requires about 1.4 hectares of land in the conventional sequential layout of solar panels. Generating copious quantities of green energy would thus require thousands of hectares of land, and acquisition of land is a major issue in itself, he added.
Girish Sahni, Director General of the Council of Scientific and Industrial Research (CSIR), says as a future prospect, the ‘solar power tree’ would be developed in a rotatable module, which would have a motorised mechanism to align itself with the movement of the sun during the day. Hence, it would be possible to harness more power over and above the current capacity.
This is where the Indian scientists seem to have learnt a lesson from the sunflower by applying that knowledge to harvest more solar energy. The sunflower is able to produce upwards of 10 percent more oil by tracking the sun; along the same lines, the 'solar tree' is able to harvest between 10-15 per cent more electricity.
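The two figures quoted above — 1.4 hectares per megawatt for a conventional layout, and a 10-15 per cent harvest gain from tracking — lend themselves to quick arithmetic. A sketch, where the 12.5 per cent midpoint gain is my assumption:

```python
HECTARES_PER_MW_CONVENTIONAL = 1.4  # land figure quoted in the article

def conventional_land_ha(megawatts):
    """Land needed for a conventional sequential solar panel layout."""
    return megawatts * HECTARES_PER_MW_CONVENTIONAL

def tracked_output(flat_output_mwh, gain=0.125):
    """Output with sun-tracking; the article cites a 10-15% gain, midpoint assumed."""
    return flat_output_mwh * (1.0 + gain)

land_for_100mw = conventional_land_ha(100)  # 140 hectares for 100 MW, conventionally
boosted = tracked_output(1000.0)            # a 1000 MWh baseline becomes ~1125 MWh
```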
Vardhan hopes there would soon be plantations of the ‘solar tree’ all over India. In future, the sun facing smiley of the sunflower could help brighten India’s energy prospects. | <urn:uuid:c5f671d8-5245-4f95-b6c8-e730ad74416c> | 3.953125 | 437 | News Article | Science & Tech. | 34.282474 | 95,541,372 |
Sample text for Physics of the. If one graphs the magnetic fields of. And behind this laser curtain one might envision a lattice made of "carbon nanotubes."
Radiocarbon Dating. 1. What is the basis of carbon-14 dating? Changes in the intensity of the cosmic radiation or in the strength of the earth's magnetic field.
How do scientists determine the age of. so carbon-14 dating. Other techniques include analyzing amino acids and measuring changes in an object's magnetic field.
While the moon has no global magnetic field. At Least a Billion Years Longer Than Thought, Says Study. An explanation of how carbon dating.
A Creationist Puzzle. Using radiocarbon dating, we find that carbon-14 dates are. carbon-14 atoms. So a stronger magnetic field in the past.
This is how carbon dating works: Carbon is. Precise measurements taken over the last 140 years have shown a steady decay in the strength of the earth's magnetic field.
Doesn't Carbon Dating Disprove the Bible? | Yahoo Answers
industry, genetics, carbon dating, forensics, and space exploration. moving in a magnetic field of strength 5.0 × 10⁻⁴ T. Inserting these values into the.
Radiocarbon dating uses carbon. that affect the levels of cosmic rays reaching the atmosphere, such as the fluctuating strength of the Earth's magnetic field.
Magnetism and Electromagnetism | Electronics Textbook - All About Circuits
DATING METHODS IN ARCHAEOLOGY. A good amount of carbon can be collected for C-14 dating. that the magnetic field of the earth is changing.
Carbon dating accuracy magnetic field pdf. We're not "standing against science" in stating this - in fact we're the ones.
They are used in carbon dating and other radioactive dating processes. The combination of a mass spectrometer and a gas. The magnetic field deflects.
Radiocarbon Dating and. now known as the carbon-14 (or radiocarbon). Changes in the Earth's magnetic field are believed to be responsible for long-term.
Carbon 14 And False Assumptions - Lamb and Lion Ministries
Unaware of the many fallacious assumptions used in the dating process, many people believe carbon-14 dating disproves the biblical timeline. Mike Riddle demonstrates.
Archaeomagnetic dating is the study and interpretation. direction of the local magnetic field at. techniques such as tree-ring dating or carbon-14.
How to Use Absolute Dating | Dating Tips
How exact is carbon dating used today? Doesn't carbon dating prove millions of years? It assumes that the earth's magnetic field has been the same.
Understanding Radiocarbon Dating - UNMASKING EVOLUTION
Radiocarbon dating considerations. The. Armed with the results of carbon-dating the. The known fluctuations in the strength of the earth's magnetic field match.
Radiocarbon dating is a radiometric dating method that uses the naturally occurring isotope carbon-14 to determine. 2015 — The Earth's magnetic field experiences.
This article will answer several of the most common creationist attacks on carbon dating. But how does one know that the magnetic field has fluctuated and.
Dating Sedimentary Rock - How do scientists determine the age of
Radiocarbon dating compares the amount of radioactive carbon-14 in organic. Carbon in the atmosphere fluctuates with the strength of earth's magnetic field and.
After some 400 years of relative stability, Earth's North Magnetic Pole has moved nearly 1,100 kilometers out into the Arctic Ocean during the last century and at its.
This is called radiocarbon dating, or 'carbon' dating for short. It is obvious from the above theory. Earth's magnetic field intensity has not changed.
Because of the earth's declining magnetic field. Carbon dating is based on the assumption that the amount of C14 in the atmosphere has always been the same.
Endangered Earth: Shift of Earth's Magnetic North Pole. On the left is a normal dipolar magnetic field. Using carbon dating and other technologies.
The Earth's magnetic field is an ever. If we clearly comprehended geomagnetism. and it would forever change the methods science uses for carbon dating.
How Accurate is Radiometric Dating? - Mission To America
INDUCTION HEATING ASSISTED INJECTION MOLDING OF. When energized in an AC magnetic field. carbon-coated iron nanoparticles are dispersed in a low-melting.
Start studying Chapter 39. Learn vocabulary. When alpha and beta rays pass through a magnetic field their paths change. The reason carbon dating works is that.
Detection Section (What's Your Deflection?). For radiocarbon dating, you need to know the amount of carbon-14 that the once-living. through a magnetic field.
Without a magnetic field. A STRANGE anomaly in Earth's magnetic field could see a pole reversal for the first time in 780,000. (using radiocarbon dating).
2.7 Radiocarbon dating and NMR imaging. and will integrate both carbon isotopes alike. This means that under the influence of the external magnetic field.
According to carbon dating of fossil. Changes in the Earth's magnetic field would change the deflection of cosmic-ray particles streaming.
RADIOCARBON DATING: and that the global average was 8.85 grams carbon/cm². where the Earth's magnetic field is dipping into the Earth and therefore does.
Carbon dating 'might be wrong by 10,000 years' - Telegraph
Art of Facts: The Carbon Dating Lie
The earth's magnetic field impacts climate: Danish study. "If changes in the magnetic field." As in the carbon-14 dating technique.
Many people believe that carbon dating disproves the. Carbon-14 is produced in the upper. the decay of the earth's magnetic field would have direct.
Here are some of the well-tested methods of dating used in the study of early humans: potassium-argon dating, argon-argon dating, carbon. Earth's magnetic field.
An extremely brief reversal of the geomagnetic field, climate
Paleomagnetism (or palaeomagnetism in the United Kingdom) is the study of the record of the Earth's magnetic field in rocks, sediment, or archeological materials.
Carbon Dating and the Bible. October 25. What is Carbon Dating? Carbon dating is a process used to determine the age. "the sun's underlying magnetic field."
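The snippets above all orbit the same underlying calculation: an uncalibrated radiocarbon age follows from first-order decay with a half-life of about 5,730 years. The sketch below deliberately ignores calibration for past variation in atmospheric carbon-14 (for example from changes in the Earth's magnetic field), which is exactly the correction these pages argue about.

```python
import math

C14_HALF_LIFE_YEARS = 5730.0  # commonly used carbon-14 half-life

def radiocarbon_age(fraction_remaining):
    """Uncalibrated age in years from the surviving C-14 fraction N/N0.

    Assumes simple first-order decay; real dating calibrates for past
    variation in atmospheric C-14, which this sketch ignores.
    """
    if not 0.0 < fraction_remaining <= 1.0:
        raise ValueError("fraction must be in (0, 1]")
    decay_constant = math.log(2) / C14_HALF_LIFE_YEARS
    return math.log(1.0 / fraction_remaining) / decay_constant

one_half_life = radiocarbon_age(0.5)    # 5730 years
two_half_lives = radiocarbon_age(0.25)  # 11460 years
```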
This magnetic field decay is not factored into Carbon 14 dating. As the magnetic field affects the amount of solar. The magnetic decay also shows a young earth or.
The Carbon Dating Lie. Carbon dating. earth's magnetic field and changes in the. variations of carbon-14 in the atmosphere. Carbon dating is. | <urn:uuid:425a57dc-3a6c-440d-a800-1bfac9c6d00d> | 3.078125 | 1,431 | Content Listing | Science & Tech. | 56.0835 | 95,541,387 |
Analysis of Nested Loops
How to identify the initializations of inner loops? In Chapter 4, it has been assumed that a loop is immediately preceded by its initialization. Because the syntax of most imperative programming languages does not satisfy this assumption, a method for identifying the initialization statements of inner loops needs to be defined. Identifying the initialization of an outermost loop is not as crucial because the analysis results of this loop are not used in analyzing other loops. Specifications of the whole nested construct can be written relative to the program state just before its start.
How to represent and utilize the analysis results of inner loops? A technique for analyzing flat loops has been described in Chapter 4. Can the same basic technique be used for outer loops (loops containing other loops)? What modifications, if any, need to be performed on the basic analysis technique to analyze outer loops?
How to modify the resulting specifications to facilitate Hoare-style verification? This problem can be further divided into two subproblems, which are explained using the nested construct shown in Figure 5.1. In this nested construct, let I_i and I_o be the invariants of the inner and outer loops, respectively.
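The questions above can be made concrete with a small, hypothetical nested loop (Python is used only for illustration; the function and names are not from the chapter):

```python
def row_sums(matrix):
    """Sum each row of `matrix`; the nesting mirrors the construct of Figure 5.1."""
    sums = []              # initialization of the OUTER loop
    # Outer loop; invariant I_o: `sums` holds the totals of all rows processed so far.
    for row in matrix:
        s = 0              # initialization of the INNER loop; an analyzer must
                           # identify this statement, not `sums = []`, as the
                           # inner loop's initialization
        # Inner loop; invariant I_i: `s` is the sum of the elements of `row` read so far.
        for x in row:
            s += x
        sums.append(s)
    return sums
```

Specifications of the whole construct would be written relative to the program state just before `sums = []`, and the analysis results of the inner loop (its invariant I_i and its effect on `s`) feed the analysis of the outer loop.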
Keywords: Outer Loop, Nested Loop, Symbolic Execution, Analysis Knowledge, Context Adaptation
| <urn:uuid:bf56ee3b-d53e-49ad-b0a3-f9346f16348d> | 2.984375 | 265 | Truncated | Software Dev. | 30.928329 | 95,541,396 |
After protein production, many proteins are equipped with attachments such as sugar residues in order to perform their tasks properly. This process is directly coupled to the transport across a membrane.
Many protein complexes are involved in protein synthesis. Through the ER translocon (green, blue and red), the newly synthesized protein is transported across the membrane (gray).
Graphic: Friedrich Förster / Copyright: MPI of Biochemistry
Employing various methods of structural biology, scientists at the Max Planck Institute (MPI) of Biochemistry in Martinsried near Munich, Germany, have now gained insights into the architecture of the protein complex (ER translocon) responsible for this process. The results of the joint project have now been published in Nature Communications.
Producing a protein is a highly intricate process for the cell and involves many individual steps. Depending on the purpose for which a protein is used, there are different sites for protein production: the cytoplasm or the endoplasmic reticulum (ER). The ER is separated by a membrane from its surroundings in the cytoplasm. Even before protein synthesis is completed, the proteins produced at the ER enter the interior of the ER via its membrane and are concomitantly modified through the attachment of sugar residues. Without these attachments, the proteins would not be able to fold properly and thus would not fulfill their functions in the cell.
Scientists of the research group “Modeling of Protein Complexes” have now described the architecture of the protein complex responsible for the transport and modification of the newly produced protein: the ER translocon. “It is located in the membrane of the ER, and this fact, together with its size and complex composition, has greatly hampered previous structural studies,” says Friedrich Förster, group leader at the MPI of Biochemistry, describing the initial situation. The structures of many subunits and their arrangement in the native ER translocon have thus far remained elusive.
It was not until cryoelectron tomography came into use that researchers could gain first insight into the architecture of the translocon. The sample is “shock frozen” to preserve its natural structure. Using an electron microscope, the scientists capture two-dimensional images of the object from different perspectives, from which they then reconstruct a three-dimensional image. Further investigations have made it possible to identify individual modules in the structure. Among them is the module that attaches the sugar residues to the newly produced protein.
“Based on this method, we will now try to determine the structure and location of other components of the ER translocon," says Förster. If the researchers know the individual structures of the ER translocon and their arrangement in the complex, they can indirectly draw conclusions about the precise functions and interactions of all components.
DOI: 10.1038/ncomms4072 (2013).
Contact: Anja Konschak | Max-Planck-Institut
| <urn:uuid:a2611260-5bf4-453d-b0bd-2f1aa86313aa> | 3.515625 | 1,245 | Content Listing | Science & Tech. | 37.156846 | 95,541,400 |
Every year, 1.2 million blue wildebeest migrate across East Africa, and despite their extraordinary numbers, they can still take you by surprise. “It’s like magic,” says Amanda Subalusky from Yale University. “One day, there are these vast empty plains of really tall grass. The next morning, you’ll wake up and the plains will be full of animals as far as you can see. There’ll be black specks across the landscape, like someone has taken a pepper shaker to the savannah.”
The wildebeest, accompanied by some 200,000 zebra and antelope, travel clockwise through the Serengeti, tracking pockets of rainfall and grazing on about 4,500 tons of grass every day. It’s the largest overland migration in the world. It’s also one of the deadliest. In the northern part of their route, the wildebeest must repeatedly cross the mighty Mara River—and many fail.
Natural history documentaries like to show crocodiles killing the wildebeest, but being cold-blooded, crocs have small appetites and are easily sated. For every wildebeest that they kill, 50 more drown on their own. “They’re anxious and hesitant when they get to rivers,” says Subalusky. “The herd piles up on the bank, and hours or days can go by before they get up the courage to cross. Once one dives in, the rest follow and this herd mentality takes over.”
Sometimes, everything goes wrong. The river might be especially deep or strong at that point. The opposite bank could be slippery or steep. The herd might be too big. Aggressive tourists can push them to more dangerous crossing points. “If there’s anything that keeps them from getting out on the other side, they’ll start to pile up. And even as they’re drowning on one side of the river, there are still wildebeest following them in.”
The result is an annual mass drowning. “We’ve seen up to 300 carcasses wedged into the river bank in some places,” says Subalusky. “It’s quite a sensory experience. The smell is potent for a quarter mile, and lasts for weeks. There’s a ranger station nearby and they really hate it when the drownings happen.”
She and her colleagues, including husband Chris Dutton and supervisor David Post, spent five years studying the migrating wildebeest, counting their corpses as they floated downstream. Through their sometimes grisly work, they've shown that these drowning herds account for a shockingly large proportion of the river's nutrients. Disney symbolized the circle of life with a lion cub being held aloft by a monkey. It might have done better with a mound of rotting, sodden wildebeest carcasses.
“Even when people noticed these drownings, it’s easy to underestimate the size and frequency of them,” says Subalusky. Her team estimated that around five mass drownings (defined as events involving at least 100 dead wildebeest) happen every year. Together, these events create around 6,000 carcasses and 1,100 tons of dead meat—roughly like dumping ten blue whales into the river every year.
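The equivalences in that estimate hold up to back-of-envelope arithmetic. A sketch, where the blue-whale mass (~110 tonnes for a large adult) is my assumption rather than a figure from the piece:

```python
# Figures from the article: ~5 mass drownings per year, 6,000 carcasses,
# 1,100 tons of dead meat. Blue-whale mass is an assumed value.
carcasses_per_year = 6000
meat_tonnes_per_year = 1100.0
BLUE_WHALE_TONNES = 110.0

avg_carcass_kg = meat_tonnes_per_year * 1000 / carcasses_per_year  # ~183 kg per wildebeest
whale_equivalents = meat_tonnes_per_year / BLUE_WHALE_TONNES       # ~10 blue whales per year
```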
To work out what happens to these bodies, the team analyzed water at various parts of the Mara, collected samples of fish and microbes, and used camera traps to count arriving scavengers. They found that the piles of bodies sustain life throughout the entire river basin.
Vultures and marabou storks, flying in from more than 100 kilometers away, roost among the dead and eat up to 9 percent of the wildebeest nutrients. At night, hyenas arrive. Insects lay eggs in the carcasses, creating food for mongooses and ibis. Within the water, fish swim up to eat their fill, getting up to half their diet from the wildebeest bodies. Crocodiles become fat. "I’ve got a photo of a juvenile crocodile basking on these wildebeest carcasses, and I think he looks so happy,” says Subalusky.
It takes a month for each wildebeest to decay away, but they continue to affect the Mara long afterwards. Subalusky’s team confirmed this by dragging three carcasses out of the river by hand, dissecting them, and analyzing the chemical composition of each body part.
They showed that the soft tissues like muscles and viscera make up just half of a wildebeest’s weight. Their bones make up the rest, and these take around 7 years to break down. During that time, they leach huge amounts of phosphorus into the river, fertilizing it. They also sustain mats of bacteria and algae that provide fish like catfish and barbels with up to a quarter of their diet.
“When you have 1.2 million animals going round an ecosystem, it’s hard to find anything that’s not impacted,” says Grant Hopcraft from the University of Glasgow, who also studies the migration. “The cool thing isn’t that the migration has an effect on the river, but that it’s a lasting effect.” A crossing could be over in an hour, but if just a few hundred animals die, they could change the river for months. “The long pulse of nutrients means that different species can take advantage,” says Hopcraft. “That leads to diversity and diversity leads to resilience.”
Other migrating animals provide valuable pulses of nutrients, too. Pacific salmon are the best example. They migrate from the oceans to their birth rivers to mate, spawn, and die. Their bodies directly feed eagles, seals, and bears. But their contribution extends well beyond the riverbanks because their carcasses, dragged into the forests by bears, also nourish mosses, trees, insects, songbirds, wolves, and more.
But the Mara wildebeest differ from the Pacific salmon in two important ways. The first is quantitative: they provide four times as much flesh by weight. The second is qualitative. “Unlike salmon, no part of the wildebeest life cycle requires that they die en masse in a river,” says Janice Brahney, a freshwater ecologist at Utah State University. “But the fact that [such deaths] occur nearly annually suggests that their impact on the ecosystem might be just as significant. It would be interesting to investigate what might happen should this influx of wildebeest nutrients not occur for a period of time. With climate change, how might the frequency of these events change in the future? Has the frequency of these events already changed?”
Subalusky certainly thinks so. These kinds of mass drownings used to be far more common, before humans slaughtered their way through the world’s large herds. Take the American bison. Today, there are around 30,000 individuals left in the wild. But before the 1800s, there were at least a thousand times as many—between 30 and 60 million of them. “We found references in Lewis and Clark’s journals to bison drownings in the US, with thousands of carcasses piling up as they crossed frozen rivers,” says Subalusky. “These were potentially regular events in the recent past.”
So, when we watch the wildebeest migration, we’re looking through a window into the global past—into a planet where mega-herds routinely traversed bodies of water and lost their lives in huge numbers, to the benefit of the surrounding ecosystem. And that, Subalusky says, should force us to reconsider our view of what those ecosystems ought to look like.
“What is a pristine river?” she asks. “We picture a crystal-clear babbling brook, but that has low levels of nutrients. The Mara suggests that maybe some ‘pristine’ rivers, back when large animals were really common, were full of dead bodies.”
| <urn:uuid:259341a5-fa62-478a-bd4a-7ad023c316d2> | 2.84375 | 1,746 | Truncated | Science & Tech. | 56.657726 | 95,541,401 |
Litter in the environment is an ongoing problem, but arguably one of the most pressing environmental challenges that we are faced with today is marine plastic debris. The two common sources marine debris originates from are:
1 land-based, which includes litter from beach-goers, as well as debris that has either blown into the ocean or been washed in with stormwater runoff; and
2 ocean-based, which includes garbage disposed at sea by ships and boats, as well as fishing debris, such as plastic strapping from bait boxes, discarded fishing line or nets, and derelict fishing gear.
While discarded fishing gear takes its toll on the marine environment by entangling marine life and destroying coral reefs, it only comprises an estimated 20% of all marine debris – a staggering 80% of all marine debris stems from land-based sources. This is not that surprising, considering that around 50% of all plastics are used to manufacture sing-use items which are discarded soon after they are first used.
A study published in 2017 estimated between 1.15 to 2.41 million tonnes of plastic enters the oceans via rivers annually, with peak months being between May and October. The top 20 contributing rivers, which according to the report are mostly found in Asia, contribute around 67% of all plastics flowing into the ocean from rivers around the world.
The demand for plastic has increased dramatically over the last 70 years. According to Plastic Ocean, 300 million tons of plastic is produced globally every year. Half of that plastic is used for disposable items that will only be used once. As a result, more than 8 million tons of discarded plastic ends up in our oceans every single year. Once it is there it doesn’t readily go away. The Worldwatch Institute estimates that the average American or European person typically uses 100 kilograms of plastic every year, most of which consists of packaging, and while it is estimated that Asians currently only use an average of 20 kilograms per person, this is expected to rise due to economic growth in the region.
Tons of plastic ends up in our oceans every year, according to a report released by the Worldwatch Institute in 2015.
estimated number of plastic particles currently floating around in world’s oceans.
number of estimated losses per year associated with marine plastic debris due to the negative impact on marine ecosystems.
One of the characteristics that make plastic so popular for use in a wide range of industries is that it is extremely durable and long-lasting. However, this trait also makes it persist in the environment.
Plastics are photodegradable – meaning that they break down into smaller and smaller pieces when exposed to sunlight. Because the temperature they are exposed to in the ocean is much lower than that on land, the breakdown process takes much longer in the marine environment.
But while plastic debris is slowly breaking down in the ocean, more and more plastic is being tossed or washed into the sea – at a rate far faster than what it is breaking down.
Consequently, there is a LOT of plastic in the ocean – it comes in all shapes, forms, and sizes, and is found floating on the surface, suspended in the water column or littering the ocean floor, and eventually washes up on beaches around the world, wreaking havoc with marine life in all these ecosystems.
According to a scientific report released by A Plastic Ocean, marine plastic debris has impacted over 600 marine species from the bottom to the top of the food chain, many dying a slow agonizing death through entanglement or ingesting plastic. According to Greenpeace’s report Plastic Debris in the World’s Oceans: “At least 267 different species are known to have suffered from entanglement or ingestion of marine debris including seabirds, turtles, seals, sea lions, whales, and fish. The scale of contamination of the marine environment by plastic debris is vast. It is found floating in all the world’s oceans, everywhere from polar regions to the equator.”
Large volumes of this plastic tend to accumulate within five oceanic ‘garbage patches’, also known as 5 gyres, located in the Atlantic, Indian and Pacific Oceans. The largest of these is the Great Pacific Garbage Patch which stretches across the Pacific Ocean between Japan and North America, with the greatest concentration of garbage lying in the stretch of ocean between California and Hawaii where scientists estimate concentrations of plastic to be around 480,000 pieces per square kilometre. While large pieces of plastic do accumulate in the gyre, rather than being an island of plastic, in reality this is more like a plastic soup, consisting mostly of tiny bits of invisible microplastic. A 2001 survey conducted by Captain Moore, found an average of 334,271 tiny bits of plastic for every square kilometer surveyed. The recovered plastic weighed approximately six times more than plankton netted in the same survey.
“There is no island of plastic, what exists is more insidious. What exists is a kind of plastic smog”- Craig Leeson, Director, A Plastic Ocean
Plastics and polystyrene foam (Styrofoam) comprise 90% of all marine debris, with single use food and beverage containers being one of the most common items found in ocean and coastal surveys. Plastic debris in the ocean varies greatly in size, from tiny microplastics that are invisible to the naked eye to large pieces of plastic debris, such as discarded fishing gear, which can extend for meters or in some cases even kilometers. According to the Ocean Conservancy’s International Coastal Cleanup 2017 Report, if all the plastic bottles collected during the 2016 International Coastal Cleanup were stacked they would have stood 372 times higher than Dubai’s towering Burj Khalifa (828 meters high); all the plastic straws collected off beaches around the world would have stood 145 times higher than the One World Trade Center in New York City (541 meters); while all the plastic utensils collected would have stood 82 times higher than the Kuala Lumpur’s Petronas Towers (452 meters), and all the cigarette lighters collected would have stood 10 times higher than the Eiffel Tower in Paris (324 meters).
These tiny pieces of plastic, which scientists refer to as microplastic, are now recognized as a major threat to wildlife and to human health. Scientific research surveys have revealed that microplastics are widespread throughout the world’s oceans, and are having a negative impact on marine life, as well as the health of humans who rely on seafood as a staple protein source. Polystyrene beads and plastic pellets are not easily digested so tend to accumulate in the digestive tract of marine animals who consume them. This can result in the animal feeling full, causing it to stop feeding, leading to emaciation and ultimately death from starvation, or it can cause an intestinal blockage that can also be fatal. When a predator feeds on a fish that has a gut full of undigested polystyrene or plastic, this is passed on to the predator who in most cases will also have problems digesting it.
The below photo shows all of the pieces of plastic that were removed from the stomach of a single north fulmar, a seabird, during a necropsy at the National Wildlife Health Lab. (Photo Credit: Carol Meteyer, USGS). How does this happen?
Furthermore, plastics and polystyrene are made up of toxic chemicals, including petroleum, which may be released as the gastric juices try to digest it, and are absorbed into the body tissue. These toxins also leach into the water column as plastics break down, contaminating filter feeding organisms who ingest the water while feeding. But the problems don’t end there. Plastics are known to accumulate persistent organic pollutants (POPs), including Polychlorinated Biphenyls (PCBs) and DDT that are known to disrupt the endocrine system and affect development, at concentrations of a hundred thousand to a million times greater than naturally found in seawater. These contaminants are stored in the body fat and organs of animals and are passed on to predators that feed on them, becoming more concentrated in the tissues of organisms higher up the food chain.
Long living top predators continue to accumulate more and more toxins in their systems over time. Studies have revealed that marine top predators, such as killer whales and polar bears, are amongst the most contaminated animals on Earth. These contaminants reduce fertility and breeding success, and compromise the affected animal’s immune system, making them more vulnerable to disease and infection.
Microfibres from clothing and textiles are another key source of microplastics in our oceans. When we wash our clothes, fibres are shed into the washing water. Due to their minute size, these fibres pass through wastewater treatment plants and end up in the ocean.
“Plastic Soup Foundation research team literally counted these tiny fibres and discovered that actually up to 17 million may be released in every wash,” says Maria Westerbos, director of the Plastic Soup Foundation, a non-profit campaign group.
Microfibres have been found in many different in many different ecosystems, including freshwater systems, ocean waters, ocean sediments, and beaches around the world, indicating it is a worldwide problem that is possibly growing.
According to a report in The Overtake, “The International Union for Conservation of Nature (IUCN) estimates that 35% of all primary plastics which end up in our oceans have come from textiles, making it the largest source of microplastics, followed by those which come from the degradation of car tyres (28%).”
We need to tackle the problem of marine debris head on. It’s not just an issue for environmentally conscious, it is an issue that ultimately affects human health. Man is a top predator that feeds on a variety of ocean fish, shellfish and other marine species. We face the same risks as the killer whale and polar bear. While any plastic or polystyrene pellets that may have been clogging the gut of the fish that is nicely presented on our dinner plate have been long removed, the toxic contaminants originating from that debris remain stored in the flesh we are about to eat. Food for thought indeed.
Clearly this is a mammoth problem and one that needs to be addressed as a matter of urgency. The old saying ‘prevention is better than cure’ rings true here. One obvious solution is to switch from plastic and polystyrene packaging to environmentally friendly alternatives, such as compostable plant fiber packaging made from natural materials that readily break down in the environment without causing any harm, and which contain no harmful chemicals. Many cities and countries around the world have implemented stricter legislation with regard to plastic shopping bags, with some banning them outright. Perhaps we need to do the same for plastic bottles, straws, etc. Consumers should be proactive and opt for reusable and/or refillable containers rather than disposable packaging wherever possible. This will not only save suppliers, and by extension shoppers, money, it will also benefit the environment and everything that is dependent on the environment for survival.
“Even if you don’t live near the ocean, the chances are your plastic garbage has found its way to the sea” – Dr Sylvia Earle, Marine Biologist and Explorer”
Because it is so tough and durable, plastic can be reused or it can be recycled. Popular musician and environmental advocate, Pharrell Williams, is the co-owner of G-Star RAW, a sustainable clothing brand that recently launched the ‘RAW for the Oceans’ collection that recycles single use plastic containers collected from beaches all over the world into stylish apparel. The ‘RAW for the Oceans’ fashion line has collaborated with Bionic Yarn, another company that Williams is both a partner and Creative Director of, which uses recycled ocean plastics to make sustainable clothing yarn. This creative approach provides a sustainable resource — there is plenty of plastic in the sea — while at the same time tackles the humungous problem of ocean plastics by putting this practically unlimited resource to good use.
Philanthropist, environmental advocate, and entrepreneur, Richard Branson, has proposed that we implement a deposit refund system for plastic bottles. Offering an incentive for users to return plastic bottles for recycling makes absolute sense, especially these are one of the most prolific items found on beaches around the world.
While reducing or eliminating plastic packaging may help to stem the flow of plastics at the source, we still need to take steps to prevent plastic that is already in the environment from flowing into the ocean, and to clean up the vast amount of plastic littering beaches around the world, as well as the plastic currently swirling around ocean gyres.
Every year, the Ocean Conservancy coordinates the International Coastal Cleanup in collaboration with environmental organisations, schools and other community initiatives around the world, encouraging volunteers to take part in local beach cleanups to rid the environment of trash. This can be stepped up at a local level, where individuals, communities and organisations can get more actively involved in cleaning up their local beaches to help keep them free of plastic and other debris.
Right: collecting plastic debris and water samples from Kamilo Beach, South of Big Island Hawaii. Kamilo Beach is approximately 1,500 feet (460 m) long and is located on the remote southeast coast of the Kaʻū District on the island of Hawaii. There are no paved roads to the beach. (Photo Credit: Cesar Harada)
Some innovative individuals have proposed other solutions for removing plastic from our oceans, including deploying large floating booms to trap and catch plastic designed by a Dutch entrepreneur when he was still a teenager, and floating sea bins designed by two surfers that can be used to remove plastic from harbours, for example.
While these are all indeed innovative and creative solutions to an ever growing problem, they will in all likelihood not be enough to stem the tide of plastic entering and swirling around our oceans. Nor do they address the problem of microplastics and tiny plastic microbeads that are now having a large impact. A committed multi-pronged approach is urgently needed. We need to take action now.
Who else is taking action?
This article was co-written by Environmental Communication Consultant, Jenny Griffin BSc (Hons) Degree in Marine Biology, Diploma in Nature Conservation); and Janaya Wilkins, Marine Conservation Enthusiast, and SLO active’s Founder.
CSIRO. Marine debris: Sources, distribution and fate of plastic and other refuse — and its impact on ocean and coastal wildlife.
Eric Dewailly, Albert Nantel, Jean-P. Weber, and Francois Meyer. High levels of PCBs in breast milk of Inuit women from Arctic Quebec. Bull. Environ. Contam. Toxicol. (1989) 43:641-646
Christopher Green and Susan Jobling. A Plastic Ocean: The Science behind the Film.
Greenpeace. Plastic Debris in the World’s Oceans
Laurent CM Lebreton, Joost van der Zwet, Jan-Willem Damsteeg, Boyan Slat, Anthony Andrady & Julia Reisser. River plastics emissions to the world’s oceans. Nature Communications 8, Article number: 15611 (2017) DOI:10.1038/ncomms15611
Moore, C.J., Moore, S.L., Leecaster, M.K., and Weisberg, S.B. A Comparison of Plastic and Plankton in the North Pacific Central Gyre. Marine Pollution Bulletin 42, 1297-1300. (2001) DOI:10.1016/S0025-326X(01)00114-X
Ocean Conservancy. International Coastal Cleanup 2017 Report.
Viola Pavlova, Jacob Nabe-Nielsen, Rune Dietz, Christian Sonne, Volker Grimm. Allee effect in polar bears: a potential consequence of polychlorinated biphenyl contamination. Proc. R. Soc. B; 30 November 2016; DOI: 10.1098/rspb.2016.1883.
UNEP (2016) Marine plastic debris and microplastics – Global lessons and research to inspire action and guide policy change. Nairobi, Kenya: United Nations Environment Programme.
Van Franeker, J.A. and Law, K.L. (2015) ‘Seabirds, gyres and global trends in plastic pollution’, Environmental Pollution, 203, pp. 89-96.
Worldwatch Institute. Global Plastic Production Rises, Recycling Lags (2015)
Wright, S.L., Thompson, R.C. and Galloway, T.S. (2013b) ‘The physical impacts of microplastics on marine organisms: a review.’, Environmental Pollution, 178, pp. 483-492.
Your account will be closed and all data will be permanently deleted and cannot be recovered. Are you sure? | <urn:uuid:780c6bb6-b456-4db6-8511-5b6b46eddb98> | 4 | 3,508 | Nonfiction Writing | Science & Tech. | 42.271879 | 95,541,410 |
The point of no return: In astronomy, it’s known as a black hole — a region in space where the pull of gravity is so strong that nothing, not even light, can escape. Black holes that can be billions of times more massive than our sun may reside at the heart of most galaxies. Such supermassive black holes are so powerful that activity at their boundaries can ripple throughout their host galaxies.
This image, created using computer models, shows how the extreme gravity of the black hole in M87 distorts the appearance of the jet near the event horizon. Part of the radiation from the jet is bent by gravity into a ring that is known as the 'shadow' of the black hole.
Image: Avery E. Broderick (Perimeter Institute & University of Waterloo)
An accretion disk (orange) of gas and dust surrounds super-massive black holes at the center of most galaxies. These disks of galactic matter emit magnetic beams (pink lines) that spew out from the center of the black hole, drawing matter out from both ends in high-powered jets.
Image: NASA and Ann Field (Space Telescope Science Institute)
Now, an international team, led by researchers at MIT’s Haystack Observatory, has for the first time measured the radius of a black hole at the center of a distant galaxy — the closest distance at which matter can approach before being irretrievably pulled into the black hole.
The scientists linked together radio dishes in Hawaii, Arizona and California to create a telescope array called the “Event Horizon Telescope” (EHT) that can see details 2,000 times finer than what’s visible to the Hubble Space Telescope. These radio dishes were trained on M87, a galaxy some 50 million light years from the Milky Way. M87 harbors a black hole 6 billion times more massive than our sun; using this array, the team observed the glow of matter near the edge of this black hole — a region known as the “event horizon.”
“Once objects fall through the event horizon, they’re lost forever,” says Shep Doeleman, assistant director at the MIT Haystack Observatory and research associate at the Smithsonian Astrophysical Observatory. “It’s an exit door from our universe. You walk through that door, you’re not coming back.”
Doeleman and his colleagues have published the results of their study this week in the journal Science.
Jets at the edge of a black hole
Supermassive black holes are the most extreme objects predicted by Albert Einstein’s theory of gravity — where, according to Doeleman, “gravity completely goes haywire and crushes an enormous mass into an incredibly close space.” At the edge of a black hole, the gravitational force is so strong that it pulls in everything from its surroundings. However, not everything can cross the event horizon to squeeze into a black hole. The result is a “cosmic traffic jam” in which gas and dust build up, creating a flat pancake of matter known as an accretion disk. This disk of matter orbits the black hole at nearly the speed of light, feeding the black hole a steady diet of superheated material. Over time, this disk can cause the black hole to spin in the same direction as the orbiting material.
Caught up in this spiraling flow are magnetic fields, which accelerate hot material along powerful beams above the accretion disk The resulting high-speed jet, launched by the black hole and the disk, shoots out across the galaxy, extending for hundreds of thousands of light-years. These jets can influence many galactic processes, including how fast stars form.
‘Is Einstein right?’
A jet’s trajectory may help scientists understand the dynamics of black holes in the region where their gravity is the dominant force. Doeleman says such an extreme environment is perfect for confirming Einstein’s theory of general relativity — today’s definitive description of gravitation.
“Einstein’s theories have been verified in low-gravitational field cases, like on Earth or in the solar system,” Doeleman says. “But they have not been verified precisely in the only place in the universe where Einstein’s theories might break down — which is right at the edge of a black hole.”
According to Einstein’s theory, a black hole’s mass and its spin determine how closely material can orbit before becoming unstable and falling in toward the event horizon. Because M87’s jet is magnetically launched from this smallest orbit, astronomers can estimate the black hole’s spin through careful measurement of the jet’s size as it leaves the black hole. Until now, no telescope has had the magnifying power required for this kind of observation.
“We are now in a position to ask the question, ‘Is Einstein right?’” Doeleman says. “We can identify features and signatures predicted by his theories, in this very strong gravitational field.”
The team used a technique called Very Long Baseline Interferometry, or VLBI, which links data from radio dishes located thousands of miles apart. Signals from the various dishes, taken together, create a “virtual telescope” with the resolving power of a single telescope as big as the space between the disparate dishes. The technique enables scientists to view extremely precise details in faraway galaxies.
Using the technique, Doeleman and his team measured the innermost orbit of the accretion disk to be only 5.5 times the size of the black hole event horizon. According to the laws of physics, this size suggests that the accretion disk is spinning in the same direction as the black hole — the first direct observation to confirm theories of how black holes power jets from the centers of galaxies.
The team plans to expand its telescope array, adding radio dishes in Chile, Europe, Mexico, Greenland and Antarctica, in order to obtain even more detailed pictures of black holes in the future.
Christopher Reynolds, a professor of astronomy at the University of Maryland, says the group’s results provide the first observational data that will help scientists understand how a black hole’s jets behave.
“The basic nature of jets is still mysterious,” Reynolds says. “Many astrophysicists suspect that jets are powered by black hole spin ... but right now, these ideas are still entirely in the realm of theory. This measurement is the first step in putting these ideas on a firm observational basis.”
This research was supported by the National Science Foundation.
Sarah McDonnell | EurekAlert!
Subaru Telescope helps pinpoint origin of ultra-high energy neutrino
16.07.2018 | National Institutes of Natural Sciences
Nano-kirigami: 'Paper-cut' provides model for 3D intelligent nanofabrication
16.07.2018 | Chinese Academy of Sciences Headquarters
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
16.07.2018 | Physics and Astronomy
16.07.2018 | Life Sciences
16.07.2018 | Earth Sciences | <urn:uuid:6205cc6f-1d80-422d-b369-8b9221a234ad> | 4.03125 | 2,029 | Content Listing | Science & Tech. | 43.164454 | 95,541,415 |
Strong sea surface cooling in the eastern equatorial Pacific and implications for Galápagos Penguin conservation
Karnauskas, Kristopher B.; Jenouvrier, Stephanie; Brown, Christopher W.; Murtugudde, Raghu
The Galápagos is a flourishing yet fragile ecosystem whose health is particularly sensitive to regional and global climate variations. The distribution of several species, including the Galápagos Penguin, is intimately tied to upwelling of cold, nutrient-rich water along the western shores of the archipelago. Here we show, using reliable, high-resolution sea surface temperature observations, that the Galápagos cold pool has been intensifying and expanding northward since 1982. The linear cooling trend of 0.8°C/33 yr is likely the result of long-term changes in equatorial ocean circulation previously identified. Moreover, the northward expansion of the cold pool is dynamically consistent with a slackening of the cross-equatorial component of the regional trade winds—leading to an equatorward shift of the mean position of the Equatorial Undercurrent. The implied change in strength and distribution of upwelling has important implications for ongoing and future conservation measures in the Galápagos.
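The abstract's headline number is a least-squares linear trend: a cooling of 0.8°C over the 33-year satellite SST record (1982–2014). As a rough illustration of how such a trend is computed from a monthly SST series, here is a minimal sketch using synthetic data with an imposed 0.8°C cooling; the values and noise level are illustrative only, not the paper's data.

```python
import numpy as np

def trend_over_record(sst):
    """Least-squares linear trend of a series, expressed as the total
    change (here in degrees C) over the full length of the record."""
    t = np.arange(len(sst))
    slope, _ = np.polyfit(t, sst, 1)   # degrees C per time step
    return slope * (len(sst) - 1)      # total change over the record

# Synthetic monthly SSTs for a 33-year record (1982-2014), with an
# imposed cooling of 0.8 degrees C plus observational-style noise.
rng = np.random.default_rng(0)
n_months = 33 * 12
cooling_per_month = -0.8 / (n_months - 1)
sst = 24.0 + cooling_per_month * np.arange(n_months) \
           + rng.normal(0.0, 0.3, n_months)

trend = trend_over_record(sst)
print(f"linear trend over 33 yr: {trend:.2f} degrees C")
```

With a few hundred monthly samples, the fitted trend recovers the imposed cooling to within a few hundredths of a degree, which is why long, high-resolution records are needed to separate a trend of this size from interannual variability such as ENSO.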
Author Posting. © American Geophysical Union, 2015. This article is posted here by permission of American Geophysical Union for personal use, not for redistribution. The definitive version was published in Geophysical Research Letters 42 (2015): 6432–6437, doi:10.1002/2015GL064456.
Suggested Citation: Karnauskas, Kristopher B., Jenouvrier, Stephanie, Brown, Christopher W., Murtugudde, Raghu, "Strong sea surface cooling in the eastern equatorial Pacific and implications for Galápagos Penguin conservation", Geophysical Research Letters 42 (2015): 6432–6437, doi:10.1002/2015GL064456, https://hdl.handle.net/1912/7538
The increased storage of carbon in soil could help to slow down rising atmospheric carbon dioxide concentrations.
The Department of Energy-sponsored free-air carbon dioxide-enrichment, or FACE, experiment officially ended in 2009. The conclusion and final harvest of the ORNL FACE experiment provided researchers with the unique opportunity to cut down entire trees and to dig in the soil to quantify the effects of elevated carbon dioxide concentrations on plant and soil carbon.
In a paper published in Global Change Biology, Colleen Iversen, ORNL ecosystem ecologist, and her colleagues quantified the effects of elevated carbon dioxide concentrations on soil carbon by excavating soil from pits nearly three feet deep. The researchers observed an increase in soil carbon storage under elevated carbon dioxide concentrations, a result that differed from those of the other forest FACE experiments.
Researchers found the increase in carbon storage even in deeper soil.
“Under elevated carbon dioxide, the trees were making more, deeper roots, which contributed to the accumulation of soil carbon,” Iversen said.
Iversen pointed out that processes such as microbial decomposition and root dynamics change with soil depth, and information on processes occurring in deeper soil will help to inform large-scale models that are projecting future climatic conditions.
Co-authors on the paper, “Soil carbon and nitrogen cycling and storage throughout the soil profile in a sweetgum plantation after 11 years of carbon dioxide-enrichment” are ORNL’s Charles Garten and Richard Norby, FACE project leader; and Chapman University’s Jason Keller.
The research was sponsored by the DOE Office of Science. ORNL is managed by UT-Battelle for the Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov.
Caption: From the left, ORNL's Joanne Childs, Colleen Iversen and Rich Norby dig soil pits and excavate roots and soil at the FACE site.
Emma Macmillan | Newswise Science News
Physics is the branch of science which seeks to understand the properties and behaviors of the world around us at all levels of scale. Physics seeks to answer questions such as the following.
What are the constituents that make up the world around us and how do they interact?
As physicists, we observe, we experiment, and we build conceptual models. We build models on many different levels. We hope to develop theories, grand models that embrace a huge range of phenomena. Our theories cannot be proved absolutely. If our confidence in a theory is great, then its key features are referred to as laws.
Models are made by people for people. One must be careful not to confuse the model with the reality. A model is just a simple, manageable representation. Models can change as our knowledge changes, but the underlying reality presumably does not change. The medium we use to create these models is usually mathematics. Different physical models have different ranges of applicability, they have been verified to work on different scales. The science of Physics is often divided into Classical Physics and Modern Physics.
Classical Physics is a model of the macroscopic world around us. All the laws of classical physics were known by the end of the 19th century. The known properties of matter at the end of the 19th century were mass and charge. The smallest constituents were atoms. The known interactions were gravity, modeled by Newton's law of gravitation, electromagnetic interactions, modeled by Maxwell's equations, and contact forces arising from the requirement that "atoms need their space". Consequences of the interactions were described by Newton's laws of motion, which predict how matter behaves when acted on by forces. Statistical physics and thermodynamics were developed for describing systems with a large number of degrees of freedom.
Classical physics works well describing and predicting almost all everyday phenomena. In this class we will study Classical Physics.
Most disciplines have their own special vocabulary. The vocabulary of physics includes words whose meaning in everyday language may depend on the context in which they are used. In physics, these words have only one precisely defined, context-independent meaning.
Take, for example, the word "force". In everyday language we might say "the force of the wind knocked the tree over", "she was a force in local politics", or "they forced the issue". Only the first of these three sentences uses the word "force" to refer to the concept it describes in physics.
Words such as position, velocity, acceleration, energy, power, etc, all have context-dependent meaning in everyday language. It is important that when communicating about physics, you always use the precise, context-independent definition of these words. Words whose definitions you should keep in mind appear in bold red font in the Web material.
The result of every measurement has two parts, a number and a unit. The number is the answer to "How many?" and the unit is the answer to "Of what?". Units are standard quantities such as a second, a meter, a mile. The most widely used units today are those of the international system, abbreviated SI (Système International d'Unités). Examples of SI units are the meter (m) for length, the second (s) for time, and the kilogram (kg) for mass. The unit of an observable carries information about the dimension of the observable. If a quantity is measured in meters, then its dimension is length (L); if it is measured in kilograms, then its dimension is mass (M).
SI base units:
- meter (m) for length
- kilogram (kg) for mass
- second (s) for time
- ampere (A) for electric current
- kelvin (K) for thermodynamic temperature
- mole (mol) for amount of substance
- candela (cd) for luminous intensity
The 20 SI prefixes used to form decimal multiples and submultiples of SI units are: yotta (Y, 10^24), zetta (Z, 10^21), exa (E, 10^18), peta (P, 10^15), tera (T, 10^12), giga (G, 10^9), mega (M, 10^6), kilo (k, 10^3), hecto (h, 10^2), deka (da, 10^1), deci (d, 10^-1), centi (c, 10^-2), milli (m, 10^-3), micro (μ, 10^-6), nano (n, 10^-9), pico (p, 10^-12), femto (f, 10^-15), atto (a, 10^-18), zepto (z, 10^-21), and yocto (y, 10^-24).
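As an illustration of how prefixes act as pure decimal multipliers on a base unit, here is a minimal sketch; the dictionary, the prefix selection, and the function name are my own, not part of the course material:

```python
# Minimal sketch: SI prefixes are decimal multipliers applied to a base unit.
# Only a few prefixes are shown; names here are illustrative only.

SI_PREFIXES = {
    "G": 1e9,   # giga
    "M": 1e6,   # mega
    "k": 1e3,   # kilo
    "c": 1e-2,  # centi
    "m": 1e-3,  # milli
    "u": 1e-6,  # micro
    "n": 1e-9,  # nano
}

def to_base_units(value, prefix):
    """Convert a prefixed quantity to the base unit, e.g. 5 km -> 5000 m."""
    return value * SI_PREFIXES[prefix]

print(to_base_units(5, "k"))    # 5 km -> 5000.0 m
print(to_base_units(250, "n"))  # 250 ns expressed in seconds
```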
Links: Reference (SI units)
What are the current theories as to the source of the new space that accounts for dark energy?
What do you mean by "new space"?
As i understand it, since nothing can travel faster than light and since some galaxies are traveling away from us faster than light, then there must be new space coming into existence between galaxies.
Some galaxies are indeed receding from us at speeds surpassing that of light, but that's true of expansion in general, not only expansion resulting from dark energy. As far as space coming into existence, in what sense do you think space is a "thing" that "comes into existence"?
Space is not believed to be a substance or material that needs a "source".
Instead, distances between stationary points increase at speeds proportional to their size, that is by a certain small percentage per unit of time. It's not like ordinary MOTION we are used to, because nobody gets anywhere by it. Nobody approaches any goal or destination, everybody just becomes farther apart. Distances grow.
So you could say that geometry is dynamic. 1915 GR told us that. The network of geometric relations can change and can interact with matter. Dynamic geometry is governed by an equation first written down in 1915. According to that equation the percentage rate of distance growth should gradually decline.
As far as we can tell from observations, the rate of distance growth has in fact been declining, the equation (called Einstein GR equation) fits the data remarkably well!
And the GR equation has a CONSTANT term in it called "cosmological constant" whose effect is to SLOW the gradual decline. Mathematically the constant is one over a certain area, an "inverse area" if you like---technically it is called a curvature (a quantity measured in units of inverse area.) This curvature constant in Einstein's GR equation was always, for no good reason, assumed to be zero and then, in 1998, was discovered not to be zero and was measured to have a small positive value.
Probably because it sounds jazzy and excites interest, the constant acquired the name "dark energy" and its effect was described as ACCELERATION. But actually all we know is there's a nonzero constant in the GR equation that slows the gradual decline in the percentage rate of distance growth.
The distance growth rate used to be 1% per million years (that was a long time ago.)
And it has always been declining, so after a while it was 1/2% per million years.
And now, as near as we can tell, it is 1/144% per million years, and still declining.
According to the standard cosmic model that is used in professional cosmology it is expected to continue tapering off, but not to zero!
Because of the effect of cosmological constant, the decline is expected to level off at around 1/173% per million years.
This is based on the latest observation results (from Planck satellite) that came out in March 2013.
We do not know of any "dark energy" underlying the cosmo constant. That part could just be a fiction.
No reason to speculate about it. Operationally, in terms of what we MEASURE, we can measure that the percentage rate of distance growth is declining on a track that suggests it will NOT go all the way to zero, but instead will taper off to around a steady 1/173% per million years.
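The quoted growth rates translate directly into a Hubble time and a Hubble radius (the distance at which recession reaches c). A quick sketch of that arithmetic, my own rather than from the thread:

```python
# Sketch of the arithmetic implied above (my own, not from the thread):
# a growth rate of p percent per million years means distances grow by
# a fraction p/100 per Myr, so the Hubble time is 100/p Myr and the
# Hubble radius is 100/p million light-years.

def hubble_radius_gly(percent_per_myr):
    """Hubble radius in billions of light-years (Gly) for a distance
    growth rate given in percent per million years."""
    hubble_time_myr = 100.0 / percent_per_myr
    return hubble_time_myr / 1000.0  # million light-years -> Gly

print(round(hubble_radius_gly(1 / 144), 1))  # current rate   -> 14.4 Gly
print(round(hubble_radius_gly(1 / 173), 1))  # long-term rate -> 17.3 Gly
```

The 17.3 Gly figure is the same distance that comes up later in the thread as the threshold beyond which recession exceeds c.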
By "new space" i guess i mean the changes in distances that are occurring.
I understand the decrease in the rate of expansion, but it is also true that a distant galaxy's distance from us grew less last year than it will grow this year. I assume that can be called acceleration.
Sure you can talk about what happens to a particular distance in those terms. The percentage growth rate is declining, but it is declining SLOWLY. So what we see if we fix attention on one particular distance, say to some particular galaxy we choose, is like a bank account growing at (nearly) constant percentage rate of interest.
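The bank-account analogy can be made concrete; this sketch (my own, with illustrative numbers) compounds one fixed distance at the quoted rate:

```python
# Sketch of the compound-interest analogy (illustrative, not from the thread):
# a single distance growing at a constant percentage rate per Myr compounds
# exactly like a bank balance at a fixed interest rate.

def grow(d0_gly, percent_per_myr, myr):
    """Distance (in Gly) after `myr` million years of compounding."""
    return d0_gly * (1 + percent_per_myr / 100.0) ** myr

# A 10 Gly distance after 1000 Myr at the current rate of 1/144 % per Myr:
print(round(grow(10.0, 1 / 144, 1000), 2))  # -> 10.72
```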
So maybe you can clarify your question. What are you asking?
The cosmological constant (jazzy name: "dark energy") doesn't make much difference. Before 1998 we thought it was zero and we still understood that distances between CMB stationary objects were growing at a certain percent rate.
That rate now measured 1/144% per My would have been about the same rough size if the cosmo const had been zero.
So cosmo const. or "dark energy" is not the CAUSE of the geometry change governed by GR equation.
GR equation is our law of geometry that describes how distances behave (and angles, areas, volumes). It was already known in 1923 that if you started with distances expanding in a uniform way then they will CONTINUE though perhaps at a declining rate, as indeed was observed later to be the case.
Since geometry we now know is dynamic and governed by GR equation (not by Euclid) we have no right to expect that distances between CMB stationary objects will remain constant. THIS DOES NOT REQUIRE A SOURCE. Distances can simply grow according to a pattern by which, as I said earlier, nobody gets anywhere. The GR equation explains why in a weak gravity field such as we have on earth the spatial geometry is ALMOST Euclidean (and if you include space-time then almost Lorentzian i.e. according to Special Rel) So the GR equation explains why we have the kind of local geometry we see about us in which distances seem not to change. And it also describes something analogous to "momentum" by which, once a general expansion process gets started (say around big bang time) then it will CONTINUE by its own "momentum-like" rules. That's how geometry behaves, according to the modern 1915 theory of it. (which is also our accepted law of gravity, gravity and geometry are the same)
so maybe the answer to your question is THERE IS NO SOURCE because geometry is not like water or air that has a source (it's a bunch of relationships governed by its own type of law)
or maybe the answer to your question is THE SOURCE IS THE GR EQUATION the law of how geometry behaves, in conjunction with matter, once it gets started---the rule according to which geometric change continues
or maybe you would prefer to go way back to the start of expansion, which might have been a big bounce combined with a brief period of inflation (that we do not have a confirmed explanation for, only conjectures about) or some other scenario. If you like you could say that speculative mental picture is the "source".
Personally I think it is clearer to say there is no source of "new" space. I see no way of distinguishing between "new" and "old" space. I can't separate out regions and say this piece is new and this is old. So I think it best to avoid saying words like "source of new space". those are the wrong words to use because they give the wrong ideas---make people think in pictures that do not match reality.
That's a bit like asking "What is the source of new time?"
[Don't know why nobody asks THAT!]
Nothing wrong with either question, by the way.
I sense dark energy is currently a bit less popular than when I started in these forums and "dynamic geometry" more so. Is that because 'dark energy' can't be experimentally verified? [see below]
Wikipedia hasn't changed its descriptions as far as I can tell, but this snippet I found interesting:
Can anyone provide insights on just what is being measured? Any results?
Wiki also says:
'independent of its actual nature'...that sounds crazy...Has 'negative pressure' aka' repulsive gravity' been divorced from 'dark energy'??
In another discussion in these forums, Chalnoth answered this question:
[I did not record the discussion link.]
Marcus, what do you think of such 'evidence' .....seems like both cosmological expansion and this Sachs Wolfe redshift relies on some 'hand waving' [aka 'dark energy'] to explain??
How about if i call the new space the "new distance" instead of new space or maybe better said the "new difference in distance" between galaxies which would actually be composed of many little new distances that would add up to the new distance (new difference in distances)? I just thought there needed to be an explanation of why certain galaxies are moving away from us at a faster than light speed. Won't all galaxies eventually be moving away from each other at a faster than light speed?
Well it's not ordinary motion, so it is not governed by the local speed limit. GR distance growth is not supposed to obey local special rel (SR applies only to flat non-expanding space so it is a local approximation)
But if you don't think of recession/distance growth as motion then your picture is basically RIGHT!
You should realize that MOST of the galaxies that we can see with telescope now today, the distances to them are increasing faster than c. This is normal. So it is not a big deal.
Our local group of a dozen or so galaxies is not destined to be split up. They will continue to orbit each other in a little cluster, and some will merge (like andromeda and us). Perhaps as orbits decay in very very long term all our local group of galaxies will merge.
But other galaxies, not in our local group, will continue to recede until their distances are so great (> 17.3 billion LY) that even the very slow percentage growth rate of 1/173 % per million years means that their distances will be increasing > c. That is already the case with the majority.
Not sure what your point is, or if you are trying to make any special point. Incidental fact: most galaxies we can see have redshift greater than 2. And if a galaxy redshift is > 2 then not only is the distance to it, today, increasing faster than c but also the distance to it WAS increasing faster than c at the time it emitted the light which today is reaching our telescope and allowing us to see it. I think the more precise limit is 1.6, or maybe 1.7. I forget which. Most of what we see in universe was receding faster than c at the time it emitted the light we are now getting from it. This is, of course, not like ordinary motion, just geometry change. and it is normal---the way things are, as a general rule.
Thanks, i'm trying to get my mind around the thought that it is only geometric change. It may take a while.
Considering the pictures describing the warping of space, it seems that space goes out of existence in locations of high gravity (motion is altered, or there seems to be less distance than would have been expected between objects). Where does this space go to, and when there is little gravity, where does this expanding space come from? Sorry, but I know I'm ignoring your explanation of the geometry for the moment.
You can't ignore the geometry explanation, for that is the correct explanation. This "space" does not go anywhere, come from anywhere, or anything like that. Space is not a "thing". What you would call space is simply the distance between objects. If an object moves twice as far away from me as it was before that doesn't mean that more space was created, it only means that the distance between us doubled.
Yes, it requires some different thinking. My prior post suggested one way to think in a new way... 'new distance' is no different than 'new time'. Regarding "new distance", though, the idea of 'little new distances' doesn't fit the model so well. A major aspect of increasing distance, as Marcus has explained, results from the geometric assumptions of homogeneous and isotropic space. That is what leads to geometric distance change and expansion at speeds greater than the speed of light. But such a model does not apply well to 'little new distances' within galaxies, where things have lumpy masses like planets and stars and black holes.
edit: so there is no easy way to extend the 'new distance' idea in a model consistent way....
[Frankly, I don't like a number of 'explanations' we have today, and although they may be absolutely, precisely correct and there is nothing more to know, I personally hope mankind will have something a lot better a 1,000 years from now. How sad if we did not understand more by then. On the other side of this 'coin', how absolutely incredible that Einstein uncovered as much as he did. It's almost unbelievable that we have the insights we do. ]
Naty1 - could it be that the 'new little distances' would only come in when gravity is weak, such as between galaxies, but not within galaxies?
could be possible.
Some think galaxies and galactic clusters don't expand because gravity holds them overpowering much weaker 'expansion'. Others think our FLRW cosmological model of homogeneity and isotropic space doesn't even apply at such 'small scales'.
I'm a bit more accepting of dark energy as a possibility than expressed here by Marcus, since as far as I know accelerated expansion in the early universe, inflation [ the exponential expansion of the very early universe] has as its generally accepted source a negative pressure...a temporary higher level of vacuum energy. So why not some remaining dark energy [negative pressure] as the cosmological constant....as a corollary to spacetime. However wiki also says:
so my conclusion so far is nobody really knows.
While I am on it, Drakkith posts:
and I'd say that likely reflects the majority opinion I have seen in these forums. I prefer to instead think of it as 'something', but who knows what....that's been a subject of other discussions in these forums:
Tomstoer has one post I saved:
also, it curves!
As i understand it, mathematics sometimes only describes relationships and changes but sometimes not the causes of the changes. If that is true, could you apply that reasoning to the cause of the geometric changes? I apologize if this is too philosophical.
Let me add three more observations on why distance growth is not the same as relative speed.
1) Even in SR, in standard coordinates, if object A is going left at .99c and object B is going right at .99c, the distance between them, in this frame, is growing by 1.98c. Despite this, the speed of A relative to B is .99995c. This is very analogous to the cosmology situation in that 'distance growing' is in a common CMB frame (coordinates, really), not in coordinates adapted to galaxy A or galaxy B.
2) The Milne coordinates, which model an expanding universe in flat Minkowski space, show growth of distance (recession speed) of much greater than 2c. While the Milne universe is physically implausible (and contradicted by observation), this shows how recession speed is a function of coordinates, not an invariant fact. Ultimately, the only difference between the Milne 'universe' and Minkowski coordinates is the choice of foliation, which can never be physically significant.
3) In GR there is no unique way to define relative velocity of distant objects. What makes it possible in SR is path independence of parallel transport. However, despite lack of uniqueness, one may ask what happens if you compare the relative speed of distant galaxies by parallel transport along various paths. The result is that while the 'relative speed' varies with path, it is always less than c.
I really think cosmologists create much confusion by calling a growth of separation, which is routinely greater than c, a recession velocity, which leads to the perception that it is like a relative velocity.
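Point 1) above is easy to verify with the special-relativistic velocity-addition formula; a small sketch (function name my own):

```python
# Sketch verifying point 1): the coordinate separation rate can exceed c,
# while the relativistic relative speed never does.

def relative_speed(u, v):
    """Relativistic velocity addition; speeds given as fractions of c."""
    return (u + v) / (1 + u * v)

# Two objects each moving at 0.99c in opposite directions:
print(round(0.99 + 0.99, 2))                 # separation rate: 1.98 c
print(round(relative_speed(0.99, 0.99), 5))  # relative speed:  0.99995 c
```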
"in what sense do you think of space as a thing that comes into existence?"
I'm going to try to give a thought that will be pummeled but i'm here to learn.
Could space be like a blob of something that can be stretched or compressed? When it is stretched, could space from the other dimensions (Calabi-Yau) move to fill the less dense space, and vice versa, could space from our dimensions be compressed into the Calabi-Yau spaces? Could gravitons be the mediator of this stretching and compressing?
Space by itself has no substance; simply put, it is geometric volume. At one time the supposed substance of space was called the ether; this idea is no longer supported. The changes in volume are directly related to the energy densities (pressure relations) of baryonic matter, dark matter, and dark energy, with a small contribution from relativistic and non-relativistic radiation. The graviton itself is a mediator boson of gravity, but it has never been confirmed to exist.
Contrary to popular belief, science does not deal in causes, in the everyday sense of that term. Physicists sometimes talk about 'causality' but that usually means something quite different from the everyday meaning. What science does is find mathematical laws that describe how systems evolve. Einstein's GR equation is an example of that. Einstein didn't say anything about 'why' spacetime obeys that equation. He just hypothesised that it does, then did some calculations that enabled us to experimentally test whether it does, and it did.
If we ever find out the 'reason' for the GR equation, it'll probably be just another bunch of lower level equations, that have no apparent 'reason' for them either.
Science is a pragmatic discipline.
Genus: Vespula or Dolichovespula
Yellowjacket or Yellow jacket is the common name in North America for predatory social wasps of the genera Vespula and Dolichovespula. Members of these genera are known simply as "wasps" in other English-speaking countries. Most of these are black and yellow like the eastern yellowjacket Vespula maculifrons and the aerial yellowjacket Dolichovespula arenaria; some are black and white like the bald-faced hornet, Dolichovespula maculata. Others may have the abdomen background color red instead of black. They can be identified by their distinctive markings, their occurrence only in colonies, and a characteristic, rapid, side-to-side flight pattern prior to landing. All females are capable of stinging. Yellowjackets are important predators of pest insects.
Yellowjackets are sometimes mistakenly called "bees" (as in "meat bees"), given that they are similar in size and sting, but yellowjackets are actually wasps. They may be confused with other wasps, such as hornets and paper wasps. Polistes dominula, a species of paper wasp, is very frequently misidentified as a yellowjacket. A typical yellowjacket worker is about 12 mm (0.5 in) long, with alternating bands on the abdomen; the queen is larger, about 19 mm (0.75 in) long (the different patterns on their abdomens help separate various species). Workers are sometimes confused with honey bees, especially when flying in and out of their nests. Yellowjackets, in contrast to honey bees, have yellow or white markings, are not covered with tan-brown dense hair on their bodies, do not carry pollen, and do not have the flattened hairy hind legs used to carry it.
These species have lance-like stingers with small barbs, and typically sting repeatedly, though occasionally a stinger becomes lodged and pulls free of the wasp's body; the venom, like most bee and wasp venoms, is primarily only dangerous to humans who are allergic or are stung many times. All species have yellow or white on their faces. Their mouthparts are well-developed with strong mandibles for capturing and chewing insects, with probosces for sucking nectar, fruit, and other juices. Yellowjackets build nests in trees, shrubs, or in protected places such as inside man-made structures, or in soil cavities, tree stumps, mouse burrows, etc. They build them from wood fiber they chew into a paper-like pulp. Many other insects exhibit protective mimicry of aggressive, stinging yellowjackets; in addition to numerous bees and wasps (Müllerian mimicry), the list includes some flies, moths, and beetles (Batesian mimicry).
Life cycle and habits
Yellowjackets are social hunters living in colonies containing workers, queens, and males (drones). Colonies are annual with only inseminated queens overwintering. Fertilized queens are found in protected places such as in hollow logs, in stumps, under bark, in leaf litter, in soil cavities, and in man-made structures. Queens emerge during the warm days of late spring or early summer, select a nest site, and build a small paper nest in which they lay eggs. After eggs hatch from the 30 to 50 brood cells, the queen feeds the young larvae for about 18 to 20 days. Larvae pupate, then emerge later as small, infertile females called workers. Workers in the colony take over caring for the larvae, feeding them with chewed up meat or fruit. By midsummer, the first adult workers emerge and assume the tasks of nest expansion, foraging for food, care of the queen and larvae, and colony defense.
From this time until her death in the autumn, the queen remains inside the nest, laying eggs. The colony then expands rapidly, reaching a maximum size of 4000 to 5000 workers and a nest of 10,000 to 15,000 cells in late summer. (This is true of most species in most areas; however, Vespula squamosa, in the southern part of its range, may build much larger perennial colonies populated by scores of queens, tens of thousands of workers, and hundreds of thousands of cells.) At peak size, reproductive cells are built with new males and queens produced. Adult reproductives remain in the nest fed by the workers. New queens build up fat reserves to overwinter. Adult reproductives leave the parent colony to mate. After mating, males quickly die, while fertilized queens seek protected places to overwinter. Parent colony workers dwindle, usually leaving the nest to die, as does the foundress queen. Abandoned nests rapidly decompose and disintegrate during the winter. They can persist as long as they are kept dry, but are rarely used again. In the spring, the cycle is repeated; weather in the spring is the most important factor in colony establishment.
The diet of the adult yellowjacket consists primarily of items rich in sugars and carbohydrates, such as fruits, flower nectar, and tree sap. Larvae feed on proteins derived from insects, meats, and fish, which are collected by the adults, which chew and condition them before feeding them to the larvae. Many of the insects collected by the adults are considered pest species, making the yellowjacket beneficial to agriculture. Larvae, in return, secrete a sugar material to be eaten by the adults; this exchange is a form of trophallaxis. In late summer, foraging workers pursue other food sources from meats to ripe fruits, or scavenge human garbage, sodas, picnics, etc., as additional sugar is needed to foster the next generation's queens.
- European yellowjackets, the German wasp (Vespula germanica), and the common wasp (Vespula vulgaris) were originally native to Europe, but are now established in North America, southern Africa, New Zealand, and eastern Australia
- The eastern yellowjacket (Vespula maculifrons), western yellowjacket (Vespula pensylvanica), and prairie yellowjacket (Vespula atropilosa) are native to North America.
- Southern yellowjacket (Vespula squamosa)
- Bald-faced hornets (Dolichovespula maculata) belong among the yellowjackets rather than the true hornets. They are not usually called "yellowjackets" because of their ivory-on-black coloration.
- Aerial yellowjacket (Dolichovespula arenaria)
- Tree wasp (Dolichovespula sylvestris)
Dolichovespula species such as the aerial yellowjacket, D. arenaria, and the bald-faced hornet, tend to create exposed aerial nests. This feature is shared with some true hornets, which has led to some naming confusion.
Vespula species, in contrast, build concealed nests, usually underground.
Yellowjacket nests usually last for only one season, dying off in winter. The nest is started by a single queen, called the "foundress". Typically, a nest can reach the size of a basketball by the end of a season. In parts of Australia, New Zealand, the Pacific Islands, and southwestern coastal areas of the United States, the winters are mild enough to allow nest overwintering. Nests that survive multiple seasons become massive and often possess multiple egg-laying queens.
In the United States
The German yellowjacket (V. germanica) first appeared in Ohio in 1975, and has now become the dominant species over the eastern yellowjacket. It is bold and aggressive, and can sting repeatedly and painfully. It will mark aggressors and pursue them. It is often confused with Polistes dominula, an invasive species in the United States, due to their very similar pattern. The German yellowjacket builds its nests in cavities—not necessarily underground—with the peak worker population in temperate areas between 1000 and 3000 individuals between May and August. Each colony produces several thousand new reproductives after this point through November. The eastern yellowjacket builds its nests underground, also with the peak worker population between 1000 and 3000 individuals, similar to the German yellowjacket. Nests are built entirely of wood fiber and are completely enclosed except for a small entrance at the bottom. The color of the paper is highly dependent on the source of the wood fibers used. The nests contain multiple, horizontal tiers of combs within. Larvae hang within the combs.
In the southeastern United States, where southern yellowjacket (Vespula squamosa) nests may persist through the winter, colony sizes of this species may reach 100,000 adult wasps. The same kind of nest expansion has occurred in Hawaii with the invasive western yellowjacket (V. pensylvanica).
In popular culture
The yellowjacket's most visible place in American popular culture is as a mascot, most famously with the Georgia Tech Yellow Jackets, represented by the mascot Buzz. Other college and university examples include the American International College, Baldwin-Wallace University, Black Hills State University, Cedarville University, Defiance College, Graceland University, Howard Payne University, LeTourneau University, Montana State University Billings, Randolph-Macon College, University of Rochester, University of Wisconsin–Superior, West Virginia State University, and Waynesburg University.
Though not specified by the team, the mascot of the Columbus Blue Jackets, named "Stinger," closely resembles a yellowjacket. In the years since its original yellow incarnation, the mascot's color has been changed to a light green, seemingly combining the real insect's yellow and the team's blue.
Note that yellowjacket is often spelled as two words (yellow jacket) in popular culture and even in some dictionaries. The proper entomological spelling, according to the Entomological Society of America, is as a single word (yellowjacket).
NASA Telescope Reveals Largest Batch of Earth-Size, Habitable-Zone Planets Around a Single Star
February 22, 2017 / Written by: NASA
This illustration shows the possible surface of TRAPPIST-1f, one of the newly discovered planets in the TRAPPIST-1 system. Scientists using the Spitzer Space Telescope and ground-based telescopes have discovered that there are seven Earth-size planets in the system. Credits: NASA/JPL-Caltech
NASA’s Spitzer Space Telescope has revealed the first known system of seven Earth-size planets around a single star. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water.
The discovery sets a new record for greatest number of habitable-zone planets found around a single star outside our solar system. All of these seven planets could have liquid water – key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone.
“This discovery could be a significant piece in the puzzle of finding habitable environments, places that are conducive to life,” said Thomas Zurbuchen, associate administrator of the agency’s Science Mission Directorate in Washington. “Answering the question ‘are we alone’ is a top science priority and finding so many planets like these for the first time in the habitable zone is a remarkable step forward toward that goal.”
Seven Earth-sized planets have been observed by NASA’s Spitzer Space Telescope around a tiny, nearby, ultra-cool dwarf star called TRAPPIST-1. Three of these planets are firmly in the habitable zone. Credits: NASA
At about 40 light-years (235 trillion miles) from Earth, the system of planets is relatively close to us, in the constellation Aquarius. Because they are located outside of our solar system, these planets are scientifically known as exoplanets.
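The distance figure quoted above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the standard conversion of roughly 5.879 trillion miles per light-year:

```python
# Check the "about 40 light-years (235 trillion miles)" figure.
# Assumes the usual conversion factor of ~5.879e12 miles per light-year.
MILES_PER_LIGHT_YEAR = 5.879e12

distance_ly = 40
distance_miles = distance_ly * MILES_PER_LIGHT_YEAR

# Express in trillions of miles, rounded to the nearest trillion.
trillions = round(distance_miles / 1e12)
print(trillions)  # 235, matching the figure quoted in the article
```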
This exoplanet system is called TRAPPIST-1, named for The Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile. In May 2016, researchers using TRAPPIST announced they had discovered three planets in the system. Assisted by several ground-based telescopes, including the European Southern Observatory’s Very Large Telescope, Spitzer confirmed the existence of two of these planets and discovered five additional ones, increasing the number of known planets in the system to seven.
The new results were published Wednesday in the journal Nature, and announced at a news briefing at NASA Headquarters in Washington.
Using Spitzer data, the team precisely measured the sizes of the seven planets and developed first estimates of the masses of six of them, allowing their density to be estimated.
Based on their densities, all of the TRAPPIST-1 planets are likely to be rocky. Further observations will not only help determine whether they are rich in water, but also possibly reveal whether any could have liquid water on their surfaces. The mass of the seventh and farthest exoplanet has not yet been estimated – scientists believe it could be an icy, “snowball-like” world, but further observations are needed.
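The density estimates mentioned above follow directly from mass and radius. A hedged sketch of the calculation, using Earth units for convenience (the input values below are illustrative, not the published TRAPPIST-1 measurements):

```python
import math

# Bulk density from mass and radius. The Earth constants are standard
# reference values; planet inputs here are illustrative only.
EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

def bulk_density(mass_earths, radius_earths):
    """Return mean density in kg/m^3 for a planet given in Earth units."""
    mass = mass_earths * EARTH_MASS_KG
    radius = radius_earths * EARTH_RADIUS_M
    volume = (4.0 / 3.0) * math.pi * radius**3
    return mass / volume

# A planet with Earth's mass and radius recovers Earth's mean density
# of roughly 5,500 kg/m^3; rocky planets cluster near such values.
print(round(bulk_density(1.0, 1.0)))
```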
“The seven wonders of TRAPPIST-1 are the first Earth-size planets that have been found orbiting this kind of star,” said Michael Gillon, lead author of the paper and the principal investigator of the TRAPPIST exoplanet survey at the University of Liege, Belgium. “It is also the best target yet for studying the atmospheres of potentially habitable, Earth-size worlds.”
A primitive spider with a scorpion-like tail has been found in amber dating back 100 million years.
The arthropod has been named Chimerarachne after the Chimera, a monster from Greek mythology made up of the parts of more than one creature.
It is believed to have scurried around the undergrowth of the rainforests of Burma during the age of the dinosaurs.
Upon inspection, it was found that the creature’s tail was longer than its body – meaning it was used as a sensory device to seek out prey or escape predators.
Called a ‘telson’, the tail is seen today in scorpions – but it has never been known before in a spider.
The newly discovered species also had fangs – just like today’s arachnids – through which it would inject venom into insects it trapped in pincer like claws.
Four fossils were so perfectly preserved scientists could also identify specialised male sexual organs called pedipalps.
Similar to a tiny hypodermic needle they are used to transfer sperm to females.
The spider itself is tiny – about 2.5 millimeters in body length – excluding the nearly 3 millimeter-long tail.
Palaeontologist Professor Paul Selden, of Kansas University, said: ‘Any sort of flagelliform appendage tends to be like an antenna.
‘It’s for sensing the environment. Animals that have a long whippy tail tend to have it for sensory purposes.’
The extraordinary finding is described in Nature Ecology & Evolution by an international team which included earth scientist Dr Russell Garwood of Manchester University.
It’s the latest in a series of Cretaceous-period fossils from the amber deposits in northern Myanmar’s Hukawng Valley.
Professor Selden said: ‘We can only speculate that, because it was trapped in amber, we assume it was living on or around tree trunks.
‘Amber is fossilised resin, so for a spider to have become trapped, it may well have lived under bark or in the moss at the foot of a tree.’
While the tailed spider was capable of producing silk due to its spinnerets it was unlikely to have constructed webs to trap bugs like many modern spiders.
It was added that the spider’s remote habitat means it is possible that tailed descendants may still be alive in Myanmar’s back country to this day.
Professor Selden added: ‘It makes us wonder if these may still be alive today.
‘We haven’t found them, but some of these forests aren’t that well-studied, and it’s only a tiny creature.’
The giant panda’s immune system is fairly diverse, genetically speaking, suggesting the endangered species may be more resilient to environmental change than previously thought, scientists say.
Biologists estimate that only about 1,500 giant pandas live in the wild today, confined to six isolated mountain ranges in south-central China. Panda fossil remains suggest the charismatic bears once roamed through parts of Burma and northern Vietnam as well, but have since suffered from environmental change and habitat fragmentation, and have been listed as endangered by the International Union for the Conservation of Nature since 1990.
Researchers based at Zhejiang University in China who were interested in determining the genetic diversity of the dwindling wild population recently collected genetic material from the blood, skin, or fecal samples of 218 wild pandas across all six isolated mountain ranges the bears now occupy.
The team specifically analyzed the bears’ major histocompatibility complex (MHC) — the part of the genome that encodes key components of the immune system — because these are known to be adaptive loci, meaning that different populations adapt to carry different MHC variants. Other parts of the genome are essentially the same in all individuals of a given species and would therefore not be good indicators of genetic diversity.
Animal populations need genetic diversity because otherwise, a single threat to the population — such as the introduction of a certain pathogen — could theoretically wipe out the entire population, if all individuals were equally prone to it.
“The assumption is that a decrease in genetic variation and a lack of exchange between isolated populations increase the likelihood of extinction by reducing the population’s ability to adapt to changing environments,” the team writes in a report that details their findings Oct. 21 in the journal BioMed Central.
The team says the giant panda shows more diversity than several other endangered species, including the Bengal tiger and Namibian cheetah, but less diversity than the more stable brown bear.
Paul Hohenlohe, a biologist at the University of Idaho who was not involved in the study, said that this diversity suggests pandas did not experience the same type of population “bottleneck” that biologists think cheetahs experienced at some point in that animal’s history; that bottleneck caused cheetahs to become more genetically uniform than many other wild animals.
The new genetic data can be used to develop captive breeding programs that help perpetuate diversity, Hohenlohe said.
“If you need to capture 10 pandas for a captive breeding program, then you choose those 10 to encompass the most diversity,” Hohenlohe told LiveScience. “You can do that by getting them from multiple populations, or one population that has the most diversity.”
Management groups can also use the new genetic data to prioritize habitat restoration projects to revolve around genetically diverse populations.
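Hohenlohe's suggestion of choosing a small founding group that "encompasses the most diversity" can be framed as a coverage problem: repeatedly pick the individual that adds the most alleles not yet represented. A minimal greedy sketch over invented data (the panda IDs and MHC allele sets below are hypothetical, not from the study):

```python
# Greedy selection of individuals that together cover the most distinct
# MHC alleles. All names and allele sets are invented for illustration.
pandas_alleles = {
    "P1": {"A1", "A2", "B1"},
    "P2": {"A1", "B2"},
    "P3": {"A3", "B1", "B3"},
    "P4": {"A2", "B2"},
}

def pick_founders(candidates, k):
    """Pick k individuals, each time adding the one with the most new alleles."""
    chosen, covered = [], set()
    pool = dict(candidates)
    for _ in range(k):
        best = max(pool, key=lambda name: len(pool[name] - covered))
        chosen.append(best)
        covered |= pool.pop(best)
    return chosen, covered

founders, alleles = pick_founders(pandas_alleles, 2)
print(founders, sorted(alleles))  # ['P1', 'P3'] cover 5 of the 6 alleles
```

The same idea scales to choosing founders from multiple populations: candidates from genetically distinct populations tend to contribute more novel alleles per pick.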
Follow Laura Poppick on Twitter. Follow LiveScience on Twitter, Facebook and Google+. Original article on LiveScience.
Copyright 2013 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Aluminum bromide is a binary inorganic compound, the salt of aluminum and hydrobromic acid. In appearance it forms colorless monoclinic crystals. In the solid and liquid states it exists as a dimer. Its molar mass is 266.69 g/mol.
Anhydrous aluminum bromide is a colorless crystalline substance that melts at 97.5 °C and boils at 255 °C.
The substance is highly hygroscopic: in open air it deliquesces, readily absorbing moisture to form the hexahydrate AlBr3•6H2O. It is readily soluble in water, alcohol, carbon disulfide, and acetone. The reaction of aluminum bromide with water is extremely violent, releasing a large amount of heat and potentially ejecting the reaction mass. It fumes in open air.
Commercial use of aluminum bromide is currently relatively minor. It is a main component of xylene-based electrolytes for the electrodeposition of aluminum coatings. Anhydrous aluminum bromide is used in organic synthesis, in particular in the Friedel-Crafts alkylation reaction, by analogy with aluminum chloride.
On contact with the skin, aluminum bromide can cause burns. The compound is moderately toxic.
Beyond these uses, aluminum bromide can serve as a catalyst in the isomerization of bromoalkanes and as a brominating agent, for example in a reaction with chloroform.
Formula for Aluminum Bromide
The structure of the aluminum bromide dimer consists of two tetrahedra sharing an edge, with an aluminum atom at the center of each, covalently bonded to the surrounding bromine atoms. The coordination number of aluminum in the molecule is 4.
In the solid and liquid phases, aluminum bromide exists as the dimer Al2Br6, which partially dissociates into AlBr3.
Its empirical formula (Hill's system) is the following: AlBr3.
Its structural formula is: Al2Br6
Anhydrous aluminum bromide is obtained by direct reaction of the elements (Al and Br2) on heating:
2Al + 3Br2 = Al2Br6
An aqueous solution can be obtained by reacting aluminum shavings with hydrobromic acid:
2Al + 6HBr = 2AlBr3 + 3H2 ↑
Properties for Aluminum Bromide
Molar mass 266.69 g/mol
Melting Point 97.8 °C (208.0 °F; 370.9 K)
Boiling Point 265 °C (509 °F; 538 K)
Density 3.205 g/cm3
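The tabulated molar mass can be reproduced from standard atomic masses. A quick check, assuming the usual reference values for Al and Br:

```python
# Recompute the molar mass of AlBr3 from standard atomic masses (g/mol).
ATOMIC_MASS = {"Al": 26.982, "Br": 79.904}

molar_mass_albr3 = ATOMIC_MASS["Al"] + 3 * ATOMIC_MASS["Br"]
print(round(molar_mass_albr3, 2))  # 266.69, matching the properties table

# The dimer found in the solid and liquid phases is simply twice that.
molar_mass_al2br6 = 2 * molar_mass_albr3
print(round(molar_mass_al2br6, 2))  # 533.39
```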
Species Detail - Common Garden Slug (Arion (Kobeltia) distinctus) - Species information displayed is based on the dataset "All Ireland Non-Marine Molluscan Database".
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
Arion (Kobeltia) distinctus
Common Garden Slug
J. Mabille, 1868
1 January (recorded in 1975)
31 December (recorded in 1973)
Conchological Society of Great Britain and Ireland, All Ireland Non-Marine Molluscan Database, National Biodiversity Data Centre, Ireland, Common Garden Slug (Arion (Kobeltia) distinctus), accessed 19 July 2018, <https://maps.biodiversityireland.ie/Dataset/1/Species/123808>
Consider a homogeneous medium, containing only one diffraction point (Fig. 14.1 a). In this case, the time-section (record surface) is a hyperbola (Fig. 14.1 b) whereas the geological inhomogeneity (a diffractor) is a single point. It is evident that the reflection or diffraction events distributed on a hyperbola shown in Fig. 14.1 b are to be brought to their actual (point) position. This step is taken care of in the process of ‘migration’.
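The hyperbolic shape of the diffraction event can be verified numerically. A sketch assuming a constant-velocity medium, zero-offset recording, and two-way travel time (the velocity and depth values are arbitrary examples, not taken from the figure):

```python
import math

# Two-way travel time from a surface point x to a diffractor at (x0, z)
# in a constant-velocity medium: t(x) = (2 / v) * sqrt(z**2 + (x - x0)**2)
v, x0, z = 2000.0, 500.0, 800.0   # m/s, m, m -- arbitrary example values

def travel_time(x):
    return 2.0 * math.sqrt(z**2 + (x - x0)**2) / v

# Each (x, t) pair satisfies (v*t/2)**2 - (x - x0)**2 = z**2, i.e. the
# record surface is a hyperbola with its apex directly over the diffractor.
for x in (0.0, 250.0, 500.0, 900.0):
    t = travel_time(x)
    assert abs((v * t / 2.0) ** 2 - (x - x0) ** 2 - z**2) < 1e-6

print(round(travel_time(x0), 4))  # minimum (apex) time, above the point
```

Migration collapses this hyperbola back to its apex, restoring the diffractor to its true point position.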
Keywords: Seismic Source; Downward Continuation; Depth Migration; Record Section; Record Surface
Gene variants influence maritime pine survival under climate stress
Data from only a small number of gene variants can predict which maritime pine trees are most vulnerable to climate change, scientists report in the March issue of GENETICS. The results will improve computer models designed to forecast where forests will grow as the climate changes, and promises to help forestry managers decide where to focus reforestation efforts. The results will also guide the choice of tree stocks.
Santiago C. González-Martínez.
Maritime pine forest in Serra Calderona, eastern Spain. Typical Mediterranean forests as the one pictured here are under severe risk due to summer droughts and wildfire. It is expected that extinction risk of this valuable ecosystem will increase due to climate change.
The maritime pine (Pinus pinaster) grows widely in southwestern Europe and parts of northern Africa. But the tree's important economic value and ecological roles in the region may be at risk as the changing climate threatens the more vulnerable forests and the productivity of commercial plantations.
To predict which regions will sustain pine forests in the future, researchers and managers rely on computer models. But these forecasts don't take into account two major factors that influence a forest's fate: genetics and evolution. Genetic differences between tree populations mean that forests vary in how well they cope with warmer, drier conditions. Ongoing evolution of trees also influences the prevalence of these genetic differences; for example, trees with gene variants allowing them to withstand higher temperatures will become increasingly common as the climate changes.
"These genetic effects are not included in forest range shift models, but we know they can completely change the resulting predictions. Our goal was to identify such effects in a way that can be readily incorporated into the forecasts," said study leader Santiago González-Martínez, from the Forest Research Centre of Spain's Institute for Agricultural Research (CIFOR-INIA).
To find genetic variants that affect the species’ fitness in different climate conditions, maritime pine researchers from around the world pooled their expertise and the results of previous research, yielding a list of more than 300 variants in 200 candidate genes. Creating a shortlist of targets is considerably faster and more economical than searching the entire genome of the maritime pine, which is about nine times larger than the human genome.
From this list, the team tested whether any of the candidates were more common in regions that shared similar climates. Such geographic patterns can be the result of natural selection and point to gene variants that influence tree survival and reproduction according to climate. By testing the frequency of each variant at 36 locations in Portugal, Spain, France, Morocco, and Tunisia, the researchers found 18 variants that showed correlations with the local climate. These variants affected genes involved in many different biological processes, including growth and response to heat stress.
The researchers then looked for evidence that these variants are important for the trees’ fitness by planting seedlings from 19 of the locations together in a dry part of Spain, at the extreme end of the species' climatic range. This allowed the team to compare how well genetically different trees would survive under similar conditions. After five years, the seedlings carrying gene variants predicted to be beneficial in the local climate indeed tended to have higher survival rates.
These results demonstrate the feasibility of this relatively fast approach of finding and confirming genetic variants associated with climate. "Now that we have shown that the method works well, we are planning similar experiments on a bigger scale, with more test sites, looking at more genes, and different traits. For example, the single biggest climate change threat to pine forests is the increased frequency of wildfires, so we're searching for variants that affect fire tolerance," said González-Martínez.
"Good decisions require good data, and this collaborative work shows how crucial genetic data can be for managing biodiversity and commercial forestry amid a changing climate," said GENETICS Editor-in-Chief Mark Johnston.
Molecular Proxies for Climate Maladaptation in a Long-Lived Tree (Pinus pinaster Aiton, Pinaceae)
Juan-Pablo Jaramillo-Correa, Isabel Rodríguez-Quilón, Delphine Grivet, Camille Lepoittevin, Federico Sebastiani, Myriam Heuertz, Pauline H. Garnier-Géré, Ricardo Alía, Christophe Plomion, Giovanni G. Vendramin, and Santiago C. González-Martínez
GENETICS March 2015 199:793-807 doi:10.1534/genetics.114.173252
The study was funded by grants from the European Commission (FP6 NoE EvolTree and FP7 NovelTree Breeding), the Spanish National Research Plan (ClonaPin, RTA2010-00120-C02-01; VaMPiro, CGL2008-05289-C02-01/02; AdapCon, CGL2011-30182-C02-01; and AFFLORA, CGL2012-40129-C02-02), the Italian Science Ministry (MIUR project ‘Biodiversitalia’, RBAP10A2T4), and the ERA-Net BiodivERsA (LinkTree project, EUI2008-03713), which included the Spanish Ministry of Economy and Competitiveness as national funder (part of the 2008 BiodivERsA call for research proposals).
Institutions involved in research:
Forest Research Centre, Instituto Nacional de Investigación y Tecnología Agraria y Alimentaria, Spain
Universidad Nacional Autónoma de México
Institut National de la Recherche Agronomique
University of Bordeaux
Institute of Biosciences and Bioresources, National Research Council, Italy
University of Lausanne
Cristy Gelling | newswise
Silicon Microwires Could Have a Sunny Future
New solar cells show gains in efficiency.
The race for inexpensive, highly efficient solar cells may have gained another contender in the form of silicon microwires. Efforts to develop ultra-thin wires that convert sunlight into electricity are not new to the solar power field, but a new method for growing the wires has roughly doubled their conversion efficiency and may hold the key for even larger gains.
“All wires thus far have had 1 or 2 percent efficiency [at the array level] with fundamental questions about whether they could ever go higher,” says Nathan Lewis, a chemist at Caltech who coauthored the study, which appears in Science.“We’ve demonstrated 3 percent efficiency and shown that there is no fundamental reason they can’t perform at over 10 percent.”
Silicon nanowires, or in this case slightly larger-diameter microwires, are typically grown from a silicon substrate with the help of tiny gold droplets. Under high temperatures, a single wire will quickly sprout from each droplet like a blade of grass. Gold is an excellent catalyst for wire growth, but it also introduces impurities that are generally believed to inhibit electron transport within the wires, reducing their overall efficiency.
Using copper instead of gold as the catalyst, Lewis and colleagues achieved roughly twice the efficiency of prior efforts in an array of wires. They believe the results are due to higher silicon purity and increased electron transport capacity compared to prior efforts that relied on gold catalysts.
In what they are calling a proof of concept study, the researchers kept the “packing fraction” of their array at 4 percent. Packing is a measure of how much of the surface of an array has wires protruding from it. A packing fraction of 4 percent means that 96 percent of the surface of the array has no wires and therefore is incapable of capturing sunlight and converting it into electricity. Lewis says that simply increasing the packing fraction to 15 to 20 percent will result in a fourfold increase in efficiency.
Some doubt it will be that simple. “If it’s that easy, why haven’t they done it?” asks Ray LaPierre a professor in the engineering physics department at McMaster University in Ontario. LaPierre says increasing the packing fraction is technically feasible through a technique known as “photolithography,” but this would likely be prohibitively expensive for commercial solar cell production.
Another potential problem that LaPierre thinks may inhibit higher efficiencies is the electron transport capacity of the wires. When photons of sunlight are captured by a wire, they produces electrons that must then escape from the material to produce an electric current. Electrons, however, are easily trapped along the surface of the wires, reducing their overall efficiency. Thin-film solar cells have to overcome this challenge as well, but the problem is especially acute in thin wire cells because they have a much larger surface area per volume than planar films.
The wires that Lewis and colleagues grew, however, are 1.6 micrometers in diameter, three orders of magnitude thicker than typical solar cell nanowires. The thicker microwires have a lower surface area to volume ratio that, according to modeling conducted by the group, boosts the electron transport capacity of the wires.
Matthew Beard, a senior scientist at the National Renewable Energy Laboratory in Golden, CO, says the relatively high surface area of the wires could be a plus for converting solar power into hydrogen fuel. The high surface area and low cost of raw materials of the silicon microwires means they could be used directly as electrodes to hydrolyze water into hydrogen.
Still, Beard says microwire solar technology will have a tough time competing as a source of power against currently available thin films that are relatively inexpensive and already achieve 10 to 12 percent efficiencies. But Beard adds that silicon, the raw material for the wires, is more readily available than metals such as cadmium and telluride that make up today’s most efficient thin films. “This technology has a long way to go, but potentially can compete, as silicon is more abundant than those materials and potentially cheaper,” he says.
Couldn't make it to EmTech Next to meet experts in AI, Robotics and the Economy?Go behind the scenes and check out our video | <urn:uuid:dae0b4f0-f7f0-40d5-8c13-af7b46b9e7a3> | 3.53125 | 887 | Truncated | Science & Tech. | 31.808016 | 95,541,618 |
– the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests operating under the UNECE Convention on Long-range Transboundary Air Pollution (CLRTAP) –
ICP Forests was launched in 1985 under the Convention on Long-range Transboundary Air Pollution (CLRTAP) of the United Nations Economic Commission for Europe (UNECE) in response to wide public and political concern over extensive forest damage that had been observed in Europe in the beginning of the 1980s. ICP Forests monitors forest condition in Europe at two monitoring intensity levels: The Level I monitoring is based on around 6000 observation plots on a systematic transnational grid of 16 x 16 km throughout Europe and beyond to gain insight into the geographic and temporal variations in forest condition while the Level II intensive monitoring comprises around 500 plots in selected forest ecosystems with the aim to clarify cause-effect relationships. At present 42 countries participate in ICP Forests.
The Norwegian Institute for Water Research (NIVA) in Oslo is looking for a researcher in catchment biogeochemistry with a strong interest in water chemistry in semi-natural boreal and temperate catchments, and an understanding of long-term biogeochemical change, including acid-base chemistry, related to air pollution and climate.
Posted by Alexa Michel on June 27, 2018 at 11:32
The draft minutes from the ICP Forests Task Force meeting in Riga, 24-25 May 2018, are now available on the ICP Forests website. Their final version will be adopted at the next Task Force meeting in 2019.
Posted by Alexa Michel on June 25, 2018 at 11:12
We are very happy and proud to announce that yesterday evening the study on "Environment and host as large-scale controls of ectomycorrhizal fungi" by Sietse van der Linde and others (2018) was published in Nature. Please have a look at https://www.nature.com/articles/s41586-018-0189-9.
This important publication is based on a close cooperation between Imperial College London and ICP Forests partners, bringing together the methods of specialists on fungi and mycorrhizas with the data and experience of transcontinental air pollution monitoring. As a major result, the study recommends a downward adjustment of critical loads for eutrophying nitrogen for forests in order to keep the balance between mycorrhizal fungi and forest trees…
Posted by Anne-Katrin Prescher on June 7, 2018 at 18:00
Diversity (ISSN 1424-2818; CODEN: DIVEC6) is an open access journal on the science of biodiversity published quarterly online by MDPI.
They encourage researchers to send their manuscript on the following topics:
Posted by Alexa Michel on June 5, 2018 at 9:01
Dear colleagues, it is planned to purchase a new mill, a Kjeldahl steam distillation system for N analysis, and a combined system to analyze pH, EC and alkalinity in water samples for my lab. 1. There are…
Started by Rabia GUNHAN. Last reply by Robert Menegotto Apr 30.
I would like to ask about IPM and how it benefits us in protecting the environment.
Started by Mohamed Dadamouny Dec 19, 2016.
Dear all, I would like to announce that we will organise a workshop on 8-9 October 2015 at the Royal Botanic Gardens Kew in London, on the topic of environmental change and forest ectomycorrhizal…
Started by Sietse van der Linde Mar 30, 2015.
Partition (number theory)
In number theory and combinatorics, a partition of a positive integer n, also called an integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered the same partition. (If order matters, the sum becomes a composition.) For example, 4 can be partitioned in five distinct ways:
- 4
- 3 + 1
- 2 + 2
- 2 + 1 + 1
- 1 + 1 + 1 + 1
The order-dependent composition 1 + 3 is the same partition as 3 + 1, while the two distinct compositions 1 + 2 + 1 and 1 + 1 + 2 represent the same partition 2 + 1 + 1.
A summand in a partition is also called a part. The number of partitions of n is given by the partition function p(n). So p(4) = 5. The notation λ ⊢ n means that λ is a partition of n.
Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials, the symmetric group and in group representation theory in general.
- 1 Examples
- 2 Representations of partitions
- 3 Partition function
- 4 Restricted partitions
- 5 Rank and Durfee square
- 6 Young's lattice
- 7 See also
- 8 Notes
- 9 References
- 10 External links
Examples
The seven partitions of 5 are:
- 5
- 4 + 1
- 3 + 2
- 3 + 1 + 1
- 2 + 2 + 1
- 2 + 1 + 1 + 1
- 1 + 1 + 1 + 1 + 1
In some sources partitions are treated as the sequence of summands, rather than as an expression with plus signs. For example, the partition 2 + 2 + 1 might instead be written as the tuple (2, 2, 1) or in the even more compact form (2^2, 1) where the superscript indicates the number of repetitions of a term.
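The tuple form lends itself to a direct enumeration: choose the largest part first, then recursively partition the remainder with parts no larger than it. A minimal Python sketch (the function name `partitions` is ours, not from the article):

```python
def partitions(n, largest=None):
    """Yield every partition of n as a non-increasing tuple of parts."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()          # the unique (empty) partition of 0
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

print(list(partitions(4)))
# [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
print(len(list(partitions(5))))   # 7
```

The output for n = 4 reproduces the five partitions listed above, and the count for n = 5 matches p(5) = 7.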
Representations of partitions
There are two common diagrammatic methods to represent partitions: as Ferrers diagrams, named after Norman Macleod Ferrers, and as Young diagrams, named after the British mathematician Alfred Young. Both have several possible conventions; here, we use English notation, with diagrams aligned in the upper-left corner.
The partition 6 + 4 + 3 + 1 of the positive number 14 can be represented by the following diagram:

o o o o o o
o o o o
o o o
o

The 14 circles are lined up in 4 rows, each having the size of a part of the partition. The diagrams for the 5 partitions of the number 4 are listed below:
[Ferrers diagrams for the partitions 4, 3 + 1, 2 + 2, 2 + 1 + 1, and 1 + 1 + 1 + 1]
An alternative visual representation of an integer partition is its Young diagram. Rather than representing a partition with dots, as in the Ferrers diagram, the Young diagram uses boxes or squares. Thus, the Young diagram for the partition 5 + 4 + 1 is

□ □ □ □ □
□ □ □ □
□

while the Ferrers diagram for the same partition is

o o o o o
o o o o
o
While this seemingly trivial variation doesn't appear worthy of separate mention, Young diagrams turn out to be extremely useful in the study of symmetric functions and group representation theory: in particular, filling the boxes of Young diagrams with numbers (or sometimes more complicated objects) obeying various rules leads to a family of objects called Young tableaux, and these tableaux have combinatorial and representation-theoretic significance. As a type of shape made by adjacent squares joined together, Young diagrams are a special kind of polyomino.
Partition function
In number theory, the partition function p(n) represents the number of possible partitions of a natural number n, which is to say the number of distinct ways of representing n as a sum of natural numbers (with order irrelevant). By convention p(0) = 1, p(n) = 0 for n negative.
The first few values of the partition function, starting with p(0) = 1, are:
- 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010, 3718, 4565, 5604, … (sequence A000041 in the OEIS).
For larger values of n, the exact value of p(n) grows rapidly: for example, p(100) = 190,569,292, p(1000) is 24,061,467,864,032,622,473,692,149,727,991 or approximately 2.40615×10^31, and p(10000) is 36,167,251,325,...,906,916,435,144 or approximately 3.61673×10^106.
The generating function for p(n) is given by the infinite product

- ∑ p(n) x^n = 1 / ((1 − x)(1 − x^2)(1 − x^3) ...).

Expanding each factor on the right-hand side as a geometric series, one can rewrite it as

- (1 + x + x^2 + x^3 + ...)(1 + x^2 + x^4 + x^6 + ...)(1 + x^3 + x^6 + x^9 + ...) ....
The xn term in this product counts the number of ways to write
- n = a1 + 2a2 + 3a3 + ... = (1 + 1 + ... + 1) + (2 + 2 + ... + 2) + (3 + 3 + ... + 3) + ...,
where each number i appears ai times. This is precisely the definition of a partition of n, so our product is the desired generating function. More generally, the generating function for the partitions of n into numbers from a set A can be found by taking only those terms in the product where k is an element of A. This result is due to Euler.
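Multiplying in the truncated geometric series factor by factor gives a simple way to tabulate p(n); this is our own sketch of the product expansion, not code from the article:

```python
def partition_counts(N):
    """Coefficients of prod_k 1/(1 - x^k) up to x^N, i.e. p(0), ..., p(N)."""
    coeffs = [1] + [0] * N              # the empty product is 1
    for k in range(1, N + 1):           # multiply in the factor for part size k
        for n in range(k, N + 1):       # truncated geometric-series update
            coeffs[n] += coeffs[n - k]
    return coeffs

print(partition_counts(9))   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
```

The coefficients reproduce the opening values of sequence A000041 quoted above, and `partition_counts(100)[100]` returns 190,569,292 as stated in the text.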
Euler's pentagonal number theorem gives the expansion

- (1 − x)(1 − x^2)(1 − x^3) ... = 1 − x − x^2 + x^5 + x^7 − x^12 − x^15 + ...,

where the exponents of x on the right-hand side are the generalized pentagonal numbers; i.e., Pk = k(3k−1)/2 for k = 1, −1, 2, −2, 3, ... The signs in the summation alternate as (−1)^k. This theorem can be used to derive a recurrence for the partition function:

- p(n) = p(n − 1) + p(n − 2) − p(n − 5) − p(n − 7) + p(n − 12) + p(n − 15) − ...

where p(0) is taken to equal 1, and p(k) is taken to be zero for negative k. (Thus, although the sum on the right side appears infinite, only finitely many of its terms are nonzero: those for which the generalized pentagonal number Pk does not exceed n.)
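The pentagonal-number recurrence needs only O(√n) earlier values per step, which makes it much faster than the full product expansion. A sketch (function name ours):

```python
def p_recurrence(N):
    """p(0..N) via Euler's pentagonal-number recurrence."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = 1 if k % 2 else -1            # signs go + + - - + + ...
            total += sign * p[n - k * (3 * k - 1) // 2]
            g = k * (3 * k + 1) // 2             # generalized pentagonal number for -k
            if g <= n:
                total += sign * p[n - g]
            k += 1
        p[n] = total
    return p

print(p_recurrence(100)[100])   # 190569292
```

The result agrees with p(100) = 190,569,292 quoted earlier in the article.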
For instance, the number of partitions of the integer 4 is 5. For the integer 9, the number of partitions is 30; for 14 there are 135 partitions. Each of these values is divisible by 5, an instance of Ramanujan's congruence

- p(5k + 4) ≡ 0 (mod 5).

A short proof of this result can be obtained from the generating function of the partition function.
Ramanujan also discovered congruences related to 7 and 11:

- p(7k + 5) ≡ 0 (mod 7)
- p(11k + 6) ≡ 0 (mod 11)

and proved corresponding identities for the associated generating functions. Since 5, 7, and 11 are consecutive primes, one might think that there would be such a congruence p(13k + a) ≡ 0 (mod 13) for the next prime 13, for some a. This is, however, false. In fact, it can be shown that there is no congruence of the form p(bk + a) ≡ 0 (mod b) for any prime b other than 5, 7, or 11.
Partition function formulas
Approximation formulas exist that are faster to calculate than the exact formula given above.
An asymptotic expression for p(n) is given by

- p(n) ~ (1 / (4n√3)) exp(π √(2n/3)) as n → ∞.
This asymptotic formula was first obtained by G. H. Hardy and Ramanujan in 1918 and independently by J. V. Uspensky in 1920. Considering p(1000), the asymptotic formula gives about 2.4402 × 10^31, reasonably close to the exact answer given above (1.415% larger than the true value).
Hardy and Ramanujan obtained an asymptotic expansion with this approximation as the first term. The error after v terms is of the order of the next term, and v may be taken to be of the order of √n. As an example, Hardy and Ramanujan showed that p(200) is the nearest integer to the sum of the first v = 5 terms of the series.
It may be shown that the k-th term of Rademacher's series is of the order

- exp(π √(2n/3) / k),
so that the first term gives the Hardy–Ramanujan asymptotic approximation.
Techniques for implementing the Hardy–Ramanujan–Rademacher formula efficiently on a computer are discussed in Johansson, where it is shown that p(n) can be computed in softly optimal time O(n^(1/2+ε)). The largest value of the partition function computed exactly is p(10^20), which has slightly more than 11 billion digits.
Other recurrence relations
If q(n) denotes the number of partitions of n with no repeated parts, then p(n) and q(n) are linked by further recurrence relations of the same flavour.
Restricted partitions
In both combinatorics and number theory, families of partitions subject to various restrictions are often studied. This section surveys a few such restrictions.
Conjugate and self-conjugate partitions
If we flip the diagram of the partition 6 + 4 + 3 + 1 along its main diagonal, we obtain another partition of 14:
6 + 4 + 3 + 1 → 4 + 3 + 3 + 2 + 1 + 1
By turning the rows into columns, we obtain the partition 4 + 3 + 3 + 2 + 1 + 1 of the number 14. Such partitions are said to be conjugate of one another. In the case of the number 4, partitions 4 and 1 + 1 + 1 + 1 are conjugate pairs, and partitions 3 + 1 and 2 + 1 + 1 are conjugate of each other. Of particular interest is the partition 2 + 2, which has itself as conjugate. Such a partition is said to be self-conjugate.
Claim: The number of self-conjugate partitions is the same as the number of partitions with distinct odd parts.
Proof (outline): The crucial observation is that every odd part can be "folded" in the middle to form a symmetric hook (an L-shape), and these hooks can be nested to build a self-conjugate diagram.
One can then obtain a bijection between the set of partitions with distinct odd parts and the set of self-conjugate partitions, as illustrated by the following example:
9 + 7 + 3 ↔ 5 + 5 + 4 + 3 + 2
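The conjugation map and the claim about self-conjugate partitions are both easy to check numerically. In the sketch below (all function names ours), `conjugate` transposes the Ferrers diagram by counting, for each column index, how many parts reach that column:

```python
def partitions(n, largest=None):
    """Yield the partitions of n as non-increasing tuples of parts."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(parts):
    """Transpose the Ferrers diagram: column lengths become the parts."""
    return tuple(sum(1 for p in parts if p > i) for i in range(parts[0]))

# Verify the claim: #self-conjugate partitions == #partitions with distinct odd parts.
for n in range(1, 16):
    self_conj = sum(1 for lam in partitions(n) if conjugate(lam) == lam)
    odd_distinct = sum(1 for lam in partitions(n)
                       if len(set(lam)) == len(lam) and all(p % 2 for p in lam))
    assert self_conj == odd_distinct

print(conjugate((6, 4, 3, 1)))   # (4, 3, 3, 2, 1, 1)
```

The printed conjugate matches the worked example 6 + 4 + 3 + 1 → 4 + 3 + 3 + 2 + 1 + 1 in the text, and 2 + 2 is indeed a fixed point of `conjugate`.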
Odd parts and distinct parts
Among the 22 partitions of the number 8, there are 6 that contain only odd parts:
- 7 + 1
- 5 + 3
- 5 + 1 + 1 + 1
- 3 + 3 + 1 + 1
- 3 + 1 + 1 + 1 + 1 + 1
- 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1
Alternatively, we could count partitions in which no number occurs more than once. If we count the partitions of 8 with distinct parts, we also obtain 6:
- 8
- 7 + 1
- 6 + 2
- 5 + 3
- 5 + 2 + 1
- 4 + 3 + 1
For all positive numbers the number of partitions with odd parts equals the number of partitions with distinct parts. This result was proved by Leonhard Euler in 1748 and is a special case of Glaisher's theorem.
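Euler's theorem can be verified with two small dynamic programs over the restricted generating functions; the ascending sweep allows unlimited repetition of a part, while the descending sweep uses each part at most once. The function names are our own:

```python
def count_odd(n):
    """Partitions of n into odd parts (repetition allowed)."""
    c = [1] + [0] * n
    for k in range(1, n + 1, 2):        # odd part sizes only
        for m in range(k, n + 1):       # ascending: part k may repeat
            c[m] += c[m - k]
    return c[n]

def count_distinct(n):
    """Partitions of n into distinct parts."""
    c = [1] + [0] * n
    for k in range(1, n + 1):
        for m in range(n, k - 1, -1):   # descending: part k used at most once
            c[m] += c[m - k]
    return c[n]

for n in range(1, 40):
    assert count_odd(n) == count_distinct(n)
print(count_odd(8), count_distinct(8))   # 6 6
```

Both counts give 6 for n = 8, matching the two lists of six partitions above.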
For every type of restricted partition there is a corresponding function for the number of partitions satisfying the given restriction. An important example is q(n), the number of partitions of n into distinct parts. The first few values of q(n) are (starting with q(0) = 1):

- 1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, ... (sequence A000009 in the OEIS).

The generating function for q(n) is given by the products

- (1 + x)(1 + x^2)(1 + x^3) ... = 1 / ((1 − x)(1 − x^3)(1 − x^5) ...).

The second product can be written ϕ(x^2) / ϕ(x) where ϕ is Euler's function; the pentagonal number theorem can be applied to this as well giving a recurrence for q:
- q(k) = ak + q(k − 1) + q(k − 2) − q(k − 5) − q(k − 7) + q(k − 12) + q(k − 15) − q(k − 22) − ...
where ak is (−1)^m if k = 3m^2 − m for some integer m and is 0 otherwise.
Restricted part size or number of parts
By taking conjugates, the number pk(n) of partitions of n into exactly k parts is equal to the number of partitions of n in which the largest part has size k. The function pk(n) satisfies the recurrence
- pk(n) = pk(n − k) + pk−1(n − 1)
with initial values p0(0) = 1 and pk(n) = 0 whenever n ≤ 0 or k ≤ 0 (other than p0(0)). This recurrence is correct because pk(n − k) counts the partitions of n into k parts in which every part exceeds 1 (subtract 1 from each of the k parts), while pk−1(n − 1) counts those in which the smallest part equals 1 (remove that part). One recovers the function p(n) by summing over the number of parts:

- p(n) = p1(n) + p2(n) + ... + pn(n).
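The recurrence for pk(n) translates directly into a memoized function; a sketch (the name `p_exact` is ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_exact(k, n):
    """Number of partitions of n into exactly k parts."""
    if k == 0 and n == 0:
        return 1
    if k <= 0 or n <= 0:
        return 0
    # Smallest part > 1: subtract 1 from each part; smallest part = 1: remove it.
    return p_exact(k, n - k) + p_exact(k - 1, n - 1)

# Recover p(5) = 7 by summing over the number of parts.
print(sum(p_exact(k, 5) for k in range(6)))   # 7
```

For instance, `p_exact(2, 5)` returns 2, matching the two partitions 4 + 1 and 3 + 2 of 5 into exactly two parts.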
One possible generating function for such partitions, taking k fixed and n variable, is

- ∑ pk(n) x^n = x^k / ((1 − x)(1 − x^2) ... (1 − x^k)).
More generally, if T is a set of positive integers then the number of partitions of n, all of whose parts belong to T, has generating function ∏ 1/(1 − x^t), the product taken over all t in T.
This can be used to solve change-making problems (where the set T specifies the available coins). As two particular cases, one has that the number of partitions of n in which all parts are 1 or 2 (or, equivalently, the number of partitions of n into 1 or 2 parts) is ⌊n/2⌋ + 1,
and the number of partitions of n in which all parts are 1, 2 or 3 (or, equivalently, the number of partitions of n into at most three parts) is the nearest integer to (n + 3)2 / 12.
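Both closed forms are easy to verify against a direct count. A sketch (names ours), using the integer expression ((n + 3)^2 + 6) // 12 for the nearest integer, since (n + 3)^2 / 12 never lands exactly on a half-integer:

```python
def count_parts(n, parts):
    """Partitions of n whose parts all lie in `parts` (repetition allowed)."""
    c = [1] + [0] * n
    for k in parts:
        for m in range(k, n + 1):
            c[m] += c[m - k]
    return c[n]

for n in range(1, 200):
    assert count_parts(n, [1, 2]) == n // 2 + 1                   # parts 1 or 2
    assert count_parts(n, [1, 2, 3]) == ((n + 3) ** 2 + 6) // 12  # parts 1, 2 or 3

print(count_parts(10, [1, 2, 3]))   # 14
```

For n = 10 the closed form gives the nearest integer to 169/12 ≈ 14.08, i.e. 14, in agreement with the direct count.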
The asymptotic expression for p(n) implies that

- log p(n) ~ π √(2n/3) as n → ∞.
If A is a set of natural numbers, we let pA(n) denote the number of partitions of n into elements of A. If A possesses positive natural density α then

- log pA(n) ~ C √(αn), with C = π √(2/3).
If A is a finite set, this analysis does not apply (the density of a finite set is zero). If A has k elements whose greatest common divisor is 1, then pA(n) grows like a polynomial of degree k − 1:

- pA(n) = (n^(k−1) / ((k − 1)! · a1 a2 ... ak)) (1 + o(1)),

where a1, ..., ak are the elements of A.
Partitions in a rectangle and Gaussian binomial coefficients
One may also simultaneously limit the number and size of the parts. Let p(N, M; n) denote the number of partitions of n with at most M parts, each of size at most N. Equivalently, these are the partitions whose Young diagram fits inside an M × N rectangle. There is a recurrence relation

- p(N, M; n) = p(N, M − 1; n) + p(N − 1, M; n − M),

obtained by observing that p(N, M; n) − p(N, M − 1; n) counts the partitions of n into exactly M parts of size at most N, and subtracting 1 from each part of such a partition yields a partition of n − M into at most M parts, each of size at most N − 1.
The Gaussian binomial coefficient is defined as:

- C(M + N, M)_q = ∏ for j = 1, ..., M of (1 − q^(N+j)) / (1 − q^j).

The Gaussian binomial coefficient is related to the generating function of p(N, M; n) by the equality

- ∑ over n ≥ 0 of p(N, M; n) q^n = C(M + N, M)_q.
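This equality can be checked numerically: compute p(N, M; n) from the box recurrence, and build the Gaussian binomial coefficient by exact polynomial multiplication and division. Everything below is our own sketch (function names ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_box(N, M, n):
    """Partitions of n with at most M parts, each part at most N."""
    if n == 0:
        return 1
    if n < 0 or N == 0 or M == 0:
        return 0
    # Fewer than M parts, or exactly M parts (subtract 1 from each of them).
    return p_box(N, M - 1, n) + p_box(N - 1, M, n - M)

def gaussian_binomial(N, M):
    """Coefficient list of the Gaussian binomial C(M + N, M)_q."""
    poly = [1]
    for j in range(1, M + 1):
        f = poly + [0] * (N + j)            # multiply by (1 - q^(N+j))
        for i, c in enumerate(poly):
            f[i + N + j] -= c
        g = [0] * (len(f) - j)              # exact division by (1 - q^j)
        for i in range(len(g)):
            g[i] = f[i] + (g[i - j] if i >= j else 0)
        poly = g
    return poly

coeffs = gaussian_binomial(3, 2)            # C(5, 2)_q
assert coeffs == [p_box(3, 2, n) for n in range(3 * 2 + 1)]
print(coeffs)   # [1, 1, 2, 2, 2, 1, 1]
```

The coefficient of q^2 is 2, matching the two partitions 2 and 1 + 1 that fit inside a 2 × 3 box.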
Rank and Durfee square
The rank of a partition is the largest number k such that the partition contains at least k parts of size at least k. For example, the partition 4 + 3 + 3 + 2 + 1 + 1 has rank 3 because it contains 3 parts that are ≥ 3, but does not contain 4 parts that are ≥ 4. In the Ferrers diagram or Young diagram of a partition of rank r, the r × r square of entries in the upper-left is known as the Durfee square.
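The side of the Durfee square can be read off by scanning the sorted parts; a small sketch (function name ours):

```python
def durfee(parts):
    """Side of the Durfee square: largest k with at least k parts >= k."""
    parts = sorted(parts, reverse=True)
    k = 0
    while k < len(parts) and parts[k] >= k + 1:
        k += 1
    return k

print(durfee((4, 3, 3, 2, 1, 1)))   # 3
```

The result matches the worked example in the text: 4 + 3 + 3 + 2 + 1 + 1 has a 3 × 3 Durfee square.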
Young's lattice
There is a natural partial order on partitions given by inclusion of Young diagrams. This partially ordered set is known as Young's lattice. The lattice was originally defined in the context of representation theory, where it is used to describe the irreducible representations of symmetric groups Sn for all n, together with their branching properties, in characteristic zero. It has also received significant study for its purely combinatorial properties; notably, it is the motivating example of a differential poset.
See also
- Rank of a partition, a different notion of rank
- Crank of a partition
- Dominance order
- Integer factorization
- Partition of a set
- Stars and bars (combinatorics)
- Plane partition
- Polite number, defined by partitions into consecutive integers
- Multiplicative partition
- Twelvefold way
- Ewens's sampling formula
- Faà di Bruno's formula
- Newton's identities
- Leibniz's distribution table for integer partitions
- Smallest-parts function
- A Goldbach partition is the partition of an even number into primes (see Goldbach's conjecture)
- Kostant's partition function
Notes
- Andrews 1976, p. 199.
- Josuat-Vergès, Matthieu (2010), "Bijections between pattern-avoiding fillings of Young diagrams", Journal of Combinatorial Theory, Series A, 117 (8): 1218–1230, arXiv: , doi:10.1016/j.jcta.2010.03.006, MR 2677686.
- Sloane, N.J.A. (ed.). "Sequence A070177". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
- Caldwell, Chris K. (2017). "Partitions". The Top Twenty.
- Abramowitz & Stegun 1964, p. 825.
- Hardy & Wright 2008, p. 380.
- Berndt; Ono. "Ramanujan's Unpublished Manuscript on the Partition and Tau Functions with Proofs and Commentary" (PDF). Archived from the original (PDF) on 2011-09-27.
- Ono 2004, p. 87.
- Andrews 1976, p. 69.
- Erdős, Pál (1942). "On an elementary proof of some asymptotic formulas in the theory of partitions". Ann. Math. (2). 43: 437–450. doi:10.2307/1968802. Zbl 0061.07905.
- Nathanson 2000, p. 456.
- Johansson, F. (2012). "Efficient implementation of the Hardy–Ramanujan–Rademacher formula". LMS Journal of Computation and Mathematics. 15: 341–59.
- Johansson, Fredrik (March 2, 2014). "New partition function record: p(10^20) computed".
- Alder, Henry L. (1969). "Partition identities - from Euler to the present". American Mathematical Monthly. 76: 733–746. doi:10.2307/2317861.
- Hardy & Wright 2008, p. 362.
- Hardy & Wright 2008, p. 368.
- Hardy & Wright 2008, p. 365.
- Andrews, George E. (1971). Number Theory (Dover ed.). Philadelphia: W. B. Saunders Company. pp. 149–50.
- Notation follows Abramowitz & Stegun 1964, p. 825
- Abramowitz & Stegun 1964, p. 825, 24.2.2 eq. I(B)
- Abramowitz & Stegun 1964, p. 826, 24.2.2 eq. II(A)
- Here the notation follows that of Stanley 1999, Section 1.
- Hardy, G.H. (1920). Some Famous Problems of the Theory of Numbers. Clarendon Press.
- Andrews 1976, pp. 70,97.
- Nathanson 2000, pp. 475-85.
- Nathanson 2000, p. 495.
- Nathanson 2000, pp. 458-64.
- Andrews 1976, pp. 33–34.
- see, e.g., Stanley 1999, p. 58
References
- Abramowitz, Milton; Stegun, Irene (1964). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. United States Department of Commerce, National Bureau of Standards. ISBN 0-486-61272-4.
- Ahlgren, Scott; Ono, Ken (2001). "Congruence properties for the partition function" (PDF). Proceedings of the National Academy of Sciences. 98 (23): 12882–12884. Bibcode:2001PNAS...9812882A. doi:10.1073/pnas.191488598. MR 1862931. Archived from the original (PDF) on 2011-06-07.
- Andrews, George E. (1976). The Theory of Partitions. Cambridge University Press. ISBN 0-521-63766-X.
- Andrews, George E.; Eriksson, Kimmo (2004). Integer Partitions. Cambridge University Press. ISBN 0-521-60090-1.
- Apostol, Tom M. (1990) . Modular functions and Dirichlet series in number theory. Graduate Texts in Mathematics. 41 (2nd ed.). New York etc.: Springer-Verlag. ISBN 0-387-97127-0. Zbl 0697.10023. (See chapter 5 for a modern pedagogical intro to Rademacher's formula).
- Bóna, Miklós (2002). A Walk Through Combinatorics: An Introduction to Enumeration and Graph Theory. World Scientific Publishing. ISBN 981-02-4900-4. (an elementary introduction to the topic of integer partitions, including a discussion of Ferrers graphs)
- Complicite (2007). A Disappearing Number. Mentions Ramanujan's work on the partition function.
- Hardy, G. H.; Wright, E. M. (2008) . An Introduction to the Theory of Numbers. Revised by D. R. Heath-Brown and J. H. Silverman. Foreword by Andrew Wiles. (6th ed.). Oxford: Oxford University Press. ISBN 978-0-19-921986-5. MR 2445243. Zbl 1159.11001.
- Lehmer, D. H. (1939). "On the remainder and convergence of the series for the partition function". Trans. Amer. Math. Soc. 46: 362–373. doi:10.1090/S0002-9947-1939-0000410-9. MR 0000410. Zbl 0022.20401. Provides the main formula (no derivatives), remainder, and older form for Ak(n).)
- Gupta, Hansraj; Gwyther, C.E.; Miller, J.C.P. (1962). Royal Society of Math. Tables. Volume 4, Tables of partitions. (Has text, nearly complete bibliography, but they (and Abramowitz) missed the Selberg formula for Ak(n), which is in Whiteman.)
- Macdonald, Ian G. (1979). Symmetric functions and Hall polynomials. Oxford Mathematical Monographs. Oxford University Press. ISBN 0-19-853530-9. Zbl 0487.20007. (See section I.1)
- Nathanson, M.B. (2000). Elementary Methods in Number Theory. Graduate Texts in Mathematics. 195. Springer-Verlag. ISBN 0-387-98912-9. Zbl 0953.11002.
- Ono, Ken (2000). "Distribution of the partition function modulo m". Ann. of Math. 151 (1): 293–307. arXiv: . doi:10.2307/121118. MR 1745012. Zbl 0984.11050.
- Ono, Ken (2004). The web of modularity: arithmetic of the coefficients of modular forms and q-series. CBMS Regional Conference Series in Mathematics. 102. Providence, RI: American Mathematical Society. ISBN 0-8218-3368-5. Zbl 1119.11026.
- Rademacher, Hans (1974). Collected Papers of Hans Rademacher. v II. MIT Press. pp. 100–07, 108–22, 460–75.
- Sautoy, Marcus Du. (2003). The Music of the Primes. New York: Perennial-HarperCollins.
- Stanley, Richard P. (1999). Enumerative Combinatorics. Volumes 1 and 2. Cambridge University Press. ISBN 0-521-56069-1.
- Whiteman, A. L. (1956). A sum connected with the series for the partition function. Pacific Journal of Math. 6. pp. 159–176. Zbl 0071.04004. (Provides the Selberg formula. The older form is the finite Fourier expansion of Selberg.)
- Hazewinkel, Michiel, ed. (2001) , "Partition", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
External links
- Partition and composition calculator
- First 4096 values of the partition function
- An algorithm to compute the partition function
- Weisstein, Eric W. "Partition". MathWorld.
- Weisstein, Eric W. "Partition Function P". MathWorld.
- Pieces of Number from Science News Online
- Lectures on Integer Partitions by Herbert S. Wilf
- Counting with partitions with reference tables to the On-Line Encyclopedia of Integer Sequences
- Integer partitions entry in the FindStat database
- Integer::Partition Perl module from CPAN
- Fast Algorithms For Generating Integer Partitions
- Generating All Partitions: A Comparison Of Two Encodings
- Amanda Folsom, Zachary A. Kent, and Ken Ono, l-adic properties of the partition function. In press.
- Jan Hendrik Bruinier and Ken Ono, An algebraic formula for the partition function. In press.
- Grime, James (April 28, 2016). "Partitions - Numberphile" (video). Brady Haran. Retrieved 5 May 2016. | <urn:uuid:17d95952-80e5-42c0-a195-734234c88190> | 4.3125 | 5,534 | Knowledge Article | Science & Tech. | 80.859891 | 95,541,631 |
A University of Colorado at Boulder research team has discovered the first definitive evidence of shorelines on Mars, an indication of a deep, ancient lake there and a finding with implications for the discovery of past life on the Red Planet.
Estimated to be more than 3 billion years old, the lake appears to have covered as much as 80 square miles and was up to 1,500 feet deep, roughly the equivalent of Lake Champlain bordering the United States and Canada, said CU-Boulder Research Associate Gaetano Di Achille, who led the study. The shoreline evidence, found along a broad delta, included a series of alternating ridges and troughs thought to be surviving remnants of beach deposits.
"This is the first unambiguous evidence of shorelines on the surface of Mars," said Di Achille. "The identification of the shorelines and accompanying geological evidence allows us to calculate the size and volume of the lake, which appears to have formed about 3.4 billion years ago."
A paper on the subject by Di Achille, CU-Boulder Assistant Professor Brian Hynek and CU-Boulder Research Associate Mindi Searls, all of the Laboratory for Atmospheric and Space Physics, has been published online in Geophysical Research Letters, a publication of the American Geophysical Union.
Images used for the study were taken by a high-powered camera known as the High Resolution Imaging Science Experiment, or HiRISE. Riding on NASA's Mars Reconnaissance Orbiter, HiRISE can resolve features on the surface down to one meter in size from its orbit 200 miles above Mars.
An analysis of the HiRISE images indicates that water carved a 30-mile-long canyon that opened up into a valley, depositing sediment that formed a large delta. This delta and others surrounding the basin imply the existence of a large, long-lived lake, said Hynek, also an assistant professor in CU-Boulder's geological sciences department. The lake bed is located within a much larger valley known as the Shalbatana Vallis.
"Finding shorelines is a Holy Grail of sorts to us," said Hynek.
In addition, the evidence shows the lake existed during a time when Mars is generally believed to have been cold and dry, which is at odds with current theories proposed by many planetary scientists, he said. "Not only does this research prove there was a long-lived lake system on Mars, but we can see that the lake formed after the warm, wet period is thought to have dissipated."
Planetary scientists think the oldest surfaces on Mars formed during the wet and warm Noachian epoch from about 4.1 billion to 3.7 billion years ago that featured a bombardment of large meteors and extensive flooding. The newly discovered lake is believed to have formed during the Hesperian epoch and postdates the end of the warm and wet period on Mars by 300 million years, according to the study.
The deltas adjacent to the lake are of high interest to planetary scientists because deltas on Earth rapidly bury organic carbon and other biomarkers of life, according to Hynek. Most astrobiologists believe any present indications of life on Mars will be discovered in the form of subterranean microorganisms.
But in the past, lakes on Mars would have provided cozy surface habitats rich in nutrients for such microbes, Hynek said.
The retreat of the lake apparently was rapid enough to prevent the formation of additional, lower shorelines, said Di Achille. The lake probably either evaporated or froze over with the ice slowly turning to water vapor and disappearing during a period of abrupt climate change, according to the study.
Di Achille said the newly discovered pristine lake bed and delta deposits would be a prime target for a future landing mission to Mars in search of evidence of past life.
"On Earth, deltas and lakes are excellent collectors and preservers of signs of past life," said Di Achille. "If life ever arose on Mars, deltas may be the key to unlocking Mars' biological past."
Gaetano Di Achille | EurekAlert!
Paleocene-Eocene Thermal Maximum (PETM) for Global Warming
- Josh Chaplin
“Can the Paleocene-Eocene Thermal Maximum (PETM) be used as an analogue for anthropogenically-induced global warming?”
The Paleocene-Eocene Thermal Maximum (PETM) was a global climatic event in the early Paleogene period, spanning the Paleocene-Eocene boundary at 55.8 Ma (Gradstein et al 2004). Although the exact start and end points of the PETM are disputed, drilling in the North Sea and the Southern Ocean's Weddell Sea has allowed its duration to be estimated at around 170,000 years. There is, though, much controversy over these figures and the mechanisms that caused such a climatic anomaly; particularities aside, this geologically recent warming cycle may be of much use when it comes to understanding our changing climate today.
A proposed cause of the PETM thermal anomaly is the release of 1,500 Gt of methane and carbon from decomposing gas-hydrate reservoirs in the terrestrial biosphere, initiated as part of a sequence of events leading on from mass volcanism associated with the opening of the North Atlantic (Winguth 2011). The resulting average temperature rise of around 5.6°C, and up to 9°C, is comparable to extrapolations which predict the temperature shift by the end of the current century (Röhl 2000). Sluijs (2007) explains how the climatological evidence for this comes from oxygen isotope excursions in foraminifera and terrestrial carbonates, increased levels of magnesium and calcium in foraminifera, and the poleward migrations of tropical plankton, mammals and terrestrial plants. By contrast, the inception of the modern warming period was human-induced: carbon has been released from traps much faster than it was during the PETM, and has thus accumulated in the atmosphere at a higher concentration in a shorter space of time. The PETM did not surpass a so-called 'tipping point' whereby the effects of the warming would be irreversible, whereas the extremely rapid release of carbon over the course of the past three centuries may have catalysed such a scenario. As a result, it is reasonable to treat the PETM as an analogue for modern anthropogenic climate change, given the analogous effects and root cause; however, it must be remembered that the speed of the onset may well render these expectations void.
There are a number of close similarities between modern and past warming periods which vouch for the validity of using their analyses as an analogue for what is happening at present, and what is still to come. For example, anthropogenically-induced warming and the natural warming of the PETM were both the consequence of excess carbon in the atmosphere. In addition, the scale of temperature rise during the PETM is consistent with predictions for the end of the current century, at around 6°C (Röhl et al. 2000). Applying the principle of uniformitarianism, the climatic reconstruction of conditions throughout the PETM can help us understand what is happening now and the effects we can expect. As the globe warms up, consequences such as the thermal expansion of the oceans and the release of terrestrial carbon occur, which in turn have knock-on corollaries, as is seen today (Hayward 2011); this is shown in Figure 1. Furthermore, because the event is geologically recent, it is easier to construct reliable climatological models and to replicate the potential consequences of the warming. This is another advantage of using what we know of the PETM as an analogue for what is happening to the planet at present.
Modern global warming is, however, remarkably different from the rise in temperature associated with the PETM. Whilst the temperature rise today is comparable to that of the Late Paleocene, at around 6°C (Röhl et al. 2000), the onset of that thermal spike occurred naturally and from an equilibrium climatic state. The industrial revolution, and the vast amounts of coal, oil and gas which have ended up in the atmosphere as CO2 since the early 1800s, have brought about the temperature shift in a very small fraction of the time it took during the Paleogene. Figure 2 puts this into perspective. The anthropogenic extraction of carbon reservoirs has depleted their stratigraphic storage much quicker than would occur naturally through uplift and erosion. "PETM: Global warming, naturally" (2012) states that around 5×10⁹ tons of carbon was released into the atmosphere each year during the PETM, whereas the figure for 2010 alone is 35×10⁹ tons. Such rapid accumulation of carbon in the atmosphere is unprecedented in reconstructions covering the last 20 million years, and current atmospheric concentrations of CO2 are higher than at any point within the last 800,000 years (Hayward 2011). With reference to Figure 1, the knock-on consequences of each occurrence within a warming cycle are numerous, and the simultaneous onset of several of these is largely uncharted territory. A tipping point could theoretically be reached whereby global temperatures spiral out of control; drawing on the reversal of snowball Earth periods, such an extreme planetary climate could take many millions of years to return to optimal conditions, if it ever does.
When considering the centuries required to transport heat to the deep ocean and the millennia needed to remove excess carbon from the atmosphere, the consequences of climate change will still be materialising for generations, and will likely persist for longer than the roughly 170,000 years it took the Earth to remove the excess CO2 of the PETM. By 2300, sea levels are expected to be up to 0.8 m higher than 1980 levels, many more species will be extinct, and the poles will have considerably smaller ice sheets, if any remain at all; one can only assume the repercussions will be more severe than the altered migration patterns and dinoflagellate calcite levels of the PETM.
In conclusion, the PETM can be considered an analogue for the current warming of the globe. In the case of anthropogenic climate change, however, the rate of carbon accumulation means that the potential consequences of the climatic shift in the years to come are far less predictable. Using the PETM as an analogous model for the current period of warming may suffice for the present, but future outcomes are uncertain as atmospheric carbon continues to accrue at an alarming rate.
- Gradstein et al. (2004) – "A geological timescale"
- Hayward, A. et al. (2011) – "Are there Pre-Quaternary Analogues for a future Greenhouse Warming?", 933-941
- "PETM: Global Warming, Naturally" (2012) – http://www.wunderground.com/climate/PETM.asp?MR=1
- Röhl, U. et al. (2000) – "New chronology for the late Paleocene thermal maximum and its environmental implications", Geology 28, 927-930
- Sluijs, A. et al. (2007) – "The Palaeocene–Eocene Thermal Maximum super greenhouse: biotic and geochemical signatures, age models and mechanisms of global change", 333-338
- Winguth, A. (2011) – "The Paleocene-Eocene Thermal Maximum: Feedbacks between Climate Change and Biogeochemical Cycles", 43-45
Microsoft® Visual Basic® Scripting Edition
Language Reference
The Nothing keyword in VBScript is used to disassociate an object variable from any actual object. Use the Set statement to assign Nothing to an object variable. For example:
Set MyObject = Nothing
Several object variables can refer to the same actual object. When Nothing is assigned to an object variable, that variable no longer refers to any actual object. When several object variables refer to the same object, memory and system resources associated with the object to which the variables refer are released only after all of them have been set to Nothing, either explicitly using Set, or implicitly after the last object variable set to Nothing goes out of scope.
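The behaviour described above can be sketched with a short example (the use of a Scripting.Dictionary here is arbitrary; any creatable object would behave the same way):

```vbscript
Dim ObjA, ObjB
Set ObjA = CreateObject("Scripting.Dictionary")
Set ObjB = ObjA            ' Both variables now refer to the same object.

Set ObjA = Nothing         ' The object survives: ObjB still refers to it.
ObjB.Add "color", "blue"   ' Valid; the Dictionary is still alive.

Set ObjB = Nothing         ' Last reference released; the object's memory
                           ' and system resources can now be reclaimed.
```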
Under what conditions do parallel river networks occur?
- Jung Kichul, Niemann Jeffrey D, Huang Xiangjiang
- Department of Civil and Environmental Engineering, Campus Delivery 1372, Colorado State University, Fort Collins, CO 80523, USA, Innovyze, Inc., 370 Interlocken Blvd, Suite 300, Broomfield, CO 80021, USA
- Geomorphology
- Elsevier, 2011
Geologists have long recognized that channel networks can deviate from a typical dendritic form when they develop under certain geologic or topographic constraints. One such deviation is the parallel form, which is thought to occur when networks develop on sloping surfaces. The objectives of this research are to determine the specific conditions under which parallel networks occur and the nature of the transition between dendritic and parallel networks. Natural and simulated channel networks are considered in this study. The natural networks were obtained from the digital elevation models of basins that include remnants of the preexisting topographic surface. These remnants were identified as locations with small drainage areas and topographic curvatures that are close to zero. For each basin, the average slope of the preexisting surface was calculated, and the channel network was classified using an existing method that can distinguish five network types (including dendritic and parallel) based on measures that are derived from scaling invariance. The natural networks become consistently parallel when the average slope of the preexisting surface exceeds about 3%. Simulated channel networks were also generated using a detachment-limited model for fluvial erosion. The parameters of the model were determined to imitate the natural basins, and the average slopes of the observed preexisting surfaces were used for the slopes of the initial surfaces in the simulations. The model can also produce a transition between dendritic and parallel networks for an initial slope around 3%, but this transition depends on the roughness of the initial surface and the boundary conditions.
Looking for the dispositions scientists have on using non-renewable energy resources. Should we be using them, and if so, in what way? What would be some examples (present or past) to support an answer for or against such a position? How integral is science as a stakeholder in these processes?
Renewable sources of energy are obtained from natural resources that are naturally replenished or renewed within a human lifespan. Simply put, it is a sustainable source of energy. Some natural resources, such as moving water, wind, and the sun, are not at risk of depletion if they are used for energy production. However, some renewable energy, such as biomass, is only sustainable based on how heavily it is used. Biomass, for instance, is a renewable resource only if its rate of consumption does not exceed regeneration. Thus, switching to a renewable energy based economy is important for the long term sustainability of both the environment and economy. The world should be moving away from non-renewable energy before it is completely depleted. The United States relies heavily on coal, oil, and natural gas. These fossil fuels are non-renewable. They come from a finite amount of resource that will eventually dwindle down to nothing. The less of the resource that is left, the more expensive and environmentally damaging it becomes to retrieve. Conversely, many ...
Perepiteia is claimed to be a new generator developed by the Canadian inventor Thane Heins. The device is named after the Greek word for peripety, a dramatic reversal of circumstances or turning point in a story. The device was quickly attributed the term "perpetual motion machine" by several media outlets. Due to the long history of hoaxes and failures of perpetual motion machines and the incompatibility of such a device with accepted principles of physics, Heins' claims about Perepiteia have been treated with considerable skepticism.
In 2003, Heins filed a patent application in Canada, but no patent was granted. Heins also founded Potential Difference Inc., whose website contains a series of videos of the inventor demonstrating the machine. In 2016, US patent #9,230,730 was issued for another of Heins' inventions, a bi-toroidal topology transformer.
Heins has recently stated that he is unsure whether or not the machine really produces energy, but in communications with science writer David Bradley of ScienceBase, Heins made claims of up to 7000% efficiency for the bi-toroidal transformer. Heins, who reportedly works 8–12 hours a day on the Perepiteia, insists that it is viable and that "This technology should be mainstream."
Mechanically, the device appears to be an induction motor with a magnetic material placed inside the rotor core. Heins believes that the device's potential may rest in its atypical manipulation of the back electromotive force (back EMF). A more detailed description of the device may be found in the patent application, minus supporting figures.
The apparently unique quality of the Perepiteia machine is that, instead of maintaining a certain state of motion, it appears to generate acceleration. According to Heins, the Perepiteia produces magnetic friction which somehow gets turned into a magnetic boost. An electric motor's drive shaft is attached to a steel rotor with small round magnets lining its outer edges. In this set-up of a simple generator, the rotor spins so that the magnets pass by a wire coil just in front of them, generating electrical energy.
Perepiteia's process begins by overloading the generator to get a current, which typically causes the wire coil to build up a large electromagnetic field. Usually, this kind of electromagnetic field creates an effect called the back electromotive force (back EMF) due to Lenz's law. The effect should repel the spinning magnets on the rotor and slow them down until the motor stops completely, in accordance with the law of conservation of energy. However, instead of stopping, the rotor accelerates - i.e. the magnetic friction did not repel the magnets and wire coil. Heins states that the steel rotor and driveshaft conducted the magnetic resistance away from the coil and back into the electric motor. In effect, the back EMF was boosting the magnetic fields used by the motor to generate electrical energy and cause acceleration. The faster the motor accelerated, the stronger the electromagnetic field it would create on the wire coil, which in turn would make the motor go even faster. Heins seemed to have created a positive feedback loop. To confirm the theory, Heins replaced part of the driveshaft with plastic pipe that wouldn't conduct the magnetic field. There was no acceleration.
In early 2008, Heins was given access to equipment to demonstrate it by professor Riadh Habash of the University of Ottawa, who says of it, "It accelerates, but when it comes to an explanation, there is no backing theory for it. That's why we're consulting MIT. But at this time we can't support any claim."
After examining the machine and witnessing a demonstration, Massachusetts Institute of Technology (MIT) professor Markus Zahn admitted that he could not fully explain its operation. Although he refused to call it perpetual motion, he stated that it might be an extremely efficient motor. Regarding the device, Zahn stated that "It's an unusual phenomena [sic] I wouldn't have predicted in advance. But I saw it. It's real. Now I'm just trying to figure it out...To my mind this is unexpected and new, and it's worth exploring all the possible advantages once you're convinced it's a real effect." However, even if Perepiteia does not produce perpetual motion, Zahn still believes that the device could have considerable practical applications, noting that "There are an infinite number of induction machines in people's homes and everywhere around the world. If you could make them more efficient, cumulatively, it could make a big difference."
However, Zahn later stated in an interview that "I can't understand how [Heins] can even breathe the words 'perpetual motion.' He plugs it into the wall." In a subsequent e-mail to Heins, Zahn wrote that: "Any talk of perpetual motion, over unity efficiency, etc. discredits you, now me, and your ideas." Zahn further stated that he would not endorse Heins' device until "the foolishness is stopped of hinting that your motor violates fundamental laws of physics."
Critics of the system have pointed out that the system described by Heins simply demonstrates a change in the motor's hysteresis drag, increasing the speed of the rotor but not producing any energy. In other words, when the rotor exhibits acceleration following a specific electrical short-out, the device is merely more efficiently converting the input electricity to mechanical energy than in the other test configurations.
On February 29, 2008, six members of Ottawa Skeptics met at the Colonel By building at the University of Ottawa to witness a demonstration of Perepiteia. Heins, who conducted the demonstration, later met with the members to discuss his device and answer questions. In a subsequent report released in May, Ottawa Skeptics expressed severe doubts about Heins' claims regarding Perepiteia. They noted that Perepiteia produces either observed acceleration or a slight increase in generator electrical output, but that this alone does not automatically mean that "free energy" or perpetual motion is being produced or that there is a "real and measurable effect." While acknowledging that the speed-up behaviour of the generator cannot be fully explained, they stated that there is no evidence that Perepiteia "represents any challenge to currently known laws of physics."
On May 21, 2009, a skeptic writing under the name Natan Weissman wrote an explanation of Perepiteia in relation to its motor, a Ryobi bench grinder. The author states that the acceleration behavior of the machine is due to the consumption of torque from the induction motor, rather than any unconventional manipulation of Electromagnetic fields or Counter-electromotive force.
On June 3, 2013, posting in response to questions Pure Energy Blog, Heins provided an explanation of his claims, stating that: "A generator that requires a 1 Watt increase in mechanical drive shaft power to deliver 1 Watt of electrical power to a load would be 100% efficient. A generator that delivers 0.95 Watts with a 1 Watt increase in mechanical drive shaft power from no-load to on-load would be 95% efficient."
- CA application 2437745
- Office, Government of Canada, Industry Canada, Office of the Deputy Minister, Canadian Intellectual Property. "Canadian Patent Database". brevets-patents.ic.gc.ca. Retrieved 21 June 2017.
- "Archived copy". Archived from the original on 2008-02-08. Retrieved 2008-02-07.
- "United States Patent: 9230730 - Bi-toroidal topology transformer". Retrieved 21 June 2017.
- "Free energy with magnetic reluctance". Archived from the original on 30 January 2009. Retrieved 21 June 2017.
- The next great Canadian idea: Peripiteia generator Archived 2009-02-03 at the Wayback Machine. by Sharda Prashad, Canadian Business magazine, July 11, 2008. (retrieved on January 3, 2009).
- Up, Beam Me (6 February 2008). "Thane Heins' Perepiteia device". Retrieved 21 June 2017.
- Inventor Doesn't Dare Say 'Perpetual Motion Machine', Physorg.com, February 7, 2008.
- Hamilton, Tyler (February 4, 2008). "Turning physics on its ear". The Star. Toronto. Retrieved May 22, 2010.
- "The Chef Who Could Change The World of Physics » Popular Fidelity » Unusual Stuff". Retrieved 21 June 2017.
- Perepiteia Perpetual-Motion Machine May Actually Do...Something, gizmodo.com, February 7, 2008.
- An Internet commotion over 'perpetual motion', Ottawa Citizen (reprinted by Canada.com), March 1, 2008.
- Slashdot | Yet Another Perpetual Motion Device[unreliable source?]
- In This Town We Obey the Laws of Thermodynamics by Seanna Watson, Ottawa Skeptics May 4, 2008.
- Explanation of the Perepiteia rotating machine and the accompanying theory concerning "Back EMF" by Natan Weissman, SciScoop, May 21, 2009.
- Thane Heins Allows Open License of his Regenerative Acceleration (ReGenX) Technology by Sterling D. Allan, Pure Energy Systems News (Pure Energy Blog), June 9, 2013. | <urn:uuid:71d1d9c1-323b-4028-8945-6079a0c9c1e9> | 3.015625 | 1,924 | Knowledge Article | Science & Tech. | 42.550641 | 95,541,661 |
Spectral and Physicochemical Characteristics of nC60 in Aqueous Solutions
Despite its extremely low solubility in water, fullerite C₆₀ can form colloidally stable aqueous suspensions containing nanoscale C₆₀ particles (nC₆₀) when it is subject to contact with water. nC₆₀ is the primary fullerene form following its release to the environment. The aim of the present study was to provide fundamental insights into the properties and environmental impacts of nC₆₀. nC₆₀ suspensions containing negatively charged and heterogeneous nanoparticles were produced via extended mixing in the presence and absence of citrate and other carboxylates. These low-molecular weight acids were employed as simple surrogates of natural organic matter. The properties of nC₆₀ were characterized using dynamic light scattering (DLS), transmission electron microscopy (TEM), and UV-Vis spectroscopy. nC₆₀ produced in the presence of carboxylate differs from that produced in water alone (aq/nC₆₀) with respect to surface charge, average particle size, interfacial properties, and UV-Vis spectroscopic characteristics. Importantly, regularly shaped (spheres, triangles, squares, and nano-rods) nC₆₀ nanoparticles were observed in carboxylate solutions, but not in water alone. This observation indicates that a carboxylate-mediated 'bottom-up' process occurs in the presence of carboxylates. Changes in the UV-Vis spectra over time indicate that reactions between C₆₀ and water or other constituents in water never stop, potentially leading to significant morphologic changes during storage or as a result of simple dilution. These results suggest that studies examining the transport, fate, and environmental impacts of nC₆₀ should take the constituents of natural waters into consideration and that careful examination on the properties of the tested nC₆₀ should be conducted prior to and during each study.
NOTE: All the example Questions on this page use this Example Dataset.
Functions are usually composed of the Function's name, parentheses and one or more Arguments separated by commas between the parentheses. Additionally, there can be Grouping or a Filter inside.
Some Functions have a fixed number of Arguments. For example, the RATIO Function always takes two Arguments - a dividend and a divisor.
Some Functions have a variable number of Arguments. These Functions can behave differently depending on the number of Arguments used.
SUM(SALES) adds the Sales value from all the rows in the Data.
SUM(SALES, PROFIT) adds the Sales and Profit values in each row. You'll usually wrap it in a Function like AVERAGE to get the final Metric.
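For instance, the two ideas combine like this (an illustrative query; the column names come from the Example Dataset referenced above):

```
AVERAGE(
    SUM(SALES, PROFIT)
)
```

This computes Sales + Profit for each row, then averages those per-row totals into a single Metric.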
Filters in Functions
You can set up a Filter inside a Function. It will filter the Data on which the computation will be performed.
SUM(SALES WHERE SALES > 0) adds the Sales value from only those rows where Sales is more than 0.
SUM(SALES WHERE PROFIT > SALES/10) adds the Sales value from only those rows where the Profit is more than 1/10th of Sales.
Grouping in Functions
You can do grouping within Functions using the ACROSS Keyword. Take the following example:
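The example query itself did not survive extraction; based on the explanation that follows, it would look roughly like this (the exact placement of ACROSS is an assumption):

```
AVERAGE(
    SUM(SALES ACROSS CITY)
)
```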
First, this groups the Data by each City. Then, for each of these groups, it adds the Sales values. Finally, it computes the average value of these totals. | <urn:uuid:4017d367-d5ba-408c-9045-0a50e6370090> | 3.5625 | 326 | Documentation | Software Dev. | 52.252512 | 95,541,693 |
Summer weather preview: For Texas summers, variety is good
By John W. Nielsen-Gammon, Regents Professor of Atmospheric Sciences and Texas State Climatologist
Much like the old saying about life, weather is just one damn thing after another. It’s sunny, then it rains. It’s windy, then calm. It’s a drought ended with a flood. And on and on.
In Texas in the summertime, imagine how bad things would be if we had the same weather day after day!
At first, that might not seem so bad. If you have something planned for a certain day, you might want that day to be bright and sunny. And if every day is bright and sunny, well then there’s nothing to worry about! Alas, Texas needs rain during the summer, too. Rivers, streams, lakes, stock tanks, plants, animals… all need rain. Humans need rain, too, but our needs are not quite so urgent. We’ve developed the ability to capture and store water and save it for a (not) rainy day.
Unfortunately, our ability to store water for later doesn’t mean these supplies increase when we need more of them. The same weather that drives water demand (high temperatures, low humidity, little rainfall) also depletes surface water supplies and reduces aquifer recharge.
Nature can even fall into an ugly little feedback loop. Lack of rain during the summer allows the soil to dry out quickly, which in turn lets the ground get hotter. Then the air gets hotter and drier, which makes it less likely for thunderstorms to form and produce rain.
The worst case scenario (or something close to it) was 2011. By the end of June, eight of the previous nine months had seen below-normal rainfall, and temperatures were already hot. Then they got even hotter as the lack of rain continued. Some stations in Texas hit the double-triple: at least 100 days reached at least 100 degrees.
The opposite extreme happened in 2002. After a dry start to the year, San Antonio had received only eight inches of rain, compared to a normal of 16 inches. Then, in late June and early July, it rained. And rained. A week later, after widespread flooding in south-central Texas, San Antonio’s total was up to 25 inches. As a result, daytime highs in the rest of July were about 5 degrees milder.
It’s too soon to tell whether this year will be more like 2002 or 2011, but both scenarios are possible. Recent months have been fairly dry across most of Texas, and the dryness extends back a year or more in parts of central and west Texas. Temperatures have recently been unusually high, too, partly caused by the lack of rain prior to May and partly because of the lack of clouds and rain during May. By the second week of June, Texas was on track to have its third-warmest May and June on record, behind 2011 and 1953. May was the second-hottest May ever, and the jump in temperature from April to May had never been so large.
So far, that sounds a lot like 2011, but a change may be in the offing. Much like 2002, the computer weather forecasting models are showing the possibility of an extended influx of tropical moisture and wet weather toward the latter half of June. If the wetness materializes, soil moisture would be replenished and temperatures would calm down.
Seasonal precipitation forecasts, by the way, are no help. That great alterer of seasonal weather patterns, El Niño, mainly affects our weather in the wintertime. So aside from helping to make this year’s hurricane season most likely more benign than last year’s, El Niño and La Niña don’t offer us any clues for summer rain.
Atlantic Ocean temperatures seem to affect our summertime rains, but mainly on a decade to decade basis; individual years are just too variable.
Temperature is a lot easier to forecast. (Well, okay, they’re both easy to forecast, but only with temperature do you have a reasonable chance of forecasting correctly.) This is because of the warming trend experienced across all of Texas in the past few decades, closely linked to global temperature increases. So just about every year, the summer temperature forecast is for enhanced chances of above-normal temperatures. And indeed, for the past decade, every Texas summer has been warmer than the twentieth century average.
About the only way we can get summer temperatures below normal is if precipitation is well above normal. That’s happened three times this century, in 2004, 2007, and 2017, and the (Harvey) rains in 2017 came too late to affect average summer temperatures.
OK, so we want plenty of variety this summer. Lots of sunny days, but a decent amount of rain, too. Not enough to cause lots of flooding, but enough to keep temperatures on the mild side and keep plants green. Enough to short-circuit a really hot summer before it has a chance to kick into high gear. And just the right amount to give us some beautiful Texas sunsets.
That’s what we want. This being Texas, we know the weather probably won’t cooperate. But we can hope and pray.
John Nielsen-Gammon been on the faculty at Texas A&M University since 1991. He is currently a Regents Professor of Atmospheric Sciences and also serves as the Texas State Climatologist. He graduated from the Massachusetts Institute of Technology, receiving a Ph.D. there in 1990. He does research on various types of extreme weather from droughts to floods, as well as air pollution and computer modeling. As Texas State Climatologist, he helps the State of Texas make the best possible use of weather and climate information, through applied research, outreach, and service on state-level committees. He is a fellow of the American Meteorological Society. | <urn:uuid:946ae129-63d0-4ce2-8277-6eb34f0c071f> | 2.84375 | 1,231 | Personal Blog | Science & Tech. | 56.884176 | 95,541,695 |
Scientists from the Excellence Cluster Universe at the Ludwig-Maximilians-Universität Munich have established "Cosmowebportal", a unique data centre for cosmological simulations located at the Leibniz Supercomputing Centre (LRZ) of the Bavarian Academy of Sciences. The complete results of a series of large hydrodynamical cosmological simulations are available, with data volumes typically exceeding several hundred terabytes. Scientists worldwide can interactively explore these complex simulations via a web interface and directly access the results.
With current telescopes, scientists can observe our Universe's galaxies and galaxy clusters and their distribution along an invisible cosmic web. From the exact measurement of the cosmic microwave background (CMB) with the Planck space observatory, and many other measurements, for example with the Hubble Space Telescope, scientists were able to develop a precise model of our Universe. However, little is yet known about how these structures could form from the distribution of matter in the early Universe.
In order to answer this question, theoretical astrophysicists work with cosmological, hydrodynamical simulations. They test their hypotheses about the universe by developing mathematical models that describe the underlying complex physical processes and run them on high-performance computers trying to reproduce the evolution of the Universe over billions of years. If the underlying assumptions are correct, the simulations should match the current astronomical observations and findings.
A group of astrophysicists led by Dr. Klaus Dolag from the Excellence Cluster Universe at the Ludwig-Maximilians-Universität Munich in close collaboration with the LRZ have now initiated "Cosmowebportal". This unique data centre for cosmological simulations provides access to the results of the world's most extensive set of cosmological hydrodynamic simulations, Magneticum Pathfinder, also developed by Klaus Dolag’s team and carried out at the LRZ.
The complete simulations are saved at the LRZ in Garching on a data store for large datasets, which is connected to the supercomputer SuperMUC. Using a web interface, interested scientists can, for example, select objects from the raw simulation data, process them, and even create virtual observations mimicking existing or future space telescopes.
"Large astronomical projects such as the space telescopes Euclid or eRosita, which are to be launched in the next few years, will observe large areas of the Universe and provide further insight into the evolution of its first structures, so the significance of cosmological hydrodynamic simulations will only increase in future," says Klaus Dolag. "A data centre that pools these simulations and makes them available is therefore an important facility for scientists working in the field."
Besides Klaus Dolag and Antonio Ragagnin, scientists from the following institutions were involved in the project: C2PAP, the data center of the Excellence Cluster Universe, LRZ, University of Trieste, the INAF Osservatorio Astronomico di Trieste and the Max Planck Computing and Data Facility.
Ragagnin et al.: "A web portal for hydrodynamical, cosmological simulations", Astronomy and Computing, Vol. 20, July 2017; online 6 June 2017.
PD Dr. Klaus Dolag
University Observatory of the Ludwig-Maximilians-Universität Munich
Excellence Cluster Universe
Scheinerstraße 1, Munich, Germany
Tel: +49 (0) 89 2180 5994
Dr. Nicolay J. Hammer
Leibniz Supercomputing Centre (LRZ)
of the Bavarian Academy of Sciences
Boltzmannstraße 1, 85748 Garching n. Munich, Germany
Tel: +49 (0)89 35831 8072
Caption: Visualizations of the simulated distributions of gas and stars in the Universe from data provided by Cosmowebportal: The cube represents a region of the Universe (more than 300 million light years), and the bright spots on the cube faces show galaxies and galaxy clusters along the cosmic web. The first two disks zoom into the central galaxy cluster; the third disk (far right) demonstrates how an observation of the zoomed area would look with an X-ray telescope ("virtual telescope").
Petra Riedel | idw - Informationsdienst Wissenschaft
<urn:uuid:f56c6022-0f6d-4567-83c3-bfcda659aaf9> | 2.78125 | 1,479 | Content Listing | Science & Tech. | 26.357618 | 95,541,710
By James V. Anderson
Read Online or Download Advances in Plant Dormancy PDF
Similar developmental biology books
Systems biology is defined for the purpose of this study as the understanding of biological network behaviors, and in particular their dynamic aspects, which requires the use of mathematical modeling tightly linked to experiment. This involves a variety of approaches, such as the identification and validation of networks, the creation of appropriate datasets, the development of tools for data acquisition and software development, and the use of modeling and simulation software in close linkage with experiment.
Indonesia possesses the second largest primate population in the world, with over 33 different primate species. Although Brazil possesses more primate species, Indonesia outranks it in terms of its range of primates, from prosimians (slow lorises and tarsiers), to a multitude of Old World monkey species (macaques, langurs, proboscis monkeys), to lesser apes (siamangs, gibbons) and great apes (orangutans).
Somatic embryogenesis (SE) is a unique process by which a vegetative/somatic plant cell transforms into an embryo. This in vitro embryogeny has immense fundamental and practical applications. The SE process is complex and is controlled by numerous external and internal triggers. This book compiles the latest advances in embryogenesis research on ornamentals and discusses the importance of embryogenic cultures/tissues in raising transgenic plants.
The Origin, Nature and Evolution of Protoplasmic Individuals and their Associations explores living beings of all levels of complexity in relation to one another and to the various ambient resources that they use to survive: protoplasmic individuals and their associations, cells and their associations, animals, and man.
Extra info for Advances in Plant Dormancy | <urn:uuid:a592cf3a-9988-45db-ad3a-6565176b8e44> | 2.609375 | 376 | Truncated | Science & Tech. | 5.766201 | 95,541,733 |
Then, v, [sigma]([alpha]) and [sigma]([beta]) form an asteroidal triple in G.
1994): Chemical profiles in K / T boundary section of Meghalaya, India: Cometar, asteroidal
Not until years after Hiroshima, Nagasaki, and the repeated irradiation of New Mexico and Kazakhstan were the physics of massive explosions properly understood, and the strangely craterless Tunguska site was subsequently adduced to have been caused by an asteroidal airburst, an event more common than one might suspect.
titanium, copper, and other metals might be used as construction materials for satellites or space stations; even ordinary rock could prove valuable, as a radiation shield for spacecraft.
Several of my family went along on such an archaeological tour for a couple of weeks into Yucatan a few years ago, well before the time that the archeo-astronomers discovered that the ancient Chicxulub Crater on this peninsula appeared to be the site of an asteroidal impact which decimated the dinosaurs some 65 million years ago.
In November 2005, the probe landed on the asteroid and collected samples in the form of tiny grains of asteroidal material, which were finally returned to earth five years later in June 2010.
To date, the only known mini-moon was the small asteroid 2006 RH120. Likely only 3 to 7 meters across, this asteroidal fragment was discovered on September 14, 2006, with the Catalina Sky Survey's 0.
Objective: Platinum group element (PGE) abundances & 187Os-isotope compositions determined for magmas of Earth, the Moon, Mars, & asteroidal bodies place important constraints on planetary evolution but these data, & current analytical approaches, have largely focused on whole-rock analyses.
An apparently asteroidal object discovered by LINEAR on Jan 15 and posted on the NEOCP was found to have cometary appearance on CCD images taken by P.
Those results set the age boundary for the oldest terrains on Mercury to be contemporary with the so-called Late Heavy Bombardment (LHB), a period of intense asteroid and comet impacts recorded in lunar and asteroidal rocks and by the numerous craters on the Moon, Earth, and Mars, as well as Mercury.
triple is a set of three pairwise non-adjacent vertices such that there is a path between any pair of them avoiding the neighbors of the third.
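That definition of an asteroidal triple translates directly into a small breadth-first-search test. The sketch below uses a plain adjacency-set graph representation; the function name and example graphs are chosen here for illustration and come from none of the cited works.

```python
from collections import deque
from itertools import combinations

def is_asteroidal_triple(adj, u, v, w):
    """Check whether u, v, w form an asteroidal triple in the graph
    given by adj (a dict mapping each vertex to its set of neighbors)."""
    trio = (u, v, w)
    # The three vertices must be pairwise non-adjacent.
    if any(b in adj[a] for a, b in combinations(trio, 2)):
        return False
    for a, b in combinations(trio, 2):
        c = next(x for x in trio if x not in (a, b))  # the third vertex
        blocked = adj[c] | {c}                        # its closed neighborhood
        # BFS from a to b while avoiding the neighbors of c.
        seen, queue = {a}, deque([a])
        found = False
        while queue:
            x = queue.popleft()
            if x == b:
                found = True
                break
            for y in adj[x]:
                if y not in blocked and y not in seen:
                    seen.add(y)
                    queue.append(y)
        if not found:
            return False
    return True

# A 6-cycle contains an asteroidal triple: its alternate vertices qualify,
# since the rest of the cycle lets each pair route around the third vertex.
adj_c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(is_asteroidal_triple(adj_c6, 0, 2, 4))  # True
```

By contrast, interval graphs such as a simple path contain no asteroidal triple, which is why AT-freeness is used to characterize them.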
For the first time, they used radar to discern asteroidal features as small as 7. | <urn:uuid:ddd946cc-7dfc-424f-8842-f805f722d70e> | 3.203125 | 536 | Structured Data | Science & Tech. | 20.725455 | 95,541,761
12 July 2018
Living bird species help estimate extinctions driven by humans
Published online 15 April 2013
The arrival of the first humans on the remote islands of the Pacific was followed by a mass extinction of birds. Incomplete fossil records from these islands, however, have made it difficult to determine how many bird species went extinct during this first wave of human colonization.
An international research team has now overcome this obstacle by employing a mark-recapture method, reporting their findings in a recent issue of Proceedings of the National Academy of Sciences [1]. Mark-recapture uses the number of extant bird species that have not been discovered in the fossil record to estimate the number of extinct species missing from the fossil record.
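The logic of that estimate can be sketched in a few lines. The counts below are invented for illustration (they are not from the paper): living species that do turn up as fossils play the role of "recaptured" individuals, calibrating how likely any species on an island is to be detected in the fossil record at all.

```python
# Hypothetical counts for a single island -- illustrative numbers only,
# not taken from the study.
extant_total = 50        # living bird species recorded on the island
extant_in_fossils = 20   # living species that also appear as fossils
extinct_in_fossils = 12  # extinct species known from fossil bones

# Living species are the "marked" population; finding them as fossils is
# the "recapture", which estimates the chance that a species present on
# the island shows up in the fossil record.
detection_prob = extant_in_fossils / extant_total            # 0.4

# Scale the observed extinctions up by the detection probability to
# estimate the true number, including species that left no known fossil.
estimated_extinctions = extinct_in_fossils / detection_prob  # 30.0
missing_from_record = estimated_extinctions - extinct_in_fossils  # 18.0
```

Applied island by island with proper uncertainty handling, this is how a count of fossil-known extinctions can be extrapolated to a much larger total.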
The researchers, from the University of Canberra, the University of Tennessee in Knoxville and King Saud University in Riyadh, used fossil bone data for large, flightless birds found across 41 Pacific islands colonized by humans in the past 3,500 years. The rates of extinction were much higher among the large flightless birds – easy prey for humans.
Besides hunting, humans also cleared forests, further accelerating the rate of extinctions. Human arrival, according to the researchers, drove at least 983 species of flightless birds to extinction from the remote Pacific islands.
"Smaller islands with lower rainfall had higher rates of habitat loss, contributing to higher rates of bird extinctions," says Richard P. Duncan, a co-author of the study.
- Duncan, R. P. et al. Magnitude and variation of prehistoric bird extinctions in the Pacific. Proc. Natl. Acad. Sci. USA (2013) doi:10.1073/pnas.1216511110 | <urn:uuid:c7a86e4e-a040-436b-9301-b0786f2cdf3e> | 3.140625 | 347 | Truncated | Science & Tech. | 46.493656 | 95,541,764